Analysis of flat terrain for the Atlas robot
Maarten de Waard, Maarten Inja, and Arnoud Visser
Intelligent Systems Laboratory Amsterdam
Universiteit van Amsterdam
Abstract—This paper describes an approach to analyze sensor information of the surroundings in order to select places where the foot of a humanoid can be placed. This will allow such a robot to be applied in a rescue scenario, as foreseen in the DARPA Robotics Challenge, where a robot is forced to traverse difficult terrain.
I. INTRODUCTION
On October 24th 2012, the DARPA Robotics Challenge
(DRC) kicked off1. To quote their website:
“The primary technical goal of the DRC is to develop ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments. Competitors in the DRC are expected to focus on robots that can use standard tools and equipment commonly available in human environments, ranging from hand tools to vehicles, with an emphasis on adaptability to tools with diverse specifications.”
The robot used in this challenge is the Atlas, a bipedal, human-sized robot, shown in figure 1. The Atlas is a continuation of the anthropomorphic robot developed in the Petman project, which was intended to reproduce human movements; the Atlas is intended to be able to traverse difficult terrain. The Atlas has 28 degrees of freedom (DoF); the skeleton is shown in figure 2. At this moment the robot is only available in the DRC Simulator, based on the Gazebo environment (see Fig. 3). The first real prototypes will be delivered later this year. In simulation the Atlas is equipped with a laser scanner and two cameras located in the head.
Figure 1. The Atlas is a humanoid robot that will be tailored for rescue operations (Courtesy Boston Dynamics).
1http://www.darpa.mil/Our Work/TTO/Programs/DARPA Robotics
Challenge.aspx
Figure 2. Skeleton of the Atlas showing the degrees of freedom.
Three tasks are part of the virtual DARPA challenge:
1) Climb into a utility vehicle, drive along a roadway at a speed no greater than 16 kph (10 mph), and climb out of the utility vehicle.
2) Walk across progressively more difficult terrain, for example, progressing from parking lot to short grass to tall grass to tall grass on slope to ditch to rock field. In the earlier terrain, the default balancing and walking behaviors of Atlas will suffice. In the later terrain, DARPA expects perception and footstep planning will be needed.
3) Connect hose to spigot. This is purely a manipulation task, that is, the robot starts with everything within reach and so does not need to travel to the work site.
Figure 3. Screenshot of the GUI of Gazebo showing the rendering of the Atlas URDF model.
978-1-4673-6315-0/13/$31.00 © 2013 IEEE
In this paper we tackle the second task of the virtual DARPA challenge. This virtual challenge entails using the Gazebo2 simulator together with the Robot Operating System3 (ROS) to make a model of the robot perform the task.
In the Technical Guide of the Virtual Robotic Challenge [1]
this task is described in more detail. In Fig. 4 the arena for the
walking test is displayed. In the front is the starting pen, the
final gate is in the hills at the back. In between those gates
the robot has to walk across flat pavement, cross a mud pit,
climb a gentle incline and traverse a rubble pile. To complete such a challenge, a robot has to interpret the terrain ahead with
its sensors, aggregate the measurements into a world model
and perform footstep planning based on this model. This is
the subject of this paper.
Figure 4. Overview of the arena for the walking test (Courtesy E. Krotkov
et al.[1]).
This task was split into two parts. The first part consists
of making a model of the surface in front of the robot and
selecting candidate surfaces for foot placement. The second
part consists of computing a path to reach this location with
the foot in a safe way. The candidate surfaces will serve as
input for the walking engine, where it first performs a stable
leg lifting motion, and then put the foot down in the desired
location while keeping the robot balanced.
This paper is organized as follows: in section 2 the related
work that has been done on both subjects will be discussed;
in section 3 the footstep planning will be described, followed
by results, conclusion and future work.
II. RELATED WORK
A. Footstep Planning
There are several approaches to the planning of footsteps.
To prefer a straight surface over a surface with slopes, one can use A* and include the slope of the ground plane in the heuristic function [2]. This works well for their application (crossing an uneven hill). In their algorithm the terrain is categorized into four categories (flat ground, tilted ground, stairs, holes), none of which deals with obstacles such as could be encountered at the rubble pile. The A* path planning is divided into two parts: first a trajectory is planned from the current position to the goal, then the actual footsteps are planned to follow the trajectory.
2http://gazebosim.org/
3http://www.ros.org/wiki/
By modeling the robot's valid (foot) configurations, estimating the bounding box for the shape of the robot, and modeling
the environment (obstacles) one can represent the robot and
the world in a mathematical context. This representation can
be used to determine illegal and legal positions and configura-
tions, or states, for the robot. The states can be seen as nodes
in a graph, in which the edges are the transitions from one
configuration, or pose, to another. This results in a searchable
graph that can be searched with a modified A* algorithm
without having to be built entirely (the states do not all have
to be calculated) [3].
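As an illustration of searching such a graph, the sketch below runs A* over a grid height map with a slope-dependent step cost, so that flat routes are preferred over steep ones. The grid representation, cost weights and four-connected neighborhood are our own illustrative assumptions, not the exact formulation of [2] or [3].

```python
import heapq
import math

def astar_slope(height, start, goal, slope_weight=5.0):
    """A* over a grid height map; the step cost grows with the local
    height difference, so the planner prefers flat routes (cf. [2])."""
    rows, cols = len(height), len(height[0])

    def h(node):  # straight-line distance, admissible since steps cost >= 1
        return math.hypot(node[0] - goal[0], node[1] - goal[1])

    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            slope = abs(height[nr][nc] - height[r][c])
            ng = g + 1.0 + slope_weight * slope  # penalize climbing
            if ng < best.get((nr, nc), float("inf")):
                best[(nr, nc)] = ng
                heapq.heappush(
                    frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None
```

On a map with a ridge across the direct route, the returned path detours through the flat cells instead of climbing.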
A single rock can be seen as an obstacle over which
the robot could step entirely, but if the terrain consists of
many rocks it becomes ‘rough’ terrain. The difference from the previous methods is that there is no flat surface at all; the
robot should attempt to find the best spot for its foot that would
minimize falling or slipping risk.
Dealing with rough terrain can successfully be learned through reinforcement learning [4]. The observed terrain is modeled and matched to stored models, called templates, which are enriched with the positions that experts deem best for foot placement.
Ideally, low ceilings should also be dealt with by the robot, either by avoiding such an area or by crouching. However, this is not considered part of our objective.
III. METHOD
A. Theory
Section II-A describes everything that has to be implemented for complete footstep planning. We focused on the first step: investigating point cloud data to see whether we could extract surfaces and distinguish between those that will support the robot's foot and those that will not.
A point cloud is a collection of points, in which a point
is defined as an x, y, z value, which might be collected by a
sensor, such as a Kinect sensor or laser range scanner.
The Point Cloud Library4 is already integrated into the ROS environment coupled to the DRC Simulator. It offers two methods for plane segmentation: region growing and plane fitting using RANSAC.
1) Region Growing: Region growing segments points based on their curvature and surface normals, which are both local features computed from the nearest neighbors of the points [5]5.
A region is a subset of a cloud of points which are classified as belonging to the region; in our case a ‘region’ means a plane, or surface, so any points that make up a plane should be considered a region. For different purposes one could, for example, want to find the regions that make up spheres. Segmentation is the process of dividing, or segmenting, the point cloud data into different subsets, or regions.
4http://www.pointclouds.org/
5as implemented by Sergey Ushakov, see http://www.pointclouds.org/blog/trcs/velizhev/.
The nearest neighbors can be picked using several methods: k-d trees or octrees, but also simply by taking all points within a radius.
It is easiest to consider a point and its nearest neighbors as a surface, of which the normal is the vector perpendicular to that surface, and the curvature a scalar value indicating how strongly the surface bends.
Region growing starts at the point with the lowest curvature value. This point is the start of the region and is added to a new set called seeds. The algorithm is as follows:
For each point in the seeds set, for each neighboring point:
Add the neighboring point to the region if the angle between its normal and that of the seed point is below the angle threshold.
Add the neighboring point to the seed set if its curvature value is below the curvature threshold.
Remove the seed point from the seed set.
If the seeds set is empty, a region has been found.
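A minimal sketch of this procedure, assuming precomputed per-point normals, curvatures and neighbor lists; the data layout and threshold values are illustrative assumptions, not the PCL implementation:

```python
import math

def region_growing(points, normals, curvatures, neighbors,
                   angle_thresh=math.radians(10), curv_thresh=0.05):
    """Grow smooth regions: seed at the unassigned point with the lowest
    curvature, absorb neighbors whose normals deviate by less than
    angle_thresh, and extend the seed set with low-curvature points."""
    unassigned = set(range(len(points)))
    regions = []
    while unassigned:
        seed0 = min(unassigned, key=lambda i: curvatures[i])
        region, seeds = {seed0}, [seed0]
        unassigned.discard(seed0)
        while seeds:
            s = seeds.pop()
            for n in neighbors[s]:
                if n not in unassigned:
                    continue
                dot = sum(a * b for a, b in zip(normals[s], normals[n]))
                dot = max(-1.0, min(1.0, dot))
                # unoriented angle between the two surface normals
                if math.acos(abs(dot)) < angle_thresh:
                    region.add(n)
                    unassigned.discard(n)
                    if curvatures[n] < curv_thresh:
                        seeds.append(n)
        regions.append(sorted(region))
    return regions
```

Points whose normals agree within the angle threshold end up in the same region; a sharp change in normal direction (e.g. floor to wall) starts a new region.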
2) Plane Segmentation Using RANSAC: The second method for plane segmentation in the Point Cloud Library is to fit the model of a plane to the point cloud using RANSAC (RANdom SAmple Consensus) [6].
This method combines fast normal computation, considering only nearby points via integral images, with fast clustering of points that have similar local surface normals. Clustering is accomplished by first defining a voxel grid (a coarse discretization) and then merging its cells. The result can be a cluster of points that lie in the same plane but are not geometrically connected; those planes can be separated again in a segmentation refinement step. The found planes can be smoothed by RANSAC, which removes residual outliers.
The RANSAC algorithm informally goes as follows:
Randomly select a subset of the point cloud and estimate the free model parameters.
Consider the remaining data: each point that fits the model is added to the consensus set (considered an inlier).
Re-estimate the model considering all the inliers.
Evaluate the model by estimating the error of the inliers relative to the model.
A model is accepted if a sufficient number of points are considered inliers.
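A minimal sketch of this loop for a plane model, sampling three points per iteration and keeping the largest consensus set; the parameters, names and representation are illustrative assumptions, not PCL's SACSegmentation internals:

```python
import math
import random

def ransac_plane(points, iters=200, inlier_tol=0.05, min_inliers=10, seed=0):
    """Fit a plane (unit normal n, offset d with n.p + d = 0) by RANSAC:
    sample 3 points, fit the exact plane through them, count inliers
    within inlier_tol, and keep the model with the largest consensus."""
    rng = random.Random(seed)
    best = (None, [])
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # plane normal = (p2 - p1) x (p3 - p1)
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = math.sqrt(sum(c * c for c in n))
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < inlier_tol]
        if len(inliers) > len(best[1]):
            best = ((n, d), inliers)
    model, inliers = best
    return (model, inliers) if len(inliers) >= min_inliers else (None, [])
```

Run on a mostly planar cloud with a few outliers, the returned model recovers the dominant plane and its inlier set, while the outliers are left for a subsequent segmentation pass.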
3) Plane Evaluation: The planes, or surfaces, found using region growing or plane modeling should be evaluated to a scalar that indicates how much the robot would want to place a foot on that plane.
First the average surface normal vector for a plane is calculated and normalized. Then the Euclidean distance to the example unit vector (a vector pointing straight up) is calculated. This scalar should be sufficient for a path planner similar to [2] to prefer flat, level surfaces over slopes.
Additionally, we would like the size of the plane to be considered; planes smaller than the robot's feet should be discarded.
B. Implementation
Some choices were made to enable easy implementation of footstep planning. First, the point cloud sensor was chosen as the most suitable sensor to provide the environmental data needed for accurate footstep planning, because point cloud sensors are capable of collecting a lot of information about the environment in a small time frame. Using the point cloud sensor also enabled us to use the C++ Point Cloud Library (PCL), which allows a user to easily apply many state-of-the-art point cloud processing algorithms [7]. Because of dependencies between the DRC simulator, ROS and PCL, version 1.5 of PCL was used in this study.
The current implementation6 finds planes in the environment of the robot and gives those planes a measure of how much the robot should want to step on them. This works in the following three steps, which require the robot to have a working point cloud sensor. Alternatively, a rotary laser range scanner can be used to simulate one.
1) Find planes in the point cloud
2) For each plane, find its mean surface normal
3) Evaluate each plane and its surface normal
The following subsections will explain the methods in detail.
1) Finding planes in a point cloud: As mentioned in section III-A, planes can be located in various manners. In version 1.5 of the Point Cloud Library, region growing has not been implemented yet. That is why our software solution uses plane segmentation based on RANSAC, implemented with the PCL-provided SACSegmentation class. The exact parameters for the segmentation differ per goal and sensor. The planes found by this segmentation algorithm are extracted from the cloud using ExtractIndices, also provided by PCL. The program then loops over these planes and enters the next step.
2) Finding mean surface normals: Normals of the points in a point cloud can be found using the PCL class NormalEstimationOMP, which uses a K-nearest-neighbor search to find nearby points and estimates a normal vector from those points and, if specified, a camera position to obtain the correct direction.
The mean surface normal is then calculated by simply adding all normals and dividing by the number of points.
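The averaging step can be sketched as follows; the function name is ours, and the per-point normals would come from the normal estimation step above:

```python
import math

def mean_surface_normal(normals):
    """Average the per-point normals of a plane segment, then
    re-normalize so the result is again a unit vector."""
    s = [sum(n[i] for n in normals) / len(normals) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in s))
    return [c / norm for c in s]
```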
3) Evaluate each plane and its surface normal: The surface normal of the ideal standing surface points straight up. That, by definition, means that the surface is horizontal, which is good to stand on. For that reason, the surface normal n of each found surface is compared with the unit vector pointing up, u = (0, 0, 1). The comparison is calculated with the following equation:
s = 1 − ‖n − u‖ / 2
which, applied to unit vectors, ensures that the result is a scalar between 0 and 1, 0 meaning completely different and 1 meaning completely alike.
6The code is available at https://code.google.com/p/voetlos/
Figure 5. The Gazebo environment that was used to test the algorithm with the points and laser scanners.
4) Viewer: The last thing we implemented is a ‘viewer’, which listens to the ROS topics published by the feature calculation program and visualizes everything. This viewer represents the found plane segments in colors between red and purple. The color is created as an rgb value based on the outcome of the comparison. A value of 125 for the red channel is always given, so that every point can be seen. The value for blue ranges from 0 to 255; for this the measure of equality s is used as follows: blue = 255 · s⁵. We take the fifth power of the equality in order to exaggerate the measure in which the ground should be horizontal: even a modest slope is fairly hard to stand on, especially for a robot.
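A minimal sketch of the evaluation and coloring step, assuming the Euclidean-distance-based equality measure s = 1 − ‖n − u‖/2 and a zero green channel; the function name and green value are illustrative assumptions:

```python
import math

def plane_color(normal, up=(0.0, 0.0, 1.0)):
    """Score a unit plane normal against the up vector and map it to
    the viewer's red-to-purple scale (red fixed at 125, blue = 255*s^5)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(normal, up)))
    s = 1.0 - dist / 2.0             # 1 = horizontal plane, 0 = opposite
    blue = int(round(255 * s ** 5))  # fifth power exaggerates flatness
    return s, (125, 0, blue)
```

A horizontal plane gets full blue (purple tint), while a vertical wall falls back to almost pure red.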
This viewer enables the user to easily see what works and what does not, and is a useful combination of listening to ROS topics and the PCL Visualizer class.
IV. RESULTS
A. Footstep Planning
In this section we show the results of our footstep-oriented environment segmentation on various types of point clouds.
1) Gazebo’s Points2 Sensor: The most logical sensor to try, in this case, is Gazebo’s built-in Points2 sensor, which is mounted on the MultiSense-SL head. This sensor was used in a world with some objects that one should want to stand on and others that one should not. This world can be seen in figure 5.
The resulting image can be seen in figure 6. As can be seen, the side of the golf cart is considered bad to stand on, just like the slope. The table and floor are good. Unfortunately, planes parallel to the slope are also found in the stairs, so the stairs are also assigned a red color value. This problem has two main causes:
1) The PCL plane segmentation function has no way of setting a threshold on cloud density. In other words: cloud segments that consist of only five parallel lines of points can be fitted by a plane model, no matter how far apart these lines are. This way, instead of only finding horizontal planes in the stairs, it is also possible to find planes fitted through several stairs that are diagonal to the ground plane. However unwanted this result is, no solution to this problem could be found.
2) Another problem is that the point cloud contains very distorted points. As can be seen, only points in a grid form are found on the ground and on the slope. This is most probably a result of the manner in which the point cloud sensor was implemented in Gazebo.
As a result of this noise in the point cloud, the processing of especially the stairs could have been worsened. Finding good segments using any algorithm other than plane segmentation would also be harder on the slope, because of the vertical stripes of points instead of the desired dense cloud, such as can be seen on the table top.
2) Gazebo’s Laser Range Scanner: Because of the gaps in the point cloud, the laser range scanner was put to use. This scanner returns points along one axis and can be rotated, which makes it capable of collecting point data of the surroundings. When collecting data for 10 seconds, the point cloud of figure 7 is retrieved. The first thing that can be seen is that this point cloud is more accurate than the point cloud retrieved by the Points2 sensor: all planes are straight and the points are divided more evenly. Another thing that can be noticed when looking at these data points is that the laser range scanner always returns the maximum value when no points are found. This results in a dome of invalid points around the scanned environment, which will later be referred to as the ‘noise dome’.
Using the laser scanner, and an ‘assembler’ to collect the points of the last 10 seconds into one point cloud, the algorithm
Figure 6. The output image of our algorithm when used with the Points2
sensor in the world of figure 5. A purple tint indicates that that part of the
point cloud consists of a surface that is good to stand on. A red tint indicates
the opposite. The white arrows are the calculated mean surface normals. From
left to right, part of the golf cart, the stairs, the slope and the table can be seen. The table is recognized as a good surface.
Figure 7. An image of the point cloud retrieved when collecting points from
the rotary laser range scanner for 10 seconds, using the Gazebo world of figure
5.
provides the image as seen in figure 8. As can be seen, the table top, each of the stairs and part of the golf cart are correctly detected and calculated to be viable stepping planes. A problem that occurs, however, is that the slope is also detected as a plane with a mean normal pointing straight up.
This has the following explanation: the plane that is found consists not only of the slope, but also of numerous points in the ‘noise dome’ that surrounds the environment. This results in a surface normal that is calculated based not only on the slope normals, but also on the ‘dome normal’, which is calculated from only one small ring of points and is thus faulty.
3) A Point Cloud from the Real World: To show that the algorithm works, and that it should also work in real life instead of only in a simulator, we also applied it to a point cloud downloaded from the internet, which was created with a real-life sensor: the point cloud from a Point Cloud Library tutorial7. This resulted in the image that can be seen in figure 9, in which it can be seen that planes for the ground and walls are found and that the right value is computed for them. The couch forms a more difficult challenge for the algorithm, because the seat and backrest are more curved. Still, most of the planes are evaluated correctly.
7http://www.pointclouds.org/documentation/tutorials/using kinfu large
scale.php
Figure 8. The output of the algorithm when using a rotary laser range scanner in the Gazebo world from figure 5. The same color coding as before applies.
Figure 9. The output of the algorithm when using a point cloud that was
retrieved from a real environment.
In the backrest of the couch, some purple planes are found. These are considered ‘good to stand on’, because they are returned together with the desk that can be seen on the right side of the image. This is due to the same problem as was treated in the first point of section IV-A1: the points in the couch are considered part of the desktop plane, whereas they really should have been part of a plane in the backrest of the couch.
V. CONCLUSION
From the results shown in section IV-A it can be concluded that using planar segmentation and normal estimation to find planes in the environment that the robot can step on works. It should work in the Gazebo Simulator, provided that a noiseless point cloud is given and that the points that do not have any value are set to an invalid value or omitted.
Furthermore, it is safe to conclude that some improvements can still be made to the segmentation method, but that the current version works in principle.
VI. FUTURE WORK
A. Footstep Planning
Improvements can still be made to the footstep planning. These improvements fall into two categories.
1) Improvements on current footstep planning: First, the current planning algorithm is treated. The first thing that could be improved is the planar segmentation: one would want to find planes that are uninterrupted, and planes with a small curve in them should be allowed. Initially this could be solved by (re)implementing region growing, as explained in section III-A1. Another solution would be to implement the segmentation refinement described in [6].
Ideally, other factors for the stability of foot placements could be learned: for example, when dealing with a rocky area, when hill climbing or when walking through a ditch, a stable foot placement might not always be the flattest area, but a V-shaped hole in the ground.
Another improvement in finding good foot placement locations is retrieving the size of each plane and comparing it with the size of the robot's foot. This would, for example, make the robot take bigger stairs, when available, to minimize the chance of slipping when traversing upwards.
The last improvement that could be made to the current software is ignoring the points that are in the ‘noise dome’ of the laser range scanner. This can be done by calculating the distance of a point from the robot and discarding it if it is above a certain threshold. This should improve the precision of the algorithm, because the computed surface normals of the planes that are found will then be more accurate.
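Such a range filter is straightforward to sketch; the threshold value below is an illustrative assumption that would have to be matched to the scanner's actual maximum range:

```python
import math

def remove_noise_dome(points, robot_pos, max_range=9.5):
    """Discard points beyond a range threshold so the laser scanner's
    max-range 'noise dome' does not corrupt surface-normal estimates."""
    def dist(p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, robot_pos)))
    return [p for p in points if dist(p) < max_range]
```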
2) Improvements on the overall footstep planning: To actually be able to use the algorithm, footstep locations should be found in the planes currently found suitable for walking on. These locations should be based on the robot's current position, where it wants to go, and the measure of traversal difficulty of the terrain between these positions. As covered in section II-A, path planning algorithms like A* are suitable for this goal.
B. Integration with inverse kinematics
The footstep planning should output a trajectory with several Cartesian foot placement coordinates. The path between each foot placement should be covered by the swing of the legs, while the robot stays in balance. This is not trivial for the Atlas robot [8], because it is quite a tall and heavy robot with relatively small feet.
To keep balance, the relation between the contact points with the walking surface and the center of mass has to be calculated [9]. ROS contains a humanoid robot kinematics library, but this library was made for the smaller Nao robot. For the Nao robot the pelvis is a good reference point for calculating the center of mass, but for the Atlas robot the hip joints are not fixed, which means that there is no guarantee that the pelvis is level.
Even when the inverse kinematics problem is solved and the intended swing of the legs can be calculated, good control of the upper body (including both arms) is needed to balance the momentum during a step. At this moment such high-level control of the Atlas robot still has to be developed.
So we can conclude with the observation that the DARPA
Robotics Challenge and the robots developed for this challenge
will revolutionize the application of robots in rescue situations,
but that at the moment there are still enough open issues which
require a substantial research effort.
ACKNOWLEDGEMENTS
We would like to thank Norbert Heijne and Sander Nugteren for their efforts to solve the inverse kinematics problem of the Atlas robot, to design an animation which resembles a first step, and to make the geometrical calculation of how the upper body could support the balance of the robot.
REFERENCES
[1] E. Krotkov and J. Manzo, “Virtual robotics challenge technical guide,”
DISTAR Case 20776, February 2013, Draft Version 5.
[2] J. Bourgeot, N. Cislo, and B. Espiau, “Path-planning and tracking in a 3d
complex environment for an anthropomorphic biped robot,” in Intelligent
Robots and Systems, 2002. IEEE/RSJ International Conference on, vol. 3.
IEEE, 2002, pp. 2509–2514.
[3] R. Cupec, I. Aleksi, and G. Schmidt, “Step sequence planning for a biped robot by means of a cylindrical shape model and a high-resolution 2.5D map,” Robotics and Autonomous Systems, vol. 59, no. 2, pp. 84–100, 2011.
[4] M. Kalakrishnan, J. Buchli, P. Pastor, and S. Schaal, “Learning loco-
motion over rough terrain using terrain templates,” in Intelligent Robots
and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on.
IEEE, 2009, pp. 167–172.
[5] T. Rabbani, F. van Den Heuvel, and G. Vosselmann, “Segmentation
of point clouds using smoothness constraint,” International Archives
of Photogrammetry, Remote Sensing and Spatial Information Sciences,
vol. 36, no. 5, pp. 248–253, 2006.
[6] D. Holz, S. Holzer, R. B. Rusu, and S. Behnke, “Real-time plane
segmentation using rgb-d cameras,” in RoboCup 2011: Robot Soccer
World Cup XV, ser. Lecture Notes in Computer Science. Springer Berlin
Heidelberg, 2012, vol. 7416, pp. 306–317.
[7] R. Rusu and S. Cousins, “3d is here: Point cloud library (pcl),” in
Robotics and Automation (ICRA), 2011 IEEE International Conference
on. IEEE, 2011, pp. 1–4.
[8] M. Inja, N. Heijne, S. Nugteren, and M. de Waard, “Project AI - the DARPA robotics challenge - f.o.o.t.l.o.o.s.e.,” Project Report, Universiteit van Amsterdam, February 2013.
[9] M. Vukobratović and B. Borovac, “Zero-moment point: thirty five years of its life,” International Journal of Humanoid Robotics, vol. 1, no. 1, pp. 157–173, 2004.
... Modern ground robotic complexes have a high carrying capacity, accuracy and speed. They are able to move on uneven surfaces and interact with objects with the mechanism equipped with an advanced control system [1]. Objects can be moved with a mobile or stationary robotic complex. ...
Article
Full-text available
A dynamic model of the manipulator of the robotic complex was developed on the basis of the conducted experimental studies. The concept of determining the dynamic characteristics of the mechanical system is proposed according to the results of the oscillation analysis. The algorithm is supplemented with modules considering possibility of using controlled damping devices. The constituent parts of the model represent the mechanical devices of the manipulator, in particular connections, rotary assemblies and damping devices. The model contains all the connections between the modules, which allows you to study the dynamic parameters during the operation of the mechanism. Differential dependencies for the implementation of the mathematical model, which includes the subsystem of dynamic damping of vibrational oscillations of the manipulator, are proposed. These dependencies reveal the essence of the oscillatory processes of the mechanical system in full. Guided damping devices introduced into the model allow to control parameters in order to increase the accuracy of the mechanism. The mathematical model is implemented via a software module that takes into account the impact working processes that occur in the connections and rotary assemblies of the mechanical system of the robotic complex. The algorithm involves the use of a mechatronic system equipped with feedback sensors to control the manipulator. Controlled damping devices make it possible to increase the technical level and improve the dynamic characteristics of the mechanical system. Damping of oscillations by a mechatronic system with feedback was investigated and the influence of damping of oscillations on accuracy parameters when moving a robotic complex on an uneven surface was determined. The paper presents the results of modeling an adjustable damper as part of a moving mechanical system. 
The innovative device uses a magnetorheological fluid as a working fluid, which allows you to control it with the help of electrical impulses. The conducted experimental studies made it possible to obtain key indicators and its operating characteristics of the damper. Based on these results, dependencies, which determine the control laws of a damper that uses a magnetorheological fluid, are proposed.
... Gazebo can create a scenario with various buildings, such as houses, hospitals, cars, people, etc. With this scenario, it is possible to evaluate the quality of the codes and trim their parameters before a test in the real environment (de Waard et al. 2013; GAZEBOSIM 2019; Koenig and Howard 2004). ...
Book
This book targets an audience with a basic understanding of deep learning, its architectures, and its application in the multimedia domain. Background in machine learning is helpful in exploring various aspects of deep learning. Deep learning models have a major impact on multimedia research and raised the performance bar substantially in many of the standard evaluations. Moreover, new multi-modal challenges are tackled, which older systems would not have been able to handle. However, it is very difficult to comprehend, let alone guide, the process of learning in deep neural networks, there is an air of uncertainty about exactly what and how these networks learn. By the end of the book, the readers will have an understanding of different deep learning approaches, models, pre-trained models, and familiarity with the implementation of various deep learning algorithms using various frameworks and libraries.
... Gazebo can create a scenario with various buildings, such as houses, hospitals, cars, people, etc. With this scenario, it is possible to evaluate the quality of the codes and trim their parameters before a test in the real environment (de Waard et al. 2013; GAZEBOSIM 2019; Koenig and Howard 2004). ...
Chapter
Intelligent vehicle system (IVS) is being designed to leverage the safety, facility, and life style of society. At the same time, it aims to enhance the driving behavior to minimize the traffic-related issues. Artificial intelligence is assisting such autonomous system, which is now not restricted only to software data, but its functionality is being utilized in decision making in various phases of the IVS in dynamic road environments. One such phase lane detection plays a significant role in IVS especially through various sensors. Here, vision-based sensor mechanism is employed which detects lane marking scheme on structured road. For this purpose, traditional image processing technique has been applied to keep the computation less complex, and public datasets KITTI is utilized. The proposed scheme is effectively identifies various lane markings on the road in the normal driving conditions.
... Modern ground-based robotic complexes are characterized by high technical characteristics. Existing manipulators that use an electric drive are controlled by an intelligent control system [1], they have a fairly high load capacity, accuracy and reliability. With remote control, objects are usually moved with a fixed chassis at low speeds. ...
Article
Full-text available
The object of this research is modern robotic systems used in hotspots. Such mobile robots are equipped with manipulators with high-precision hinges, which provide accurate positioning of the gripper (the object of manipulation). Considering ground-based robotic complexes on a wheeled or caterpillar base that perform manipulation from a stationary position, a number of problem areas affecting positioning accuracy were identified. In the course of the research, modern robotic complexes were analyzed, together with the schematics and designs of the components and mechanisms that provide the required qualities and parameters. The development of high-precision hinges is central to the creation of efficient ground-based robotic systems. A methodology for the kinematic study of the rotary hinges of a manipulator for a ground robotic complex is presented. The influence of material deformation in the wheels of a non-involute transmission on the positioning accuracy of the end effector is analyzed. A kinetostatic analysis of the manipulator scheme was performed and the maximum moments acting in the hinged units on the drive were determined, which allowed a quantitative assessment using the SolidWorks software package. A mathematical model for constructing the transmission and determining the accuracy of a rotary unit for a ground robotic complex, using a cycloidal transmission without intermediate rolling bodies, is investigated and developed. Mathematical modeling that accounts for the mechanical processes occurring in the manipulator makes it possible to raise the technical level of robotic complexes. Ways of improvement are identified to ensure a progressive manipulator design that not only satisfies the necessary technical characteristics but also simplifies the manufacturing technology.
Modern technologies and materials (stereolithography, carbon fiber, superhard materials) make it possible to implement advanced designs of spatial drive systems. Work in this direction is therefore relevant, as special-purpose robotic mechanical complexes are widely used for work in emergencies.
... Google DeepMind's AlphaGo program recently defeated the European Go champion using a very advanced system of neural networks, currently one of the leading trends in robotics (Silver et al. 2016, 57). Google Atlas (de Waard et al., 2013) is a massive human-like reflexive robot, prepared for use as either a robotic helper or a soldier. Facebook's deep learning server appears to be the most promising learning system, a "breakthrough" in AI (Novet, 2015). ...
Chapter
This paper critically explores elements of the monstrous expressed within various contemporary discourses on Artificial Intelligence, contrasting them with historical examples of social monsterization. While an increasing number of scientific experiments explore the production of autonomous, self-aware robots with relatively little or no success, future studies speak of the potential rise of superintelligent robots, which might be responsible for world domination after completely outsmarting humans in their mental capabilities. Robots are further accused of being the main origin of future human unemployment. Undeniably, robotic and AI technologies are undertaking a crucial developmental step; thus, debates on future speculation are expanding in an exaggerated manner. All arguments drawn against the potential evolution of artificial entities bear significant resemblances to argumentation against groups who have been monsterized and enslaved for being "others" (gendered or colored people, immigrants, children, animals, plants). I hereby question the ethical impact of enslavement and monsterization of the seemingly conscious inanimate and inorganic, drawing from lines of thought rejecting human and animate superiority, such as object-oriented ontology and the philosophy of information. Finally, I suggest that this climate of excessive hybridity offers an opportunity for responsible technical development and reflection upon what human desires, hopes, and (mostly) fears are projected onto AI and robots.
... Humanoid robots are built for a variety of application purposes. The Atlas robot (de Waard et al., 2013) (Figure 3A) from Boston Dynamics has been developed for outdoor search and rescue. Several diverse, powerful non-humanoid robotic hands can be attached to its arm, one at a time, for use in various scenarios. ...
Article
Full-text available
Dexterous robotic hands (Cummings, 1996) can greatly enhance the functionality of humanoid robots, but making such hands with not only a human-like appearance but also the capability of performing the natural movements of social robots is a challenging problem. The first challenge is to create the hand's articulated structure and the second is to actuate it to move like a human hand. A robotic hand for a humanoid robot should look and behave human-like. At the same time, it also needs to be light and cheap for widespread use. We start by studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands, which can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints. Compared to other robotic hands, our design saves the time required for assembling and adjusting, which makes our robotic hand ready to use right after the 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D-printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g, has 15 joints, which are similar to a real human hand, and 6 Degrees of Freedom (DoFs). It is actuated by only six small actuators. The wrist connecting part is also integrated into the hand model and can be customized for different robots such as the Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot's sleeve, and the whole robotic hand platform will not add extra load to her arm, as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand, which is 348 g. The paper also shows our test results with and without silicone artificial hand skin, and on the Nadine robot.
Article
Full-text available
Dexterous robotic hands (Cummings, 1996) can significantly improve the usefulness of humanoid robots, but creating such hands that can mimic a social robot's natural movements while also seeming human-like is a difficult challenge. The construction of the hand's articulated structure and actuating it so that it moves like a human hand present two distinct challenges. A robotic hand designed for a humanoid robot should appear and act human. For widespread use, it must also be lightweight and inexpensive. We begin by examining the biomechanical characteristics of the human hand before putting forth a condensed mechanical model for robotic hands that can produce the crucial local motions of the hand. To combine pin and ball joints into our hand model, we next employ 3D modelling techniques to produce a single interlocked hand model. Our design, in contrast to existing robotic hands, reduces the time needed for assembly and adjustment, making our robotic hand usable as soon as the 3D printing process is complete. Finally, cables and motors are used to actuate the hand. Based on this methodology, we created a low-cost, 3D-printed, small, and light robotic hand. Our robotic hand has six Degrees of Freedom (DoFs), 15 joints, and weighs 150 g; these features are close to those of a real human hand. Six tiny actuators are all that are needed to move it. Additionally built into the hand model is a wrist connecting component that can be tailored for various robots, like the Nadine robot (Magnenat Thalmann et al., 2017). The Nadine robot's sleeve can be used to conceal the little servo bed, and the weight of the robotic hand platform (150 g robotic hand and 162 g artificial skin) is almost identical to that of her prior unarticulated robotic hand, which weighs 348 g. The publication also presents the outcomes of our experiments on the Nadine robot, with and without silicone artificial hand skin.
The prototype was designed using the CAD package CATIA V5 and later prepared for printing in the Cura software.
Chapter
Full-text available
The use of autonomous robots is becoming more popular (Kyrkou et al. 2019), and neural networks and image processing are increasingly linked to control and decision making (Jarrell et al. 2012; Prescott et al. 2013). This study seeks a technique that lets drones or robots fly more autonomously indoors. The work investigates the implementation of an autonomous control system for drones, capable of crossing windows on flights through enclosed spaces, through image processing (de Brito et al. 2019; de Jesus et al. 2019; Martins et al. 2018; Pinto et al. 2019) using a convolutional neural network. An object-detection strategy was used; from the object's location in the captured image, a programmable route for the drone can be computed. In this study, the object's location was established by bounding boxes, which define the quadrilateral around the detected object. The system is based on an open-source autopilot, Pixhawk, which has a control and simulation environment capable of doing the job. Two detection techniques were studied. The first is based on image-processing filters that capture polygons representing a passage inside a window. The other approach, aimed at a more realistic environment, was implemented with convolutional neural networks for object detection; with this type of network, a large number of windows can be detected.
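The bounding-box guidance described in this abstract can be sketched as a proportional controller on the box's offset from the image center. The following is a hypothetical minimal example, not the authors' implementation; the function names, command fields, and gain are illustrative:

```python
def center_offset(bbox, img_w, img_h):
    """Return the normalized (x, y) offset of a bounding-box center
    from the image center, each component in [-1, 1].

    bbox = (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    # 0 means the detected window is centered in the frame.
    return ((cx - img_w / 2.0) / (img_w / 2.0),
            (cy - img_h / 2.0) / (img_h / 2.0))


def steering_command(bbox, img_w, img_h, gain=0.5):
    """Proportional yaw/climb command that drives the window center
    toward the image center (hypothetical controller, gains invented)."""
    ox, oy = center_offset(bbox, img_w, img_h)
    # Image y grows downward, so a positive oy means "window is low":
    # command a negative climb correction and vice versa.
    return {"yaw_rate": gain * ox, "climb_rate": -gain * oy}
```

Once the offsets are near zero, the drone can simply be commanded forward through the opening; the real system would add depth estimation and velocity limits on top of this.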
Article
This paper introduces a new mobile robot with angled spoke-based wheels (ASWs). By analyzing the combination of the assembly angle of the spokes and the assembly angle of the wheel shaft, an ASW is designed. It has a specific trajectory in which spokes are simultaneously perpendicular and parallel to the ground. This wheel therefore contacts the ground like a conventional wheel, but its trajectory does not exceed the height of the spoke length, so various devices can be freely installed on the robot's upper part. Owing to the geometrical features of the wheel, the wheel shaft is mounted at an angle of 45° on the chassis, and 90° bevel gears are used to maximize space efficiency. The robot is 85 mm long and weighs 208 g. Using the proposed spoke wheel, the robot can travel on carpet at a sustained speed of 18 body lengths per second. Regarding obstacles, the robot can overcome a height of 0.7 times the spoke length. Moreover, it can climb continuous obstacles with a staircase angle of 37.5° and overcome obstacles of irregular shape and arbitrary height, such as a tangled rope. It was also confirmed that the robot could carry a load equal to its own weight without any significant drop in speed.
Conference Paper
Full-text available
Real-time 3D perception of the surrounding environment is a crucial precondition for the reliable and safe application of mobile service robots in domestic environments. Using an RGB-D camera, we present a system for acquiring and processing 3D (semantic) information at frame rates of up to 30 Hz that allows a mobile robot to reliably detect obstacles and segment graspable objects and supporting surfaces as well as the overall scene geometry. Using integral images, we compute local surface normals. The points are then clustered, segmented, and classified in both normal space and spherical coordinates. The system is tested in different setups in a real household environment. The results show that the system is capable of reliably detecting obstacles at high frame rates, even obstacles that move fast or protrude only slightly from the ground. The segmentation of all planes in the 3D data even allows for correcting characteristic measurement errors and for reconstructing the original scene geometry at far ranges.
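The local surface normals mentioned above (which the paper computes quickly via integral images) are conventionally estimated by PCA over each point's neighborhood: the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue. A minimal numpy sketch of that standard estimator, not the paper's integral-image variant:

```python
import numpy as np

def surface_normal(points):
    """Estimate the local surface normal of a small 3-D point
    neighborhood as the covariance eigenvector with the smallest
    eigenvalue (standard PCA normal estimation)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest-eigenvalue direction
    # Orient the normal toward the sensor at the origin (viewpoint trick).
    if normal @ pts.mean(axis=0) > 0:
        normal = -normal
    return normal
```

For example, four points on the plane z = 0 yield a normal along the z axis; the integral-image method in the paper reaches the same result in constant time per pixel by exploiting the organized structure of RGB-D images.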
Article
Full-text available
For the automatic processing of point clouds, segmentation is one of the most important steps. Methods based on curvature and other higher-order derivatives often lead to over-segmentation, which later requires a lot of manual editing. We present a method for the segmentation of point clouds using a smoothness constraint, which finds smoothly connected areas in point clouds. It uses only local surface normals and point connectivity, which can be enforced using either k-nearest or fixed-distance neighbours. The presented method requires a small number of intuitive parameters, which provide a trade-off between under- and over-segmentation. The application of the presented algorithm to industrial point clouds shows its effectiveness compared to curvature-based approaches.
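The smoothness-constrained region growing described in this abstract can be sketched as a greedy flood fill over normals. The toy version below is a simplification (brute-force fixed-distance neighbours, pairwise normal test only; the published method also orders seeds by curvature), but it shows the core idea of the smoothness constraint:

```python
import numpy as np

def region_grow(points, normals, radius=1.5, angle_thresh_deg=10.0):
    """Greedy region growing under a smoothness constraint: a neighbor
    joins the current region if its normal deviates from the current
    point's normal by less than angle_thresh_deg. Returns one region
    label per point."""
    pts = np.asarray(points, float)
    nrm = np.asarray(normals, float)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(pts), dtype=int)   # -1 means "unassigned"
    region = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            # Fixed-distance neighborhood query (brute force for clarity).
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                if abs(nrm[i] @ nrm[j]) >= cos_t:   # smooth transition
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels
```

The angle threshold is exactly the under-/over-segmentation trade-off the abstract mentions: a tight threshold splits gently curved surfaces, a loose one merges adjacent planes.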
Conference Paper
Full-text available
With the advent of new, low-cost 3D sensing hardware such as the Kinect, and continued efforts in advanced point cloud processing, 3D perception gains more and more importance in robotics, as well as other fields. In this paper we present one of our most recent initiatives in the area of point cloud perception: PCL (Point Cloud Library - http://pointclouds.org). PCL presents an advanced and extensive approach to the subject of 3D perception, and it is meant to provide support for all the common 3D building blocks that applications need. The library contains state-of-the-art algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation. PCL is supported by an international community of robotics and perception researchers. We provide a brief walkthrough of PCL including its algorithmic capabilities and implementation strategies.
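The model-fitting building block mentioned in this abstract is typically a sample-consensus (RANSAC) fit; in PCL it is provided by the segmentation module. The sketch below is a plain-numpy stand-in, not PCL's API, with illustrative parameter choices:

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, rng=None):
    """Fit a dominant plane with RANSAC (the same sample-consensus
    model fitting PCL's segmentation module offers). Returns
    (normal, d, inlier_mask) with the plane written as normal.x + d = 0."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, float)
    best_mask, best_model = None, None
    for _ in range(iters):
        # Hypothesize a plane from three random points.
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ a
        # Score the hypothesis by counting points near the plane.
        mask = np.abs(pts @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

For terrain analysis, the inlier set of such a fit is a candidate flat region; PCL additionally offers refined variants (e.g. normal-constrained plane models) behind the same sample-consensus interface.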
Article
Full-text available
This paper is devoted to the permanence of the concept of Zero-Moment Point, widely known by the acronym ZMP. Thirty-five years have elapsed since its implicit presentation (actually before being named ZMP) to the scientific community and thirty-three years since it was explicitly introduced and clearly elaborated, initially in the leading journals published in English. Its first practical demonstration took place in Japan in 1984, at Waseda University, in the laboratory of Ichiro Kato, in the first dynamically balanced robot WL-10RD of the robotic family WABOT. The paper gives an in-depth discussion of the source results concerning ZMP, paying particular attention to some delicate issues that may lead to confusion if the method is applied in a mechanistic manner to irregular cases of artificial gait, i.e. in the case of loss of dynamic balance of a humanoid robot. After a short survey of the history of the origin of ZMP, a very detailed elaboration of the ZMP notion is given, with a special review concerning "boundary cases", when the ZMP is close to the edge of the support polygon, and "fictitious cases", when the ZMP would fall outside the support polygon. In addition, the difference between the ZMP and the center of pressure is pointed out. Finally, some unresolved or insufficiently treated phenomena that may yield a significant improvement in robot performance are considered.
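For a point-mass model of the robot, the sagittal ZMP coordinate follows directly from the positions and accelerations of the mass points. A minimal sketch under the usual simplifications (point masses, angular momentum about each mass point neglected):

```python
def zmp_x(masses, x, z, ax, az, g=9.81):
    """Sagittal ZMP of a set of point masses, using the simplified
    formula (angular momentum terms omitted):

        x_zmp = sum_i m_i * ((az_i + g) * x_i - ax_i * z_i)
                -----------------------------------------
                      sum_i m_i * (az_i + g)

    For a static robot this reduces to the ground projection of the
    center of mass."""
    num = sum(m * ((azi + g) * xi - axi * zi)
              for m, xi, zi, axi, azi in zip(masses, x, z, ax, az))
    den = sum(m * (azi + g) for m, azi in zip(masses, az))
    return num / den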
Conference Paper
Full-text available
Biped robots have specific dynamical constraints and stability problems which significantly reduce their motion range. Under these conditions, motion planning methods used for mobile robots cannot be applied to biped robots. In this paper, the path-planning problem is seen as finding a sequence of footholds in a 3D environment while keeping robot stability and motion continuity and working within the structural constraints of the biped. The designed path planner contains two parts: the first determines a reference path that maximises the success rate in view of the biped's capabilities; this reference track is computed by the well-known A* graph-search algorithm. The second part of the path planner is a path-tracking algorithm that makes the robot follow the reference track. Simulation results concern the anthropomorphic 15-degrees-of-freedom robot BIP2000.
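The first stage, an A* search for a reference path, can be sketched on a 2-D grid of candidate footholds. This is a minimal illustration that ignores the biped's kinematic and balance constraints; `passable` stands in for whatever foothold-validity test the planner uses:

```python
import heapq

def astar(start, goal, passable, max_step=1):
    """A* over a grid of candidate footholds. Neighbors are cells
    within max_step in x and y, step cost is Euclidean length, and
    the heuristic is the straight-line distance to the goal (which
    is admissible for this cost). Returns the cell sequence or None."""
    def h(p):
        return ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5

    frontier = [(h(start), 0.0, start, [start])]   # (f, g, cell, path)
    best_g = {}
    while frontier:
        f, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if best_g.get(cur, float("inf")) <= g:
            continue                               # already expanded cheaper
        best_g[cur] = g
        for dx in range(-max_step, max_step + 1):
            for dy in range(-max_step, max_step + 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (dx, dy) == (0, 0) or not passable(nxt):
                    continue
                step = (dx * dx + dy * dy) ** 0.5
                heapq.heappush(frontier,
                               (g + step + h(nxt), g + step, nxt, path + [nxt]))
    return None
```

In the full planner, each grid cell would carry terrain information and the edge test would encode reachable stance transitions rather than a simple occupancy check.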
Conference Paper
We address the problem of foothold selection in robotic legged locomotion over very rough terrain. The difficulty of the problem we address here is comparable to that of human rock-climbing, where foot/hand-hold selection is one of the most critical aspects. Previous work in this domain typically involves defining a reward function over footholds as a weighted linear combination of terrain features. However, a significant amount of effort needs to be spent in designing these features in order to model more complex decision functions, and hand-tuning their weights is not a trivial task. We propose the use of terrain templates, which are discretized height maps of the terrain under a foothold on different length scales, as an alternative to manually designed features. We describe an algorithm that can simultaneously learn a small set of templates and a foothold ranking function using these templates, from expert-demonstrated footholds. Using the LittleDog quadruped robot, we experimentally show that the use of terrain templates can produce complex ranking functions with higher performance than standard terrain features, and improved generalization to unseen terrain.
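The template idea can be illustrated with a toy ranking function: each candidate foothold's local height-map patch is matched to its nearest terrain template, and the template carries a desirability score. In the paper both the templates and their scores are learned from expert demonstrations; here they are simply assumed given:

```python
import numpy as np

def rank_footholds(patches, templates, template_scores):
    """Rank candidate footholds best-first. Each patch (a discretized
    local height map) is matched to its nearest template after removing
    the mean height (offset invariance), and inherits that template's
    score. Toy stand-in for the learned template ranking."""
    ranked = []
    for idx, patch in enumerate(patches):
        p = np.asarray(patch, float).ravel()
        p = p - p.mean()                        # height-offset invariance
        dists = [np.linalg.norm(p - (np.asarray(t, float).ravel()
                                     - np.mean(t)))
                 for t in templates]
        ranked.append((template_scores[int(np.argmin(dists))], idx))
    ranked.sort(reverse=True)                   # highest score first
    return [idx for score, idx in ranked]
```

The appeal over hand-designed features, as the abstract notes, is that the templates themselves are learned, so no terrain features or weights need hand-tuning.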
Article
A novel step sequence planning (SSP) method for biped-walking robots is presented. The method adopts a free space representation custom-designed for efficient biped robot motion planning. The method rests upon the approximation of the robot shape by a set of 3D cylindrical solids. This feature allows efficient determination of feasible paths in a 2.5D map, comprising stepping over obstacles and stair climbing. A SSP algorithm based on A∗-search is proposed which uses the advantages of the aforementioned environment representation. The efficiency of the proposed approach is evaluated by a series of simulations performed for eight walking scenarios.
E. Krotkov and J. Manzo, "Virtual Robotics Challenge technical guide," DISTAR Case 20776, Draft Version 5, February 2013.
M. Inja, N. Heijne, S. Nugteren, and M. de Waard, "Project AI: The DARPA Robotics Challenge - F.O.O.T.L.O.O.S.E.," Project Report, Universiteit van Amsterdam, February 2013.
R. Rusu and S. Cousins, "3D is here: Point Cloud Library (PCL)," in Proc. IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 1-4.