AILA - Design of an autonomous mobile dual-arm robot
Johannes Lemburg, José de Gea Fernández, Markus Eich, Dennis Mronga,
Peter Kampmann, Andreas Vogt, Achint Aggarwal, Yuping Shi, and Frank Kirchner
Abstract— This paper presents the design of AILA, a mobile
dual-arm robot system developed as a research platform for
investigating aspects of the rapidly growing, multidisciplinary
area of mobile manipulation. The robot integrates in a single
platform most of the research areas involved in autonomous
robotics: navigation, mobile and dual-arm manipulation planning,
active compliance and force control strategies, object recognition,
scene representation, and semantic perception. AILA has 32
degrees of freedom, comprising two 7-DOF arms, a 4-DOF torso,
a 2-DOF head, and a mobile base equipped with six wheels, each
with two degrees of freedom. The primary design goal was a
lightweight arm construction with a payload-to-weight ratio greater
than one. In addition, a height-adjustable body carries the dual-arm
system to provide an extended workspace, and a wheel-based mobile
base provides mobility. As a result, AILA's arms can lift 8 kg while
weighing 5.5 kg each, achieving a payload-to-weight ratio of 1.45.
The paper provides an overview of the design, particularly of the
mechatronics, as well as of its realization, the sensors incorporated
in the system, and its control software.
I. INTRODUCTION
In recent years, autonomous mobile manipulation has
emerged as a new research area of robotics and has been
identified as critical for future robotic applications [1]. This
area brings together previously independent research topics
on a common ground: mobility, perception, manipulation and,
above all, their combination within a single system that is able
to perceive and understand its environment, move around, and
manipulate and learn about objects. Mobile manipulation obviously
shares many similarities with the area of humanoid robotics,
but with a clear difference: anthropomorphism and, specifically,
the use of legs are not mandatory. The focus lies on the combination
of mobility and manipulation, that is, neither on fixed-base robot
manipulators alone nor on rovers alone, but on their combination
into a single system with enhanced capabilities to move around
and modify the arrangement of the environment. The area of mobile manipulation is
primarily concerned with dynamic and unstructured envi-
ronments that might be partially or completely unknown,
which requires the robot to acquire information and adapt
autonomously to the ongoing situation.
Noteworthy examples within the area of mobile manipula-
tors are the PR-2 robot [2] developed by Willow Garage, the
The work presented in this paper is part of the SemProM project, funded
by the German Federal Ministry of Education and Research (BMBF), grant
No. 01IA08002.
All authors are with the DFKI Robotics Innovation Center, Bremen,
Germany. Email: johannes.lemburg@dfki.de
Fig. 1. Robotic system AILA
’butler’ robot HERB [3] developed at Intel Research Labs in
collaboration with Carnegie Mellon University, Rollin’ Justin
[4] developed at DLR Institute of Robotics and Mechatronics
in Germany, and the UMan [5] from the Robotics and
Biology Lab at the University of Massachusetts Amherst.
PR-2 is a dual-arm robot with an omni-directional mobile
base. It includes a variety of sensors, such as a tilting laser
scanner on the head, a laser scanner on the mobile base, two pairs
of stereo cameras, and an IMU located inside the body.
Currently, PR-2 is likely the most advanced autonomous
mobile manipulator, able to successfully perform complex
manipulation and navigation tasks. HERB is an autonomous
mobile manipulator that was designed for performing com-
plex manipulation tasks in home environments. The robot is
able to search, recognize, and store new objects as well as
manipulate door handles and objects and navigate in cluttered
environments. Rollin’ Justin is a staged development
that builds on the well-known lightweight arms from DLR
(LWR-III): first a dual-arm robot (Justin) was built, which
was later equipped with a mobile base to enlarge
the robot’s working range. The UMan robot consists of a
modified Nomadic XR4000 holonomic mobile base with
three degrees of freedom, a WAM seven degree-of-freedom
manipulator arm, and a four degree-of-freedom hand from
Barrett Technology.
II. SYSTEM OVERVIEW
AILA consists of an anthropomorphic upper body
mounted on a wheeled mobile platform. The upper body
carries two arms, each of them with seven degrees of
freedom, a torso with four joints, and a head with two
degrees of freedom. The mobile platform (Fig. 5) consists
of six wheels with two degrees of freedom each, one for
the steering axis and one for driving the wheel. The main
focus of the design was on the development of the arms and
the upper torso; the mobile base is a first solution to provide
the robot with mobility, and future work will concentrate on
new concepts of mobility/locomotion. The
robot’s hardware includes two Prosilica GC780C cameras
that create a stereo system unit in the head which is mounted
on a neck able to pan and tilt on an anthropomorphic
path. A periodically-tilting, short-range Hokuyo URG laser
scanner in the chest and a Mesa SR-4000 3D Time-of-Flight
(TOF) camera in the robot’s stomach are combined for
object and scene recognition, as well as for pose estimation.
Two long-range Hokuyo UTM laser scanners give the mobile
base an all-around view. Together, these six sensors provide a
multimodal view of the environment. AILA is equipped with
three computers. Two 3.5-inch embedded PCs: one for motion control located in
the head and one for navigation located in the mobile base.
A mini-ITX board in combination with a dedicated graphics
card for vision processing is located in the torso. The
communication network consists of five independent CAN-
lines for controlling the two arms, the torso, and the wheel
modules of the base. Gigabit Ethernet, routed through two
five-port switches, connects the head cameras, the three computers,
and the outside world. The motion computer communicates
through a dedicated RS-485 bus with the Skyetek M4 RFID
module which is integrated together with its antenna in the
left hand. Further external sensors are two six-axis force-
torque sensors at the robot’s wrists. Proprioceptive sensors
include Hall sensors for the measurement of the absolute
position of the joints, motor current measurements, and
incremental optical encoders for the motor commutation and
joint torque measurement.
III. DESIGN CONSIDERATIONS
In spite of AILA’s anthropomorphic appearance, its basic
purpose is to be a research platform for autonomous mobile
dual-arm manipulation. Therefore, its underlying system con-
cept is a pair of arms mounted to a height-adjustable frame
on a mobile base. To enable fast and precise coordinated
dual-arm movements, the primary design goal is a high
payload-to-weight ratio of the arms by featuring low masses,
low moments of inertia, high stiffness of the limbs, and
strong drives. These requirements have already been met by
other robotic systems [6][7], but not within the constraint of a
truly human-sized design space. The following
requirements are axioms that had an important influence on
the system’s concept:
•The payload-to-weight ratio of each arm shall be as high
as technically possible to enable fast reactions with high
accelerations.
•The overall look-and-feel of the robot shall be anthro-
pomorphic and aesthetically pleasing.
•The development of the arm drives shall be an improve-
ment of existing components developed at our center for
previous robotics systems (Spaceclimber [8]) while, at
the same time, ensuring compatibility with them.
•The working height of the arms and the center of gravity
of the upper limbs shall be adjustable with regard to the
mobile base.
•The mobile base shall move holonomically on slightly
rough terrain.
IV. MECHATRONIC DESIGN
A. Mechanics
For the design of the arms, a kinematic model of seven
degrees of freedom was chosen with pair-wise grouping
of joints with intersecting axes at the wrist, and three
intersecting axes at the shoulder (Fig. 2, Top). The grouping
of joints lowers the weight of their combined housing and
helps to reduce the moment of inertia of the limb by placing
the relatively heavy motors near to the arm’s base. The joints
of the elbow and shoulder consist of brushless DC motors
in combination with harmonic drives and are independently
controlled by a stack of three PCBs housing power and
control electronics. The two degrees of freedom of the wrist
are controlled by a parallel kinematic structure driven by two
DC linear motors, each of them including planetary gear,
encoder, and its own controller.
The assembly of two joints in one housing (Fig. 2,
Bottom) is connected to the upper and lower arm structure by
four-point thin-section bearings. The housing itself consists
of machined aluminum parts joined together by carbon-
fiber reinforced plastic parts. Within each joint, a brushless
motor drives a shaft that is supported on the one end by
a bearing and on the other directly on the wave generator
of the connected harmonic drive gear. Within the harmonic
drive, a spring-loaded electromagnetic safety-brake acts on
an axially-mounted brake disk. All necessary cabling passes
through the centre of the joint, guided by a tube that also
drives a magnet hovering over an absolute Hall encoder on
the PCB stack. The tube diameter of 8 mm is dictated by the
opening of the smallest harmonic drive’s flex spline, used in
the axial joint of the forearm. Optical
incremental encoders are integrated to measure the position
of the rotor and commutate the stator accordingly.
The conceptual considerations for the body and the head of
AILA follow a different approach than the structural concept
for the arms. The basic purpose of the body is to carry the
Fig. 2. Top: Arm specification. Bottom: Detail of the elbow housing
containing two joints
arms and make their working space adjustable in height. The
structural parts of the torso, leg, and head are sheet metal
parts (Fig. 3), because in addition to being lightweight they
also have to be exchangeable. Since AILA is a research
platform, it is foreseeable that there will be future changes
to the sensory setup or the CPUs. The leg and the torso as
main load-carrying structures have also as second function
the housing of the vision CPU, a laser scanner, and a 3D
TOF camera for navigation, network components, and the
control board for its motors.
The basic concept for designing lightweight structures is
to achieve an evenly distributed stress, close to the yield point,
throughout the material used. Besides choosing a material
with a high specific Young’s modulus, the stiffness of a robotic
arm is increased by a cross-section with a high second moment
of area, which leads to the choice of a thin-shelled tube with
as few openings as possible. Therefore, the principal design of
AILA’s arm structure (Fig. 4) is a tube of high-modulus carbon
fiber along the force paths, combined with machined
high-strength aluminum where precision is needed or the
loads are diverse.
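As a brief illustration of this reasoning (standard thin-walled beam relations, added here for clarity rather than taken from the paper), consider a cantilevered tube of mean radius $r$, wall thickness $t$, length $L$, density $\rho$, and Young's modulus $E$:

\[
I \approx \pi r^{3} t, \qquad m = 2\pi r t L \rho, \qquad \delta_{\mathrm{tip}} = \frac{F L^{3}}{3 E I}.
\]

At fixed mass (i.e., fixed product $rt$), the second moment of area $I$ grows with $r^{2}$, so the tip deflection under a load $F$ drops accordingly; a wide, thin-shelled tube with few openings is therefore the stiffness-optimal cross-section.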
Fig. 3. Body specification and sheet metal structure
B. Electronics and Power Supply
AILA’s electronics include various sensor modules, three
computers, and three different kinds of motors. Therefore, it
has to provide various voltages with noticeable currents for
the CPUs and the drives. The main system setup differen-
tiates between logic and power circuits. For safety reasons,
there is one power cut-off on the mobile base near the battery
that mechanically breaks all circuits, and a second wireless
kill-switch affecting just the drive circuits for paralyzing
the robot without interrupting the sensor and logic network.
Within the mobile base, the current of the 48V battery is
split into a 1 kW 24 V power supply, a 0.5 kW 12 V power
supply, and the direct supply of the wheel and arm drives.
Fifteen sub-circuits are separated by automatic circuit
breakers. Within each sub-circuit all damageable components
are protected with fast-reacting fuses according to their
nominal power consumption. Voltages below 12 V, which are
needed for the chest laser scanner and the mini-ITX board,
are converted by a 120 W CPU power supply placed in the
torso.
The main wire harnesses are the power cables within
the mobile base, the power and communication cabling from
the base to the chest, and three branches to the arms and
the head. Within the mobile base, a noticeable amount of
design space is dedicated to the high-current power cables.
The arrangement of components was chosen to optimize the
routing of these stiff cables with large bending radii. Because
the main power converters and the circuit breakers are placed in
Fig. 4. Hybrid joint and left arm structural parts
Fig. 5. Wheeled mobile base used to provide mobility to the first platform.
Future developments will focus on new concepts of mobility/locomotion for
AILA.
the mobile base, the number of cables routed to the torso
increases. All these cables are bundled to one harness that
follows a route similar to the placement of a spine with
minor deviations in the hip, the shoulders, and the head. To
provide enough slack for the head-movement, the spine-like
harness goes with a large bending radius from the neck into
the design space of the tongue. For ease of maintenance,
mainly pre-tailored cables are used. Excess cable length is
stored in the right hemisphere of the head, while all communication
and debugging interfaces point to the left.
V. CONTROL SOFTWARE
A. Control of the motor joints
The torso joints as well as the vertical axes of the mobile
platform use Faulhaber DC motors, which are driven by
an in-house developed power electronics board based on
an STM32 microcontroller. The board is equipped with
current, speed, and position sensors thus enabling local motor
control. The high-level commands are transmitted from the
embedded PCs via CAN messages. The arm joints and the
horizontal axes of the mobile platform use brushless DC mo-
tors from Robodrive. A similar control approach to the one
previously described has been used for these motors, which
has already been successfully integrated in the SpaceClimber
robot [8][9]. In this case, the in-house developed motor
electronics consists of a stack of three circular PCBs. These
PCBs are designed to be integrated directly behind the stator
of the Robodrive motors. The boards incorporate all sensors
which are necessary to monitor and control the motor. Three
motor current sensors are integrated in the low phases of the
three-phase H-bridges. Encoder wheels in front of the gear as
well as an absolute angular encoder behind the gear measure
the motor position for speed and position control purposes.
The stack is also equipped with two temperature sensors
and input voltage measurement. Additionally, one SD-card
module is used to log all sensor data at the highest possible
frequency. The interface to high-level control units is realised
via a CAN interface. Configuration data specific to each
motor - e.g. CAN identifier or position offsets - is written and
read to/from an EEPROM memory. All mentioned sensors as
well as current, speed, and position controllers are processed
by a Spartan3 FPGA from Xilinx.
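To make the described interface concrete, the sketch below (using the python-can library) shows how a high-level controller might send a joint position set-point over one of the CAN lines. The CAN identifier, payload layout, and scaling are hypothetical assumptions, since the paper does not specify the message format:

import struct
import can  # python-can library

def send_joint_position(bus, can_id, position_rad):
    # Hypothetical payload: set-point encoded as a signed 32-bit
    # integer in millidegrees, little-endian.
    millidegrees = int(position_rad * 180000.0 / 3.141592653589793)
    frame = can.Message(arbitration_id=can_id,
                        data=struct.pack("<i", millidegrees),
                        is_extended_id=False)
    bus.send(frame)

# Example: command one arm joint (ID 0x10 is an assumption) to 0.5 rad.
bus = can.interface.Bus(channel="can0", bustype="socketcan")
send_joint_position(bus, can_id=0x10, position_rad=0.5)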
B. Control of the mobile base
The control system consists of two different control units,
one microcontroller and one FPGA (Fig. 6), each of them
communicating over a CAN bus with the navigation computer.
Each controller pair controls one wheel, which consists of two
motors coupled at a common base point where the wheel
contacts the ground. The vertical
axis steers the wheel in a range of 180 degrees by using a
Faulhaber DC motor. The horizontal axis controls the wheel
speed of the RoboDrive brushless DC motor. All sensors
needed for monitoring the state of the motors as well as
processing modules for current, speed and position control
as well as data-logging are directly integrated in these control
units. The described arrangement allows linear, lateral, and
rotary movements and hence gives the robot almost omni-
directional mobility. The wheel control system sends status
messages such as the actual speeds, angles, temperatures,
and power consumption to the navigation computer, whereas
the desired wheel speeds and steering angles are the messages
received by the wheel controllers.
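Although the paper does not give the kinematic equations, the mapping from a desired base twist (vx, vy, omega) to per-wheel steering angles and drive speeds follows standard kinematics for independently steered wheels. The sketch below illustrates the idea; the wheel positions and radius are placeholder values, and the handling of the 180-degree steering limit is a simplification:

import math

def wheel_commands(vx, vy, omega, wheel_positions, wheel_radius):
    """Compute (steering angle [rad], drive speed [rad/s]) per wheel
    for a desired base twist (vx, vy in m/s, omega in rad/s)."""
    commands = []
    for (x, y) in wheel_positions:
        # Velocity of the wheel mount point = translation + rotation term.
        wx = vx - omega * y
        wy = vy + omega * x
        steer = math.atan2(wy, wx)
        speed = math.hypot(wx, wy) / wheel_radius
        # Steering is limited to 180 degrees: flip the drive direction
        # instead of steering beyond +/- 90 degrees.
        if steer > math.pi / 2:
            steer, speed = steer - math.pi, -speed
        elif steer < -math.pi / 2:
            steer, speed = steer + math.pi, -speed
        commands.append((steer, speed))
    return commands

# Placeholder geometry: six wheels, pure lateral (holonomic) motion.
positions = [(0.4, 0.3), (0.0, 0.35), (-0.4, 0.3),
             (0.4, -0.3), (0.0, -0.35), (-0.4, -0.3)]
print(wheel_commands(0.0, 0.2, 0.0, positions, wheel_radius=0.08))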
C. Control of the arms and torso
The overall architecture of the manipulation software is
shown in Figure 7. The coordinator is implemented as a
hierarchical state machine and controls the robot at task-
level. It makes use of the behavior base, which represents a
collection of basic robot functionalities (e.g. Plan Trajectory,
Tilt Head, ...) that can be combined to achieve more complex
behaviors. The behaviors themselves are implemented in
different modules and can be triggered by action calls of the
Fig. 6. Control scheme of the mobile base
coordinator. The motion planner provides the functionality
for trajectory planning, whilst the motion controller contains
routines for trajectory execution and other hardware-related
features. The world model collects information about the
robot’s environment (currently, mainly from the vision module)
and the robot’s current configuration, and supplies this
information to other modules upon request. For interprocess
communication and as inte-
grating software platform, we use the open-source framework
ROS [10].
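As a minimal illustration of this coordinator/behavior split (the class and behavior names below are invented for illustration; the paper names behaviors such as Plan Trajectory and Tilt Head, and the real system uses ROS action calls [10] rather than direct method calls):

class Behaviors:
    """Behavior base: a collection of basic robot functionalities."""
    def tilt_head(self, angle):
        print("tilting head to %.2f rad" % angle)
        return True
    def plan_trajectory(self, goal):
        print("planning trajectory to %s" % goal)
        return True

class Coordinator:
    """Task-level control as a (flattened) hierarchical state machine."""
    def __init__(self, behaviors):
        self.behaviors = behaviors
        self.state = "IDLE"
    def run_task(self, goal):
        transitions = {
            "IDLE": ("LOOK", lambda: self.behaviors.tilt_head(-0.3)),
            "LOOK": ("PLAN", lambda: self.behaviors.plan_trajectory(goal)),
            "PLAN": ("DONE", lambda: True),
        }
        while self.state != "DONE":
            next_state, action = transitions[self.state]
            if not action():        # a failed behavior aborts the task
                self.state = "IDLE"
                return False
            self.state = next_state
        self.state = "IDLE"
        return True

Coordinator(Behaviors()).run_task("shelf")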
Fig. 7. Architecture of the manipulation framework
D. Semantic Perception
The robot AILA perceives the 3D environment through
a variety of sensor modalities. A stereo vision system is
integrated into the head for depth and texture perception.
Additionally, a tilting laser range finder is integrated into
the upper body for 3D shape recovery. Figure 8 shows a
typical manipulation scenario, i.e., a shelf containing several
objects for manipulation. In order to execute high-level
commands like “Take the yellow box from the shelf” a
semantic interpretation of the sensor data is mandatory. The
robot needs to identify the shelf from the raw 3D data
in order to approach it. Currently, we work predominantly
on the detection of spatial entities like tables, doors, and
shelves and how they can be described using spatial feature
descriptors [11].
In our previous work ([11], [12], [13]) we described
in detail how the gap between the robot’s perception
and a symbolic description can be bridged. A tilting
laser range finder generates points in a geometric coordinate
system. In our semantic perception approach, we use the 3D
point cloud data from the tilting laser in order to recover
structural information about the robot’s environment. Using
a modified region-growing approach [13] which is based
on Rabbani’s Algorithm [14], planes are detected in the
unorganized point cloud. Each detected plane is analyzed
for the structural information it contains. To further
process the detected planes, the convex shape of each
plane is calculated using alpha-shape recovery [15]. Once
the shapes have been recovered from the unorganized point
cloud, the goal is to classify the structure the robot perceives
and to label the structure with semantics. To make semantic
labeling possible in indoor environments, we make use of
some basic assumptions. In a typical indoor environment,
such as a household or an office, most structures are
rectangular and largely parallel or orthogonal to each other. The
robot has to extract a vector of feature descriptors from the
spatial entities in order to compare them with the semantic
knowledge database. In a first approach, we define a set
of vectors which are able to describe spatial entities of an
environment.
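A minimal sketch of the plane-detection step may help: smoothness-constrained region growing groups neighboring points whose surface normals are nearly aligned, following the general idea of [13], [14]. The angle threshold and the precomputed neighbor lists below are simplifying assumptions, not the authors' implementation:

import numpy as np

def region_grow(normals, neighbors, angle_thresh_deg=10.0):
    """Label points into smooth regions: a point joins its neighbor's
    region when their unit normals deviate by < angle_thresh_deg."""
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(normals), dtype=int)
    region = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            for j in neighbors[i]:  # precomputed k-nearest-neighbor indices
                if labels[j] == -1 and \
                        abs(float(np.dot(normals[i], normals[j]))) > cos_thresh:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels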
Fig. 8. Left: The robot AILA perceives the shape of the shelf by the
segmentation of the point cloud, generated by a tilting 2D laser range finder
in the upper part of the body. Right: The shelf as it is perceived by the robot
In our current implementation, we again consider a likelihood
function in order to deal with uncertainties. For
instance, two shapes can be parallel with a certainty of
0.9 due to noise and rounding differences in the extraction
process. Using a model based on spatial ontologies, which
can be expressed in the language OWL-DL, detectable
features are thereafter matched using a similarity function.
For a detailed description of the semantic labeling approach,
refer to [11].
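The exact likelihood function is described in [11]; as a simplified stand-in, the certainty that two planar patches are parallel can be derived from the angle between their unit normals, with the angular noise level sigma_deg as a tunable assumption:

import numpy as np

def parallel_certainty(n1, n2, sigma_deg=5.0):
    """Map the angle between two unit plane normals to a certainty in
    [0, 1]; sigma_deg models the angular noise of the extraction step."""
    cos_angle = np.clip(abs(float(np.dot(n1, n2))), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return float(np.exp(-0.5 * (angle_deg / sigma_deg) ** 2))

# Two nearly-parallel planes, e.g. shelf boards, score close to 0.9.
print(parallel_certainty(np.array([0.0, 0.0, 1.0]),
                         np.array([0.05, 0.0, 0.999])))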
E. Vision and Manipulation
The stereo vision system, which is integrated in the
head of the robot, consists of two Prosilica color video
cameras which are used for the perception of AILA’s nearby
operating area, including the arms. On the software side, the
vision module contains algorithms for object recognition and
tracking, scene interpretation, self-perception, and hand-eye
calibration.
For object recognition, we generate textured 3D mod-
els of the objects in AILA’s environment, extract global
properties, as well as local, texture-based features (e.g.
SIFT features [16]), and store this information in a model
database. During operation, the vision framework continuously
matches the features extracted from the observed scene
against the database and computes the pose of the recognized
objects using 2D-3D point correspondences (see Figure 9).
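The paper does not detail the pose computation, but the described pipeline can be sketched as SIFT matching followed by a RANSAC PnP solver, e.g. using OpenCV; the model database (descriptors and 3D points) and the camera matrix K are placeholders:

import cv2
import numpy as np

def estimate_object_pose(image, model_descriptors, model_points_3d, K):
    """Match image SIFT features against a stored object model and
    recover the object pose from 2D-3D correspondences via RANSAC-PnP."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:               # PnP needs at least four correspondences
        return None
    image_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
    object_pts = np.float32([model_points_3d[m.trainIdx] for m in good])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(object_pts, image_pts, K, None)
    return (rvec, tvec) if ok else None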
Fig. 9. Feature matching and 3D pose estimation
For motion planning of the upper body to grasp and
manipulate objects detected by the vision system, the kine-
matic relationships are defined between the 4 torso joints, 14
arm joints and 2 head joints. For tasks involving dual-arm
grasping and manipulation of homogeneous objects (Fig. 10),
motion plans are generated for 18 DOFs (arms and torso).
Incorporating the torso joints in the motion planning allows
the utilization of the entire reachability space of the robot
which is necessary for constrained dual-arm manipulation
tasks. OpenRAVE [17] is used as the common platform
for integrating the various modules of motion planning
like kinematics, workcell representation, planning, collision
avoidance, trajectory smoothing, and simulation. The plan-
ning algorithms are based on the Bi-directional Rapidly-
exploring Random Trees approach [18] and additional task-
based constraints and criteria are incorporated for dual-
arm manipulation. The constraints include maintaining the
relative end-effector (EEF) configuration between the two
arms, which ensures that the grasped object is not lost while it
is being moved. For tasks involving open-top or liquid-filled
object manipulation, additional constraints are imposed on
the orientation of the object while it is being manipulated.
Further, additional criteria like minimization of the lengths
of the EEF paths are used for selecting the optimum motion
plans. While the constraints are incorporated during the
planning phase, the criteria can be used to select an optimal
path from a set of paths or ask the planner to replan if the
criteria are not met.
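To illustrate the relative end-effector constraint (a simplified numerical check, not the authors' OpenRAVE implementation; the tolerances are assumptions), a sampled configuration can be rejected whenever the transform between the two grippers drifts from the transform recorded at grasp time:

import numpy as np

def relative_eef_constraint(T_left, T_right, T_rel_ref,
                            pos_tol=0.005, ang_tol_deg=2.0):
    """Check that the current left-to-right EEF transform stays close
    to the reference transform recorded when the object was grasped."""
    T_rel = np.linalg.inv(T_left) @ T_right      # current relative 4x4 pose
    T_err = np.linalg.inv(T_rel_ref) @ T_rel     # deviation from reference
    pos_err = np.linalg.norm(T_err[:3, 3])
    # Rotation error angle from the trace of the 3x3 error rotation.
    cos_a = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    ang_err_deg = np.degrees(np.arccos(cos_a))
    return pos_err < pos_tol and ang_err_deg < ang_tol_deg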
Fig. 10. OpenRAVE-based simulation of AILA performing a dual-arm
manipulation task
VI. EXPERIMENTS
First experiments have been performed in which AILA
made use of its key capabilities: semantic perception for
recognising objects in an office environment, autonomous
navigation, object recognition and pose estimation, and au-
tonomous dual-arm manipulation. Figure 11 shows snap-
shots of the sequence for grasping an object using two
arms in a laboratory setup. The robot recognised the table,
navigated towards it, recognised the object on the table as
well as its 3D pose, and planned the manipulation strategy
to grasp and manipulate the object using two arms.
VII. CONCLUSIONS/FUTURE WORK
Robots have been and still are great tools for studying
artificial intelligence. While in the earlier days of AI, systems
like Shakey [19] were designed to study aspects of planning,
plan execution monitoring, and navigation, their primary
achievement was to carry sensors (mainly cameras)
through the environment that was to be explored. Today we
understand that the central questions of AI research can only
be answered if the tools we use - the robots - provide a broader
range of capabilities. The internal representation of a
robot exploring an environment can be massively enriched
if, along with camera images and other optical data (laser scanner-
based point clouds), internal and proprioceptive data are used.
Such data can come, for example, from the motors in the kinematic
chain of a robot arm that handles objects in the environment,
for instance when opening a door. The more a robot is able
to actively interact with its environment, the better the
database for environmental representation becomes, and the
more its strategies for control, navigation, and planning can be
enhanced. In this paper, we presented a robotic platform
that incorporates multiple degrees of freedom and integrates
a large number of sensors, allowing a self-evaluation of
Fig. 11. Preliminary experiments of AILA performing autonomous dual-arm manipulation
the robot’s current state and scenario. Such a platform should
permit the emergence of learning capabilities by exploiting
its immersion in human environments with its own bodily
resources and limitations.
REFERENCES
[1] O. Brock and R. Grupen, “NSF/NASA workshop on autonomous mobile
manipulation (AMM),” http://robotics.cs.umass.edu/amm, March 2005,
Houston, USA.
[2] S. Chitta, B. Cohen, and M. Likhachev, “Planning for autonomous
door opening with a mobile manipulator,” in Proc. of the IEEE
International Conference on Robotics and Automation (ICRA), 2010.
[3] S. Srinivasa, D. Ferguson, C. Helfrich, D. Berenson, A. Collet, R. Di-
ankov, G. Gallagher, G. Hollinger, J. Kuffner, and M. VandeWeghe,
“HERB: a home exploring robotic butler,” Autonomous Robots, 2009.
[4] M. Fuchs, C. Borst, P. R. Giordano, A. Baumann, E. Kraemer,
J. Langwald, R. Gruber, N. Seitz, G. Plank, K. Kunze, R. Burger,
F. Schmidt, T. Wimboeck, and G. Hirzinger, “Rollin’ justin - design
considerations and realization of a mobile platform for a humanoid
upper body,” in ICRA’09: Proceedings of the 2009 IEEE international
conference on Robotics and Automation. Piscataway, NJ, USA: IEEE
Press, 2009, pp. 1789–1795.
[5] D. Katz, E. Horrell, O. Yang, B. Burns, T. Buckley, A. Grishkan,
V. Zhylkovskyy, O. Brock, and E. Learned-Miller, “The UMass
mobile manipulator UMan: An experimental platform for autonomous
mobile manipulation,” in Workshop on Manipulation in Human
Environments at Robotics: Science and Systems, 2006.
[6] G. Hirzinger, N. Sporer, A. Albu-Schaffer, M. Hahnle, R. Krenn,
A. Pascucci, and M. Schedl, “DLR’s torque-controlled light weight
robot III - are we reaching the technological limits now?” in IEEE
International Conference on Robotics and Automation (ICRA)., vol. 2,
2002, pp. 1710–1716.
[7] C. Ott, O. Eiberger, W. Friedl, B. Bauml, U. Hillenbrand, C. Borst,
A. Albu-Schaffer, B. Brunner, H. Hirschmuller, S. Kielhofer, R. Koni-
etschke, M. Suppa, T. Wimbock, F. Zacharias, and G. Hirzinger, “A
humanoid two-arm system for dexterous manipulation,” in 6th IEEE-
RAS International Conference on Humanoid Robots, Dec. 2006, pp.
276–283.
[8] J. Hilljegerdes, P. Kampmann, S. Bosse, and F. Kirchner, “Develop-
ment of an intelligent joint actuator prototype for climbing and walking
robots,” in International Conference on Climbing and Walking Robots
(CLAWAR-09), 2009, pp. 942–949.
[9] S. Bartsch, T. Birnschein, F. Cordes, D. Kühn, P. Kampmann,
J. Hilljegerdes, S. Planthaber, M. Römmermann, and F. Kirchner,
“SpaceClimber: Development of a six-legged climbing robot for space
exploration,” in Proceedings of the Joint Conference of ISR 2010
(41st International Symposium on Robotics) and ROBOTIK 2010 (6th
German Conference on Robotics). VDE Verlag GmbH, June 2010.
[10] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs,
E. Berger, R. Wheeler, and A. Ng, “ROS: an open-source robot
operating system,” in Proc. of the IEEE International Conference on
Robotics and Automation (ICRA), 2009.
[11] M. Eich, M. Dabrowska, and F. Kirchner, “Semantic labeling: Clas-
sification of 3D entities based on spatial feature descriptors,” in
Workshop Best Practice Algorithms in 3D Perception and Modeling
for Mobile Manipulation, IEEE International Conference on Robotics
and Automation, (ICRA-10), May 2010, Anchorage, 2010.
[12] M. Eich and F. Kirchner, “Reasoning about geometry: An approach us-
ing spatial-descriptive ontologies,” in Workshop AILog, 19th European
Conference on Artificial Intelligence, (ECAI-10), 16.8.-16.8.2010, Lis-
bon, 2010.
[13] M. Eich, M. Dabrowska, and F. Kirchner, “3D scene recovery and
spatial scene analysis for unorganized point clouds,” in Proceedings
of the 13th International Conference on Climbing and Walking Robots and
the Support Technologies for Mobile Machines (CLAWAR-10), 31.8.-
03.9.2010, Nagoya, Japan, 2010.
[14] T. Rabbani, F. van den Heuvel, and G. Vosselmann, “Segmentation of
point clouds using smoothness constraint,” in International Archives
of Photogrammetry, Remote Sensing and Spatial Information Sciences,
vol. 36, no. 5. Citeseer, 2006, pp. 248–253.
[15] W. Shen, “Building boundary extraction based on lidar point clouds
data,” in ISPRS08. ISPRS, 2008, p. 157.
[16] D. G. Lowe, “Object recognition from local scale-invariant features,” in
The Proceedings of the Seventh International Conference on Computer
Vision, vol. 2, 1999, pp. 1150–1157.
[17] R. Diankov and J. Kuffner, “OpenRAVE: A planning
architecture for autonomous robotics,” Robotics Institute,
Tech. Rep. CMU-RI-TR-08-34, July 2008. [Online]. Available:
http://openrave.programmingvision.com
[18] J. Kuffner and S. LaValle, “RRT-connect: An efficient approach to
single-query path planning,” in Proc. IEEE International Conference
on Robotics and Automation (ICRA), April 2000, pp. 995–1001.
[19] N. J. Nilsson, C. A. Rosen, B. Raphael, G. Forsen, L. Chaitin, and
S. Wahlstrom, “Application of intelligent automata to reconnaissance,”
Stanford Research Institute, Tech. Rep., December 1968, project 5953
Final Report, From the Nilsson archives, SHAKEY papers.