AILA - Design of an autonomous mobile dual-arm robot
Johannes Lemburg, José de Gea Fernández, Markus Eich, Dennis Mronga,
Peter Kampmann, Andreas Vogt, Achint Aggarwal, Yuping Shi, and Frank Kirchner
Abstract This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates, in a single platform, the means to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including two 7-DOF arms, a 4-DOF torso, a 2-DOF head, and a mobile base equipped with six wheels, each with two degrees of freedom. The primary design goal was a lightweight arm construction with a payload-to-weight ratio greater than one. In addition, a height-adjustable body sustains the dual-arm system, providing an extended workspace, and a wheel-based mobile base provides mobility. As a result, AILA's arms can lift 8 kg while weighing 5.5 kg each, achieving a payload-to-weight ratio of 1.45. The paper provides an overview of the design, especially of the mechatronics, as well as of its realization, the sensors incorporated in the system, and its control software.
I. INTRODUCTION
In recent years, autonomous mobile manipulation has emerged as a new research area of robotics and has been identified as critical for future robotic applications [1]. This area brings together previously independent research topics: mobility, perception, and manipulation. Above all, its goal is the use of all these subareas within a single system that is able to perceive and understand its environment, move around, and manipulate and learn about objects. Mobile manipulation obviously shares many similarities with humanoid robotics, but with a clear difference: anthropomorphism and, specifically, the use of legs is not mandatory. The focus lies on the combination of mobility and manipulation; that is, neither solely on fixed-base robot manipulators nor solely on rovers, but on the integration of both into a single system with enhanced capabilities to move around and modify the arrangement of its environment. The area of mobile manipulation is primarily concerned with dynamic and unstructured environments that might be partially or completely unknown, which requires the robot to acquire information and adapt autonomously to the ongoing situation.
Noteworthy examples within the area of mobile manipulators are the PR-2 robot [2] developed by Willow Garage, the ’butler’ robot HERB [3] developed at Intel Research Labs in collaboration with Carnegie Mellon University, Rollin’ Justin [4] developed at the DLR Institute of Robotics and Mechatronics in Germany, and UMan [5] from the Robotics and Biology Lab at the University of Massachusetts Amherst.

The work presented in this paper is part of the SemProM project, funded by the German Federal Ministry of Education and Research (BMBF), grant No. 01IA08002.

All authors are with the DFKI Robotics Innovation Center, Bremen, Germany. Email: johannes.lemburg@dfki.de

Fig. 1. Robotic system AILA
PR-2 is a dual-arm robot with an omnidirectional mobile base. It includes a variety of sensors, such as a tilting laser scanner on the head, a laser scanner on the mobile base, two pairs of stereo cameras, and an IMU located inside the body. Currently, PR-2 is likely the most advanced autonomous mobile manipulator, able to successfully perform complex manipulation and navigation tasks. HERB is an autonomous mobile manipulator designed to perform complex manipulation tasks in home environments. The robot is able to search for, recognize, and store new objects, manipulate door handles and objects, and navigate in cluttered environments. Rollin’ Justin is a progressive development that builds on the well-known DLR lightweight arms (LWR-III), first combined into a dual-arm robot (Justin) and finally equipped with a mobile base to enlarge the robot’s field of work. The UMan robot consists of a modified Nomadic XR4000 holonomic mobile base with three degrees of freedom, a seven degree-of-freedom WAM manipulator arm, and a four degree-of-freedom hand from Barrett Technologies.
II. SYSTEM OVERVIEW
AILA consists of an anthropomorphic upper body mounted on a wheeled mobile platform. The upper body carries two arms, each with seven degrees of freedom, a torso with four joints, and a head with two degrees of freedom. The mobile platform (Fig. 5) consists of six wheels with two degrees of freedom each: one for the steering axis and one for driving the wheel. The main focus of the design was on the development of the arms and the upper torso, with the mobile base serving as a first solution to provide the robot with mobility. Future developments will concentrate on new concepts of mobility/locomotion. The robot’s hardware includes two Prosilica GC780C cameras that form a stereo unit in the head, which is mounted on a neck able to pan and tilt along an anthropomorphic path. A periodically tilting short-range Hokuyo URG laser scanner in the chest and a Mesa SR-4000 3D Time-of-Flight (TOF) camera in the robot’s stomach are combined for object and scene recognition, as well as for pose estimation. Two long-range Hokuyo UTM laser scanners provide a circumferential view for the mobile base. Together, these six visual systems allow the extraction of a multimodal view of the environment. AILA is equipped with three computers: two 3.5-inch embedded PCs, one for motion control located in the head and one for navigation located in the mobile base, and a mini-ITX board with a dedicated graphics card for vision processing located in the torso. The communication network consists of five independent CAN buses for controlling the two arms, the torso, and the wheel modules of the base. Gigabit Ethernet, routed through two five-port switches, connects the head cameras, the three computers, and the outside world. The motion computer communicates through a dedicated RS-485 bus with the SkyeTek M4 RFID module, which is integrated together with its antenna in the left hand. Further external sensors are two six-axis force-torque sensors at the robot’s wrists. Proprioceptive sensors include Hall sensors for measuring the absolute position of the joints, motor current measurements, and incremental optical encoders for motor commutation and joint torque measurement.
III. DESIGN CONSIDERATIONS
In spite of AILA’s anthropomorphic appearance, its basic purpose is to be a research platform for autonomous mobile dual-arm manipulation. Therefore, its underlying system concept is a pair of arms mounted on a height-adjustable frame on a mobile base. To enable fast and precise coordinated dual-arm movements, the primary design goal is a high payload-to-weight ratio of the arms, achieved through low masses, low moments of inertia, high stiffness of the limbs, and strong drives. These requirements have already been met by other robotic systems [6][7], but not within the constraints of a human-size design space. The following requirements had a decisive influence on the system’s concept:
- The payload-to-weight ratio of each arm shall be as high as technically possible to enable fast reactions with high accelerations.
- The overall look-and-feel of the robot shall be anthropomorphic and aesthetic.
- The arm drives shall be an improvement of existing components developed at our center for previous robotic systems (SpaceClimber [8]) while, at the same time, remaining compatible with them.
- The working height of the arms and the center of gravity of the upper limbs shall be adjustable with respect to the mobile base.
- The mobile base shall move holonomically on slightly rough terrain.
IV. MECHATRONIC DESIGN
A. Mechanics
For the design of the arms, a kinematic model with seven degrees of freedom was chosen, with pair-wise grouping of joints with intersecting axes at the wrist and three intersecting axes at the shoulder (Fig. 2, top). The grouping of joints lowers the weight of their combined housing and helps to reduce the moment of inertia of the limb by placing the relatively heavy motors near the arm’s base. The joints of the elbow and shoulder consist of brushless DC motors in combination with harmonic drives and are independently controlled by a stack of three PCBs housing the power and control electronics. The two degrees of freedom of the wrist are controlled by a parallel kinematic structure driven by two DC linear motors, each including a planetary gear, an encoder, and its own controller.
The assembly of two joints in one housing (Fig. 2, bottom) is connected to the upper and lower arm structure by four-point thin-section bearings. The housing itself consists of machined aluminum parts joined together by carbon-fiber-reinforced plastic parts. Within each joint, a brushless motor drives a shaft that is supported on one end by a bearing and on the other directly on the wave generator of the connected harmonic drive gear. Within the harmonic drive, a spring-loaded electromagnetic safety brake acts on an axially mounted brake disk. All necessary cabling runs through the center of the joint, guided by a tube that also drives a magnet hovering over an absolute Hall encoder on the PCB stack. The tube diameter of 8 mm is dictated by the opening of the smallest harmonic drive’s flex spline, used in the axial joint of the forearm. Optical incremental encoders are integrated to measure the position of the rotor and commutate the stator accordingly.
Fig. 2. Top: Arm specification. Bottom: Detail of the elbow housing containing two joints

The conceptual considerations for the body and the head of AILA follow a different approach than the structural concept for the arms. The basic purpose of the body is to carry the arms and make their working space adjustable in height. The structural parts of the torso, leg, and head are sheet metal parts (Fig. 3) because, in addition to being lightweight, they also have to be exchangeable: since AILA is a research platform, future changes to the sensory setup or the CPUs are foreseeable. As the main load-carrying structures, the leg and the torso also serve a second function: housing the vision CPU, a laser scanner and a 3D TOF camera for navigation, network components, and the control board for the torso motors.
The basic concept for designing lightweight structures is to achieve an evenly distributed stress near the yield point throughout all of the material used. Besides choosing a material with a high specific Young’s modulus, the stiffness of a robotic arm increases with the second moment of area of its cross-section, which leads to the choice of a thin-shelled tube with as few openings as possible. Therefore, the design principle of AILA’s arm structure (Fig. 4) is a tube of high-modulus carbon fiber along the force paths, combined with machined high-strength aluminum where precision is needed or the loads are diverse.
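To make this design rule concrete, the following sketch (with illustrative dimensions, not AILA's actual ones) computes the bending stiffness E·I of such a thin-walled tube. Since I grows with the cube of the radius but only linearly with wall thickness, a wide, thin tube of high-modulus fiber gives the most stiffness per unit mass:

```python
import math

def thin_tube_bending_stiffness(radius_m, wall_m, e_modulus_pa):
    """Bending stiffness E*I of a thin-walled tube (t << r).

    For a thin-walled circular tube the second moment of area is
    approximately I = pi * r^3 * t, so stiffness grows with the cube
    of the radius but only linearly with wall thickness (and mass).
    """
    second_moment = math.pi * radius_m**3 * wall_m  # m^4
    return e_modulus_pa * second_moment             # N*m^2

# Illustrative values only (not AILA's dimensions): a 40 mm diameter,
# 1 mm wall high-modulus CFRP tube with E ~ 300 GPa.
stiffness = thin_tube_bending_stiffness(0.020, 0.001, 300e9)
print(f"EI = {stiffness:.1f} N*m^2")
```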
Fig. 3. Body specification and sheet metal structure
B. Electronics and Power Supply
AILA’s electronics include various sensor modules, three computers, and three different kinds of motors. The system therefore has to provide various voltages at considerable currents for the CPUs and the drives. The main system setup differentiates between logic and power circuits. For safety reasons, there is one power cut-off on the mobile base near the battery that mechanically breaks all circuits, and a second, wireless kill-switch affecting just the drive circuits, allowing the robot to be paralyzed without interrupting the sensor and logic network. Within the mobile base, the current of the 48 V battery is split into a 1 kW 24 V power supply, a 0.5 kW 12 V power supply, and a direct supply for the wheel and arm drives. Fifteen different sub-circuits are separated by automatic circuit breakers. Within each sub-circuit, all damageable components are protected by fast-reacting fuses rated according to their nominal power consumption. Voltages below 12 V, needed for the chest laser scanner and the mini-ITX board, are converted by a 120 W CPU power supply placed in the torso.
The main wire harnesses are the power cables within the mobile base, the power and communication cabling from the base to the chest, and three branches to the arms and the head. Within the mobile base, a considerable amount of design space is dedicated to the high-current power cables; the arrangement of components was chosen to optimize the routing of these stiff cables with large bending radii. Because the main power converters and the circuit breakers are placed in the mobile base, the number of cables routed to the torso increases. All these cables are bundled into one harness that follows a route similar to a spine, with minor deviations at the hip, the shoulders, and the head. To provide enough slack for head movement, the spine-like harness runs with a large bending radius from the neck into the design space of the tongue. For ease of maintenance, mainly pre-tailored cables are used. Excess cable length is stored in the right hemisphere of the head; all communication and debugging interfaces point to the left.

Fig. 4. Hybrid joint and left arm structural parts

Fig. 5. Wheeled mobile base used to provide mobility to the first platform. Future developments will focus on new concepts of mobility/locomotion for AILA.
V. CONTROL SOFTWARE
A. Control of the motor joints
The torso joints as well as the vertical axes of the mobile platform use Faulhaber DC motors, driven by an in-house developed power electronics board based on an STM32 microcontroller. The board is equipped with current, speed, and position sensors, thus enabling local motor control. High-level commands are transmitted from the embedded PCs via CAN messages. The arm joints and the horizontal axes of the mobile platform use brushless DC motors from Robodrive. A control approach similar to the one previously described is used for these motors and has already been successfully deployed in the SpaceClimber robot [8][9]. In this case, the in-house developed motor electronics consist of a stack of three circular PCBs, designed to be integrated directly behind the stator of the Robodrive motors. The boards incorporate all sensors necessary to monitor and control the motor. Three motor current sensors are integrated in the low-side phases of the three-phase H-bridges. Encoder wheels in front of the gear as well as an absolute angular encoder behind the gear measure the motor position for speed and position control purposes. The stack is also equipped with two temperature sensors and input voltage measurement. Additionally, an SD-card module is used to log all sensor data at the highest possible frequency. The interface to high-level control units is realised via CAN. Configuration data specific to each motor, e.g. the CAN identifier or position offsets, is written to and read from an EEPROM memory. All mentioned sensors as well as the current, speed, and position controllers are processed by a Xilinx Spartan-3 FPGA.
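As an illustration of the control structure such joint electronics typically implement, the following is a minimal sketch of a cascaded position-speed-current loop. The gains, limits, and units are placeholder assumptions, not AILA's actual parameters:

```python
class PI:
    """Minimal PI controller with output clamping."""
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, out))

class CascadedJointController:
    """Position -> speed -> current cascade, as commonly run per joint.

    Gains and limits below are placeholders for illustration only.
    """
    def __init__(self):
        self.pos_loop = PI(kp=20.0, ki=0.0, limit=3.0)    # output: rad/s
        self.vel_loop = PI(kp=0.8, ki=5.0, limit=10.0)    # output: A
        self.cur_loop = PI(kp=2.0, ki=400.0, limit=24.0)  # output: V

    def update(self, pos_ref, pos, vel, current, dt):
        vel_ref = self.pos_loop.update(pos_ref - pos, dt)
        cur_ref = self.vel_loop.update(vel_ref - vel, dt)
        return self.cur_loop.update(cur_ref - current, dt)  # voltage command
```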
B. Control of the mobile base
The control system consists of two different control units per wheel, one microcontroller and one FPGA (Fig. 6), each communicating over a CAN bus with the navigation computer. Each of these controller pairs controls one wheel, which consists of two motors coupled at a common base point at the ground contact of the wheel. The vertical axis steers the wheel in a range of 180 degrees using a Faulhaber DC motor; the horizontal axis controls the wheel speed via the RoboDrive brushless DC motor. All sensors needed for monitoring the state of the motors, as well as processing modules for current, speed, and position control and for data logging, are directly integrated in these control units. The described arrangement allows linear, lateral, and rotary movements and hence offers the rover nearly omnidirectional mobility. The wheel control system sends status messages, such as the actual speeds, angles, temperatures, and power consumption, to the navigation computer, while the wheel controllers receive the desired wheel speeds and steering angles in return.
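A minimal sketch of the underlying wheel-level kinematics follows: given a desired body twist (vx, vy, omega), each steered wheel is assigned the steering angle and rolling speed of the local velocity at its mount point. The wheel layout and the angle-folding for the 180-degree steering range are illustrative assumptions, not AILA's exact geometry:

```python
import math

# Wheel mount positions relative to the base frame (placeholder layout,
# not AILA's actual geometry): six wheels in two rows of three.
WHEEL_POSITIONS = [(x, y) for x in (-0.4, 0.0, 0.4) for y in (-0.3, 0.3)]

def wheel_commands(vx, vy, omega):
    """Steering angle and rolling speed per wheel for a desired body twist.

    Each wheel's velocity is the body velocity plus the rotational
    component omega x r; the steering axis aligns the wheel with it.
    """
    commands = []
    for (x, y) in WHEEL_POSITIONS:
        wx = vx - omega * y          # rotational term of omega x (x, y)
        wy = vy + omega * x
        speed = math.hypot(wx, wy)   # m/s along the wheel's rolling direction
        angle = math.atan2(wy, wx)   # steering angle
        # Fold angles outside +-90 degrees by reversing the drive direction,
        # since the steering axis covers only a 180-degree range.
        if angle > math.pi / 2:
            angle -= math.pi
            speed = -speed
        elif angle < -math.pi / 2:
            angle += math.pi
            speed = -speed
        commands.append((angle, speed))
    return commands
```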
C. Control of the arms and torso
The overall architecture of the manipulation software is shown in Figure 7. The coordinator is implemented as a hierarchical state machine and controls the robot at task level. It makes use of the behavior base, a collection of basic robot functionalities (e.g. Plan Trajectory, Tilt Head, ...) that can be combined to achieve more complex behaviors. The behaviors themselves are implemented in different modules and can be triggered by action calls from the coordinator. The motion planner provides the functionality for trajectory planning, while the motion controller contains routines for trajectory execution and other hardware-related features. The world model collects information about the robot’s environment (currently mainly from the vision module) and the robot’s current configuration, and supplies this information to other modules upon request. For interprocess communication and as the integrating software platform, we use the open-source framework ROS [10].
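A minimal sketch of this coordinator/behavior split is given below, assuming hypothetical behavior names and plain function calls in place of the ROS action interface used on the robot:

```python
class BehaviorBase:
    """Collection of basic robot functionalities triggered by the
    coordinator; the method names here are illustrative stand-ins."""
    def plan_trajectory(self, goal): ...
    def execute_trajectory(self, plan): ...
    def tilt_head(self, angle): ...

class Coordinator:
    """Task-level hierarchical state machine (simplified sketch)."""
    def __init__(self, behaviors):
        self.behaviors = behaviors
        self.state = "IDLE"
        self.plan = None

    def step(self, task):
        if self.state == "IDLE":
            self.state = "PLANNING"
        elif self.state == "PLANNING":
            # Trigger a basic behavior; on the robot this is a ROS action call.
            self.plan = self.behaviors.plan_trajectory(task.goal)
            self.state = "EXECUTING" if self.plan else "IDLE"
        elif self.state == "EXECUTING":
            self.behaviors.execute_trajectory(self.plan)
            self.state = "IDLE"
```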
Fig. 7. Architecture of the manipulation framework
D. Semantic Perception
The robot AILA perceives the 3D environment through a variety of sensor modalities. A stereo vision system is integrated into the head for depth and texture perception. Additionally, a tilting laser range finder is integrated into the upper body for 3D shape recovery. Figure 8 shows a typical manipulation scenario, i.e., a shelf containing several objects for manipulation. In order to execute high-level commands like “Take the yellow box from the shelf”, a semantic interpretation of the sensor data is mandatory: the robot needs to identify the shelf from the raw 3D data in order to approach it. Currently, we work predominantly on the detection of spatial entities like tables, doors, and shelves and on how they can be described using spatial feature descriptors [11].
In our previous work ([12], [13], and [11]) we described in detail how the bridge between the robot’s perception and a symbolic description can be built. A tilting laser range finder generates points in a geometric coordinate system. In our semantic perception approach, we use the 3D point cloud data from the tilting laser to recover structural information about the robot’s environment. Using a modified region-growing approach [13] based on Rabbani’s algorithm [14], planes are detected in the unorganized point cloud, and each detected plane is analyzed for the structural information it contains. To further process the detected planes, the convex shape of each single plane is calculated using alpha shape recovery [15]. Once the shapes have been recovered from the unorganized point cloud, the goal is to classify the structure the robot perceives and to label it with semantics. To make semantic labeling possible in indoor environments, we make use of some basic assumptions: in a typical indoor environment, such as a household or an office, most structures are rectangular and mostly parallel or orthogonal to each other. The robot has to extract a vector of feature descriptors from the spatial entities in order to compare them with the semantic knowledge database. In a first approach, we define a set of feature vectors able to describe the spatial entities of an environment.
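A sketch of what such a feature vector might look like for a single extracted plane follows. The particular descriptor entries (area, verticality, height, extents) are illustrative assumptions; the exact features are defined in [11]:

```python
import numpy as np

def plane_descriptor(vertices, normal):
    """Hypothetical spatial feature vector for one extracted plane.

    `vertices` is the recovered (convex) boundary polygon in 3D and
    `normal` the plane normal; the chosen entries are for illustration,
    not the exact descriptors used in [11].
    """
    v = np.asarray(vertices, dtype=float)
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Polygon area via the shoelace formula generalized to a 3D plane.
    cross_sum = np.zeros(3)
    for i in range(len(v)):
        cross_sum += np.cross(v[i], v[(i + 1) % len(v)])
    area = 0.5 * abs(np.dot(n, cross_sum))
    verticality = abs(np.dot(n, [0.0, 0.0, 1.0]))  # 1 = horizontal surface
    extents = v.max(axis=0) - v.min(axis=0)        # bounding-box sizes
    height = v[:, 2].mean()                        # mean height above ground
    return np.array([area, verticality, height, *extents])
```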
Fig. 8. Left: The robot AILA perceives the shape of the shelf by the
segmentation of the point cloud, generated by a tilting 2D laser range finder
in the upper part of the body. Right: The shelf as it is perceived by the robot
In our current implementation, we again use a likelihood function in order to deal with uncertainties. For instance, two shapes can be parallel with a certainty of 0.9 due to noise and rounding differences in the extraction process. Using a model based on spatial ontologies, which can be expressed in the language OWL-DL, detected features are then matched using a similarity function. For a detailed description of the semantic labeling approach, refer to [11].
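As an illustration of such a likelihood, the following sketch scores the parallelism of two planes from the angle between their normals with a Gaussian model. The exact function used on AILA is described in [11]; the 5-degree tolerance here is an assumption:

```python
import numpy as np

def parallel_certainty(normal_a, normal_b, sigma_deg=5.0):
    """Soft certainty that two planes are parallel.

    A Gaussian over the angle between the (unsigned) normals; this
    stands in for the likelihood function described in the text and
    is not the exact form used in [11].
    """
    a = normal_a / np.linalg.norm(normal_a)
    b = normal_b / np.linalg.norm(normal_b)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), 0.0, 1.0)))
    return float(np.exp(-0.5 * (angle / sigma_deg) ** 2))

# Two nearly parallel shelf boards, ~2.6 degrees apart: certainty ~0.9.
print(parallel_certainty(np.array([0, 0, 1.0]), np.array([0, 0.045, 1.0])))
```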
E. Vision and Manipulation
The stereo vision system, which is integrated in the
head of the robot, consists of two Prosilica color video
cameras which are used for the perception of AILA’s nearby
operating area, including the arms. On the software side, the
vision module contains algorithms for object recognition and
tracking, scene interpretation, self-perception, and hand-eye
calibration.
For object recognition, we generate textured 3D models of the objects in AILA’s environment, extract global properties as well as local, texture-based features (e.g. SIFT features [16]), and store this information in a model database. During operation, the vision framework continuously matches the features extracted from the observed scene against the database and computes the poses of the recognized objects using 2D-3D point correspondences (see Figure 9).
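The following sketch outlines this recognition pipeline with OpenCV: SIFT features from the current camera image are matched against a hypothetical model database, and the pose is recovered with RANSAC-based PnP. It illustrates the described approach rather than reproducing AILA's actual implementation:

```python
import cv2
import numpy as np

def estimate_object_pose(image, model_descriptors, model_points, camera_matrix):
    """Match scene SIFT features against a model and recover the 6D pose
    from 2D-3D correspondences. `model_descriptors` ((N, 128) float32) and
    `model_points` ((N, 3) float32) are a hypothetical database entry.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 6:
        return None
    image_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
    object_pts = np.float32([model_points[m.trainIdx] for m in good])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        object_pts, image_pts, camera_matrix, distCoeffs=None)
    return (rvec, tvec) if ok else None
```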
Fig. 9. Feature matching and 3D pose estimation
For motion planning of the upper body to grasp and manipulate objects detected by the vision system, the kinematic relationships are defined between the 4 torso joints, the 14 arm joints, and the 2 head joints. For tasks involving dual-arm grasping and manipulation of homogeneous objects (Fig. 10), motion plans are generated for 18 DOFs (arms and torso). Incorporating the torso joints in the motion planning allows the utilization of the robot’s entire reachability space, which is necessary for constrained dual-arm manipulation tasks. OpenRAVE [17] is used as the common platform for integrating the various modules of motion planning, such as kinematics, workcell representation, planning, collision avoidance, trajectory smoothing, and simulation. The planning algorithms are based on the Bi-directional Rapidly-exploring Random Trees approach [18], and additional task-based constraints and criteria are incorporated for dual-arm manipulation. The constraints include maintaining the relative end-effector (EEF) configuration between the two arms, which ensures that the grasped object is not lost while it is being moved. For tasks involving the manipulation of open-top or liquid-filled objects, additional constraints are imposed on the orientation of the object while it is being manipulated. Furthermore, criteria such as minimizing the lengths of the EEF paths are used for selecting the optimal motion plans. While the constraints are enforced during the planning phase, the criteria can be used to select an optimal path from a set of paths or to ask the planner to replan if the criteria are not met.
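The relative EEF constraint can be made concrete with a short sketch: given the grasp-time relative pose T_rel = inv(T_left) @ T_right, the planner rejects samples whose current relative pose deviates beyond a tolerance. This is an illustrative check, not the OpenRAVE constraint code used on the robot:

```python
import numpy as np

def relative_eef_error(T_left, T_right, T_rel_desired):
    """Deviation of the current relative end-effector pose from the
    grasp-time reference T_rel_desired = inv(T_left) @ T_right.

    All inputs are 4x4 homogeneous transforms. A planner can reject
    path samples whose translational or angular deviation exceeds a
    tolerance, so a rigidly dual-arm grasped object is not lost.
    """
    T_rel = np.linalg.inv(T_left) @ T_right
    delta = np.linalg.inv(T_rel_desired) @ T_rel
    trans_err = np.linalg.norm(delta[:3, 3])
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    ang_err = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return trans_err, ang_err  # meters, radians
```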
Fig. 10. OpenRAVE-based simulation of AILA performing a dual-arm
manipulation task
VI. EXPERIMENTS
First experiments have been performed in which AILA made use of its key capabilities: semantic perception for recognising objects in an office environment, autonomous navigation, object recognition and pose estimation, and autonomous dual-arm manipulation. Figure 11 shows snapshots of the sequence for grasping an object using two arms in a laboratory setup. The robot recognised the table, navigated towards it, recognised the object on the table as well as its 3D pose, and planned the manipulation strategy to grasp and manipulate the object using both arms.
VII. CONCLUSIONS/FUTURE WORK
Robots have been and still are great tools for studying artificial intelligence. While in the earlier days of AI, systems like Shakey [19] were designed to study aspects of planning, plan execution monitoring, and navigation, their primary achievement was to carry the sensors (mainly cameras) through the environment that was to be explored. Today we understand that the central questions of AI research can only be answered if the tools we use, the robots, provide a larger set of robotic capabilities. The internal representation of a robot exploring an environment can be massively enriched if, along with a camera and other optical data (laser scanner-based point clouds), internal and proprioceptive data is used. This data can come, for example, from the motors in the kinematic chain of a robot arm that handles objects in the environment, for instance when opening a door. The greater the disposition of robots to interact actively with the environment, the better the database for environmental representation and the enhancement of their strategies for control, navigation, and planning. In this paper, we presented a robotic platform that incorporates multiple degrees of freedom and integrates a large number of sensors to allow a self-evaluation of the robot’s current state and scenario. Such a platform should permit the emergence of learning capabilities by exploiting its immersion in human environments with its own body resources and limitations.

Fig. 11. Preliminary experiments of AILA performing autonomous dual-arm manipulation
REFERENCES
[1] O. Brock and R. Grupen, “NSF/NASA workshop on autonomous mobile manipulation (AMM),” http://robotics.cs.umass.edu/amm, Houston, USA, March 2005.
[2] S. Chitta, B. Cohen, and M. Likhachev, “Planning for autonomous door opening with a mobile manipulator,” in Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2010.
[3] S. Srinivasa, D. Ferguson, C. Helfrich, D. Berenson, A. Collet, R. Diankov, G. Gallagher, G. Hollinger, J. Kuffner, and M. VandeWeghe, “HERB: a home exploring robotic butler,” Autonomous Robots, 2009.
[4] M. Fuchs, C. Borst, P. R. Giordano, A. Baumann, E. Kraemer,
J. Langwald, R. Gruber, N. Seitz, G. Plank, K. Kunze, R. Burger,
F. Schmidt, T. Wimboeck, and G. Hirzinger, “Rollin’ Justin - design
considerations and realization of a mobile platform for a humanoid
upper body,” in ICRA’09: Proceedings of the 2009 IEEE international
conference on Robotics and Automation. Piscataway, NJ, USA: IEEE
Press, 2009, pp. 1789–1795.
[5] D. Katz, E. Horrell, O. Yang, B. Burns, T. Buckley, A. Grishkan, V. Zhylkovskyy, O. Brock, and E. Learned-Miller, “The UMass mobile manipulator UMan: An experimental platform for autonomous mobile manipulation,” in Workshop on Manipulation in Human Environments at Robotics: Science and Systems, 2006.
[6] G. Hirzinger, N. Sporer, A. Albu-Schaffer, M. Hahnle, R. Krenn,
A. Pascucci, and M. Schedl, “DLR’s torque-controlled light weight
robot III - are we reaching the technological limits now?” in IEEE
International Conference on Robotics and Automation (ICRA)., vol. 2,
2002, pp. 1710–1716.
[7] C. Ott, O. Eiberger, W. Friedl, B. Bauml, U. Hillenbrand, C. Borst,
A. Albu-Schaffer, B. Brunner, H. Hirschmuller, S. Kielhofer, R. Koni-
etschke, M. Suppa, T. Wimbock, F. Zacharias, and G. Hirzinger, “A
humanoid two-arm system for dexterous manipulation,” in 6th IEEE-
RAS International Conference on Humanoid Robots, Dec. 2006, pp.
276–283.
[8] J. Hilljegerdes, P. Kampmann, S. Bosse, and F. Kirchner, “Develop-
ment of an intelligent joint actuator prototype for climbing and walking
robots,” in International Conference on Climbing and Walking Robots
(CLAWAR-09), 2009, pp. 942–949.
[9] S. Bartsch, T. Birnschein, F. Cordes, D. Kühn, P. Kampmann, J. Hilljegerdes, S. Planthaber, M. Römmermann, and F. Kirchner, “SpaceClimber: Development of a six-legged climbing robot for space exploration,” in Proceedings for the Joint Conference of ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics). VDE Verlag GmbH, June 2010.
[10] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, “ROS: an open-source robot operating system,” in Proc. of IEEE International Conference on Robotics and Automation (ICRA), 2009.
[11] M. Eich, M. Dabrowska, and F. Kirchner, “Semantic labeling: Classification of 3D entities based on spatial feature descriptors,” in Workshop on Best Practice Algorithms in 3D Perception and Modeling for Mobile Manipulation, IEEE International Conference on Robotics and Automation (ICRA-10), Anchorage, May 2010.
[12] M. Eich and F. Kirchner, “Reasoning about geometry: An approach using spatial-descriptive ontologies,” in Workshop AILog, 19th European Conference on Artificial Intelligence (ECAI-10), Lisbon, August 2010.
[13] M. Eich, M. Dabrowska, and F. Kirchner, “3D scene recovery and spatial scene analysis for unorganized point clouds,” in Proceedings of the 13th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR-10), Nagoya, Japan, September 2010.
[14] T. Rabbani, F. van Den Heuvel, and G. Vosselmann, “Segmentation of
point clouds using smoothness constraint,” in International Archives
of Photogrammetry, Remote Sensing and Spatial Information Sciences,
vol. 36, no. 5. Citeseer, 2006, pp. 248–253.
[15] W. Shen, “Building boundary extraction based on lidar point clouds
data,” in ISPRS08. ISPRS, 2008, p. 157.
[16] D. G. Lowe, “Object recognition from local scale-invariant features,” in The Proceedings of the Seventh International Conference on Computer Vision, vol. 2, 1999, pp. 1150–1157.
[17] R. Diankov and J. Kuffner, “OpenRAVE: A planning
architecture for autonomous robotics,” Robotics Institute,
Tech. Rep. CMU-RI-TR-08-34, July 2008. [Online]. Available:
http://openrave.programmingvision.com
[18] J. Kuffner and S. LaValle, “RRT-connect: An efficient approach to
single-query path planning,” in Proc. IEEE International Conference
on Robotics and Automation (ICRA), April 2000, pp. 995–1001.
[19] N. J. Nilsson, C. A. Rosen, B. Raphael, G. Forsen, L. Chaitin, and
S. Wahlstrom, “Application of intelligent automata to reconnaissance,”
Stanford Research Institute, Tech. Rep., December 1968, project 5953
Final Report, From the Nilsson archives, SHAKEY papers.