Toward Efficient Robot Teach-In and Semantic
Process Descriptions for Small Lot Sizes
Alexander Perzylo, Nikhil Somani, Stefan Profanter, Markus Rickert, Alois Knoll
fortiss GmbH, An-Institut Technische Universität München, Munich, Germany
Abstract—We present a novel robot programming methodology
that is aimed at reducing the level of robotics expert knowledge
needed to operate industrial robotic systems by explicitly mod-
eling this knowledge and abstracting it from the user.
Most of the current robot programming paradigms are either
user-centric and fully-specify the robot’s task to the lowest
detail (used mostly in large industrial robotic systems) or fully
autonomous solutions that generate the tasks from a problem
description (used often in service and personal robotics). We
present an approach that is user-centric and can interpret
underspecified robot tasks. Such task descriptions make the
system amenable for users that are experts in a particular
domain, but have limited knowledge about robotics and are thus
not able to specify low-level details and instructions. Semantic
models for all involved entities enable automatic reasoning about
underspecified tasks and missing pieces of information.
We demonstrate this approach on an industrial assembly use-
case and present a preliminary evaluation—both qualitatively
and quantitatively—vis-à-vis state-of-the-art solutions available
from industrial robot manufacturers.
I. INTRODUCTION
After a long Fordist period of industrial production, there
has been a trend in some parts of today's industry toward
individualized products. Additionally, robot-based industrial
automation increasingly diffuses into small and medium-
sized enterprises (SMEs). In both cases, manufacturers have to
adapt their production processes for small lot sizes and a
high number of product variants. Hence, they require their
robot systems to allow for rapid changeovers and efficient
teaching. In this context, commercial viability of automated
production is highly influenced by the time required to teach
new processes and to adapt existing processes to variations of
a product. However, most SMEs cannot build the necessary
expertise in robotics in-house and have to rely on system
integrators. This drastically reduces the usability of robot
systems for small batch assembly.
Classical teaching concepts for robot systems offer ded-
icated robot programming languages, which are difficult to
learn and far from intuitive. Human operators have to under-
stand and use concepts like Euler angles and work with raw
Cartesian coordinates, e.g., in order to define a grasp pose. An
important shortcoming of those programming languages is the
lack of semantic meaning. For instance, from looking at such
robot programs it is not possible to see what kind of object is
being manipulated or to easily keep track of the structure of
the overall process.
Fig. 1. Toward efficient robot programming: moving from (a) a classical approach using the teach pendant and low-level programming languages to (b) efficient robot programming using high-level semantic process descriptions.
A more intuitive teaching paradigm is teaching through
manual guidance. Based on physical interaction of the operator
with the robot, robots can be quickly moved and the resulting
poses can be stored and re-used. It spares the human operator
the cumbersome effort of teaching positions through jogging
the robot via a teach pendant. Obviously, this approach is only
feasible for small robots. Besides, the resulting robot program
still does not know anything about the operator’s intention.
Industrial solutions such as Delmia or Process Simulate
offer sophisticated robot programming interfaces that scale to
a full production line with multiple robots. However, they
require the programmer to specify robot tasks on a low
level that requires expertise in robotics, thereby making these
solutions difficult to use for untrained shop floor workers. The
operators typically have to train for several weeks to learn
how to use these interfaces. Even small changes in the robot
program require significant programming effort.
In contrast to this, research—particularly in the domain
of service robotics—is being carried out to develop fully
autonomous cognitive robotic systems that are able to trans-
late high-level task descriptions to appropriate robot actions.
These systems are able to perceive their environment and to
reason about the implications of their tasks. Many of them
provide natural language interfaces, e.g., by exploiting task
descriptions from wikis or by interpreting speech input from
the operator. However, as of now, these systems have not had
a real impact on industrial applications.
In this paper, we introduce a concept tailored to intuitive
teaching of tasks for industrial robotic workcells (Fig. 1)
by utilizing proven techniques from the field of cognitive
robotics. The operator is still in charge of controlling most
aspects, whereas the cognitive capabilities of the robot system
are used to increase the efficiency in human-robot communi-
cation. Compared to classical approaches, it presupposes less
knowledge about the system. As a result, the required time
to train a human worker can be significantly reduced. The
system may resort to previously modeled knowledge about
certain industrial domains, processes, interaction objects, and
the workcell itself. Hence, the communication between the
operator and the robot system can be lifted to a more abstract
and semantically meaningful level, where the operator can talk
about, e.g., which object to pick instead of specifying raw
coordinates.
II. RELATED WORK
The quest for an easier, more intuitive, and more automated
teaching process for robots dates back more than 40 years:
in 1972, a prototype was developed that was able to
assemble simple objects from plan drawings using a vision
system [8]. Another early approach defined constraints be-
tween two objects using planar and cylindrical matching and
developed an offline object level language for programming
robot assembly tasks [1, 5]. RALPH [11] is a similar system
which uses information from CAD drawings to automatically
generate a process plan for the manipulator to perform the task.
A hierarchical graph-based planning algorithm for automatic
CAD-directed assembly is described in [9]. These classical
systems are not human-centric, i.e., the robot program is
generated automatically, leaving a human worker no possibility
to alter or modify the proposed assembly. They also
require accurate models to perform automatic plan generation.
Additionally, the drawings or CAD models do not include
any semantic description that would allow further reasoning about their
properties, e.g., whether a cylindrical part is the thread of a screw
and needs to be fastened during assembly.
Pardo-Castellote [13] described a system for programming
a dual arm robot system through a graphical user interface.
Despite being one of the more advanced systems at the time
of publishing, it only supported simple pick and place actions
and did not allow in-air assembly or, for example, horizontally
placing a cylindrical object into a corresponding hole.
In recent years, there have been research efforts toward
teaching robot programs by observing human demonstra-
tion [10, 7, 18]. They rely on a predefined set of speech
commands and demonstrated actions, or require various different
sensors to detect human motions. Additional effort is
required to teach human workers how to demonstrate tasks
in a compatible way. Sensor requirements reduce the mobility
of such systems since all sensors need to be set up in each
working area.
A comprehensive overview of programming methods for
industrial robots until the year 2010, including online (operator
assisted, sensor guided), offline programming (CAD data),
and augmented reality is presented in [12]. Current research
mainly focuses on task-level programming that requires only a small library of predefined lower-level building blocks, called skills [4, 14, 15].

Fig. 2. Overview of semantic models and interrelations. Object models may serve as parameters for certain tasks specified in process models. A workcell model relies on object models for describing its entities. For executing process models, they are deployed on workcells exploiting the information in their workcell models.
With the rapid growth of the World Wide Web
and widespread availability of knowledge, Knowledge
Bases gained increasing importance for intuitively teaching
robots [3]. RoboEarth [17] and RoboHow [16] developed
robotic systems that use semantic descriptions to share knowl-
edge between different robots. Using Knowledge Bases for
defining additional skill parameters simplifies the whole teach-
ing process even further [15, 2].
III. TOWARD A NATURAL TEACHING PARADIGM
Current industrial robotic systems require the user to be
an expert not only in the application domain but also at
robot programming. The key motivation in our approach is
to substantially reduce the level of robotics expertise required
to use such systems to a level where a shop floor worker with
minimal knowledge about robotics can instruct, interact with,
and operate the system. In this programming paradigm, the
robotics and domain specific knowledge is modeled explicitly
(Fig. 2) and the system needs to be able to understand,
interpret, and reason about it. We have chosen the Web
Ontology Language (OWL) for this knowledge representation,
primarily because OWL is based on a formal specification, i.e.,
a description logic, which facilitates logical inference. Another
advantage is the ease with which additional knowledge (e.g.,
new workpieces) can be added to the system. This facilitates
the separation of knowledge and code, thus enabling addition
of information without changing the implementation.
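As a rough sketch of this separation of knowledge and code (not the authors' actual implementation; we use the Python rdflib library and hypothetical class and property names), a new workpiece can be introduced purely as additional OWL triples, while the code that queries the knowledge base stays unchanged:

```python
# Minimal sketch (not the paper's implementation): adding a new workpiece to an
# OWL knowledge base at runtime. Namespace, class, and property names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/workcell#")

g = Graph()
g.bind("ex", EX)

# Class-level knowledge: a small taxonomy of workpieces.
g.add((EX.Workpiece, RDF.type, OWL.Class))
g.add((EX.GearboxPipe, RDF.type, OWL.Class))
g.add((EX.GearboxPipe, RDFS.subClassOf, EX.Workpiece))

# Instance-level knowledge: a concrete part present in the workcell.
g.add((EX.MechanicalPipe1, RDF.type, EX.GearboxPipe))
g.add((EX.MechanicalPipe1, EX.hasWeightInKg, Literal(0.35)))

# Further workpieces are added as further triples; the executing code that
# queries the graph does not need to change.
print(g.serialize(format="turtle"))
```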
A key concept in this design is programming at object level.
In this approach, the user specifies robot tasks in terms of the
objects involved and the relevant parameters. This is in contrast
to the traditional teach pendant-based approaches where tasks
are specified in terms of raw coordinate frames. While there
exist some approaches that involve task specification in terms
of the coordinate frames attached to an object [6], they are
restricted to coordinate frames only.
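To make the contrast concrete, the following minimal sketch (hypothetical Python types, not the paper's interface) juxtaposes a frame-level pick specification with an object-level one, in which poses and tools are left for the system to infer:

```python
# Illustrative contrast only; identifiers are hypothetical, not the paper's API.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FrameLevelPick:
    """Classical teach-pendant style: raw Cartesian position plus Euler angles."""
    position_m: Tuple[float, float, float]
    orientation_rpy_rad: Tuple[float, float, float]

@dataclass
class ObjectLevelPick:
    """Object-level style: refer to the semantic object; poses are inferred."""
    object_name: str                 # e.g. "MechanicalPipe1" from the object model
    tool_name: Optional[str] = None  # optional; can be inferred from the workcell model

classic = FrameLevelPick(position_m=(0.42, -0.13, 0.05),
                         orientation_rpy_rad=(3.14, 0.0, 1.57))
semantic = ObjectLevelPick(object_name="MechanicalPipe1")
print(classic)
print(semantic)
```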
In several manufacturing domains, especially assembly,
products are designed by domain experts using specialized
CAD software.

Fig. 3. Excerpt of task taxonomy. An abstract task description might have multiple actual implementations, i.e., robot or tool skills.

In this process, the information and ideas that the product designer had in mind while designing the
product are lost and only the finished CAD model of the
product is sent to manufacturing. In contrast to this, we aim to
model, store, and exploit this information in order to ease the
programming of the manufacturing process. The first step in
this direction—most suitable for assembly tasks—is to include
in the product description not only the final CAD model,
but also the semantically relevant geometrical entities in it,
the constraints between these geometrical entities, and the
individual assembly steps that the designer took in designing
the final product.
IV. SEMANTIC PROCESS MODELS
Semantic process models are descriptions of the steps
required for manufacturing a product, built by arranging tasks
that have been hierarchically defined in a potentially con-
strained order. Each of these tasks is an object-level description
of an individual step in the process, which might be underspecified
and have pre- and post-conditions. Due to this abstract
description, they can be understood and executed not only by
a machine but also by a human operator. Robotic systems pro-
vide corresponding low-level implementations (called skills)
that require a complete parametrization with actual numerical
values.
We have implemented a taxonomy of industrial task types
to deal with different industrial domains, e.g., woodworking,
welding, and assembly (Fig. 3). Every type of task has a
certain set of parameters that are required to be set by the
user during the Teach-In phase. There are optional parameters
which can either be specified by the operator or automatically
inferred by the system upon deployment.
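The sketch below illustrates how such required and optional task parameters could be represented and checked; the parameter and task names follow the assembly example, but the data structures are our own illustration rather than the paper's ontology:

```python
# Sketch under assumptions: a task type declares required and optional parameters;
# unspecified optional parameters are left for deployment-time inference.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskType:
    name: str
    required_params: List[str]
    optional_params: List[str]

ASSEMBLY = TaskType(
    name="AssemblyTask",
    required_params=["objectToPick", "objectToPlaceOn", "endPoseConstraints"],
    optional_params=["pickTool", "placeTool"],
)

@dataclass
class TaskInstance:
    task_type: TaskType
    params: Dict[str, object] = field(default_factory=dict)

    def missing_required(self) -> List[str]:
        return [p for p in self.task_type.required_params if p not in self.params]

    def unresolved_optional(self) -> List[str]:
        # parameters a reasoner may fill in upon deployment
        return [p for p in self.task_type.optional_params if p not in self.params]

t = TaskInstance(ASSEMBLY, {"objectToPick": "MechanicalPipe1",
                            "objectToPlaceOn": "MechanicalTree1",
                            "endPoseConstraints": ["ConcentricityConstraint1"]})
print(t.missing_required())     # [] -> teach-in is complete
print(t.unresolved_optional())  # ['pickTool', 'placeTool'] -> inferred later
```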
Fig. 4 visualizes an excerpt of a semantic process model
that describes the process of assembling the core part of a
gearbox as shown in Fig. 5. It shows the class level concept
GearboxAssembly and its instantiation GearboxAssembly 324.
Every attempt at executing the assembly would result in an
additional instantiation of the class level description of the
task. This makes it possible to log all relevant information collected
during execution, e.g., in case of an anomaly we can determine
which task in which process instance failed, when it happened,
and due to what error.

Fig. 4. Process model for the 3-step assembly of the core part of a gearbox. Boxes featuring a yellow circle or a purple rhombus represent classes and instances of those classes, respectively.

(a) Exploded view (b) Assembly steps
Fig. 5. Industrial use-case based on assembling four work pieces in three steps to form the core part of a gearbox.

The GearboxAssembly 324 process consists of three Assembly tasks, i.e., AssembleBearingTreeTask, AssembleBearingPipeTask, and AssemblePipeTreeTask.
The order of these tasks is not yet fully specified. Instead,
three partial ordering constraints have been specified, which
assert that the task instance associated with PartialOrderingConstraint3
has to succeed the tasks associated with
PartialOrderingConstraint2 and PartialOrderingConstraint1. No
further restrictions have been modeled, which results in the
order of the two tasks AssembleBearingTreeTask and Assem-
bleBearingPipeTask not being constrained. The AssembleBear-
ingPipeTask links two interaction objects MechanicalPipe1
and MechanicalTree1 as its parameters. The geometric con-
straints specifying the assembly pose have been defined on a
sub-object level between single faces of the two object models
as explained in Section V-B.
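One way to resolve such partial ordering constraints into an executable sequence is a plain topological sort, sketched below with the gearbox task names; the paper does not prescribe this particular procedure, so the code is illustrative only:

```python
# Illustrative sketch: deriving one valid execution order from partial ordering
# constraints via topological sorting (graphlib requires Python 3.9+).
from graphlib import TopologicalSorter

# "earlier must precede later" edges: AssemblePipeTreeTask has to succeed
# the other two assembly tasks; their mutual order stays unconstrained.
precedes = {
    "AssembleBearingTreeTask": ["AssemblePipeTreeTask"],
    "AssembleBearingPipeTask": ["AssemblePipeTreeTask"],
}

ts = TopologicalSorter()
for earlier, laters in precedes.items():
    for later in laters:
        ts.add(later, earlier)  # 'later' depends on 'earlier'

print(list(ts.static_order()))
# e.g. ['AssembleBearingTreeTask', 'AssembleBearingPipeTask', 'AssemblePipeTreeTask']
```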
V. SEMANTIC OBJECT MODELS
Object models form one of the key pillars of our proposed
robot programming approach. They are modeled in a hierarchi-
cal fashion, wherein properties of generic object types can be
re-used in the more specific ones. Process descriptions refer to
objects and their properties as parameters for each robot task.
The requirements of a task can help filter the type of object
suitable for it.
Simple object properties include information such as the
name, weight, pose, or material. Additional information such
as specialized appearance models that can be exploited by
computer vision modules to detect the objects in a workcell
can also be linked to it. Basic geometric properties include the bounding box, the corresponding dimensions, and the polygon mesh used for rendering the object.

Fig. 6. Upper taxonomy of geometric interrelation constraints.
Most classical approaches rely only on basic geometric
information. In our approach, we aim to preserve all relevant
information produced while designing the object. This includes
the CAD model used for creating the polygon mesh. We
support geometric representations at multiple levels of detail,
from points and coordinate frames, to semantically meaningful
entities such as lines and circles or planes and cylinders.
Constraints between these geometric entities can also be used
to describe parameters for robot tasks in an intuitive way.
A. Boundary Representation of Objects
A Boundary Representation (BREP) of CAD data describes
the geometric properties of points, curves, surfaces and vol-
umes using mathematical models as its basis. CAD models are
created by defining boundary limits to given base geometries.
The BREP specification distinguishes geometric and topolog-
ical entities, as illustrated in Fig. 7. Geometric entities hold
the numerical data, while the topological entities group them
and arrange them in a hierarchical fashion.
1) Topological Entities: The BREP standard specifies eight
kinds of topological entities that are connected through prop-
erties which resemble the relations depicted in Fig. 7: Vertex,
Edge, Face, Wire, Shell, Solid, CompSolid, and Compound.
Only Vertices, Edges, and Faces have direct links to geometric
entities. A Vertex is represented by a point. An Edge is
represented by a curve and bounded by up to two Vertices.
Fig. 7. Illustration of the basic BREP structure. Its data model comprises geometric entities (Point, Curve, Surface) and topological entities (the rest).
A Wire is a set of adjacent Edges. When the Edges of a
Wire form a loop, the Wire is considered to be closed. A
Face is represented by a surface and bounded by a closed
Wire. A Shell is a set of adjacent Faces. When the Faces of a
Shell form a closed volume, the Shell can be used to define a
Solid. Solids that share common Faces can be grouped further
into CompSolids. Compounds are top-level containers and may
contain any other topological entity.
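A minimal sketch of this topology hierarchy is given below, purely for illustration; the authors model these entities in OWL rather than in code, and the naive closed-wire check is our own simplification:

```python
# Rough sketch of the BREP topology described above (Vertex/Edge/Wire/Face);
# data layout and helper logic are illustrative, not the paper's data model.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Vertex:
    point: Tuple[float, float, float]     # geometric entity: Point

@dataclass
class Edge:
    curve: str                            # reference to a geometric Curve model
    bounded_by: List[Vertex] = field(default_factory=list)  # up to two Vertices

@dataclass
class Wire:
    edges: List[Edge] = field(default_factory=list)

    def is_closed(self) -> bool:
        # naive check: every bounding Vertex is shared by exactly two Edges
        counts = {}
        for e in self.edges:
            for v in e.bounded_by:
                counts[id(v)] = counts.get(id(v), 0) + 1
        return bool(counts) and all(c == 2 for c in counts.values())

@dataclass
class Face:
    surface: str                          # reference to a geometric Surface model
    bounded_by: Optional[Wire] = None     # must be a closed Wire

v1, v2, v3 = Vertex((0, 0, 0)), Vertex((1, 0, 0)), Vertex((0, 1, 0))
w = Wire([Edge("line", [v1, v2]), Edge("line", [v2, v3]), Edge("line", [v3, v1])])
print(w.is_closed())  # True -> this Wire may bound a Face
```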
2) Geometric Entities: The topological entities may link to
three types of geometric entities, which are Points, Curves,
and Surfaces. They represent 0-, 1-, and 2-dimensional ge-
ometries respectively. Curves and Surfaces are defined through
parameterizable mathematical models. Supported curve types
can be categorized as unbounded curves (e.g., lines, parabolas,
or hyperbolas) and bounded curves (e.g., Bezier curves, B-
spline curves, circles, or ellipses). Offset curves represent a
translated version of a given base curve along a certain vector,
whereas trimmed curves bound a given base curve by limiting
the minimum and maximum parameters of their mathematical
model. In case the exact model is unknown, a curve might
also be approximated by a polygon on triangulated data. The
geometric representation of an Edge can be specified by a 3D
curve, or a 2D curve in the parameter space of each surface
that the Edge belongs to.
Surfaces rely on unbounded mathematical models (e.g.,
planes, cones, or cylindrical surfaces) and bounded models
(e.g., Bezier surfaces, B-spline surfaces, spheres, or toruses).
Surfaces can also be defined as linearly extruded curves. An
offset surface translates a base surface along a given vector,
and a revolved surface is created by rotating a given base
curve around a given direction vector. Again, if the exact math-
ematical model of a surface is unknown, an approximation
based on triangulation might be specified.
B. Geometric Constraints between Objects
Given the rich semantic description of CAD models, it is
possible to refer to arbitrary parts of a CAD model and to
link them with additional information. Fig. 6 shows the upper
taxonomy of the geometric constraints that have been designed
to represent assembly constraints. Those constraints are meant
to be specified between points, curves, and surfaces of objects.
In our representation, a geometric constraint refers to two
geometric entities: a base entity with a defined pose and a
constrained entity whose pose depends on the base entity and
the constraint itself.
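As a numeric illustration of one constraint type from this taxonomy, the sketch below checks a concentricity constraint between two cylindrical surfaces (base entity fixed, constrained entity to be placed); the paper represents such constraints symbolically in the ontology, so this is only an assumed reading:

```python
# Minimal numeric sketch of a concentricity constraint between two cylinders.
# Purely illustrative; names and the axis-based test are our own simplification.
import numpy as np

class Cylinder:
    def __init__(self, axis_point, axis_dir, radius):
        self.p = np.asarray(axis_point, dtype=float)
        self.d = np.asarray(axis_dir, dtype=float)
        self.d /= np.linalg.norm(self.d)
        self.radius = radius

def concentric(base: Cylinder, constrained: Cylinder, tol: float = 1e-6) -> bool:
    """True if the constrained cylinder's axis is collinear with the base axis."""
    parallel = np.linalg.norm(np.cross(base.d, constrained.d)) < tol
    offset = constrained.p - base.p
    on_axis = np.linalg.norm(np.cross(base.d, offset)) < tol
    return parallel and on_axis

bore = Cylinder((0, 0, 0), (0, 0, 1), radius=0.010)        # base entity (fixed pose)
shaft = Cylinder((0, 0, 0.05), (0, 0, -1), radius=0.010)   # constrained entity
print(concentric(bore, shaft))  # True -> assembly pose satisfies the constraint
```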
Fig. 8. Excerpt of a semantic workcell model. From left to right, this
illustration shows the class concept of a workcell and a particular instan-
tiation of it, called D1Workcell. In this example, a rigidly attached RGBD
sensor, work table, and two robots are associated with the workcell through
FixedJoint instances. As an example, the visualization shows the robot class
(ComauSmart5SixOne) and instance (comau-left-r1) of one of the two robots.
VI. SEMANTIC WORKCELL MODELS
A workcell model describes the physical setup of the
workcell, including robots, tools, sensors, and available skills.
For instance, the dual-arm robot workcell depicted in Fig. 1
contains two robot arms, a work table, and an RGBD sen-
sor. Fig. 8 shows an excerpt of the corresponding semantic
workcell description. It asserts the Workcell instance called
D1Workcell that links to its contained entities. These links are
represented through FixedJoint instances. In order to specify
the poses of the sensor, table, and robot bases with respect
to the workcell origin, the FixedJoints refer to instances of
RigidTransformationMatrix.
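A possible in-memory counterpart of such a workcell description is sketched below, with fixed joints carrying homogeneous transformation matrices; the entity names follow the example in Fig. 8, while the data layout and numeric values are hypothetical:

```python
# Illustrative sketch of a workcell model: entities attached to the workcell
# origin via fixed joints that carry rigid transformations. Values are made up.
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

workcell = {
    "name": "D1Workcell",
    "fixed_joints": {
        "comau-left-r1": rigid_transform(np.eye(3), [0.0, 0.60, 0.0]),
        "comau-right-r1": rigid_transform(np.eye(3), [0.0, -0.60, 0.0]),
        "rgbd-sensor": rigid_transform(np.eye(3), [0.80, 0.0, 1.50]),
        "work-table": rigid_transform(np.eye(3), [0.80, 0.0, 0.75]),
    },
}

# Express a point detected in the sensor frame in workcell coordinates.
T_workcell_sensor = workcell["fixed_joints"]["rgbd-sensor"]
p_sensor = np.array([0.1, 0.0, 0.5, 1.0])   # homogeneous point in sensor frame
p_workcell = T_workcell_sensor @ p_sensor
print(p_workcell[:3])
```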
VII. TEACHING INTERFACE
Having to manipulate the semantic process descriptions
manually would only shift the required expert knowledge
for using the robot system from the domain of robotics to
the domain of knowledge engineering. Therefore, we include
a graphical user interface for the human operator based on
HTML5 and JavaScript that abstracts from the semantic
modeling language and can run in any modern web browser.
Fig. 9 depicts the main view of the GUI showing a process
plan consisting of six tasks and the parameters of the selected
task. Parameters objectToPick and objectToPlaceOn can be
set through selecting the desired objects from a list (or other
modalities that are not described in this paper). pickTool and
placeTool are optional parameters, which might be used to
specify compatible tools for grasping the two objects involved
in the assembly. Parameter endPose defines the assembly pose
and can be set through a dedicated 3D interface. In the example
shown in Fig. 10, two cylindrical surfaces have been selected
and a concentricity constraint specified.
VIII. EXECUTION FRAMEWORK
The process plan designed using the intuitive teaching in-
terface is an underspecified, hardware-independent description
of the robot’s tasks. In order to execute the task on a specific
workcell, the tasks need to be fully specified and mapped
to executable actions for the robot. As an example, the tool
for grasping an assembly object is an optional parameter in
the task specification interface. The appropriate tool can be automatically selected by matching the set of available tools in the workcell to the ones suitable for manipulating the object.

Fig. 9. Snippet of the relevant subpart of the teaching interface that acts as a frontend to the semantic process descriptions.

Fig. 10. Mode of the intuitive teaching interface for defining geometric constraints between two objects. As the two objects are described in a boundary representation, each of the two highlighted cylindrical surfaces can be selected with a single click. Geometric constraints can be chosen from the list on the right. A PlanePlaneCoincidenceConstraint (Fig. 6) has already been specified and is visualized in the bottom left corner of the GUI.
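The tool-inference step described above can be thought of as intersecting the workcell's tool inventory with the tools the object model declares compatible; the sketch below is an assumed, simplified reading with hypothetical names:

```python
# Illustrative sketch of the tool-inference step: intersect the tools available in
# the workcell model with those the object model marks as compatible.
def infer_grasp_tool(object_name, object_models, workcell_tools):
    compatible = set(object_models[object_name]["compatibleTools"])
    candidates = compatible & set(workcell_tools)
    if not candidates:
        raise ValueError(
            f"No tool in the workcell can grasp '{object_name}'; "
            "the task cannot be deployed here.")
    return sorted(candidates)[0]   # any deterministic choice will do

object_models = {"MechanicalPipe1": {"compatibleTools": ["ParallelGripper", "VacuumGripper"]}}
workcell_tools = ["ParallelGripper", "Screwdriver"]
print(infer_grasp_tool("MechanicalPipe1", object_models, workcell_tools))  # ParallelGripper
```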
Once completely specified, the assembly task is mapped
to a set of primitive actions (e.g., move, open/close gripper
commands) that are offered by the robot controller interface.
The system can reason if this mapping is even possible for a
specific task on a specific workcell and provide the user with
appropriate feedback. A world model component maintains the
state of all involved entities in the world, including instances
of objects, tools, robots, etc. In our experimental setup, poses
of detected objects in the workcell are obtained from the vision
system and updated in the world model.
Once the task execution begins, monitoring can be done
at different levels of abstraction. The individual actions (e.g.,
open/close gripper) might fail, or the task as a whole might not
be possible due to unmet pre-conditions (e.g., assembly object
missing). In either case, the error message that is generated
contains semantically meaningful information for the user.
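Putting these pieces together, a deployment step could expand a fully parameterized assembly task into primitive actions and report unmet preconditions with semantically meaningful messages; the following sketch is illustrative only and does not reflect the authors' controller interface:

```python
# Rough sketch of deployment and monitoring as described above: a fully
# parameterized assembly task expands into primitive controller actions, and an
# unmet precondition yields a semantically meaningful error. Names are illustrative.
def expand_assembly_task(task, world_model):
    obj = task["objectToPick"]
    target = task["objectToPlaceOn"]
    # precondition: both work pieces must currently be perceived in the workcell
    for o in (obj, target):
        if o not in world_model["object_poses"]:
            raise RuntimeError(f"Precondition failed for {task['name']}: "
                               f"object '{o}' not found in the workcell")
    return [
        ("move_to", world_model["object_poses"][obj]),
        ("close_gripper", task["pickTool"]),
        ("move_to", world_model["object_poses"][target]),
        ("apply_constraints", task["endPoseConstraints"]),
        ("open_gripper", task["pickTool"]),
    ]

world_model = {"object_poses": {"MechanicalPipe1": (0.4, 0.1, 0.05),
                                "MechanicalTree1": (0.4, -0.2, 0.05)}}
task = {"name": "AssembleBearingPipeTask", "objectToPick": "MechanicalPipe1",
        "objectToPlaceOn": "MechanicalTree1", "pickTool": "ParallelGripper",
        "endPoseConstraints": ["ConcentricityConstraint1"]}
for action in expand_assembly_task(task, world_model):
    print(action)
```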
IX. EVALUATION
In order to evaluate our teaching concept, we carried out a
pre-study with one test person. The robotic task used in this
evaluation is the partial assembly of a gearbox (Fig. 5) using an industrial dual-arm robot system.

Fig. 11. Snippet of a robot program created using the teach pendant, which is hard to understand, especially without the programmer's comments.

The gearbox comprises four
parts which have to be assembled in three steps (Fig. 5(b)).
Each assembly step requires sub-millimeter precision, making
it an ideal use-case for a precise industrial robot.
The subject first followed the classical approach of pro-
gramming the task in a robot programming language using a
combination of teach pendant and PC. The second attempt was
made on a touch screen showing the graphical user interface of
our cognitive teaching framework. This comparison is biased
against our system as the test person has several years of
experience in using the teach pendant and robot programming
language, but only received a 10-minute introduction to the
intuitive framework right before he had to use it.
A video demonstrating our intuitive interface and the
results of this comparison can be found online at
http://youtu.be/B1Qu8Mt3WtQ.
A. Classical Teaching
The program for performing this robotic task is fairly
complex, involving numerous robot movement commands and
synchronization between the two robot arms (Fig. 11). Hence,
the skeleton for the program was created on a PC and the
specific robot poses for each movement were obtained by
jogging the robot using a teach pendant (Fig. 1(a)). Given the
precision required by the assembly task, the parts needed to be
placed into fixtures and the robot poses fine-tuned accordingly.
B. Intuitive Teaching
The process programmed using the intuitive teaching in-
terface is shown in Fig. 9. It consists of six steps, of which
three are assembly tasks parametrized using the BREP mating
interface. We employ a simple CAD model-based vision sys-
tem using a Kinect-like camera placed on top of the working
table (at a distance of ~1 m) to recognize objects, providing a
positioning accuracy of ~1 cm. There are two centering steps
that use the imprecise object poses obtained from the vision
system and center the parts between the gripper fingers of a
parallel gripper, thereby reducing their pose uncertainties.
C. Pre-Study Results
The experiments have been carried out in the same industrial
dual-arm robot workcell (Fig. 1). They investigated the times
required to program the assembly process from scratch, to
adjust the object poses in the existing program, and to adjust
the robots’ approach poses for the objects.
TABLE I
REQUIRED TIMES FOR ACCOMPLISHING VARIOUS PROGRAMMING TASKS FOR THE CLASSICAL AND THE INTUITIVE APPROACH.

Task                  | Classical (min) | Intuitive (min) | Time saved (%)
Full assembly         | 48              | 8               | 83
Adjust object poses   | 23              | 0               | 100
Adjust approach poses | 5               | 2               | 60
Comparing the times required to teach the full assembly
using the two different methods shows a time-saving of 83 %
for the intuitive teaching approach (Table I). Updating the pre-
existing robot program to reflect changes in the work pieces’
positions took 23 min for the classical teaching approach,
whereas the intuitive teaching framework’s vision system
naturally coped with the changes automatically. Adjusting the
approach poses for the work pieces was accomplished in
5 min on a teach pendant, whereas it required 2 min using the
intuitive teaching interface.
While the programmer decided to use precise positioning by
jogging the robot using the classical approach, this option was
not available for our intuitive approach. With a more precise
vision system, the additional centering tasks would not be
necessary. This centering technique could have also been used
for the classical approach. However, as can be seen from the
second comparison (Table I), fine tuning object poses takes
much more time than programming centering tasks.
X. CONCLUSION
The pre-study indicates the potential of semantically mean-
ingful and object-centric robot programming, as compared
to classical methods. We demonstrated that domain-specific
interfaces in a cognitive system with domain knowledge ease
the tedious task of programming a robot to a large extent.
By programming at the object level, the low-level details
pertaining to robot execution did not have to be programmed.
The resulting process plan was more concise, readable, and
reusable. The communication between the operator and the
robot system could be lifted to a more abstract level, relying on
previously available knowledge on both sides. This enables a
new operator to use the system with minimal training.
The preliminary results from our pre-study are very promis-
ing and we plan to perform additional evaluations and full-
scale user studies in the future, which will provide a more
holistic assessment of the proposed robot teaching approach.
ACKNOWLEDGEMENTS
We would like to thank Comau Robotics for providing the
industrial robot workcell used in our experiments and Alfio
Minissale for his support and participation in the evaluation.
The research leading to these results has received funding
from the European Union Seventh Framework Programme
(FP7/2007-2013) under grant agreement no. 287787 in the
project SMErobotics.
REFERENCES
[1] A.P. Ambler and R.J. Popplestone. Inferring the positions
of bodies from specified spatial relationships. Artificial
Intelligence, 6(2):157–174, June 1975.
[2] Stephen Balakirsky and Andrew Price. Implementation
of an ontology for industrial robotics. Standards for
Knowledge Representation in Robotics, pages 10–15,
2014.
[3] Muhammad Baqar Raza and Robert Harrison. Design,
development & implementation of ontological knowledge
based system for automotive assembly lines. Interna-
tional Journal of Data Mining & Knowledge Manage-
ment Process, 1(5):21–40, 2011.
[4] Simon Bøgh, Oluf Skov Nielsen, Mikkel Rath Pedersen,
Volker Krüger, and Ole Madsen. Does your robot have
skills? Proceedings of the International Symposium on
Robotics, page 6, 2012.
D. F. Corner, A. P. Ambler, and R. J. Popplestone.
Reasoning about the spatial relationships derived from a
RAPT program for describing assembly by robot. pages
842–844, August 1983.
[6] Tinne De Laet, Steven Bellens, Ruben Smits, Erwin
Aertbeliën, Herman Bruyninckx, and Joris De Schutter.
Geometric relations between rigid bodies (part 1): Se-
mantics for standardization. IEEE Robotics Automation
Magazine, 20(1):84–93, March 2013.
Rüdiger Dillmann. Teaching and learning of robot tasks
via observation of human performance. Robotics and
Autonomous Systems, 47(2-3):109–116, June 2004.
[8] Masakazu Ejiri, Takeshi Uno, Haruo Yoda, Tatsuo Goto,
and Kiyoo Takeyasu. A prototype intelligent robot that
assembles objects from plan drawings. IEEE Transac-
tions on Computers, C-21(2):161–170, 1972.
[9] P. Gu and X. Yan. CAD-directed automatic assembly
sequence planning. International Journal of Production
Research, 33(11):3069–3100, 1995.
[10] Monica N. Nicolescu and Maja J. Mataric. Natural
methods for robot task learning. Proceedings of the
International Joint Conference on Autonomous Agents
and Multiagent Systems, page 241, 2003.
[11] Bartholomew O. Nnaji. Theory of automatic robot
assembly and programming. Chapman & Hall, London,
1st edition, 1993.
[12] Zengxi Pan, Joseph Polden, Nathan Larkin, Stephen van
Duin, and John Norrish. Recent progress on program-
ming methods for industrial robots. Proceedings of the
International Symposium on Robotics, 28:619–626, 2010.
[13] Gerardo Pardo-Castellote. Experiments in the integration
and control of an intelligent manufacturing workcell. Phd
thesis, Stanford University, September 1995.
Mikkel Rath Pedersen, Lazaros Nalpantidis, Aaron Bobick,
and Volker Krüger. On the integration of hardware-
abstracted robot skills for use in industrial scenarios.
In Proceedings of the IEEE/RSJ International Confer-
ence on Robots and Systems, Workshop on Cognitive
Robotics Systems: Replicating Human Actions and Ac-
tivities, 2013.
[15] Maj Stenmark. Instructing Industrial Robots Using
High-Level Task Descriptions. Licentiate thesis, Lund
University, 2015.
[16] Moritz Tenorth and Michael Beetz. KnowRob –
a knowledge processing infrastructure for cognition-
enabled robots. International Journal of Robotics Re-
search, 32(5):566–590, April 2013.
[17] Moritz Tenorth, Alexander Perzylo, Reinhard Lafrenz,
and Michael Beetz. The RoboEarth language: Represent-
ing and exchanging knowledge about actions, objects and
environments. In Proceedings of the IEEE International
Conference on Robotics and Automation, pages 1284–
1289, St. Paul, MN, USA, May 2012.
[18] Andrea L. Thomaz and Cynthia Breazeal. Teachable
robots: Understanding human teaching behavior to build
more effective robot learners. Artificial Intelligence, 172
(6-7):716–737, April 2008.