Toward Efficient Robot Teach-In and Semantic
Process Descriptions for Small Lot Sizes
Alexander Perzylo, Nikhil Somani, Stefan Profanter, Markus Rickert, Alois Knoll
fortiss GmbH, An-Institut Technische Universität München, Munich, Germany
Abstract—We present a novel robot programming methodology
that is aimed at reducing the level of robotics expert knowledge
needed to operate industrial robotic systems by explicitly mod-
eling this knowledge and abstracting it from the user.
Most of the current robot programming paradigms are either
user-centric and fully-specify the robot’s task to the lowest
detail (used mostly in large industrial robotic systems) or fully
autonomous solutions that generate the tasks from a problem
description (used often in service and personal robotics). We
present an approach that is user-centric and can interpret
underspeciﬁed robot tasks. Such task descriptions make the
system amenable for users that are experts in a particular
domain, but have limited knowledge about robotics and are thus
not able to specify low-level details and instructions. Semantic
models for all involved entities enable automatic reasoning about
underspeciﬁed tasks and missing pieces of information.
We demonstrate this approach on an industrial assembly use-
case and present a preliminary evaluation, both qualitatively and quantitatively, vis-à-vis state-of-the-art solutions available from industrial robot manufacturers.
I. INTRODUCTION
After a long Fordist period of industrial production, there
has been a trend in some parts of today’s industry toward
individualized products. Additionally, robot-based industrial
automation increasingly diffuses into small and medium-
sized enterprises (SMEs). In both cases, manufacturers have to adapt their production processes to small lot sizes and a high number of product variants. Hence, they require their robot systems to allow for rapid changeovers and efficient
teaching. In this context, commercial viability of automated
production is highly inﬂuenced by the time required to teach
new processes and to adapt existing processes to variations of
a product. However, most SMEs cannot build the necessary
expertise in robotics in-house and have to rely on system
integrators. This drastically reduces the usability of robot
systems for small batch assembly.
Classical teaching concepts for robot systems offer ded-
icated robot programming languages, which are difﬁcult to
learn and far from intuitive. Human operators have to under-
stand and use concepts like Euler angles and work with raw
Cartesian coordinates, e.g., in order to deﬁne a grasp pose. An
important shortcoming of those programming languages is the
lack of semantic meaning. For instance, from looking at such
robot programs it is not possible to see what kind of object is
being manipulated or to easily keep track of the structure of
the overall process.
Fig. 1. Toward efﬁcient robot programming: moving from (a) a classical ap-
proach using the teach pendant using low-level programming languages to (b)
efﬁcient robot programming using high-level semantic process descriptions.
A more intuitive teaching paradigm is teaching through
manual guidance. Based on physical interaction of the operator
with the robot, robots can be quickly moved and the resulting
poses can be stored and re-used. It spares the human operator
the cumbersome effort of teaching positions through jogging
the robot via a teach pendant. Obviously, this approach is only
feasible for small robots. Besides, the resulting robot program
still does not know anything about the operator’s intention.
Industrial solutions such as Delmia or Process Simulate
offer sophisticated robot programming interfaces that scale to
a full production line with multiple robots. However, they
require the programmer to specify robot tasks on a low
level that requires expertise in robotics, thereby making these
solutions difﬁcult to use for untrained shop ﬂoor workers. The
operators typically have to train for several weeks to learn
how to use these interfaces. Even small changes in the robot
program require signiﬁcant programming effort.
In contrast to this, research—particularly in the domain
of service robotics—is being carried out to develop fully
autonomous cognitive robotic systems that are able to trans-
late high-level task descriptions to appropriate robot actions.
These systems are able to perceive their environment and to
reason about the implications of their tasks. Many of them
provide natural language interfaces, e.g., by exploiting task
descriptions from wikis or by interpreting speech input from
the operator. However, as of now, these systems have not had
a real impact on industrial applications.
In this paper, we introduce a concept tailored to intuitive
teaching of tasks for industrial robotic workcells (Fig. 1)
by utilizing proven techniques from the field of cognitive
robotics. The operator is still in charge of controlling most
aspects, whereas the cognitive capabilities of the robot system
are used to increase the efﬁciency in human-robot communi-
cation. Compared to classical approaches, it presupposes less
knowledge about the system. As a result, the required time
to train a human worker can be signiﬁcantly reduced. The
system may resort to previously modeled knowledge about
certain industrial domains, processes, interaction objects, and
the workcell itself. Hence, the communication between the
operator and the robot system can be lifted to a more abstract
and semantically meaningful level, where the operator can talk
about, e.g., which object to pick instead of specifying raw coordinates.

II. RELATED WORK
The quest for an easier, more intuitive, and more automated teaching process for robots dates back more than 40 years
now: In 1972, a prototype was developed that was able to assemble simple objects from plan drawings using a vision system [8]. Another early approach defined constraints between two objects using planar and cylindrical matching and developed an offline object-level language for programming robot assembly tasks [1, 5]. RALPH [11] is a similar system, which uses information from CAD drawings to automatically generate a process plan for the manipulator to perform the task. A hierarchical graph-based planning algorithm for automatic CAD-directed assembly is described in [9]. These classical
systems are not human-centric, i.e., the robot program is
generated automatically giving no possibility to a human
worker to alter or modify the proposed assembly. They also
require accurate models to perform automatic plan generation.
Additionally, the drawings or CAD models do not include any semantic description that would allow further reasoning about their properties, e.g., whether a cylindrical part is the thread of a screw and needs to be fastened during assembly.
Pardo-Castellote [13] described a system for programming
a dual arm robot system through a graphical user interface.
Despite being one of the more advanced systems at the time
of publishing, it only supported simple pick and place actions
and did not allow in-air assembly or, for example, horizontally
placing a cylindrical object into a corresponding hole.
In recent years, there have been research efforts toward
teaching robot programs by observing human demonstra-
tion [10, 7, 18]. They rely on a predefined set of speech commands and demonstration actions, or require various different sensors to detect human motions. Additional effort is
required to teach human workers how to demonstrate tasks
in a compatible way. Sensor requirements reduce the mobility
of such systems, since all sensors need to be set up in each new environment.

A comprehensive overview of programming methods for
industrial robots until the year 2010, including online (operator
assisted, sensor guided), ofﬂine programming (CAD data),
and augmented reality is presented in [12]. Current research
mainly focuses on task-level programming that requires only a small library of predefined lower-level building blocks, called skills [4, 14, 15].

Fig. 2. Overview of semantic models and their interrelations (process models: task types, task parameters, task ordering; object models: shape as BREP, recognition model, weight, etc.; workcell models: robot model, support structures). Object models may serve as parameters for certain tasks specified in process models. A workcell model relies on object models for describing its entities. For executing process models, they are deployed on workcells, exploiting the information in their workcell models.
With the rapid growth of the World Wide Web
and widespread availability of knowledge, Knowledge
Bases gained increasing importance for intuitively teaching
robots. RoboEarth [17] and RoboHow developed
robotic systems that use semantic descriptions to share knowl-
edge between different robots. Using Knowledge Bases for
deﬁning additional skill parameters simpliﬁes the whole teach-
ing process even further [15, 2].
III. TOWARD A NATURAL TEACHING PARADIGM
Current industrial robotic systems require the user to be
an expert not only in the application domain but also in robot programming. The key motivation of our approach is
to substantially reduce the level of robotics expertise required
to use such systems to a level where a shop ﬂoor worker with
minimal knowledge about robotics can instruct, interact with,
and operate the system. In this programming paradigm, the
robotics and domain speciﬁc knowledge is modeled explicitly
(Fig. 2) and the system needs to be able to understand,
interpret, and reason about it. We have chosen the Web
Ontology Language (OWL) for this knowledge representation,
primarily because OWL is based on a formal speciﬁcation, i.e.,
a description logic, which facilitates logical inference. Another
advantage is the ease with which additional knowledge (e.g.,
new workpieces) can be added to the system. This facilitates
the separation of knowledge and code, thus enabling addition
of information without changing the implementation.
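As a minimal sketch of this separation of knowledge and code, the snippet below stores facts as triples in a toy in-memory knowledge base; all names (the kb: prefix, Workpiece, hasWeight) are hypothetical stand-ins for the actual OWL ontology, and a real system would use an RDF/OWL store with reasoning support instead.

```python
# Toy illustration of knowledge/code separation: facts live in a triple
# store, not in the program, so new workpieces are added as data only.
KB = "kb:"  # hypothetical namespace prefix

triples = {
    (KB + "Workpiece", "rdf:type", "owl:Class"),
}

def add_workpiece(name, weight_kg):
    """Add a new workpiece individual without changing any code."""
    triples.add((KB + name, "rdf:type", KB + "Workpiece"))
    triples.add((KB + name, KB + "hasWeight", weight_kg))

def individuals_of(cls):
    return sorted(s for (s, p, o) in triples
                  if p == "rdf:type" and o == cls)

add_workpiece("MechanicalPipe1", 0.25)
add_workpiece("MechanicalTree1", 0.40)
print(individuals_of(KB + "Workpiece"))
# ['kb:MechanicalPipe1', 'kb:MechanicalTree1']
```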
A key concept in this design is programming at object level.
In this approach, the user speciﬁes robot tasks in terms of the
objects involved and the relevant parameters. This is in contrast
to the traditional teach pendant-based approaches where tasks
are speciﬁed in terms of raw coordinate frames. While there
exist some approaches that involve task speciﬁcation in terms
of the coordinate frames attached to an object [6], they are restricted to coordinate frames only.
In several manufacturing domains, especially assembly,
products are designed by domain experts using specialized
CAD software. In this process, the information and ideas that the product designer had in mind while designing the product are lost, and only the finished CAD model of the product is sent to manufacturing.

Fig. 3. Excerpt of task taxonomy. An abstract task description might have multiple actual implementations, i.e., robot or tool skills.

In contrast to this, we aim to
model, store, and exploit this information in order to ease the
programming of the manufacturing process. The ﬁrst step in
this direction—most suitable for assembly tasks—is to include
in the product description not only the ﬁnal CAD model,
but also the semantically relevant geometrical entities in it,
the constraints between these geometrical entities, and the
individual assembly steps that the designer took in designing
the ﬁnal product.
IV. SEMANTIC PROCESS MODELS
Semantic process models are descriptions of the steps
required for manufacturing a product, built by arranging tasks
that have been hierarchically deﬁned in a potentially con-
strained order. Each of these tasks is an object-level description of an individual step in the process, which might be underspecified and have pre- and post-conditions. Due to this abstract
description, they can be understood and executed not only by
a machine but also by a human operator. Robotic systems pro-
vide corresponding low-level implementations (called skills)
that require a complete parametrization with actual numerical values.

We have implemented a taxonomy of industrial task types
to deal with different industrial domains, e.g., wood working,
welding, and assembly (Fig. 3). Every type of task has a
certain set of parameters that are required to be set by the
user during the Teach-In phase. There are optional parameters
which can either be speciﬁed by the operator or automatically
inferred by the system upon deployment.
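A minimal sketch of such a task-type entry and its Teach-In check might look as follows; the parameter names objectToPick, objectToPlaceOn, endPose, pickTool, and placeTool appear in the teaching interface described later, while the TaskType encoding itself is an assumption.

```python
# Sketch of a task-type taxonomy entry with required and optional
# parameters. Optional parameters left unset may be inferred by the
# system upon deployment.
from dataclasses import dataclass, field

@dataclass
class TaskType:
    name: str
    required: set = field(default_factory=set)
    optional: set = field(default_factory=set)

PICK_AND_PLACE = TaskType(
    name="PickAndPlaceTask",
    required={"objectToPick", "objectToPlaceOn", "endPose"},
    optional={"pickTool", "placeTool"},
)

def missing_parameters(task_type, given):
    """Required parameters the operator still has to provide."""
    return task_type.required - set(given)

given = {"objectToPick": "MechanicalPipe1", "endPose": "Constraint1"}
print(sorted(missing_parameters(PICK_AND_PLACE, given)))
# ['objectToPlaceOn']
```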
Fig. 4 visualizes an excerpt of a semantic process model
that describes the process of assembling the core part of a
gearbox as shown in Fig. 5. It shows the class level concept
GearboxAssembly and its instantiation GearboxAssembly 324.
Every attempt at executing the assembly would result in an
additional instantiation of the class level description of the
task. This makes it possible to log all relevant information collected
during execution, e.g., in case of an anomaly we can determine
which task in which process instance failed, when it happened,
and due to what error. The GearboxAssembly 324 process consists of three Assembly tasks, i.e., AssembleBearingTreeTask, AssembleBearingPipeTask, and AssemblePipeTreeTask.

Fig. 4. Process model for the 3-step assembly of the core part of a gearbox. Boxes featuring a yellow circle or a purple rhombus represent classes and instances of those classes, respectively.

Fig. 5. Industrial use-case based on assembling four work pieces in three steps to form the core part of a gearbox: (a) exploded view, (b) assembly steps.
The order of these tasks is not yet fully speciﬁed. Instead,
three partial ordering constraints have been speciﬁed, which
assert that the task instance associated with PartialOrderingConstraint3 has to succeed the tasks associated with PartialOrderingConstraint2 and PartialOrderingConstraint1. No
further restrictions have been modeled, which results in the
order of the two tasks AssembleBearingTreeTask and Assem-
bleBearingPipeTask not being constrained. The AssembleBear-
ingPipeTask links two interaction objects MechanicalPipe1
and MechanicalTree1 as its parameters. The geometric con-
straints specifying the assembly pose have been deﬁned on a
sub-object level between single faces of the two object models
as explained in Section V-B.
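Under the assumption that each partial ordering constraint reduces to a "must come before" pair between task instances, the admissible execution orders of this process can be enumerated as follows (task names from Fig. 4):

```python
# Sketch: enumerate the execution orders that satisfy the partial
# ordering constraints of the GearboxAssembly process.
from itertools import permutations

tasks = ["AssembleBearingTreeTask", "AssembleBearingPipeTask",
         "AssemblePipeTreeTask"]
# AssemblePipeTreeTask has to succeed the other two tasks.
before = [("AssembleBearingTreeTask", "AssemblePipeTreeTask"),
          ("AssembleBearingPipeTask", "AssemblePipeTreeTask")]

def satisfies(order, constraints):
    pos = {t: i for i, t in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in constraints)

valid = [order for order in permutations(tasks)
         if satisfies(order, before)]
for order in valid:
    print(" -> ".join(order))
# Two valid orders remain: the first two assemblies may be swapped,
# while the pipe/tree assembly always comes last.
```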
V. SEMANTIC OBJECT MODELS
Object models form one of the key pillars of our proposed
robot programming approach. They are modeled in a hierarchi-
cal fashion, wherein properties of generic object types can be
re-used in the more speciﬁc ones. Process descriptions refer to
objects and their properties as parameters for each robot task.
The requirements of a task can help ﬁlter the type of object
suitable for it.
Simple object properties include information such as the
name, weight, pose, or material. Additional information such
as specialized appearance models that can be exploited by
computer vision modules to detect the objects in a workcell
can also be linked to it. Basic geometric properties include the bounding box, the corresponding dimensions, and the polygon mesh used for rendering the object.

Fig. 6. Upper taxonomy of geometric interrelation constraints.
Most classical approaches rely only on basic geometric
information. In our approach, we aim to preserve all relevant
information produced while designing the object. This includes
the CAD model used for creating the polygon mesh. We
support geometric representations at multiple levels of detail,
from points and coordinate frames, to semantically meaningful
entities such as lines and circles or planes and cylinders.
Constraints between these geometric entities can also be used
to describe parameters for robot tasks in an intuitive way.
A. Boundary Representation of Objects
A Boundary Representation (BREP) of CAD data describes
the geometric properties of points, curves, surfaces and vol-
umes using mathematical models as its basis. CAD models are
created by deﬁning boundary limits to given base geometries.
The BREP speciﬁcation distinguishes geometric and topolog-
ical entities, as illustrated in Fig. 7. Geometric entities hold
the numerical data, while the topological entities group them
and arrange them in a hierarchical fashion.
1) Topological Entities: The BREP standard specifies eight kinds of topological entities that are connected through properties which resemble the relations depicted in Fig. 7: Vertex, Edge, Face, Wire, Shell, Solid, CompSolid, and Compound. Only Vertices, Edges, and Faces have direct links to geometric entities. A Vertex is represented by a point. An Edge is represented by a curve and bounded by up to two Vertices.
Fig. 7. Illustration of the basic BREP structure. Its data model comprises
geometric entities (Point, Curve, Surface) and topological entities (the rest).
A Wire is a set of adjacent Edges. When the Edges of a Wire form a loop, the Wire is considered to be closed. A Face is represented by a surface and bounded by a closed Wire. A Shell is a set of adjacent Faces. When the Faces of a Shell form a closed volume, the Shell can be used to define a Solid. Solids that share common Faces can be grouped further into CompSolids. Compounds are top-level containers and may contain any other topological entity.
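A simplified sketch of this topology (with the geometry omitted) and a closed-wire check; note that the check below treats a wire as closed when every vertex bounds exactly two edges, which suffices for a single connected loop:

```python
# Sketch of the BREP topology described above: an Edge is bounded by
# Vertices, and a Wire is a set of adjacent Edges that is closed when
# its Edges form a loop. Geometry is omitted for brevity.
from collections import Counter

class Edge:
    def __init__(self, v1, v2):
        self.vertices = (v1, v2)

def wire_is_closed(edges):
    """A connected wire is closed if every vertex bounds exactly two edges."""
    counts = Counter(v for e in edges for v in e.vertices)
    return all(c == 2 for c in counts.values())

# Three edges forming a triangle: a closed Wire, which may bound a Face.
triangle = [Edge("A", "B"), Edge("B", "C"), Edge("C", "A")]
open_wire = [Edge("A", "B"), Edge("B", "C")]
print(wire_is_closed(triangle), wire_is_closed(open_wire))
# True False
```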
2) Geometric Entities: The topological entities may link to three types of geometric entities: Points, Curves, and Surfaces. They represent 0-, 1-, and 2-dimensional geometries, respectively. Curves and Surfaces are defined through
parameterizable mathematical models. Supported curve types
can be categorized as unbounded curves (e.g., lines, parabolas,
or hyperbolas) and bounded curves (e.g., Bezier curves, B-
spline curves, circles, or ellipses). Offset curves represent a
translated version of a given base curve along a certain vector,
whereas trimmed curves bound a given base curve by limiting
the minimum and maximum parameters of their mathematical
model. In case the exact model is unknown, a curve might
also be approximated by a polygon on triangulated data. The
geometric representation of an Edge can be speciﬁed by a 3D
curve, or a 2D curve in the parameter space of each surface
that the Edge belongs to.
Surfaces rely on unbounded mathematical models (e.g.,
planes, cones, or cylindrical surfaces) and bounded models
(e.g., Bezier surfaces, B-spline surfaces, spheres, or tori).
Surfaces can also be deﬁned as linearly extruded curves. An
offset surface translates a base surface along a given vector,
and a revolved surface is created by rotating a given base
curve around a given direction vector. Again, if the exact math-
ematical model of a surface is unknown, an approximation
based on triangulation might be speciﬁed.
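The idea of trimming can be illustrated with a minimal sketch (not the BREP standard's actual data model): an unbounded base curve whose parameter range is limited by minimum and maximum values.

```python
# Illustrative sketch of a trimmed curve: a parameterizable base curve
# bounded by limiting the min/max parameters of its mathematical model.

class Line3D:
    """Unbounded base curve: p(t) = origin + t * direction."""
    def __init__(self, origin, direction):
        self.origin, self.direction = origin, direction

    def point(self, t):
        return tuple(o + t * d for o, d in zip(self.origin, self.direction))

class TrimmedCurve:
    """Bounds a given base curve to the parameter range [t_min, t_max]."""
    def __init__(self, base, t_min, t_max):
        self.base, self.t_min, self.t_max = base, t_min, t_max

    def point(self, t):
        if not (self.t_min <= t <= self.t_max):
            raise ValueError("parameter outside trimmed range")
        return self.base.point(t)

# A finite line segment obtained by trimming an infinite line.
segment = TrimmedCurve(Line3D((0, 0, 0), (1, 0, 0)), 0.0, 2.0)
print(segment.point(1.5))  # (1.5, 0.0, 0.0)
```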
B. Geometric Constraints between Objects
Given the rich semantic description of CAD models, it is
possible to refer to arbitrary parts of a CAD model and to
link it with additional information. Fig. 6 shows the upper
taxonomy of the geometric constraints that have been designed
to represent assembly constraints. Those constraints are meant
to be speciﬁed between points, curves, and surfaces of objects.
In our representation, a geometric constraint refers to two geometric entities: a base entity with a defined pose, and a constrained entity whose pose depends on the base entity and the constraint itself.
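As an illustrative simplification of such a constraint (loosely following the concentricity case from Fig. 6; the class names and the tolerance-based check are assumptions), one can test whether a constrained cylinder's axis coincides with that of a base cylinder:

```python
# Sketch of a geometric interrelation constraint check between a base
# entity and a constrained entity. Concentricity of two cylinders here
# means: parallel axes and the constrained axis point lying on the base
# axis (cross product of direction and offset is ~zero).
import math

def normalized(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

class Cylinder:
    def __init__(self, axis_point, axis_dir, radius):
        self.axis_point = axis_point
        self.axis_dir = normalized(axis_dir)
        self.radius = radius

def satisfies_concentricity(base, constrained, tol=1e-9):
    d = base.axis_dir
    dot = sum(a * b for a, b in zip(d, constrained.axis_dir))
    parallel = abs(abs(dot) - 1.0) < tol
    off = tuple(p - q for p, q in
                zip(constrained.axis_point, base.axis_point))
    cross = (d[1] * off[2] - d[2] * off[1],
             d[2] * off[0] - d[0] * off[2],
             d[0] * off[1] - d[1] * off[0])
    on_axis = all(abs(c) < tol for c in cross)
    return parallel and on_axis

hole = Cylinder((0, 0, 0), (0, 0, 1), 10.0)
pipe = Cylinder((0, 0, 5), (0, 0, -1), 8.0)  # same axis, opposite direction
print(satisfies_concentricity(hole, pipe))   # True
```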
Fig. 8. Excerpt of a semantic workcell model. From left to right, this
illustration shows the class concept of a workcell and a particular instan-
tiation of it, called D1Workcell. In this example, a rigidly attached RGBD
sensor, work table, and two robots are associated with the workcell through
FixedJoint instances. The visualization shows, as an example, the robot class (ComauSmart5SixOne) and instance (comau-left-r1) of one of the two robots.
VI. SEMANTIC WORKCELL MODELS
A workcell model describes the physical setup of the
workcell, including robots, tools, sensors, and available skills.
For instance, the dual-arm robot workcell depicted in Fig. 1
contains two robot arms, a work table, and an RGBD sen-
sor. Fig. 8 shows an excerpt of the corresponding semantic
workcell description. It asserts the Workcell instance called
D1Workcell that links to its contained entities. These links are
represented through FixedJoint instances. In order to specify
the poses of the sensor, table, and robot bases with respect
to the workcell origin, the FixedJoints refer to instances of
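The pose resolution that these FixedJoints enable can be sketched with plain homogeneous transforms; the joint values below are made-up example numbers, not the actual D1Workcell calibration:

```python
# Sketch: resolving an entity's pose in the workcell frame by chaining
# fixed transformations (pure-Python 4x4 homogeneous matrices).
import math

def transform(tx, ty, tz, yaw):
    """Homogeneous transform: translation plus rotation about the z axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0, tx],
            [s,  c, 0, ty],
            [0,  0, 1, tz],
            [0,  0, 0, 1]]

def compose(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Pose of a robot base in the workcell frame (via its FixedJoint), and
# of the flange in the robot base frame (example numbers).
workcell_T_base = transform(1.0, 0.5, 0.0, math.pi / 2)
base_T_flange = transform(0.2, 0.0, 0.8, 0.0)

workcell_T_flange = compose(workcell_T_base, base_T_flange)
x, y, z = (round(workcell_T_flange[i][3], 3) for i in range(3))
print(x, y, z)  # 1.0 0.7 0.8
```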
VII. TEACHING INTERFACE
Having to manipulate the semantic process descriptions
manually would only shift the required expert knowledge
for using the robot system from the domain of robotics to
the domain of knowledge engineering. Therefore, we include
a graphical user interface for the human operator that is based on a modeling language and can run in any modern web browser.
Fig. 9 depicts the main view of the GUI showing a process
plan consisting of six tasks and the parameters of the selected
task. Parameters objectToPick and objectToPlaceOn can be
set through selecting the desired objects from a list (or other
modalities that are not described in this paper). pickTool and
placeTool are optional parameters, which might be used to
specify compatible tools for grasping the two objects involved
in the assembly. Parameter endPose deﬁnes the assembly pose
and can be set through a dedicated 3D interface. In the example
shown in Fig. 10, two cylindrical surfaces have been selected
and a concentricity constraint speciﬁed.
VIII. EXECUTION FRAMEWORK
The process plan designed using the intuitive teaching in-
terface is an underspeciﬁed, hardware independent description
of the robot’s tasks. In order to execute the task on a speciﬁc
workcell, the tasks need to be fully speciﬁed and mapped
to executable actions for the robot. As an example, the tool
for grasping an assembly object is an optional parameter in
the task specification interface. The appropriate tool can be automatically selected by matching the set of available tools in the workcell to the ones suitable for manipulating the object.

Fig. 9. Snippet of the relevant subpart of the teaching interface that acts as a frontend to the semantic process descriptions.

Fig. 10. Mode of the intuitive teaching interface for defining geometric constraints between two objects. As the two objects are described in a boundary representation, each of the two highlighted cylindrical surfaces can be selected with a single click. Geometric constraints can be chosen from the list on the right. A PlanePlaneCoincidenceConstraint (Fig. 6) has already been specified and is visualized in the bottom left corner of the GUI.
Once completely speciﬁed, the assembly task is mapped
to a set of primitive actions (e.g., move, open/close gripper
commands) that are offered by the robot controller interface.
The system can reason if this mapping is even possible for a
speciﬁc task on a speciﬁc workcell and provide the user with
appropriate feedback. A world model component maintains the
state of all involved entities in the world, including instances
of objects, tools, robots, etc. In our experimental setup, poses
of detected objects in the workcell are obtained from the vision
system and updated in the world model.
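The automatic tool selection mentioned above can be sketched as a simple set intersection; the tool and object names below are illustrative, not the actual workcell inventory:

```python
# Sketch of automatic tool selection: intersect the tools available in
# the workcell with those suitable for manipulating the object.
available_tools = {"ParallelGripper1", "VacuumGripper1", "Screwdriver1"}

suitable_tools = {
    "MechanicalPipe1": {"ParallelGripper1", "ThreeFingerGripper1"},
    "MechanicalTree1": {"ParallelGripper1", "VacuumGripper1"},
}

def select_tool(obj):
    """Return some workcell tool suitable for the object, or None, in
    which case the system can report that the task is not executable."""
    candidates = available_tools & suitable_tools.get(obj, set())
    return min(candidates) if candidates else None

print(select_tool("MechanicalPipe1"))  # ParallelGripper1
print(select_tool("UnknownPart"))      # None
```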
Once the task execution begins, monitoring can be done
at different levels of abstraction. The individual actions (e.g.,
open/close gripper) might fail, or the task as a whole might not
be possible due to unmet pre-conditions (e.g., assembly object
missing). In either case, the error message that is generated
contains semantically meaningful information for the user.
IX. EVALUATION
In order to evaluate our teaching concept, we carried out a
pre-study with one test person. The robotic task used in this
evaluation is the partial assembly of a gearbox (Fig. 5) using an industrial dual-arm robot system. The gearbox comprises four parts, which have to be assembled in three steps (Fig. 5(b)). Each assembly step requires sub-millimeter precision, making it an ideal use-case for a precise industrial robot.

Fig. 11. Snippet of a robot program created using the teach pendant, which is hard to understand, especially without the programmer's comments.
The subject ﬁrst followed the classical approach of pro-
gramming the task in a robot programming language using a
combination of teach pendant and PC. The second attempt was
made on a touch screen showing the graphical user interface of
our cognitive teaching framework. This comparison is biased
against our system as the test person has several years of
experience in using the teach pendant and robot programming
language, but only received a 10-minute introduction to the
intuitive framework right before he had to use it.
A video demonstrating our intuitive interface and the
results of this comparison can be found online at
A. Classical Teaching
The program for performing this robotic task is fairly
complex, involving numerous robot movement commands and
synchronization between the two robot arms (Fig. 11). Hence,
the skeleton for the program was created on a PC and the
speciﬁc robot poses for each movement were obtained by
jogging the robot using a teach pendant (Fig. 1(a)). Given the
precision required by the assembly task, the parts needed to be
placed into ﬁxtures and the robot poses ﬁne-tuned accordingly.
B. Intuitive Teaching
The process programmed using the intuitive teaching in-
terface is shown in Fig. 9. It consists of six steps, of which
three are assembly tasks parametrized using the BREP mating
interface. We employ a simple CAD model-based vision sys-
tem using a Kinect-like camera placed on top of the working
table (at a distance of ~1 m) to recognize objects, providing a positioning accuracy of ~1 cm. There are two centering steps
that use the imprecise object poses obtained from the vision
system and center the parts between the gripper ﬁngers of a
parallel gripper, thereby reducing their pose uncertainties.
C. Pre-Study Results
The experiments have been carried out in the same industrial
dual-arm robot workcell (Fig. 1). They investigated the times
required to program the assembly process from scratch, to
adjust the object poses in the existing program, and to adjust
the robots’ approach poses for the objects.
TABLE I
REQUIRED TIMES FOR ACCOMPLISHING VARIOUS PROGRAMMING TASKS FOR THE CLASSICAL AND THE INTUITIVE APPROACH.

Task                  | Classical (time in min) | Intuitive (time in min) | Saved time in %
----------------------|-------------------------|-------------------------|----------------
Full assembly         | 48                      | 8                       | 83
Adjust object poses   | 23                      | 0                       | 100
Adjust approach poses | 5                       | 2                       | 60
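The saved-time column of Table I follows directly from the measured times (rounded to whole percent), as this short check confirms:

```python
# Recompute the saved-time percentages of Table I from the measured
# classical/intuitive times.
times = {"Full assembly": (48, 8),
         "Adjust object poses": (23, 0),
         "Adjust approach poses": (5, 2)}

for task, (classical, intuitive) in times.items():
    saved = round(100 * (classical - intuitive) / classical)
    print(f"{task}: {saved} %")
# Full assembly: 83 %
# Adjust object poses: 100 %
# Adjust approach poses: 60 %
```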
Comparing the times required to teach the full assembly
using the two different methods shows a time-saving of 83 %
for the intuitive teaching approach (Table I). Updating the pre-
existing robot program to reﬂect changes in the work pieces’
positions took 23 min for the classical teaching approach,
whereas the intuitive teaching framework’s vision system
naturally coped with the changes automatically. Adjusting the
approach poses for the work pieces was accomplished in
5 min on a teach pendant. Using the intuitive teaching interface
it required 2 min.
While the programmer decided to use precise positioning by
jogging the robot using the classical approach, this option was
not available for our intuitive approach. With a more precise
vision system, the additional centering tasks would not be
necessary. This centering technique could have also been used
for the classical approach. However, as can be seen from the
second comparison (Table I), ﬁne tuning object poses takes
much more time than programming centering tasks.
The pre-study indicates the potential of semantically mean-
ingful and object-centric robot programming, as compared
to classical methods. We demonstrated that domain-specific interfaces in a cognitive system with domain knowledge ease the tedious task of programming a robot to a large extent.
By programming at the object level, the low-level details
pertaining to robot execution did not have to be programmed.
The resulting process plan was more concise, readable, and re-usable. The communication between the operator and the robot system could be lifted to an abstract level, relying on
previously available knowledge on both sides. This enables a
new operator to use the system with minimal training.
The preliminary results from our pre-study are very promis-
ing and we plan to perform additional evaluations and full-
scale user studies in the future, which will provide a more
holistic assessment of the proposed robot teaching approach.
ACKNOWLEDGEMENTS
We would like to thank Comau Robotics for providing the
industrial robot workcell used in our experiments and Alﬁo
Minissale for his support and participation in the evaluation.
The research leading to these results has received funding
from the European Union Seventh Framework Programme
(FP7/2007-2013) under grant agreement no. 287787 in the
REFERENCES

[1] A. P. Ambler and R. J. Popplestone. Inferring the positions of bodies from specified spatial relationships. Artificial Intelligence, 6(2):157–174, June 1975.
[2] Stephen Balakirsky and Andrew Price. Implementation of an ontology for industrial robotics. Standards for Knowledge Representation in Robotics, pages 10–15.
[3] Muhammad Baqar Raza and Robert Harrison. Design, development & implementation of ontological knowledge based system for automotive assembly lines. International Journal of Data Mining & Knowledge Management Process, 1(5):21–40, 2011.
[4] Simon Bøgh, Oluf Skov Nielsen, Mikkel Rath Pedersen, Volker Krüger, and Ole Madsen. Does your robot have skills? Proceedings of the International Symposium on Robotics, page 6, 2012.
[5] D. F. Corner, A. P. Ambler, and R. J. Popplestone. Reasoning about the spatial relationships derived from a RAPT program for describing assembly by robot. Pages 842–844, August 1983.
[6] Tinne De Laet, Steven Bellens, Ruben Smits, Erwin Aertbeliën, Herman Bruyninckx, and Joris De Schutter. Geometric relations between rigid bodies (part 1): Semantics for standardization. IEEE Robotics & Automation Magazine, 20(1):84–93, March 2013.
[7] Rüdiger Dillmann. Teaching and learning of robot tasks via observation of human performance. Robotics and Autonomous Systems, 47(2-3):109–116, June 2004.
[8] Masakazu Ejiri, Takeshi Uno, Haruo Yoda, Tatsuo Goto, and Kiyoo Takeyasu. A prototype intelligent robot that assembles objects from plan drawings. IEEE Transactions on Computers, C-21(2):161–170, 1972.
[9] P. Gu and X. Yan. CAD-directed automatic assembly sequence planning. International Journal of Production Research, 33(11):3069–3100, 1995.
[10] Monica N. Nicolescu and Maja J. Mataric. Natural methods for robot task learning. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, page 241, 2003.
[11] Bartholomew O. Nnaji. Theory of automatic robot assembly and programming. Chapman & Hall, London, 1st edition, 1993.
[12] Zengxi Pan, Joseph Polden, Nathan Larkin, Stephen van Duin, and John Norrish. Recent progress on programming methods for industrial robots. Proceedings of the International Symposium on Robotics, 28:619–626, 2010.
[13] Gerardo Pardo-Castellote. Experiments in the integration and control of an intelligent manufacturing workcell. PhD thesis, Stanford University, September 1995.
[14] Mikkel Rath Pedersen, Lazaros Nalpantidis, Aaron Bobick, and Volker Krüger. On the integration of hardware-abstracted robot skills for use in industrial scenarios. In Proceedings of the IEEE/RSJ International Conference on Robots and Systems, Workshop on Cognitive Robotics Systems: Replicating Human Actions and Activities.
[15] Maj Stenmark. Instructing Industrial Robots Using High-Level Task Descriptions. Licentiate thesis, Lund University.
[16] Moritz Tenorth and Michael Beetz. KnowRob – a knowledge processing infrastructure for cognition-enabled robots. International Journal of Robotics Research, 32(5):566–590, April 2013.
[17] Moritz Tenorth, Alexander Perzylo, Reinhard Lafrenz, and Michael Beetz. The RoboEarth language: Representing and exchanging knowledge about actions, objects and environments. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1284–1289, St. Paul, MN, USA, May 2012.
[18] Andrea L. Thomaz and Cynthia Breazeal. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6-7):716–737, April 2008.