Team Description Paper – Team TYROLICS
RoboCup@Work 2021
Martin Sereinig1, Matteo Saveriano2,3, Johannes Gerstmayr1, Michael Pieber1,
Justus Piater2,3, Simon Haller2,3, Peter Manzl1, Metiyan Ata1, Burkhard
Moser1, Rene Neurauter1, Lukas Possenig1, David Freina2, Michael
Dejanovic1, and Patrick Hofmann2
1Department of Mechatronics, University of Innsbruck, Technikerstr. 13, 6020
Innsbruck, Austria
2Department of Computer Science, University of Innsbruck, Technikerstr. 21a, 6020
Innsbruck, Austria
3Digital Science Center (DiSC), University of Innsbruck, Innrain 15, 6020 Innsbruck, Austria
Abstract. This paper presents the TYROLICS RoboCup@Work League team of the University of Innsbruck (Tyrol, Austria) and its current hardware and software development. The team has built a new mobile robot platform named Leobot, which has been designed for operation in industrial environments.
Keywords: robotics · University of Innsbruck · RoboCup@Work · RoboCup 2021 · Tyrolics · mobile manipulator · object recognition · mechatronic design · redundant robot · deep networks · real-time control · Franka Emika Panda · mecanum wheels
1 Introduction
The RoboCup@Work team TYROLICS (team logo shown in Fig. 1) started as a cross-location project at the University of Innsbruck and was founded in 2019. The idea was born in the research group for Machine Elements and Design of the Department of Mechatronics, which is part of the Faculty of Engineering Sciences. The research focus of the group is on theoretical, numerical, and experimental problems in the field of multibody system dynamics [3], such as robots [9], machine tools, or hybrid vehicles, which constitute a fundamental part of every mechatronic system.
Fig. 1: TYROLICS team logo.

Regarding problems with robot kinematics, the team is supported by the research group Geometry and CAD, experts in the field of kinematics [5,8]. To further broaden the team's expertise, the group Intelligent and Interactive Systems of the Department of Computer Science [13,12] joined the team. Their research focuses on vision, grasping, and manipulation with reactive algorithms and cognitive models to solve complete perception-action loops, as well as on human-robot interaction, image and video analysis, and visual neuroscience. Since the latter group hosted the RoboCup Junior Austrian Open in 2016 and 2019, they were already aware of the RoboCup world cup and enthusiastic about taking part. Since 2019 the team has been focusing on
developing a new mobile robot platform called Leobot, shown in Fig. 2. The name refers to the Leopold-Franzens-Universität Innsbruck. The development of an own mobile robot platform is a crucial prerequisite for participation at RoboCup, since the youBot, the platform used by most teams, is no longer available on the market. Only a small number of suitable alternatives fit the specifications of the RoboCup@Work rulebook in terms of size and functionality.
Fig. 2: 3D drawings of the mobile robot platform Leobot (left). Photograph taken during laboratory tests (right).
1.1 Team Structure
The team consists currently of 5 students, 5 junior researchers and 3 professors,
whereas the professors provide guidance and advisory support. In addition, stu-
dents are involved in RoboCup tasks through different courses every year. The
development of a new mobile robot platform is just possible due to the large
variety in the research interests of the team members.
Table 1: Overview of the team members by task and field of studies or role.
Name Task Field / Role
Johannes Gerstmayr Advisory Prof. Mechatronics, Innsbruck
Justus Piater Advisory Prof. Computer Science, Innsbruck
Martin Pfurner Advisory Prof. Geometry and CAD, Lienz
Martin Sereinig Organization Ass. Prof. Mechatronics, Lienz
Matteo Saveriano Organization Ass. Prof., Computer Science, Innsbruck
Simon Haller Software Engineer, Computer Science, Innsbruck
Rene Neurauter Mechanical drive system Team Leader, Ass. Prof., Innsbruck
Peter Manzl Control system Co-Team Leader, Ass. Prof., Innsbruck
Metiyan Ata Mechatronic system Student Mechatronics, Innsbruck
Burkhard Moser Safety system Student Mechatronics, Lienz
Lukas Possenig Electronic system Student Mechatronics, Lienz
Michael Dejanovic Mechanical System Student Mechatronics, Innsbruck
Patrick Hofmann Manipulation Student Computer Science, Innsbruck
2 Hardware Description
A design sketch of Leobot with its features is shown in Fig. 4. Mounting points as well as the overall dimensions of the robot can be seen in Fig. 5. Besides the 3D CAD design, detailed multibody simulations of the mobile robot platform are used during the development process, as shown in Fig. 3. The simulations were computed with the multibody simulation environment Exudyn [4], which provides a C++ multibody dynamics library with a Python user interface.
2.1 Mobile Platform
The mobile platform's base consists of a laser-cut aluminum plate which is designed to allow small twists of the platform, compensating for small unevennesses in the floor so that the mecanum wheels stay on the ground while moving. Item profiles are used for the skeletal structure of the robot and to attach the electronic modules. The electronic modules are located on two layers, since the modules' heights are smaller than the platform's. It is easier to scale the height than the base area of the robot; therefore, introducing more than one layer inside is a space-saving approach while maintaining a well-arranged order. The overall system costs and general parameters such as weight, dimensions, velocities, etc. can be found in Tabs. 5 and 4 in the appendix.

Fig. 3: Multibody system simulation of Leobot (left). Dynamic simulation of the deformation of Leobot under its own weight (right).
The built-in SICK laser scanners, which are nowadays widely used in mobile platforms, offer a number of advantages. Due to their diagonal arrangement, it is possible to eliminate dead zones and thus achieve optimal coverage. An Intel RealSense D435i camera is attached to the front of the robot. Its tasks include tape detection on the one hand; on the other hand, the odometry can be supported via the IMU values provided by the camera system. The combination of sensor data from the IMU, the wheel encoders, and the SICK laser scanners offers a sophisticated navigation system for autonomous maneuvering through space without the risk of collision.
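The encoder contribution to this odometry follows from the mecanum forward kinematics, mapping the four wheel angular velocities to a body twist. A minimal sketch (wheel ordering, roller angle, and the dimensions used below are assumptions for illustration, not Leobot's actual values):

```python
import numpy as np

def mecanum_forward_kinematics(wheel_speeds, r, lx, ly):
    """Estimate the body twist (vx, vy, wz) from four mecanum wheel
    angular velocities [w_fl, w_fr, w_rl, w_rr] in rad/s.
    r: wheel radius, lx/ly: half wheelbase/half track width.
    Signs assume 45-degree rollers in the usual X arrangement."""
    w_fl, w_fr, w_rl, w_rr = wheel_speeds
    vx = r / 4.0 * ( w_fl + w_fr + w_rl + w_rr)
    vy = r / 4.0 * (-w_fl + w_fr + w_rl - w_rr)
    wz = r / (4.0 * (lx + ly)) * (-w_fl + w_fr - w_rl + w_rr)
    return vx, vy, wz

# equal wheel speeds -> pure forward motion, no sideways or rotational part
vx, vy, wz = mecanum_forward_kinematics([10, 10, 10, 10], r=0.05, lx=0.25, ly=0.20)
```

Integrating this twist over the encoder sampling intervals yields the wheel odometry that is then fused with the IMU data.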
2.2 Franka Emika Panda Manipulator
For interactions with the surroundings of the robot, a Franka Emika Panda, shown in Fig. 6 (left), is used. The Franka Emika Panda controller and the corresponding power supply are included on the first electronic component level of the robot. The manipulator is mounted onto the base aluminium sheet at the front side of the mobile robot platform. It is a collaborative 7R manipulator, which means that it is kinematically redundant (7 rotational joints) and has been approved for interaction with humans. The redundancy can be utilized to optimize trajectories or to reconfigure the arm while maintaining the end-effector pose. The seven joints are marked in red in Fig. 6 (left).
The arm itself has a maximum payload of 3 kg and a maximum reach of 0.855 m, which is sufficient for the demands of the RoboCup@Work competition.
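The null-space reconfiguration mentioned above is commonly realized with the pseudo-inverse projection q̇ = J⁺ẋ + (I − J⁺J)q̇₀. A generic numpy sketch using a random Jacobian (not the Panda's actual kinematics):

```python
import numpy as np

def redundant_velocity(J, xdot, qdot0):
    """Joint velocities realizing the end-effector twist xdot while
    projecting a secondary motion qdot0 into the Jacobian null space."""
    J_pinv = np.linalg.pinv(J)           # Moore-Penrose pseudo-inverse
    N = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector
    return J_pinv @ xdot + N @ qdot0

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))   # 6D task, 7 joints: kinematically redundant
xdot = rng.standard_normal(6)     # desired end-effector twist
qdot0 = rng.standard_normal(7)    # secondary task, e.g. joint-limit avoidance
qdot = redundant_velocity(J, xdot, qdot0)
```

Because the secondary motion is projected into the null space, the commanded end-effector twist is achieved regardless of the reconfiguration motion.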
Fig. 4: Design sketch of the mobile manipulator Leobot. Annotated components: Franka Emika Panda; mecanum wheel with v-belt construction, motor, and gearbox; base of 6 mm aluminium sheet; 3D camera; 2D lidar Sick TiM5xx; two layers with electronic components.
Fig. 5: Overall dimensions of Leobot.
Fig. 6: Franka Emika Panda manipulator with 7 rotational joints (left). Franka
Emika Panda gripper (right).
It further provides torque sensors in all seven axes, which are useful for tasks such as precision placement, where objects need to be inserted into a cavity. It also comes with a parallel gripper (Fig. 6 (right)) with 80 mm travel and exchangeable fingers. Additional manipulator properties can be found in Tab. 4. On top of the end effector, an Intel RealSense D415 depth camera is used to recognize and localize objects during a given task.
2.3 Electric Motor and Gear Design Calculation
The use of four mecanum wheels enables omnidirectional movement of the base. They are driven by Maxon motors which are connected to the wheels via a timing-belt construction (see Fig. 7 (left)). This facilitates positioning the robot compared to normal wheels, because it can move along arbitrary trajectories. Independent control of the wheels is required for this setup.
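The independent wheel control follows from the mecanum inverse kinematics, which maps a desired body twist (vx, vy, ωz) to four wheel speeds. A sketch under an assumed X-arrangement sign convention (not necessarily Leobot's exact mounting):

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r, lx, ly):
    """Wheel angular velocities [w_fl, w_fr, w_rl, w_rr] (rad/s) for a
    desired body twist. r: wheel radius, lx/ly: half wheelbase/track."""
    k = lx + ly
    return np.array([
        (vx - vy - k * wz) / r,  # front left
        (vx + vy + k * wz) / r,  # front right
        (vx + vy - k * wz) / r,  # rear left
        (vx - vy + k * wz) / r,  # rear right
    ])

# pure forward motion at 0.5 m/s: all four wheels spin equally
w = mecanum_wheel_speeds(0.5, 0.0, 0.0, r=0.05, lx=0.25, ly=0.20)
```

A pure sideways or rotational twist produces the characteristic opposing wheel-speed patterns of a mecanum drive.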
The aim is to calculate the belt transmission ratio iR for a given nominal speed and the torque safety factor S for a given target acceleration, taking into account the moving mass and the given motor parameters. The parameters of the motor and the gearbox are taken from the data sheets (see Tab. 2). Four Maxon brushless DC motors (EC-i30) are used in combination with planetary gears (GP32C) and incremental encoders (ENC16RIO4096).
With the mechanical parameters rM (radius of the mecanum wheels) and m (mass of the robot), and the design parameters vmax (maximum velocity) and a (maximum acceleration), the desired wheel speed nmax (see Eq. 2) can be calculated:

wmax = vmax / rM        angular velocity of the mecanum wheel   (1)
nmax = wmax · 30 / π    wheel speed                              (2)
Fig. 7: Drive assembly group. Mecanum wheel with timing-belt construction
(left). Representation of the motor-gearbox assembly group (right).
Table 2: Parameters of motor and gearbox
dM     Diameter of the mecanum wheel
vmax   Maximum platform speed
wmax   Angular velocity of the mecanum wheel
iR     Belt transmission ratio
iP     Planetary gear transmission ratio
ηP     Efficiency of the planetary gear
ηR     Efficiency of the belt drive
ηL     Efficiency of the bearings
nN     Nominal speed of the motor
nN2    Gearbox output speed at rated speed
MN     Nominal motor torque
m      Mass of the vehicle
a      Target acceleration of the vehicle
c      Factor resulting from the inclined position of the rollers
Using these parameters, the desired motor speed behind the gearbox and the force per wheel (see Eq. 6) can be determined:

nN2 = nN / iP     motor speed at the gearbox output   (4)
F = m · a         force to accelerate the vehicle     (5)
FRad = F / 4      force per wheel                      (6)

With the help of the overall efficiency ηges = ηP · ηR · ηL, the required belt transmission ratio, the effective torque, and the torque safety factor can be calculated:

iR = nN2 / nmax                       required belt transmission ratio   (7)
Meff = MN · iP · iR · c · ηges        effective torque                   (8)
MRad = FRad · rM                      required torque per wheel          (9)
S = Meff / MRad                       torque safety factor               (10)

The calculation results in a belt transmission ratio of 1.64 and a torque safety factor of 2.3.
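The calculation chain of Eqs. (1)–(10) can be reproduced in a few lines. The parameter values below are illustrative placeholders, not the actual data-sheet values, so the resulting numbers differ from the 1.64 and 2.3 reported above:

```python
from math import pi

# Illustrative parameters (assumed, not the actual data-sheet values)
r_M   = 0.05    # mecanum wheel radius in m
m     = 60.0    # vehicle mass in kg
v_max = 1.0     # maximum platform speed in m/s
a     = 1.0     # target acceleration in m/s^2
n_N   = 4860.0  # nominal motor speed in rpm
M_N   = 0.1     # nominal motor torque in Nm
i_P   = 14.0    # planetary gear transmission ratio
c     = 1.0     # roller-inclination factor (assumed 1 here)
eta_ges = 0.9 * 0.95 * 0.99    # eta_P * eta_R * eta_L

w_max = v_max / r_M                     # Eq. (1): wheel angular velocity, rad/s
n_max = w_max * 30.0 / pi               # Eq. (2): wheel speed, rpm
n_N2  = n_N / i_P                       # Eq. (4): gearbox output speed, rpm
i_R   = n_N2 / n_max                    # Eq. (7): required belt transmission ratio
F     = m * a                           # Eq. (5): force to accelerate the vehicle
F_Rad = F / 4.0                         # Eq. (6): force per wheel
M_eff = M_N * i_P * i_R * c * eta_ges   # Eq. (8): effective torque per wheel
M_Rad = F_Rad * r_M                     # Eq. (9): required torque per wheel
S     = M_eff / M_Rad                   # Eq. (10): torque safety factor
```

With the real data-sheet values in place of the placeholders, the same script reproduces the design numbers quoted above.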
2.4 Manufacturing
In this section, the manufacturing and assembly of the components produced in-house are briefly discussed. These components are preferably made of aluminum or aluminum alloys, a material with low density and, depending on the alloy, high strength; it is therefore well suited for lightweight construction. The drive shafts of the mecanum wheels are made of the aluminum alloy EN AW-2007. This material is very easy to machine, and its yield strength exceeds 200 MPa. The shafts were machined on a CNC lathe and are shown in Fig. 8 (left).
The bearing housing for the seat of the drive shaft bearings and the bracket for the motor-gearbox unit are also made of an aluminum alloy (EN AW-5083). This material is easy to machine and has sufficient strength. The component parts were manufactured on a CNC milling machine and are shown in Fig. 8 (right).
Fig. 8: Manufactured milled/lathed parts. Drive shaft before mounting in wheel
hub (left). Bearing housing for the bearing of the drive shaft (right).
The base plate of the vehicle platform is cut from an aluminum sheet using a laser cutting process. All holes for fixing the components are provided in this base plate. Afterwards, all necessary threads were cut into the plate.
2.5 Assembly
By designing an appropriate interference fit, the drive shaft is pressed into the
hub of the Mecanum wheel. In the same way, the deep groove ball bearing
and the cylindrical roller bearing were pressed into the bearing housing. The
motor-gearbox unit is frictionally connected to the motor bracket by a clamp
connection. The synchronous belt wheels are joined to the drive shaft by a special
adhesive. The synchronous belt wheel on the motor shaft is additionally held by a
screw connection. The assembly drawing in Fig. 7 illustrates how the components fit together.
2.6 Electrical Concept
The basic electronics and wiring of the robot are a fundamental part of the project. Standard 2.5 mm² PVC cables are used for most of the power lines and 4 mm² cables for the main connections between the battery and the voltage converters. Furthermore, most of the connectors of the built-in devices are eliminated to reduce the risk of bad contacts. The robot uses a 24 V, 20 Ah LiFePO4 battery; this advanced lithium cell technology simplifies handling while charging. The 24 V system was chosen as a good compromise between voltage and current: a higher voltage needs more battery cells, which leads to a bigger battery, while a lower voltage results in higher currents, which requires larger wire gauges. Furthermore, DC/DC converters and other electronic devices are widely available for 24 V. Fig. 9 shows an overview of the electric circuit.
Fig. 9: Overview of the generation of the different voltage levels: the 24 V main battery feeds the four EPOS4 motor controllers and a 5 V DC/DC converter; a 230 V AC inverter supplies the Franka power supply, switchable to a wall socket; further consumers are the 24 V charger, the main PC, the safety utilities, the universal utility connector, and the robotic arm.
Different voltage levels are necessary for the robot to work properly; therefore, two DC/DC switch-mode converters are used to generate 5 V and 12 V from the 24 V battery voltage. Furthermore, a 230 V inverter is needed to power the Franka Emika robotic arm. This supply can also be provided via an external IEC 60320 C13 connector, which allows stationary use of the robotic arm (with its original software). A relay automatically switches the power source for the Franka Emika power supply and shuts down the inverter when a 230 V cable is connected to the robot. It would be easier and more efficient to generate the necessary 48 V directly from the 24 V battery voltage, but the Franka Emika controller does not allow an external power source.
Universal Utility Connector (UUC) The UUC is a feature provided on the back of the mobile robot platform, since the robot will also be used for educational purposes in the future. It is a multi-pin connector that provides 24 V, 12 V, 5 V, GND, and 10 Mbit Ethernet for external hardware that can be mounted on the robot. The 24 V pins are connected to the implemented SICK safety PLC (Programmable Logic Controller, see Section 3) and are turned off by the emergency circuit.
3 Safety
The implementation of a safety concept with two SICK TiM781S laser scanners and a SICK FX3 CPU is discussed in the following section. Its purpose is to stop the motor drivers (depending on the robot's base velocity) if an obstacle violates the predefined safety zone, preventing damage on both sides; furthermore, the sensors can be used to generate a 2D map of the environment. Moreover, the e-stop, reset, and mute switches, as well as the electronic safety circuits, are managed by this PLC. With the suitable lidar module and PLC, it is possible to stop the robot from colliding with its surroundings or a person without relying on ROS, which does not provide real-time execution. Another important feature is to obtain the point cloud from the lidar scanners and publish it via ROS to allow autonomous driving.
The chosen SICK solution fits with its outer dimensions into the platform concept and allows collecting the measurement data via TCP/IP. Furthermore, the sensor system publishes its data in ROS out of the box. A safety-approved PLC from SICK is used to switch the TiM's safety zones according to the current speed of the robot (which determines the distance it takes to stop) and to e-stop the system if an obstacle is detected in the configured range.
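The speed-dependent zone switching can be sketched as a lookup from the current base speed to a configured protective-field radius. The zone table below is hypothetical; the real radii are configured in the SICK Sopas tool:

```python
def select_safety_zone(speed, zones):
    """Pick the safety-zone radius whose speed limit covers the current
    base speed. `zones` maps an upper speed bound (m/s) to a zone radius
    (m) derived from the stopping distance at that speed."""
    for v_max, radius in sorted(zones.items()):
        if speed <= v_max:
            return radius
    raise ValueError("speed exceeds the largest configured zone")

# hypothetical zone table: higher speeds require larger protective fields
zones = {0.3: 0.35, 0.8: 0.70, 1.5: 1.20}
radius = select_safety_zone(0.5, zones)
```

The PLC performs the equivalent selection in hard real time, independent of ROS.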
3.1 Electronic Safety
The PLC also checks the cable connections to the charging and 230 V ports (see Section 2.6) and lowers the maximum speed of the motor controllers if it detects a connection. This is meant to prevent accidents such as ripped-out connectors or torn-off cables. Besides physical safety, the safety of the electronic parts also plays a crucial role. Therefore, fuses protect devices and cables from overcurrent caused by unpredicted events. For an optical indication of a blown fuse, a simple LED-resistor circuit is used for each fuse.
3.2 Configuration
As mentioned before, the PLC monitors the lidar, the e-stop switches, and the cable connections. The STOP function of the motor controllers is triggered via the PLC if a violation occurs. A discovered obstacle in the safety zone leads only to a motor stop; the power supply of the Franka Emika Panda is disabled only if the e-stop switch is triggered. Communication with the main computer (ROS-based system) via Modbus provides the e-stop state, while the correct safety zone has to be sent in the other direction. Additionally, the safety zones of the TiM lidar modules are configured with the SICK Sopas tool.
4 Software Description
The computing power of the system consists of a PC with an i7 processor running ROS Melodic; additionally, an NVIDIA Jetson Nano GPU is used. An overview of the currently used software (ROS) packages is given in Tab. 3.
4.1 Communication Structure
The motor controllers, the manipulator, and the router each have their own RJ45 jack with Gigabit Ethernet/EtherCAT. The manipulator and the motor controllers both communicate at a frequency of 1 kHz, which the real-time PC has to sustain. The SICK laser scanners are used in ROS for mapping and obstacle detection and, together with the safety PLC, realize a hard real-time safety system overlaying ROS in which safety zones can be defined (see Section 3). This makes testing algorithms on the real robot considerably safer, as it reduces the risk of damage to the robot and its environment without relying on the ROS software components, which are under development. An operator PC can be connected to the master PC either via an RJ45 plug at the back of the robot or via WLAN. The communication between the robot's PC and the motor controllers uses the EtherCAT fieldbus and is realized in a custom package. The EtherCAT connection is implemented with the library SOEM (Simple Open EtherCAT Master) [7].
Fig. 10: The structure of hardware and communication.
4.2 Pointcloud and Mapping
A custom ROS launch script collects the individual point clouds of both laser scanners and merges them into one [2], which can be used, e.g., for navigation. The model of the lidar modules is included in this step to expose possible problems in the transformation between the local coordinate systems. For first results, hector slam [6] was tried, since it does not require odometry data. Results are shown in Fig. 11.
Fig. 11: First attempts at mapping.
4.3 Real-time object detection using a deep neural network
Object detection is of fundamental importance for the competition. Already the basic tests, e.g., the Basic Manipulation Test, rely on object detection. In our platform, we address the problem of object recognition by exploiting recent advances in computer vision. The adopted combination of deep learning techniques and dedicated low-power hardware results in a robust and efficient object detection system. The main features of the proposed solution are presented in the following.
The You Only Look Once (YOLO) network
The YOLO network [10] is a popular approach to object detection. From a single input image, the network outputs both the recognized object label and a bounding box around each object. It also works when multiple objects are present in the same input image, and even in the case of partial overlap (occlusions). In order to simultaneously classify an object and create a bounding box, it is possible to perform classification on image patches (bounding boxes). This idea is trivial to implement using a sliding window and a fixed rectangular shape; however, it increases the computational complexity and generates inaccurate results. The key idea of YOLO to make this process accurate and computationally efficient is the observation that most bounding boxes have certain height-width ratios. Hence, predefining those boxes results in faster training and increased stability (see Fig. 12).
Fig. 12: The working principle of the YOLO network. Classification on image patches is efficiently computed by exploiting a set of nine predefined bounding boxes.
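The anchor-matching idea can be illustrated by assigning a ground-truth box to the predefined prior with the highest intersection-over-union. The anchor sizes below are the commonly published YOLOv3 priors, used here only as an example:

```python
import numpy as np

def iou_wh(wh_a, wh_b):
    """IoU of two boxes given only (width, height), as if centered at
    the same point -- the criterion used to match ground truth to anchors."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

def best_anchor(gt_wh, anchors):
    """Index of the predefined anchor box with the highest IoU."""
    scores = [iou_wh(gt_wh, a) for a in anchors]
    return int(np.argmax(scores))

# the nine (w, h) priors in pixels commonly published for YOLOv3
anchors = [(10, 13), (16, 30), (33, 23), (30, 61), (62, 45),
           (59, 119), (116, 90), (156, 198), (373, 326)]
idx = best_anchor((100, 80), anchors)  # a ground-truth box of 100 x 80 px
```

During training, only the prediction associated with the best-matching prior is made responsible for that object, which is what speeds up and stabilizes training.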
The YOLO network is trained by minimizing a loss defined as the sum of 1) a localization loss, 2) a confidence loss, and 3) a classification loss. Training is performed in a supervised manner using a labeled dataset that contains the class label of each object and the bounding box around each object. The network is implemented using the popular open-source platform TensorFlow4. Our YOLOv3 [11] implementation in TensorFlow 2.0, based on the implementation available at5, is freely available at6.
An object dataset for the RoboCup@Work
The YOLO network implementation comes with pre-trained weights. Using them is a common practice in deep learning to properly initialize the network. To make the network work for the RoboCup@Work competition, a fine-tuning step is needed. To this end, one has to create a labeled dataset and re-train the network. Hence, the first step in the fine-tuning is to create a dataset containing all the objects used in the competition, with associated class labels and bounding boxes. One possibility is to create a dataset of real images by taking pictures of the objects from various perspectives and on different backgrounds. This should enable the neural network to identify the objects despite real influences such as exposure, occlusion, and background clutter. Further problems such as deformations and intra-class variations do not occur, since the objects are rigid and known in advance. However, the time needed to create and label such a dataset is prohibitive. To solve this issue, we propose a “hybrid” solution where the dataset contains both real and synthetic images. Synthetic images are rendered from 3D models that we have created using computer-aided design (CAD) tools. These images are rendered from different views and on different backgrounds to make the dataset more realistic, allowing the network to recognize an object independently of the viewing angle and background. Clearly, a synthetic dataset, even if very realistic, will not let the network perform at its best. Therefore, we augment the synthetic data with real images of the same objects, again taken from different views and on different backgrounds. Currently, the dataset consists of 5200 images, of which about 5 % are real images and the rest are synthetic views generated from CAD models. Sample images from the dataset are shown in Fig. 13. We reserve 4150 images for YOLO network training (fine-tuning) and 1050 images for testing. The evaluation is still ongoing.
(a) Bearing (synthetic image) (b) Bearing (real image)
(c) F20 profile (synthetic image) (d) F20 profile (real image)
Fig. 13: Sample images from the object dataset.
Deployment on NVIDIA Jetson
The YOLO network, like almost all existing deep networks, runs best when parallel computation capabilities are available, i.e., when the object detection runs on a Graphics Processing Unit (GPU). However, the computational power available on mobile platforms is typically limited due to space and energy constraints. Moreover, the drivers of popular graphics cards, such as NVIDIA and AMD Radeon, are not compatible with the real-time kernel used to perform high-frequency control of the robotic arm (see Section 4.4). These reasons usually limit the deployment of deep-learning-based solutions on mobile robotic platforms.
In order to overcome these limitations, we decided to exploit recent hardware advances in the field of parallel computing by integrating an NVIDIA Jetson Nano7 into our mobile platform. The NVIDIA Jetson Nano is a complete computer with dedicated CPU, GPU, memory, and communication interfaces (e.g., USB and Gigabit Ethernet). The integrated GPU, an NVIDIA Maxwell with 128 cores, is relatively powerful given the compact size (69.6 × 45 mm) and the maximum power consumption of 10 W. Having a separate PC dedicated to GPU computation allows for a physical separation of low-priority (object detection) and high-priority (robot control) processes. Finally, the NVIDIA Jetson Nano is fully compatible with TensorFlow 2.0, which is used to implement the YOLO network.
4.4 Real-time robotic arm control
The Franka Panda arm is controlled through the Franka Control Interface (FCI), which allows bi-directional communication between the low-level arm controller and an external PC8. An open-source C++ library, libfranka, provides access to the robot state and control variables at 1 kHz9. On top of this, it is possible to run a lower-frequency communication loop using the ROS node franka ros. To match the real-time requirements, the high-frequency control loop runs on a PC equipped with Ubuntu 18.04 and a real-time kernel. In more detail, the PREEMPT RT kernel patch version 5.2.21-rt13, configured as a “Fully Preemptible Kernel”, is used. Under this preemption model, low-priority processes can always be preempted to guarantee the timed execution of high-priority ones. Therefore, high-frequency control loops are executed as high-priority processes with preemption rights, while other processes such as communication and visualization are treated as low-priority processes and can be preempted if they affect the control frequency.
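On Linux, promoting a control process to the SCHED_FIFO real-time class can be sketched with the standard library. This is a generic illustration, not the team's actual launch code; without the CAP_SYS_NICE privilege the call is simply denied:

```python
import os

def make_realtime(priority):
    """Try to switch the calling process to the SCHED_FIFO real-time
    scheduling class, as a 1 kHz control loop on a PREEMPT_RT kernel
    would. Returns True on success, False if the process lacks the
    required privilege -- hedged so the sketch also runs unprivileged."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

ok = make_realtime(80)  # 80 is a typical priority for a control loop
```

The equivalent C call (`sched_setscheduler`) is what libfranka-based control applications rely on to meet the 1 kHz deadline.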
4.5 Precision Placement
To succeed at the more advanced Precision Placement task, a trajectory from human demonstration is used. This data is used to train Dynamic Movement Primitives (DMPs). For the execution, impedance control is currently used. Later on, this approach will be improved by implementing the control algorithm proposed in [1], which uses the forces and torques measured during the demonstration to optimize the trajectory generated by the DMPs. The idea is that the human operator exploits haptic feedback to accomplish precise tasks. The holes are detected by visual object recognition, so errors in the starting position are expected. The arm is guided to a rough starting point, from where the precise DMPs are executed. Robust placement should be possible with the feedback control, so that errors occurring while executing the movement can be compensated. For testing these trajectories, the CoppeliaSim simulator (formerly V-REP) was used. In the demonstration an Item profile (40 × 40 × 100 mm) was used; in the simulation it is represented by a cuboid, as shown in Fig. 14.
Fig. 14: CoppeliaSim simulation of the 7-DOF Franka Emika Panda robot arm.
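The DMP execution can be illustrated with a minimal one-dimensional rollout. The gains are textbook values, and the forcing term learned from the demonstration is omitted (set to zero), so this sketch reduces to a critically damped approach to the goal:

```python
import numpy as np

def dmp_rollout(y0, g, T=1.0, dt=0.001, alpha_z=25.0, beta_z=25.0 / 4,
                alpha_x=4.0, forcing=None):
    """Integrate a 1-D discrete DMP from y0 toward goal g with Euler
    steps. `forcing` would be the nonlinear term learned from the human
    demonstration; with forcing=None the system is a critically damped
    spring-damper that converges to g."""
    y, z, x = y0, 0.0, 1.0  # position, scaled velocity, canonical phase
    tau = T
    n = int(T / dt)
    traj = np.empty(n)
    for i in range(n):
        f = forcing(x) if forcing else 0.0
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau
        z += dz * dt
        y += dy * dt
        x += dx * dt
        traj[i] = y
    return traj

traj = dmp_rollout(y0=0.0, g=0.3)  # e.g. a 0.3 m insertion motion
```

Replacing the zero forcing term with a learned function of the phase x is what shapes the rollout to reproduce the demonstrated placement motion.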
4.6 Navigation Stack
The ROS navigation stack requires the components and packages discussed here to work on the robot platform.
tf (Transform Configuration)
Every robotic system in ROS is made of different coordinate frames, e.g., the world frame or the base frame of the robot itself. These frames change over time and therefore have to be tracked. The transform configuration, tf, is used to relate the different frames to each other over time. This is done using a structure called the transform tree: every node represents a frame, and every edge is the transformation between two frames. While tf is used to obtain the position of the robot in the map, it provides no information about the velocity. This is where the odometry information comes into play.
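Walking the transform tree amounts to chaining homogeneous transforms along its edges. A 2-D sketch with made-up frame poses:

```python
import numpy as np

def transform_2d(x, y, theta):
    """Homogeneous 2-D transform, the building block of a tf edge."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# chain map -> odom -> base_link, as walking the tf tree would
T_map_odom = transform_2d(2.0, 1.0, 0.0)          # odom frame in the map
T_odom_base = transform_2d(0.0, 0.0, np.pi / 2)   # robot rotated 90 degrees
T_map_base = T_map_odom @ T_odom_base

# a point 1 m ahead of the robot, expressed in the map frame
p_base = np.array([1.0, 0.0, 1.0])
p_map = T_map_base @ p_base
```

tf performs this chaining (in 3-D, with interpolation over time) for any pair of frames in the tree.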
Sensor Information
Sensor information is used for one of the main tasks of navigation: reactively avoiding collisions with objects that are not in the global costmap. It is used to update the local costmap and to detect new obstacles that were not present at the time of the initial mapping.
There are two types of maps in ROS:
global costmap: is generated either from a static map if the area was already
discovered and mapped before or by some kind of SLAM implementation.
The map is published on the topic /map and it is used to get the environment
boundaries and to calculate the relative position and orientation of our robot.
local costmap: gives information about appearing obstacles or other objects in the current path of the robot. Virtual walls, represented by tape, cannot be detected with the laser sensor information; when detected by the camera, these walls are inserted into the local costmap.
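Updating the local costmap from sensor returns can be sketched as rasterizing world-coordinate hits into grid cells. This is a simplified stand-in for what ROS costmap_2d does, with assumed map geometry:

```python
import numpy as np

def mark_obstacles(costmap, points, resolution, origin):
    """Mark sensor-detected points (world coordinates, m) as lethal
    cells in a local costmap grid of the given resolution (m/cell)."""
    LETHAL = 254  # value used by ROS costmap_2d for occupied cells
    for wx, wy in points:
        mx = int((wx - origin[0]) / resolution)
        my = int((wy - origin[1]) / resolution)
        if 0 <= mx < costmap.shape[1] and 0 <= my < costmap.shape[0]:
            costmap[my, mx] = LETHAL
    return costmap

grid = np.zeros((40, 40), dtype=np.uint8)  # 2 m x 2 m window at 5 cm cells
grid = mark_obstacles(grid, [(0.5, 0.5), (1.0, 1.5)], 0.05, (0.0, 0.0))
```

Camera-detected tape walls would be inserted through the same mechanism, as synthetic obstacle points.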
Global Planner
The global planner calculates the best possible route to the goal location using interpolation. The global planner supports several behaviors, for example grid path, which makes the robot follow the grid, or Dijkstra, which uses Dijkstra's algorithm to calculate the best route.
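Dijkstra's algorithm on a grid costmap can be sketched as follows (a toy planner, not the actual ROS global planner implementation):

```python
import heapq

def dijkstra_grid(costmap, start, goal):
    """Cheapest path cost on a 4-connected grid. Cells holding None are
    blocked; otherwise the value is the cost of entering the cell."""
    rows, cols = len(costmap), len(costmap[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] is not None:
                nd = d + costmap[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None  # goal unreachable

grid = [[1, 1, 1],
        [None, None, 1],
        [1, 1, 1]]
length = dijkstra_grid(grid, (0, 0), (2, 0))  # detour around the blocked row
```

On a uniform-cost map this degenerates to breadth-first search; with real costmap values it naturally avoids cells near obstacles.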
5 Conclusion and Future Work
In this paper, the development of a new 10-degree-of-freedom mobile robot platform is presented. Besides the mechanical construction and the development of the electronic system, safety procedures are discussed; especially for collaborative mobile manipulators this is a crucial point. The mobile robot platform Leobot will be used in the RoboCup@Work league as well as for teaching in the field of mobile robotics. Moreover, it will be used to develop and evaluate new algorithms for kinematically redundant collaborative mobile manipulators and to evaluate the movement of mecanum-wheel-driven robots. Especially the influence of the mecanum wheels on the vibration behavior of the system will be investigated in future work.

References
1. Abu-Dakka, F.J., Nemec, B., Jørgensen, J.A., Savarimuthu, T.R., Krüger, N., Ude, A.: Adaptation of manipulation skills in physical contact with the environment to reference force profiles. Autonomous Robots 39(2), 199–217 (Aug 2015)
2. Ballardini, A.L., Fontana, S., Furlan, A., Sorrenti, D.G.: ira_laser_tools: a ROS LaserScan manipulation toolbox (2014)
3. Gattringer, H., Gerstmayr, J.: Multibody System Dynamics, Robotics and Control.
Springer Publishing Company, Incorporated (2015)
4. Gerstmayr, J.: EXUDYN user documentation. University of Innsbruck (2021)
5. Husty, M.L., Pfurner, M., Schroecker, H.P.: A new and efficient algorithm for the
inverse kinematics of a general serial 6R manipulator. Mech. Mach. Theory 42(1),
66–81 (2007)
6. Kohlbrecher, S., Meyer, J., von Stryk, O., Klingauf, U.: A flexible and scalable slam
system with full 3d motion estimation. In: Proc. IEEE International Symposium
on Safety, Security and Rescue Robotics (SSRR). IEEE (November 2011)
7. Open EtherCAT Society: Simple Open EtherCAT Master (May 2019), accessed: 15-06-2020
8. Pfurner, M., Stigger, T., Husty, M.L.: Algebraic analysis of overconstrained single loop four link mechanisms with revolute and prismatic joints. Mechanism and Machine Theory 114, 11–19 (2017)
9. Pieber, M., Neurauter, R., Gerstmayr, J.: An adaptive robot for building in-plane
programmable structures. In: IEEE/RSJ International Conference on Intelligent
Robots and Systems. pp. 5321–5327 (2018)
10. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 779–788 (2016)
11. Redmon, J., Farhadi, A.: Yolov3: An incremental improvement. CoRR (2018)
12. Saveriano, M., Hirt, F., Lee, D.: Human-aware motion reshaping using dynamical systems. Pattern Recognition Letters 99, 96–104 (2017). Special issue on User Profiling and Behavior Adaptation for Human-Robot Interaction
13. Zech, P., Piater, J.: Proceedings of the 6th Austrian Robotics Workshop (Jul 2018)
A Appendix
Table 3: Overview of currently used software packages.
Software package Responsible developer Information
epos ethercat Peter Manzl Interface from platform to motor controllers
franka controllers Patrick Hofmann Interface to control Franka Emika Panda
franka movement Patrick Hofmann Interface between MoveIT and Franka controller
franka ros Patrick Hofmann Franka Control Interface
ira laser tools Burkhard Moser Point cloud merger to include multiple laser scan data
leobot config Peter Manzl Launch and config files
leobot safety Burkhard Moser Interface between ROS and SICK laser PLC
meassurements Peter Manzl Interface for measurement data
panda moveit config Patrick Hofmann MoveIt! configuration
panda test move Martin Sereinig Interface to MoveIt from Python node
precition placement Patrick Hofmann Precision placement test implementation
qr position Martin Sereinig QR code detection and position using visp auto tracker
robocup msgs Patrick Hofmann Custom messages
sic scan Burkhard Moser Interface to the SICK TIM laser scanners
sim ros interface Patrick Hofmann Coppelia simulation interface for ROS
vicon bridge Martin Sereinig Interface to vicon motion capture system
vicon position Martin Sereinig Controller to move to a position using vicon data
Table 4: Overview of the general system properties.
Attribute Value Unit
Overall system weight 60 kg
Platform length 0.71545 m
Platform width 0.54495 m
Platform height 0.2612 m
Platform max. velocity 1.5 m s⁻¹
Payload platform 5 kg
Manipulator max. reach 0.855 m
Manipulator end effector max. velocity 2.6 m s⁻¹
Manipulator joint max. velocity 150 (joints 1 to 4) / 180 (joints 5 to 7) deg s⁻¹
Payload manipulator 3 kg
Battery Voltage 24 V
Battery Capacity 20 A h
Table 5: Overview of the general hardware costs.
Matter of expense Quantity Unit price (EUR) Total price (EUR)
Gear (GP32C) and motor (Maxon EC-i30) 4 460 1840
Motor controller (EPOS4 50/15 EtherCAT) 4 460 1840
Mecanum-wheels (NM152A) 4 200 800
Mechanical material (screws, nuts, bolts etc.) - - 500
Production (CNC Machine, technicians) - - 1500
Miscellaneous (cables, connectors, relays etc.) - - 500
Franka Emika Panda 1 23350 23350
Sick Laser System (TIM7xx) 2 1500 3000
Sick Laser Safety PLC (Flexi Soft FX3) 1 2000 2000
Power electronics 1 559 559
Internal computer running Ubuntu 18.04 1 1000 1000
Nvidia Jetson Nano, GPU 1 200 200
Battery 2 400 800
3D-Camera Intel RealSense (D415) 1 200 200
3D-Camera Intel RealSense (D435i) 1 200 200
Sum 38289