Wolfgang-OP:
A Robust Humanoid Robot Platform for Research and Competitions
Marc Bestmann¹, Jasper Güldenstein¹, Florian Vahl¹, and Jianwei Zhang¹
Abstract: We present our open humanoid robot platform
Wolfgang. The described hardware focuses on four aspects.
Firstly, the robustness against falls is improved by integrating
3D printed elastic elements. Additionally, a high control loop
frequency is achieved by using new custom control electronics.
Furthermore, a torsion spring is applied to reduce the torque
on the knee joints. Finally, computational power is provided
through the combination of different processors. The paper also
presents the ROS-based software stack that is used in RoboCup.
I. INTRODUCTION
In recent years, many new humanoid platforms have been
developed and released as open source. One major factor for
this was the RoboCup Humanoid League, where many teams
started their participation by using the commercially available, but open, platform Darwin-OP [14]. While this platform
gave an easy starting point, it showed its limitations when the
game complexity increased, i.e. by switching from carpet to
artificial turf and by the use of a real soccer ball. Many teams
started to further develop the platform, especially in terms
of actuator power, height, and computational capacity [10].
From this, new platforms emerged, which again influenced
each other. In our opinion, the possibility to start with a
complete platform and then improve certain aspects of it has
facilitated research in this interdisciplinary area.
In this paper, we present our open platform (OP) Wolfgang
which is a descendant of the Nimbro-OP [32]. Our design
goals were influenced by the fact that our robots are not
only used for research in the lab, but also for participation
in RoboCup Humanoid Soccer. Therefore, one main aspect
is the robustness to falls, which happen often during the
competition. For this, we added compliant elements to the
robot’s shoulders and head, as well as flexible bumpers to
the torso. Another main aspect was the improvement of
the electronics. We significantly improved the update rate
of the communication with servos and sensors compared
to its predecessors. Furthermore, we developed a low-cost
parallel elastic actuator (PEA) to reduce the torque on the
robot knees. As a fourth main aspect, we improved the com-
putational power, especially in regards to neural networks,
by using a combination of different hardware components.
We estimate the material cost (excluding manufacturing and
assembly) of our platform to be 11,000 €.
This research was partially funded by the German Research Foundation
(DFG) and the National Science Foundation of China (NSFC) in project
Crossmodal Learning, TRR-169.
1All the authors are with the Department of Informatics, University
of Hamburg, 22527 Hamburg, Germany [bestmann, 5guelden,
7vahl, zhang]@informatik.uni-hamburg.de
Fig. 1. Picture of the robot (left) and illustration of the kinematic chain
(right). The 20 degrees of freedom are colored blue, except the series elastic
actuators (orange) and the parallel elastic actuators (yellow). Positions of
the force sensors (red), IMUs and camera are included.
Besides those hardware changes, we also briefly describe
our RoboCup software architecture and our implementation
of different motion skills, as well as our vision and behavior
algorithms.
TABLE I
WOLFGANG-OP SPECIFICATIONS

General
  Height           80 cm
  Weight           7.5 kg
  Battery          5200 mAh, 14.8 V, 4-cell LiPo
  Battery life     25 min standing, 10 min walking
  Material         Aluminium, carbon, PLA, NinjaFlex

PC
  Name             NUC8i5BEK        Odroid-XU4      Intel NCS2
  Processor        i5-8259          Exynos 5422     Myriad X
  Memory           8 GB DDR4        2 GB DDR3       -
  Network          GigE             GigE            -
  Wireless         2.4/5 GHz        -               -

Controller
  Microcontroller  Teensy 4.0 + FT4232H
  CPU              600 MHz Cortex-M7
  Control loop     700 Hz - 1000 Hz

Sensors
  Camera           Basler acA2040-35gc
  Camera lens      Computar M0814-MP2, F1.4, f8mm, 2/3"
  IMU              MPU6500
  Foot pressure    Custom, strain gauge based

Actuators
  Name             MX-64            MX-106          XH540
  Count            8                10              2
  Stall torque     7.3 Nm           10 Nm           11.7 Nm
  No-load speed    78 rev/min       55 rev/min      46 rev/min
II. RELATED WORK
Many robot platforms have been developed in recent
years, of which several are described in [12]. Some focus
on optimal performance at a significant cost, while others
focus on more low-cost hardware. A survey of
current robot platforms has been performed in [31]. Commer-
cially available child-sized robots such as the Robotis OP3¹
(as well as its predecessor, the Darwin-OP [14]) and the NAO
robot [13] enable research groups to focus on developing
software rather than hardware. While a considerable amount
of research has been done using these platforms, they are
limited both in computational power and mechanical ability.
Multiple open source child-sized humanoid robots were
developed in recent years. These include the Sigma-
ban+ robot by the four-time RoboCup champion team
Rhoban [11] (with a purchase price of 30,000 €), the prede-
cessor of the platform presented in this paper, the Nimbro-
OP [32], and its mostly 3D printed successor, the igus
platform [2]. While these platforms are relatively low cost,
the iCub [24] is a robot equipped with many more sensors
and degrees of freedom, but at a much higher price of
approximately 250,000 €.
Furthermore, different adult-sized humanoid robots are
commercially available, such as Robotis’ THORMANG3² or
PAL Robotics’ TALOS [34]. The adult-sized open-source
robot Nimbro-OP2X [8] is reproducible at a relatively
low cost of 20,000 € to 40,000 €.
Multiple robots were developed at companies or research
institutions such as Boston Dynamics’ Atlas [23], Honda’s
Asimo [33], and Toyota’s T-HR3³, but these are neither open-
source nor commercially available.
III. HARDWARE
The general kinematic structure of the Wolfgang platform
is similar to the original Nimbro-OP [32]. We made some
changes to the mechanics and completely changed the elec-
tronics of the robot. These are described in the following.
A. Elastic Elements
The RoboCup teams WF Wolves and Taura Bots de-
veloped series elastic actuators (SEAs) using compliant
polyurethane elements [22]. They added these compliant
elements to the robot’s shoulder roll motors. These prevent
damage to the gears when the robot falls on its side. We
adopted these elements for the shoulder roll motors but later
improved them to be 3D printable to ease manufacturing
while maintaining a similar geometry. The used material
is NinjaFlex⁴, a thermoplastic polyurethane with a Shore
hardness of 85A. To reduce cost and complexity, we also
did not include a position sensor to measure the deformation
of the elastic element, since the exact position of the shoulder
roll motor is not crucial in our use case.
1 https://emanual.robotis.com/docs/en/platform/op3/introduction/
2 https://emanual.robotis.com/docs/en/platform/thormang3/introduction/
3 https://global.toyota/en/newsroom/corporate/30609642.html
4 https://ninjatek.com/ninjaflex/
Fig. 2. Exploded view of the compliant element (left) and photo of the
assembly (right). The inner part (yellow) is elastic and made of NinjaFlex.
The outer part (red) is stiff and made of PLA. It is necessary to provide the
same mounting as the motor horn (blue). Standard hex nuts are inserted in
both parts to provide threads for mounting.
Instead of using an
aluminum part to connect it to the robot, we use a printed
part made of PLA (see Figure 2). Since these can both be
produced on a low-cost fused deposition modeling (FDM)
printer and the material is widely available, this facilitates
production. Furthermore, due to the rapid prototyping nature
of 3D printing, multiple versions with different degrees of
compliance can be created in a short time.
The same element was also installed in the robot’s neck to
further reduce the risk of damage when falling, which results
from the rapid deceleration of the considerable mass
of the camera and lens. Since this introduces an uncertainty
in the measurement of the kinematic chain, we installed an
additional IMU in the robot’s head.
Multiple approaches have been proposed to prevent dam-
age to the robot due to falls. Firstly, a rigid structure to
withstand the impact may be used [17]. Secondly, active
or passive compliance mechanisms such as airbags [16] or
elastic elements such as piano wire [11] can be employed.
Furthermore, the robot can assume a pose in which damage
is prevented, as described for our robot in [5]. We decided
on passive compliance since, unlike the airbag approach
presented in [16], no maintenance has to be done between falls and only minimal weight
is added. 3D printed elastic elements at the front and back
of the robot’s torso, also made of NinjaFlex, were designed
and added. In the case of a sagittal fall, these elements will
first come in contact with the ground, thus reducing the
impact force applied on the robot. The elasticity can easily
be controlled by using different amounts of infill material.
Mechanical development is done using the online platform
Onshape. This allows anybody to view the model without
installing additional software. Furthermore, it is possible to
automatically generate the corresponding URDF by using a
tool⁵. This prevents differences between the CAD and the
URDF model. It also enables a faster hardware development
cycle, as no manual changes to the URDF are necessary.
B. Parallel Elastic Actuator
The robot’s kinematic structure forces it to bend its knees
during walking and many motion skills. Thereby, the knee
5https://github.com/Rhoban/onshape-to-robot
Fig. 3. Exploded view of the torsion spring assembly on the knee (left)
and photo of assembly (right). The spring (grey) is fixed centered on the
rotation axis by a 3D printed part (red). A second part (blue) countersinks
the screw heads. The arms of the spring are fixed below and above the
joint (yellow).
is mostly bent, e.g. by around 60° when standing in a
pose from which it can directly start walking. This leads
to high torques acting on the motor, which is
undesirable due to high energy consumption and possible
overload issues. Furthermore, the knee joints experience the
highest peak loads in most commonly performed motions,
thus limiting the performance of the robot. To tackle these
issues, we added a torsion spring on the side of each knee,
which produces a counter torque (see Figure 3), turning
these motors into parallel elastic actuators (PEAs). Similar
approaches to reduce energy consumption by applying dif-
ferent springs have already been investigated in [28], [19],
and [15]. The use of torsion springs was also investigated
for exoskeletons, e.g. to reduce the energy needed during
cycling [6]. Various other spring-based exoskeletons were
developed for industrial applications [18]. While clutching
mechanisms for PEAs exist [25], we decided against them, since
we estimated that the disadvantages of increased weight and
complexity outweigh the benefits of the reduced required
torque during specific motions (e.g. the swing phase of
walking).
In a static case, we can assume that the knee torque
resulting from the mass of the robot (τ_m) follows the
equation of an inverted pendulum:

τ_m = ½ · m · g · l · sin α    (1)
Where m is the mass of the robot above the knees, which
is divided by two since it is distributed over two legs.
Furthermore, g is the gravitational acceleration, l is the distance to
the mass point, and α is the angle of the pendulum. The
torque resulting from the torsion spring (τ_s) can be computed
using Hooke’s law:

τ_s = −k · β    (2)
Where k is the spring constant and β is the bending angle of
the spring, which in our case is the bending angle of the
knee. It would also be possible to mount the spring already
bent at β = 0, resulting in an additional offset in equation 2.
However, this reduces the bending range of the spring and
we need almost 180° of movement.
We can now exploit the fact that the robot’s center of
mass is typically above its feet when it is upright. Otherwise,
it would fall due to the ZMP leaving the support polygon.

Fig. 4. Plot of the theoretical relationship between the knee angle β and the
torque for the presented robot platform with x = 0.13 while standing with
both legs on the ground. All torques are displayed without sign for better
comparison. The torque without a spring, τ_m (blue), is higher than with a
spring, τ_b (red). When the leg is free, an additional torque τ_s (yellow) is
created when using a spring. Still, this leads to a minimization of the
maximum torque.

Fig. 5. Visual explanation of the symbols used in the formulas.
Furthermore, our robot, as most humanoids, has the same
link length for upper and lower legs. For simplicity, we
assume that the CoM is at the hip joint. Typically, it is a few
centimeters higher than that and changes based on arm and
head movement. Still, this allows us to make a rough estimation
of τ_m which is generally lower than the actual torque. See
Figure 5 for a visual explanation.
Based on this, we can assume:

β ≈ 2α    (3)
We can choose k freely as long as we find a spring with
this constant that fits on the robot. We choose it as follows:

k = x · ½ · m · g · l    (4)
Where x is a scaling factor that we will use later. If we now
combine formulas 1 and 2, we get the resulting torque
on the knee when the robot is standing on both legs, τ_b:

τ_b = τ_m + τ_s = ½ · m · g · l · sin α − k · β    (5)
Inserting 3 and 4 into 5 leads to the following equation:

τ_b ≈ ½ · m · g · l · sin(½ β) − x · ½ · m · g · l · β    (6)
Similarly, we can compute the torque τ_o when this leg is
supporting the complete weight of the robot, as well as τ_a
when the leg has no ground contact. We neglected the torque
resulting from the mass of the foot for τ_a.

τ_o ≈ 2 τ_b − τ_a,    τ_a ≈ τ_s    (7)
We can use the factor x to scale the spring and thereby
change the trade-off between the applied torque at different knee
angles for the cases of τ_b, τ_o, and τ_a. There are different
possibilities to choose this factor, based on how much time the
knee spends at certain angles and in which of these cases, as
well as whether the focus is on reducing the mean or the maximal torque.
Fig. 6. Electronic systems in the Wolfgang-OP. The Intel NUC connects
to the QUADDXL on the Wolfgang CORE and the Intel NCS2 using USB
(blue). The Basler camera, ODROID-XU4, and NUC are connected via
Ethernet (red). The Basler camera is powered using Power over Ethernet
induced on the Wolfgang CORE. Sensors (IMUs, foot pressure, and voltage
and current sensing) and actuators are connected to the QUADDXL on four
separate RS485 buses (yellow) and communicate through it to the NUC.
The Wolfgang CORE also handles the selection of either a battery or power
supply unit. It furthermore handles the voltage regulation for the ODROID-
XU4 and Ethernet switch and the control of the power supply to the servos
and sensors.
For this robot, the goal was to reduce the mean torque to improve
energy consumption, but also to reduce the maximal τ_b, since the knee
joints are close to their limit during stand-up motions (see
also Figure 12), leading to issues with older servos or when
the battery is running low. At the same time, we need to
ensure max τ_a ≤ max τ_b, since otherwise we would create
too high a torque when pulling the legs towards the body while
lying on the back during stand-up. When we assume that the
knee angle is always between 0 and π, we obtain x ≈ 0.15.
From this we can compute the corresponding spring
parameter for our robot:

k = 0.15 · ½ · 6.0 kg · 9.81 m/s² · 0.17 m = 0.75 Nm    (8)
Due to size and availability, we chose a spring with k = 0.76 Nm.
This leads to a theoretical reduction of 37% for the maximal
torque τ_b. A plot of the theoretical torques is shown in Figure
4 and the results are presented in Section V.
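To make the spring sizing reproducible, the following minimal Python sketch evaluates the torque curves from equations (1) to (7) and the spring constant from equation (8) for the values stated above. It is only an illustration of the derivation, not the tooling used for the actual design, and the exact reduction of the maximal torque depends on the chosen x and the considered knee angle range.

```python
import numpy as np

# Values taken from the text above: mass above the knees m, gravity g,
# distance to the assumed CoM l, and the scaling factor x.
m, g, l, x = 6.0, 9.81, 0.17, 0.15

k = x * 0.5 * m * g * l                    # equation (4), approx. 0.75 Nm
print(f"spring constant k = {k:.2f} Nm")

beta = np.linspace(0.0, np.pi, 500)        # knee bending angle in rad
alpha = 0.5 * beta                         # equation (3)

tau_m = 0.5 * m * g * l * np.sin(alpha)    # gravity torque, equation (1)
tau_s = -k * beta                          # spring counter torque, equation (2)
tau_b = tau_m + tau_s                      # both legs on the ground, equation (5)
tau_a = tau_s                              # leg without ground contact, equation (7)
tau_o = 2.0 * tau_b - tau_a                # single support, equation (7)

# Rough check of the torque reduction while standing on both legs;
# the exact percentage depends on x and the knee range considered.
reduction = 1.0 - np.max(np.abs(tau_b)) / np.max(np.abs(tau_m))
print(f"reduction of the maximal torque tau_b: {reduction:.0%}")
```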
C. Electronics
To efficiently use a robot, a good interface to its hardware
is needed. For this, we developed the CORE board (see
Figure 6 and 7), which builds upon our previous work on
high-frequency hardware control [4]. It provides four RS-485
buses with up to 12 MBaud. Since the used Dynamixel
motors⁶ are only able to communicate at 4 MBaud [4], we use
this baud rate on all buses. Additionally, the CORE board
handles the power management of the robot. A battery and
an external power supply unit (PSU) can be connected at
the same time which allows battery swapping without loss
of power. Uncontrolled charging of the battery by the PSU
or reverse current into the PSU is prevented by a double
anode single cathode diode. The input voltage is regulated
to 5V and 9V for the ODROID-XU4 and network switch
respectively. To power the camera, a Power over Ethernet
6https://emanual.robotis.com/docs/en/dxl/
Fig. 7. Overview of the components and functionalities on the Wolfgang
CORE board, the main electronics board of the Wolfgang platform.
(PoE) power sourcing equipment is implemented on the
board.
Both a manual switch and the installed Teensy 4.0 microcontroller
can be used to enable and disable power to the
devices on the Dynamixel bus, i.e. actuators and sensors. The
microcontroller is also used to measure the input voltages
of the power supplies and the cell voltages of the battery
as well as the current consumed by the whole robot using a
hall effect based sensor. It is connected to one of the RS-485
buses like the other sensors and actuators in the robot.
To be able to place the IMUs at arbitrary positions in the
robot (i.e. the torso and the head), we developed an IMU
module that connects to the Dynamixel bus. This reduces
the need for running additional cables through the robot.
By using an IMU in the robot’s head, we hope to improve
the reprojection of the camera image to real-world
coordinates through a more accurate pitch and roll
angle. The used sensor is an MPU6500. As a microcontroller,
we chose the ESP32 since it features two cores. One of these
cores is dedicated to communication on the DXL bus, while
the other reads the sensor and runs a complementary filter
based on [35]. Additionally, we added buttons and RGB
LEDs to the IMU module in the torso for interaction and
status indication.
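The orientation filter on the IMU module follows [35] and runs as firmware on the ESP32; purely as an illustration of the underlying idea, the sketch below shows a much simpler scalar complementary filter for a single axis. The function name, gain, and axis convention are hypothetical and not taken from our firmware.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, gain=0.98):
    """One update step of a simple scalar complementary filter (illustration only).

    pitch:      previous pitch estimate in rad
    gyro_rate:  angular velocity around the pitch axis in rad/s
    accel_x/z:  accelerometer readings in m/s^2
    dt:         time since the last update in s
    gain:       trust in the integrated gyroscope vs. the accelerometer
    """
    # Integrate the gyroscope for a short-term, drift-prone estimate.
    pitch_gyro = pitch + gyro_rate * dt
    # Use the gravity direction from the accelerometer as a long-term reference.
    pitch_accel = math.atan2(accel_x, accel_z)
    # Blend both estimates.
    return gain * pitch_gyro + (1.0 - gain) * pitch_accel

# Example update with a small forward rotation and an almost upright torso.
pitch = complementary_filter(pitch=0.0, gyro_rate=0.1, accel_x=0.5, accel_z=9.8, dt=1 / 700)
```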
In [4] we introduced our foot pressure sensors. While
keeping the analog electronics the same, we have changed
the microcontroller to an ESP32 to share code with the IMU
module and avoid errors due to conflicts between the reading
of the sensors and bus communication.
When the robot is turned off suddenly while standing, its
joints will rotate while falling. This can induce voltages in
the motor controller chips which are higher than the ones
they are rated for, thus leading to damage [1]. To prevent
this, we added a transient-voltage-suppression (TVS) diode
to all MX servos.
Fig. 8. Simplified overview of the ROS-based software architecture. The
vision and high-level behavior (blue) are specific to RoboCup competitions.
Still, the motion part (red), including fall handling and walking, can
easily be used for other purposes due to the decoupling by the Humanoid
Control Module (HCM).

A single Basler acA2040-35gc camera is used on the
Wolfgang-OP platform. It features a global shutter and,
compared to the commonly used webcams, a higher dynamic
range and lower noise due to a larger photosensor. The
global shutter ensures that all pixel measurements are done
at the same time reducing errors due to wrong timestamps
for parts of the image, especially if the robot moves dur-
ing the exposure. The camera uses Gigabit Ethernet for
communication and power (PoE). Data is transferred via
the GigE Vision interface standard. Images are captured at
2048 x 1536 px but they are binned 4 times to reduce the
noise and dimensionality for our vision system. While the
camera supports up to 36 FPS at full resolution, we limit it
to the framerate of the vision pipeline of currently 16 FPS.
D. Computer
The robot’s software mainly runs on an Intel
NUC 8i5BEK. It is used to execute real-time critical
and resource-intensive software, e.g. the robot’s hardware
control, motion, and vision. An Odroid-XU4 single board
computer is used for additional tasks that are less real-time critical,
for example the localization or the high-level behavior.
The communication between these computers is done
via Ethernet. Other devices, e.g. the camera or external
computers for maintenance and configuration, are also
connected to this network.
For neural network inference, an Intel Neural Compute
Stick 2 is used. It is based on the Intel Movidius Myriad X
Vision Processing Unit. This hardware accelerator is used
to reduce the CPU load and power consumption while
maintaining an equivalent processing speed compared to the
CPU inference. The device is connected via USB 3.0 to
the NUC and uses about 2 W [20]. With 35 g, it also
provides a significant weight reduction compared to the 102 g of the
Jetson TX2, which we used previously for this task.
IV. SOFTWARE
Our software is based on ROS [27]. The main advantage is
the large library of existing software packages and tools. To
Fig. 9. Screenshots from different available simulation environments. From
left to right: Webots, Pybullet, Gazebo.
take full advantage of this, we use the standard messages
and packages as far as possible. This also facilitates the
use of parts of our software by others. An overview of
the architecture can be seen in Figure 8. For development
and testing, the Wolfgang-OP is available for three different
simulators (see Figure 9).
One distinctive feature of our architecture is the Humanoid
Control Module (HCM) [5] that adds an additional abstrac-
tion layer between the hardware and high-level behavior. It
allows the usage of software originally meant for wheeled
robots, i.e. move_base [21], and creates a loose coupling of
the different motion skills. Furthermore, it takes control of
the robot in case of falls and makes sure that it lands on
one of its elastic elements to prevent damage. Afterward,
the HCM will automatically perform a stand-up motion to
bring the robot into a standing pose. This encapsulation of
low-level, reflex-like behavior allows the high-level behavior
to concentrate on high-level goals. The fall detection uses
a threshold-based classifier that relies on the torso’s angular
velocity and its orientation in fused angles [3], based on IMU
data. It predicts the direction of the fall to allow the robot
to go into the correct safe pose. Fall countermeasures are only
invoked if the classifier predicts the same result for over 10 ms,
to prevent false positives from sensor noise.
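As an illustration of such a debounced, threshold-based fall classifier, the sketch below checks fused angles and angular velocities against fixed limits and only reports a fall if the same direction is predicted for a whole window of control cycles. All thresholds, names, and the window length are hypothetical placeholders, not the tuned values used on the robot.

```python
from collections import deque

# Hypothetical limits in rad and rad/s; the real values are tuned on recorded data.
ROLL_LIMIT, PITCH_LIMIT, GYRO_LIMIT = 0.5, 0.5, 4.0
DEBOUNCE_STEPS = 7   # roughly 10 ms at a 700 Hz control loop

history = deque(maxlen=DEBOUNCE_STEPS)

def classify(fused_roll, fused_pitch, gyro_roll, gyro_pitch):
    """Return the predicted fall direction or None, based on simple thresholds."""
    if fused_pitch > PITCH_LIMIT or gyro_pitch > GYRO_LIMIT:
        return "front"
    if fused_pitch < -PITCH_LIMIT or gyro_pitch < -GYRO_LIMIT:
        return "back"
    if fused_roll > ROLL_LIMIT or gyro_roll > GYRO_LIMIT:
        return "left"
    if fused_roll < -ROLL_LIMIT or gyro_roll < -GYRO_LIMIT:
        return "right"
    return None

def update(fused_roll, fused_pitch, gyro_roll, gyro_pitch):
    """Trigger a reaction only if the same direction is predicted over the whole window."""
    history.append(classify(fused_roll, fused_pitch, gyro_roll, gyro_pitch))
    if len(history) == DEBOUNCE_STEPS and history[0] is not None and len(set(history)) == 1:
        return history[0]   # consistent prediction: command the safe pose for this direction
    return None
```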
The high-level behavior is split into two parts. The head
behavior controls the facing direction of the camera to gather
information about the environment. It can get different goals,
e.g. find the ball or gather localization information, and
uses the current world model to perform corresponding head
movements. The body behavior is the highest instance of
decision making. It decides where the robot should go, if it
should perform a motion skill, and tells the head behavior
what it should look for. This modularity leads to less complex
code and enables the reuse of parts, e.g. the robot can be
teleoperated by directly controlling the HCM.
All behavior modules are implemented using the Dynamic
Stack Decider (DSD), a lightweight open-source control
architecture [26]. It allows defining complex behaviors and
was also used for different service robot scenarios outside of
RoboCup.
A. Sensing
Most of our vision pipeline is already described in a previ-
ous work [9]. It can detect balls, lines, goalposts, obstacles,
Fig. 10. Vision debug output showing the detected robots divided by their
team colors (red/blue box), balls (green circle), goalposts (white box), and
the convex field boundary (yellow line).
and the field boundary as seen in Figure 10. We recently
changed our neural network object detection to a YOLO V3
tiny architecture which we use to detect balls and goalposts.
YOLO V3 tiny is a simplified version of the established
YOLO V3 [29] architecture. The neural network inference
is done on the Intel Neural Compute Stick 2 which we
described in Section III-D. Based on the camera’s orientation,
the object detections and segmentations are reprojected from
the image space onto the field plane for further processing in
our behavior, path-planning, and localization. Our localiza-
tion uses a particle filter to estimate the robot’s position using
the line, goalpost, and odometry inputs. Additionally, we
can use our visual compass for an estimation of the robot’s
orientation relative to the background which is necessary to
solve the symmetry problem of a soccer field.
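The reprojection of a detection onto the field plane can be thought of as intersecting the camera ray through the detected pixel with the ground plane. The following sketch shows this idea for a pinhole camera with a known pose; it is a simplified illustration and not our actual implementation, which obtains the camera pose from the kinematic chain and the IMU.

```python
import numpy as np

def pixel_to_field(u, v, K, R_cam_to_world, t_cam_in_world, ground_z=0.0):
    """Project an image detection (u, v) onto the horizontal field plane z = ground_z.

    K:               3x3 camera intrinsics (pinhole model)
    R_cam_to_world:  3x3 rotation of the camera in world coordinates
    t_cam_in_world:  3-vector camera position in world coordinates
    Returns the 3D point on the field plane, or None if no valid intersection exists.
    """
    # Direction of the viewing ray in camera coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into world coordinates.
    ray_world = R_cam_to_world @ ray_cam
    if abs(ray_world[2]) < 1e-9:
        return None  # ray is parallel to the ground plane
    # Scale the ray so that it reaches the ground plane.
    s = (ground_z - t_cam_in_world[2]) / ray_world[2]
    if s < 0:
        return None  # intersection would lie behind the camera / above the horizon
    return t_cam_in_world + s * ray_world
```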
B. Motion
The three main motion skills that the robot has to perform
(walk, kick, stand up) are all implemented by parameter-
ized Cartesian quintic splines with additional stabilization
controllers. Our previous approaches were mostly based on
interpolation of keyframes in joint space that were recorded
on the robot. The new approach has multiple advantages.
The manual fine-tuning is more straightforward, as parameters
relate to a Cartesian unit rather than a joint angle.
Furthermore, the number of parameters is lower, making
optimization easier. It is also possible to add a goal, e.g.
the position of the ball, to adapt a motion to the current
situation. For the stand-up motion, interpolation in Cartesian
space is necessary to ensure a straight vertical trajectory of
the upper body.
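A quintic polynomial segment is fully determined by the position, velocity, and acceleration at its start and end, which is what makes it convenient for connecting Cartesian key poses smoothly. The sketch below solves for one such segment; it is an illustration of the principle only, as our implementation relies on an existing spline library.

```python
import numpy as np

def quintic_segment(t0, t1, x0, x1, v0=0.0, v1=0.0, a0=0.0, a1=0.0):
    """Return the 6 coefficients of a quintic polynomial x(t) on [t0, t1]
    that matches position, velocity, and acceleration at both ends."""
    A, b = [], []
    for t, (pos, vel, acc) in ((t0, (x0, v0, a0)), (t1, (x1, v1, a1))):
        A.append([1, t, t**2, t**3, t**4, t**5])          # position constraint
        A.append([0, 1, 2*t, 3*t**2, 4*t**3, 5*t**4])     # velocity constraint
        A.append([0, 0, 2, 6*t, 12*t**2, 20*t**3])        # acceleration constraint
        b.extend([pos, vel, acc])
    return np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))

# Example: lift the foot 4 cm within 0.2 s, starting and ending at rest.
c = quintic_segment(0.0, 0.2, 0.0, 0.04)
t = 0.1
height = sum(ci * t**i for i, ci in enumerate(c))  # foot height at mid-swing
```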
The stabilization of the motion skills is improved by
adding a PID controller based on the fused angles [3] of
the torso, which are measured by the IMU sensor. The control
signal is also applied in Cartesian space. Finally, inverse
kinematics (IK) is needed to compute the goal angles of the
joints. We use the MoveIt [7] IK interface, which allows us to
choose between different IK implementations. We currently
apply BioIK [30] to all tasks since it is generic, fast, and
allows us to solve for different types of goals, e.g. the look-at
goal in our head behavior.
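The following sketch illustrates how such a stabilization term can look: a PID controller on the fused pitch error whose output is added to the commanded Cartesian pose before the IK is solved. The gains and the exact way the correction is applied are hypothetical and only meant to show the structure, not our tuned controller.

```python
class PidStabilizer:
    """Simple PID controller on the fused pitch angle (illustrative gains)."""

    def __init__(self, kp=0.6, ki=0.0, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, fused_pitch, desired_pitch, dt):
        error = desired_pitch - fused_pitch
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The returned correction could, for example, pitch the Cartesian foot goal
# relative to the torso before the inverse kinematics is computed.
stabilizer = PidStabilizer()
pitch_correction = stabilizer.update(fused_pitch=0.05, desired_pitch=0.0, dt=1 / 700)
```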
V. EVALUATION
Previous versions of this platform were successfully used
in multiple RoboCup tournaments. One technical challenge
in RoboCup was the push recovery, where the robot has to
withstand an impact while walking. In this challenge, our
platform scored first place in the teen-size category in 2019,
proving its stability.
A. Robustness
The usage of good fall detection together with the bumpers
and compliant elements significantly reduced the number of
gears that needed replacement. Even after numerous hours
of use, the 3D printed elements showed no damage while
some of the previous polyurethane-based ones broke.
Without the compliant element in the neck joint, we fre-
quently encountered failure in the 3D printed neck connector.
We tried using a metal connector, but this led to damage
to the plastic servo casing. After integrating the compliant
element in the neck joint, no further failures occurred.
To test our fall detection, the IMU data was recorded
while the robot was falling, standing, and walking. Manual
perturbations were performed to produce edge cases where
the robot is almost falling. The training set consists of 13,944
frames where the robot is not falling and 7,961 where it is
falling in one of the four different directions. For evaluation,
a set of 9,436 frames where the robot is not falling is used
to compute the number of false positives. Furthermore, the
impact times for five falls while standing and walking for
each direction (together 40) were labeled to compute the
lead time. The classifier predicts a fall with a minimal lead
time of 295 ms and a mean lead time of 596 ms without false
positives. It is possible to achieve higher minimal lead times
if false positives are accepted. Since the maximal observed
time between detection of a fall and the robot reaching a safe
pose is 231 ms, the resulting minimal lead time is sufficient.
B. Control Loop
Our custom electronics can read and write all servos and
sensors at 750 Hz if only the positions of the servos are
read. When reading positions, velocities, and efforts of the
servos, the rate is 715 Hz. This rate is limited by the fact that
the devices cannot be evenly distributed across the four buses
due to the kinematic structure of the robot. The slowest buses
are the ones on the legs, which need to control six servos
and a foot sensor each. If no foot sensors are used, a control
frequency of 1,000 Hz is possible. For more details on this
and a comparison with other approaches, see our previous paper
[4].
C. Torque Reduction
To evaluate the PEA performances, the joint feedback
was recorded during the performance of different typical
motions with and without springs. The servos are only
able to estimate the torque based on the sensed current.
Therefore, the recorded torques are not precise. Still, since
the experiments were performed on the same robot, we
assume the values are comparable to each other. Naturally,
TABLE II
COMPARISON OF KNEE TORQUES

Motion                         |      | No PEA [Nm] | PEA [Nm] | PEA / no PEA
Standing in a walk ready pose  | mean | 0.48        | 0.31     | 0.66
                               | max  | 0.81        | 0.80     | 0.99
Walking                        | mean | 2.47        | 1.91     | 0.78
                               | max  | 9.4         | 7.29     | 0.78
Stand up from the front        | mean | 1.85        | 1.61     | 0.87
                               | max  | 7.58        | 6.60     | 0.87
Stand up from the back         | mean | 1.90        | 1.49     | 0.78
                               | max  | 7.03        | 6.80     | 0.97
Standing to squatting          | mean | 1.29        | 0.48     | 0.37
                               | max  | 2.85        | 1.17     | 0.41
Squatting to standing          | mean | 2.75        | 1.74     | 0.63
                               | max  | 7.08        | 3.75     | 0.53
Fig. 11. Exemplary plot of the torque of a knee servo during walking, with (yellow)
and without (blue) torsion spring. The double support phases are marked in
grey. The PEA reduces the load in the first half where the servo is supporting
the robot’s weight. In the second half, additional torque is generated since
the leg is not under load.
the torques are also dependent on the specific parameters, e.g.
how high the foot is lifted from the ground during a step.
All experiments were performed with the same parameters
that are used in our competitions. The results are displayed
in Table II. The mean torque is evaluated to show how the
energy consumption is improved and the maximal torque to
show how much the servos are stressed. It is visible that
using the PEA reduces the mean torque in all motions.
Even during walking, where the PEA on the swing knee
creates additional torque, a significant overall reduction can
be measured (see Figure 11). A similar pattern can be seen
when standing up from the back (see Figure 12). Here, both
feet need to be pulled towards the torso without ground
contact, therefore additional torque is created by the PEAs.
Still, the overall mean torque remains lower as the energy
saved during later phases of the motion compensates for
this. The maximal torque values show almost no change
in some motions. This value is influenced by the fact that
the torque is often not equally distributed between both
legs when the center of mass is shifted slightly to one
side. Furthermore, this value is more strongly influenced by the
noisiness of the measurements.
D. Computational Performance
The current accuracy of the vision pipeline is shown in
Table III. The metrics and evaluation data set are identical
Fig. 12. Exemplary plot of a knee servo during a stand up motion from
lying on the back, with (yellow) and without (blue) torsion spring. In the
first phase, the feet are pulled towards the torso, requiring a higher torque
when using the PEAs. Afterward, the robot tilts onto its foot soles. In the
third phase, the robot briefly stays in a squatting pose to stabilize. Finally,
the robot gets up to a standing pose. In the last three phases, the robot’s
weight rests on the feet where the PEA solution requires lower motor torque.
TABLE III
EVALUATION OF THE VISION PIPELINE (MEAN JACCARD-INDEX)

Version              | Ball  | Field boundary | Line  | Goalpost
Current (1.3.3)      | 0.794 | 0.922          | 0.058 | 0.658
Previous (1.1.0) [9] | 0.677 | 0.925          | 0.021 | 0.183
to the ones proposed in [9]. The current ball and goalpost
detection performed significantly better than the approach
described in [9] due to the new YOLO v3 tiny. The field
boundary score decreased slightly overall due to minor
modifications which were necessary to prevent issues when
a second soccer field is visible in the image, a case that
often happens during RoboCup tournaments. These changes
also improved the execution time slightly. The vision runs at
20.7 FPS when using all cores of the Intel NUC. When using
a single core for the vision to reserve performance for other
tasks, the frame rate drops to 11.2 FPS. Running the neural
networks on the NCS2 Vision Processing Unit increases the
frame rate to 15.67 FPS while still using one CPU core. The remaining
cores of the NUC are sufficient to run the motion part of the
software stack at 700 Hz.
VI. CONCLUSIONS
The Wolfgang-OP was designed to increase robustness as
well as performance while still keeping the robot low-cost.
An early and reliable fall detection together with the 3D
printed compliant elements ensure high robustness against
falls. Reacting fast to falls requires a high control loop rate
which is achieved by our custom electronics. Furthermore,
the load on the robot’s knees is reduced by integrating a
torsion spring. Our paper provides the formula to choose
the correct stiffness for humanoid robot knee joints. Additionally,
we investigated how the computational performance is
improved by integrating a dedicated vision processing unit (VPU).
In the future, the robot could be improved by adding a
second rotary encoder after the compliant elements. This
would enable us to read the correct joint position and es-
timate the torque by measuring the rotation of the compliant
part. Software-wise, switching to ROS 2 would reduce the
overhead of message passing.
The robot’s hardware, software, and simulation
environments are open source and available at:
https://github.com/bit-bots/wolfgang_robot
ACKNOWLEDGMENT
Thanks to the members of the Hamburg Bit-Bots for
helping to develop and test this platform.
REFERENCES
[1] NUbots Hardware Overview and Specifications. https://nubook.nubots.net/system/hardware/overview (visited on Jan. 21, 2021).
[2] P. Allgeuer, H. Farazi, M. Schreiber, and S. Behnke. Child-sized 3D
Printed Igus Humanoid Open Platform. In International Conference
on Humanoid Robots (Humanoids). IEEE, 2015.
[3] Philipp Allgeuer and Sven Behnke. Fused Angles: A Representation
of Body Orientation for Balance. In International Conference on
Intelligent Robots and Systems (IROS). IEEE, 2015.
[4] Marc Bestmann, Jasper Güldenstein, and Jianwei Zhang. High-Frequency Multi Bus Servo and Sensor Communication Using the Dynamixel Protocol. In Robot World Cup. Springer, 2019.
[5] Marc Bestmann and Jianwei Zhang. Humanoid Control Module: An
Abstraction Layer for Humanoid Robots. In International Conference
on Autonomous Robot Systems and Competitions (ICARSC), 2020.
[6] Ronnapee Chaichaowarat, Diego Felipe Paez Granados, Jun Kin-
ugawa, and Kazuhiro Kosuge. Passive Knee Exoskeleton Using
Torsion Spring for Cycling Assistance. In International Conference
on Intelligent Robots and Systems (IROS). IEEE, 2017.
[7] Sachin Chitta, Ioan Sucan, and Steve Cousins. MoveIt! IEEE Robotics & Automation Magazine, 19(1), 2012.
[8] G. Ficht, H. Farazi, A. Brandenburger, D. Rodriguez, D. Pavlichenko,
P. Allgeuer, M. Hosseini, and S. Behnke. NimbRo-OP2X: Adult-
Sized Open-Source 3D Printed Humanoid Robot. In International
Conference on Humanoid Robots (Humanoids), 2018.
[9] Niklas Fiedler, Hendrik Brandt, Jan Gutsche, Florian Vahl, Jonas
Hagge, and Marc Bestmann. An Open Source Vision Pipeline
Approach for Robocup Humanoid Soccer. In Robot World Cup.
Springer, 2019.
[10] Reinhard Gerndt, Daniel Seifert, Jacky Hansjoerg Baltes, Soroush
Sadeghnejad, and Sven Behnke. Humanoid Robots in Soccer: Robots
Versus Humans in RoboCup 2050. IEEE Robotics & Automation
Magazine, 22(3), 2015.
[11] Loïc Gondry, Ludovic Hofer, Patxi Laborde-Zubieta, Olivier Ly, Lucie Mathé, Grégoire Passault, Antoine Pirrone, and Antun Skuric. Rhoban Football Club: RoboCup Humanoid KidSize 2019 Champion Team Paper. In Robot World Cup. Springer, 2019.
[12] Ambarish Goswami and Prahlad Vadakkepat. Humanoid Robotics: A
Reference. Springer, 2019.
[13] D. Gouaillier, V. Hugel, P. Blazevic, C. Kilner, J. Monceaux, P. Lafour-
cade, B. Marnier, J. Serre, and B. Maisonnier. Mechatronic Design
of NAO Humanoid. In International Conference on Robotics and
Automation. IEEE, 2009.
[14] Inyong Ha, Yusuke Tamura, and Hajime Asama. Development of Open
Platform Humanoid Robot DARwIn-OP. Advanced Robotics, 2013.
[15] Clinton G Hobart, Anirban Mazumdar, Steven J Spencer, Morgan
Quigley, Jesper P Smith, Sylvain Bertrand, Jerry Pratt, Michael Kuehl,
and Stephen P Buerger. Achieving versatile energy efficiency with the
wanderer biped robot. IEEE Transactions on Robotics, 36(3), 2020.
[16] S. Kajita, R. Cisneros, M. Benallegue, T. Sakaguchi, S. Nakaoka,
M. Morisawa, K. Kaneko, and F. Kanehiro. Impact Acceleration of
Falling Humanoid Robot with an Airbag. In International Conference
on Humanoid Robots (Humanoids). IEEE, 2016.
[17] Y. Kakiuchi, M. Kamon, N. Shimomura, S. Yukizaki, N. Takasugi,
S. Nozawa, K. Okada, and M. Inaba. Development of Life-Sized
Humanoid Robot Platform with Robustness for Falling down, Long
Time Working and Error Occurrence. In International Conference on
Intelligent Robots and Systems (IROS), 2017.
[18] Hyung Joo Kim, Jaeho Noh, and Woosung Yang. Knee-Assistive
Robotic Exoskeleton (KARE-1) Using a Conditionally Singular Mech-
anism for Industrial Field Applications. Applied Sciences, 10(15),
2020.
[19] Matthieu Lapeyre, Pierre Rouanet, and Pierre-Yves Oudeyer. The
Poppy Humanoid Robot: Leg Design for Biped Locomotion. In
International Conference on Intelligent Robots and Systems. IEEE,
2013.
[20] Leandro Ariel Libutti, Francisco D Igual, Luis Pinuel, Laura De Giusti,
and Marcelo Naiouf. Benchmarking Performance and Power of
USB Accelerators for Inference with MLPerf. In 2nd Workshop on
Accelerated Machine Learning (AccML), Valencia, Spain, 2020.
[21] E. Marder-Eppstein, E. Berger, T. Foote, B. Gerkey, and K. Kono-
lige. The Office Marathon: Robust navigation in an indoor office
environment. In 2010 IEEE International Conference on Robotics
and Automation, pages 300–307, May 2010.
[22] Leandro Tomé Martins, Christopher A. Arend Tatsch, Eduardo Henrique Maciel, Reinhard Gerndt, and Rodrigo da Silva Guerra. A Polyurethane-Based Compliant Element for Upgrading Conventional Servos into Series Elastic Actuators. IFAC-PapersOnLine, 48(19), 2015.
[23] Gabe Nelson, Aaron Saunders, and Robert Playter. The PETMAN
and Atlas Robots at Boston Dynamics. In Humanoid Robotics: A
Reference. Springer, 2019.
[24] Alberto Parmiggiani, Marco Maggiali, Lorenzo Natale, Francesco Nori, Alexander Schmitz, Nikos Tsagarakis, José Santos-Victor, Francesco Becchi, Giulio Sandini, and Giorgio Metta. The Design of the iCub Humanoid Robot. International Journal of Humanoid Robotics, 09(04), 2012.
[25] M. Plooij, M. van Nunspeet, M. Wisse, and H. Vallery. Design
and evaluation of the Bi-directional Clutched Parallel Elastic Actuator
(BIC-PEA). In International Conference on Robotics and Automation
(ICRA). IEEE, 2015.
[26] Martin Poppinga and Marc Bestmann. DSD - Dynamic Stack Decider: A Lightweight Decision Making Framework for Robots and Software Agents. In peer-review process, 2021. doi:10.13140/RG.2.2.14585.01129.
[27] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote,
Jeremy Leibs, Rob Wheeler, and Andrew Y Ng. ROS: an Open-Source
Robot Operating System. In ICRA workshop on open source software,
volume 3. Kobe, Japan, 2009.
[28] Katayon Radkhah, Christophe Maufroy, Moritz Maus, Dorian Scholz,
Andre Seyfarth, and Oskar Von Stryk. Concept and Design of the
Biobiped1 Robot for Human-Like Walking and Running. International
Journal of Humanoid Robotics, 8(03), 2011.
[29] Joseph Redmon and Ali Farhadi. Yolov3: An incremental improve-
ment. arXiv preprint arXiv:1804.02767, 2018.
[30] Philipp Ruppel, Norman Hendrich, Sebastian Starke, and Jianwei
Zhang. Cost Functions to Specify Full-Body Motion and Multi-Goal
Manipulation Tasks. In International Conference on Robotics and
Automation (ICRA). IEEE, 2018.
[31] Saeed Saeedvand, Masoumeh Jafari, Hadi S. Aghdasi, and Jacky Baltes. A Comprehensive Survey on Humanoid Robot Development. The Knowledge Engineering Review, 34, 2019.
[32] Max Schwarz, Michael Schreiber, Sebastian Schueller, Marcell Missura, and Sven Behnke. NimbRo-OP Humanoid TeenSize Open Platform. In Proceedings of the 7th Workshop on Humanoid Soccer Robots, International Conference on Humanoid Robots, Osaka, 2012.
[33] Satoshi Shigemi. ASIMO and Humanoid Robot Research at Honda.
In Humanoid Robotics: A Reference. Springer, 2019.
[34] O. Stasse, T. Flayols, R. Budhiraja, K. Giraud-Esclasse, J. Carpentier, J. Mirabel, A. Del Prete, P. Souères, N. Mansard, F. Lamiraux, J. Laumond, L. Marchionni, H. Tome, and F. Ferro. TALOS: A new humanoid research platform targeted for industrial applications. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), 2017.
[35] Roberto G. Valenti, Ivan Dryanovski, and Jizhong Xiao. Keeping a
Good Attitude: A Quaternion-Based Orientation Filter for IMUs and
MARGs. Sensors, 15(8), 2015.