Compliant Humanoids Moving Toward Rehabilitation Applications: Transparent Integration of Real-Time Control, Whole-Body Motion Generation, and Virtual Reality

By Pouya Mohammadi, Enrico Mingo Hoffman, Niels Dehio, Milad S. Malekzadeh, Martin Giese, Nikos G. Tsagarakis, and Jochen J. Steil

Digital Object Identifier 10.1109/MRA.2019.2940970
Date of current version: 21 October 2019
Humanoids fascinate us through their anthropomorphic appearance, high dexterity, and potential to work in human-tailored environments and interact with humans in novel applications. In our research, we promote two real-world applications in physiotherapeutic juggling and assisted walking using two compliant humanoids (COMANs), COMAN and COMAN+. We focus on rehabilitation, which, as a result of changing demographics, is becoming an increasingly crucial application field. However, as with most humanoid experiments, the realization of these scenarios is challenging because the hardware is brittle, the software is complex, and control remains highly demanding. In this article, we describe an integrative and transparent control architecture that alleviates this complexity by strictly adhering in design and implementation to a component-based approach. It promotes flexibility and reusability and allows transparent switching among different humanoid robots, between simulation and the real world, and among control paradigms. It also orchestrates the integration of real-time (RT) and non-RT (NRT) components, including a virtual reality (VR) framework, toward rich user interaction.
The Humanoid Robotics Decathlon
Humanoid robotics is a fascinating field that has captured the imaginations of researchers, artists, and the general public. Endowing humanoids with the ability to execute tasks in the real world, however, still poses enormous challenges. Indeed, it represents the decathlon of robotics: it requires mastery of diverse fields and the integration of expertise from different disciplines to achieve a generic humanoid robot assistant. Conversely, simpler platforms can typically perform specific tasks more reliably, just as a specialized athlete performs better in a single discipline than a decathlete.

In practice, most humanoid robots are still prototypic research platforms that are rather fragile, expensive, and far from ready for everyday use. Actuator technologies are still developing; control paradigms, including position and torque control, differ; dynamic walking must be further addressed; and the robots' high dexterity is based on a high degree of redundancy, which, in turn, requires that task hierarchies be defined to make use of and resolve this redundancy [1]. These challenges come in addition to those of integration with perception and state estimation, user interaction, planning, and so on.
this[1]. This comes in addition to challenges in integration
with perception and state estimation, user interaction,
planning, and so on.
Consequently, most control systems are highly tailored to particular hardware and specific tasks. Algorithms can hardly ever be benchmarked because of the difficulty of switching between hardware platforms. System integration is often particularly difficult, as it tries to hit a moving target: the integration of a number of continuously and rapidly changing technologies, including actuation and sensor hardware, RT bus systems and protocols, component models, robotics control libraries, advanced control algorithms, and diverse operating systems, to name only a few. This calls for smart engineering and intermediate steps toward more robustness, repeatability, and flexibility.
In our research, we approach two real-world applications, the problems of physiotherapeutic juggling and assisted walking, using the compliant humanoid robots COMAN and COMAN+ (see Figure 1 and Table 1). This work is strongly motivated by the demographic changes that many countries are facing. By 2020, for instance, a quarter of the European Union's population will be older than 60 years [2], leading to increased demand for physical training, rehabilitation, and assistance.
The contribution of this article is to promote a systematic, model-based approach to control architecture design that mediates some of the described complexities of such applications. Our design follows the principles of modularity and separation of concerns from software engineering to integrate functionality in an RT-safe environment, with the aim of providing a blueprint for a reusable, hardware-independent system. It features transparent switching between robot and simulation and between different hardware or control paradigms, as well as the assimilation of NRT components, such as VR. It thereby supports flexible but systematic application development while accommodating diverse and changing technologies. We focus mainly on the functional architecture and control of the real and simulated humanoid robots COMAN and COMAN+ in the context of the following real-world scenarios.
Application of VR in Physiotherapy
Although some humanoid robots, such as our COMAN and COMAN+, have become inherently safer through the employment of physical compliance, they are still too complex to safely interact with users outside the lab. However, the advent of inexpensive and stable VR systems provides new opportunities for intermediate steps toward user interaction. We propose invoking the actual model-based, real-world controller to drive a rendered robot in VR that interacts with the user through standard VR interfaces. This approach immediately tackles cost and robustness liabilities, while development, testing, and deployment of the control system can be advanced through engaging in much more complex tasks. This makes the interaction inherently safer and richer, and it reduces the need for advanced perception, which is partially replaced by the VR tools. It is, however, necessary to ensure realistic robot physics and motion dynamics to keep the user engaged and prevent jeopardizing the user experience.
Figure 1. The COMAN (right) and its scaled-up version, COMAN+ (left). Both robots were used to evaluate the model-based approach to control architecture design presented in this article.
Table 1. The specifications for the COMAN and COMAN+ platforms.

Parameter             COMAN                         COMAN+
Degrees of freedom
  Legs                6                             6
  Arms                7                             7
  Torso               3                             2
  Neck
Modes                 Position, impedance,          Position, impedance,
                      voltage, torque               torque
Mass (kg)             35                            70
Height (m)            0.95                          1.7
Software              OROCOS, ROS                   XBotCore, OROCOS, ROS
Sensors               IMU; 4 × six-axis F/T;        IMU; 4 × six-axis F/T;
                      link- and joint-side          link- and joint-side
                      encoders; joint-torque        encoders; joint-torque
                      sensors                       sensors; lidar

F/T: force/torque; IMU: inertial measurement unit.
This integration of VR and humanoid robotics was originally motivated by the concrete rehabilitation application of physiotherapeutic juggling [3], where the patient and therapist mutually catch and throw a lightweight ball. The exercise is an essential part of the treatment for many age-related conditions, such as rehabilitation after stroke. It is motivating and demanding [4], [5] and improves coordination and balance [6], since it requires synchronous arm movement and posture control. Furthermore, it can enhance the arm motion and trunk–arm coordination of patients with Parkinson's disease [7], [8].
Joint-Assisted Walking
In our second scenario, we approached the problem of joint-assisted walking. Here, the humanoid robot assumes the role of the follower and attempts to react to perceived human intentions. Such advanced interaction will be necessary both for assistance in the manipulation of large objects and for rehabilitation through assisted walking. It requires an intention-detection algorithm, which, in our approach, is realized by assessing manipulability.
System Requirements
To mediate the technical difficulty of robotic experiments in general and in humanoid robotics in particular, it is crucial to devise system architectures that alleviate complexity, facilitate development, and grant scalability. This section discusses the respective requirements.
Safety
The system must be safe for users at all times and under all conditions; however, in an open-ended user interaction, unpredicted actions can occur. To enable a physiotherapy application, we use VR to mediate critical cases: there is no direct physical interaction with the robot, and only highly controlled physical interaction takes place through a haptics interface. A sufficient safety margin can be enforced for the robot by mechanical restraints, such that it cannot endanger a patient even in the case of a technical failure.
RT
RT coordination is paramount to realizing the typical control schemes for dynamic stability, which rely on precisely timed sensing of the robot state (encoders, the inertial measurement unit, force/torque sensors, and so on) and communication between components. This holds even for simulated robots [9]. In VR-based physiotherapy and similar user interactions, there is the further important aspect that unhandled delay can quickly jeopardize the user experience, as humans perceive visual cues as fast as 13 ms [10].

On the other hand, there are typically many components that do not rely on (strict) RT, such as motion planning, long-term memory, or other capabilities integrated with the Robot Operating System (ROS). Thus, another requirement of flexible RT control of humanoids is to isolate the NRT components so that their lack of reliable response time does not impede the core execution of RT components.
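To make this isolation concrete, the following is a minimal sketch in plain Python (not CoSiMA code; in the real system, the middleware and an RT kernel play this role). The NRT side may stall arbitrarily, while the RT loop only ever performs a non-blocking read of the latest message and falls back to the previous value when nothing new has arrived. All function names are illustrative.

```python
import queue
import random
import threading
import time

# Single-slot "mailbox": the RT side only ever needs the latest NRT message.
nrt_to_rt = queue.Queue(maxsize=1)

def plan_next_target():
    """Stand-in for a slow NRT computation (planning, VR input, and so on)."""
    time.sleep(random.uniform(0.1, 0.5))  # unpredictable NRT latency
    return random.random()

def nrt_planner():
    while True:
        target = plan_next_target()
        try:
            nrt_to_rt.put_nowait(target)
        except queue.Full:                 # overwrite stale data, never block
            try:
                nrt_to_rt.get_nowait()
            except queue.Empty:
                pass
            nrt_to_rt.put_nowait(target)

def rt_control_loop(period_s=0.002, steps=2000):
    target = 0.0
    next_tick = time.monotonic()
    for _ in range(steps):
        try:
            target = nrt_to_rt.get_nowait()  # non-blocking: NRT jitter cannot stall the loop
        except queue.Empty:
            pass                             # reuse the previous target
        # ... compute and send the control command for `target` here ...
        next_tick += period_s
        time.sleep(max(0.0, next_tick - time.monotonic()))

threading.Thread(target=nrt_planner, daemon=True).start()
rt_control_loop()
```

The single-slot mailbox makes the failure mode explicit: if the NRT side is late, the RT loop reuses the previous target instead of waiting.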
Secondary Requirements to Support Flexibility and Reusability
Given the engineering efforts dedicated to the design and implementation of humanoid robotic control systems, it is understandable that flexibility and reusability are desired.
System transparency: The ability to switch between different robots with a minimum of programming effort is generally desired. Although there are platform-specific parameters to be tuned (for example, control gains), proper model-based abstractions can relieve the burden of switching to a considerable degree. This includes reducing transparent switching between the real and simulated robots [9] to a simple plug-and-play of components, because simulation is typically an indispensable part of the development and testing process.
Reusability: The reuse of functionality across applications is a persistent goal of robotic software development, and it supports the repeatability of experiments. Specifically, the ability to deploy third-party algorithms can drastically reduce the cost and complexity of development.
Adaptability to emerging technology: Another often overlooked aspect of reusability is adaptability to new and emerging technologies. Improved algorithms, faster and more reliable solvers, and new middleware frequently become available; hence, a modern architecture must be able to integrate them with reasonable effort. The initial investment of time and development in architecture design will eventually pay off when it can be migrated to keep up with changing technology demands.
Architecture
To address these requirements, we have realized the software architecture CoSiMA (Compliant Simulation and Modeling Architecture) [9], which is developed through the European joint project CogIMon [18]. We summarize it only briefly, to provide the basis for discussing the specific functional properties for humanoid robotic applications.
We use Open Robot Control Software (OROCOS) [11] as the underlying RT component framework because components intrinsically support reusability by providing self-sustained and deployable units of computation (as opposed to classes). Specifically, the OROCOS Component Library provides the reusable functionality, whereas deployment is handled by the OROCOS Deployment Component and the Deployer Environment.
The RT requirements are handled by the Real-Time Toolkit (RTT), which is an integral part of OROCOS. The RTT exposes ports, handles their communication, and provides tools to create new typekits to transfer dedicated data structures over these ports. The isolation of the NRT components, their synchronization with the RT components, the tracking of delay or latency, and the management of unexpected delays are resolved by the robotics service bus (RSB) [12] middleware or the RTT–ROS integration.
To address the secondary requirements, a number of specific functional components are provided at different granularity levels and for different operational objectives.
Robot models and interfaces: The exposure of identical interfaces and ports across real hardware and simulation, as well as across different robots, is achieved through a rigorous model-based approach that resorts to common abstractions based on unified robot description format (URDF) and semantic robot description format (SRDF) descriptions (Figure 2). These encapsulate all the information necessary to configure the interfaces, including the kinematic structure, available sensors, actuator proportional-integral-derivative gains, and joint limits. By using SRDF, developers can also operate the robot based on conceptual entities, such as right arm, rather than dealing only with an n-vector of the whole-body joint configuration. This design paradigm further enhances the reusability of the components.
Practically, the task of reading from and writing to the robot's low-level drivers and of mediating and broadcasting the hardware-related information is implemented in a single component, named rtt_robot in Figure 2. It is parametrized with respect to the URDF and SRDF files and is the only part of the system that needs to be implemented according to the specific robot driver protocol or simulation environment.
Switching from simulation to the real robot, or between robots, is thus just a matter of changing this single component and is thereby reduced to the inevitable minimum of adjusting control parameters or actuator gains.
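To illustrate the kind of information this configuration step extracts, the following hedged sketch parses joint limits from a URDF and kinematic chain groups from an SRDF using only the Python standard library. It is not the CoSiMA parser, and the file names are hypothetical; it only shows the kind of information (chains, limits) that the semantic analysis exposes to other components.

```python
import xml.etree.ElementTree as ET

def parse_joint_limits(urdf_path):
    """Collect the position/velocity limits of all actuated joints in a URDF."""
    limits = {}
    for joint in ET.parse(urdf_path).getroot().iter("joint"):
        if joint.get("type") in ("revolute", "prismatic"):
            lim = joint.find("limit")
            if lim is not None:
                limits[joint.get("name")] = {
                    "lower": float(lim.get("lower", "0")),
                    "upper": float(lim.get("upper", "0")),
                    "velocity": float(lim.get("velocity", "0")),
                }
    return limits

def parse_kinematic_chains(srdf_path):
    """Map SRDF group names (e.g., 'right_arm') to their base/tip links."""
    chains = {}
    for group in ET.parse(srdf_path).getroot().iter("group"):
        chain = group.find("chain")
        if chain is not None:
            chains[group.get("name")] = (chain.get("base_link"), chain.get("tip_link"))
    return chains

if __name__ == "__main__":
    # Hypothetical file names; substitute the robot's actual URDF/SRDF pair.
    print(parse_kinematic_chains("coman.srdf"))
    print(parse_joint_limits("coman.urdf"))
```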
Functional and control components: By leveraging the component library and deployment tools of OROCOS, many components have been implemented to facilitate the development process. These include, but are not limited to, walking-pattern generators (WPGs), stabilizers, and trajectory generation, as well as loggers and helper tools. These components are deployed in RT or NRT units depending on their internal implementation.
A centerpiece is the motion-engine component, where we employ a stack-of-tasks (SoT) approach that is based on inequality hierarchical quadratic programming (iHQP) [1] and formulated as a velocity-control scheme together with a joint-level impedance controller (see Figure 2). It utilizes the OpenSoT [13] software tool, which simplifies setting up and solving the respective optimization problems while adhering to the kinematic chain abstractions. OpenSoT enables the developer to combine tasks and constraints as atomic entities without explicit definition of Jacobians or interaction with the kinematic or dynamic solvers.
For the sake of brevity, we focus only on the SoT, but we can alternatively employ dynamically consistent task hierarchy projections [14] as the motion engine. The latter resorts to computed torque control, thereby switching the control paradigm substantially within the same overall architecture.
For illustration, let us provide a bird's-eye view of the described architecture by considering a single iteration of throwing a ball in the physiotherapy scenario, with reference to Figure 3. Based on the specific needs of the patient, the therapist selects a throw point for the ball via the VR agent in the NRT unit. This point is communicated to the trajectory generation component in the RT unit via the RSB. The component computes an end-effector trajectory for the arm of the robot with which it realizes the desired landing point of the ball and sends its results to the motion engine over OROCOS ports. After the SoT is solved, the desired joint motions are sent to the robot simulator. The robot's feedback is sent directly to the other components in the RT unit; in parallel, it is sent to the NRT unit through the RSB, where the robot is rendered for the patient in the VR component.
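The article leaves the internals of the trajectory generation component open; the following is a hedged sketch of one standard way such a component could map a therapist-selected landing point to an end-effector reference. A ballistic model (drag neglected) yields the hand velocity required at release, and a minimum-jerk profile provides a smooth approach to the release point. All numbers and names are illustrative, not the published implementation.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2)

def release_velocity(p_release, p_land, t_flight):
    """Velocity the hand must impart at release so that the ball lands at
    p_land after t_flight seconds of free flight (drag neglected)."""
    return (p_land - p_release - 0.5 * G * t_flight**2) / t_flight

def min_jerk(p0, p1, T, n=100):
    """Minimum-jerk positional profile from p0 to p1, sampled n times over T seconds."""
    t = np.linspace(0.0, T, n)
    s = t / T
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return t, p0 + shape[:, None] * (p1 - p0)

# Hypothetical numbers: release 0.3 m in front of the torso at 1.0 m height,
# landing point 1.3 m ahead on the floor, 0.6 s of flight.
p_rel = np.array([0.3, 0.0, 1.0])
p_land = np.array([1.3, 0.0, 0.0])
v_rel = release_velocity(p_rel, p_land, t_flight=0.6)
t, approach = min_jerk(np.array([0.2, -0.1, 0.8]), p_rel, T=0.5)
print(v_rel)  # desired end-effector velocity at the release point
```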
Figure 2. A semantic analysis of the robot model: kinematic and dynamic information for joints and links from the URDF and semantic information, such as kinematic chains, from the SRDF are parsed, grouped, and exposed via interfaces to different components, including the control side. Likewise, the motion engine is configured according to the information broadcast by the component in the semantic analysis block.
Communication between the RT and the NRT units is handled strictly by the RSB middleware. Furthermore, the robot or simulator clock is considered the reference clock, although intrinsically it is provided by the OROCOS framework. The middleware provides mechanisms to measure and handle delays (for example, due to network latency).
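One simple way such delay tracking can work, sketched under the assumption of a shared monotonic clock on both sides (the names and the reuse of the 13-ms perceptual figure from [10] as a budget are illustrative):

```python
import time

def stamp(payload):
    """Attach a send timestamp so that the receiver can measure transport latency."""
    return {"t_sent": time.monotonic(), "data": payload}

def accept(msg, budget_s=0.013):
    """Reject messages older than the perceptual budget (~13 ms, see [10])."""
    delay = time.monotonic() - msg["t_sent"]
    return msg["data"] if delay <= budget_s else None
```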
SoT for Hierarchical Quadratic Programming
We follow a standard hierarchical SoT approach [1], which is typical in humanoid robot control due to the high number of degrees of freedom. Lower-priority tasks are executed in the null space of higher-priority tasks to ensure that important tasks are not compromised; less important tasks are followed only to a degree, depending on the possibly varying availability of remaining null-space motion. Additionally, inequality constraints, such as joint position and velocity limits, are integrated. We discuss our hierarchy of hard and soft priorities and the definition of the major equality and inequality tasks following the formalism of OpenSoT [13].
Denote the generalized coordinates of a humanoid robot by $q \in \mathbb{R}^{6+n}$. The first six coordinates represent the underactuated virtual chain, attached from the inertial frame to the floating base, while the remaining $n$ are associated with the actuated joints.
Figure 3. The functional architecture in the context of physiotherapeutic juggling. The system comprises three major units: the RT and NRT units, as well as a middleware that handles the communication, synchronization, and data transport between the two. In the case of juggling, the RSB plays this role; in assisted walking, the RTT–ROS integration was used. The building blocks of the RT and NRT units are components that constitute a component library and are deployed via OROCOS. The semantic analysis (Figure 2) provides input to different components, including the motion engine. I/O: input–output; Projec.: projection.
Tracking the 6D Cartesian position and orientation of a frame $F$ attached to an arbitrary operational point of the robot constitutes a task

$$\mathcal{T} = \langle J,\ \dot{x}^{*} + \lambda e \rangle, \tag{1}$$

where $e \in \mathbb{R}^{6}$ is the Cartesian pose error between $F$ and the desired frame $F^{*}$, $\dot{x}^{*} \in \mathbb{R}^{6}$ is the desired velocity of $F$, $J \in \mathbb{R}^{6 \times (6+n)}$ is the task Jacobian, and $\lambda$ is a positive scalar gain. The task can be associated with a constraint $\mathcal{C} = \langle A, b \rangle$, where the matrix $A$ and the vector $b$ define the solution boundaries.
At priority level $i$, the quadratic form of this task, along with its constraint, is expressed as

$$\mathcal{T}_i:\ \ \dot{q}_i = \operatorname*{arg\,min}_{\dot{q}_i \in S_i} \frac{1}{2}\,\lVert J_i \dot{q}_i - \dot{x}_i^{*} - \lambda e_i \rVert^{2} + \epsilon\,\lVert \dot{q}_i \rVert^{2}, \qquad \mathcal{C}_i:\ A_i \dot{q}_i \le b_i, \tag{2}$$
Figure 4. The priorities for (a) assisted walking (loco-manipulation) and (b) physiotherapeutic juggling (throwing). In the throwing stack, the tasks are, from highest to lowest priority: 1) left and right soles; 2) stability criteria (CoM, ZMP, CP); 3) left and right arms and other Cartesian tasks; 4) postural (waist); and 5) redundancy resolution, with the joint position and velocity (PV) limits as constraints. In the loco-manipulation stack, the sole contacts are moved into the constraints alongside the joint PV limits, and the remaining tasks keep the same order. The sole contacts have the highest priority in throwing, whereas in loco-manipulation they are inserted as constraints to achieve better computational performance. Next is the center-of-mass (CoM) task, followed by the manipulation, postural, and redundancy tasks. ZMP: zero moment point; CP: capture point.
Figure 5. (a) The black and red configurations have identical distances to the torso, but the red one requires a left–forward step; this demonstrates that distance alone is not sufficient to determine the stepping direction. (b) The manipulability ellipsoids of the simplified model with their semi-axes; the red axes show the singular vectors $u_m^L$ and $u_m^R$ for the smallest singular values $\sigma_m^L$ and $\sigma_m^R$. (c) The axes are scaled by $1/\sigma_m$; their sum indicates the walking direction (green).
where $\dot{q}$ are the joint velocities and $\epsilon$ is a regularization term. $S_i$ is the set of all possible solutions at this priority level and lies in the null space of the task $\mathcal{T}_{i-1}$ at priority level $i-1$. Thus, the volume of $S_i$ shrinks as $i$ increases and the priority decreases. Consequently, for two tasks, the solution of a task $\mathcal{T}_j$ must reside in the null space of $\mathcal{T}_i$ if $j > i$; that is, they have "hard" prioritization.
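For intuition, the following numpy sketch implements the classical null-space recursion behind such hard priorities, restricted to the equality parts of (2): a damped pseudoinverse stands in for a full iHQP solver, and the inequality constraints (which OpenSoT delegates to qpOASES) are omitted.

```python
import numpy as np

def solve_stack(tasks, n, rcond=1e-6):
    """Hard-priority stack of equality tasks (J_i, v_i): each level corrects
    its own error only within the null space of all higher-priority levels."""
    dq = np.zeros(n)
    N = np.eye(n)                                  # projector onto the remaining null space
    for J, v in tasks:
        JN = J @ N
        JN_pinv = np.linalg.pinv(JN, rcond=rcond)  # damped, like the eps term in (2)
        dq = dq + JN_pinv @ (v - J @ dq)           # fix the residual without disturbing higher tasks
        N = N - JN_pinv @ JN                       # shrink the available null space
    return dq

# Toy example: 6 velocity coordinates, a 3D task above a 2D task.
rng = np.random.default_rng(0)
J1, v1 = rng.standard_normal((3, 6)), np.array([0.1, 0.0, -0.2])
J2, v2 = rng.standard_normal((2, 6)), np.array([0.05, 0.05])
dq = solve_stack([(J1, v1), (J2, v2)], n=6)
print(J1 @ dq - v1)  # ~0: the higher-priority task is satisfied exactly
```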
Alternatively, two tasks may share the same priority level, forming a "soft" priority. This is done by augmenting the Jacobians and Cartesian velocities of these tasks, and the soft prioritization between them can be tuned using a weighting. An example of soft priorities is the pair of contact tasks for the left and right soles of a humanoid, where it is crucial that both positions and orientations are tracked accurately. In the SoT of our scenarios, this task either has the highest priority or is injected as a constraint of the highest-priority task (Figure 4).
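In code, a soft-priority level reduces to a single augmented task: the Jacobians and references of the co-prioritized tasks are stacked, each block scaled by the square root of its weight so that the weights act on the quadratic cost. A hedged illustration with placeholder sole-contact Jacobians:

```python
import numpy as np

def soft_combine(weighted_tasks):
    """Merge same-priority tasks into one augmented task (J_aug, v_aug)."""
    Js, vs = [], []
    for (J, v), w in weighted_tasks:
        Js.append(np.sqrt(w) * J)   # sqrt(w): the weight multiplies the squared error
        vs.append(np.sqrt(w) * v)
    return np.vstack(Js), np.concatenate(vs)

# Left and right sole-contact tasks sharing one level (placeholder values).
J_left, v_left = np.eye(6, 12), np.zeros(6)
J_right, v_right = np.eye(6, 12, k=6), np.zeros(6)
J_soles, v_soles = soft_combine([((J_left, v_left), 1.0), ((J_right, v_right), 1.0)])
```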
In the null space of the contact tasks, the motion of the center of mass (CoM), which is responsible for balance, is tracked. In the throwing scenario, where the robot is not stepping, a stabilizer computes suitable CoM trajectories with the goal of bringing the zero moment point (ZMP) to the center of the convex hull of the feet contact polygon. When stepping in assisted walking, a pattern generator creates dynamically consistent CoM motions according to the linear inverted pendulum dynamics and a reference CoM velocity governed by

$$\dot{x}_{\mathrm{CoM}} = \begin{cases} \dfrac{u_m^L}{\sigma_m^L} + \dfrac{u_m^R}{\sigma_m^R} & \text{if } (v_L \le v_d) \lor (v_R \le v_d), \\[4pt] 0 & \text{if } (v_L > v_d) \land (v_R > v_d). \end{cases} \tag{3}$$
Figure 6. The experimental results from physiotherapeutic juggling in VR. (a) A volunteer patient performing the catching exercise (photo: FRG/Universitätsklinikum Tübingen). (b) Snapshots of the view through the VR goggles. (c) The results of 10 consecutive throws. (d) The response time of the iHQP solver (worst case and average).
In (3), we monitor the volumes of the manipulability ellipsoids of the left ($v_L$) and right ($v_R$) arms of the COMAN+. When either hits a threshold $v_d$, the desired $\dot{x}_{\mathrm{CoM}}$ is computed based on the left singular vectors $u_m$ associated with the smallest singular values $\sigma_m$, obtained from the singular value decomposition of the arms' Jacobians (Figure 5).
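These quantities can be computed directly from the arms' Jacobians, as the following illustrative reconstruction of (3) shows (toy Jacobians, not the published code):

```python
import numpy as np

def manipulability_volume(J):
    """Volume measure of the manipulability ellipsoid: sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

def scaled_escape_direction(J):
    """Left singular vector of the smallest singular value, scaled by 1/sigma_m:
    points where the arm is closest to losing mobility (Figure 5)."""
    U, s, _ = np.linalg.svd(J)
    return U[:, -1] / s[-1]

def com_velocity(J_left, J_right, v_threshold):
    """Reference CoM velocity per (3): step only when either arm's
    manipulability volume has dropped to the threshold."""
    if min(manipulability_volume(J_left), manipulability_volume(J_right)) <= v_threshold:
        return scaled_escape_direction(J_left) + scaled_escape_direction(J_right)
    return np.zeros(J_left.shape[0])

# Toy 3x7 positional Jacobians standing in for the two arms.
rng = np.random.default_rng(1)
J_L, J_R = rng.standard_normal((3, 7)), rng.standard_normal((3, 7))
print(com_velocity(J_L, J_R, v_threshold=5.0))
```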
At the next level, it proves useful to assign a task to the waist that maintains a suitable Cartesian orientation: it prevents rotations about the roll and pitch axes with respect to the inertial frame but allows yaw rotations (about the craniocaudal, or z-, axis). In throwing, this potentially helps the robot toss the ball farther, whereas, in walking, it aligns the upper and lower bodies.
At the lowest priority, any remaining redundancy is resolved through a task $\mathcal{T}_N$ that expresses a preference for some default motion $q_N^{*}$ and $\dot{q}_N^{*}$ (for example, toward a home position), with the identity matrix $I \in \mathbb{R}^{n \times n}$ as the task Jacobian:

$$\mathcal{T}_N = \langle I,\ \dot{q}_N^{*} + \lambda (q_N^{*} - q) \rangle. \tag{4}$$
Joint position and velocity limits are applied at the top priority levels as inequality constraints. For the joint velocity limits in (2), we have $A = I$ and bounds $b$ given by the limits $\dot{q}^{-} \le \dot{q} \le \dot{q}^{+}$; the joint position limits are handled similarly through a numerical integration.
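Both limits thus collapse into box bounds on the joint velocities; the sketch below (illustrative values) converts the position limits into velocity bounds by requiring that one integration step of duration dt stays inside the admissible range.

```python
import numpy as np

def velocity_bounds(q, q_min, q_max, qd_max, dt):
    """Combine the velocity limit with position limits handled by numerical
    integration: any velocity applied for dt must keep q within [q_min, q_max]."""
    lower = np.maximum(-qd_max, (q_min - q) / dt)
    upper = np.minimum(+qd_max, (q_max - q) / dt)
    return lower, upper  # stack as A = [I; -I], b = [upper; -lower] for (2)

# Hypothetical 2-DoF example at a 100-Hz control rate.
q = np.array([0.0, 1.49])
lo, hi = velocity_bounds(q, q_min=np.array([-1.5, -1.5]), q_max=np.array([1.5, 1.5]),
                         qd_max=np.array([2.0, 2.0]), dt=0.01)
print(lo, hi)  # the second joint is near its upper stop, so hi[1] = 1.0 < 2.0
```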
Figure 4 shows the overall SoT. Its structure follows a straightforward logic: the stability and balance criteria are of the highest importance, the manipulation task comes at the next priority level, and the remainder is designed to achieve secondary objectives, such as human-like motion.
Experimental Validation
Physiotherapeutic-Juggling Scenario
The data flow of a single iteration of a throw was described in the "Architecture" section. Here, we provide the results of the first empirical evaluations, initially tested in the laboratory with healthy staff (who were familiar with both the robot and the VR) and later with patients. The experimental setup validated the RT and NRT integration together with the reliability of the SoT-based motion generation. Subsequently, we exposed patients to the system and evaluated their feedback regarding comfort level, user satisfaction, and practical feasibility. The patients' preliminary feedback was highly positive; they were able to catch balls and showed the desired training effects. Figure 6 shows a patient and his view into the VR. At the time of this writing, we are in the process of obtaining ethical permissions for a systematic evaluation.
Technically, the robot-simulator component, including Gazebo, was running at a 2-ms cycle time and achieved a 99% RT factor while using the Open Dynamics Engine as the physics engine. The dynamically simulated motions were rendered in the VR engine Unity for the patient wearing the goggles.
Although the juggling exercise in VR turned out to be a standalone application in its own right, for the sake of completeness, we address the possibility of performing it on the real robot. Based on our experiments, the real COMAN is capable of generating the appropriate motions to make throws; however, the hand and its fingers are not yet fast enough to release the ball in a timely manner.
Assisted-Walking Scenario
Here, we present the experimental validation of a close interaction between the robot follower and a human leader. At each control cycle, the change of manipulability resulting from the hand-in-hand interaction between the user and the robot dictates the walking direction according to (3) and Figure 5. The joint references computed by the whole-body inverse kinematics inside the motion engine are sent to a decentralized joint-level impedance controller. The joint stiffness in both arms is kept low to guarantee enough compliance toward the human patient. As seen in Figure 4, this scenario uses a similar iHQP-based SoT as the previous one, with the addition of a WPG based on [15]. The overall control scheme is depicted in Figure 7.
Figure 7. A schematic of the assisted-walking scenario. The feedback from the robot consists of joint positions and velocities as well as wrenches computed from the ankle-mounted force/torque sensors. The former are used to compute the desired walking direction, which is sent to the WPG; the latter are fed to the stabilizer, which corrects the WPG output.
Figure 8. (a) The experimental setup for assisted walking in a human–robot loco-manipulation scenario: the user interacts with the impedance-controlled upper body, which computes and performs suitable steps to address the human partner's intentions based on its own arms' manipulability. (b) The trajectories of the ZMP, CoM, and feet of the robot. (c) The tracking error of the CoM position. (d) The feet placements and CoM trajectory in the sagittal plane, (e) the computed walking direction, and (f) the change of manipulability of the right arm over a 60-s experiment.
In this scenario, a hard RT constraint must be enforced due to the nature of the experiment. The SoT was running at 100 Hz and the WPG at 10 Hz, achieving RT performance under a Xenomai-patched environment. The calculation of the walking direction based on manipulability was performed in an NRT component developed as a ROS node. The performance of the SoT controller with respect to tracking the CoM and feet references computed by the WPG is shown in Figure 8, while implementation and technical aspects are detailed in [16].
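For reference, the WPG of [15] optimizes over the linear inverted pendulum model mentioned above; the following toy integration shows only those underlying dynamics (assumed CoM height, not the MPC-based WPG itself).

```python
import numpy as np

g, h, dt = 9.81, 0.9, 0.01  # gravity, assumed CoM height (m), 100-Hz cycle

def lip_step(c, c_dot, p):
    """One Euler step of the linear inverted pendulum: c_ddot = (g/h) (c - p),
    where c is the horizontal CoM position and p the zero moment point."""
    c_ddot = (g / h) * (c - p)
    return c + dt * c_dot, c_dot + dt * c_ddot

c, c_dot = np.array([0.0, 0.0]), np.array([0.1, 0.0])  # CoM state
p = np.array([0.0, 0.0])                               # ZMP under the stance foot
for _ in range(100):                                   # 1 s of single support
    c, c_dot = lip_step(c, c_dot, p)
print(c, c_dot)  # the CoM accelerates away from the ZMP, prompting the next footstep
```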
The setup for this second scenario highlights the transparency toward technology scale-up in two regards. First, the RSB was replaced by the RTT–ROS integration for NRT communications. More important, the COMAN+ hardware drivers are based on a new framework called XBotCore [17]. Nonetheless, the exposure of identical interfaces encapsulated this technology change from the developers. Furthermore, the WPG was also tested on the COMAN robot without any additional programming effort, even though the robots have different kinematic and dynamic structures. A complementary video attachment for both experiments is available at https://youtu.be/v2zFpngoe_Q.
Conclusions
We presented two humanoid interaction scenarios with real-world applications and implications, focused on physiotherapy and rehabilitation. An architecture that features a strict separation of concerns proved to be paramount for their swift development, which shows a very high degree of reusability. By strictly adhering to model-based abstractions, identical interfaces for the robots and their simulated versions are exposed to the developers, allowing seamless switching between them. This was lifted to an even higher tier by providing semantic abstractions in terms of kinematic chains, encapsulating the underlying structural differences.

The integration of VR engages users, strongly enhances interaction opportunities, and mediates safety concerns. Thanks to advancements in computer graphics, unbiased and physically consistent ray-tracing algorithms, and hardware that can handle them at a very low price, we demonstrate that roboticists can now test their controllers and explore user interactions with humanoid robots in advanced scenarios. In the VR-based physiotherapy scenario, we devised a juggling application that has already been tested with patients and in a real physiotherapy practice. A quantitative evaluation is on the way.
The assisted-walking experiment, although still in a preliminary state and heavily under development, provided insight into the nature of nonverbal interactions and showed how interaction forces can be interpreted for intention detection. Given that exposing elderly patients to humanoids for assisted walking is not yet reasonable due to safety considerations, for the time being we are working to make our algorithms more reliable. A well-designed architecture strongly enhances safety, as systematic testing is greatly facilitated.

Once humanoid robots become more reliable and affordable, they have the potential to actively support individuals through training, rehabilitation, and assistance with special needs. In physiotherapeutics, for instance, therapists will be able to orchestrate treatment plans for multiple patients, while robots do the repetitive "muscle work." This is an exciting future that we look forward to.
Acknowledgments
The research was funded under H2020 GA 644727–CogIMon.
We thank D. Brötz and J. Kodl for supervising the patient
work and Y. You, C. Zhou, J. A. Castano Pena, and L. Muratore
for supporting the COMAN+ experiments.
References
[1] A. Escande, N. Mansard, and P.-B. Wieber, "Hierarchical quadratic programming: Fast online humanoid-robot motion generation," Int. J. Robot. Res., vol. 33, no. 7, pp. 1006–1028, May 2014. doi: 10.1177/0278364914521306.
[2] A. Chłoń-Domińczak, I. Kotowska, J. Kurkiewicz, M. Stonawski, and A. Abramowska-Kmon, Population Ageing in Europe: Facts, Implications and Policies. Brussels, Belgium: European Union, 2014. doi: 10.2777/60452.
[3] P. Mohammadi et al., "Real-time control of whole-body robot motion and trajectory generation for physiotherapeutic juggling in VR," in Proc. 2018 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Oct. 2018, pp. 270–277. doi: 10.1109/IROS.2018.8593632.
[4] B. Draganski, C. Gaser, V. Busch, G. Schuierer, U. Bogdahn, and A. May, "Changes in grey matter induced by training," Nature, vol. 427, no. 6972, pp. 311–312, 2004. doi: 10.1038/427311a.
[5] C. Sampaio-Baptista, N. Filippini, C. J. Stagg, J. Near, J. Scholz, and H. Johansen-Berg, "Changes in functional connectivity and GABA levels with long-term motor learning," NeuroImage, vol. 106, pp. 15–20, Feb. 2015. doi: 10.1016/j.neuroimage.2014.11.032.
[6] S. T. Rodrigues et al., "Postural control during cascade ball juggling," Percept. Mot. Skills, vol. 123, no. 1, pp. 279–294, Aug. 2016. doi: 10.1177/0031512516660718.
[7] M. J. Majsak, T. Kaminski, A. M. Gentile, and A. M. Gordon, "Effects of a moving target versus a temporal constraint on reach and grasp in patients with Parkinson's disease," Exp. Neurol., vol. 210, no. 2, pp. 479–488, Apr. 2008. doi: 10.1016/j.expneurol.2007.11.023.
[8] H.-I. Ma, W.-J. Hwang, C.-Y. Wang, J.-J. Fang, I.-F. Leong, and T.-Y. Wang, "Trunk–arm coordination in reaching for moving targets in people with Parkinson's disease: Comparison between virtual and physical reality," Hum. Mov. Sci., vol. 31, no. 5, pp. 1340–1352, 2012. doi: 10.1016/j.humov.2011.11.004.
[9] D. L. Wigand, P. Mohammadi, E. M. Hoffman, N. G. Tsagarakis, J. J. Steil, and S. Wrede, "An open-source architecture for simulation, execution and analysis of real-time robotics systems," in Proc. 2018 IEEE Int. Conf. Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), May 2018, pp. 93–100. doi: 10.1109/SIMPAR.2018.8376277.
[10] C. E. Hagmann and M. C. Potter, "Ultrafast scene detection and recognition with limited visual information," Vis. Cogn., vol. 24, no. 1, pp. 2–14, Jan. 2016. doi: 10.1080/13506285.2016.1170745.
[11] H. Bruyninckx, "Open robot control software: The OROCOS project," in Proc. 2001 IEEE Int. Conf. Robotics and Automation (ICRA), vol. 3, Feb. 2001, pp. 2523–2528.
[12] J. Wienke and S. Wrede, "A middleware for collaborative research in experimental robotics," in Proc. 2011 IEEE/SICE Int. Symp. System Integration (SII), Dec. 2011, pp. 1183–1190. doi: 10.1109/SII.2011.6147617.
[13] A. Rocchi, E. M. Hoffman, D. G. Caldwell, and N. G. Tsagarakis, "OpenSoT: A whole-body control library for the compliant humanoid robot COMAN," in Proc. 2015 IEEE Int. Conf. Robotics and Automation (ICRA), May 2015, pp. 6248–6253. doi: 10.1109/ICRA.2015.7140076.
[14] N. Dehio, D. Kubus, and J. J. Steil, "Continuously shaping projections and operational space tasks," in Proc. 2018 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Oct. 2018, pp. 5995–6002.
[15] A. Herdt, H. Diedam, P.-B. Wieber, D. Dimitrov, K. Mombaur, and M. Diehl, "Online walking motion generation with automatic footstep placement," Adv. Robot., vol. 24, no. 5–6, pp. 719–737, 2010. doi: 10.1163/016918610X493552.
[16] P. Mohammadi, E. Mingo Hoffman, L. Muratore, N. G. Tsagarakis, and J. J. Steil, "Reactive walking based on upper-body manipulability: An application to intention detection and reaction," in Proc. 2019 Int. Conf. Robotics and Automation (ICRA), May 2019, pp. 4991–4997. doi: 10.1109/ICRA.2019.8794309.
[17] L. Muratore, A. Laurenzi, E. M. Hoffman, A. Rocchi, D. G. Caldwell, and N. G. Tsagarakis, "XBotCore: A real-time cross-robot software platform," in Proc. 2017 First IEEE Int. Conf. Robotic Computing (IRC), Apr. 2017, pp. 77–80.
[18] "Cognitive interaction in motion," Technische Universität Braunschweig. Accessed: June 2019. [Online]. Available: https://cogimon.eu/
Pouya Mohammadi, Institute for Robotics and Process Control, Technische Universität Braunschweig, Germany. E-mail: p.mohammadi@tu-bs.de.

Enrico Mingo Hoffman, Advanced Robotics Department, Istituto Italiano di Tecnologia, Genoa, Italy. E-mail: enrico.mingo@iit.it.

Niels Dehio, Intelligente Prozessautomation und Robotik, Karlsruher Institut für Technologie, Germany. E-mail: niels.dehio@kit.edu.

Milad S. Malekzadeh, Institute for Robotics and Process Control, Technische Universität Braunschweig, Germany. E-mail: milad.malekzadeh@gmail.com.

Martin Giese, Section for Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Werner Reichardt Centre for Integrative Neuroscience, University Clinic Tübingen, Germany. E-mail: martin.giese@uni-tuebingen.de.

Nikos G. Tsagarakis, Advanced Robotics Department, Istituto Italiano di Tecnologia, Genoa, Italy. E-mail: nikos.tsagarakis@iit.it.

Jochen J. Steil, Institute for Robotics and Process Control, Technische Universität Braunschweig, Germany. E-mail: j.steil@tu-bs.de.