TelePhysicalOperation: Remote Robot Control
Based on a Virtual “Marionette” Type Interaction
Interface
Davide Torielli, Luca Muratore, Arturo Laurenzi, and Nikos Tsagarakis
Abstract—Teleoperation permits controlling robots from a safe distance while performing tasks in a remote environment. Kinematic differences between the input device and the remotely controlled manipulator, or the existence of redundancy in the remote robot, may pose challenges in intuitively moving the remote robot as desired by the human operator. Motivated by the above challenges, this work introduces TelePhysicalOperation, a novel teleoperation concept, which relies on a virtual physical interaction interface between the human operator and the remote robot, in a manner that is equivalent to a “Marionette” based interaction interface. With the proposed approach, the user can virtually “interact” with the remote robot through the application of virtual forces, which are generated by the operator tracking system and can then be selectively applied to any body part of the remote robot along its kinematic chain. Thanks to the underlying control architecture, the remote robot generates motions that comply with the applied virtual forces. The proposed method permits commanding the robot from a distance by exploiting the intuitiveness of a “Marionette” based physical interaction with the robot in a virtual/remote manner. The details of the proposed approach are introduced, and its effectiveness is demonstrated through a number of experimental trials executed on CENTAURO, a hybrid leg-wheel platform with an anthropomorphic upper body.
Index Terms—Human-Robot collaboration, physical Human-Robot interaction, telerobotics and teleoperation.

Manuscript received September 9, 2021; accepted January 2, 2022; date of current version January 28, 2022. This letter was recommended for publication by Associate Editor Thomas Hulin and Editor Prof. Jee-Hwan Ryu upon evaluation of the reviewers’ comments. This work was supported in part by the European Union’s Horizon 2020 programme under Grant Agreement Number 101016007 CONCERT and in part by the Italian Fondo per la Crescita Sostenibile - Sportello Fabbrica intelligente, PON I&C 2014-2020 under Project F/190042/01-03/X44 RELAX. (Corresponding author: Davide Torielli.)

Davide Torielli is with Humanoid and Human Centred Mechatronics (HHCM), Istituto Italiano di Tecnologia, 16163 Genova, Italy, and also with the Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genova, 16128 Genova, Italy (e-mail: davide.torielli@iit.it).

Luca Muratore, Arturo Laurenzi, and Nikos Tsagarakis are with the Humanoid and Human Centred Mechatronics (HHCM), Istituto Italiano di Tecnologia, 16163 Genova, Italy (e-mail: luca.muratore@iit.it; arturo.laurenzi@iit.it; nikos.tsagarakis@iit.it).

This letter has supplementary downloadable material available at https://doi.org/10.1109/LRA.2022.3144792, provided by the authors.

Digital Object Identifier 10.1109/LRA.2022.3144792
I. INTRODUCTION
ROBOTIC teleoperation is one of the oldest fields in
robotics [1], yet it remains an active and evolving research
topic, finding application in very diverse domains like disaster
response [2], surface finishing [3], the construction industry [4],
and many others, as it permits the remote command and
execution of complex loco-manipulation tasks. At the same
time, the improved capabilities demonstrated by the recent
mobile/legged manipulation platforms have increased the complexity of remotely commanding them, hence augmenting
the burden of the operator in executing remote tasks. This
has motivated robotics research towards the development of
autonomous capabilities as well as more intuitive interaction
and command interfaces between the human operator and the
remotely operated robot.
Concerning the interaction interfaces, devices with similar [5] or dissimilar [6] kinematics have been exploited, the drawbacks of these approaches being the necessity to design specific slave devices and/or to load the operator with a cumbersome interface. Flexible and lightweight interfaces that attempt to track multiple operator inputs have also been explored, based on Inertial Measurement Unit (IMU) devices to directly teleoperate robots [7]–[9]. In this case, the complexity of the slave robot is handled by sending multiple inputs provided by a sensorized body interface. The work in [10] exploits
a full body IMU-based suit combined with a human center of
pressure model and a tele-impedance interface to control the
locomotion and manipulation actions. Tele-impedance control
enriches the command sent to the remote robot by combining
the master’s estimated position with the stiffness references
obtained through an Electromyography (EMG) interface [11].
EMG-based Human Robot Interfaces have also been used
in [12] to obtain the inputs of the human operator. Instead,
the work in [13] combines a motion tracking interface with
an autonomous impedance regulator module to deal with the
physical interaction uncertainties and task payload conditions
of the remote robot.
Despite all the provided solutions, an alternative and very
intuitive way to guide the robot is by physically interacting
with the robot body to drive it along the desired motion in
order to teach it how to execute a desired task. Indeed, physical
Human Robot Interaction (pHRI) can assist the operator in
accomplishing a task in collaboration with the robot [14], [15].
In [16], a physically interactive control scheme is exploited to
permit the operator to touch the robot and to apply forces to
make it follow the desired motions. In [17], a direct physical
contact between the human and the robot is studied for a series
of cooperative assembly tasks, alternating passive and active
robot behaviors to reduce the operator workload and the risk
of injuries. Physical interactions have also been explored for
mobile robots, like in [18], where the MObile Collaborative
Fig. 1. Concept of TelePhysicalOperation. Above, schematics of the traditional teleoperation and the physical human-robot interaction interfaces. Below, the scheme of TelePhysicalOperation, derived from the combination of the two interfaces above.
robot Assistant (MOCA) is functioning autonomously, while
the operator can choose to be physically coupled to the
robot (through a mechanical admittance interface) to perform
conjoined actions. The MOCA robot appears again in [19],
where haptic guidance is used to physically drive the arm
to grasp the desired object.
As the above-mentioned works show, teleoperation
interfaces may vary; however, especially for complex mobile
robots, control from a remote location remains a challenging
task. At the same time, a very intuitive way to effectively
guide and teach a robot how to execute a task is by physically
interacting and driving it by applying forces on different parts
of its body. However, such direct pHRI may not be possible for
safety reasons in case of robotic systems with elevated power
capabilities. By considering these observations and blending
the teleoperation and the pHRI concepts, in this work we
propose a novel approach, the TelePhysicalOperation, which
enables the teleoperation of a remote mobile manipulation
platform through a virtual physical interaction interface that
enables the operator to generate and selectively apply virtual
forces along the kinematic chain of the remote robot in a
manner that resembles a user physically interacting with the
robot through a virtual “Marionette” type of interface. The
method permits controlling complex and redundant robots,
providing increased flexibility in regulating the motions of
the robot body beyond the end-effector level by allowing the
operator to select on the fly the locations along the kinematic
chain where these virtual forces are applied. It resembles the
physical interaction paradigm of a user teaching and guiding
a robot, thus it can also be employed for remotely interacting
in an intuitive way with collaborative robots while at the same
time avoiding any direct physical contact with them, ensuring
safety.
TelePhysicalOperation is experimentally demonstrated and
validated through a number of tasks executed with the
CENTAURO robot [20], a hybrid leg-wheel system with an
anthropomorphic upper body.
The rest of the paper is organized as follows. Section II
introduces the concept of TelePhysicalOperation; Section III
presents the motion capture interface used to provide the input
requested by TelePhysicalOperation; Section IV describes the
control architecture; Section V shows the experimental vali-
dations performed and Section VI draws the conclusions.
II. THE CONCEPT OF TELEPHYSICALOPERATION
The overall concept of TelePhysicalOperation is schemati-
cally illustrated in Fig. 1. As can be seen, TelePhysicalOperation emerges from the blending of the teleoperation and physical interaction interfaces traditionally used to control and interact with a remote or a collaborative robot, respectively. The rationale of our approach is that such a combination can provide a universal and intuitive human-robot interaction interface, suitable for interacting with either a remote robot (teleoperation) or a collaborative robot (collaboration), in a manner inspired by the way a user today physically interacts with a collaborative robot, e.g. during a teaching or guiding phase of a collaborative task. This is motivated by the fact that, when the operator physically interacts with the robot using multiple contact points, he/she has the possibility to precisely shape the pose of the robot end-effector as well as to regulate other motions permitted by the available robot redundancy to assist
and accomplish the task. TelePhysicalOperation implements
exactly such a multiple-contact interaction interface principle
in a virtual manner, permitting an operator to command a
remote or a collaborative robot without the need for any
physical interaction.
To realize this and to regulate the motions of the remote or
collaborative robot, the TelePhysicalOperation concept relies
on the application of virtual forces that can be selectively
applied by the operator to different locations along the kine-
matic chain of the robot. Therefore, only virtual contacts
are established between the operator and the real robot. As
in pHRI, the remote/collaborative robot responds to these
forces as if they were actual interaction forces, regulating
accordingly its motions and resembling the motion response
that the operator would expect when physically interacting
with the robot and applying forces to it for real.

Fig. 2. The TelePhysicalOperation concept follows a “Marionette” type interaction: the operator generates virtual forces that can be selectively applied at specific locations on the robot segments.

The virtual forces are generated by virtual springs, attached at one end to the arms of the human operator and at the other end to the selected application locations on the robot body, approximating the “Marionette” motion generation principle, as illustrated in Fig. 2. The virtual forces requested as inputs by the framework are 3D vectors, and their amplitude is controlled through the elongation of the virtual springs, driven by suitable motions of the operator arms monitored by appropriate tracking devices. The main
contributions of the proposed approach are summarized below:
• Differently from a standard end-effector based teleoperation, TelePhysicalOperation provides functionality that permits selectively controlling different body segments of the robot, allowing the operator to regulate both the end-effector motions and the motions of the redundant degrees of freedom effectively.
• By relying on the proposed virtual forces, following the “Marionette” motion control principle, our method is transparent with respect to the particular robot kinematics and the associated redundancy. In other words, the method does not carry the burden of a particular master device to be mapped 1-to-1 to the remote robot to be controlled; it only requires an input device system that can track the motions of the operator, needed to generate the virtual forces applied to the selected robot body locations.
• The TelePhysicalOperation concept permits negotiating the potential safety constraints of pHRI while keeping the intuitiveness of a physical contact with the robot body. This is particularly applicable when interacting with a high-power, high-strength collaborative robot with which safety regulations may not permit physical interaction.
To realize the TelePhysicalOperation concept, we have
developed an architecture based on two main components as
depicted in Fig. 3:
•The TelePhysicalOperation Suit (TPO Suit), an effective
low cost motion tracking interface for the operator, in
charge of monitoring the movements of the operator arms
and of providing the requested inputs for the TelePhys-
icalOperation interface and control architecture (Section
III).
•The main control node, implemented on a pilot pc, which
is responsible for handling the virtual forces based on the
motion inputs received by the TPO suit and deriving the
corresponding motions that should be generated by the
robot due to the application of the virtual forces. This
node implements the control modes used to facilitate the
control of the remote robot through the inputs received
by the operator (Section IV).
The TelePhysicalOperation architecture provides on-the-fly
flexibility to quickly change the points where the virtual forces
are applied. In addition, it delivers further features to assist
the operator in executing the remote task, i.e., the Blocking
Link (Section IV-C) and the Mirroring Motion (Section IV-D)
features. The implementation is based on the Robot Operating
System (ROS), a middleware well-known in the robotics
community.
III. THE TELEPHYSICALOPERATION SUIT
The virtual forces to be applied to the remote robot body are
generated through the motion inputs of the operator arms. To
track the operator arms, we have realized a lightweight and low
cost motion capture solution, based on Visual Simultaneous
Localization and Mapping (V-SLAM) tracking cameras.
The TPO Suit is composed of:
• Two Intel® RealSense Tracking Camera T265 devices, worn on the operator’s left and right wrists.
• A Raspberry Pi 4 Model B, a fully functional computer of highly compact dimensions, which allows mounting it as a wearable device on the body of the operator.
• A compact battery bank, which provides power to the TPO Suit components.
The task of the worn computer is to synchronize the data
coming from the different cameras, and to forward the computed virtual forces through the network to the pilot pc, where the main control node of the TelePhysicalOperation is implemented. The cameras are connected to the Raspberry Pi through USB ports, while the communication between the Raspberry Pi and the pilot pc is handled over a Wi-Fi connection, eliminating the need for any tethering between the operator who wears the TPO Suit and the pilot pc.
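As a rough sketch of this data flow, the wearable computer could publish the per-camera virtual forces as ROS topics. The following is a minimal illustration assuming ROS 1 with rospy; the node and topic names, the message type, and the placeholder force source are our illustrative choices, not specified in the paper:

```python
#!/usr/bin/env python
# Minimal sketch of the TPO Suit data flow (assumed ROS 1 / rospy; the
# topic name and message type are illustrative, not from the paper).
import rospy
from geometry_msgs.msg import Vector3Stamped

def poll_virtual_force():
    # Placeholder for the per-camera virtual force of Eq. 2; in the real
    # suit this would come from the T265 pose polled over USB.
    return (0.0, 0.0, 0.0)

if __name__ == '__main__':
    rospy.init_node('tpo_suit')
    pub = rospy.Publisher('/tpo_suit/virtual_force_left',
                          Vector3Stamped, queue_size=1)
    rate = rospy.Rate(100)  # the cameras are polled at 100 Hz
    while not rospy.is_shutdown():
        msg = Vector3Stamped()
        msg.header.stamp = rospy.Time.now()
        msg.vector.x, msg.vector.y, msg.vector.z = poll_virtual_force()
        pub.publish(msg)
        rate.sleep()
```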
The T265 camera is a V-SLAM tracking device composed
of two fisheye lenses, an IMU, and a Visual Processing Unit
(VPU). The V-SLAM algorithms run directly on the device,
allowing for low latency and efficient power consumption. Its
small size (108 × 25 × 13 mm) and light weight (55 g) make
it very comfortable to wear. By mounting the two cameras
on the operator’s wrists, we can track the motion of the arms
in any direction in 3D space. Hence, at any instant we can
choose to use the motions of the two wrists to generate a pair
of virtual forces that can then be selectively applied to two
different locations of the robot body.
Fig. 3. The architectural scheme of TelePhysicalOperation. On the left, the operator wears the TPO Suit, which provides a virtual force $f_{cp}$ to the main control node. The main control node is in charge of processing the virtual force based on the task associated with the chosen control point. On the right, the final references are sent to the robot.

The computer of the TPO Suit receives the camera data at a fixed frequency of 100 Hz. For each camera, we gather its current pose as a 4-by-4 transformation matrix $T$. This matrix describes the 3D pose from the camera origin frame (the point where the camera has been switched on) to the current camera frame. The operator can selectively set a reference frame at a particular pose of his/her arm during initialization, which results in $T_{ref}$, a transformation from the origin frame to the reference frame. Hence, the transformation $\hat{T}$ from the reference frame to the current frame is computed as:
$$\hat{T} = T_{ref}^{-1}\, T \quad (1)$$
From $\hat{T}$ we extract the translation component $r \in \mathbb{R}^{3 \times 1}$ and associate it with the elongation of a virtual spring to compute the final virtual force $f_{cp} \in \mathbb{R}^{3 \times 1}$ (Fig. 3) as:
$$f_{cp} = k_{cam}\, s(r) \quad (2)$$
where $s(\cdot)$ is a simple filter to smooth out the behaviour, and $k_{cam}$ is a positive gain representing the stiffness of the virtual spring. The vector $f_{cp}$ describes the virtual force applied on the selected robot part, i.e., the control point $cp$ located on a robot link. ROS services are provided to change on the fly the
robot link. ROS services are provided to change on the fly the
control points and to link them to the virtual forces generated
by the motions tracked by the cameras of the TPO Suit. The
virtual forces generated from each camera are then aggregated
and sent to the main control node through ROS topics.
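As an illustration of Eq. 1 and Eq. 2, the following Python sketch reconstructs a virtual force from the camera poses. The exponential moving average used for $s(\cdot)$ is our assumption (the paper only calls it a simple smoothing filter), the class structure and names are ours, and the stiffness value is the one reported in Section V:

```python
import numpy as np

K_CAM = 1.8       # virtual spring stiffness k_cam [N/m], value from Sec. V
EMA_ALPHA = 0.1   # smoothing factor for s(.), assumed exponential moving average

class VirtualForceGenerator:
    def __init__(self, T_ref: np.ndarray):
        # T_ref: 4x4 transform from the camera origin frame to the
        # reference frame selected by the operator at initialization.
        self.T_ref_inv = np.linalg.inv(T_ref)
        self.f_filtered = np.zeros(3)

    def update(self, T: np.ndarray) -> np.ndarray:
        # Eq. 1: transform from the reference frame to the current frame.
        T_hat = self.T_ref_inv @ T
        # The translation of T_hat is the elongation of the virtual spring.
        r = T_hat[:3, 3]
        # Eq. 2: f_cp = k_cam * s(r), with s(.) smoothing the raw elongation.
        self.f_filtered += EMA_ALPHA * (K_CAM * r - self.f_filtered)
        return self.f_filtered
```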
IV. TELEPHYSICALOPERATION MAIN CONTROL NODE
The main control node handles the incoming virtual forces
$f_{cp}$ and uses them to generate the references for the joints of
the robot. Depending on the chosen control point, each virtual
force can be processed and exploited in two ways:
• With the postural motion generation (Section IV-A), the virtual force is used to derive the joint position references for a postural task, i.e. a task at the joint level. One or more virtual forces can be applied to different links of a single kinematic chain, resulting in the joints moving according to the external virtual forces.
• With the Cartesian motion generation (Section IV-B), the virtual force is used to generate the motion of a Cartesian task defined on the control point, by converting the virtual force into a 3D Cartesian velocity reference for that point.
To help the operator accomplish the tasks, we have included
two additional features: the Blocking Link feature and the
Mirroring Motion feature (Sections IV-C and IV-D).
The main control node runs on a pilot pc, which handles
the received data, shows information on the current state of
the system through GUIs, and sends the final commands to
the robot with the CartesI/O Control Framework [21] and the
XBot Architecture [22]. Visualization tools include the robot’s
representation in the ROS-standard kinematic visualizer
RViz, with visual feedback on the applied virtual forces.
A. Postural motion generation
To derive the references of the postural motion task, the virtual force $f_{cp}$ is considered as a force acting on a control point $cp$, which can be selected at a location along the kinematic chain of $N$ joints of the robot. From $f_{cp}$, the resulting torques on the chain's joints $\tau_{cp} \in \mathbb{R}^{N \times 1}$ are computed:
$$\tau_{cp} = J_{cp}^{T}\, f_{cp} \quad (3)$$
where $J_{cp} \in \mathbb{R}^{3 \times N}$ is the linear Jacobian matrix of the chosen control point, i.e., a matrix such that its product with the derivative of the configuration vector gives the 3D linear velocity of $cp$. Note that, if the control point is not on the last link of the robot chain, the virtual force will not influence the joints after the chosen link; hence the final columns of the Jacobian are filled with zeros. In case more than one virtual force is acting on different control points of the same kinematic chain, their contributions are summed into a total joint torque $\tau$ as follows:
$$\tau = \sum_{cp\, \in\, \mathrm{chain}} \tau_{cp} \quad (4)$$
If the virtual forces are acting on different kinematic chains (e.g. on the two arms), each $\tau$ is handled separately.
For each controlled kinematic chain, having derived the corresponding total joint torque due to the application of the virtual forces, the joint reference motion is computed by considering a joint mass-spring-damper model (Fig. 3) as follows:
$$\begin{aligned} \ddot{q}_{ref}(t) &= M^{-1}\big(K(q_{eq} - q(t)) - D\,\dot{q}_{ref}(t-1) + \tau\big)\\ \dot{q}_{ref}(t) &= \dot{q}_{ref}(t-1) + \ddot{q}_{ref}(t)\,\Delta t\\ q_{ref}(t) &= q_{ref}(t-1) + \dot{q}_{ref}(t)\,\Delta t \end{aligned} \quad (5)$$
where $q_{ref}(t) \in \mathbb{R}^{N \times 1}$ is the joint position reference vector; $M, K, D \in \mathbb{R}^{N \times N}$ are diagonal matrices of the mass, stiffness and damping parameters of the joint mass-spring-damper model; $q, q_{eq} \in \mathbb{R}^{N \times 1}$ are the current joint positions and the equilibrium set point towards which a stiffness greater than zero will drag the joints; $\Delta t$ is the time interval between
two consecutive control loops. The parameters of the mass-
spring-damper model can be set experimentally to regulate the
motion behavior of the individual joints subject to the applied
virtual torques derived by Eq. 4. As an example, the diagonal
elements of the stiffness $K$ can be set to zero to eliminate the
returning elastic torque towards the equilibrium set point of
the joints. Instead, by increasing these values, the joints will
tend to go back to their qeq set point if no virtual forces are
applied to the control points on the robot chain.
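To make the postural pipeline concrete, the following Python sketch performs one control-loop step of Eq. 3-5. It is a minimal illustration under our own naming, not the authors' implementation; the explicit Euler integration follows Eq. 5 directly:

```python
import numpy as np

def postural_step(J_cps, f_cps, q, q_ref, dq_ref, q_eq, M, K, D, dt):
    """One control-loop step of the postural motion generation.

    J_cps: list of 3xN linear Jacobians, one per control point on the chain
    f_cps: list of 3D virtual forces applied at those control points
    """
    # Eq. 3 and Eq. 4: map each virtual force to joint torques through the
    # transposed Jacobian and sum the contributions over the chain.
    tau = np.zeros(q.shape[0])
    for J_cp, f_cp in zip(J_cps, f_cps):
        tau += J_cp.T @ f_cp
    # Eq. 5: joint mass-spring-damper model, integrated with explicit Euler.
    ddq_ref = np.linalg.inv(M) @ (K @ (q_eq - q) - D @ dq_ref + tau)
    dq_ref = dq_ref + ddq_ref * dt
    q_ref = q_ref + dq_ref * dt
    return q_ref, dq_ref
```

Consistently with the discussion above, with the diagonal of $K$ set to zero the chain stays wherever the virtual forces leave it, while positive values drag it back towards $q_{eq}$ when no forces are applied.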
B. Cartesian motion generation
Postural-based motion control provides full flexibility to the
operator to control the individual joints of the robot through
the control points selected on the robot body. This kind of
motion control lets the user regulate as needed the end-effector
pose as well as the available redundancy, if it exists. In
some situations, though, the possibility to regulate the motions
of the remote robot in the task space may be required, and
may be more appropriate to facilitate the execution of some
tasks. For example, the operator can apply a virtual force on
a control point set on the base of a mobile robot equipped
with steering wheels, with the intention to move the whole
mobile base. Another challenging Cartesian task is to
control the pose of a quadruped body by selecting a control
point on the pelvis and driving its pose using the virtual force.
Such a task would be challenging to carry out through
postural motion generation, as it involves motions of several
leg joints that contribute to the pose of the pelvis, e.g.
making the platform go up or down (“squatting”).
To realize the functionality of regulating specific links of
the robot body in the corresponding task space, the proposed
TelePhysicalOperation method provides a Cartesian motion generation interface which, from the applied virtual force $f_{cp}$, derives a velocity reference $\dot{x}_{cp} \in \mathbb{R}^{3 \times 1}$ (Fig. 3) as follows:
$$\dot{x}_{cp} = K_{cart,cp}\, f_{cp} \quad (6)$$
where $K_{cart,cp} \in \mathbb{R}^{3 \times 3}$ is a diagonal matrix of gains specific to the Cartesian task of the control point $cp$. From the resulting Cartesian velocity reference $\dot{x}_{cp}$, a joint reference $q_{ref}$ is derived through an inverse kinematics process.
This Cartesian motion generation interface also permits limiting the admissible Cartesian directions, to comply with the physical constraints of the robot or the specific requirements of the Cartesian task. This can be done by setting the corresponding elements on the diagonal of $K_{cart,cp}$ to zero, for example to control the mobility of a planar mobile robot, which cannot follow a velocity along the $z$ axis.
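A minimal sketch of Eq. 6 with this axis-selection mechanism follows; the gain values and names are illustrative assumptions, not from the paper:

```python
import numpy as np

# Eq. 6: diagonal gain matrix mapping the virtual force to a Cartesian
# velocity reference. Setting the z gain to zero disables motion along
# that axis, e.g. for a planar mobile base that cannot move vertically.
K_cart_cp = np.diag([0.5, 0.5, 0.0])   # illustrative gains

def cartesian_velocity_reference(f_cp: np.ndarray) -> np.ndarray:
    # The returned xdot_cp is then tracked by an inverse kinematics
    # process (CartesI/O in the paper) to obtain the joint references.
    return K_cart_cp @ f_cp
```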
C. The Blocking Link Feature
In the postural task, according to Eq. 3 and Eq. 4, each
virtual force applied on a control point will contribute to
command each joint from the chain’s root up to the last joint
before the control point. There are situations in which the
operator may want to move only specific joints in the middle
of the robot chain without influencing the position of the first
joints of the chain. For example, on the CENTAURO robot,
one may want to precisely move the wrist joint while keeping
the shoulder and elbow joints fixed, as can be seen in the
second image of Fig. 7. In that image, the virtual force applied
on the end-effector influences only the wrist joint, because
of the presence of another virtual force on the robot forearm,
which is the only virtual force allowed to influence the shoulder
and elbow joints. We call this the Blocking Link feature.
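As Fig. 8 shows later, the effect of the feature is that the torque contribution of the distal virtual force is nullified for every joint from the chain root up to the blocking link. A minimal sketch of one plausible realization of Eq. 3 with this behavior (the function and parameter names are ours) is:

```python
import numpy as np

def blocked_torque(J_cp: np.ndarray, f_cp: np.ndarray,
                   n_blocked: int) -> np.ndarray:
    # Eq. 3 with the Blocking Link feature: the virtual force f_cp maps
    # to joint torques through the transposed Jacobian, but its
    # contribution to the first n_blocked joints (from the chain root up
    # to the blocking link) is nullified, so only the distal joints
    # (e.g. the wrist) respond to this force.
    tau = J_cp.T @ f_cp
    tau[:n_blocked] = 0.0
    return tau
```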
D. The Mirroring Motion Feature
For a dual arm manipulation system, there are situations in
which it is useful to command the two twin arms to move
in a specular and symmetrical manner, e.g. to execute a
bimanual object grasping action. Hence, we have implemented
the Mirroring Motion feature, with which the virtual forces
applied on a control point of one arm are also applied on the
corresponding control point of the other arm, symmetrically
(with respect to an axis perpendicular to the floor).
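A minimal sketch of this mirroring, under the assumed convention of an x-forward, y-left, z-up robot base frame (the paper does not specify the frame), is the following; flipping the lateral component of the force yields the specular command for the twin arm:

```python
import numpy as np

def mirror_force(f_cp: np.ndarray) -> np.ndarray:
    # Mirroring Motion sketch: the force applied on one arm's control
    # point is reflected for the twin arm by flipping its lateral (y)
    # component, assuming an x-forward, y-left, z-up base frame.
    return np.array([f_cp[0], -f_cp[1], f_cp[2]])

# Example: a force pushing the right end-effector forward and to its left
# is mirrored into a force pushing the left end-effector forward and to
# its right, producing a specular, symmetric motion of the two arms.
f_right = np.array([1.0, 0.5, 0.2])
f_left = mirror_force(f_right)   # -> [1.0, -0.5, 0.2]
```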
A showcase of this feature is visible in the second image
of Fig. 5, where the force applied on the right end-effector
has been mirrored to the left end-effector. We have also taken
advantage of this functionality to place a box, previously
picked up with the two CENTAURO arms, on a platform
(last image of Fig. 9).
V. EXPERIMENTAL VALIDATION
We have validated the TelePhysicalOperation method by
performing a number of tasks with the CENTAURO platform,
a quadruped body with wheels and a human-like torso with
two arms. During these experiments the operator applies
virtual forces to different robot body locations (i.e., the control
points) to execute different tasks. The control points were
selected by another person, the assistant pilot, through ROS
services, according to the instructions provided by the first
operator. The parameters of the joint mass-spring-damper
model presented in Eq. 5 have been experimentally tuned
to adjust the sensitivity of the robot generated motions, while
the virtual spring stiffness of the cameras $k_{cam}$ has been
experimentally set to 1.8 N/m, based on the level of sensitivity
in the produced motion that felt comfortable to the human
operator.
We have performed some initial tests on the latency introduced by our framework, where the embedded pc connected to the cameras was communicating with the pilot pc through
Wi-Fi (Fig. 4). The latency between the instant in which the cameras are polled and the instant in which the robot's commands are sent turned out to be, on average, less than 30 ms. The difference between the two cameras is due to the fact that we poll them one after the other.

              Mean [ms]   Std Dev [ms]
  Left cam      29.30        0.47
  Right cam     28.48        0.50

Fig. 4. Latency introduced by the TelePhysicalOperation framework. The time interval is calculated from the instant when the data is polled from each camera to the instant after the command references are sent to the robot (in Fig. 3, from the leftmost side of the “Embedded PC” box to the rightmost side of the “Main Control Node” box).

Fig. 5. CENTAURO's arms are moved by means of the TelePhysicalOperation interface. In the first image, each operator arm is applying a virtual force on the respective robot end-effector. In the second image, the Mirroring Motion feature is used: with one arm the operator applies two specular virtual forces on both robot end-effectors.
The first experiment demonstrates a showcase in which the
arms of CENTAURO are commanded within their workspace
(Fig. 5). It can be observed that the TelePhysicalOperation
interface results in postures of the robot that do not match
those of the human operator. This is an expected behavior,
intrinsic to the employed “Marionette” based control. Indeed,
this approach exploits the virtual forces generated by the
motions of the operator, rather than the operator's postural
state, to control the reference of the arm joints.
In the second experiment, a remote collaboration between
the operator and the robot is performed (Fig. 6). In this demonstration
the operator, thanks to the TelePhysicalOperation
interaction interface, moves the remote robot's end-effector to
a number of workspace locations, whose Cartesian positions
are recorded. In a second phase, the robot is commanded to
autonomously perform a trajectory that follows the
recorded positions. This demonstration in principle resembles
the teaching phase of a human-robot collaboration, but without
the need to physically interact with the robot body to guide its
end-effector. Indeed, using the proposed interface, the operator
can perform the teaching/demonstration phase remotely,
in a similar way to when the robot is guided through direct
physical interaction.
In the third experiment we show how, by applying two
virtual forces on the same arm, the operator can shape its pose
and make the end-effector reach a goal location while avoiding
an obstacle (Fig. 7). The task is to press a button obstructed
by some bricks. The operator first activates the shoulder and
elbow joints to go over the obstacle, and then bends the
wrist to reach the button from above. The plots in Fig. 8 show
the relevant data gathered during the experiment. The first
and third plots depict the camera inputs, i.e. the virtual
forces $f_{cp}$ of Eq. 2. For each input, the second and fourth
plots show the joint torques generated with Eq. 3 and with
the Blocking Link feature. Indeed, this feature nullifies the
contribution of the right camera to the torque of some joints,
because another virtual force is applied with the left camera
on an ancestor link. This permits the right camera to precisely
control only the Elbj, ForearmPlate, Wrj1, and Wrj2 joints
in the time interval from t = 0 s to t = 16 s, and only the
Wrj1 and Wrj2 joints after t = 20 s. The last three plots show
the state of the robot's right arm during the experiment; please
note that the Cartesian velocities (in the two bottom plots) have
been computed by differentiating the sensed robot joint positions,
hence they have been post-processed with a moving average
filter to improve the visualization.

Fig. 6. The TelePhysicalOperation interface is exploited for a remote collaboration. In the first image, the operator remotely guides the robot to specific workspace locations. In the second image, the robot autonomously executes a trajectory that follows the recorded locations.
In the last experiment, a complex environment is set up in
our laboratory (Fig. 9). The CENTAURO robot must first be
guided through a low passage, which requires commanding
the robot to perform a “squatting” motion. Then a
box must be picked up with the two arms from one location
and put down at another site. The final placement of the box
is made thanks to the Mirroring Motion feature, in which
the operator, with a single arm, applies two identical specular
virtual forces on the two robot end-effectors. This experiment
effectively validates the flexibility of applying up to two virtual
forces on different parts of the robot, according to the needs
of the task and the selection made by the operator. Plots are
shown in Fig. 10. As in Fig. 8, the first and
third plots show the input from the cameras. Differently from
the previous experiments, in this case the virtual forces are
used to generate not only postural motions (Section IV-A) for
the arms, but also Cartesian motions (Section IV-B) for the
locomotion of the robot body. The second and fourth plots show the
joint positions of the two arms.
All the described experiments have been recorded in the
video attached to this paper, also available at
https://youtu.be/dkBmbTyO_GQ.
VI. CONCLUSIONS
We have presented TelePhysicalOperation, a novel concept
that improves classical robot teleoperation by exploiting
virtual physical human-robot interactions, through the intuitiveness
of an approach that resembles controlling the remote
robot in a “Marionette” based manner. By applying virtual
forces selectively on points of the robot body, the operator
can effectively teleoperate the remote robot. A different number
of inputs can be applied on different robot parts, hence providing
more possibilities than a classical Cartesian controller
interface, which focuses mainly on end-effector control. A
control architecture has been developed to realize the proposed
concept, and an inexpensive motion tracking suit, the TPO Suit,
composed of tracking cameras, has been exploited to provide
the necessary inputs. The whole framework has been evaluated
on the CENTAURO robot, demonstrating good efficacy and the
ability of the operator to command the execution of a number
of tasks.
Fig. 7. In the button experiment, the goal is to command the robot arm to press a button obstructed by some bricks. The cyan and yellow marks indicate the control points where the virtual forces are applied by the operator's two arms. Thanks to the TelePhysicalOperation interface, the operator can shape the robot arm to avoid the obstacle and reach the goal.
Fig. 8. Plots for the button experiment. For each virtual force $f_{cp}$ generated by the right and left cameras (first and third plots, Eq. 2), joint torques $\tau_{cp}$ are computed (second and fourth plots, Eq. 3). The control points where the virtual forces are applied are indicated by the areas delimited by the vertical red dashed lines (i.e. Wrist, Elbow, End-Effector, and Forearm). The joint positions of the robot's arm during the experiment are visible in the fifth plot. In the two bottom plots, the End-Effector Cartesian velocities (filtered to improve the visualization) are shown.
Fig. 9. The three phases of the locomotion and pick & place box experiment: in the first image, the CENTAURO robot passes below the low passage; in the second, the left arm is being positioned on the side of the box; in the third, the box is being placed with the Mirroring Motion feature activated.
Fig. 10. Plots for the locomotion and pick & place box experiment. In the first and third plots, the camera inputs are shown. The different phases, in which different control points are chosen, are highlighted by the areas delimited by the vertical dashed red lines. For the Left and Right End-Effector control points, the inputs generate postural motions (Eq. 3). For the Locomotion, Locomotion yaw, and Squatting control points, the inputs generate Cartesian motions (Eq. 6). The joint positions of the right and left arms during the experiment are visible in the second and fourth plots, respectively.
Future work will concentrate on the development
of additional features that will increase the intuitiveness of
the proposed approach. The interface of the primary pilot
will be extended with a handheld device that will permit
the primary pilot to autonomously switch among different control
points, without the need to ask the assistant pilot to
perform this operation. Other future features will include the
incorporation of a gesture recognition module that can assist
the operator in initiating the execution of predefined motions, e.g.
primitive manipulation actions.
ACKNOWLEDGMENT
The authors want to thank Dr. Alessio De Luca for the
recording of the final experiments.
REFERENCES
[1] R. C. Goertz, “Fundamentals of general-purpose remote manipulators,”
Nucleonics, vol. 10, no. 11, pp. 36–42, 1952.
[2] Y. Liu and G. Nejat, “Robotic urban search and rescue: A survey from
the control perspective,” J. of Intell. Robot. Syst., vol. 72, pp. 147–165,
2013.
[3] T. Höglund, J. Alander, and T. Mantere, “A survey of telerobotic surface
finishing,” Open Engineering, vol. 8, pp. 156–161, 2018.
[4] G. Carra, A. Argiolas, A. Bellissima, M. Niccolini, and M. Ragaglia,
“Robotics in the construction industry: State of the art and future
opportunities,” in Int. Symp. Automat. Robot. Construction, 2018, pp.
866–873.
[5] Y. Mae, T. Inoue, K. Kamiyama, M. Kojima, M. Horade, and T. Arai,
“Direct teleoperation system of multi-limbed robot for moving on
complicated environments,” in IEEE Int. Conf. Robot. Biomimetics,
2017, pp. 1171–1174.
[6] J. Rebelo and A. Schiele, “Master-slave mapping and slave base
placement optimization for intuitive and kinematically robust direct
teleoperation,” in Int. Conf. Control Automat. Syst., 2012, pp. 2017–
2022.
[7] C. Stanton, A. Bogdanovych, and E. Ratanasena, “Teleoperation of a
humanoid robot using full-body motion capture, example movements,
and machine learning,” in Australas. Conf. Robot. Automat., vol. 8, 2012,
p. 51.
[8] A. Noccaro, F. Cordella, L. Zollo, G. Di Pino, E. Guglielmelli, and
D. Formica, “A teleoperated control approach for anthropomorphic
manipulator using magneto-inertial sensors,” in IEEE Int. Symp. Robot
Human Interactive Commun., 2017, pp. 156–161.
[9] K. Darvish et al., “Whole-body geometric retargeting for humanoid
robots,” in Int. Conf. Humanoid Robots, 2019, pp. 679–686.
[10] Y. Wu, P. Balatti, M. Lorenzini, F. Zhao, W. Kim, and A. Ajoudani,
“A teleoperation interface for loco-manipulation control of mobile
collaborative robotic assistant,” IEEE Robot. Autom. Lett., vol. 4, no. 4,
pp. 3593–3600, 2019.
[11] A. Ajoudani, N. Tsagarakis, and A. Bicchi, “Tele-impedance: Towards
transferring human impedance regulation skills to robots,” in IEEE Int.
Conf. Robot. Autom., 2012, pp. 382–388.
[12] G. Pagounis, P. Koustoumpardis, and N. Aspragathos, “Robot motion
control using emg signals and expert system for teleoperation,” in Adv.
in Service and Ind. Robot., 2020, pp. 137–148.
[13] L. Muratore et al., “Enhanced tele-interaction in unknown environments
using semi-autonomous motion and impedance regulation principles,” in
IEEE Int. Conf. Robot. Autom., 2018, pp. 5813–5820.
[14] P. Akella et al., “Cobots for the automobile assembly line,” in IEEE Int.
Conf. Robot. Autom., vol. 1, 1999, pp. 728–733.
[15] J. Krüger, T. Lien, and A. Verl, “Cooperation of human and machines
in assembly lines,” CIRP Annals, vol. 58, no. 2, pp. 628–646, 2009.
[16] M. S. Erden and T. Tomiyama, “Human-intent detection and physically
interactive control of a robot without force sensors,” IEEE Trans. Robot.,
vol. 26, no. 2, pp. 370–382, 2010.
[17] A. Cherubini, R. Passama, A. Crosnier, A. Lasnier, and P. Fraisse,
“Collaborative manufacturing with physical human-robot interaction,”
Robot. and Comput.-Integr. Manuf., vol. 40, pp. 1–13, 2016.
[18] W. Kim, P. Balatti, E. Lamon, and A. Ajoudani, “MOCA-MAN: A
MObile and reconfigurable Collaborative Robot Assistant for conjoined
huMAN-robot actions,” in IEEE Int. Conf. Robot. Autom., 2020, pp.
10191–10197.
[19] E. Lamon, F. Fusaro, P. Balatti, W. Kim, and A. Ajoudani, “A visuo-
haptic guidance interface for mobile collaborative robotic assistant
(moca),” in IEEE Int. Conf. Intell. Robots Syst., 2020, pp. 11253–11260.
[20] T. Klamt et al., “Flexible disaster response of tomorrow: Final presenta-
tion and evaluation of the centauro system,” IEEE Robot. Autom. Mag.,
vol. 26, no. 4, pp. 59–72, 2019.
[21] A. Laurenzi, E. M. Hoffman, L. Muratore, and N. Tsagarakis,
“CartesI/O: A ROS Based Real-Time Capable Cartesian Control Frame-
work,” in IEEE Int. Conf. Robot. Autom., 2019, pp. 591–596.
[22] L. Muratore, A. Laurenzi, E. Mingo Hoffman, and N. Tsagarakis, “The
XBot real-time software framework for robotics: From the developer
to the user perspective,” IEEE Robot. Autom. Mag., vol. 27, no. 3, pp.
133–143, 2020.