Design and Evaluation of Haptic Interface Wiggling Method for
Remote Commanding of Variable Stiffness Profiles
Jasper Schol1,2, Jelle Hofland1, Cock J.M. Heemskerk1, David A. Abbink2, and Luka Peternel2

1 Heemskerk Innovative Technology B.V., Delft, The Netherlands
2 Delft Haptics Lab, Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
Corresponding author: Luka Peternel (e-mail: l.peternel@tudelft.nl)
Abstract: Unlike many traditional stiff position-controlled robots, new collaborative robots interact with humans and operate in an environment that is often unpredictable and unknown. For safe and effective execution of manipulation tasks in such an environment, the robot needs to modulate its compliance. Therefore, the human operator must have a system that enables an intuitive demonstration of compliance skills to the robot. Ideally, this should also be possible through teleoperation in order to demonstrate skills at a distance, such as in remote home care applications or any other scenario where skilled operators are not physically present all the time. Existing state-of-the-art methods for remote demonstration of impedance skills either enable only limited modulation of the stiffness matrix, or they are too complex and cumbersome for practical applications. This research aims to overcome these limitations and proposes a teleoperated stiffness commanding method that enables complete modulation of the stiffness matrix in 3 degrees of freedom. The method uses the same haptic device hardware as is used for controlling the robot manipulator motion, hence it does not require extra specialised equipment for stiffness commands. By wiggling the endpoint of the haptic device, the stiffness is commanded to the robot and also fed back to the operator through haptic and visual feedback. To evaluate the performance and acceptance of the system, we performed a user study where the participants had to demonstrate various interaction behaviours to the remote robot. The results show how varying system parameters (i.e., degrees of freedom, orientation, and size of the stiffness commands) influence the performance of the system and user acceptance.
I. INTRODUCTION
Traditionally, industrial robots are stiff and position-controlled, without the ability to adjust the dynamic properties of the interaction. In combination with a well-organised and known environment, this simple method is effective in autonomous and repetitive manipulation tasks, such as assembly on a production line. However, robots that work in unstructured and unpredictable environments or interact with humans require dynamic control in order to efficiently deal with complex interaction challenges. In this respect, impedance control gives robots the ability to adapt the properties of dynamic interaction with the environment [1].
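To make the impedance-control baseline concrete, the following is a minimal sketch of a Cartesian impedance law in the spirit of [1]; the function name, the gain values, and the critical-damping heuristic are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def impedance_force(x, x_dot, x_ref, x_ref_dot, K, D):
    """Cartesian impedance law: the commanded end-effector force is a
    spring-damper response to deviations from the reference trajectory.
    K and D are 3x3 stiffness [N/m] and damping [Ns/m] matrices."""
    return K @ (x_ref - x) + D @ (x_ref_dot - x_dot)

# Illustrative gains: compliant along x and y, stiff along z.
K = np.diag([200.0, 200.0, 1000.0])
D = 2.0 * np.sqrt(K)  # common critical-damping heuristic (unit apparent mass assumed)

# A 1 cm position error along z yields a 10 N restoring force along z.
f = impedance_force(np.zeros(3), np.zeros(3),
                    np.array([0.0, 0.0, 0.01]), np.zeros(3), K, D)
```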
Besides pre-programming, there are two main types of approaches to demonstrate or command impedance skills to the robot: kinaesthetic guidance and teleoperation. In the first approach, the impedance parameters can be inferred from the data obtained by kinaesthetic guidance. For example, the method in [2] infers variable impedance skills from the measured force during kinaesthetic teaching. In [3], [4], the impedance is inferred through the variability in demonstrated kinematic trajectories; small variability in motion implies precision and thus high impedance, and vice versa. While these methods do not require special interfaces, the operator cannot control the impedance directly, which might lead to undesirable parameters. For example, motion through a narrow section, while in contact with the environment, has a small variability due to the environment constraint, and if the object is fragile, the resulting high stiffness can lead to unsafe interaction forces [5]. Furthermore, kinaesthetic guidance requires the physical presence of the human operator, which limits its use in applications such as remote home care. Therefore, a teleoperation approach might often be preferred over pure autonomy or kinaesthetic guidance. There are also methods that can learn compliant motion directly from teleoperated demonstrations [6], [7]; however, these too do not give the operator direct control over the impedance.
In [8], the concept of tele-impedance was introduced, where surface electromyography (sEMG) measured the muscle activation of the operator to estimate human arm stiffness through offline calibration. The human endpoint stiffness was then superimposed on the robot impedance controller to command its impedance parameters in real-time. This sEMG approach was later also used to teach impedance skills to robots [9], [10]. However, these methods all require complex offline identification techniques to find the sEMG-to-stiffness mapping, which is operator-specific and typically only locally valid (in the identified arm pose). Simplifications were introduced in [9], [11]; however, sEMG remains a rather complicated system with wearable sensors and time-consuming calibration procedures. Additionally, when force feedback is involved with an sEMG interface [12], [13], there is a coupling between the force feedback and the commanded impedance that takes away some direct control [14].
A fundamentally different approach to tele-impedance is to use the operator's grip force as an interface to command stiffness. The interface in [15] measured the grip force on the handle of the master device with pressure sensors. The measured force was then linearly mapped to the commanded robot stiffness and damping. A different approach was introduced in [5], where the impedance was commanded by the position of a button on a handheld control interface. The methods in [15] and [5] are more practical compared to sEMG; however, they enable control of only a limited number of degrees of freedom (DoF), while the stiffness matrix has many DoF related to magnitude and orientation. Therefore, some aspects of the stiffness matrix must be coupled to a single control DoF and independent modulation is not possible.
The methods in [16], [17] proposed to use kinaesthetically induced perturbations to command impedance to the robot. This technique relies on physically wiggling the robot structure around its reference trajectory to modulate the stiffness in the three translational DoF. However, this method requires close physical interaction to move the robot and therefore remote demonstration or control is not possible. Furthermore, if the robot is large, physical interaction and movements are not easy. Finally, environment constraints can prevent the manipulator from being moved in a certain direction, which can only be solved by commanding the robot with an offset in the trajectory [17]. Nevertheless, this method does provide the possibility to vary all aspects of the stiffness matrix without using sEMG.
Given the limitations of the existing methods, this work designs and evaluates a novel stiffness commanding interface that allows the operator to provide remote demonstrations of varying stiffness profiles in 3 DoF, where the three translational DoF can be varied in direction (stiffness matrix eigenvectors) and magnitude (stiffness matrix eigenvalues). To make the interface suited for real-world practical application, it is also important that the interface is easy to implement and does not use wearable devices or calibration procedures (i.e., the limitations of sEMG). In addition, demonstration or control of stiffness should not depend on the environment constraints or the kinematic trajectory. Additionally, the interface must be accepted by the operator such that stiffness behaviour can be easily and intuitively programmed.
Similar to [16], [17], the proposed method is based on perturbations around the kinematic reference trajectory in order to provide stiffness commands. Different from [16], [17], the proposed method does not induce perturbations by physically wiggling the robot. Instead, the haptic master device of the teleoperation setup is used to make virtual perturbations. During the reference motion trajectory, the operator moves a virtual marker around the current end-effector position of the robot by wiggling the haptic device. The larger the distance between the virtual marker and the end-effector (amplitude), the lower the stiffness, and the more compliant the end-effector of the robot will become in that direction. The commanded stiffness is updated online and fed back to the operator through a visualisation of the compliance ellipsoid (the inverse of the stiffness ellipsoid), and by providing stiffness-related forces from the haptic device. Therefore, the operator sees and feels the stiffness changes, which helps him/her to estimate to what extent the commanded stiffness matches what was intended. After the stiffness profile is demonstrated in a safe simulation environment, the robot can use the obtained stiffness profile in order to autonomously execute the task in real life.
The main contribution of this paper is a new teleoperation-based stiffness commanding method that allows direct modulation of all aspects of the stiffness matrix for the 3 translational DoF. The proposed method was evaluated by a user study in terms of performance and acceptance in various conditions.
II. METHOD DESIGN
The main objective of the proposed system is to enable human operators to remotely demonstrate new interaction skills to the robot. Fig. 1 shows the system overview with the most important software blocks, signals, and apparatus. The key element is the new stiffness commanding interface for varying the stiffness parameter of the remote robot's impedance controller. This work assumes an already established or learned kinematic trajectory $\mathbf{x}_c$, and no assumptions are made on how it was obtained (e.g., it can be done remotely through the same haptic device that is used for stiffness demonstration). Additionally, during the stiffness demonstration in simulation, the impedance controller is temporarily bypassed to facilitate the creation of the stiffness profile through remote demonstration.
A. Stiffness commands from perturbations
The proposed demonstration method is intended to command stiffness for the 3 translational DoF through wiggling of a haptic device. Therefore, the requirement for the hardware setup is a haptic device with at least 3 DoF. The stiffness matrix $\mathbf{K} \in \mathbb{R}^{3 \times 3}$ gives the relationship between contact forces and position errors, and we define it to be inversely proportional to a covariance matrix obtained from the demonstration. This covariance matrix is constructed based on a perturbation signal that is created by the operator while moving the haptic device in the robot base frame, which is aligned with the world frame. By wiggling a virtual marker point $\mathbf{x}_m \in \mathbb{R}^{3 \times 1}$ around the current robot end-effector position $\mathbf{x} \in \mathbb{R}^{3 \times 1}$, a perturbation vector $\tilde{\mathbf{x}}$ can be found for each time step $i$ as

$\tilde{\mathbf{x}}_i = \mathbf{x}_i - \mathbf{x}_{m,i}$.    (1)
The perturbation vectors are stored over time in a data matrix $\boldsymbol{\Xi} \in \mathbb{R}^{3 \times L}$. From this data matrix, a covariance matrix $\boldsymbol{\Sigma}_i \in \mathbb{R}^{3 \times 3}$ is constructed. Here $L = T \cdot f_s = T / dt$ is the total number of data points stored in the data matrix and hence the length of the sliding temporal window, where $T$ is the time span of the window in seconds and $f_s$ is the frequency in Hz at which the software runs. As time progresses with time step $dt$, a new perturbation vector is appended while the earliest vector is removed from the data matrix. Therefore, at time $t = t_1$, the data matrix contains the perturbation vectors in the range $[(t_1 - T), t_1]$ for $t_1 > T$. From this data matrix, the symmetric and positive definite (SPD) covariance matrix is found according to
$\boldsymbol{\Sigma}_i = \frac{1}{L} \sum_{i}^{L} (\tilde{\mathbf{x}}_i - \boldsymbol{\mu})(\tilde{\mathbf{x}}_i - \boldsymbol{\mu})^T, \qquad \boldsymbol{\mu} = \frac{1}{L} \sum_{i}^{L} \tilde{\mathbf{x}}_i$,    (2)
where $\boldsymbol{\mu} \in \mathbb{R}^{3 \times 1}$ is the average of the perturbation vectors in the data matrix. The three variances corresponding to the x-, y-, and z-axes are presented on the diagonal of the covariance matrix, and the off-diagonal elements represent the covariances (coupling terms) between them.

Fig. 1. System overview containing the most important software blocks, signals, and apparatus. The operator and the remote environment (blue sections) interact with the master and remote devices, respectively (green sections). The yellow section contains the software blocks and signals, and the purple section shows the visualisation based on the important signals or robot sensors. Since no impedance (compliant) controller is implemented, the stiffness command K is directly connected to the stiffness scaling.

The next step is to set the covariance matrix inversely proportional to the stiffness matrix. Therefore, the direction and the magnitude should be found. Since the covariance matrix is SPD, eigendecomposition gives
$\boldsymbol{\Sigma}_i = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^T$,    (3)

where $\mathbf{Q} \in \mathbb{R}^{3 \times 3}$ is a matrix containing the orthonormal eigenvectors (direction) and $\boldsymbol{\Lambda} \in \mathbb{R}^{3 \times 3}$ is a diagonal matrix composed of the eigenvalues $\lambda_i$, $i = 1, 2, 3$ (magnitude along the eigenvectors). We take the eigenvectors from (3) and use them to construct the stiffness matrix as

$\mathbf{K}_i = \mathbf{Q} \boldsymbol{\Gamma} \mathbf{Q}^T$,    (4)

where $\boldsymbol{\Gamma}$ is a diagonal matrix whose diagonal elements $\gamma_i$ are defined to be inversely related to the square root of the diagonal elements of matrix $\boldsymbol{\Lambda}$, such that $\sigma_i = \sqrt{\lambda_i}$. The inverse relation for each diagonal element $\gamma(\sigma_i)$ is given by

$\gamma(\sigma_i) = \begin{cases} \underline{K}, & \sigma_i > \overline{\sigma} \\ \overline{K} - \frac{\overline{K} - \underline{K}}{\overline{\sigma} - \underline{\sigma}} (\sigma_i - \underline{\sigma}), & \underline{\sigma} \leq \sigma_i \leq \overline{\sigma} \\ \overline{K}, & \sigma_i < \underline{\sigma} \end{cases}$    (5)
where $\sigma_i$ is a measure of the amplitude of the perturbations (wiggles), and the minimum and maximum allowed perturbations $\underline{\sigma}$ and $\overline{\sigma}$ are tunable parameters. Since the diagonal elements $\gamma_i$ of $\boldsymbol{\Gamma}$ should be bounded between the stiffness limits of the impedance controller, the stiffness diagonal $\gamma$ is set inversely proportional to the perturbation measure $\sigma$. Therefore, the minimum and maximum allowed perturbations $[\underline{\sigma}, \overline{\sigma}]$ are related to the minimum and maximum allowed stiffness limits of the impedance controller. The minimum and maximum stiffness are denoted by $[\underline{K}, \overline{K}]$.
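To make the pipeline of Eqs. (1)-(5) concrete, the sketch below assembles a stiffness matrix from a window of perturbation vectors; the function name and the use of NumPy are illustrative assumptions, while the parameter values follow the examples given later in Sec. II-D.

```python
import numpy as np

def stiffness_from_perturbations(Xi, K_min=100.0, K_max=1000.0,
                                 sigma_min=0.0707, sigma_max=0.3):
    """Map a window of perturbation vectors Xi (shape L x 3, Eq. 1) to a
    3x3 stiffness matrix: covariance (Eq. 2) -> eigendecomposition (Eq. 3)
    -> inverse amplitude-to-stiffness mapping (Eq. 5) -> recomposition (Eq. 4)."""
    Sigma = np.cov(Xi, rowvar=False, bias=True)        # Eq. (2), 1/L normalisation
    eigvals, Q = np.linalg.eigh(Sigma)                  # Eq. (3), SPD matrix
    sigma = np.sqrt(np.maximum(eigvals, 0.0))           # sigma_i = sqrt(lambda_i)
    s = np.clip(sigma, sigma_min, sigma_max)            # saturate outside [sigma_min, sigma_max]
    gamma = K_max - (K_max - K_min) * (s - sigma_min) / (sigma_max - sigma_min)  # Eq. (5)
    return Q @ np.diag(gamma) @ Q.T                     # Eq. (4)

# Example: wiggling mostly along z yields low stiffness along z and
# high stiffness along x and y.
rng = np.random.default_rng(0)
Xi = np.column_stack([rng.normal(0, 0.01, 100),
                      rng.normal(0, 0.01, 100),
                      rng.normal(0, 0.2, 100)])
K = stiffness_from_perturbations(Xi)
```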
B. Visual feedback
Since stiffness commands are given using a remote-demonstration setup, it is important for the operator to have a good understanding of the remote robot and the environment. Additionally, to understand the system itself, visual cues are known to improve user acceptance and performance in haptic shared control systems [18], [19]. To design the visual feedback system, we divided the operator's screen into 3 sections (see Fig. 2). The top-right side of the screen shows a camera stream from the robot's head camera, and the bottom-right shows the end-effector camera stream. The left side of the screen provides a simulated environment based on robot sensors. Here, the current robot state and additional visualisations of the signals are presented. The figure also shows the task of opening a microwave door.
The simulated environment includes 1) a model of the current robot state and 2) a point cloud that shows the environment, constructed from the depth camera. In the simulated view, 3) the commanded end-effector trajectory is visualised to help in understanding the future motion of the end-effector. Furthermore, 4) the virtual marker is presented as a red sphere that is controlled by the endpoint of the haptic device. By moving the marker away from the end-effector, the marker colour gradually changes from green to red, providing a sense of depth. Finally, 5) the compliance ellipsoid is used to visualise the demonstrated stiffness and to help operators understand the effect of the input motion commands on the stiffness. Although the compliance ellipsoid is the inverse of the stiffness ellipsoid, the choice was made to visualise compliance, since this naturally matches the movement of the virtual marker $\mathbf{x}_m$. For example, if the operator wiggles the haptic device along the z-axis, the compliance ellipsoid forms a cigar-shaped ellipsoid with the long axis aligned with the z-axis. This is intuitive since the long axis forms along the line of movement of the wiggling motion, reflecting the lower stiffness along the z-axis.

Fig. 2. The left picture shows an overview of the simulated remote environment. The top-right and bottom-right pictures show the visual feedback provided to the operator based on the head and wrist camera streams. The task mimics the process of opening the door of a microwave.
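As a sketch of how the compliance-ellipsoid visual feedback could be generated, the helper below converts a stiffness matrix into compliance-ellipsoid axes for rendering; the function name and the rendering scale factor are assumptions made for illustration.

```python
import numpy as np

def compliance_ellipsoid_axes(K, scale=0.05):
    """Return the semi-axis directions (columns of Q) and lengths of the
    compliance ellipsoid C = inv(K). Compliant (low-stiffness) directions
    get long axes; `scale` is an assumed factor mapping compliance to metres."""
    C = np.linalg.inv(K)
    compliances, Q = np.linalg.eigh(C)   # eigenvalues of C are 1/stiffness
    return Q, scale * compliances

# Wiggling along z (low stiffness in z) gives a cigar-shaped ellipsoid
# whose long axis is aligned with the z-axis.
K = np.diag([1000.0, 1000.0, 100.0])
Q, semi_axes = compliance_ellipsoid_axes(K)
```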
C. Haptic feedback
Typically, haptic feedback generated by the master device is used to feel the forces from the remote robot. In this work, however, we use a different method to provide haptic feedback: the demonstrated manipulator stiffness is explicitly made observable through the haptic device by scaling the commanded stiffness and using it to produce a virtual stiffness force. If the endpoint of the haptic device moves away from its zero position, the virtual stiffness force pulls it back to the zero position, similar to how an impedance-controlled manipulator would move back to its equilibrium trajectory. Therefore, the operator can sense the effects of the demonstrated stiffness directly without an actual impedance controller being present. This allows commanding of stiffness even for simulated motion where contact forces are absent. Since the operator evaluates the stiffness directly, it allows for quick adjustments during the motion. Additionally, when a fixed stiffness is set (e.g., from earlier demonstrations), it also allows users to feel and improve upon the earlier commanded stiffness. It is important to convey this information haptically since this is how people naturally evaluate stiffness [16]. Furthermore, it supplements the compliance ellipsoid visualisation since the stiffness can be seen and felt simultaneously.
The virtual stiffness force is calculated by multiplying the scaled-down manipulator stiffness $\mathbf{K}_s \in \mathbb{R}^{3 \times 3}$ with the deviation of the haptic device from its zero position. To set (or limit) the force range that the haptic device can produce, the stiffness should be scaled such that the haptic device uses the full or a defined range of force. This is done by defining the maximum allowed deviation $\overline{x}_{hd}$ of the haptic device and the minimum and maximum force limits $[\underline{f}_{hd}, \overline{f}_{hd}]$ of the haptic device (at maximum haptic device deviation). Subsequently, the minimum and maximum allowed stiffness for the haptic device deviation are defined as

$\underline{K}_{hd} = \frac{\underline{f}_{hd}}{\overline{x}_{hd}}, \qquad \overline{K}_{hd} = \frac{\overline{f}_{hd}}{\overline{x}_{hd}}$.    (6)

By relating the minimum and maximum manipulator stiffness limits $[\underline{K}, \overline{K}]$ to the minimum and maximum allowed haptic device stiffness limits, the scaled-down stiffness can be found.
In order to scale the stiffness matrix $\mathbf{K}$, eigenvalue decomposition of the stiffness matrix has to be performed such that the eigenvalues can be scaled. Using the eigenvalues of (4), the diagonal matrix $\mathbf{K}_{s,e}$ with the scaled eigenvalues is given by

$\mathbf{K}_{s,e} = \underline{\mathbf{K}}_{hd} + \frac{\overline{K}_{hd} - \underline{K}_{hd}}{\overline{K} - \underline{K}} (\boldsymbol{\Gamma} - \underline{\mathbf{K}})$,    (7)

where $\underline{\mathbf{K}}_{hd}$, $\underline{\mathbf{K}}$, and $\overline{\mathbf{K}}$ are diagonal matrices with $\underline{K}_{hd}$, $\underline{K}$, and $\overline{K}$ on their diagonals, respectively. Once again, using (4), the scaled stiffness matrix is found using the eigenvectors of the decomposition as

$\mathbf{K}_s = \mathbf{Q} \mathbf{K}_{s,e} \mathbf{Q}^T$.    (8)

Subsequently, the force feedback is calculated by

$\mathbf{f}_{hd} = \mathbf{K}_s \mathbf{x}_{hd}$,    (9)

where $\mathbf{f}_{hd} \in \mathbb{R}^{3 \times 1}$ is the virtual stiffness force produced by the haptic device and $\mathbf{x}_{hd} \in \mathbb{R}^{3 \times 1}$ is its endpoint deviation away from the zero position.

Fig. 3. The top plot shows a sinusoidal perturbation signal along the x-direction and the bottom plot shows the resulting stiffness commands.
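The scaling of Eqs. (6)-(9) can be sketched as follows; the function name and the placeholder limit values are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def haptic_feedback_force(K, x_hd, K_lims=(100.0, 1000.0),
                          f_hd_lims=(0.5, 3.0), x_hd_max=0.05):
    """Scale the commanded manipulator stiffness K (3x3) into the haptic
    device range (Eqs. 6-8) and compute the virtual stiffness force for a
    device deviation x_hd from its zero position (Eq. 9).
    K_lims, f_hd_lims, and x_hd_max are illustrative placeholder limits."""
    K_min, K_max = K_lims
    f_min, f_max = f_hd_lims
    K_hd_min, K_hd_max = f_min / x_hd_max, f_max / x_hd_max   # Eq. (6)
    eigvals, Q = np.linalg.eigh(K)                            # decomposition as in Eq. (4)
    scaled = K_hd_min + (K_hd_max - K_hd_min) / (K_max - K_min) * (eigvals - K_min)  # Eq. (7)
    Ks = Q @ np.diag(scaled) @ Q.T                            # Eq. (8)
    return Ks @ x_hd   # Eq. (9); applied by the device as a pull back to zero

# A 2 cm deviation along a stiff axis produces a larger restoring force
# than the same deviation along a compliant axis.
K = np.diag([1000.0, 1000.0, 100.0])
f = haptic_feedback_force(K, np.array([0.02, 0.0, 0.02]))
```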
D. Stiffness behaviour
This section shows how the perturbation signal (operator input) influences the stiffness commands and how the parameters influence the system. Given the stability-related stiffness limits of the manipulator, the parameters that influence the stiffness commands are the number of data points in the sliding window $L = T \cdot f_s = T / dt$, and the parameters $\underline{\sigma}$ and $\overline{\sigma}$. The sliding window length along with the frequency rate of the software determines the time it takes to completely refresh the window with data points. The time it takes to refresh the window mainly influences the rate of stiffness changes during the demonstration. Therefore, a window that takes long to refresh corresponds to a slow rate of change of stiffness, and vice versa. The parameters $\underline{\sigma}$ and $\overline{\sigma}$ represent the minimum and maximum standard deviation of the data in the data matrix $\boldsymbol{\Xi}$ and are related to the minimum and maximum allowed deviation of the haptic device endpoint. Given the inverse relation of (5), motion below $\underline{\sigma} = 0.0707$ increases the stiffness to the maximum limit $\overline{K} = 1000$ N/m, and motion above $\overline{\sigma} = 0.3$ decreases it to the minimum limit $\underline{K} = 100$ N/m of the manipulator. This effect is shown in Fig. 3.
The sliding window length and software frequency are $L = 100$ and $f_s = 100$ Hz. Therefore, the time it takes to completely refresh the window is $T = 1$ s. This can be observed by comparing the top and bottom graphs of Fig. 3. In the top graph, at approximately 4.5 seconds, a sinusoid with an amplitude larger than the threshold starts. After 1 second the window is completely refreshed with new data points, which is shown in the bottom graph, where the stiffness has flattened to 600 N/m at 5.5 seconds.
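A minimal sketch of this sliding-window behaviour is given below; the buffer type and variable names are assumptions, while the rate, window length, and refresh time match the values stated in this section.

```python
from collections import deque
import numpy as np

fs = 100.0               # software rate [Hz]
T = 1.0                  # window time span [s]
L = int(T * fs)          # number of samples in the window (L = T * fs = 100)

window = deque(maxlen=L)  # the oldest perturbation vector drops out automatically

def push_perturbation(window, x_robot, x_marker):
    """Append the newest perturbation vector (Eq. 1) and return the current
    data matrix. After L pushes (i.e. T seconds) the window is fully refreshed,
    which is why the stiffness in Fig. 3 settles about 1 s after the input changes."""
    window.append(np.asarray(x_robot) - np.asarray(x_marker))
    return np.asarray(window)   # data matrix, shape (<=L, 3)
```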
III. EXPERIMENT
An important aspect of the method is that non-expert operators can use it to demonstrate stiffness skills. We evaluated the method in an experimental user study, where the operator was instructed to recreate various compliance ellipsoids (representing the stiffness matrix) as accurately as possible. The quantitative aspects were evaluated based on the trial time and by comparing the ellipsoid demonstrated by the operator to the reference ellipsoid. The reference ellipsoids varied in size and orientation, and required either 1 or 2 DoF commands. Note that the references were provided as 3D ellipsoids, therefore the user had to reproduce 3D ellipsoids through either the 1 DoF or the 2 DoF command mode. In the 1 DoF command mode, different stiffness axes were demonstrated sequentially one by one, while in the 2 DoF mode multiple axes were demonstrated simultaneously. The subjective aspects of user acceptance were evaluated with the van der Laan questionnaire [20].
First, we hypothesised that 1 DoF stiffness commands have higher performance scores than 2 DoF stiffness commands. To describe task complexity, we used the theoretical model in [21], which implies that simultaneous actions contribute to increased overall task complexity. Thus, the level of task complexity in executing simultaneous actions (as in the 2 DoF mode) may exceed the capacity of the individual operator, which leads to lower performance.
Second, we hypothesised that commanding compliance in the horizontal (or transverse) plane yields lower performance scores compared to commands in the vertical (or frontal) plane. The reason is that commands in the horizontal plane require the operator to predominantly use visual feedback with a top-down view of that plane, whereas the natural human viewpoint is in the vertical plane. Therefore, a misalignment exists between the operator view and the control input, which does not exist in the vertical plane. In [22] it was shown that, to improve teleoperation, a setup should minimise control and view rotations. Additionally, the manipulability of the human arm changes with the arm configuration [23], and can be different when operating in the horizontal plane and in the vertical plane. While we found no research that directly relates the effect of human arm manipulability in different planes to teleoperated task performance, it could contribute to an increase or decrease in performance.
Finally, we hypothesised that larger shapes take more time to demonstrate compared to small shapes. To create large shapes, the haptic device endpoint has to travel a greater distance, resulting in more time needed per trial. No effect was expected in terms of similarity between the reference and the user-demonstrated stiffness ellipsoid.
A. Participants
Eight male participants (age: M = 25.15, SD = 2.53) volunteered and were included in the experiment. All participants gave their prior consent and the experiment was approved by the Human Research Ethics Committee of the Delft University of Technology.
B. Experiment setup and protocol
The experiments were performed on a remote-demonstration setup consisting of a 3D Systems Touch haptic device, which measured the operator's position commands and provided force feedback about the remote robot. The remote robot was simulated in Gazebo. The computer screen in front of the participant provided the visual feedback. Before starting the experiment, each participant was provided with a description of the setup, the method, and the task instructions. A familiarisation trial was performed prior to the experiment. In the experiment, the participant had to demonstrate a given stiffness profile in four different conditions, where each condition was preceded by a practice run to get used to that specific condition. After the final condition, the participants were asked to fill in a van der Laan questionnaire along with four additional questions complementing the questionnaire:
- What did you like or find helpful?
- What did you find undesirable or hard?
- Which condition did you find most difficult and why?
- Do you have any remarks?
The experiment conditions were defined as a combination of 1 or 2 DoF stiffness commands, and whether they were commanded in a horizontal or vertical plane, leading to the following notations: 1 DoF – horizontal, 1 DoF – vertical, 2 DoF – horizontal, and 2 DoF – vertical. Within each condition, the ellipsoids were varied in size (small, large) and orientation (0, 45, 90, and 135 degrees). The 1 DoF reference ellipsoids had one long axis whose size was either large (0.46) or small (0.25), forming the shape of a "cigar". The 2 DoF ellipsoids had the same long axis in addition to a second axis that was half the size of the long axis. These two axes formed an "oval" in the plane of their corresponding condition. Both the 1 and 2 DoF ellipsoids were rotated in their plane by either 0, 45, 90, or 135 degrees. Within each condition, every combination of size and orientation was repeated four times. The 2 DoF conditions had two additional ellipsoids, a small and a large "circle", which were both repeated four times. They were not rotated within their plane, since all orientations result in the same compliance ellipsoid. A sketch of how such reference ellipsoids could be constructed is given below.
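The following is a minimal sketch, under assumed conventions, of how a reference compliance ellipsoid for one trial could be parameterised by condition (1 or 2 DoF), plane, long-axis size, and in-plane rotation; the function, the size of the remaining short axes, and the axis conventions (x-y for the horizontal plane, x-z for the vertical plane) are illustrative assumptions.

```python
import numpy as np

def reference_ellipsoid(long_axis, dof=1, plane="horizontal", angle_deg=0.0):
    """Return the three semi-axis lengths and the rotation matrix of a
    reference compliance ellipsoid: a 'cigar' (1 DoF) or an 'oval' whose
    second axis is half the long axis (2 DoF), rotated in its plane."""
    short = 0.05  # assumed small size of the remaining axes
    if plane == "horizontal":        # assumed: ellipsoid lies in the x-y plane
        axes = np.array([long_axis, long_axis / 2 if dof == 2 else short, short])
        rot_axis = np.array([0.0, 0.0, 1.0])   # rotate about z
    else:                            # assumed: vertical condition uses the x-z plane
        axes = np.array([long_axis, short, long_axis / 2 if dof == 2 else short])
        rot_axis = np.array([0.0, 1.0, 0.0])   # rotate about y
    a = np.deg2rad(angle_deg)
    x, y, z = rot_axis
    # Rodrigues' formula for a rotation about a unit axis.
    Kx = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])
    R = np.eye(3) + np.sin(a) * Kx + (1 - np.cos(a)) * (Kx @ Kx)
    return axes, R

# A large 2 DoF reference in the vertical plane, rotated by 45 degrees.
axes, R = reference_ellipsoid(0.46, dof=2, plane="vertical", angle_deg=45.0)
```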
C. Performance measures
To measure the performance of the participants in the individual trials, we defined the completion time and three accuracy-related metrics. The trial time metric was defined as the time, in seconds, that it took the participant to recreate the reference ellipsoid. The time started when the new reference ellipsoid spawned and was stopped by the operator when satisfactory performance was reached. This was done by pressing a button, which also automatically spawned the next reference ellipsoid. High trial times corresponded to lower performance, and vice versa.
The accuracy of the demonstrated ellipsoids was evaluated by decomposing the stiffness matrix into orientation (eigenvectors) and size (eigenvalues) components according to (4). Thus, the error between the reference and the demonstrated ellipsoid was evaluated by comparing the eigenvalues and eigenvectors. The first accuracy-related metric, absolute average size error $s$, was defined as the average of the absolute error between the reference and the demonstrated ellipsoid axes:

$s = \frac{1}{n} \sum_{i=1,2,3} |\sigma_{ref,i} - \sigma_{com,i}|$    (10)
The second accuracy-related metric, relative average size accuracy $s_{acc}$, was defined as the average error between the reference and the demonstrated ellipsoid axes, relative to the reference ellipsoid. Additionally, the score was converted to a percentage such that it could be presented to the operator as a convenient feedback score during the trials. The score ranged between 0 and 100%, where 100% represented a perfect match in size, and was defined as

$s_{acc} = 100 - \frac{1}{n} \sum_{i=1,2,3} \frac{|\sigma_{ref,i} - \sigma_{com,i}|}{\sigma_{ref,i}} \times 100$    (11)
The third accuracy-related metric, orientation error $\alpha$, was defined as the smallest absolute angle between the reference orientation and the demonstrated orientation for an arbitrary axis in the axis-angle framework. The absolute angle was derived from a distance metric for 3D rotations based on the inner product of unit quaternions, $\phi$ [24]:

$\phi = \arccos\left(|\mathbf{q}_{ref} \cdot \mathbf{q}_{com}|\right)$    (12)

The distance metric $\phi$ was scaled by a factor to represent the angle in radians. Subsequently, because of the symmetry of the ellipsoid, the range was halved to $[0, \frac{\pi}{2}]$ rad as

$\alpha = \begin{cases} \pi - 2\phi, & 2\phi > \frac{\pi}{2} \\ 2\phi, & \text{otherwise} \end{cases}$    (13)

The angle $\alpha$ was also represented to the operator as a feedback accuracy score $\alpha_{acc}$ in (14). Similar to the size accuracy, 100% represented a perfect match in orientation. The feedback score was identical to $\alpha$, only scaled to the range $[0, 100\%]$:

$\alpha_{acc} = 100 - \left(\alpha \frac{2}{\pi} \times 100\right)$    (14)
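A compact sketch of these accuracy metrics (Eqs. 10-14) is given below; the function names are assumptions, and the quaternions are taken as unit quaternions in any consistent component ordering.

```python
import numpy as np

def size_metrics(sigma_ref, sigma_com):
    """Absolute average size error (Eq. 10) and relative average size
    accuracy in percent (Eq. 11) over the three ellipsoid axes."""
    sigma_ref, sigma_com = np.asarray(sigma_ref), np.asarray(sigma_com)
    s = np.mean(np.abs(sigma_ref - sigma_com))                                   # Eq. (10)
    s_acc = 100.0 - np.mean(np.abs(sigma_ref - sigma_com) / sigma_ref) * 100.0   # Eq. (11)
    return s, s_acc

def orientation_metrics(q_ref, q_com):
    """Orientation error alpha in radians (Eqs. 12-13) and the 0-100%
    feedback score (Eq. 14), from unit quaternions."""
    phi = np.arccos(np.clip(abs(np.dot(q_ref, q_com)), 0.0, 1.0))            # Eq. (12)
    alpha = np.pi - 2.0 * phi if 2.0 * phi > np.pi / 2.0 else 2.0 * phi      # Eq. (13)
    alpha_acc = 100.0 - alpha * (2.0 / np.pi) * 100.0                        # Eq. (14)
    return alpha, alpha_acc

# Example: identical orientations give alpha = 0 and a 100% score.
alpha, alpha_acc = orientation_metrics(np.array([1.0, 0, 0, 0]),
                                       np.array([1.0, 0, 0, 0]))
```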
IV. RESULTS

We performed a statistical analysis to test the null hypothesis H0 (i.e., equal medians) of the different conditions using the Wilcoxon signed-rank test. The results of comparing the performance of the commanded DoF (1 vs 2), plane (horizontal vs vertical), and size (large vs small) are presented in Table I. Additionally, Fig. 4 visualises the most important quantitative and subjective results.
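As an illustration of the statistical test used here, the snippet below runs a paired Wilcoxon signed-rank comparison with SciPy; the arrays contain made-up placeholder values, not data from this study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired observations (e.g., trial times of the same trials
# under the 1 DoF and 2 DoF conditions); values are illustrative only.
t_1dof = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.0, 5.4, 6.7])
t_2dof = np.array([6.8, 7.9, 6.1, 9.0, 7.4, 7.7, 6.5, 8.2])

# Non-parametric paired test of the null hypothesis of equal medians.
statistic, p_value = wilcoxon(t_1dof, t_2dof)
print(f"W = {statistic:.1f}, p = {p_value:.4f}")
```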
Comparing 1 DoF commands with 2 DoF commands, Table I, Fig. 4a, and Fig. 4b show that all metrics differ significantly with p < 0.001. As hypothesised, all median values show higher performance scores in 1 DoF compared to 2 DoF, confirming the first hypothesis.
Secondly, it was hypothesised that performance in the horizontal plane is lower compared to the performance in the vertical plane for all the performance metrics. As indicated in the legend of Fig. 4a and Fig. 4b, a statistical difference is observed only for the orientation error, p < 0.001. Furthermore, Table I reveals that the absolute and relative average size errors (p = 0.73 and p = 0.14) are not statistically significant. The performance in the horizontal plane is therefore lower compared to the performance in the vertical plane in the sense that it took the operator more time to demonstrate an ellipsoid (trial time, p < 0.001). Furthermore, the demonstrated ellipsoids are less similar to their reference ellipsoid, where the difference in similarity stems from the error in orientation (p < 0.001) and not from the size of the stiffness commands.
Finally, it was hypothesised that the performance for large shapes is lower than for small shapes (for the trial time only). Table I confirms this with p < 0.001; however, the orientation error (p = 0.004) and the absolute average size error (p < 0.001) also show statistically significant differences. The median values for large shapes reveal a (small) increase in performance score for the orientation error, but a lower performance score for the absolute average size error.
All eight participants reported a positive experience in terms of the usefulness of the stiffness commanding method (see Fig. 4c). However, not all participants were satisfied with the stiffness commanding interface. Out of the eight participants, six reported a positive satisfaction score, one was neutral, and one was not satisfied. Comments during the experiment and feedback from the questions described in Sec. III-B indicate that the force feedback was either helpful or tiring. All participants perceived the 2 DoF commands as the more difficult mode. Six participants reported that the horizontal plane was the more difficult view. Additionally, ellipsoids oriented diagonally between two axes, with a 45 or 135 degree rotation, were considered more difficult. Finally, comments were made on the visualisations, where the orientation of small ellipsoids was difficult to see.
TABLE I
STATISTICAL ANALYSIS: DESCRIPTIVE STATISTICS (LEFT) AND INFERENTIAL STATISTICS (RIGHT)

Metric \ Condition            |    | 1 DoF (493) | 2 DoF (493) | Horizontal (555) | Vertical (555) | Large (556) | Small (556) | 1 DoF = 2 DoF | Horizontal = Vertical | Large = Small
trial time [s]                | Q2 | 5.68        | 6.88        | 7.06             | 6.14           | 8.22        | 5.60        | w = 43620     | w = 62291             | w = 32307.5
                              | Q1 | 3.34        | 4.02        | 4.11             | 3.47           | 4.28        | 3.29        | p < 0.001*    | p < 0.001*            | p < 0.001*
                              | Q3 | 9.72        | 11.74       | 11.51            | 10.75          | 13.90       | 8.75        |               |                       |
absolute average size         | Q2 | 0.92e-2     | 1.49e-2     | 1.25e-2          | 1.27e-2        | 1.63e-2     | 0.98e-2     | w = 34089     | w = 75851             | w = 71071.5
error [-] [0.14 - 0.33] (2)   | Q1 | 0.45e-2     | 0.91e-2     | 0.74e-2          | 0.68e-2        | 0.94e-2     | 0.53e-2     | p < 0.001*    | p = 0.73              | p < 0.001*
                              | Q3 | 1.65e-2     | 2.26e-2     | 2.16e-2          | 2.00e-2        | 2.55e-2     | 1.59e-2     |               |                       |
relative average size         | Q2 | 97.21       | 93.70       | 95.22            | 95.57          | 95.56       | 95.22       | w = 18487     | w = 70566             | w = 32504
accuracy [%] (1) [0-100] (3)  | Q1 | 98.65       | 95.91       | 97.48            | 97.67          | 97.50       | 97.72       | p < 0.001*    | p = 0.14              | p = 0.11
                              | Q3 | 94.99       | 90.06       | 92.29            | 92.86          | 92.75       | 92.15       |               |                       |
orientation error [deg]       | Q2 | 6.65        | 17.41       | 14.79            | 10.87          | 11.45       | 12.92       | w = 15658     | w = 54113             | w = 66391
[0-90] (3)                    | Q1 | 2.98        | 10.83       | 6.56             | 5.20           | 5.71        | 5.84        | p < 0.001*    | p < 0.001*            | p = 0.004*
                              | Q3 | 14.13       | 26.09       | 23.87            | 18.26          | 20.34       | 21.96       |               |                       |

Q1 is the first quartile, Q2 is the median, and Q3 is the third quartile of the respective data sets.
(1) Different from the other metrics, high scores correspond with high performance.
(2) Presents the minimum and maximum average size of the reference ellipsoids.
(3) Presents the range of the minimum and maximum scores.
* Significant difference (p <= 0.05), which rejects the null hypothesis H0.

Fig. 4. Quantitative (a), (b) and subjective (c) results. The main metrics (a) average shape error and (b) absolute angle are compared for the commanded DoF and plane, where significance is denoted by: * p <= 0.05, ** p <= 0.01, *** p <= 0.001. (c) Presents the van der Laan acceptance scores [20] that evaluate the stiffness commanding method. The horizontal axis represents the usefulness scale and the vertical axis represents the satisfying scale, where the self-reported scores range from -2 (negative) to 2 (positive).

V. DISCUSSION

The trial time is an indicator of overall performance for the plane and DoF conditions since it shows how quickly the participant reached satisfactory performance. It should be noted that if a subsequent trial featured a differently oriented compliance ellipsoid, it took 2 seconds to rotate the compliance ellipsoid in the correct direction. Two seconds was the time needed for the temporal sliding window to contain an all-new perturbation signal. The remaining time was the time needed for the operator to reach satisfactory stiffness commands/performance.
As hypothesised, the results show that all performance metrics have significantly higher scores for 1 DoF commands compared to 2 DoF. The 1 DoF command mode might be preferred when demonstrating or adjusting a specific axis magnitude of an ellipsoid, since it is quick and accurate. On the other hand, when using the 1 DoF command mode to demonstrate multiple axes of an ellipsoid, the operator has to control the magnitude of the individual axes independently and sequentially, which can be less convenient. Therefore, the task and conditions at hand should dictate the choice of the mode.

Furthermore, all participants perceived the 2 DoF commands as more difficult, and additional participant feedback suggests that controlling the pitch in 2 DoF was especially difficult. In theory, the interface also allows simultaneous commanding in 3 DoF. Following the trend, increasing the DoF would decrease the performance even further and is expected to be too difficult. Therefore, we recommend multiple demonstrations in order to modulate the stiffness in 3 DoF independently.
The second hypothesis expected a decrease in performance for horizontal commands compared to vertical commands. This hypothesis is partly satisfied, since the overall performance did indeed decrease. The metrics trial time and orientation error showed a significant decrease in performance, while the relative and absolute size errors did not. A potential reason could be that horizontal commands required the operator to mainly focus on the less natural top-down view of the remote/simulated scene.

However, this is likely not the only effect that contributed to a difference in orientation error. Due to manipulability properties, the kinematic structure of the human arm allows for easier movements in certain directions, while being more resistant to perturbation forces in other directions [23]. Moreover, the study in [25] revealed a significant and consistent anisotropy in force magnitude perception along the three-dimensional axes. Therefore, different perceptions of force (or even movement) could contribute to the difference in performance between the horizontal and vertical planes. This is even more likely since the feedback from the participants revealed that they had more difficulties in orienting their stiffness commands along the diagonals (45 and 135 degrees of rotation) within the horizontal or vertical plane conditions.

It can be concluded that the commanded direction and/or the reference view of the operator influences the performance of the stiffness commands through the eigenvectors (orientation) only. Therefore, the viewpoint of the operator, the direction of the stiffness commands, and the alignment with the robot reference frame should be carefully considered when maximising performance for a task demonstration. Future work could try to distinguish which effects contribute the most in order to further improve the interface.
Finally, it was hypothesised that trial times for large ellipsoids are higher than for small ellipsoids, which is confirmed by the results. Commands that require larger movements naturally take more time compared to small movements. However, another effect is expected to contribute to the increased trial times. User feedback reveals that the difference between the demonstrated ellipsoid and the reference ellipsoid was difficult to see for small sizes. Therefore, participants could spend more time on correcting their orientation and size errors for large ellipsoids, since they were more aware of these errors. This is in accordance with the small but significant decrease in orientation error for large sizes, which suggests that participants were able to improve performance on orientation when allowed more time and clearer visuals.

Furthermore, the results show that the error in absolute size is significantly greater for large shapes, while the relative size error is not. Large sizes result in a larger absolute over- and undershoot, but this remains in proportion to the size of the reference ellipsoid.
The van der Laan acceptance score indicated that the method was perceived as useful and intuitive. However, not everybody was satisfied. Participant feedback clarifies that participants either liked or disliked the force feedback. Additionally, some participants reported that the force feedback becomes tiring after a while. Depending on the participant, the experiment time ranged between 45 and 75 minutes. Since the method is intended for teaching impedance behaviour, it is unlikely that stiffness will be commanded for such long periods. To increase user satisfaction, the simplest solution would be to lower the force feedback to prevent the operators from getting tired. Instead, the forces could be visualised on the screen.
REFERENCES

[1] N. Hogan, "Impedance control - An approach to manipulation. I - Theory. II - Implementation. III - Applications," ASME Transactions Journal of Dynamic Systems and Measurement Control B, vol. 107, pp. 1–24, Mar. 1985.
[2] F. J. Abu-Dakka, L. Rozo, and D. G. Caldwell, "Force-based variable impedance learning for robotic manipulation," Robotics and Autonomous Systems, vol. 109, pp. 156–167, 2018.
[3] S. Calinon, I. Sardellitti, and D. G. Caldwell, "Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010, pp. 249–254.
[4] P. Kormushev, S. Calinon, and D. G. Caldwell, "Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input," Advanced Robotics, vol. 25, no. 5, pp. 581–603, 2011.
[5] L. Peternel, T. Petrič, and J. Babič, "Robotic assembly solution by human-in-the-loop teaching method based on real-time stiffness modulation," Autonomous Robots, vol. 42, no. 1, pp. 1–17, 2018.
[6] A. Pervez, A. Ali, J.-H. Ryu, and D. Lee, "Novel learning from demonstration approach for repetitive teleoperation tasks," in 2017 IEEE World Haptics Conference, 2017, pp. 60–65.
[7] M. Suomalainen, J. Koivumäki, S. Lampinen, V. Kyrki, and J. Mattila, "Learning from demonstration for hydraulic manipulators," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 3579–3586.
[8] A. Ajoudani, N. Tsagarakis, and A. Bicchi, "Tele-impedance: Teleoperation with impedance regulation using a body-machine interface," The International Journal of Robotics Research, vol. 31, no. 13, pp. 1642–1656, 2012.
[9] L. Peternel, T. Petrič, E. Oztop, and J. Babič, "Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach," Autonomous Robots, vol. 36, no. 1-2, pp. 123–136, 2014.
[10] C. Yang, C. Zeng, C. Fang, W. He, and Z. Li, "A DMPs-based framework for robot learning and generalization of humanlike variable impedance skills," IEEE/ASME Transactions on Mechatronics, vol. 23, no. 3, pp. 1193–1203, 2018.
[11] A. Ajoudani, C. Fang, N. Tsagarakis, and A. Bicchi, "Reduced-complexity representation of the human arm active endpoint stiffness for supervisory control of remote manipulation," The International Journal of Robotics Research, vol. 37, no. 1, pp. 155–167, 2018.
[12] C. Yang, C. Zeng, P. Liang, Z. Li, R. Li, and C.-Y. Su, "Interface design of a physical human-robot interaction system for human impedance adaptive skill transfer," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 1, pp. 329–340, 2017.
[13] M. Laghi, A. Ajoudani, M. Catalano, and A. Bicchi, "Tele-impedance with force feedback under communication time delay," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 2564–2571.
[14] L. M. Doornebosch, D. A. Abbink, and L. Peternel, "Analysis of coupling effect in human-commanded stiffness during bilateral tele-impedance," IEEE Transactions on Robotics, vol. 37, no. 4, pp. 1282–1297, 2021.
[15] D. S. Walker, R. P. Wilson, and G. Niemeyer, "User-controlled variable impedance teleoperation," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 5352–5357.
[16] K. Kronander and A. Billard, "Online learning of varying stiffness through physical human-robot interaction," in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 1842–1849.
[17] K. Kronander and A. Billard, "Learning compliant manipulation through kinesthetic and tactile human-robot interaction," IEEE Transactions on Haptics, vol. 7, no. 3, pp. 367–380, 2014.
[18] V. Ho, C. Borst, M. M. van Paassen, and M. Mulder, "Increasing acceptance of haptic feedback in UAV teleoperation by visualizing force fields," in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018, pp. 3027–3032.
[19] W. Vreugdenhil, S. Barendswaard, D. A. Abbink, C. Borst, and S. M. Petermeijer, "Complementing haptic shared control with visual feedback for obstacle avoidance," IFAC-PapersOnLine, vol. 52, no. 19, pp. 371–376, 2019.
[20] J. D. Van Der Laan, A. Heino, and D. De Waard, "A simple procedure for the assessment of acceptance of advanced transport telematics," Transportation Research Part C: Emerging Technologies, vol. 5, no. 1, pp. 1–10, 1997.
[21] R. E. Wood, "Task complexity: Definition of the construct," Organizational Behavior and Human Decision Processes, vol. 37, no. 1, pp. 60–82, 1986.
[22] B. P. DeJong, J. E. Colgate, and M. A. Peshkin, "Improving teleoperation: reducing mental rotations and translations," in 2004 IEEE International Conference on Robotics and Automation, vol. 4, 2004, pp. 3708–3714.
[23] W. Kim, L. Peternel, M. Lorenzini, J. Babič, and A. Ajoudani, "A human-robot collaboration framework for improving ergonomics during dexterous operation of power tools," Robotics and Computer-Integrated Manufacturing, vol. 68, p. 102084, 2021.
[24] D. Q. Huynh, "Metrics for 3D rotations: Comparison and analysis," Journal of Mathematical Imaging and Vision, vol. 35, no. 2, pp. 155–164, 2009.
[25] F. E. Van Beek, W. M. B. Tiest, W. Mugge, and A. M. Kappers, "Haptic perception of force magnitude and its relation to postural arm dynamics in 3D," Scientific Reports, vol. 5, no. 1, pp. 1–11, 2015.