
An adaptive robot teacher boosts a human partner’s learning performance in joint action

Authors:
Alessia Vignolo1,2, Henry Powell3, Luke McEllin4, Francesco Rea2, Alessandra Sciutti5, and John Michael1
Abstract— One important challenge for roboticists in the
coming years will be to design robots to teach humans new
skills or to lead humans in activities which require sustained
motivation (e.g. physiotherapy, skills training). In the current
study, we tested the hypothesis that if a robot teacher invests
physical effort in adapting to a human learner in a context
in which the robot is teaching the human a new skill, this
would facilitate the human’s learning. We also hypothesized
that the robot teacher’s effortful adaptation would lead the
human learner to experience greater rapport in the interaction.
To this end, we devised a scenario in which the iCub and a
human participant alternated in teaching each other new skills.
In the high effort condition, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the low effort condition it sped the movements up when repeating the demonstration. The results indicate that participants indeed learned more effectively when the iCub adapted its demonstrations, and that the iCub's apparently effortful adaptation led participants to experience it as more helpful.
I. INTRODUCTION
As robots become increasingly prevalent in many domains
of everyday life, such as disaster relief, health care, educa-
tion, and manufacturing ([1], [2], [3], [4], [5]), researchers
are devoting ever more attention to developing new ways
of optimizing human-robot interaction. One challenge in this
regard is to mitigate the risk of human interactants becoming
frustrated or impatient when an interaction with a robot does
not go well (for example because their robot partner makes
mistakes or is slow in making its contribution), and sub-
sequently avoiding interactions with robots in general. This
risk may be particularly acute insofar as many of the people
who will be asked or expected to interact intensively with
robots may not have a high degree of prior familiarity with
robots or with technology in general (e.g. senior citizens).
This is especially important for designing robots to teach
humans new skills or to lead humans in activities which
require sustained motivation (e.g. physiotherapy, skills train-
ing). Robot teachers, much like human teachers, will need
to ensure that their learners sustain motivation so that they
learn effectively and get the most out of the interaction.
1Department of Philosophy, University of Warwick, United Kingdom
2Robotics Brain and Cognitive Sciences Unit, Istituto Italiano di Tec-
nologia, Genova, Italy
3Institute of Neuroscience and Psychology, University of Glasgow, Scot-
land
4Department of Cognitive Science, Central European University, Bu-
dapest, Hungary
5CONTACT Unit, Istituto Italiano di Tecnologia, Genova, Italy
A. Related Research
To address this challenge, Powell and Michael ([6]; cf.
also [7]) have recently proposed that a potentially effective
and low-cost strategy could be to develop design features
that serve to maintain a human’s sense of commitment to
an interaction with a robot. For example, in the context of
human-human interaction, it has been hypothesized [8] that
the perception of a partner’s effort increases people’s sense
of commitment to joint actions, leading them to reciprocate
by investing more effort and attention in the joint action as
well. The rationale for this is that a partner’s investment of
effort indicates that the joint action is valuable to them -
i.e. that they are willing to invest physical and/or cognitive
resources in order to perform the activity well with that
partner. Indeed, if we understand effort as ‘... the process
that mediates between how well an organism can potentially
perform on some task and how well they actually perform
on that task’ [9], then an agent’s investment of effort in
a joint action is a direct indication that they are willing
to sacrifice resources in order to perform that joint action
well with their partner. If so, they are likely to be happy
if the joint action goes well and disappointed if it does not
(especially given that they have invested effort). As a result,
in response to the perception of a partner’s effort, people
may increase their own investment of effort in a joint action
as a useful means of maintaining a good rapport as well as a
good reputation. In the context of human-human interaction,
several studies provide support for this general hypothesis, i.e. that the perception of a partner’s effort elicits a sense
of commitment, leading to increased effort, persistence and
performance on boring and effortful tasks ([10], [11], [12]).
Building on this, in [13] the authors have recently found
evidence that the perception of a robot partner’s apparent
investment of cognitive effort boosted people’s persistence
on a boring task which they performed together with a robot.
One specific form of effort investment which may be par-
ticularly relevant in the context of human-robot interaction is
the adaptation of kinematics to facilitate action intelligibility.
Research on the adaptation of kinematics takes its starting
point in previous work on so-called ‘motionese’ [14]. This
term refers to the style of movement which caregivers tend
to spontaneously adopt when demonstrating things to infants
- i.e., slowing down their movements, introducing more
segmentation, and standing in closer proximity to infants
when demonstrating actions than when they demonstrate
actions for adult observers [14]. In the context of human-
robot interaction, Vollmer et al. [15] found that human
participants produce motionese in demonstrations directed
towards a robot learner, and Nagai and Rohlfing [25] showed
that a robot observer could be designed to pick up on, and
extract information from, motionese produced by a human.
Moreover, Chandra et al. [16] showed that interaction with an adaptive robot enabled children not only to teach the robot but also to improve their own learning of letters.
B. The Current Study
Building upon these previous findings, we hypothesized
that if a robot invests physical effort in adapting its kine-
matics to a human partner in a context in which the robot
is teaching the human a new skill, the human partner will
perceive this as indicating the robot’s commitment to the
teaching task. We also hypothesized that the robot’s effort-
ful adaptation to the human would facilitate the human’s
learning, and lead to an increase in the level of rapport
experienced by participants. Specifically, we tested these
hypotheses in a scenario in which the humanoid robot iCub
([17], [18]) and a human participant alternated in teaching
each other new skills. In the experiment, the robot demon-
strated movement sequences to a human with either a high
or a low level of effort (in separate experimental conditions),
thus manipulating its apparent commitment to the teaching
task.
II. METHODS
A. Experimental Design
We designed a task consisting of two distinct phases.
In the first (the robot teaching phase), participants were
required to learn a sequence of movements taught by the
robot (see the experimental setup in Figure 1). In the second
phase (participant teaching phase), participants taught the
robot words by drawing them in the air. The experimental
manipulation was implemented in the robot teaching phase.
Specifically, we manipulated the robot’s commitment to
teaching by varying its level of effort when demonstrating the
movement sequences: high effort condition versus low effort
condition. In both conditions, the robot first demonstrated the
movement sequence with a baseline speed. If the participant
asked the robot to repeat the sequence, however, the robot
repeated it more slowly in the high effort condition, or more
quickly in the low effort condition. These two conditions
were presented in separate counterbalanced test blocks in a
within-subjects design.
The robot was iCub ([17], [18]), a humanoid robot devel-
oped as part of the EU project RobotCub. It is approximately
1m tall with the appearance of a child.
B. Procedure
Prior to the experiment, there was a baseline participant
teaching phase in which the participant drew a word first
in front of the robot and then in front of the human
experimenter. Next, there was a familiarisation phase, in
which the participant was left alone to become familiar with
the robot and with the procedure for the robot teaching
phase and the participant teaching phase. To this end, the
robot first demonstrated individual movements and then a
short sequence of 3 movements, and the participant was then
instructed to teach individual letters and then a short word
of 3 letters.
Participants were then informed that the robot would
have two different teaching strategies in the two sessions
(i.e. experimental blocks) of the experiment, but no specific
details about these strategies were revealed. Participants were
also asked to complete various questionnaires (see below).
The core of the experiment consisted of two (counterbal-
anced) blocks with six trials each. Each trial consisted of
a robot teaching phase followed by a participant teaching
phase.
During the robot teaching phase of each trial, the iCub
demonstrated one sequence of movements which the partic-
ipant was required to observe, memorize and repeat. After
the robot’s demonstration, the participant was required to
reproduce the sequence in the correct order. For each se-
quence, the participant was given two chances to perform the
movement sequence correctly (meaning that if the participant
did not understand the sequence the first time, the robot
demonstrated it a second time). In the participant teaching
phase of each trial, the participant was instructed to teach
the iCub a word (which was displayed on the screen behind
the robot) by drawing it in the air with their right hand.
If the iCub did not understand the word the first time,
participants were required to repeat the demonstration. In
order to simplify data segmentation, participants were asked
to press a key on the keyboard (placed between them and
the robot) before starting and after finishing each drawing.
For each experimental block, the robot provided positive
feedback for one word after the participant’s first demon-
stration, indicating that it had understood the word after the
first demonstration. For the other five words, the robot asked
for a second demonstration. After the second demonstration,
the robot always told the participants that it had understood
better. The experimenter was hidden in ‘Wizard of Oz’
fashion [19] in order to create the impression that the robot
was not controlled by anyone - although the experimenter’s
intervention was needed to start the interaction and in or-
der to determine whether the participant had repeated the
sequence correctly.
After the experiment, the same questionnaires that par-
ticipants had completed prior to the experiment were again
administered in order to measure any changes in participants’
perception of the robot.
C. Setup and technical implementation
We leveraged the iCub middleware YARP (Yet Another
Robot Platform [20]) to build a distributed system of several
computers connected to the robot network. The computers in
the network were connected to other devices: 1) a computer keyboard, connected via USB, with which participants began each trial; 2) a television monitor behind the iCub, connected via HDMI, which displayed the words participants were to teach the iCub; and 3) an RGB-D camera, connected via USB, through which the experimenter could monitor participants' activities and assess whether they performed the movement sequences correctly. The iCub's behaviour (specifically its movements and speech) and the appearance of the words on the screen were both controlled by a main YARP module, while kinematic data collection was synchronized and controlled by the participant's presses of a button on the keyboard.
The main module implemented a finite-state machine in which the robot teaching and participant teaching trials followed one another.
During the participant teaching phase, the transition from
one state to another was triggered by the participant’s key
press. Specifically, upon the first key press in a given trial, the
displayed word would disappear. Upon the second key press
(after the participant had finished drawing the word) the robot
provided feedback, telling them whether it had understood
or not.
During the robot teaching phase, the transition from one state to another was triggered by the experimenter's key press. After the robot had demonstrated the sequence and the participant had tried to reproduce it, the experimenter (who was observing the scene through the robot's eye camera) pressed the “Y” key if the sequence was correct or the “N” key if it was incorrect. This triggered positive or negative feedback from the robot (this happened twice if the sequence was not reproduced correctly and the robot had to give a second demonstration).
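As a rough illustration of this control flow, the sketch below implements a key-press-driven finite-state machine of the kind described above in Python. It is a minimal, simplified sketch with hypothetical state and function names; the actual main module ran on top of YARP and additionally drove the iCub's movements and speech.

```python
from enum import Enum, auto

class State(Enum):
    ROBOT_TEACHING = auto()       # iCub demonstrates a movement sequence
    WAIT_JUDGEMENT = auto()       # experimenter judges the participant's reproduction
    SHOW_WORD = auto()            # a word is displayed on the monitor behind the iCub
    PARTICIPANT_DRAWING = auto()  # participant draws the word in the air
    ROBOT_FEEDBACK = auto()       # iCub reports whether it has "understood"

def next_state(state, key):
    """Advance the trial on a single key press.

    In the real system, key events arrived over YARP ports: the experimenter's
    "Y"/"N" keys drove the robot teaching phase, while the participant's key
    presses drove the participant teaching phase.
    """
    if state is State.ROBOT_TEACHING:
        return State.WAIT_JUDGEMENT          # demonstration done, participant reproduces it
    if state is State.WAIT_JUDGEMENT:
        # "Y": sequence reproduced correctly -> move on to the word;
        # "N": trigger a second (speed-adapted) demonstration.
        return State.SHOW_WORD if key == "Y" else State.ROBOT_TEACHING
    if state is State.SHOW_WORD:
        return State.PARTICIPANT_DRAWING     # first key press hides the word
    if state is State.PARTICIPANT_DRAWING:
        return State.ROBOT_FEEDBACK          # second key press ends the drawing
    if state is State.ROBOT_FEEDBACK:
        return State.ROBOT_TEACHING          # next trial
    raise ValueError(f"unknown state: {state}")

# Example trace: the participant needs a second demonstration of the sequence.
state = State.ROBOT_TEACHING
for key in ["<space>", "N", "<space>", "Y", "<space>", "<space>"]:
    state = next_state(state, key)
    print(f"{key:>9} -> {state.name}")
```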
III. ROBOT STIMULI
For the robot’s teaching task, the movements of the se-
quences were pre-recorded in order to ensure that participants
were presented with exactly the same movements with the
same timing. The robot’s movements were generated by
the Constant Time Position Service (CTP Service), which
takes the required position in joint space and movement
timing as input. We created a library of 22 upper body
movements (combining torso, arm, and head movements)
and randomly combined these to generate twelve sequences
of five movements (see some examples of movements in
Figure 1). We also ensured that for each sequence at least
one movement was new and never seen by the participant,
so that they never became too familiar with the robot’s
movements. The movements were also designed so as not to have any meaning. That is, we excluded movements like
‘waving hello’ or ‘giving thumbs up’. This was in order to
make it difficult for participants to leverage their semantic
memory to remember the sequences instead of memorising
the movements. An overly simple task would have meant
that participants would never have to ask the iCub to repeat
the sequence, thus rendering the change in effort/commitment
to the task superfluous.
In both conditions, the robot demonstrated the sequences with a duration T_i for each movement i in the first demonstration, and then, for the second demonstration, with a duration of 0.75 T_i in the low effort condition or 1.39 T_i in the high effort condition. For instance,
if a sequence lasted 15 seconds in the first demonstration (i.e. at the baseline speed), the second demonstration would last roughly 11 seconds in the low effort condition and 21 seconds in the high effort condition. These values were selected after pilot testing with ten participants, which showed that the increase or decrease in speed was clearly noticeable while remaining safe for the robot to perform (especially in the low effort condition, which involved faster movements). The baseline speed was designed in order
to make it very difficult for participants to understand the
sequence after the first demonstration. This was done to
maximise the number of times participants would appreciate
the robot’s change of speed in the two experimental blocks.
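In code, this timing manipulation reduces to a single scaling factor applied to each movement's baseline duration T_i. The sketch below uses the factors given above; the function name and the example durations are illustrative.

```python
# Scaling factors applied to the second demonstration (from the values above).
SPEED_FACTOR = {"low_effort": 0.75, "high_effort": 1.39}

def second_demo_durations(baseline_durations, condition):
    """Scale each movement's baseline duration T_i for the repeated demonstration."""
    factor = SPEED_FACTOR[condition]
    return [t * factor for t in baseline_durations]

# A five-movement sequence totalling 15 s at baseline speed:
baseline = [3.0, 2.5, 3.5, 3.0, 3.0]
for condition in ("low_effort", "high_effort"):
    total = sum(second_demo_durations(baseline, condition))
    print(condition, f"{total:.1f} s")  # roughly 11 s (low effort) and 21 s (high effort)
```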
After each movement, the robot returned to the home
position at the same speed, but the movements were less
segmented in the low effort condition than in the high effort
condition. To make the interaction more naturalistic, we activated the robot's face-detection module so that it looked at the participant during the experiment, and enabled blinking and 'breathing' (a set of slight arm movements giving an impression of vitality) when the robot was idle.
A. Participants
We recruited 21 participants (mean age 33 years, SD 13); 11 were female and 10 male. The regional
ethics committee approved the protocol and all participants
gave informed consent before participating, and were fully
debriefed after the experiment.
B. Data
During the experiment we administered questionnaires and recorded video of all 21 participants through the RGB-D camera. 11 participants were presented with the low effort condition first and then the high effort condition; the other 10 were presented with the opposite order.
Participants were asked to answer the following questions:
“What differences do you think there were in the
teaching strategy of the robot in the two sessions
[experimental blocks]?” (open question, at the end of
the experiment)
“Did you have the impression that iCub helped you
when you had difficulties in repeating the sequence of
movements?” (on a scale from 0 to 5, after the two
blocks).
Additionally, participants were asked to indicate how close they felt to the iCub using the Inclusion of Other in the Self (IOS) Scale [21], from 1 to 7, before the experiment and after each of the two blocks.
In order to evaluate participants’ performance, the videos
were viewed by the experimenter, who scored each trial ac-
cording to how many movements were reproduced correctly
in the correct sequence. Performance was calculated as the
number of correct movements divided by the total number
of movements presented (5). This was done separately for
the first repetition (i.e., after the first demonstration from
the iCub) and the second repetition (i.e., after the second
demonstration from the iCub).
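As a simple illustration of this scoring scheme (the scoring itself was done manually from the videos; the variable and function names here are illustrative), per-trial performance and the improvement between repetitions can be computed as follows.

```python
SEQUENCE_LENGTH = 5  # movements per demonstrated sequence

def performance(correct_movements, total=SEQUENCE_LENGTH):
    """Fraction of movements reproduced correctly and in the correct order."""
    return correct_movements / total

def improvement(correct_first, correct_second):
    """Gain in performance from the first to the second repetition."""
    return performance(correct_second) - performance(correct_first)

# Example trial: 1/5 movements correct after the first demonstration, 4/5 after the second.
print(performance(1), performance(4), round(improvement(1, 4), 2))  # 0.2 0.8 0.6
```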
Fig. 1. Setup and robot’s home position (above), an example of a sequence
of five movements (center) and participant trying to repeat the sequence
(below) during the robot teaching phase.
Fig. 2. Performance of the subjects after the first demonstration of iCub.
IV. RESULTS
The aim of this study was to investigate whether it is
possible for a robot teacher to convey different levels of
commitment by changing its kinematic effort, and whether this would facilitate human participants' learning. To
do that, we compared participants’ performances between
the two conditions and analyzed their responses to the
questionnaires.
A. Performance
Participants found the sequences equally difficult in the
two sessions, as demonstrated in Figure 2 which shows that
participants’ performance after the first robot demonstration
did not differ between the two test blocks, i.e. neither for participants exposed first to the low effort condition and then to the high effort condition (LOW-HIGH) (in the first block, M=0.32, SD=0.14; in the second block, M=0.32, SD=0.19), nor for participants exposed first to the high effort condition and then to the low effort condition (HIGH-LOW) (in the first block, M=0.30, SD=0.15; in the second block, M=0.41, SD=0.19). Indeed, a two-way mixed-model ANOVA on the performance after the first repetition, with “block” (first block or second block) as repeated-measures factor and “block order” (LOW-HIGH or HIGH-LOW) as between-groups factor, found no significant difference between the blocks (F(1,19)=1.64, p=0.203) or between the orders (F(1,19)=0.31, p=0.583), nor was the interaction between the two factors significant (F(1,19)=1.94, p=0.179).

Fig. 3. Improvement in performance from the first to the second repetition in the two experimental blocks for participants who had the low effort condition first and then the high effort condition (blue), and for participants who had the other order (red).

Fig. 4. Participants' impression that iCub helped them when they had difficulty in repeating (from 0 to 5), for all the subjects (above), for participants who had the LOW-HIGH order (bottom, left) and the HIGH-LOW order (bottom, right).
Participants showed a significant improvement in performance after the second robot demonstration relative to the first demonstration (performance after the second repetition minus performance after the first repetition); see Figure 3.

Fig. 5. Difference in closeness towards the robot between the answer after the two conditions (LOW, HIGH) and the answer given before the experiment, for all the subjects (above), for subjects that had LOW-HIGH order (bottom, left) and HIGH-LOW order (bottom, right).
A two-way analysis of variance on the improvement in
performance shows that there was no main effect of block
(F(1,19)=2.87, p=0.107), and no order effect (F(1,19)=3.80,
p=0.06). However, the interaction between block number
and block order was significant (F(1,19)=8.56, p=0.009).
A Bonferroni post-hoc test (p=0.019) indicated that, in the first block, participants in the high effort condition improved
significantly more (M=0.35, SD=0.22) than participants in
the low effort condition (M=0.08, SD=0.13). This demon-
strates that a higher effort investment on the part of the
robot enabled participants to better memorize the sequence,
and thus improved their performance. Moreover, participants
who only showed a small improvement between the first and
second demonstration in the first block (LOW-HIGH, blue
in Figure 3) exhibited a significantly higher improvement in
the second block (one-tailed Bonferroni post-hoc, p=0.048).
No other comparisons reached significance, indicating that
in the second block, all participants displayed a significant
improvement in performance, regardless of condition.
This may be due to the fact that during the second block,
independently of how much effort the robot invested, par-
ticipants had had considerable practice with the routine and
became proficient at adjusting their movements in response
to the second demonstration of each sequence. Conversely,
during the first block, the robot’s effort made a large differ-
ence in terms of improvement of performance, as participants
who were in the high effort condition exhibited a significantly
higher improvement.
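For readers who wish to run the same kind of analysis, a two-way mixed-design ANOVA of this form can be computed, for example, with the pingouin package on a long-format table. This is only a sketch on synthetic data, not the authors' analysis pipeline; the column names are illustrative, and pairwise_tests is called pairwise_ttests in older pingouin releases.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data: one row per participant per block (illustrative).
rng = np.random.default_rng(0)
n = 21
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "block": np.tile(["first", "second"], n),
    "order": np.repeat(np.where(np.arange(n) < 11, "LOW-HIGH", "HIGH-LOW"), 2),
    "improvement": rng.normal(0.25, 0.15, size=2 * n),
})

# Mixed-design ANOVA: "block" within subjects, "order" between subjects.
aov = pg.mixed_anova(data=df, dv="improvement", within="block",
                     between="order", subject="participant")
print(aov)

# Bonferroni-corrected pairwise comparisons as a follow-up to the interaction.
posthoc = pg.pairwise_tests(data=df, dv="improvement", within="block",
                            between="order", subject="participant",
                            padjust="bonf")
print(posthoc)
```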
B. Questionnaires
In the questionnaire after the experiment, we asked par-
ticipants which differences they thought there were in the
teaching strategy of the robot in the two sessions. This was an
open question, so the answers were quite different from one
participant to the next. Despite the fact that the experimenter
explicitly said at the beginning of the experiment that there
would be two different robot teaching strategies in the two
sessions, 33% (7) of participants replied that there was no
difference between the two sessions. 57% (12) of participants
noticed a difference in the speed, and among them 6 said
that the difference was in the speed of the second repetition
(which was indeed the modification we applied), 1 in the
segmentation (the other modification we applied), 5 in speed
in general. 10% (2) of participants replied that there was
a difference in the difficulty. Other differences that were
observed pertained to symmetry and to torsion movements.
Although not all participants were able to identify what
had changed in the robot’s behavior between the two ses-
sions, the change had an impact on their perception of the
robot.
Figure 4 displays the answers given by participants when
asked to answer the question about whether they had the
impression that iCub helped them when they had difficul-
ties in repeating the sequence of movements (on a scale
from 0 to 5). A mixed-model ANOVA with Effort (two levels: High, Low) as within-subjects factor and Block Order (two levels: HIGH-LOW, LOW-HIGH) as between-subjects factor yielded a main effect of effort (F(1,19)=9.56, p=0.006), such
that the average answer value was significantly higher for
the high effort (M=3.14, SD=0.17) than for the low effort
(M=2.44, SD=0.17). In contrast, the blocks’ order effect
(F(1,19)=0.59, p=0.452) was not significant, nor was the in-
teraction (F(1,19)=1.76, p=0.201). Therefore, independently
of block order, participants had a greater impression that the
robot had helped in the high effort condition.
Participants were also asked to indicate how close they
felt towards the iCub using the IOS Scale from 1 to 7,
before the experiment and after the two blocks. We computed the difference between the answer given after each block and the baseline (the answer given before the experiment) (Figure 5). A t-test comparing the increase in closeness between the LOW and HIGH conditions revealed a significant difference only for the LOW-HIGH order (p=0.038), although this difference does not survive Bonferroni correction. A mixed-model ANOVA showed that neither effort (F(1,19)=0.06, p=0.808) nor block order (F(1,19)=0.12, p=0.737) was significant, but there was a significant interaction (F(1,19)=6.60, p=0.019).
V. DISCUSSION
The current study examined human-robot interaction in
a scenario in which the iCub and a human participant
alternated in teaching each other new skills. Specifically,
we probed whether the iCub’s effortful adaptation to the
human would facilitate the human’s learning, whether the
human partner would perceive the iCub’s effortful adaptation
as indicating a commitment to the teaching task, and whether
the effortful adaptation would generate rapport.
The results indicate that participants’ performance in-
creased more from the first to the second demonstration
of a sequence in the high effort condition than in the low
effort condition. In other words, participants learned better
when the iCub slowed down its demonstration and increased the segmentation between movements. The results from the
questionnaires also indicate that participants experienced the
robot as more helpful in the high effort condition, and that
those participants who experienced the low effort condition
followed by the high effort condition felt closer to the robot
during the high effort condition.
Our findings build upon a recent body of research investi-
gating how movement kinematics can be adapted to increase
legibility to observers in the context of HRI. Indeed, the
potential to make robots’ movements more easily legible to
human interactants is a crucial goal of current and future
research in social robotics [22]. Moreover, our findings also
contribute to research highlighting the important role of
motion in communicating implicit messages and in intuitive
communication in general ([23], [24]). In this respect, the
current study takes a step further by tapping into the concept
of a sense of commitment [12], which may offer considerable
potential in the context of social robotics. Indeed, Székely et al. [13] have already shown that by eliciting
human interactants’ sense of commitment to an interaction
with a robot, their persistence and patience can be enhanced.
This has important implications insofar as it highlights the
possibility that the adaptation of movement kinematics may
be used not only to increase legibility but also to enhance
human interactants’ persistence, effort and patience within
human-robot interactions.
Our findings also build upon research in developmental
psychology which has been identified as harboring consid-
erable potential for social robotics. Specifically, a wealth
of research has shown that human infants benefit from the
spontaneous use of motionese on the part of caregivers
– i.e., caregivers slow down their movements, introduce
more segmentation, and stand in closer proximity to infants
when demonstrating actions than when they demonstrate
actions for adult observers [14]. Our findings also build
upon previous efforts to implement motionese in the context
of human-robot interaction. Vollmer et al. [15] found that
human participants produce motionese in demonstrations
directed towards a robot learner, and Nagai and Rohlfing
[25] showed that a robot observer could be designed to pick
up on, and extract information from, motionese produced
by a human. Our findings extend this previous research by
showing that a robot can implement motionese in teaching
motion sequences to a human, and that this benefits human
learners.
It would be valuable for future research to investigate other
contexts in which robot motionese may facilitate human
learning, such as in producing or using novel tools or
machines. It would also be important to investigate to what
extent the skills or information learned with the help of
robot motionese are recalled after several weeks or months
– in other words, to probe whether robot motionese also
facilitates the automatization of new skills or the encoding
of new information in long-term memory.
In teaching more complex action sequences, it is often
effective for a teacher to break up longer sequences into shorter
components, and to scaffold learning by focusing on each of
the components sequentially, and identifying which of these
components learners need extra help with. A system that
could evaluate the performance of an end-user in real time
and tailor its motionese to the specific learning needs of that
end user could be particularly useful for real-life teaching
scenarios.
The potential to use movement kinematics not only to
optimize teaching in HRI but also to generate and maintain
a sense of commitment has important implications in such
contexts as physiotherapy, exercise classes, or other skill
training programs. In particular, if movement kinematics can
be used an effective and inexpensive strategy for boosting
human learning from robots and for building up a sense of
commitment to the interaction, then humans may not only
find it easier to learn, but may also be more motivated to do
so.
ACKNOWLEDGMENT
This research was supported by a Starting Grant from
the European Research Council (nr. 679092, SENSE OF
COMMITMENT).
REFERENCES
[1] C. Breazeal, A. Brooks, J. Gray, G. Hoffman, C. Kidd, and H. Lee,
“Humanoid robots as cooperative partners for people,” Journal of
Humanoid Robots, p. 34, 2004.
[2] C. Lenz, S. Nair, M. Rickert, A. Knoll, W. Rosel, and J. Gast, “Joint-
action for humans and industrial robots for assembly tasks,” IEEE,
pp. 130–5, 2008.
[3] A. Clodic, H. Cao, S. Alili, V. Montreuil, A. R., and C. R., “Shary: A
supervision system adapted to human-robot interaction,” Experimental
Robotics. Springer Berlin Heidelberg, pp. 229–38, 2009.
[4] A. Sciutti, A. Bisio, F. Nori, G. Metta, L. Fadiga, and T. Pozzo,
“Measuring human-robot interaction through motor resonance,” Inter-
national Journal of Social Robotics, vol. 4, no. 3, pp. 223–34, 2012.
[5] E. Grigore, K. Eder, A. Pipe, C. Melhuish, and U. Leonards,
“Joint action understanding improves robot-to-human object han-
dover,” IEEE/RSJ International Conference on Intelligent Robots and
Systems, pp. 4622–9, 2013.
[6] J. Michael and H. Powell, “Feeling committed to a robot: Why, what,
when, and how?,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 374, no. 1771, 2019.
[7] J. Michael and A. Salice, “The sense of commitment in human-robot
interaction,” International Journal of Social Robotics, vol. 9, no. 5,
pp. 755–763, 2017.
[8] J. Michael, N. Sebanz, and G. Knoblich, “The sense of commitment:
A minimal approach,” Frontiers in Psychology, vol. 6, no. 1968, 2016.
[9] M. Inzlicht, A. Shenhav, and C. Y. Olivola, “The effort paradox: Effort
is both costly and valued,” Trends in Cognitive Sciences, vol. 22, no. 4,
p. 338, 2018.
[10] J. Michael and M. Székely, “Investing in commitment: Persistence in a joint action is enhanced by the perception of a partner’s effort,” Cognition, vol. 174, pp. 37–42, 2018.
[11] M. Chennells and J. Michael, “Effort and performance in a cooperative
activity are boosted by perception of a partner’s effort,” Scientific
reports, 2018.
[12] J. Michael, N. Sebanz, and G. Knoblich, “Observing joint action:
Coordination creates commitment,” Cognition, pp. 106–113, 2016.
[13] M. Székely, H. Powell, F. Vannucci, F. Rea, A. Sciutti, and J. Michael,
“The perception of a robot partner’s effort elicits a sense of commit-
ment to human-robot interaction,” Interaction Studies, 2019.
[14] R. J. Brand, D. A. Baldwin, and L. A. Ashburn, “Evidence for
‘motionese’: modifications in mothers’ infant-directed action,” Devel-
opmental Science, 2002.
[15] A. L. Vollmer, K. S. Lohan, K. Fischer, Y. Nagai, K. Pitsch, J. Fritsch,
K. J. Rohlfing, and B. Wrede, “People modify their tutoring behavior
in robot-directed interaction for action learning,” IEEE 8th Interna-
tional Conference on Development and Learning, pp. 1–6, 2009.
[16] S. Chandra, P. Dillenbourg, and A. Paiva, “Classification of children’s
handwriting errors for the design of an educational co-writer robotic
peer,” in Proceedings of the 2017 Conference on Interaction Design
and Children, pp. 215–225, 2017.
[17] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, M. Lopes, J. Santos-Victor, A. Bernardino, and L. Montesano, “The iCub humanoid robot: An open-systems platform for research in cognitive development,” Neural Networks, vol. 23, pp. 1125–1134, 2010.
[18] G. Sandini, G. Metta, and D. Vernon, “The iCub cognitive humanoid robot: An open-system research platform for enactive cognition,” 50 Years of Artificial Intelligence, pp. 358–369, 2007.
[19] A. Steinfeld, O. C. Jenkins, and B. Scassellati, “The oz of wizard:
Simulating the human for interaction research,” in Proceedings of the
4th ACM/IEEE International Conference on Human Robot Interaction,
HRI ’09, (New York, NY, USA), pp. 101–108, ACM, 2009.
[20] G. Metta, P. Fitzpatrick, and L. Natale, “Yarp: Yet another robot
platform,” International Journal of Advanced Robotic Systems, vol. 3,
no. 1, pp. 43–48, 2006.
[21] K. M. Woosnam, “The inclusion of other in the self (IOS) scale,” Annals of Tourism Research, vol. 37, pp. 857–860, 2010.
[22] A. D. Dragan, K. Lee, and S. Srinivasa, “Legibility and predictability
of robot motion,” 8th ACM/IEEE International Conference on Human-
Robot Interaction (HRI), 2013.
[23] G. Sandini, A. Sciutti, and F. Rea, “Movement-based communication
for humanoid-human interaction,” in Humanoid Robotics: A Reference
(A. Goswami and P. Vadakkepat, eds.), Dordrecht: Springer, 2017.
[24] A. Sciutti, M. Mara, V. Tagliasco, and G. Sandini, “Humanizing
human-robot interaction: On the importance of mutual understanding,”
IEEE Technology and Society Magazine, vol. 37, no. 1, pp. 22–29,
2018.
[25] Y. Nagai and K. J. Rohlfing, “Computational analysis of motionese
toward scaffolding robot action learning,” IEEE Transactions on
Autonomous Mental Development, vol. 1, no. 1, pp. 44–54, 2009.