Reward Seeking or Loss Aversion?
Impact of Regulatory Focus Theory on Emotional Induction in Children and
Their Behavior Towards a Social Robot
Maha Elgarf
mahaeg@kth.se
KTH Royal Institute of Technology
Stockholm, Sweden
Natalia Calvo-Barajas
natalia.calvo@it.uu.se
Uppsala University
Uppsala, Sweden
Ana Paiva
ana.paiva@inesc-id.pt
Instituto Superior Técnico (IST),
Universidade de Lisboa and INESC-ID
Lisbon, Portugal
Ginevra Castellano
ginevra.castellano@it.uu.se
Uppsala University
Uppsala, Sweden
Christopher Peters
chpeters@kth.se
KTH Royal Institute of Technology
Stockholm, Sweden
Figure 1: Sample images of the interaction between the children and the robot. Consent was received from the parents for
publishing children’s images.
ABSTRACT
According to psychology research, emotional induction has positive
implications in many domains such as therapy and education. Our
aim in this paper was to manipulate the Regulatory Focus Theory to assess its impact on the induction of regulatory focus related emotions in children in a pretend play scenario with a social robot. The Regulatory Focus Theory suggests that people follow one of two paradigms while attempting to achieve a goal: by seeking gains (promotion focus, associated with feelings of happiness) or by avoiding losses (prevention focus, associated with feelings of
fear). We conducted a study with 69 school children in two different conditions (promotion vs. prevention). We succeeded in inducing happiness in the promotion condition and found a resulting positive effect of the induction on children's social engagement with the robot. We also discuss the important implications of these results for both the educational and child robot interaction fields.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI '21, May 8–13, 2021, Yokohama, Japan
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8096-6/21/05. . . $15.00
https://doi.org/10.1145/3411764.3445486
CCS CONCEPTS
• Human-centered computing → Interaction design; Interaction paradigms; • Computer systems organization → Robotics.
KEYWORDS
social robotics, human robot interaction, emotional induction, reg-
ulatory focus, social engagement
ACM Reference Format:
Maha Elgarf, Natalia Calvo-Barajas, Ana Paiva, Ginevra Castellano, and Christopher Peters. 2021. Reward Seeking or Loss Aversion?: Impact of Regulatory
Focus Theory on Emotional Induction in Children and Their Behavior To-
wards a Social Robot. In CHI Conference on Human Factors in Computing
Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY,
USA, 11 pages. https://doi.org/10.1145/3411764.3445486
1 INTRODUCTION
Research about child robot interaction (cHRI) has received great
attention. One of its main applications is children’s educational
and social development. For social development, robots have been
used to teach children empathy [37] and social skills [56], [48]. In
this work, we address cHRI for educational and social development
from the perspective of emotional induction. The term emotional
induction refers to the process of eliciting specific emotions within a human user using specific stimuli. Positive affect is known to have strong implications on children's social and cognitive skills [6], [28], [27], [50], [44]. This suggests that inducing positive emotions has great benefits for children's educational development. Previous research [44] has evaluated this idea by investigating the effects of inducing positive emotions on children's (5-8 years) visual processing abilities. The results indicate that children have a tendency towards a global rather than a local visual perception after being presented with a positive emotional stimulus. These results emphasize the importance of positive emotions in broadening children's perspective and widening their scope of attention. Our
work builds on previous work by evaluating emotional induction
through the design of a child robot interaction centered around the
regulatory focus concept. The Regulatory Focus Theory (RFT) suggests that people follow one of two paradigms in order to achieve their goals. In promotion focus, people are motivated by reward seeking, which is associated with feelings of excitement about receiving the reward and happiness at reward receipt. In prevention focus, people are motivated by loss aversion, associated with feelings of fear of loss and relief at loss avoidance [7], [6]. For example, a child may be motivated to eat his/her food because he/she wants the promised half hour of playing with the PlayStation, or to avoid the punishment of not watching his/her favourite cartoon that day.
Previous research concerning RFT with robots is scarce and has focused merely on matching the user's and robot's regulatory focus personality type. The robot either displays a promotion focused personality that is more motivated by achieving gains or a prevention focused personality motivated by fear of failure. For example, a promotion focused person will study for an exam with an aim to achieve top results, while a prevention focused person will study just enough to avoid failing the exam. RFT has not yet been investigated in the field of cHRI. RFT may however have strong implications on children's educational performance as suggested by [13], where participants in the promotion condition exhibited more resilience and better performance at a difficult sorting task. [19] also suggests a positive effect of promotion focused tasks on the divergent thinking aspect of creativity. These positive effects of RFT in the promotion condition are attributed to the induction of the corresponding positive emotions.
We applied RFT for emotional induction through a pretend play interaction with a social robot. We designed the interaction in two conditions (promotion vs. prevention). In each condition, the pretend play scenario was changed accordingly and the robot displayed the corresponding emotions (happiness in the promotion condition and fear in the prevention condition). We assessed whether emotional induction occurred and then evaluated the effects of inducing positive emotions on the children's social engagement with the robot, measured through the social behaviors exhibited by the child towards the robot.
We introduced a social robot in our design to make the interaction more engaging and because of the capability of social robots to express emotions. We designed our interaction as a pretend play scenario since pretend play is one of the play styles most preferred by children [1], [29], [55].
The contribution of our work is summarized in the following points:
• Previous psychology research has discussed the connection between RFT and different emotions [6]. However, the use of RFT for emotional induction has not been investigated in HRI before.
• This research is the first work in HRI to investigate RFT in terms of designing the whole interaction rather than only the robot's personality. The RFT was applied to the scenario design (promotion vs. prevention), the robot's personality and the corresponding emotions displayed by the robot in each condition. Our work is also the first work to investigate RFT in cHRI.
• Previous research has suggested that smiles are a sign of engagement [11], which demonstrates a relationship between happiness and engagement. We extended this work by assessing social behaviors exhibited by the children towards the robot as a result of induced happiness.
2 BACKGROUND
2.1 Regulatory focus
In 1997, Higgins proposed the Regulatory Focus Theory (RFT) [22], which distinguishes between two motivational approaches used by humans in order to perform a task. Promotion focus is characterised by the motivation to accomplish goals through achieving a certain gain, whereas prevention focus is characterised by the motivation to accomplish goals through the avoidance of failure. For example, a task that is promotion focused may motivate the user by offering a possible reward at task completion. Consequently, promotion focused tasks are associated with feelings of excitement throughout the task that converge to feelings of happiness at successful task completion. A prevention focused task, however, motivates the user by encouraging them to avoid a specific loss. Therefore, prevention focused tasks are associated with feelings of fear and stress throughout the task that converge to relief at successful task completion [7], [6]. An example that illustrates RFT is given in [19], where users completed a maze task in the two different regulatory focus conditions. Participants received a paper with a cartoon drawing in which they were trying to save a mouse trapped in a maze. In the promotion condition, the mouse gets a piece of cheese (reward) as soon as it escapes the maze. In the prevention condition, by contrast, the mouse is trying to avoid an owl (threat) hovering above the maze; the owl will cease to chase the mouse as soon as it escapes the maze. In another study [13], RFT was manipulated to show that when faced with a difficult task, participants presented with a promotion focused version of the task perform better because of positive feelings of happiness, while participants presented with a prevention focused version of the task give up sooner because of feelings of fear and stress.
Recent research in human robot interaction has used RFT and measured the effects of matching the regulatory focus type (also known as regulatory fit) of the robot to the user on performance on a test [14], on the robot's perceived persuasiveness [15] and on the duration of the interaction with the robot [3]. Users who interacted with
a matching regulatory focus type robot performed better on the test and perceived the robot as more persuasive. Similarly, matching the regulatory focus type resulted in deliberately longer interactions with the robot. The regulatory fit concept has also been used with virtual agents [18], where the authors found that it had a significant positive effect on likability measures for users assigned to the prevention condition only.
The contribution of our work is that we are assessing the impact of
the regulatory focus design of an interaction with a social robot on
inducing regulatory focus related emotions rather than matching
regulatory focus type.
2.2 Emotional induction
Researchers have identified several methods for inducing emotions [54], [36], [57]. The authors in [54] reviewed the five most effective ways for emotional induction: visual stimuli, imagery, situational procedure, music and autobiographical recall. Visual stimuli is the most common method, used by showing the subjects images or movie clips that elicit specific emotions, as in [33], where the authors used cheerful and sad short clips to induce happiness and sadness respectively. Imagery consists of asking the users to imagine themselves in a specific situation where they would experience specific emotions. Imagery is used in [38], where the experimenter asked the participants to imagine that it is their birthday and that they are being thrown a surprise party by their loved ones. A situational procedure is some form of real interaction staged around the user to elicit the desired emotions. In [33], the researchers used a real interaction to elicit fear by creating a real test environment, and another real interaction to induce anger by introducing a rude person to interrupt a teacher during an ongoing class. Music has also been frequently used for inducing emotions. For instance, in [46], the researchers successfully used different rhythms of music to induce both happiness and sadness. Finally, autobiographical recall is used to induce emotions by asking participants to remember and retell a story in which they strongly felt a specific emotion. For example, in [45], the users were asked to describe a situation where they felt scared in order to induce feelings of fear.
The meta-analysis in [54] examined each of the five methods for inducing the six basic emotions: happiness, sadness, fear, surprise, anger and disgust. According to the authors, the five different methods may yield different induction levels for the different emotions. For this work, we only consider the induction of the two basic emotions related to the concept of regulatory focus that we are investigating: happiness and fear. As explained in the review, all of the five methods are effective for happiness induction except situational procedures, with visual stimuli being the most effective followed by imagery. Additionally, research suggests that combining several methods yields better induction results than using a single procedure [59]. For fear, all five paradigms are effective for induction, with situational procedures being the most successful followed by imagery and then visual stimuli.
In the field of cHRI, emotional induction for educational purposes has not been investigated before. However, in terms of emotion research, studies have been conducted to investigate empathy [30], [35], behavior synchronisation [5], [32], [23], [9] and mimicry between the user and the agent (whether a robot or a virtual character) [24], [47], [25], [43]. The concept closest to emotional induction is emotional mimicry, also called emotional contagion, which the agent elicits in the user during an interaction. The difference between emotional mimicry and emotional induction in that case is that mimicry is detected in a specific time window that follows a specific behavior expressed by the agent. For example, in [25], users spontaneously matched facial expressions of an android robot during an interaction within a six second interval. In this research, nevertheless, emotional induction is measured using objective measures for emotion detection throughout the whole interaction.
2.3 Pretend play
Playing is an essential building block of children’s development.
Pretend play also known as role play is one of the most commonly
adopted play styles for kids. Research has extensively elaborated on
the benets of pretend play for children which includes creativity
[
40
] and cognitive development through the practice of their prob-
lem solving skills in a simulated environment [
51
]. It also helps in
their linguistic development [
4
] through the use of their narration
and communication skills during the pretend play. Pretend play
also facilitates the process of perspective taking [
10
] and therefore
may result in more empathetic behavior and a general improvement
of social skills.
Studies in cHRI have used pretend play as a means to develop children's skills and to assess other relevant measures during the interaction. In [52], a NAO robot and a sensorized mini kitchen were used to provide a safe and entertaining pretend play environment. Although the main purpose of the study was to measure the effects of gender segregation of the NAO robot on children's behavior in a pretend play scenario, results also showed that the pretend play environment may further be utilised for children's social and cognitive development. In [1], the authors compared the different types of play with and without a robot. They found that children chose to engage in pretend play more frequently when the robot was not present. The authors attributed this to the children not knowing how to include the robot in their playing scenario. We designed our interaction scenario as a pretend play interaction with a social robot to make it more engaging for the children and to be able to use the robot for emotional induction through the emotional expressions that the robot displays.
3 METHODOLOGY
The purpose of this research is to investigate the possibility of
inducing emotions through a pretend play interaction between a
child and a robot. As discussed in the introduction section, suc-
cessful emotional induction is likely to have strong implications
on children’s educational and social development. To accomplish
this, we used the RFT as basis for the desired emotional induction.
Promotion focused tasks are associated with feelings of excitement
and happiness, whereas prevention focused tasks are associated
with feelings of fear and relief. Consequently, we designed two
versions of an interaction, one that is promotion focused and the
other prevention focused to elicit the corresponding emotions (hap-
piness and fear). We used two out of the ve approaches known
Figure 2: The software implementation consisted of two parts: a priming interface and a storytelling interface. (a) Priming interface: promotion condition. (b) Storytelling interface: beach scenario (post-test). The priming interface had two versions for the two different scenarios (promotion vs. prevention). The storytelling interface also had two versions, for the pre- and post-tests.
for emotional induction because they were shown to be among the most effective for both happiness and fear induction:
• Imagery: by prompting the child to imagine himself/herself in a certain exciting/happy situation (promotion condition) versus a fearful situation (prevention condition) with the robot.
• Visual stimuli: the robot consistently displayed corresponding facial expressions (happiness for the promotion condition and fear for the prevention condition). The robot's facial behavior was also backed up by verbal behavior that conveyed the same feelings.
Several studies have investigated whether emotional conservation occurs within a specific time window after an emotional induction trial by either music [46] or visual stimuli [21], [34]. We also wanted to examine whether some form of emotional conservation would occur in our setting. Therefore, we introduced two additional tasks in the study: a storytelling pre-test and post-test. In these tasks, the child is requested to tell the robot a story. The flow of the interaction went as follows: a storytelling pre-test, a priming interaction (promotion vs. prevention) and then a storytelling post-test. By introducing the pre- and post-tests, we aimed to compare the emotional expressions of the child between pre- and post-tests, as well as the child's verbal behavior, to assess whether we would observe some form of emotional conservation.
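The three-phase flow described above can be sketched as a simple session driver. This is our own illustration of the design, not the study's actual software; all names are hypothetical.

```python
def run_session(condition):
    """Return the ordered phases of one child's session for a given condition.

    Illustrative sketch of the study flow: storytelling pre-test, then a
    regulatory-focus priming interaction, then a storytelling post-test.
    """
    if condition not in ("promotion", "prevention"):
        raise ValueError("unknown condition: " + condition)
    return [
        ("pre_test", "storytelling"),   # baseline affective expressions
        ("priming", condition),         # regulatory-focus emotional induction
        ("post_test", "storytelling"),  # probe for emotional conservation
    ]
```

Comparing the child's expressions in the first and last phases is what allows emotional conservation to be assessed independently of the priming itself.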
4 SYSTEM DESIGN AND IMPLEMENTATION
4.1 Priming scenario
In order to implement the priming scenario, we developed a story line where the child and the robot collaboratively solve a task in one of two motivational conditions: promotion or prevention. Children were requested to imagine themselves locked in a spaceship with the robot. Together with the robot, the child tried to find the key to escape the spaceship to planet Mars. In the promotion condition, the experimenter told the child that they would receive a gift as soon as they got out. In the prevention condition, the experimenter warned the child that, together with the robot, they needed to find the key quickly before the spaceship exploded. The implementation of the priming scenario is divided into two parts: the interface that the child and the robot used to find the key, and the robot's behavior.
4.1.1 Interface. The interface was implemented using the Unity Game Engine¹. It consisted of three different scenes representing three different rooms in the spaceship. An example of one of the rooms is illustrated in Figure 2(a). Each room contained three colored buttons (red, pink and blue). Two of the three buttons displayed the message "Oops, the key is not here" as soon as the child clicked on them, while the third contained a clue about where to move next in order to find the key. All arrows in a given scene led to the same next room, and the key was placed in the third and last room so that the duration of the priming interaction would be almost constant for all participants. The interface contained two priming features, depending on the condition. In the promotion condition, the gift that the child and the robot were promised was shown in the top left corner and shook for a couple of seconds every time the child clicked on any of the buttons. The gift received at the end of the promotion focused interaction was a party with the aliens, where the robot danced and invited the child to dance with him. In the prevention condition, the screen briefly shook every time the child clicked a button, to warn the child and the robot that the spaceship would explode soon. Children did not proceed to the post-test until they completed the priming successfully.
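The button logic described above can be summarized in a short sketch. The original interface was built in Unity; this Python fragment is not the authors' code, and the message strings for the clue and key are assumptions beyond the "Oops" message quoted in the text.

```python
N_ROOMS = 3
BUTTONS = ("red", "pink", "blue")

def press(room_index, button, clue_buttons):
    """Return the message shown when the child clicks a button.

    clue_buttons[i] is the single button in room i that holds the clue;
    the key is always behind the clue of the last room, which keeps the
    priming duration roughly constant across participants.
    """
    if button != clue_buttons[room_index]:
        return "Oops, the key is not here"
    if room_index == N_ROOMS - 1:
        return "You found the key!"
    return "Clue: follow the arrows to the next room"
```

In the real interface each click additionally triggered the condition-specific priming cue (the shaking gift or the shaking screen).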
Position              | Promotion condition                           | Prevention condition
Start of interaction  | "I am so excited to do this!"                 | "I am so scared of the explosion!"
                      | "I want to see what is inside our gift!"      | "Let's try to do this quickly!"
Middle of interaction | "Oh the gift is moving!"                      | "Oh oh! The spaceship is shaking!"
                      | "I cannot wait to open the gift!"             | "Again with the shaking! Let's hurry up!"
                      | "We are almost there! We are going to do it!" | "This is getting scary!"
End of interaction    | "Wohoo! We are finally on planet Mars!"       | "We are finally on planet Mars."
                      | "I am so happy!"                              | "I feel so much better now!"
Table 1: Samples of the robot's verbal behavior during the priming scenario
4.1.2 The behavior of the robot. We used EMYS² as the robot in our study since it is a metallic robotic head capable of head movements and of portraying the six basic emotions through facial expressions. The emotions displayed by EMYS were validated in a study with school aged children (8-12 years) and were shown to convey the intended emotions [31]. The robot's behavior was tele-operated and
¹ https://unity.com/
² EMYS robot. Available at https://emys.co/
was designed to exhibit the two regulatory focus related emotions. Therefore, the robot's behavior exhibited excitement in the promotion condition and fear in the prevention condition. The robot's emotions were conveyed through two channels: verbal behavior and facial expressions. We used the adult male voice provided by Ivona³. We chose the male voice based on previous research [41] suggesting that a synthetic male voice is more favorable than a synthetic female voice. Furthermore, previous studies conducted with the EMYS robot and children [31] used a male voice; we adopted the same methodology to be able to compare our results with theirs.
In the promotion condition, we used the embedded EMYS joy expression (the closest available to excitement), displayed in Figure 3(a), together with verbal behavior, to convey excitement. Correspondingly, in the prevention condition, we used the embedded EMYS fear expression, shown in Figure 3(c), together with verbal behavior, to convey fear. At the end of the interaction, the robot uttered a verbal expression of happiness in the promotion condition and one of relief in the prevention condition. Examples of the robot's verbal behavior in the priming scenario are displayed in Table 1.
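The condition-to-behavior mapping above can be made explicit in a small sketch. The expression names mirror the embedded EMYS expressions named in the text (joy, fear) and the utterances come from Table 1, but the data structure itself is our own illustration, not the study's software.

```python
from dataclasses import dataclass

@dataclass
class RobotBehavior:
    facial_expression: str  # embedded EMYS expression shown throughout priming
    sample_utterance: str   # example mid-interaction verbal behavior (Table 1)
    final_emotion: str      # emotion expressed verbally at goal attainment

# One behavior profile per regulatory-focus condition.
BEHAVIORS = {
    "promotion": RobotBehavior("joy", "I cannot wait to open the gift!", "happiness"),
    "prevention": RobotBehavior("fear", "This is getting scary!", "relief"),
}
```

The asymmetry in `final_emotion` (happiness vs. relief) is what later rules out measuring fear conservation in the prevention condition.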
Figure 3: EMYS facial expressions [31]: (a) Joy, (b) Neutral, (c) Fear
4.2 Storytelling scenario
In order to implement the storytelling scenario, we developed two different versions, one for the pre-test and one for the post-test. The implementation of each version of the scenario is divided into two parts: the interface that the child used to tell the story to the robot, and the robot's behavior.
4.2.1 Interface. The storytelling interface was implemented using the Unity Game Engine. In each version (pre- and post-test), a set of four characters and nine objects was available for the child to use in the story. The software allowed moving the characters and objects
³ https://harposoftware.com/en/12-all-voices
Category              | Robot's speech
Question              | "What's your name?"
                      | "And then what happens?"
                      | "Why?"
                      | "Did you have fun?"
Feedback on the story | "Ooooh!"
                      | "That's too funny!"
                      | "That's scary!"
                      | "That's a good idea."
Greeting              | "Hello! I am a social robot. My name is EMYS."
                      | "We have finished our game. Bye!"
Table 2: Samples of the robot's verbal behavior in the storytelling scenario
around the scene. The children were invited to use the software to elaborate and tell whatever story they wanted. In the pre-test, the child was prompted to choose between two different scenarios for their stories: castle and park. In the post-test, the child chose between beach, farm and rain forest. The scenarios, characters and objects varied between the pre- and post-tests to enable the child to tell independent and non-repetitive stories. Children had the possibility to navigate between the different scenes of a scenario, or between the different scenarios, in the same part of the session (pre-test or post-test). The pre- and post-tests were freely timed; the child chose when to stop them. A sample image of the software is shown in Figure 2(b).
4.2.2 The behavior of the robot. The robot's behavior in the pre- and post-tests was also tele-operated. In the storytelling scenario, the robot's behavior was exhibited only through the verbal channel. The experimenter used the robot's verbal behavior to encourage the child to tell the story by asking questions or by providing feedback on the story. The robot also used friendly verbal behavior to start the interaction in the pre-test and to end it in the post-test. Examples of the robot's verbal behavior in the storytelling scenario are displayed in Table 2.
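The tele-operation described here amounts to the operator selecting canned utterances grouped by category. As a hypothetical sketch (the utterances are taken from Table 2; the structure and names are our own, not the wizard software used in the study):

```python
# Canned utterances available to the wizard operator, grouped by category.
WIZARD_UTTERANCES = {
    "question": ["What's your name?", "And then what happens?",
                 "Why?", "Did you have fun?"],
    "feedback": ["Ooooh!", "That's too funny!",
                 "That's scary!", "That's a good idea."],
    "greeting": ["Hello! I am a social robot. My name is EMYS.",
                 "We have finished our game. Bye!"],
}

def pick_utterance(category, index):
    """Return the canned utterance the operator selected for the robot to say."""
    return WIZARD_UTTERANCES[category][index]
```

In the study, the operator could also type short free-form answers when a child asked an unexpected question (see Section 5.4).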
A summary of the system design is displayed in Figure 4.
Figure 4: Summary of the system design
5 EXPERIMENTAL EVALUATION
5.1 Hypotheses
Hypotheses for this study are divided into hypotheses concerning the priming scenario (between subjects) and hypotheses related to the comparison between pre- and post-tests (within subjects). With respect to the priming scenario, we evaluated emotional induction. We were also interested in social engagement measures. According to prior research, affective states and engagement measures are interrelated, with a positive correlation in the case of positive valence emotions such as happiness [16], [49]. We wanted to examine whether we would observe a similar effect on the relationship between induced emotions and social engagement with a robot in our study.
For the priming scenario:
• Hypothesis 1 (H1): as a result of using a regulatory focus design of the pretend play interaction between the robot and the child, regulatory focus related emotions will be induced in the children.
  – H1.a: an affective state of happiness will be induced in the promotion condition. Children will exhibit more happiness related metrics in the promotion than in the prevention condition.
  – H1.b: an affective state of fear will be induced in the prevention condition. Children will exhibit more fear related metrics in the prevention than in the promotion condition.
• Hypothesis 2 (H2): as a result of inducing positive emotions in the promotion condition, children will express more social engagement with the robot in the promotion condition than in the prevention condition.
Hypotheses related to the comparison between pre- and post-tests measure emotional conservation. In case of successful emotional induction during the priming scenario, we hypothesize that induced emotions will be conserved through the final part of the interaction (post-test), as demonstrated in previous research [46], [21], [34]. Emotions in the promotion condition are relatively constant, with positive valence throughout the priming interaction and at its end with goal attainment. However, emotions in the prevention condition converge from fear throughout the priming interaction to relief at goal attainment. Therefore, it was not possible to assess emotional conservation of fear in the prevention condition, since fear should have changed to relief by the end of the priming scenario. Measuring relief conservation was not reasonable either, because the children were exposed to the relief emotional state for only a few seconds at the end of the priming scenario.
We decided to measure emotional conservation by comparing levels of affective expressions of the children between the pre- and post-test conditions, rather than comparing the affective expressions between the priming scenario and the post-test condition, for consistency. It seems fairer to compare the emotional levels between two tests of the same nature (telling a story) to eliminate biases from other influencing factors. For instance, the robot's behavior is more emotionally expressive in the priming scenario than in the pre- and post-tests, as illustrated in Table 1 and Figure 3, whereas the robot maintained an emotionally neutral behavior throughout the pre- and post-test parts of the interaction.
For the pre- and post-tests:
• Hypothesis 3 (H3): the state of happiness will be conserved throughout the interaction in the promotion condition. Children will exhibit more happiness related metrics in the post-test in comparison with the pre-test.
• Hypothesis 4 (H4): in the promotion condition, children will express more social engagement with the robot in the post-test than in the pre-test, as a result of H3.
5.2 Participants
69 children in the second and third grades were recruited from two British international schools in Lisbon, Portugal, which enabled the study to be conducted in English. Six participants were excluded for either not completing the activity or for speaking to the robot in their native language. Therefore, 63 participants (32 male and 31 female) were included in the final analysis of the data. Their ages ranged from 7 to 9 years old (M = 7.59, SD = 0.59). The study followed a between-subjects design with the condition as the independent variable. After exclusion, 34 participants were assigned to the promotion condition and 29 to the prevention condition.
5.3 Materials
During the interaction, the child was seated facing the robot which
was mounted on the other side of the table. The interface was
deployed on a touch screen situated on the table between the child
and the robot. A microphone was placed in front of the child to
record the audio data. We used two cameras to record the video data.
One was used to capture the frontal view, with emphasis on the child's face, and the other to capture the lateral view, with emphasis on the child's input to the touch screen.
5.4 Procedures
The study design and procedures were approved by the local insti-
tution’s ethical committee. We sent consent forms to the children’s
parents one week prior to conducting the study. The consent forms
included the authorization from the parent for the child’s participa-
tion, the recording of video data, the recording of audio data and
the public sharing of the data. The duration of the whole interaction
ranged between 6 to 30 minutes for each child.
The interaction took place at the children’s schools. Two experi-
menters were present in the room during the interaction, one guided
the child through the activity and the other was tele-operating the
wizarded robot. In case a child asked random questions, the second
experimenter used general relevant answers available in the wizard
(i.e: yes, no, I don’t know). The experimenter was also sometimes
able to generate real time answers by typing them quickly in case
the answers were short enough to avoid awkward delays.
The interaction started with the first experimenter welcoming the
child and introducing him/her to the first activity by explaining
that he/she was supposed to use the interface on the touch screen to
tell a story to the robot. The experimenter also explained that the
child could speak to the robot and ask him questions. The experi-
menter asked the child to notify her as soon as he/she was done
with the first part of the activity. After finishing the pre-test, the
experimenter explained the next part of the activity, which was the
priming scenario. She emphasized the importance of finding the
Reward Seeking or Loss Aversion? CHI ’21, May 8–13, 2021, Yokohama, Japan
key to escape the spaceship and receive a gift in the promotion
condition, and of avoiding the explosion of the spaceship in the
prevention condition. She also asked the child to pay attention to
the robot’s instructions because he knows the location of the key. After the
priming, the experimenter explained that the child would tell another
story to the robot using different scenarios and different characters.
She also requested that the child notify her as soon as he/she
finished. After finishing the post-test, the experimenter invited the
child to respond to a short questionnaire about demographic data.
Finally, the experimenter thanked the child for his/her participation.
5.5 Measures
To assess our hypotheses, we evaluated two measures: affective
expressions and social engagement.
5.5.1 Induced affective expressions. To measure induced affective
expressions, we extracted facial expressions from the collected
frontal video data. The affective expressions we were interested
in analysing have distinctive facial behavior. We also wanted to
analyse this data in a manner that enabled analysis from previously
recorded videos and without distracting children with extra
wearables during the interaction.
According to the review in [20], Affectiva⁴ is one of the most
commonly used software packages for facial expression analysis:
accurate, fast in terms of data extraction and easy to integrate in
a project. We used the Affectiva Javascript SDK and analysed the
videos stored locally. Similarly to the AFFDEX SDK by the same
company [39], the Affectiva Javascript SDK uses deep learning
technology for facial expression analysis. It detects 7 emotions
(anger, contempt, disgust, fear, joy, sadness and surprise) and 15
expressions (including brow raise, brow furrow, cheek raise, smile
and smirk). The software also calculates scores for valence and
engagement as descriptive measures of the emotional experience.
The technology used by Affectiva for the extraction is based on Paul
Ekman’s facial action coding system (FACS) [17]. We provided a
time interval of 500 milliseconds to the software. For each time
frame of the video, the software attempts to detect a face. If a face
is detected, the application generates facial expression values for it.
At the end of the process, a file is generated with time entries and
the corresponding extracted facial expression data. Extracted values
range from 0 to 100 (from no expression detected to fully present).
We only included the following measures in our analysis: joy,
smile and fear.
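The reduction from per-frame output to per-video averages can be sketched as follows. The column names and file layout here are assumptions for illustration (the SDK writes one row per analysed frame, every 500 ms in our setting, with scores from 0 to 100 and empty values when no face was detected); only the aggregation logic follows the text.

```python
# Sketch: reduce per-frame facial expression output (Affectiva-style
# CSV, layout assumed) to per-video averages for selected measures.
import csv
import io
from statistics import mean

def average_expressions(csv_text, columns=("joy", "smile", "fear")):
    """Average the requested expression scores over frames with a detected face."""
    reader = csv.DictReader(io.StringIO(csv_text))
    scores = {c: [] for c in columns}
    for row in reader:
        for c in columns:
            value = row.get(c, "")
            if value != "":  # skip frames where no face was detected
                scores[c].append(float(value))
    return {c: (mean(v) if v else None) for c, v in scores.items()}

# Example with three frames sampled every 500 ms, one without a face.
sample = (
    "time,joy,smile,fear\n"
    "0.0,10,20,0\n"
    "0.5,,,\n"          # no face detected in this frame
    "1.0,30,40,2\n"
)
print(average_expressions(sample))  # {'joy': 20.0, 'smile': 30.0, 'fear': 1.0}
```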
We excluded further participants’ data from this analysis because
of missing frontal video, missing frame data resulting from the
software not recognising children’s faces, or lack of parental video
consent. In total, we analysed 52 videos for affective expressions,
29 in the promotion condition and 23 in the prevention condition.
5.5.2 Social engagement measures. We used the recorded videos to
analyze the behavior of the children in order to evaluate social
engagement. We used frontal videos for the analysis and lateral
videos whenever the frontal video was missing. We excluded some
participants’ data from this analysis because of missing video data
or lack of parental video consent. In total, we analysed 54 videos
for social engagement, 30 in the promotion condition and 24 in the
prevention condition.
⁴ https://www.affectiva.com/
We analysed the videos for social engagement by coding them
using ELAN⁵ [58], developed by the Max Planck Institute for Psy-
cholinguistics. ELAN is a software tool used for the annotation
and transcription of audio and video data for behavioral analysis
purposes.
We developed our coding scheme based on the procedures demon-
strated in [42]. We adopted a selective coding approach in which
we did not code the full data but rather the specific agreed-upon
behaviors related to our research questions. The coding scheme
contained 19 different behaviors covering the robot’s verbal
behavior, the children’s verbal behavior and the children’s non-
verbal behavior. The ELAN software allowed for coding the time,
the duration and the category of each recognised behavior. As
standard practice suggests [12], a primary coder coded all the data
while a second coder double-coded 25% randomly selected samples
of the videos to enable the assessment of agreement. We calculated
the agreement rates using EasyDIAg [26], a toolbox developed for
the calculation of inter-rater agreement measures for ELAN-coded
data. EasyDIAg generates agreement by matching time sequences
with corresponding categories. A time match is detected if the
overlap between two time sequences exceeds a certain threshold;
we used the system’s default value of 60% overlap. For inter-rater
agreement, EasyDIAg generates raw agreement values as well as
Cohen’s kappa and maximum kappa indices. We obtained high
agreement ratings with Cohen’s kappa, the most significant
statistical measure for the evaluation of agreement in observational
research [8], ranging between 0.82 and 0.93 (M = 0.87).
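The overlap-based matching step can be illustrated with a minimal sketch. EasyDIAg's exact overlap definition may differ from the one assumed here (overlap relative to the union of the two intervals); function names and data are ours, and only raw agreement is computed, not kappa.

```python
# Sketch of time-overlap matching in the spirit of EasyDIAg: two
# annotations are a time match when their temporal overlap covers at
# least `threshold` (60% default) of their combined extent. Overlap
# definition is an assumption for illustration.

def overlap_ratio(a, b):
    """Overlap of intervals a and b relative to their union."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def raw_agreement(coder1, coder2, threshold=0.6):
    """Fraction of coder1's annotations that a time-matched coder2
    annotation labels with the same category."""
    matches = 0
    for (s1, e1, label1) in coder1:
        for (s2, e2, label2) in coder2:
            if overlap_ratio((s1, e1), (s2, e2)) >= threshold and label1 == label2:
                matches += 1
                break
    return matches / len(coder1)

# Illustrative annotations as (start, end, category) triples.
c1 = [(0.0, 2.0, "Question"), (5.0, 6.0, "Response"), (8.0, 9.0, "Inform")]
c2 = [(0.2, 2.1, "Question"), (5.0, 6.1, "Response"), (8.0, 9.0, "Greeting")]
print(raw_agreement(c1, c2))  # 2 of 3 annotations agree in time and label
```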
While coding the data, we observed a lack of non-verbal behavior
from the children. None of the children tried to touch the robot.
Some of the children exhibited head nods, but they were scarce.
Facial expression data was already analysed using Affectiva. Conse-
quently, we limited our behavioral analysis to the verbal behavior
of the children, following the approach applied in previous research
[53], where verbal response was identified as a vital part of social
engagement. We considered verbal behavior of the child to be a
sign of social engagement with the robot if it belonged to one of
the following four categories:
• Question: children asked the robot storytelling-related ques-
tions in the pre- and post-test, questions about what to do in
the priming scenario, and general questions about the robot.
• Response: this category included responses to the robot’s
queries or comments. Some children responded to the robot
and others did not. Therefore, this category was calculated as
a ratio between the robot’s comments/questions and the
child’s responses.
• Inform: children spontaneously shared information with the
robot.
• Greeting: children greeted the robot at the start and at the
end of the interaction.
Samples of children’s verbal behavior for each category are dis-
played in Table 3.
We also used the engagement index generated by the Affectiva
SDK to evaluate social engagement. It defines engagement as
a measure of facial expressiveness through muscle activation. It is
calculated as a weighted sum of 10 expressions: brow raise, brow
furrow, nose wrinkle, lip corner depressor, chin raise, lip pucker,
lip press, mouth open, lip suck and smile.
⁵ https://archive.mpi.nl/tla/elan
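The structure of this index can be sketched as follows. The actual weights used by Affectiva are not published in the paper, so the uniform weights below are an illustrative assumption; only the shape of the computation (a weighted sum over 10 expression scores, kept on the 0–100 scale) follows the text.

```python
# Sketch of a weighted-sum engagement index over 10 facial expressions.
# The uniform weights are an assumption; Affectiva's real weights are
# not given in the paper.

EXPRESSIONS = [
    "brow_raise", "brow_furrow", "nose_wrinkle", "lip_corner_depressor",
    "chin_raise", "lip_pucker", "lip_press", "mouth_open", "lip_suck", "smile",
]

def engagement_index(frame, weights=None):
    """Weighted sum of expression scores (0-100 each), clipped to 0-100."""
    weights = weights or {e: 1.0 / len(EXPRESSIONS) for e in EXPRESSIONS}
    total = sum(weights[e] * frame.get(e, 0.0) for e in EXPRESSIONS)
    return max(0.0, min(100.0, total))

# Example frame: a strong smile and a mild brow raise, all else absent.
frame = {e: 0.0 for e in EXPRESSIONS}
frame["smile"] = 80.0
frame["brow_raise"] = 20.0
print(engagement_index(frame))  # (80 + 20) / 10 = 10.0
```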
Category | Pre- and post-tests | Priming scenario
Question | “What do you think should happen next in the story?”; “Do you like my story?”; “Do you like ice cream?” (a random question) | “Should I click this button?”; “Where?” (asking where to move next)
Response | “Yes.” in response to the robot’s question “Did you have fun?”; “No, it’s not!” in response to the robot’s comment “That’s too funny!” | “Me too” in response to the robot’s comment “I am so excited!”
Inform | “I am not into fantasy, I am more of an IQ person.” | “It told me that the red button has the key!”
Greeting | “Hello!” or “Hi!” at the beginning of the pre-test; “Bye!” at the end of the post-test. | -
Table 3: Samples of children’s verbal behavior
6 RESULTS
We ran the Shapiro-Wilk (S-W) test to check the normality of our
variables’ distributions. The null hypothesis was rejected (p < 0.05)
for all our variables and thus we deduced that they were not
normally distributed. Based on this, we used the non-parametric
Wilcoxon signed-rank test for our statistical analysis. The Affectiva-
generated engagement indices were the only exception: the S-W
result indicated that they were normally distributed (p > 0.05).
Therefore, we analysed those engagement measures using a
parametric one-way MANOVA test. For all the tests, we used the
condition as the independent variable (promotion vs. prevention for
the priming scenario and pre- vs. post-test for the storytelling
scenario). As response variables, we measured the induced affective
expressions and social engagement measures that we were
interested in evaluating.
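The rank-based logic behind the non-parametric comparisons can be illustrated without any statistics library. The paper reports Wilcoxon tests computed with standard software; the dependency-free sketch below shows only the core rank-comparison idea, the Mann-Whitney U statistic for two independent samples, with function names and data values that are ours.

```python
# Illustrative sketch of the rank comparison underlying non-parametric
# two-sample tests: the Mann-Whitney U statistic. Not the exact test or
# software used in the paper.

def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: number of (a, b) pairs with a > b,
    counting ties as half a pair."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Made-up per-child joy averages for the two conditions.
joy_promotion = [7.5, 12.0, 3.1, 25.4, 18.9]
joy_prevention = [0.5, 1.1, 3.1, 0.2, 1.8]
u = mann_whitney_u(joy_promotion, joy_prevention)
print(u)  # 24.5 out of a maximum of 25: promotion values rank mostly higher
```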
6.1 Induced affective expressions
We evaluated two measures to inspect the emotional induction of
happiness in the promotion condition: the average indices generated
by the Affectiva software for the joy and smile expressions. We
found a significant effect of condition on both values. Children
exhibited significantly higher averages of joy (W = 216, p = 0.03, M
= 7.52, SD = 12.04) (Figure 5(a)) and smile expressions (W = 199, p =
0.013, M = 9.45, SD = 12.92) (Figure 5(b)) in the promotion condition
than in the prevention condition. Hence, H1.a was accepted.
Similarly, to inspect the emotional induction of fear in the pre-
vention condition, we analysed the average fear index generated
by the Affectiva SDK. We did not find a significant effect of the
condition on the fear index value (W = 316, p = 0.76, M = 0.04, SD
= 1.76). Children did not express higher averages of fear in the
prevention condition than in the promotion condition. Therefore,
H1.b was rejected.
We assessed the emotional conservation of the affective state of
happiness throughout the interaction by comparing the joy and
smile averages between pre- and post-tests for participants in the
promotion condition. However, the results were not significant for
either measure (joy: W = 324, p = 0.14, M = 13.15, SD = 16.01; smile:
W = 336, p = 0.193, M = 15.14, SD = 16.78). We concluded that
emotional conservation did not occur for happiness in the promo-
tion condition. Thus, H3 was rejected. We also compared joy and
smile averages between pre- and post-tests independent of the
priming condition, but this analysis did not yield significant results
either (joy: W = 1162, p = 0.22, M = 12.46, SD = 15.88; smile: W =
1211, p = 0.36, M = 14.34, SD = 16.7).
Figure 5: Analysis of facial expressions per condition. Chil-
dren expressed significantly higher averages of joy and
smile expressions in the promotion than in the prevention
condition.
6.2 Social engagement measures
Following our prediction of successful induction of happiness dur-
ing the priming scenario in the promotion condition (H1.a), we
hypothesized that children would exhibit more social engagement
in the promotion condition than in the prevention condition
throughout the priming interaction. To test this hypothesis, we
analysed both the engagement measure generated by the Affectiva
SDK and the social verbal behavior exhibited by the child towards
the robot. The social verbal behavior measure is a frequency rather
than an average; hence, we divided the number of social verbal
behaviors detected by the duration of the corresponding interaction.
A significant effect of the condition on both measures of engage-
ment was found (engagement index from Affectiva: p = 0.038, M =
33.3, SD = 18.84; verbal social behavior: W = 236.5, p = 0.009, M =
0.003, SD = 0.007) (Figure 6(c) and Figure 6(a)). Children were more
socially engaged in the promotion condition than in the prevention
condition during the priming scenario. H2 was therefore accepted.
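The normalization step described above, a rate of social verbal behaviors per unit of interaction time rather than a raw count, can be sketched as follows. The category names follow Table 3; the annotation data and function names are illustrative.

```python
# Sketch: normalize the count of coded social verbal behaviors by the
# duration of the child's interaction, as described in the text.

SOCIAL_CATEGORIES = {"Question", "Response", "Inform", "Greeting"}

def verbal_behavior_rate(annotations, duration_seconds):
    """Rate of social verbal behaviors per second of interaction.
    Annotations are (start, end, category) triples."""
    count = sum(1 for (_, _, label) in annotations if label in SOCIAL_CATEGORIES)
    return count / duration_seconds

# Illustrative coded annotations for one child.
annotations = [
    (3.0, 4.2, "Greeting"),
    (10.5, 12.0, "Question"),
    (40.0, 41.0, "Response"),
    (55.0, 56.0, "RobotSpeech"),  # robot behavior, not a child behavior
]
print(verbal_behavior_rate(annotations, 300.0))  # 3 behaviors / 300 s = 0.01
```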
Although we did not find proof of emotional conservation, we
analysed the engagement measures for the promotion condition,
comparing between pre- and post-tests. No significant effect was
found (engagement index from Affectiva: p = 0.07, M = 39.74, SD =
16.5; verbal social behavior: W = 580, p = 0.056, M = 0.009, SD =
0.014). Consistently with the rejection of H3, participants in the
promotion condition did not display more social engagement in the
post-test than in the pre-test. However, when we assessed the
engagement measures comparing between pre- and post-tests
independent of the priming condition, the results showed a signifi-
cant effect on coded social verbal behavior (W = 1949, p = 0.002, M
= 0.008, SD = 0.01) (Figure 6(b)). Children exhibited more social
verbal behaviors towards the robot in the post-test than in the
pre-test, whereas no significant effect was found on the Affectiva-
generated engagement values (p = 0.08, M = 38.4, SD = 16.6) (Figure
6(c)). H4 was thus rejected. We also conducted an analysis to com-
pare the durations of the pre- and post-tests. We expected a longer
duration for the post-test, since the social verbal behavior results
suggested more engagement in the post-test than in the pre-test.
The results, however, showed the opposite effect: the duration of
the pre-tests was significantly higher than that of the post-tests
(W = 1041.5, p = 0.021, M = 5.33, SD = 3.37).
Figure 6: Analysis of the Affectiva engagement measure and
social verbal behaviors in both the priming scenario (promo-
tion vs. prevention) and the storytelling scenario (pre-test vs.
post-test). The Affectiva engagement index was significantly
higher in the promotion than in the prevention condition.
Children exhibited significantly more social verbal behav-
iors in the promotion than in the prevention condition and
in the post-test than in the pre-test. Error bars represent the
standard error of the mean.
6.3 Qualitative analysis
We also observed compelling behaviors from the children during
the whole interaction. For example, some of them used the robot as
a character in their stories (e.g., “you and I were at the beach”).
Others cheered (promotion) or expressed a sigh of relief (prevention)
at the end of the priming interaction, or occasionally expressed
sighs of awe in response to the robot’s behavior. Furthermore, at
the end of the priming scenario in the promotion condition, some
children danced with the robot when he invited them to dance by
saying “oh! A party with the aliens. Let’s dance!”. However, these
behaviors were exhibited by only a few children and were therefore
excluded from the statistical analysis.
7 DISCUSSION
We obtained interesting results from applying the RFT for emotional
induction. As hypothesised, designing an RFT pretend play interac-
tion with a robot successfully induced feelings of happiness in
children in the promotion condition. Designing the same interaction
in a prevention paradigm did not lead to the induction of feelings
of fear. Several reasons may explain this discrepancy. First, for
consistency reasons we chose to induce both feelings of happiness
and fear using imagery and visual stimuli. However, as discussed
in [54], the best methods for inducing happiness are imagery and
visual stimuli, whereas for fear they are situational procedures
followed by imagery and visual stimuli. This suggests that a
situational procedure may have been more successful for fear
induction. Second, as explained in [2], fear is the most challenging
emotion to recognise from facial expressions while happiness is the
easiest. This is also supported by the validation study conducted in
[31], where children (8-12 years) recognised EMYS joy and fear
expressions with an accuracy of 91.9% and 64% respectively.
Furthermore, the emotions induced in the prevention condition may
fall under other aspects related to fear, such as panic or stress, that
could have been better detected by assessing brain activity using
electroencephalographic signals (EEG) or by measuring electro-
dermal activity through galvanic skin response (GSR). We decided
against these measures in our setup in order to avoid burdening
the children with extra sensors during the interaction. This point,
however, paves the way for interesting research questions to
investigate in the future.
Previous studies have indicated that inducing positive affect has
a positive effect on creativity [6], problem solving skills [28], flexi-
bility in cognitive organization [27], widening the scope of atten-
tion [50] and visual perception [44]. Therefore, RFT has great
potential for designing educational scenarios in cHRI. We plan in a
future publication to investigate the effects of RFT on the creative
process by comparing the stories told by children in the pre- and
post-tests. We hypothesize that a promotion focus task design will
result in higher creative performance than a prevention focus task
design [19]. The difference in creative performance is attributed
to the induction of the specific corresponding emotions. Therefore,
before assessing creativity measures from our data, we had to
ensure successful emotional induction and then build on it with
the assessment of creativity skills.
As hypothesised, the successful emotional induction of happiness
led to signs of social engagement of children with the robot. There-
fore, designing child interactions with a robot in a promotion-
focused paradigm may introduce further benefits. It may be used
for developing children’s social skills by inducing specific social
behaviors towards the robot.
No signs of emotional conservation occurred in the post-test.
Evidence in the literature has discussed successful emotional con-
servation when emotions are induced through music [46] and
visual stimuli [21], [34]. However, none has discussed emotional
conservation when the visual stimuli are expressed by a robot, as
in our study. Prior research has also not investigated emotional
conservation when emotions are induced by imagery. As per [34],
within a window of 8 minutes, induction follows a logarithmic
function where the emotional state starts to decay after the first
few minutes. Hence, in our study, emotional conservation may
have occurred for a very short window and may have had an
unnoticeable effect on the rest of the session. The post-test duration
was freely timed and ranged from 1.58 to 15.43 minutes (M = 4.78).
We did not want to restrict the children by introducing a shorter
time limit for the post-test. The duration of the post-test was also
a measure that we were interested in as an effect of engagement.
Another explanation may be that in the pre- and post-tests, chil-
dren were more immersed in the screen, moving between scenes
and characters, than in the priming scenario. This resulted in fewer
frames in which the face of the child was detected by the Affectiva
software, and thus some facial expression data may have been
missed.
The results revealed that social verbal behaviors were significantly
more frequent in the post-test than in the pre-test, independent of
the priming condition. This suggests that children were getting
more engaged and more acquainted with the robot’s behavior over
time. We also suggest that pretend play may have been an impor-
tant factor in this: it may have made the child more immersed in
the interaction and may have helped build social rapport with the
robot as well.
Another interesting result of the study is that, despite children
exhibiting more social behaviors in the post-test than in the pre-
test, the duration of the pre-tests was significantly higher than
that of the post-tests. Children told the robot longer stories in the
pre-test than in the post-test. This may be a result of a novelty
effect at the beginning of the interaction that faded or decreased
by having to repeat the same activity (telling a story to the robot)
at the end of the interaction.
8 LIMITATIONS AND FUTURE WORK
As a result of the successful emotional induction, we are interested
in analysing the recorded audio data for the pre- and post-tests.
We aim to evaluate creativity measures in children’s stories before
and after the successful emotional induction, despite the lack of
emotional conservation. Emotions may have ceased to show in
the facial muscles; nevertheless, they may have had an impact on
cognitive measures that may influence creativity.
We would also like to further investigate the unexpected results
concerning the relation between time and social engagement. In
the future, we plan to assess whether social engagement was higher
in the post-test because of the pretend play or because of the length
of the interaction by introducing a control condition.
As discussed before, we did not attempt to detect fear induction by
means of EEG or GSR, to prevent distracting the children with extra
sensors. Nevertheless, in the future we might be able to confirm our
results concerning fear by evaluating subjective measures in the
same setting or by using other sensors in another fear induction
scenario.
9 CONCLUSION
We manipulated the RFT for the design of a pretend play interac-
tion between a robot and a child. We were primarily interested
in the effect of RFT on the induction of regulatory focus related
emotions. We succeeded in inducing happiness in the promotion
condition, whereas we failed to prove fear induction in the preven-
tion condition. We also investigated the effect of emotional induc-
tion on social engagement. Consistently with the literature, the
induction of positive emotions resulted in more social engagement
in the promotion condition during the priming scenario.
We also examined whether emotional conservation occurred in the
post-test after the successful induction in the priming scenario.
Emotional conservation did not occur in our setting, and conse-
quently social engagement did not vary between pre- and post-
tests in the promotion condition. However, we found another
intriguing result: social verbal behaviors were exhibited more by
children in the post-test than in the pre-test, independent of the
priming condition. Our results have strong implications for the
design of both educational tasks and child interactions with social
agents.
ACKNOWLEDGMENTS
This work was supported by the European Commission Horizon
2020 Research and Innovation Program under Grant Agreement
No. 765955. We would like to thank the reviewers for their valuable
feedback that helped us polish the final version of the paper. We
would also like to acknowledge the help of Sahba Zojaji in double
coding the data for the inter-rater agreement, and Giovanna Varni
for her valuable insights on the behavioral analysis of the data and
her revision of the paper’s first draft.
REFERENCES
[1]
Kim Adams, Adriana Rios, Lina Becerra, and Paola Esquivel. 2015. Using robots to
access play at different developmental levels for children with severe disabilities:
a pilot study. In RESNA Conference. RESNA, Washington DC.
[2]
Ralph Adolphs, Daniel Tranel, S Hamann, Andrew W Young, Andrew J Calder,
Elizabeth A Phelps, Al Anderson, Gregory P Lee, and Antonio R Damasio. 1999.
Recognition of facial emotion in nine individuals with bilateral amygdala damage.
Neuropsychologia 37, 10 (1999), 1111–1117.
[3]
Roxana Agrigoroaie, Stefan-Dan Ciocirlan, and Adriana Tapus. 2020. In the Wild
HRI Scenario: Influence of Regulatory Focus Theory. Frontiers in Robotics and AI
7 (2020).
[4]
Helga Andresen. 2005. Role play and language development in the preschool
years. Culture & Psychology 11, 4 (2005), 387–414.
[5]
Sean Andrist, Bilge Mutlu, and Adriana Tapus. 2015. Look like me: matching
robot personality via gaze to increase motivation. In Proceedings of the 33rd
annual ACM conference on human factors in computing systems (CHI ’15). ACM,
New York, NY, USA, 3603–3612.
[6]
Matthijs Baas, Carsten KW De Dreu, and Bernard A Nijstad. 2008. A meta-
analysis of 25 years of mood-creativity research: Hedonic tone, activation, or
regulatory focus? Psychological bulletin 134, 6 (2008), 779.
[7]
Matthijs Baas, Carsten KW De Dreu, and Bernard A Nijstad. 2011. When pre-
vention promotes creativity: The role of mood, regulatory focus, and regulatory
closure. Journal of personality and social psychology 100, 5 (2011), 794.
[8]
Roger Bakeman and Vicenç Quera. 2011. Sequential analysis and observational
methods for the behavioral sciences. Cambridge University Press, Cambridge.
[9]
Linda Bell, Joakim Gustafson, and Mattias Heldner. 2003. Prosodic adaptation in
human-computer interaction. In Proceedings of ICPHS 2003, Vol. 3. Citeseer, USA,
833–836.
[10]
Doris Bergen. 2002. The role of pretend play in children’s cognitive development.
Early Childhood Research & Practice 4, 1 (2002), n1.
[11]
Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, and Peter W
McOwan. 2009. Detecting user engagement with a robot companion using
task and social interaction-based features. In Proceedings of the 2009 international
conference on Multimodal interfaces. ACM, New York, NY, USA, 119–126.
[12]
Jill MacLaren Chorney, C Meghan McMurtry, Christine T Chambers, and Roger
Bakeman. 2015. Developing and modifying behavioral coding schemes in pedi-
atric psychology: a practical guide. Journal of pediatric psychology 40, 1 (2015),
154–164.
[13]
Ellen Crowe and E. Tory Higgins. 1997. Regulatory focus and strategic inclinations:
Promotion and prevention in decision-making. Organizational behavior and
human decision processes 69, 2 (1997), 117–132.
[14]
Arturo Cruz-Maya, Roxana Agrigoroaie, and Adriana Tapus. 2017. Improving
user’s performance by motivation: Matching robot interaction strategy with
user’s regulatory state. In International Conference on Social Robotics. Springer,
New York, USA, 464–473.
[15]
Arturo Cruz-Maya and Adriana Tapus. 2018. Adapting Robot Behavior using
Regulatory Focus Theory, User Physiological State and Task-Performance Infor-
mation. In 2018 27th IEEE International Symposium on Robot and Human Interactive
Communication (RO-MAN). IEEE, New York, USA, 644–651.
[16]
Jesus Alfonso D Datu, Ronnel B King, and Jana Patricia M Valdez. 2017. The
academic rewards of socially-oriented happiness: Interdependent happiness pro-
motes academic engagement. Journal of School Psychology 61 (2017), 19–31.
[17]
Paul Ekman and Wallace V Friesen. 1978. Manual for the facial action coding
system. Consulting Psychologists Press, Palo Alto, CA, USA.
[18]
Caroline Faur, Jean-Claude Martin, and Celine Clavel. 2015. Matching artificial
agents’ and users’ personalities: designing agents with regulatory-focus and
testing the regulatory fit effect. In CogSci. Cognitive Science Society, Washington,
USA.
[19]
Ronald S Friedman and Jens Förster. 2005. Effects of motivational cues on
perceptual asymmetry: Implications for creativity and analytical problem solving.
Journal of personality and social psychology 88, 2 (2005), 263.
[20]
Jose Maria Garcia-Garcia, Victor MR Penichet, and Maria D Lozano. 2017. Emo-
tion detection: a technology review. In Proceedings of the XVIII international
conference on human computer interaction. Springer, USA, 1–8.
[21]
Patrick Gomez, PG Zimmermann, Sissel Guttormsen Schär, and Brigitta Danuser.
2009. Valence lasts longer than arousal: Persistence of induced moods as assessed
by psychophysiological measures. Journal of Psychophysiology 23, 1 (2009), 7–17.
[22]
E Tory Higgins. 1997. Beyond pleasure and pain. American psychologist 52, 12
(1997), 1280.
[23]
Rens Hoegen, Deepali Aneja, Daniel McDuff, and Mary Czerwinski. 2019. An
end-to-end conversational style matching agent. In Proceedings of the 19th ACM
International Conference on Intelligent Virtual Agents. ACM, New York, NY, USA,
111–118.
[24]
Rens Hoegen, Job Van Der Schalk, Gale Lucas, and Jonathan Gratch. 2018. The
impact of agent facial mimicry on social behavior in a prisoner’s dilemma. In
Proceedings of the 18th International Conference on Intelligent Virtual Agents. ACM,
New York, NY, USA, 275–280.
[25]
Galit Hofree, Paul Ruvolo, Marian Stewart Bartlett, and Piotr Winkielman. 2014.
Bridging the mechanical and the human mind: spontaneous mimicry of a physi-
cally present android. PloS one 9, 7 (2014), e99934.
[26]
Henning Holle and Robert Rein. 2015. EasyDIAg: A tool for easy determination
of interrater agreement. Behavior research methods 47, 3 (2015), 837–847.
[27]
Alice M Isen. 1987. Positive affect, cognitive processes, and social behavior.
In Advances in experimental social psychology. Vol. 20. Elsevier, Amsterdam,
Netherlands, 203–253.
[28]
Alice M Isen, Kimberly A Daubman, and Gary P Nowicki. 1987. Positive affect
facilitates creative problem solving. Journal of personality and social psychology
52, 6 (1987), 1122.
[29]
Jason F Jent, Larissa N Niec, and Sarah E Baker. 2011. Play and interpersonal
processes. Play in clinical practice: evidence-based approaches. Guilford Press, New
York 2, 2 (2011), 23–47.
[30]
Eun Ho Kim, Sonya S Kwak, and Yoon Keun Kwak. 2009. Can robotic emotional
expressions induce a human to empathize with a robot?. In RO-MAN 2009-The 18th
IEEE International Symposium on Robot and Human Interactive Communication.
IEEE, USA, 358–362.
[31]
J. Kędzierski, R. Muszyński, C. Zoll, A. Oleksy, and M. Frontkiewicz. 2013.
EMYS—emotive head of a social robot. International Journal of Social Robotics 5,
2 (2013), 237–249.
[32]
Jacqueline M Kory-Westlund and Cynthia Breazeal. 2019. Exploring the effects of
a social robot’s speech entrainment and backstory on young children’s emotion,
rapport, relationship, and learning. Frontiers in Robotics and AI 6 (2019), 54.
[33]
Dalibor Kučera and Jiří Haviger. 2012. Using mood induction procedures in
psychological research. Procedia-Social and Behavioral Sciences 69 (2012), 31–40.
[34]
Andre Kuijsters, Judith Redi, Boris de Ruyter, and Ingrid Heynderickx. 2016. In-
ducing sadness and anxiousness through visual media: Measurement techniques
and persistence. Frontiers in psychology 7 (2016), 1141.
[35]
Sonya S Kwak, Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho.
2013. What makes people empathize with an emotional robot?: The impact of
agency and physical embodiment on human empathy for a robot. In 2013 IEEE
RO-MAN. IEEE, USA, 180–185.
[36]
Heather C Lench, Sarah A Flores, and Shane W Bench. 2011. Discrete emotions
predict changes in cognition, judgment, experience, behavior, and physiology: a
meta-analysis of experimental emotion elicitations. Psychological bulletin 137, 5
(2011), 834.
[37]
C. Li, Q. Jia, and Y. Feng. 2016. Human-Robot Interaction Design for Robot-
Assisted Intervention for Children with Autism Based on E-S Theory. In 2016 8th
International Conference on Intelligent Human-Machine Systems and Cybernetics
(IHMSC), Vol. 02. CPS, USA, 320–324.
[38]
John D Mayer, Laura J McCormick, and Sara E Strong. 1995. Mood-congruent
memory and natural mood: New evidence. Personality and Social Psychology
Bulletin 21, 7 (1995), 736–746.
[39]
Daniel McDuff, Abdelrahman Mahmoud, Mohammad Mavadati, May Amr, Jay
Turcot, and Rana el Kaliouby. 2016. AFFDEX SDK: a cross-platform real-time
multi-face expression recognition toolkit. In Proceedings of the 2016 CHI conference
extended abstracts on human factors in computing systems. ACM, New York, NY,
USA, 3723–3726.
[40] Candice M Mottweiler and Marjorie Taylor. 2014. Elaborated role play and creativity in preschool age children. Psychology of Aesthetics, Creativity, and the Arts 8, 3 (2014), 277.
[41] John W Mullennix, Steven E Stern, Stephen J Wilson, and Corrie-lynn Dyson. 2003. Social perception of male and female computer synthesized speech. Computers in Human Behavior 19, 4 (2003), 407–424.
[42] Yfke P Ongena and Wil Dijkstra. 2006. Methods of behavior coding of survey interviews. Journal of Official Statistics 22, 3 (2006), 419.
[43] Maike Paetzel, Isabelle Hupont, Giovanna Varni, Mohamed Chetouani, Christopher Peters, and Ginevra Castellano. 2017. Exploring the Link between Self-assessed Mimicry and Embodiment in HRI. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, 245–246.
[44] Nicolas Poirel, Mathieu Cassotti, Virginie Beaucousin, Arlette Pineau, and Olivier Houdé. 2012. Pleasant emotional induction broadens the visual world of young children. Cognition & Emotion 26, 1 (2012), 186–191.
[45] Kenneth M Prkachin, Rhonda M Williams-Avery, Caroline Zwaal, and David E Mills. 1999. Cardiovascular changes during induced emotion: An application of Lang's theory of emotional imagery. Journal of Psychosomatic Research 47, 3 (1999), 255–267.
[46] Fabiana Silva Ribeiro, Flávia Heloísa Santos, Pedro Barbas Albuquerque, and Patrícia Oliveira-Silva. 2019. Emotional induction through music: Measuring cardiac and electrodermal responses of emotional states and their persistence. Frontiers in Psychology 10 (2019), 451.
[47] Laurel D Riek, Philip C Paul, and Peter Robinson. 2010. When my robot smiles at me: Enabling human-robot rapport via real-time head gesture mimicry. Journal on Multimodal User Interfaces 3, 1-2 (2010), 99–108.
[48] Ben Robins, Kerstin Dautenhahn, Rene Te Boekhorst, and Aude Billard. 2005. Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills? Universal Access in the Information Society 4, 2 (2005), 105–120.
[49] Natacha Rouaix, Laure Retru-Chavastel, Anne-Sophie Rigaud, Clotilde Monnet, Hermine Lenoir, and Maribel Pino. 2017. Affective and engagement issues in the conception and assessment of a robot-assisted psychomotor therapy for persons with dementia. Frontiers in Psychology 8 (2017), 950.
[50] Gillian Rowe, Jacob B Hirsh, and Adam K Anderson. 2007. Positive affect increases the breadth of attentional selection. Proceedings of the National Academy of Sciences 104, 1 (2007), 383–388.
[51] Sandra W Russ and Julie A Fiorelli. 2010. Developmental approaches to creativity. The Cambridge Handbook of Creativity 12 (2010), 233–249.
[52] Anara Sandygulova and Gregory MP O'Hare. 2016. Investigating the impact of gender segregation within observational pretend play interaction. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, USA, 399–406.
[53] Sofia Serholt and Wolmet Barendregt. 2016. Robots tutoring children: Longitudinal evaluation of social engagement in child-robot interaction. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction. ACM, New York, NY, USA, 1–10.
[54] Ewa Siedlecka and Thomas F Denson. 2019. Experimental methods for inducing basic emotions: A qualitative review. Emotion Review 11, 1 (2019), 87–97.
[55] Elly Singer, Merel Nederend, Lotte Penninx, Mehrnaz Tajik, and Jan Boom. 2014. The teacher's role in supporting young children's level of play engagement. Early Child Development and Care 184, 8 (2014), 1233–1249.
[56] Bram Vanderborght, Ramona Simut, Jelle Saldien, Cristina Pop, Alina S Rusu, Sebastian Pintea, Dirk Lefeber, and Daniel O David. 2012. Using the social robot Probo as a social story telling agent for children with ASD. Interaction Studies 13, 3 (2012), 348–372.
[57] Rainer Westermann, Kordelia Spies, Günter Stahl, and Friedrich W Hesse. 1996. Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of Social Psychology 26, 4 (1996), 557–580.
[58] Peter Wittenburg, Hennie Brugman, Albert Russel, Alex Klassmann, and Han Sloetjes. 2006. ELAN: A professional framework for multimodality research. In 5th International Conference on Language Resources and Evaluation (LREC 2006). European Language Resources Association, Marseille, France, 1556–1559.
[59] Xuan Zhang, Hui W Yu, and Lisa F Barrett. 2014. How does this make you feel? A comparison of four affect induction procedures. Frontiers in Psychology 5 (2014), 689.