Reward Seeking or Loss Aversion?
Impact of Regulatory Focus Theory on Emotional Induction in Children and
Their Behavior Towards a Social Robot
Maha Elgarf
mahaeg@kth.se
KTH Royal Institute of Technology
Stockholm, Sweden
Natalia Calvo-Barajas
natalia.calvo@it.uu.se
Uppsala University
Uppsala, Sweden
Ana Paiva
ana.paiva@inesc-id.pt
Instituto Superior Técnico (IST),
Universidade de Lisboa and INESC-ID
Lisbon, Portugal
Ginevra Castellano
ginevra.castellano@it.uu.se
Uppsala University
Uppsala, Sweden
Christopher Peters
chpeters@kth.se
KTH Royal Institute of Technology
Stockholm, Sweden
Figure 1: Sample images of the interaction between the children and the robot. Consent was received from the parents for
publishing children’s images.
ABSTRACT
According to psychology research, emotional induction has positive
implications in many domains such as therapy and education. Our
aim in this paper was to manipulate the Regulatory Focus Theory
to assess its impact on the induction of regulatory focus related
emotions in children in a pretend play scenario with a social robot.
The Regulatory Focus Theory suggests that people follow one
of two paradigms while attempting to achieve a goal: by seeking
gains (promotion focus - associated with feelings of happiness) or
by avoiding losses (prevention focus - associated with feelings of
fear). We conducted a study with 69 school children in two different
conditions (promotion vs. prevention). We succeeded in inducing
happiness in the promotion condition and found a resulting
positive effect of the induction on children's social engagement
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
CHI ’21, May 8–13, 2021, Yokohama, Japan
©2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8096-6/21/05. . . $15.00
https://doi.org/10.1145/3411764.3445486
with the robot. We also discuss the important implications of these
results in both the educational and child robot interaction fields.
CCS CONCEPTS
• Human-centered computing → Interaction design; Interaction paradigms; • Computer systems organization → Robotics.
KEYWORDS
social robotics, human robot interaction, emotional induction, regulatory focus, social engagement
ACM Reference Format:
Maha Elgarf, Natalia Calvo-Barajas, Ana Paiva, Ginevra Castellano, and Christopher
Peters. 2021. Reward Seeking or Loss Aversion?: Impact of Regulatory
Focus Theory on Emotional Induction in Children and Their Behavior Towards
a Social Robot. In CHI Conference on Human Factors in Computing
Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY,
USA, 11 pages. https://doi.org/10.1145/3411764.3445486
1 INTRODUCTION
Research about child robot interaction (cHRI) has received great
attention. One of its main applications is children's educational
and social development. For social development, robots have been
used to teach children empathy [37] and social skills [56], [48]. In
this work, we address cHRI for educational and social development
from the perspective of emotional induction. The term emotional
induction refers to the process of eliciting specific emotions within
a human user using specific stimuli. Positive affect is known to
have strong implications for children's social and cognitive skills
[6], [28], [27], [50], [44]. This suggests that inducing positive emotions
has great benefits for children's educational development.
Previous research [44] has evaluated this idea by investigating the
effects of inducing positive emotions on the visual processing abilities
of children (5-8 years). The results indicate that children tend
towards a global rather than a local visual perception after
being presented with a positive emotional stimulus. These results
emphasize the importance of positive emotions in broadening
children's perspective and widening their scope of attention. Our
work builds on previous work by evaluating emotional induction
through the design of a child robot interaction centered around the
regulatory focus concept. The Regulatory Focus Theory (RFT) suggests
that people follow one of two paradigms in order to achieve
their goals: the promotion focus, where they are motivated by reward
seeking, which is associated with feelings of excitement to
receive the reward and happiness at reward receipt; and the
prevention focus, where people are motivated by loss aversion, associated
with feelings of fear of loss and relief at loss avoidance [7], [6].
For example, a child may be motivated to eat his/her food because
he/she wants to have the promised half an hour playing with the
PlayStation, or to avoid the punishment of not watching his/her
favourite cartoon on that day.
Previous research concerning RFT with robots is scarce and
has focused solely on matching the user's and robot's regulatory
focus personality type. The robot either displays a promotion focused
personality that is more motivated by achieving gains or a
prevention focused personality motivated by fear of failure. For
example, a promotion focused person will study for an exam with
the aim of achieving top results, while a prevention focused person
will study just enough to avoid failing the exam. RFT has not yet
been investigated in the field of cHRI. RFT may however have strong
implications for children's educational performance, as suggested
by [13], where participants in the promotion condition exhibited
more resilience and better performance at a difficult sorting task.
[19] also suggests a positive effect of promotion focused tasks on
the divergent thinking aspect of creativity. These positive effects of
RFT in the promotion condition are attributed to the induction of
the corresponding positive emotions.
We applied RFT for emotional induction through a pretend play
interaction with a social robot. We designed the interaction in two
conditions (promotion vs. prevention). In each condition, the pretend
play scenario was adapted accordingly and the robot
displayed the corresponding emotions (happiness in the promotion
condition and fear in the prevention condition). We assessed
whether emotional induction occurred and then evaluated the
effects of inducing positive emotions on the children's social engagement
with the robot, measured through the social behaviors
exhibited by the child towards the robot.
We introduced a social robot in our design to make the interaction
more engaging and because of the capability of social robots
to express emotions. We designed our interaction as a pretend play
scenario since pretend play is one of the play styles most preferred
by children [1], [29], [55].
The contributions of our work are summarized in the following
points:
• Previous psychology research has discussed the connection
between RFT and different emotions [6]. However, the use
of RFT for emotional induction has not been investigated in
HRI before.
• This research is the first work in HRI to investigate RFT
in terms of designing the whole interaction rather than only
the robot's personality. The RFT was applied to the scenario
design (promotion vs. prevention), the robot's personality
and the corresponding emotions displayed by the robot in
each condition. Our work is also the first work to investigate
RFT in cHRI.
• Previous research has suggested that smiles are a sign of engagement
[11], which demonstrates a relationship between
happiness and engagement. We extended this work by assessing
social behaviors exhibited by the children towards
the robot as a result of induced happiness.
2 BACKGROUND
2.1 Regulatory focus
In 1997, Higgins proposed the Regulatory Focus Theory (RFT) [22],
which distinguishes between two motivational approaches used by
humans to perform a task. Promotion focus is characterised
by the motivation to accomplish goals through achieving a certain
gain, whereas prevention focus is characterised by the motivation
to accomplish goals through the avoidance of failure. For example,
a task that is promotion focused may motivate the user by offering
a possible reward at task completion. Consequently, promotion
focused tasks are associated with feelings of excitement throughout
the task that converge to feelings of happiness at successful
task completion. A prevention focused task, however, motivates
the user by encouraging them to avoid a specific loss. Therefore,
prevention focused tasks are associated with feelings of fear and
stress throughout the task that converge to relief at successful task
completion [7], [6]. An example that illustrates RFT is given
in [19], where users completed a maze task in the two different
regulatory focus conditions. Participants received a paper with a
cartoon drawing in which they were trying to save a mouse trapped
in a maze. In the promotion condition, the mouse gets a piece of
cheese (reward) as soon as it escapes the maze. In the
prevention condition, by contrast, the mouse is trying to avoid an owl (threat)
hovering above the maze; the owl will cease to chase the mouse as
soon as it escapes the maze. In another study [13], RFT was manipulated
to show that, when faced with a difficult task, participants
who were presented with a promotion focused version of the task
performed better because of positive feelings of happiness, while
participants presented with a prevention focused version of the
task gave up sooner because of feelings of fear and stress.
Recent research in human robot interaction has used RFT and
measured the effects of matching the regulatory focus type (also
known as regulatory fit) of the robot to the user on performance on a
test [14], on the robot's perceived persuasiveness [15] and on the duration
of the interaction with the robot [3]. Users who interacted with
a robot of matching regulatory focus type performed better on the
test and perceived the robot as more persuasive. Similarly, matching
regulatory focus types resulted in deliberately longer interactions
with the robot. The regulatory fit concept has also been used with
virtual agents [18], where the authors found that it had a significant
positive effect on likability measures only for users assigned to the
prevention condition.
The contribution of our work is that we assess the impact of
the regulatory focus design of an interaction with a social robot on
inducing regulatory focus related emotions rather than on matching
regulatory focus type.
2.2 Emotional induction
Researchers have identified several methods for inducing
emotions [54], [36], [57]. The authors in [54] reviewed the
five most effective ways of inducing emotions: visual stimuli,
imagery, situational procedures, music and autobiographical recall.
Visual stimuli are the most common method, used by showing the
subjects images or movie clips that elicit specific emotions, as in
[33], where the authors used cheerful and sad short clips to induce
happiness and sadness respectively. Imagery consists of asking
the users to imagine themselves in a specific situation where
they would experience specific emotions. Imagery is used in [38],
where the experimenter asked the participants to imagine that it
is their birthday and that they are being thrown a surprise party
by their loved ones. A situational procedure is a form of real
interaction staged around the user to elicit the desired emotions.
In [33], the researchers used a real interaction to elicit fear
by creating a real test environment, and another real interaction to
induce anger by introducing a rude person to interrupt a teacher
during an ongoing class. Music has also been frequently used for
inducing emotions. For instance, in [46], the researchers successfully
used different rhythms of music to induce both happiness and
sadness. Finally, autobiographical recall induces emotions
by asking participants to remember and retell a story in which they
strongly felt a specific emotion. For example, in [45], users were
asked to describe a situation where they felt scared in order to induce
feelings of fear.
The meta-analysis in [54] examined each of the five methods
for inducing the six basic emotions: happiness, sadness, fear, surprise,
anger and disgust. According to the authors, the five different
methods may yield different induction levels for the different emotions.
For this work, we will only consider the induction of the two basic
emotions related to the concept of regulatory focus that we are
investigating: happiness and fear. As explained in the review, all
five methods are effective for happiness induction except
situational procedures, with visual stimuli being the most effective
followed by imagery. Additionally, research suggests that combining
several methods yields better induction results than
using a single procedure [59]. For fear, all five paradigms
are effective for induction, with situational procedures being the
most successful, followed by imagery and then visual stimuli.
In the eld of cHRI, emotional induction for educational purposes
has not been investigated before. However, in terms of emotion
research, studies have been conducted to investigate empathy [
30
],
[
35
], behavior synchronisation [
5
], [
32
], [
23
], [
9
] and mimicry be-
tween the user and the agent (whether a robot or a virtual character)
[
24
], [
47
], [
25
], [
43
]. The closest to emotional induction is emotional
mimicry also called emotional contagion that the agent elicits in the
user in an interaction. The dierence between emotional mimicry
and emotional induction in that case is that mimicry is detected in a
specic time window that follows a specic behavior expressed by
the agent. For example, in [
25
], users spontaneously matched facial
expressions of an android robot during an interaction within a six
seconds interval. In this research nevertheless, emotional induc-
tion is measured using objective measures for emotional detection
throughout the whole interaction.
2.3 Pretend play
Playing is an essential building block of children's development.
Pretend play, also known as role play, is one of the most commonly
adopted play styles among children. Research has extensively elaborated on
the benefits of pretend play for children, which include creativity
[40] and cognitive development through the practice of problem
solving skills in a simulated environment [51]. It also helps in
their linguistic development [4] through the use of narration
and communication skills during pretend play. Pretend play
also facilitates the process of perspective taking [10] and therefore
may result in more empathetic behavior and a general improvement
of social skills.
Studies in cHRI have used pretend play as a means to develop
children's skills and to assess other relevant measures during the interaction.
In [52], a NAO robot and a sensorized mini kitchen were
used to provide a safe and entertaining pretend play environment.
Although the main purpose of the study was to measure the effects of
gender segregation of the NAO robot on children's behavior in a
pretend play scenario, results also showed that the pretend play
environment may further be utilised for children's social and cognitive
development. In [1], the authors compared the
different types of play with and without a robot. They found that
children chose to engage in pretend play more frequently when the
robot was not present. The authors attributed this to the children
not knowing how to include the robot in their playing scenario.
We designed our interaction scenario as a pretend play interaction
with a social robot to make it more engaging for the children
and to be able to use the robot for emotional induction through the
emotional expressions that the robot displays.
3 METHODOLOGY
The purpose of this research is to investigate the possibility of
inducing emotions through a pretend play interaction between a
child and a robot. As discussed in the introduction, successful
emotional induction is likely to have strong implications
for children's educational and social development. To accomplish
this, we used RFT as the basis for the desired emotional induction.
Promotion focused tasks are associated with feelings of excitement
and happiness, whereas prevention focused tasks are associated
with feelings of fear and relief. Consequently, we designed two
versions of an interaction, one promotion focused and the
other prevention focused, to elicit the corresponding emotions (happiness
and fear). We used two of the five approaches known
(a) Priming interface: promotion condition
(b) Storytelling interface: beach scenario (post-test)
Figure 2: The software implementation consisted of two
parts: a priming interface and a storytelling interface.
The priming interface had two versions for the two different
scenarios (promotion vs. prevention). The storytelling interface
also had two versions, for the pre- and post-tests respectively.
for emotional induction because they were shown to be among the
most effective for both happiness and fear induction:
• Imagery: by prompting the child to imagine himself/herself
in a certain exciting/happy situation (promotion condition)
versus a fearful situation (prevention condition) with the
robot.
• Visual stimuli: the robot consistently displayed corresponding
facial expressions (happiness in the promotion condition
and fear in the prevention condition). The robot's facial behavior
was also backed up by the robot's verbal behavior,
which conveyed the same feelings.
Several studies have investigated whether emotional conservation
occurs within a specific time window after an emotional induction
trial by either music [46] or visual stimuli [21], [34]. We also wanted
to examine whether some form of emotional conservation would occur in
our setting. Therefore, we introduced two further tasks in
the study: a storytelling pre-test and post-test. In these tasks, the child
is requested to tell the robot a story. The flow of the interaction was
as follows: a storytelling pre-test, a priming interaction (promotion
vs. prevention) and then a storytelling post-test. By introducing the
pre- and post-tests, we aimed to compare the emotional expressions
of the child between the pre- and post-tests, as well as the child's
verbal behavior, to assess whether we would observe some form of
emotional conservation.
4 SYSTEM DESIGN AND IMPLEMENTATION
4.1 Priming scenario
In order to implement the priming scenario, we developed a story
line where the child and the robot collaboratively solve a
task in one of two motivational conditions: promotion or prevention.
Children were requested to imagine themselves locked in a
spaceship with the robot. Together with the robot, the child tried
to find the key to escape the spaceship to planet Mars. In the
promotion condition, the experimenter told the child that they would
receive a gift as soon as they got out. In the prevention condition,
the experimenter warned the child that, together with the robot,
they needed to find the key quickly before the spaceship exploded.
The implementation of the priming scenario is divided into two
parts: the interface that the child and the robot used to find the key,
and the robot's behavior.
4.1.1 Interface. The interface was implemented using the Unity
Game Engine1. It consisted of three different scenes representing
three different rooms in the spaceship. An example of one of the
rooms is illustrated in Figure 2(a). Each room contained three
colored buttons (red, pink and blue). Two of the three buttons
displayed the message "Oops, the key is not here" as soon as the
child clicked on them, while the third contained a clue
about where to move next in order to find the key. All arrows in a
given scene led to the same next room, and the key was always found
in the third and last room, which kept the duration of the priming
interaction almost constant across participants. The interface
contained two priming features, depending on the condition. In
the promotion condition, the gift that the child and the robot were
promised was shown in the top left corner and shook for a
couple of seconds every time the child clicked on any of the buttons.
The gift received at the end of the promotion focused interaction
was a party with the aliens, where the robot danced and invited the
child to dance with him. In the prevention condition, the screen
briefly shook every time the child clicked a button, to warn the child
and the robot that the spaceship would explode soon. Children did not
proceed to the post-test unless they completed the priming
successfully.
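The room-and-button logic described above is simple enough to sketch in code. The following is a minimal, hypothetical reconstruction in Python; the actual interface was built with the Unity Game Engine, so all names here (Room, PrimingGame, and so on) are illustrative and not the authors' code:

```python
import random

class Room:
    """One spaceship room with three colored buttons."""
    def __init__(self, index, has_key):
        self.index = index
        self.has_key = has_key
        # One button hides the clue (or the key in the last room);
        # the other two display "key is not here".
        self.clue_button = random.choice(["red", "pink", "blue"])

    def press(self, color):
        if color != self.clue_button:
            return "Oops, the key is not here"
        return "key" if self.has_key else "clue: move to the next room"

class PrimingGame:
    def __init__(self, condition):
        assert condition in ("promotion", "prevention")
        self.condition = condition
        # The key is always in the third room, keeping the interaction
        # duration roughly constant across participants.
        self.rooms = [Room(i, has_key=(i == 2)) for i in range(3)]

    def on_click(self):
        # Condition-specific priming feedback on every button press.
        if self.condition == "promotion":
            return "gift icon shakes"   # reward cue
        return "screen shakes"          # threat cue
```

The two conditions differ only in the per-click feedback and the final reward, which is what keeps the task itself identical across groups.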
Position | Promotion condition | Prevention condition
Start of interaction | "I am so excited to do this!" | "I am so scared of the explosion!"
 | "I want to see what is inside our gift!" | "Let's try to do this quickly!"
Middle of interaction | "Oh the gift is moving!" | "Oh oh! The spaceship is shaking!"
 | "I cannot wait to open the gift!" | "Again with the shaking! Let's hurry up!"
 | "We are almost there! We are going to do it!" | "This is getting scary!"
End of interaction | "Wohoo! We are finally on planet Mars!" | "We are finally on planet Mars."
 | "I am so happy!" | "I feel so much better now!"
Table 1: Samples of the robot's verbal behavior during the priming scenario
4.1.2 The behavior of the robot. We used EMYS2 as the robot in our
study since it is a metallic robotic head capable of head movements
and of portraying the six basic emotions through facial expressions.
The emotions displayed by EMYS were validated in a study with
school aged children (8-12 years) and were shown to convey the
intended emotions [31]. The robot's behavior was tele-operated and
1https://unity.com/
2EMYS robot. Available at https://emys.co/
was designed to exhibit the two regulatory focus related emotions.
The robot's behavior therefore exhibited excitement in the promotion
condition and fear in the prevention condition. The robot's
emotions were conveyed through two channels: verbal behavior
and facial expressions. We used the adult male voice provided by
Ivona3. We chose the male voice based on previous
research [41] suggesting that a synthetic male voice is perceived more
favorably than a synthetic female voice. Furthermore, previous studies
conducted with the EMYS robot and children [31] used a
male voice, and we adopted the same methodology to be able
to compare our results with theirs.
In the promotion condition, we used the embedded EMYS
joy expression (the closest available to excitement), as displayed
in Figure 3(a), together with verbal behavior conveying excitement.
Correspondingly, in the prevention condition, we used the embedded
EMYS fear expression, as shown in Figure 3(c), together with verbal
behavior conveying fear. At the end of the interaction, the robot uttered a
verbal expression of happiness in the promotion condition and one
of relief in the prevention condition. Examples of the robot's verbal
behavior in the priming scenario are displayed in Table 1.
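The condition-dependent behavior in Table 1 and Figure 3 amounts to a simple mapping from condition to a facial expression and a set of scripted utterances. A minimal sketch in Python, using hypothetical names (the robot was tele-operated by a human wizard, so this is illustrative, not the authors' control code):

```python
# Condition -> expression + sample utterances, following Table 1.
CONDITION_BEHAVIOR = {
    "promotion": {
        "expression": "joy",   # closest embedded EMYS expression to excitement
        "start": "I am so excited to do this!",
        "end": "I am so happy!",              # happiness at reward receipt
    },
    "prevention": {
        "expression": "fear",
        "start": "I am so scared of the explosion!",
        "end": "I feel so much better now!",  # relief at loss avoidance
    },
}

def robot_turn(condition, phase):
    """Return the facial expression and utterance for a scripted turn."""
    behavior = CONDITION_BEHAVIOR[condition]
    return behavior["expression"], behavior[phase]
```

Note that only the end-of-interaction utterance breaks the within-condition emotion: the promotion script stays positive throughout, while the prevention script switches from fear to relief at goal attainment, which matters for the conservation analysis later.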
(a) Joy (b) Neutral
(c) Fear
Figure 3: EMYS facial expressions [31]
4.2 Storytelling scenario
In order to implement the storytelling scenario, we developed two
different versions, one for the pre-test and one for the post-test. The
implementation of each version of the scenario is divided into two parts: the
interface that the child used to tell the story to the robot, and the
robot's behavior.
4.2.1 Interface. The storytelling interface was implemented using
the Unity Game Engine. In each version (pre-test and post-test), a set of
four characters and nine objects was available for the child to use in
the story. The software allowed moving the characters and objects
3https://harposoftware.com/en/12-all-voices
Category | Robot's speech
Question | "What's your name?"
 | "And then what happens?"
 | "Why?"
 | "Did you have fun?"
Feedback on the story | "Ooooh!"
 | "That's too funny!"
 | "That's scary!"
 | "That's a good idea."
Greeting | "Hello! I am a social robot. My name is EMYS."
 | "We have finished our game. Bye!"
Table 2: Samples of the robot's verbal behavior in the storytelling scenario
around the scene. The children were invited to use the software to
elaborate and tell whatever story they wanted. In the pre-test, the child
was prompted to choose between two different scenarios for their
story: castle and park. In the post-test, the child chose
between beach, farm and rain forest. The scenarios, characters and
objects varied between the pre- and post-tests to enable the child
to tell independent, non-repetitive stories. Children had the
possibility to navigate between the different scenes of a scenario
or the different scenarios within the same part of the session (pre-test
or post-test). The pre- and post-tests were freely timed: the child
chose when to stop them. A sample image of the software is shown
in Figure 2(b).
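The storytelling configuration above can be summarized in a small data structure. A hypothetical sketch (the scene names come from the paper; the data layout and function name are assumptions, the real interface being a Unity application):

```python
# Scenes offered to the child in each test phase; pre- and post-test
# content is disjoint so the two stories stay independent.
STORYTELLING_SCENES = {
    "pre-test": ["castle", "park"],
    "post-test": ["beach", "farm", "rain forest"],
}

# Each version offered four characters and nine movable objects.
N_CHARACTERS = 4
N_OBJECTS = 9

def available_scenes(phase):
    """Scenes the child can choose from and navigate between in a phase."""
    return STORYTELLING_SCENES[phase]
```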
4.2.2 The behavior of the robot. The robot's behavior in the pre-
and post-tests was also tele-operated. In the storytelling scenario,
the robot's behavior was exhibited only through the verbal channel.
The experimenter used the robot's verbal behavior to encourage the
child to tell the story, by asking questions or by providing feedback
on the story. The robot also used friendly verbal behavior to start
the interaction in the pre-test and to end the interaction in the post-test.
Examples of the robot's verbal behavior in the storytelling
scenario are displayed in Table 2.
A summary of the system's design is displayed in Figure 4.
Figure 4: Summary of the system design
5 EXPERIMENTAL EVALUATION
5.1 Hypotheses
Hypotheses for this study are divided into hypotheses concerning
the priming scenario (between subjects) and hypotheses related to
the comparison between the pre- and post-tests (within subjects). With
respect to the priming scenario, we evaluated emotional induction.
We were also interested in social engagement measures. According
to prior research, affective states and engagement measures are interrelated,
with a positive correlation in the case of positive valence
emotions such as happiness [16], [49]. We wanted to examine whether we
would observe a similar effect on the relationship between induced
emotions and social engagement with a robot in our study.
For the priming scenario:
• Hypothesis 1 (H1): as a result of using a regulatory focus
design of the pretend play interaction between the robot and
the child, regulatory focus related emotions will be induced
in the children.
– H1.a: an affective state of happiness will be induced in the
promotion condition. Children will exhibit more happiness
related metrics in the promotion than in the prevention
condition.
– H1.b: an affective state of fear will be induced in the
prevention condition. Children will exhibit more fear related
metrics in the prevention than in the promotion condition.
• Hypothesis 2 (H2): as a result of inducing positive emotions
in the promotion condition, children will express more
social engagement with the robot in the promotion condition
than in the prevention condition.
Hypotheses related to the comparison between the pre- and post-tests
measure emotional conservation. In case of successful emotional
induction during the priming scenario, we hypothesize that
induced emotions will be conserved through the final part of the
interaction (post-test), as demonstrated in previous research [46],
[21], [34]. Emotions in the promotion condition are relatively constant,
with positive valence throughout the priming interaction
and at the end of it with goal attainment. However, emotions in
the prevention condition converge from fear throughout the priming
interaction to relief at goal attainment. Therefore, it was not
possible to assess emotional conservation of fear in the
prevention condition, since fear should have changed to relief at
the end of the priming scenario. Measuring the conservation of relief was
not reasonable either, because the children were exposed to
the relief emotional state for only a few seconds at the
end of the priming scenario.
We decided to measure emotional conservation by comparing levels
of affective expression of the children between the pre- and post-test
conditions, rather than comparing the affective expressions
between the priming scenario and the post-test, for consistency.
It is fairer to compare emotional levels between
two tests of the same nature (telling a story) in order to eliminate biases
from other influencing factors. For instance, the robot's behavior is
more emotionally expressive in the priming scenario than in the
pre- and post-tests, as illustrated in Table 1 and Figure 3, whereas
the robot maintained an emotionally neutral behavior throughout
the pre- and post-test parts of the interaction.
For the pre- and post-tests:
• Hypothesis 3 (H3): the state of happiness will be conserved
throughout the interaction in the promotion condition. Children
will exhibit more happiness related metrics in the post-test
in comparison with the pre-test.
• Hypothesis 4 (H4): in the promotion condition, children
will express more social engagement with the robot in the
post-test than in the pre-test, as a result of H3.
5.2 Participants
69 child participants in the second and third grade were recruited
from two British international schools in Lisbon, Portugal, to enable
conducting the study in English. 6 participants were excluded
for either not completing the activity or for speaking to the robot
in their native language. Therefore, 63 participants (32 male and 31 female)
were included in the final analysis of the data. Their ages ranged from
7 to 9 years old (M = 7.59, SD = 0.59). The study followed a between
subjects design with the condition as the independent variable. After
the exclusions, 34 participants were assigned to the promotion condition
and 29 to the prevention condition.
5.3 Materials
During the interaction, the child was seated facing the robot, which
was mounted on the other side of the table. The interface was
deployed on a touch screen placed on the table between the child
and the robot. A microphone was placed in front of the child to
record the audio data. We used two cameras to record the video data:
one captured the frontal view with emphasis on the
child's face, and the other captured the lateral view with emphasis
on the child's input to the touch screen.
5.4 Procedures
The study design and procedures were approved by the local institution's ethics committee. We sent consent forms to the children's parents one week prior to conducting the study. The forms requested the parent's authorization for the child's participation, the recording of video and audio data, and the public sharing of the data. The duration of the whole interaction ranged from 6 to 30 minutes per child.
The interaction took place at the children's schools. Two experimenters were present in the room during the interaction: one guided the child through the activity and the other tele-operated the wizarded robot. When a child asked an unscripted question, the second experimenter used the general-purpose answers available in the wizard interface (e.g., "yes", "no", "I don't know"). The experimenter was also sometimes able to generate real-time answers by typing them quickly, when the answers were short enough to avoid awkward delays.
The interaction started with the first experimenter welcoming the child and introducing him/her to the first activity, explaining that he/she was supposed to use the interface on the touch screen to tell a story to the robot. The experimenter also explained that the child could speak to the robot and ask him questions, and asked the child to notify her as soon as he/she was done with the first part of the activity. After the pre-test, the experimenter explained the next part of the activity, the priming scenario.
Reward Seeking or Loss Aversion? CHI '21, May 8–13, 2021, Yokohama, Japan
She emphasized the importance of finding the key to escape the spaceship and receive a gift in the promotion condition, and of avoiding the explosion of the spaceship in the prevention condition. She also asked the child to pay attention to the robot's instructions because he knew the location of the key. After the priming, the experimenter explained that the child would tell another story to the robot using different scenarios and different characters, and again requested that the child notify her as soon as he/she finished. After the post-test, the experimenter invited the child to respond to a short questionnaire about demographic data. Finally, the experimenter thanked the child for his/her participation.
5.5 Measures
To assess our hypotheses, we evaluated two measures: affective expressions and social engagement.
5.5.1 Induced aective expressions. To measure induced aective
expressions, we extracted facial expressions from the collected
frontal video data. The aective expressions we were interested
in analysing have distinctive facial behavior. We also wanted to
analyse this data in a manner that enables the analysis from previ-
ously recorded videos and without distracting children with extra
wearables during the interaction.
According to the review in [20], Affectiva^4 is one of the most commonly used software tools for facial expression analysis; it is accurate, fast in terms of data extraction, and easy to integrate into a project. We used the Affectiva Javascript SDK and analysed the videos stored locally. Similarly to the Affdex software by the same company [39], the Affectiva Javascript SDK uses deep learning technology for facial expression analysis. It detects 7 emotions (anger, contempt, disgust, fear, joy, sadness and surprise) and 15 expressions (including brow raise, brow furrow, cheek raise, smile and smirk). The software also calculates scores for valence and engagement as descriptive measures of the emotional experience. The technology used by Affectiva for the extraction is based on Paul Ekman's Facial Action Coding System (FACS) [17]. We provided a time interval of 500 milliseconds to the software. For each time frame of the video, the software attempts to detect a face; if a face is detected, the application generates facial expression values for it. At the end of the process, a file is generated with time entries and the corresponding extracted facial expression data. Extracted values range from 0 to 100 (from no expression detected to fully present). We only included the following measures in our analysis: joy, smile and fear.
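To make the aggregation step concrete, the following is a minimal Python sketch, not the authors' actual pipeline: the frame structure, key names and the use of None for undetected faces are assumptions for illustration only.

```python
# Hypothetical sketch: averaging per-frame expression scores (each in
# [0, 100]) sampled every 500 ms. Frames in which no face was detected
# carry None and are skipped, so they simply drop out of the average.

def average_expressions(frames, keys=("joy", "smile", "fear")):
    """frames: list of dicts, e.g. {"time_ms": 0, "joy": 12.0, ...};
    a value of None means no face was detected in that frame."""
    averages = {}
    for key in keys:
        values = [f[key] for f in frames if f.get(key) is not None]
        averages[key] = sum(values) / len(values) if values else 0.0
    return averages

frames = [
    {"time_ms": 0, "joy": 20.0, "smile": 30.0, "fear": 0.0},
    {"time_ms": 500, "joy": None, "smile": None, "fear": None},  # no face
    {"time_ms": 1000, "joy": 40.0, "smile": 50.0, "fear": 2.0},
]
print(average_expressions(frames))  # {'joy': 30.0, 'smile': 40.0, 'fear': 1.0}
```

Averaging only over detected frames mirrors how missing-frame data was handled by exclusion in the analysis above.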
We excluded further participants' data from this analysis because of missing frontal video and/or missing frame data, resulting from the software not recognising children's faces or from lack of parental video consent. In total, we analysed 52 videos for affective expressions: 29 in the promotion condition and 23 in the prevention condition.
5.5.2 Social engagement measures. We used the recorded videos to analyze the behavior of the children in order to evaluate social engagement. We used the frontal videos for analysis, and the lateral videos whenever the frontal video was missing. We excluded some participants' data from this analysis because of missing video data or lack of parental video consent. In total, we analysed 54 videos for social engagement: 30 in the promotion condition and 24 in the prevention condition.
^4 https://www.affectiva.com/
We analysed the videos for social engagement by coding them using ELAN^5 [58], developed by the Max Planck Institute for Psycholinguistics. ELAN is a software tool for the annotation and transcription of audio and video data for behavioral analysis purposes.
We developed our coding scheme based on the procedures demonstrated in [42]. We adopted a selective coding approach: rather than coding the full data, we coded only the specific, agreed-upon behaviors related to our research questions. The coding scheme contained 19 different behaviors spanning the robot's verbal behavior, the children's verbal behavior and the children's nonverbal behavior. The ELAN software allowed for coding the time, the duration and the category of each recognised behavior. As standard practice suggests [12], a primary coder coded all the data, while a second coder double-coded 25% randomly selected samples of the videos to enable assessment of agreement. We calculated the agreement rates using EasyDIAg [26], a toolbox developed for the calculation of inter-rater agreement measures for ELAN-coded data. EasyDIAg generates agreement by matching time sequences with corresponding categories: a time match is detected if the overlap between two time sequences exceeds a certain threshold, and we used the system's default of 60% overlap. For inter-rater agreement, EasyDIAg generates raw agreement values as well as Cohen's kappa and maximum kappa indices. We obtained high agreement ratings with Cohen's kappa, the most significant statistical measure for the evaluation of agreement in observational research [8], ranging between 0.82 and 0.93 (M = 0.87).
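As an illustration of the overlap criterion, the hedged sketch below checks whether two coders' time sequences match at the 60% threshold. EasyDIAg's exact overlap definition is not reproduced here; intersection-over-union is an assumption made purely for illustration.

```python
# Hedged sketch of an overlap-based time match in the spirit of the 60%
# criterion described above. Overlap is assumed (for illustration) to be
# the intersection of the two intervals divided by their union; the
# toolbox's actual definition may differ.

def time_match(a_start, a_end, b_start, b_end, threshold=0.60):
    intersection = max(0.0, min(a_end, b_end) - max(a_start, b_start))
    union = max(a_end, b_end) - min(a_start, b_start)
    return union > 0 and intersection / union >= threshold

# Two annotations of (roughly) the same behavior, in seconds:
print(time_match(10.0, 14.0, 10.5, 14.5))  # True: 3.5 / 4.5 ~ 0.78
print(time_match(10.0, 14.0, 13.5, 18.0))  # False: 0.5 / 8.0 ~ 0.06
```

Once matched pairs are established this way, a confusion matrix over the coders' category labels yields the raw agreement and kappa values.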
While coding the data, we observed a lack of nonverbal behavior from the children. None of the children tried to touch the robot, and although some exhibited head nods, they were scarce. Facial expression data was already analysed using Affectiva. Consequently, we limited our behavioral analysis to the verbal behavior of the children, following the approach applied in previous research [53], where verbal response was identified as a vital part of social engagement. We considered verbal behavior of the child to be a sign of social engagement with the robot if it belonged to one of the following four categories:
Question: children asked the robot storytelling-related questions in the pre- and post-test, questions about what to do in the priming scenario, and general questions about the robot.
Response: this category included responses to the robot's queries or comments. Some children responded to the robot and others did not; therefore, this category was calculated as the ratio between the robot's comments/questions and the child's responses.
Inform: children spontaneously shared information with the robot.
Greeting: children greeted the robot at the start and at the end of the interaction.
Samples of children's verbal behavior for each category are displayed in Table 3.
We also used the engagement index generated by the Affectiva SDK to evaluate social engagement. It defines engagement as a measure of facial expressiveness through muscle activation, calculated as a weighted sum of 10 expressions: brow raise, brow furrow, nose wrinkle, lip corner depressor, chin raise, lip pucker, lip press, mouth open, lip suck and smile.
^5 https://archive.mpi.nl/tla/elan
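The weighted-sum idea can be sketched as follows. Affectiva's actual weights are not published here, so uniform weights are a stated assumption for illustration only.

```python
# Sketch of an engagement index as a weighted sum of expression scores
# (each in [0, 100]). Uniform weights are an illustrative assumption;
# the SDK's real weights are not reproduced here.

EXPRESSIONS = ["brow_raise", "brow_furrow", "nose_wrinkle",
               "lip_corner_depressor", "chin_raise", "lip_pucker",
               "lip_press", "mouth_open", "lip_suck", "smile"]

def engagement_index(scores, weights=None):
    weights = weights or {e: 1.0 / len(EXPRESSIONS) for e in EXPRESSIONS}
    return sum(weights[e] * scores.get(e, 0.0) for e in EXPRESSIONS)

scores = {"smile": 80.0, "brow_raise": 20.0}  # all other expressions absent
print(engagement_index(scores))  # ~10.0 with uniform weights
```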
Category | Pre- and post-tests | Priming scenario
Question | "What do you think should happen next in the story?"; "Do you like my story?"; "Do you like ice cream?" (a random question) | "Should I click this button?"; "Where?" (asking where to move next)
Response | "Yes." (in response to the robot's question "Did you have fun?"); "No, it's not!" (in response to the robot's comment "That's too funny!") | "Me too" (in response to the robot's comment "I am so excited!")
Inform | "I am not into fantasy, I am more of an IQ person." | "It told me that the red button has the key!"
Greeting | "Hello!" or "Hi!" at the beginning of the pre-test; "Bye!" at the end of the post-test | -
Table 3: Samples of children's verbal behavior
6 RESULTS
We ran the Shapiro-Wilk (S-W) test to check the normality of our variables' distributions. The null hypothesis was rejected (p < 0.05) for all our variables, and we thus deduced that they were not normally distributed. Based on this, we used the non-parametric Wilcoxon signed-rank test for our statistical analysis. The Affectiva-generated engagement indices were the only exception: the S-W result indicated that they were normally distributed (p > 0.05), so we analysed those engagement measures using a parametric one-way MANOVA. For all the tests, we used the condition as the independent variable (promotion vs. prevention for the priming scenario, and pre- vs. post-test for the storytelling scenario). As response variables, we measured the induced affective expressions and the social engagement measures that we were interested in evaluating.
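The normality-check-then-test pipeline described above can be sketched with SciPy. The data values below are fabricated for demonstration; the paired Wilcoxon signed-rank call corresponds to a within-subject pre- vs. post-test comparison.

```python
# Illustrative sketch (fabricated data): Shapiro-Wilk normality check on
# the paired differences, followed by the non-parametric Wilcoxon
# signed-rank test for a pre- vs. post-test comparison.
from scipy import stats

pre = [3.1, 4.8, 2.2, 5.9, 3.3, 4.1, 2.7, 5.0, 3.8, 4.4]
post = [4.0, 5.1, 2.9, 6.3, 3.1, 4.9, 3.5, 5.6, 4.2, 4.7]

# Shapiro-Wilk: p < 0.05 would reject the normality hypothesis
_, p_norm = stats.shapiro([b - a for a, b in zip(pre, post)])

# Non-parametric paired comparison
w_stat, p_value = stats.wilcoxon(pre, post)
print(f"Shapiro p = {p_norm:.3f}, Wilcoxon W = {w_stat}, p = {p_value:.3f}")
```

For the between-subjects promotion vs. prevention comparison, an unpaired rank test (e.g., `stats.mannwhitneyu`) would be the analogous SciPy call.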
6.1 Induced aective expressions
We evaluated two measures to inspect the emotional induction of
happiness in the promotion condition, the average index generated
by the Aectiva software for both joy and smile expressions. We
found a signicant eect of condition on both values. The children
signicantly exhibited higher averages of joy (W = 216, p = 0.03, M
= 7.52, SD = 12.04) (Figure 5(a)) and smile expressions (W = 199, p =
0.013, M = 9.45, SD = 12.92) (Figure 5(b)) in the promotion condition
than in the prevention condition. Hence, H1.a was accepted.
Similarly, to inspect the emotional induction of fear in the prevention condition, we analysed the average fear index generated by the Affectiva SDK. We did not find a significant effect of the condition on the fear index value (W = 316, p = 0.76, M = 0.04, SD = 1.76). Children did not express higher averages of fear in the prevention condition than in the promotion condition. Therefore, H1.b was rejected.
We assessed the emotional conservation of the affective state of happiness throughout the interaction by comparing the joy and smile averages between the pre- and post-tests for participants in the promotion condition. Neither comparison was significant (joy: W = 324, p = 0.14, M = 13.15, SD = 16.01; smile: W = 336, p = 0.193, M = 15.14, SD = 16.78). We concluded that emotional conservation did not occur for happiness in the promotion condition; thus, H3 was rejected. We also compared the joy and smile averages between the pre- and post-tests independent of the priming condition, but this analysis did not yield significant results either (joy: W = 1162, p = 0.22, M = 12.46, SD = 15.88; smile: W = 1211, p = 0.36, M = 14.34, SD = 16.7).
Figure 5: Analysis of facial expressions per condition. Children expressed significantly higher averages of joy and smile expressions in the promotion than in the prevention condition.
6.2 Social engagement measures
Following our prediction of successful induction of happiness during the priming scenario in the promotion condition (H1.a), we hypothesized that the children would exhibit more social engagement in the promotion condition than in the prevention condition throughout the priming interaction. To test this hypothesis, we analysed both the engagement measure generated by the Affectiva SDK and the social verbal behavior exhibited by the child towards the robot. The social verbal behavior measure is a frequency rather than an average; hence, we divided the number of social verbal behaviors detected by the duration of the corresponding interaction. A significant effect of the condition on both measures of engagement was found (engagement index from Affectiva: p = 0.038, M = 33.3, SD = 18.84; verbal social behavior: W = 236.5, p = 0.009, M = 0.003, SD = 0.007) (Figure 6(c) and Figure 6(a)). Children were more socially engaged in the promotion condition than in the prevention condition during the priming scenario. H2 was therefore accepted.
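The duration normalization of the verbal behavior counts can be sketched in one line; the function name and the assumption that durations are in seconds are illustrative only.

```python
# Sketch of the normalization described above: social verbal behavior is
# a count, converted to a rate by dividing the number of coded behaviors
# by the interaction duration (assumed here to be in seconds).

def verbal_behavior_rate(n_behaviors, duration_s):
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return n_behaviors / duration_s

# e.g., 4 coded behaviors in a 20-minute (1200 s) interaction:
print(verbal_behavior_rate(4, 1200))  # ~0.0033 behaviors per second
```

Normalizing by duration makes interactions of different lengths comparable, which matters here because interaction times varied widely across children.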
Although we did not find evidence of emotional conservation, we analysed the engagement measures for the promotion condition, comparing the pre- and post-tests. No significant effect was found (engagement index from Affectiva: p = 0.07, M = 39.74, SD = 16.5; verbal social behavior: W = 580, p = 0.056, M = 0.009, SD = 0.014). Consistently with the rejection of H3, participants in the promotion condition did not display more social engagement in the post-test than in the pre-test. However, when we assessed the engagement measures comparing the pre- and post-tests independent of the priming condition, results showed a significant effect on coded social verbal behavior (W = 1949, p = 0.002, M = 0.008, SD = 0.01) (Figure 6(b)). Children exhibited more social verbal behaviors towards the robot in the post-test than in the pre-test, whereas no significant effect was found on the Affectiva-generated engagement values (p = 0.08, M = 38.4, SD = 16.6) (Figure 6(c)). H4 was thus rejected. We also conducted an analysis to compare the durations of the pre- and post-tests. We expected a longer duration for the post-test, since the social verbal behavior results suggested more engagement in the post-test than in the pre-test. Results, however, showed the opposite effect: the duration of the pre-tests was significantly longer than that of the post-tests (W = 1041.5, p = 0.021, M = 5.33, SD = 3.37).
Figure 6: Analysis of the Affectiva engagement measure and social verbal behaviors in both the priming scenario (promotion vs. prevention) and the storytelling scenario (pre-test vs. post-test). The Affectiva engagement index was significantly higher in the promotion than in the prevention condition. Children exhibited significantly more social verbal behaviors in the promotion than in the prevention condition, and in the post-test than in the pre-test. Error bars represent the standard error of the mean.
6.3 Qualitative analysis
We also observed compelling behaviors from the children during the whole interaction. For example, some of them used the robot as a character in their stories (e.g., "you and I were at the beach"). Others cheered (promotion) or expressed a sigh of relief (prevention) at the end of the priming interaction, or occasionally expressed sighs of awe in response to the robot's behavior. Furthermore, at the end of the priming scenario in the promotion condition, some children danced with the robot when he invited them to dance by saying "Oh! A party with the aliens. Let's dance!". However, these behaviors were exhibited by only a few children and were therefore excluded from the statistical analysis.
7 DISCUSSION
We obtained interesting results from applying the RFT for emotional induction. As hypothesised, designing an RFT pretend play interaction with a robot successfully induced feelings of happiness in children in the promotion condition. Designing the same interaction in a prevention paradigm did not lead to the induction of feelings of fear. Several reasons may explain this discrepancy. First, for consistency, we chose to induce both happiness and fear using imagery and visual stimuli. However, as discussed in [54], while the best methods for inducing happiness are imagery and visual stimuli, for fear they are situational procedures, followed by imagery and visual stimuli. This suggests that a situational procedure may have been more successful for fear induction. Second, as explained in [2], fear is the most challenging emotion to recognise from facial expressions, while happiness is the easiest. This is also supported by the validation study conducted in [31], where children (8-12 years) recognised EMYS's joy and fear expressions with accuracies of 91.9% and 64%, respectively. Furthermore, the emotions induced in the prevention condition may fall under other aspects related to fear, such as panic or stress, that could have been better detected by assessing brain activity through electroencephalographic signals (EEG) or by measuring electrodermal activity through galvanic skin response (GSR). We decided against these measures in our setup in order to avoid burdening the children with extra sensors during the interaction. This point, however, paves the way for interesting research questions to investigate in the future.
Previous studies have indicated that inducing positive affect has a positive effect on creativity [6], problem solving skills [28], flexibility in cognitive organization [27], widening the scope of attention [50] and visual perception [44]. Therefore, RFT has great potential for designing educational scenarios in cHRI. In a future publication, we plan to investigate the effects of RFT on the creative process by comparing the stories told by children in the pre- and post-tests. We hypothesize that a promotion-focused task design will result in higher creative performance than a prevention-focused task design [19], the difference in creative performance being attributed to the induction of the specific corresponding emotions. Therefore, before assessing creativity measures from our data, we had to ensure successful emotional induction and then build on it with the assessment of creativity skills.
As hypothesised, the successful emotional induction of happiness led to signs of social engagement of the children with the robot. Therefore, designing child-robot interactions in a promotion-focused paradigm may introduce further benefits: it may be used to develop children's social skills by inducing specific social behaviors towards the robot.
No signs of emotional conservation occurred in the post-test. Evidence in the literature has discussed successful emotional conservation when emotions are induced through music [46] and visual stimuli [21], [34]. However, none has discussed emotional conservation when the visual stimuli are expressed by a robot, as in our study. Prior research has also not investigated emotional conservation when emotions are induced by imagery. As per [34], within a window of 8 minutes, induction follows a logarithmic function in which the emotional state starts to decay after the first few minutes. Hence, in our study, emotional conservation may have occurred for only a very short window and may have had an unnoticeable effect on the rest of the session. The post-test duration was freely timed and ranged from 1.58 to 15.43 minutes (M = 4.78). We did not want to restrict the children by introducing a shorter time limit for the post-test, and the duration of the post-test was itself a measure that we were interested in as an effect of engagement.
Another explanation may be that, in the pre- and post-tests, children were more immersed in the screen, moving between scenes and characters, than in the priming scenario. This resulted in fewer frames in which the child's face was detected by the Affectiva software, and thus some facial expression data may have been missed.
Results revealed that social verbal behaviors were significantly higher in the post-test than in the pre-test, independent of the priming condition. This suggests that children became more engaged and more acquainted with the robot's behavior over time. We also suggest that pretend play may have been an important factor: it may have made the children more immersed in the interaction and helped build social rapport with the robot.
Another interesting result of the study is that, despite children exhibiting more social behaviors in the post-test than in the pre-test, the duration of the pre-tests was significantly longer than that of the post-tests: children told the robot longer stories in the pre-test than in the post-test. This may be a result of a novelty effect at the beginning of the interaction that faded or decreased by having to repeat the same activity (telling a story to the robot) at the end of the interaction.
8 LIMITATIONS AND FUTURE WORK
As a result of the successful emotional induction, we are interested in analysing the recorded audio data for the pre- and post-tests. We aim to evaluate creativity measures in children's stories before and after the successful emotional induction, despite the lack of emotional conservation: emotions may have ceased to show in the facial muscles, yet they may still have had an impact on cognitive measures that influence creativity.
We would also like to further investigate the unexpected results concerning the relation between time and social engagement. In the future, we plan to introduce a control condition to assess whether social engagement was higher in the post-test because of the pretend play or because of the length of the interaction.
As discussed before, we did not attempt to detect fear induction by means of EEG or GSR, to avoid distracting the children with additional sensors. Nevertheless, in the future we might be able to confirm our results concerning fear by evaluating subjective measures in the same setting, or by using other sensors in another fear induction scenario.
9 CONCLUSION
We manipulated the RFT for the design of a pretend play interaction between a robot and a child. We were primarily interested in the effect of RFT on the induction of regulatory focus related emotions. We succeeded in inducing happiness in the promotion condition, whereas we failed to prove fear induction in the prevention condition. We also investigated the effect of emotional induction on social engagement. Consistently with the literature, the induction of positive emotions resulted in more social engagement in the promotion condition during the priming scenario.
We also examined whether emotional conservation occurred in the post-test after the successful induction in the priming scenario. Emotional conservation did not occur in our setting, and consequently social engagement did not vary between the pre- and post-tests in the promotion condition. However, we found another intriguing result: social verbal behaviors were exhibited more by children in the post-test than in the pre-test, independent of the priming condition. Our results have strong implications for the design of both educational tasks and child interactions with social agents.
ACKNOWLEDGMENTS
This work was supported by the European Commission Horizon 2020 Research and Innovation Program under Grant Agreement No. 765955. We would like to thank the reviewers for their valuable feedback, which helped us polish the final version of the paper. We would also like to acknowledge the help of Sahba Zojaji in double coding the data for the inter-rater agreement, and Giovanna Varni for her valuable insights on the behavioral analysis of the data and her revision of the paper's first draft.
REFERENCES
[1]
Kim Adams, Adriana Rios, Lina Becerra, and Paola Esquivel. 2015. Using robots to
access play at dierent developmental levels for children with severe disabilities:
a pilot study. In RESNA Conference. RESNA, Washington DC.
[2]
Ralph Adolphs, Daniel Tranel, S Hamann, Andrew W Young, Andrew J Calder,
Elizabeth A Phelps, Al Anderson, Gregory P Lee, and Antonio R Damasio. 1999.
Recognition of facial emotion in nine individuals with bilateral amygdala damage.
Neuropsychologia 37, 10 (1999), 1111–1117.
[3]
Roxana Agrigoroaie, Stefan-Dan Ciocirlan, and Adriana Tapus. 2020. In the Wild
HRI Scenario: Inuence of Regulatory Focus Theory. Frontiers in Robotics and AI
7 (2020).
[4]
Helga Andresen. 2005. Role play and language development in the preschool
years. Culture & Psychology 11, 4 (2005), 387–414.
[5]
Sean Andrist, Bilge Mutlu, and Adriana Tapus. 2015. Look like me: matching
robot personality via gaze to increase motivation. In Proceedings of the 33rd
annual ACM conference on human factors in computing systems (CHI ’15). ACM,
New York, NY, USA, 3603–3612.
[6]
Matthijs Baas, Carsten KW De Dreu, and Bernard A Nijstad. 2008. A meta-
analysis of 25 years of mood-creativity research: Hedonic tone, activation, or
regulatory focus? Psychological bulletin 134, 6 (2008), 779.
[7]
Matthijs Baas, Carsten KW De Dreu, and Bernard A Nijstad. 2011. When pre-
vention promotes creativity: The role of mood, regulatory focus, and regulatory
closure. Journal of personality and social psychology 100, 5 (2011), 794.
[8]
Roger Bakeman and Vicenç Quera. 2011. Sequential analysis and observational
methods for the behavioral sciences. Cambridge University Press, Cambridge.
[9]
Linda Bell, Joakim Gustafson, and Mattias Heldner. 2003. Prosodic adaptation in
human-computer interaction. In Proceedings of ICPHS 2003, Vol. 3. Citeseer, USA,
833–836.
[10]
Doris Bergen. 2002. The role of pretend play in children’s cognitive development.
Early Childhood Research & Practice 4, 1 (2002), n1.
[11]
Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, and Peter W
McOwan. 2009. Detecting user engagement with a robot companion using
task and social interaction-based features. In Proceedings of the 2009 international
conference on Multimodal interfaces. ACM, New York, NY, USA, 119–126.
[12]
Jill MacLaren Chorney, C Meghan McMurtry, Christine T Chambers, and Roger
Bakeman. 2015. Developing and modifying behavioral coding schemes in pedi-
atric psychology: a practical guide. Journal of pediatric psychology 40, 1 (2015),
154–164.
[13]
Ellen Crowe and E ToryHiggins. 1997. Regulator y focus and strategic inclinations:
Promotion and prevention in decision-making. Organizational behavior and
human decision processes 69, 2 (1997), 117–132.
[14]
Arturo Cruz-Maya, Roxana Agrigoroaie, and Adriana Tapus. 2017. Improving
user’s performance by motivation: Matching robot interaction strategy with
user’s regulatory state. In International Conference on Social Robotics. Springer,
New York, USA, 464–473.
[15]
Arturo Cruz-Maya and Adriana Tapus. 2018. Adapting Robot Behavior using
Regulatory Focus Theory, User Physiological State and Task-Performance Infor-
mation. In 2018 27th IEEE International Symposium on Robot and Human Interactive
Communication (RO-MAN). IEEE, New York, USA, 644–651.
[16]
Jesus Alfonso D Datu, Ronnel B King, and Jana Patricia M Valdez. 2017. The
academic rewards of socially-oriented happiness: Interdependent happiness pro-
motes academic engagement. Journal of School Psychology 61 (2017), 19–31.
[17]
Paul Ekman and Wallace V Friesen. 1978. Manual for the facial action coding
system. Consulting Psychologists Press, Palo Alto, CA, USA.
Reward Seeking or Loss Aversion? CHI ’21, May 8–13, 2021, Yokohama, Japan
[18]
Caroline Faur, Jean-Claude Martin, and Celine Clavel. 2015. Matching articial
agents’ and users’ personalities: designing agents with regulatory-focus and
testing the regulatory t eect.. In CogSci. Cognitive Science Society, Washington,
USA.
[19]
Ronald S Friedman and Jens Förster. 2005. Eects of motivational cues on
perceptual asymmetry: Implications for creativity and analytical problem solving.
Journal of personality and social psychology 88, 2 (2005), 263.
[20]
Jose Maria Garcia-Garcia, Victor MR Penichet, and Maria D Lozano. 2017. Emo-
tion detection: a technology review. In Proceedings of the XVIII international
conference on human computer interaction. Springer, USA, 1–8.
[21]
Patrick Gomez, PG Zimmermann, Sissel Guttormsen Schär, and Brigitta Danuser.
2009. Valence lasts longer than arousal: Persistence of induced moods as assessed
by psychophysiological measures. Journal of Psychophysiology 23, 1 (2009), 7–17.
[22]
E Tory Higgins. 1997. Beyond pleasure and pain. American psychologist 52, 12
(1997), 1280.
[23]
Rens Hoegen, Deepali Aneja, Daniel McDu, and Mary Czerwinski. 2019. An
end-to-end conversational style matching agent. In Proceedings of the 19th ACM
International Conference on Intelligent Virtual Agents. ACM, New York, NY, USA,
111–118.
[24]
Rens Hoegen, Job Van Der Schalk, Gale Lucas, and Jonathan Gratch. 2018. The
impact of agent facial mimicry on social behavior in a prisoner’s dilemma. In
Proceedings of the 18th International Conference on Intelligent Virtual Agents. ACM,
New York, NY, USA, 275–280.
[25]
Galit Hofree, Paul Ruvolo, Marian Stewart Bartlett, and Piotr Winkielman. 2014.
Bridging the mechanical and the human mind: spontaneous mimicry of a physi-
cally present android. PloS one 9, 7 (2014), e99934.
[26]
Henning Holle and Robert Rein. 2015. EasyDIAg: A tool for easy determination
of interrater agreement. Behavior research methods 47, 3 (2015), 837–847.
[27]
Alice M Isen. 1987. Positive aect, cognitive processes, and social behavior.
In Advances in experimental social psychology. Vol. 20. Elsevier, Amsterdam,
Netherlands, 203–253.
[28]
Alice M Isen, Kimberly A Daubman, and Gary P Nowicki. 1987. Positive aect
facilitates creative problem solving. Journal of personality and social psychology
52, 6 (1987), 1122.
[29]
Jason F Jent, Larissa N Niec, and Sarah E Baker. 2011. Play and interpersonal
processes. Play in clinical practice: evidence-based approaches. Guilford Press, New
York 2, 2 (2011), 23–47.
[30]
Eun Ho Kim, Sonya S Kwak, and Yoon Keun Kwak. 2009. Can robotic emotional
expressions induce a human to empathize with a robot?. In RO-MAN 2009-The 18th
IEEE International Symposium on Robot and Human Interactive Communication.
IEEE, USA, 358–362.
[31]
J. Kkedzierski, R. Muszyński, C. Zoll, A. Oleksy, and M. Frontkiewicz. 2013.
EMYS—emotive head of a social robot. International Journal of Social Robotics 5,
2 (2013), 237–249.
[32] Jacqueline M Kory-Westlund and Cynthia Breazeal. 2019. Exploring the effects of a social robot's speech entrainment and backstory on young children's emotion, rapport, relationship, and learning. Frontiers in Robotics and AI 6 (2019), 54.
[33] Dalibor Kučera and Jiří Haviger. 2012. Using mood induction procedures in psychological research. Procedia-Social and Behavioral Sciences 69 (2012), 31–40.
[34] Andre Kuijsters, Judith Redi, Boris de Ruyter, and Ingrid Heynderickx. 2016. Inducing sadness and anxiousness through visual media: Measurement techniques and persistence. Frontiers in psychology 7 (2016), 1141.
[35] Sonya S Kwak, Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho. 2013. What makes people empathize with an emotional robot?: The impact of agency and physical embodiment on human empathy for a robot. In 2013 IEEE RO-MAN. IEEE, USA, 180–185.
[36] Heather C Lench, Sarah A Flores, and Shane W Bench. 2011. Discrete emotions predict changes in cognition, judgment, experience, behavior, and physiology: a meta-analysis of experimental emotion elicitations. Psychological bulletin 137, 5 (2011), 834.
[37] C. Li, Q. Jia, and Y. Feng. 2016. Human-Robot Interaction Design for Robot-Assisted Intervention for Children with Autism Based on E-S Theory. In 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Vol. 02. CPS, USA, 320–324.
[38] John D Mayer, Laura J McCormick, and Sara E Strong. 1995. Mood-congruent memory and natural mood: New evidence. Personality and Social Psychology Bulletin 21, 7 (1995), 736–746.
[39] Daniel McDuff, Abdelrahman Mahmoud, Mohammad Mavadati, May Amr, Jay Turcot, and Rana el Kaliouby. 2016. AFFDEX SDK: a cross-platform real-time multi-face expression recognition toolkit. In Proceedings of the 2016 CHI conference extended abstracts on human factors in computing systems. ACM, New York, NY, USA, 3723–3726.
[40] Candice M Mottweiler and Marjorie Taylor. 2014. Elaborated role play and creativity in preschool age children. Psychology of Aesthetics, Creativity, and the Arts 8, 3 (2014), 277.
[41] John W Mullennix, Steven E Stern, Stephen J Wilson, and Corrie-lynn Dyson. 2003. Social perception of male and female computer synthesized speech. Computers in Human Behavior 19, 4 (2003), 407–424.
[42] Yfke P Ongena and Wil Dijkstra. 2006. Methods of behavior coding of survey interviews. Journal of Official Statistics 22, 3 (2006), 419.
[43] Maike Paetzel, Isabelle Hupont, Giovanna Varni, Mohamed Chetouani, Christopher Peters, and Ginevra Castellano. 2017. Exploring the Link between Self-assessed Mimicry and Embodiment in HRI. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA, 245–246.
[44] Nicolas Poirel, Mathieu Cassotti, Virginie Beaucousin, Arlette Pineau, and Olivier Houdé. 2012. Pleasant emotional induction broadens the visual world of young children. Cognition & emotion 26, 1 (2012), 186–191.
[45] Kenneth M Prkachin, Rhonda M Williams-Avery, Caroline Zwaal, and David E Mills. 1999. Cardiovascular changes during induced emotion: An application of Lang's theory of emotional imagery. Journal of psychosomatic research 47, 3 (1999), 255–267.
[46] Fabiana Silva Ribeiro, Flávia Heloísa Santos, Pedro Barbas Albuquerque, and Patrícia Oliveira-Silva. 2019. Emotional induction through music: Measuring cardiac and electrodermal responses of emotional states and their persistence. Frontiers in psychology 10 (2019), 451.
[47] Laurel D Riek, Philip C Paul, and Peter Robinson. 2010. When my robot smiles at me: Enabling human-robot rapport via real-time head gesture mimicry. Journal on Multimodal User Interfaces 3, 1-2 (2010), 99–108.
[48] Ben Robins, Kerstin Dautenhahn, Rene Te Boekhorst, and Aude Billard. 2005. Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills? Universal access in the information society 4, 2 (2005), 105–120.
[49] Natacha Rouaix, Laure Retru-Chavastel, Anne-Sophie Rigaud, Clotilde Monnet, Hermine Lenoir, and Maribel Pino. 2017. Affective and engagement issues in the conception and assessment of a robot-assisted psychomotor therapy for persons with dementia. Frontiers in psychology 8 (2017), 950.
[50] Gillian Rowe, Jacob B Hirsh, and Adam K Anderson. 2007. Positive affect increases the breadth of attentional selection. Proceedings of the National Academy of Sciences 104, 1 (2007), 383–388.
[51] Sandra W Russ and Julie A Fiorelli. 2010. Developmental approaches to creativity. The Cambridge handbook of creativity 12 (2010), 233–249.
[52] Anara Sandygulova and Gregory MP O'Hare. 2016. Investigating the impact of gender segregation within observational pretend play interaction. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, USA, 399–406.
[53] Sofia Serholt and Wolmet Barendregt. 2016. Robots tutoring children: Longitudinal evaluation of social engagement in child-robot interaction. In Proceedings of the 9th nordic conference on human-computer interaction. ACM, New York, NY, USA, 1–10.
[54] Ewa Siedlecka and Thomas F Denson. 2019. Experimental methods for inducing basic emotions: A qualitative review. Emotion Review 11, 1 (2019), 87–97.
[55] Elly Singer, Merel Nederend, Lotte Penninx, Mehrnaz Tajik, and Jan Boom. 2014. The teacher's role in supporting young children's level of play engagement. Early Child Development and Care 184, 8 (2014), 1233–1249.
[56] Bram Vanderborght, Ramona Simut, Jelle Saldien, Cristina Pop, Alina S Rusu, Sebastian Pintea, Dirk Lefeber, and Daniel O David. 2012. Using the social robot probo as a social story telling agent for children with ASD. Interaction Studies 13, 3 (2012), 348–372.
[57] Rainer Westermann, Kordelia Spies, Günter Stahl, and Friedrich W Hesse. 1996. Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of social psychology 26, 4 (1996), 557–580.
[58] Peter Wittenburg, Hennie Brugman, Albert Russel, Alex Klassmann, and Han Sloetjes. 2006. ELAN: a professional framework for multimodality research. In 5th International Conference on Language Resources and Evaluation (LREC 2006). European Language Resources Association, Marseille, France, 1556–1559.
[59] Xuan Zhang, Hui W Yu, and Lisa F Barrett. 2014. How does this make you feel? A comparison of four affect induction procedures. Frontiers in psychology 5 (2014), 689.
... Prevention focused users take a vigilant or careful approach while trying to achieve their goals, and they try to avoid the negative aspects or regress from their current state while trying to achieve their goal. RFT has been used in the past to tailor messages and successfully motivate users toward their goals [20,22,30,52]. ...
... Players in the ft condition also spent more time on learning-related behaviours. Elgarf et al. [30] designed an interactive pretend play game (with human-robot interaction) according to RFT to study the impact of RFT on emotional induction in children. The authors designed a pretend-play narrative game where the child must work with the robot to escape to planet Mars. ...
... We found evidence to support our hypothesis that players who played the game tailored according to their motivational orientation had a signifcant improvement in security behaviour over time compared to those who played the non-tailored version of the game. These results are similar to previous studies (both in the domain of games for change and health interventions) that employed Regulatory Focus Theory and found that tailoring the intervention according to users' motivational orientation persuaded them toward their goals [30,39,52,53]. From the results, it was also evident that tailoring the game increases play experience, specifcally, the players' Perceived Choice. ...
Conference Paper
Full-text available
The use of smartphones has become an integral part of everyone’s lives. Due to the ubiquitous nature and multiple functionalities of smartphones, the data handled by these devices are sensitive in nature. Despite the measures companies take to protect users’ data, research has shown that people do not take the necessary actions to stay safe from security and privacy threats. Persuasive games have been implemented across various domains to motivate people towards a positive behaviour change. Even though persuasive games could be effective, research has shown that the one-size-fits-all approach to designing persuasive games might not be as effective as the tailored versions of the game. This paper presents the design and evaluation of a persuasive game to improve user awareness about smartphone security and privacy tailored to the user’s motivational orientation using Regulatory Focus Theory. From the results of our mixed-methods in-the-wild study of 102 people followed by a one-on-one interview of 25 people, it is evident that the tailored version of the persuasive game performed better than the non-tailored version of the game towards improving users’ secure smartphone behaviour. We contribute to the broader HCI community by offering design suggestions and the benefits of tailoring persuasive games.
... The Ekman's emotion model comprises six basic emotional states out of which five are universal (i.e., emotions that all humans have in common) [30], namely anger, fear, happiness, sadness, and disgust. The model has been widely applied in the literature over the years, including in human-computer interaction (HCI) research [33][34][35]. The PSD model consists of twenty-eight persuasive strategies for designing and evaluating persuasive or behaviour change systems and has enjoyed widespread use in persuasive technology research including [36][37][38][39][40]. Similarly, the ABACUS framework comprises twenty-one persuasive strategies for assessing the behaviour change potential of smartphone-based applications or systems. ...
... A third dimension called dominance refers to the degree to which individuals have control over their emotion [52]. Compared to the dimensional theories, the discrete emotion theories have enjoyed widespread use in emotion-based HCI research including [33][34][35]53]. We utilize Ekman's five universal emotional stateshappiness, sadness, anger, fear, disgust -in this work. ...
Conference Paper
Full-text available
Technologies have been shown to alter how people feel and create outlets for expressing positive and/or negative emotions. This indicates that persuasive systems, which rely on persuasive strategies (PS) to motivate behaviour change, have the potential to elicit emotions in users. However, there is no empirical evidence on whether or not PS evoke emotions and how to tailor PS based on emotional states. Therefore, we conduct a large-scale study of 660 participants to investigate if and how individuals respond emotionally to various PS and why. Our results show that some PS (such as Reward, Reduction, and Rehearsal) evoke positive emotion only, while others (such as Self-monitoring, Reminder, and Suggestion) evoke both positive and negative emotions at varying degrees and for different reasons. Our research links emotion theory with behaviour change models to develop practical guidelines for designing emotion-adaptive persuasive systems that employ appropriate PS to motivate behaviour change while regulating users' emotion.
... We generated the behavior of the robot by fne-tuning the Open AI GPT-3 model 2 [16] to create two models that produce creative versus non-creative story continuations. We used training data provided by our two previously conducted studies between a robot and children in a storytelling setting [11,13] to fne-tune our models. The creativity was generated utilising the four creativity variables: fuency, fexibility, elaboration and originality. ...
... We have modifed our previously implemented storytelling software [11,13] developed using the Unity game engine 3 . The theme for the software is a fairy tale castle. ...
... Researchers found that children in the prevention condition perceived the robot as more likeable than those assigned to the promotion one [29]; the authors suggested as a possible interpretation that the robot expressed more vulnerability (i.e., fear) in the prevention condition, leading children to perceive it as more likeable and relatable. Nevertheless, children assigned to the promotion condition expressed more happiness and were more engaged with the robot during the interaction than those assigned to the prevention condition [30]. Despite the relevant work on this topic, no previous study has investigated the impact of regulatory focus on creativity performance in cHRI, the core contribution of our work. ...
... A Wilcoxon signed-rank non-parametric test revealed that children in the promotion condition exhibited a significantly higher number of smiles (W = 199, p = 0.013, M = 9.45, SD = 12.92) and joyful expressions (W = 216, p = 0.03, M = 7.52, SD = 12.04) than those in the prevention condition, which suggests that our intervention worked as expected. These results are retrieved from the analysis and procedures that we conducted for emotional detection from the same study in [30]. ...
Conference Paper
Full-text available
While creativity has been previously studied in Child-Robot Interaction (cHRI), the effect of regulatory focus on creativity skills has not been investigated. This paper presents an exploratory study that, for the first time, uses the Regulatory Focus Theory (RFT) to assess children's creativity skills in an educational context with a social robot. We investigated whether two key emotional regulation techniques, promotion (approach) and prevention (avoidance), stimulate creativity during a story-telling activity between a child and a robot. We conducted a between-subjects field study with 69 children between the ages of 7 and 9 years old, divided between two study conditions: (1) promotion, where a social robot primes children for action by eliciting positive emotional states, and (2) prevention, where a social robot primes children for avoidance by evoking a states related to security and safety associated with blockage-oriented behaviors. To assess changes in creativity as a response to the priming interaction, children were asked to tell stories to the robot before (pre-test) and after (post-test) the priming interaction. We measured creativity levels by analyzing the verbal content of the stories. We coded verbal expressions related to creativity variables, including fluency, flexibility, elaboration, and originality. Our results show that children in the promotion condition generated significantly more ideas, and their ideas were on average more original in the stories they created in the post-test rather than in the pre-test. We also modeled the process of creativity that emerges during storytelling in response to the robot's verbal behavior. This paper enriches the scientific understanding of creativity emergence in child-robot collaborative interactions.
... Many studies have sought to understand the possibilities of AI, concentrating primarily on its accuracy (Coniam 2014) as well as its effective use for resource management in the workplace, including applicant screening during the hiring process (Mehta et al. 2013;Park et al. 2021;Van Esch et al. 2019), and customer service (Følstad and Skjuve 2019;Xu et al. 2017). Conversational AI agents, both text-based and voicebased, emerge as pivotal in streamlining operations and reducing costs, with voicebased agents extending emotional support in specialized fields such as healthcare and therapy (Cha et al. 2021;Elgarf et al. 2021). Fuoli et al. (2021) extend this understanding to social media platforms, revealing how companies' responses to customer feedback on Twitter, particularly those employing an affective style, can effectively manage customer relations and mitigate reputational risks. ...
Article
Full-text available
Situated at the intersection of language, discourse, and communication studies, the present study delves into the dynamics of human-artificial intelligence (AI) interactions. Our study centers on AI-based voice assistants which employ natural language processing to communicate with human users. With a dataset derived from 200 recorded interactions between human users and AI-based voice assistants of a leading Korean telecommunications provider, we investigate the intricate dialogue patterns that emerge within these exchanges. Employing the lens of conversation analysis, especially focusing on adjacency pairs, first pair-part (FPP) and second pair-part (SPP), our analysis elucidates how AI agents and human users negotiate meaning and interactional roles. We identify four distinct response types from the users’ SPP, revealing a variety of interactional patterns. The findings reveal that the users frequently respond to AI-initiated prompts with keywords, reflecting a strategy to efficiently retrieve information, and highlight instances of no verbal response. Additionally, the use of honorifics in Korean AI voice assistants underlines the influence of linguistic and cultural norms on the dynamics of human-AI interaction, emphasizing the need for AI systems to navigate social hierarchies effectively. Our study underscores the importance of enhancing human-AI dialogue and provides valuable implications for interdisciplinary research and practice in the rapidly evolving field of AI-based communication.
... Regulatory focus theory has been used in the Human-Computer Interaction (HCI) literature to promote user experience in humanrobot interactions [5,23,31], privacy decision-making [20,63], and human interactions with virtual agents [32]. Le et al. [63] used regulatory focus theory to study individuals' privacy decision-making in a mobile payment application. ...
Conference Paper
Full-text available
In this study, we explore the effectiveness of persuasive messages endorsing the adoption of a privacy protection technology (IoT Inspector) tailored to individuals' regulatory focus (promotion or prevention). We explore if and how regulatory fit (i.e., tuning the goal-pursuit mechanism to individuals' internal regulatory focus) can increase persuasion and adoption. We conducted a between-subject experiment (N = 236) presenting participants with the IoT Inspector in gain ("Privacy Enhancing Technology"-PET) or loss ("Privacy Preserving Technology"-PPT) framing. Results show that the effect of regulatory fit on adoption is mediated by trust and privacy calculus processes: prevention-focused users who read the PPT message trust the tool more. Furthermore, privacy calculus favors using the tool when promotion-focused individuals read the PET message. We discuss the contribution of understanding the cognitive mechanisms behind regulatory fit in privacy decision-making to support privacy protection. CCS CONCEPTS • Security and privacy → Privacy protections; Economics of security and privacy; Usability in security and privacy; • Social and professional topics → Privacy policies.
... Agents (e.g robots and virtual characters) have long been used for educational purposes. A typical application and entertaining activity of using agents for education is storytelling used for helping children learn languages, develop their social skills and stimulate their creativity [15], [23]- [26]. Previous literature has been in favor of structuring educational activities between a robot and a human user in a collaborative manner to render the activity more engaging and maximise the user's learning performance [9], [10], [27]. ...
... The majority of the examples we saw in robotics use interaction patterns to support the user's creativity. Robots adapts the role of either being a supportive agent that facilitates the user's creativity (Elgarf et al., 2021;Alves-Oliveira et al., 2019) or being a creative peer that collaborating with the user on a creative task (Law et al., 2019;Lin et al., 2020;Hu et al., 2021). In that sense, all of the examples put creative thinking of the user as an aim of the robot. ...
Thesis
Proactive behaviors are self-initiated behaviors to cope with a problem that has or will occur. In order for robots to be truly assistive to humans, we need the robot to be equipped with proactive behaviors; as it can help humans to achieve their goals. Studies have so far focused on proactive behaviors very specific to one domain, i.e. neglecting a general framework or model of proactivity that could be used in a wide variety of HRI scenarios. In this thesis, we specifically focus on the reasoning over these intentions, of 1) what proactive behaviors to generate in this interaction, and 2) when proactivity could occur, for a robot assisting a user. To do so, we propose a generic cognitive framework that encompasses an entire interaction between the user and the robot. To propose such a framework, we address the challenges of user intention recognition by inverse planning, reasoning over these intentions, and then generating appropriate proactive behaviors either plan-based or rule-based algorithms to help the user to achieve their goal. Later, we combine our reasoning module with the latest state-of-the-art framework on predictive proactivity to cope with when to initiate. Lastly, we evaluate our proposed framework by applying various tasks. In the first set of studies, we question how the proactive behavior of robots and the creativity of humans are linked. The overall results indicate that the proposed framework is flexible enough to combine with already existing solutions to have improved proactive reasoning.
Article
Creativity is an important skill that is known to plummet in children when they start school education that limits their freedom of expression and their imagination. On the other hand, research has shown that integrating social robots into educational settings has the potential to maximize children’s learning outcomes. Therefore, our aim in this work was to investigate stimulating children’s creativity through child-robot interactions. We fine-tuned a Large Language Model (LLM) to exhibit creative behavior and non-creative behavior in a robot and conducted two studies with children to evaluate the viability of our methods in fostering children’s creativity skills. We evaluated creativity in terms of four metrics: fluency, flexibility, elaboration, and originality. We first conducted a study as a storytelling interaction between a child and a wizard-ed social robot in one of two conditions: creative versus non-creative with 38 children. We investigated whether interacting with a creative social robot will elicit more creativity from children. However, we did not find a significant effect of the robot’s creativity on children’s creative abilities. Second, in an attempt to increase the possibility for the robot to have an impact on children’s creativity and to increase the fluidity of the interaction, we produced two models that allow a social agent to autonomously engage with a human in a storytelling context in a creative manner and a non-creative manner respectively. Finally, we conducted another study to evaluate our models by deploying them on a social robot and evaluating them with 103 children. Our results show that children who interacted with the creative autonomous robot were more creative than children who interacted with the non-creative autonomous robot in terms of the fluency, the flexibility, and the elaboration aspects of creativity. 
The results highlight the difference in children’s learning performance when inetracting with a robot operated at different autonomy levels (Wizard of Oz versus autonoumous). Furthermore, they emphasize on the impact of designing adequate robot’s behaviors on children’s corresponding learning gains in child-robot interactions.
Article
Full-text available
Objective We present parenting regulatory focus as a theoretical framework to understand parenting goal motivations and describe the development and validation of a 16‐item Parenting Regulatory Focus Scale. Background Most parenting research is focused on parenting behaviors, but it is also important to understand the goal motivations behind parental approaches to raising children. Method We used two independent samples ( N 1 = 856; N 2 = 497) to validate the Parenting Regulatory Focus Scale as a two‐factor structure composed of promotion‐ and prevention‐based parenting regulatory focus. Across two studies, we tested the construct validity of the Parenting Regulatory Focus Scale through correlations with general regulatory focus, parents' personality traits, child temperament, parenting styles and behaviors, and child adjustment. Results The scale scores demonstrated good internal reliabilities (αs = .86–.91), as well as 2‐week (α promotion = .65 , α prevention = .77) and 6‐month test–rest reliabilities (α promotion = .61 , α prevention = .66). Path analysis supported the relationship between parenting regulatory focus and child adjustment as mediated by parenting styles and behaviors. Conclusions and Implications The Parenting Regulatory Focus Scale is a promising tool that can contribute to parenting research and tailoring of parenting interventions.
Article
Full-text available
Research related to regulatory focus theory has shown that the way in which a message is conveyed can increase the effectiveness of the message. While different research fields have used this theory, in human-robot interaction (HRI), no real attention has been given to this theory. In this paper, we investigate it in an in the wild scenario. More specifically, we are interested in how individuals react when a robot suddenly appears at their office doors. Will they interact with it or will they ignore it? We report the results from our experimental study in which the robot approaches 42 individuals. Twenty-nine of them interacted with the robot, while the others either ignored it or avoided any interaction with it. The robot displayed two types of behavior (i.e., promotion or prevention). Our results show that individuals that interacted with a robot that matched their regulatory focus type interacted with it significantly longer than individuals that did not experience regulatory fit. Other qualitative results are also reported, together with some reactions from the participants.
Article
Full-text available
In positive human-human relationships, people frequently mirror or mimic each other's behavior. This mimicry, also called entrainment, is associated with rapport and smoother social interaction. Because rapport in learning scenarios has been shown to lead to improved learning outcomes, we examined whether enabling a social robotic learning companion to perform rapport-building behaviors could improve children's learning and engagement during a storytelling activity. We enabled the social robot to perform two specific rapport and relationship-building behaviors: speech entrainment and self-disclosure (shared personal information in the form of a backstory about the robot's poor speech and hearing abilities). We recruited 86 children aged 3–8 years to interact with the robot in a 2 × 2 between-subjects experimental study testing the effects of robot entrainment Entrainment vs. No entrainment and backstory about abilities Backstory vs. No Backstory. The robot engaged the children one-on-one in conversation, told a story embedded with key vocabulary words, and asked children to retell the story. We measured children's recall of the key words and their emotions during the interaction, examined their story retellings, and asked children questions about their relationship with the robot. We found that the robot's entrainment led children to show more positive emotions and fewer negative emotions. Children who heard the robot's backstory were more likely to accept the robot's poor hearing abilities. Entrainment paired with backstory led children to use more of the key words and match more of the robot's phrases in their story retells. Furthermore, these children were more likely to consider the robot more human-like and were more likely to comply with one of the robot's requests. 
These results suggest that the robot's speech entrainment and backstory increased children's engagement and enjoyment in the interaction, improved their perception of the relationship, and contributed to children's success at retelling the story.
Article
Full-text available
Emotional inductions through music (EIM) procedures have proved to evoke genuine emotions according to neuroimaging studies. However, the persistence of the emotional states after being exposed to musical excerpts remains mostly unexplored. This study aimed to investigate the curve of emotional state generated by an EIM paradigm over a 6-min recovery phase, monitored with valence and arousal self-report measures, and physiological parameters. Stimuli consisted of a neutral and two valenced musical excerpts previously reported to generate such states. The neutral excerpt was composed in a minimalist form characterized by simple sonorities, rhythms, and patterns; the positive excerpt had fast tempo and major tones, and the negative one was slower in tempo and had minor tone. Results of 24 participants revealed that positive and negative EIM effectively induced self-reported happy and sad emotions and elicited higher skin conductance levels (SCL). Although self-reported adjectives describing evoked-emotions states changed to neutral after 2 min in the recovery phase, the SCL data suggest longer lasting arousal for both positive and negative emotional states. The implications of these outcomes for musical research are discussed.
Conference Paper
Full-text available
A long tradition of research suggests a relationship between emotional mimicry and pro-social behavior, but the nature of this relationship is unclear. Does mimicry cause rapport and cooperation, or merely reflect it? Virtual humans can provide unique insights into these social processes by allowing unprecedented levels of experimental control. In a 2 x 2 factorial design, we examined the impact of facial mimicry and counter-mimicry in the iterated prisoner's dilemma. Participants played with an agent that copied their smiles and frowns or one that showed the opposite pattern -- i.e., that frowned when they smiled. As people tend to smile more than frown, we independently manipulated the contingency of expressions to ensure any effects are due to mimicry alone, and not the overall positivity/negativity of the agent: i.e., participants saw either a reflection of their own expressions or saw the expressions shown to a previous participant. Results show that participants smiled significantly more when playing an agent that mimicked them. Results also show a complex association between smiling, feelings of rapport, and cooperation. We discuss the implications of these findings on virtual human systems and theories of cooperation.
Article
Full-text available
Experimental emotion inductions provide the strongest causal evidence of the effects of emotions on psychological and physiological outcomes. In the present qualitative review, we evaluated five common experimental emotion induction techniques: visual stimuli, music, autobiographical recall, situational procedures, and imagery. For each technique, we discuss the extent to which they induce six basic emotions: anger, disgust, surprise, happiness, fear, and sadness. For each emotion, we discuss the relative influences of the induction methods on subjective emotional experience and physiological responses (e.g., heart rate, blood pressure). Based on the literature reviewed, we make emotion-specific recommendations for induction methods to use in experiments.
Conference Paper
Full-text available
The presence of a robot in our everyday life can generate both positive and negative effects on us. While performing a difficult task, the presence of a robot can generate a negative effect on the performance and it can also increase the stress and anxiety levels. In order to minimize these undesired effects, we propose the use of user’s motivation, based on the Regulatory Focus Theory. We analyze the effects of using Regulatory oriented strategies in a robot speech, when giving a person the instructions of how to perform a Stroop Test. We found evidence that matching the Chronic Regulatory state of the participants with the Regulatory oriented strategy of the robot improves the user’s performance, and a mismatch leads to an increase of cognitive load and stress in the participants.
Conference Paper
Full-text available
Emotion detection has become one of the most important aspects to consider in any project related to Affective Computing. Due to the almost endless applications of this new discipline, the development of emotion detection technologies has brought up as a quite profitable opportunity in the corporate sector. Many start-up enterprises have emerged in the last years, dedicated almost exclusively to a specific type of emotion detection technology. In this paper, we present a thorough review of current technologies to detect human emotions. To this end, we explore the different sources from which emotions can be read, along with existing technologies developed to recognize them. We also explore some application domains in which this technology has been applied. This survey has let us identify the strengths and shortcomings of current technology for emotion detection. We conclude the survey highlighting the aspects that requires further research and development.
Article
Full-text available
The interest in robot-assisted therapies (RAT) for dementia care has grown steadily in recent years. However, RAT using humanoid robots is still a novel practice for which the adherence mechanisms, indications and benefits remain unclear. Also, little is known about how the robot's behavioral and affective style might promote engagement of persons with dementia in RAT. The present study sought to investigate the use of a humanoid robot in a psychomotor therapy for persons with dementia. We examined the robot's potential to engage participants in the intervention and its effect on their emotional state. A brief psychomotor therapy program involving the robot as the therapist's assistant was created. For this purpose, a corpus of social and physical behaviors for the robot and a "control software" for customizing the program and operating the robot were also designed. Particular attention was given to components of the RAT that could promote the participant's engagement (robot's interaction style, personalization of contents). In the pilot assessment of the intervention, nine persons with dementia (7 women and 2 men, M age = 86 y/o) hospitalized in a geriatrics unit participated in four individual therapy sessions: one classic therapy (CT) session (patient-therapist) and three RAT sessions (patient-therapist-robot). Outcome criteria for the evaluation of the intervention included: the participant's engagement, emotional state and well-being; satisfaction with the intervention, appreciation of the robot, and empathy-related behaviors in human-robot interaction. Results showed high constructive engagement in both CT and RAT sessions. More positive emotional responses were observed in participants in RAT compared to CT. RAT sessions were better appreciated than CT sessions. The use of a social robot as a mediating tool appeared to promote the involvement of persons with dementia in the therapeutic intervention, increasing their immediate well-being and satisfaction.
Conference Paper
We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high-consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high-involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.