Journal on Multimodal User Interfaces
https://doi.org/10.1007/s12193-020-00353-9
ORIGINAL PAPER
Words of encouragement: how praise delivered by a social robot
changes children’s mindset for learning
Daniel P. Davison¹ · Frances M. Wijnen² · Vicky Charisi¹ · Jan van der Meij³ · Dennis Reidsma¹ · Vanessa Evers⁴
Received: 31 May 2019 / Accepted: 5 November 2020
© The Author(s) 2020
Abstract
This paper describes a longitudinal study in which children could interact unsupervised and at their own initiative with a fully
autonomous computer aided learning (CAL) system situated in their classroom. The focus of this study was to investigate how
the mindset of children is affected when effort-related praise is delivered through a social robot. We deployed two versions: a
CAL system that delivered praise through headphones only, and an otherwise identical CAL system that was extended with a
social robot to deliver the praise. A total of 44 children interacted repeatedly with the CAL system in two consecutive learning
tasks over the course of approximately four months. Overall, the results show that the participating children experienced a
significant change in mindset. The effort-related praise that was delivered by a social robot seemed to have had a positive
effect on children’s mindset, compared to the regular CAL system where we did not see a significant effect.
Keywords Early childhood education · Social robotics · Child-robot interaction · Mindset · Praise · Inquiry learning
This research has been funded by the European Union 7th Framework
Program (FP7-ICT-2013-10) EASEL under the Grant Agreement No.
611971.
✉ Daniel P. Davison
d.p.davison@utwente.nl
Frances M. Wijnen
f.m.wijnen@utwente.nl
Vicky Charisi
vasiliki.charisi@ec.europa.eu
Jan van der Meij
j.v.d.meij@het-erasmus.nl
Dennis Reidsma
d.reidsma@utwente.nl
Vanessa Evers
vanessa.evers@ntu.edu.sg
1 Department of Human Media Interaction, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
2 ELAN, Department of Teacher Development, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, The Netherlands
3 Het Erasmus, Almelo, The Netherlands
4 School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
1 Introduction
Since the emergence of personal computers, researchers and
educators have recognised the potential of Computer Aided
Learning (CAL) systems to support children in their edu-
cation [35]. Although early CAL systems relied mainly on
text-based interactions, modern learning environments have
offered advanced graphical or physical interfaces which offer
richer and more elaborate forms of interacting with the learn-
ing system. Various forms of such advanced CAL systems
have been investigated, each focusing on supporting distinct
aspects of students’ learning processes. For instance, some
CAL systems aim to support mastery of knowledge and skill
through drill-and-practice style content, or focus on offering
direct tutoring and instruction, while others adopt a construc-
tivist approach using inquiry based learning techniques. It is
this latter category of CAL systems that we further investigate
in this paper and in the context of the European Commission
funded EASEL project [9,52].
CAL systems are increasingly being extended with social
capabilities. In such cases, the system often utilises a social
agent (either virtual or robotic) to support the learning pro-
cess. Through social interactions with the learner, the agent
can offer different and richer forms of support, which would
be difficult to achieve with a strictly non-social CAL system.
We are interested in exploring ways in which extending a
typical inquiry based CAL system with a social robot can
have a meaningful impact on children’s education: How can
we leverage the robot’s inherently social nature to support a
better learning experience?
In a previous study we investigated verbalisation behaviour
of children who were prompted to explain their thought pro-
cess to an interactive CAL system that was extended with a
social robot [62]. There, the role of the robot was to prompt
for more detailed (verbal) explanations while participants
worked on an inquiry learning task. We considered such
prompting for explanations to be social acts of the system and
argued that in these situations a social entity like a robot is
better able to elicit a favourable response. Results from
that study showed that children gave more detailed and more
relevant explanations when the CAL system was extended
with a social robot through which to deliver the prompts.
Inspired by the findings from our previous study we inves-
tigated other social aspects of learning where robots might
have opportunities to enhance the delivery of the social acts
of the CAL system. The social act of giving praise is one
such instance where we saw opportunities for a social robot
to play a meaningful role. Praise has long been recognised
as an important social mechanism that can be used to sup-
port a learner. Praising the learner’s process, abilities, and
achievements has been shown to influence their motivation,
performance, and self-esteem, among others [23,28,29,45].
In a later section of this paper we give a more detailed account
of the relationships between learning and praise.
We conducted a long-term, unsupervised, in-the-wild
study where we investigated effects of praise by a CAL sys-
tem, delivered through a social robot, on children’s attitudes
towards learning. The primary results that focus on the effects
of praise delivered by a robot are reported in the paper at
hand, while the process of designing a long-term interaction
and the results of deploying such a setup in classes over an
extended period of time are discussed in [15].
This paper is structured as follows. We introduce the con-
cept of a social robot and discuss the related pedagogic
theories on praise and mindset in Sects. 2 and 3, respectively,
followed by the aims and objectives in Sect. 4. In Sect. 5
we present the technical setup and discuss the design of the
system’s multimodal interactions. The study methodology is
discussed in Sect. 6, which covers the experimental design,
measures, and procedures, and the analysis is described in Sect. 7.
Results are presented in Sect. 8
and further discussed in Sect. 9. Finally, in Sect. 10 we draw
the main conclusions.
2 Computer aided learning and social robots
The goal of instruction and teaching is to promote learn-
ing. Since we invest much time, money, and effort in good
education it is worthwhile to understand what learning is:
“Learning refers to lasting changes in the learner’s knowl-
edge where such changes are due to experience. Thus,
learning is defined as a relatively permanent change in some-
one’s knowledge based on the person’s experience.” [43,
p. 7]. Among many other tools, educators may use Com-
puter Aided Learning (CAL) systems to foster this change
in knowledge by offering the learner an interactive learning
environment.
To support the learner, CAL systems may for example
present background information about the topic at hand, pro-
vide the learner with templates or step-by-step instructions,
or constrain the learner’s interactions with the learning envi-
ronment to reduce variables in a problem space [36,58,59].
Additionally, such systems may, for example, monitor and
structure the learning process to offer adequate advice and
feedback [63]. Modern CAL systems may further personalise
and adapt the learning experience to match an individual’s
characteristics, performance, and personal development [51].
CAL systems are sometimes extended with pedagogical
agents to add a social dimension to the learning experience
[22,27]. These are social agents that perceive and act in a social
context, aiming to support the learning process by communicating
with the user. Among other things, pedagogical
agents can improve learners' self-efficacy (e.g. [1]), reduce
anxiety (e.g. [2]), and offer motivational scaffolds (e.g. [57]).
Social robots are increasingly used as pedagogical agents
to support learning, and are applied in many situations
where social interactions play a role [4,38]. Being physi-
cally embodied and physically present bestows robots with
multimodal interaction capabilities that set them apart from
virtual agents, playing a role in how users interact with them
socially [41,60]. For example, through their physical embod-
ied nature robots have been shown to improve social presence
(e.g. [37]), turn-taking (e.g. [34]), and learning gains (e.g.
[40]).
Aspects of robots’ social capabilities are often used
to enhance the learning of children in various educa-
tional domains, showing promising results throughout. For
instance, robots are becoming popular tools in language
learning [8]; the social nature of the robot can be used in sto-
rytelling situations (e.g. [31,33]) where it may help modulate
the child’s affective state [21]. Additionally, robots are used
to help children with their handwriting; these robots evoke
the learning-by-teaching paradigm (e.g. [24,39]) where they
achieve success by posing as a social peer that is capable
of learning [7]. Robots have also been shown to impact chil-
dren’s problem-solving skills (e.g. [10]) and robot peers have
supported children’s self-regulation of medical conditions
such as diabetes (e.g. [3,13]), where the robot can be provided
with a relatable background story to enable more natural co-
learning paradigms.
Zeno, the robot used in this study, is a small humanoid
robot with an expressive face (see Fig. 1).

Fig. 1 The robot used in this study: Robokind's Zeno R25

Besides applications in typical primary education, the Zeno robot has
also been used in therapeutic settings involving children
with autism spectrum disorder. In those settings, the robot
is able to elicit spontaneous exploratory child-robot interac-
tions and facilitate child-adult interactions [55]. Furthermore,
this robot was used to design interactive experiences for chil-
dren with autism where they can practice facial expressions
[11,42]. Although it can sometimes be hard to recognise
which emotion is expressed by robots, work from Cheva-
lier et al. [12] has shown that the availability of Zeno’s facial
features plays an important role in successfully expressing
its emotions. In general, they showed that individuals with
autism, as well as typically developing individuals, could
more easily recognise emotions that were expressed through
a combination of body posture and facial features. Schaden-
berg et al. [54] suggest that in some cases the recognition
of the robot’s emotions can be further improved through
multimodal non-verbal affect bursts, such as laughter or
sobbing—although, obviously, this can only be done when
such affect bursts are appropriate to the interaction.
3 Praise and mindset
In our work we use the robot’s social capabilities in combina-
tion with offering praise to influence how children learn. One
of the factors that plays an important role in the motivation for
learning and thinking in school settings is a learner’s mindset
[6]. Dweck [19] describes two forms of mindset: (1) a fixed
mindset is characterised by the belief that you are born with a
certain capacity and that you cannot influence your capacity
very much; (2) a growth mindset is characterised by the belief
that you can improve your capabilities and expertise through
perseverance and effort, and that failure is an inherent part
of learning.
On the one hand, Mueller and Dweck [45] and Dweck
[19] show that people with a fixed mindset tend to focus on
proving their intelligence: they want to look smart. Because
of this, they are reluctant to take on tasks that are challenging or hard,
because there is a chance of failure, which conflicts with their
goal of looking smart. In their belief, failing at something is
an indication that you are not smart enough or lack the necessary
capabilities, and that you will not be able to complete a certain
challenge; you had better give up and try something easier.
Furthermore, people with a fixed mindset tend to see effort as
something negative: if you have talent or are gifted, it isn't necessary to
struggle with a task. Viewed from such a perspective, effort
is for people who lack talent.
On the other hand, people with a growth mindset tend to
focus on learning. They are motivated to do new and com-
plicated tasks because it provides opportunities for learning.
Their goal is to learn and to develop themselves by work-
ing hard and by putting in a lot of effort. They see failure
as something that is necessary for learning. So after failing,
people with a growth mindset tend to work harder and try
out new strategies in order to complete a challenge or master
a skill [19,20,45]. A growth mindset is seen as a favourable
trait when it comes to exploring new learning domains and
developing new skills. Therefore, we focused on promoting
such a growth mindset.
Praise and criticism have an important influence on the
development of growth and fixed mindsets [23,28,45]. For
praise to have a positive impact it is important that it is per-
ceived as contingent, specific, sincere and credible [46,48].
Praise for high ability and personal traits is a common
response when someone has done a job well. Whether it is in the
classroom, when playing sport, or during artistic endeav-
ours, praise for ability is seen as a popular tool to stimulate
learners’ motivation [29,45]. However, focusing on primar-
ily praising high ability may have an undesired impact. It can
make children feel pressured to perform well in future situ-
ations, which stimulates a fixed mindset. An alternative for
praising ability is praising effort. Instead of reinforcing one's
goal to seem smart, effort-related praise focuses on the pro-
cess of learning or mastering a certain skill [47]. This form
of praise stimulates a growth mindset, since the emphasis is
on the process of learning instead of the end result [45].
In related work involving robots and mindset, Park et al.
[49] have shown that a peer-like robot can promote a growth
mindset in children. The robot and child took turns solving
puzzle tasks, during which the robot either exhibited neutral
behaviours or role model behaviours associated with a growth
mindset. Their robot used a multimodal behaviour reper-
toire consisting of speech, nonverbal expressions, and gaze.
Depending on the condition, the robot would use neutral
factual statements or mindset-related statements accompa-
nied by an appropriate body posture and facial expression
(e.g. engagement, interest, excitement, or frustration). After-
wards, children who had worked with the role model robot
exhibited a stronger growth mindset themselves. In our study,
in contrast with Park et al. [49], we were interested in investi-
gating the role of praise instead of role modelling behaviour.
4 Aims and objectives
This paper aims to gain deeper insights into how children
respond to effort-related praise while working on a learning
task. Offering praise to a learner can be seen as a social act
of the CAL system. We are interested in exploring ways in
which a social robot may be used to extend a traditional CAL
system to deliver such social acts in a natural and convincing
way. In situations where a child works together with a peer
learner robot, we consider praise to be an appropriate and
positive way of supporting learning, more so than criticism
(even if constructive). Therefore, we chose to primarily focus
on promoting attitudes associated with a growth mindset as
opposed to dissuading attitudes related to a fixed mindset.
This leads to the following research question: what are
the effects on the mindset of children when extending an
autonomous CAL system with a social robot to deliver effort-
related praise?
We expect that the social act of giving effort-related praise
has a greater potential impact when it is delivered by a robot
as opposed to a traditional CAL system, since by its very
nature a robot can be more convincingly presented as a social
entity. A robot has an elaborate repertoire of social cues to
engage with the learner, such as focus of attention, facial
expressions, and deictic gaze. Therefore the hypothesis in
this study was: participants who work with a CAL system that
delivers praise through a social robot will display a stronger
growth in their mindset than participants who work with a
CAL system that delivers such praise without a social robot.
Additionally, children from a different school worked with
a baseline version of the interactive CAL system, without
robot, which offered no such effort-related praise. Although
for these children we found no significant effects on their
mindset, for the sake of completeness we also report these
results of the baseline CAL system as part of this paper.
Fig. 2 Setup of the balance scale task in the (a) No-Robot and (b) Robot conditions, with the following components:
a balance scale learning materials with embedded sensors; b tablet; c headphones; d Microsoft Kinect and wide-angle webcam; e RFID
card scanner; and f Zeno R25 robot
5 Design of system and multimodal
interactions
The CAL system was based on a technical architecture that
was developed as part of the EASEL project [52]. This base
architecture was adapted and further extended to support
fully autonomous, unsupervised, long-term interactions in
the wild. An early prototype of this system using interactive
embodied learning materials has been previously described
in [14]. The system consisted of the following main compo-
nents: embodied learning materials, sensors, headphones, a
tablet, a robot, and a control computer.
The embodied learning materials were based on inquiry
learning instruments originally developed by Inhelder and
Piaget [26] to support the exploration of several phenomena
from the physics domain. Firstly, in the balance scale task
children explored the ‘moment of force’ by placing differ-
ently weighted pots on a scale at various distances to the
left and to the right of the central pivot point. The balance
task is shown in Fig. 2. Secondly, in the ramp task children
explored ‘potential energy’ and ‘rolling resistance’ by racing
various balls of different materials, sizes, and weights
from two adjustable sloped ramps. The ramp task is shown
in Fig. 3.

Fig. 3 Setup of the ramp task in the (a) No-Robot and (b) Robot conditions, with the following components: a ramp
learning materials with embedded sensors; b tablet; c headphones;
d Microsoft Kinect and wide-angle webcam; e RFID card scanner; and
f Zeno R25 robot
Several different types of sensors were used to enable
fully autonomous unsupervised interactions. Firstly, embed-
ded sensors in the learning tasks recorded the state of the
materials and the child’s actions. The balance scale task used
a potentiometer in the central pivot to measure the tilt of the
balance, different resistors in each pot to measure their
placement at the various locations, and reed switches
to detect whether the stabilising blocks were present under
each side of the balance. The ramp task used potentiometers
to measure the angle of each slope, it had a physical button
for releasing the balls, and it used a photoresistor to mea-
sure when a ball had reached the end of the track. Secondly,
an external Microsoft Kinect for Windows v2 depth-camera
(https://support.xbox.com/en-US/xbox-on-windows/accessories/kinect-for-windows-v2-info) was used with
the SceneAnalyser software [65] to detect the presence of a
child and the location of their face. Finally, an RFID scan-
ner was used to recognise individual children; we handed out
unique RFID badges that children scanned at the start of each
session.
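To make this concrete, the following minimal sketch (in Python) illustrates how raw readings from such embedded sensors could be turned into a symbolic task state. The thresholds, the resistance-to-pot mapping, and the function names are hypothetical illustrations, not values or code from the actual system.

# Illustrative only: hypothetical thresholds and mappings, not those of the actual setup.
POT_BY_RESISTANCE = {220: "yellow", 470: "red", 1000: "blue"}  # ohms -> pot colour

def interpret_balance(tilt_raw, peg_resistances, reed_left, reed_right):
    """Turn raw balance-scale readings into a symbolic state."""
    # Potentiometer in the central pivot: map the raw reading to a tilt label.
    if tilt_raw < 400:
        tilt = "left_down"
    elif tilt_raw > 600:
        tilt = "right_down"
    else:
        tilt = "level"
    # Each peg reports the resistance of the pot placed on it (or None if empty).
    pots = {peg: POT_BY_RESISTANCE.get(r) for peg, r in peg_resistances.items() if r is not None}
    # Reed switches detect whether the stabilising blocks are still in place.
    stabilised = reed_left and reed_right
    return {"tilt": tilt, "pots": pots, "stabilised": stabilised}

print(interpret_balance(720, {1: 220, 6: 470}, reed_left=False, reed_right=False))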
All control software ran on a small desktop computer,
which was hidden out of sight of the children. This computer
was responsible for collecting and interpreting the sensor
values and generating responses from the system. An early
version of Flipper 2.0 [61] was used to define the CAL
system’s dialogue models, and to manage the flow of the
interaction. Behaviours of the CAL system were specified in
Behaviour Markup Language (BML) [30], which were executed
by ASAPRealizer [53]. ASAPRealizer is an engine for
choreographing synchronised multimodal behaviours across
devices, modalities, and platforms. It ensures that verbal
utterances, tablet interface updates, and robot movements
remain nicely synchronised throughout the interaction. As
shown in Fig. 7, the core BML specification was extended
with robot-specific and tablet-specific behaviours that could
not be expressed in standard BML: gazing towards an (x, y)
location and showing text, images, and buttons on the screen.
The system supported several forms of multimodal inputs
and outputs. A Samsung Galaxy Tab2 tablet was used as the
primary means of direct input to the CAL system: children
could input their responses to questions by pressing buttons
on the tablet interface. The various sensors in the task pro-
vided another means of input to the system, enabling it to
respond dynamically while children worked with the task.
Task instructions were delivered verbally and displayed on
the tablet as written text and illustrations (shown in Fig. 4).
Since the study took place in a real classroom during school
time, teachers had requested that interacting with the system
should not interrupt or disturb regular lessons. Such requests
are common when doing a study in class [32]. Therefore,
all verbal utterances produced by the system and robot were
played through headphones. The system used the Fluency
text-to-speech engine (https://www.fluency.nl/tts/) to generate Dutch speech.

Robokind's Zeno R25 robot (https://www.robokind.com/) was used in the Robot condition
to deliver the CAL system's verbal feedback and praise.
This small humanoid robot has a face that can express basic
emotions. Furthermore, it has several degrees of freedom in
its eyes, neck, and torso with which it can perform approxi-
mate gaze shifts. Unfortunately, the servos in the robot’s arms
and legs are relatively noisy, which disqualified them from
being used in the classroom.
The robot’s multimodal behaviours in this study were
informed by design guidelines emerging from an extensive
contextual analysis of inquiry learning tasks with our target
user group [16].

Fig. 4 Assignment instructions displayed on the tablet during the preparation phase for the balance and ramp tasks. Each assignment had both a textual and a visual description. Children could press the bottom left button to have the assignment text read out loud. By pressing the bottom right button, children continued to the next assignment phase: prediction. (a) Balance scale assignment example. Translated text: “First, put the blocks back under the balance. Then, place a yellow pot on pin 1 and a red pot on pin 6. What do you think will happen with the balance?” (b) Ramp assignment example. Translated text: “Make sure that both ramps are in the low position and are empty. Place a ping-pong ball on ramp 1 and a rubber ball on ramp 2.”

Fig. 5 Zeno's facial expressions: (a) Smiling; (b) Amazed

The robot used expressions for smiling
when addressing the child and delivering praise, and amaze-
ment when the child performed the experimentation step of
the task (for example, see Fig. 5). Furthermore, it gazed
dynamically towards the user, the tablet, and relevant areas
of the task depending on the child’s actions and progress (for
example, see Fig. 6). The robot used lip synchronisation to
match the verbal utterances produced by the text-to-speech
engine. Minimal idle behaviour was added through periodic
eye blinking.
Fig. 6 Zeno dynamically shifts his gaze at various moments during the interaction. (a) Gazing towards the user. The location of the user was detected by the SceneAnalyzer software (Zaraki et al. 2014). (b) Gazing towards a part of the task. The gaze direction and the appropriate moment for gazing were determined using embedded sensors in the task (here, the balance had just tilted to one side).

An example of a multimodal BML behaviour script is
shown in Fig. 7. The timing of speech, gaze, facial expressions,
and content displayed on the tablet are synchronised
to produce a coherent behaviour sequence. Placeholder variables,
like the assignment details and the location of the
child’s face, are filled at runtime based on the available sen-
sor data and the child’s progression through the tasks. In this
example the tablet displays an assignment including text and
an image. When the content appears on the tablet, the robot
smiles, shifts his gaze from the child towards the tablet and
starts reading the assignment text out loud. When finished
reading, the robot shifts his gaze back to the child.
<bml id="read_assignment"
xmlns="http://www.bml-initiative.org/bml/bml-1.0"
xmlns:sze="http://hmi.ewi.utwente.nl/zenoengine"
xmlns:mwe="http://hmi.ewi.utwente.nl/middlewareengine">
<!-- Show an assignment on the tablet -->
<mwe:sendJsonMessage id="show_assignment" start="1">
{ "showAssignment" : {
"id" : "$aID$",
"text" : "$assignmentText$",
"imageFile" : "$imageFile$",
"buttonText" : "$buttonText$"}}
</mwe:sendJsonMessage>
<!-- Read the assignment out loud -->
<speech id="speech" start="show_assignment:start+1">
<text>$speechText$</text>
</speech>
<!-- Look towards various locations while reading -->
<sze:lookAt id="look_at_child"
start="0"
x="$childLocationX$"
y="$childLocationY$"/>
<sze:lookAt id="look_at_tablet"
start="show_assignment:start"
x="$tabletLocationX$"
y="$tabletLocationY$"/>
<sze:lookAt id="look_back_at_child"
start="speech:end"
x="$childLocationX$"
y="$childLocationY$"/>
<!-- Show a happy facial expression -->
<faceLexeme id="happy"
start="speech:start"
lexeme="happy" amount="0.75"/>
</bml>
Fig. 7 A BML script illustrating multimodal behaviours of the system
when reading an assignment out loud. The figure's callouts highlight the
placeholders that are filled at runtime, the custom BML extensions for
supporting the robot and tablet, and the fine-grained synchronisation
between behaviours
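The paper does not detail how Flipper fills these placeholders before a script is executed; as a rough illustration, the substitution could be a simple templating step applied before the BML is sent to ASAPRealizer. The Python helper below is a hypothetical sketch of such a step, not the project's actual code.

import re

# A fragment of the script in Fig. 7, used here as a template.
read_assignment_template = """<speech id="speech" start="show_assignment:start+1">
  <text>$speechText$</text>
</speech>"""

def fill_bml(template, values):
    """Replace $name$ placeholders in a BML template with runtime values."""
    def substitute(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"No runtime value for placeholder ${key}$")
        return str(values[key])
    return re.sub(r"\$(\w+)\$", substitute, template)

print(fill_bml(read_assignment_template, {"speechText": "Place a yellow pot on pin 1."}))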
6 Methodology
6.1 Study design
The main study was a between-group design with one inde-
pendent variable: the presence or absence of a social robot
to deliver the system’s effort-related praise. The dependent
variable was the child’s mindset, measured through a pretest
and posttest questionnaire.
6.2 Conditions
We manipulated whether the CAL system’s praise was deliv-
ered by a robot or not. This resulted in the following two
conditions:
– No-Robot: a traditional CAL system that offered effort-related praise
– Robot: a CAL system extended with a robot to deliver the effort-related praise
Over the course of several months children worked indi-
vidually on two consecutive learning tasks (see Figs. 2,3)
that consisted of assignments of increasing difficulty. In both
conditions, the CAL system offered identical task-related
instructions, help, feedback, and effort-related praise.
6.3 Participants
Both conditions were run in parallel, each in a classroom at
one of two locations of the same Montessori school. Both
locations were situated in a similar suburb of the same
city. Participants were 44 children between 6 and 10 years old,
as described in Table 1. In parallel, the baseline CAL system
without robot and without praise was tested with 17 children
in a classroom of a Freinet school of the same city.
Ethical approval was obtained from the EEMCS ethical
board of the University of Twente and parents signed an
informed consent letter prior to the start of the study.
6.4 Manipulations: delivery of praise
Both conditions used the same state-of-the-art CAL system
which offered verbal task-related instructions, task-related
help, task-related feedback, and effort-related praise using a
computer-generated voice. To avoid disturbing or distracting other
children in the classroom, participants always used head-
phones to listen to the system’s verbal utterances.
In the No-Robot condition, children worked with the CAL
system without a robot, and the praise was delivered only
through the headphones. In the Robot condition, the CAL
system was extended with a social robot to deliver the same
effort-related praise. The robot would gaze towards the user
and show a smiling facial expression while verbally deliv-
ering the praise. Both the No-Robot and Robot conditions
offered identical praise using the same computer-generated
voice, played through the same headphones.
The system offered such praise at several moments during
the assignments. Firstly, after a child completed the exper-
imentation phase of an assignment and had entered their
observation the system would offer a compliment on their
progress, such as “I think you put in a lot of effort!” or “I see
you tried your best!”.
Secondly, after the conclusion phase, the system would
ask them how they felt about whether their hypothesis was
correct or incorrect. For example, if their hypothesis was
incorrect, they could select either “I think I am not good at
this” or “I think I can learn this” on the tablet. After selecting
the former, the system would respond with “We can learn
from a mistake!”. After selecting the latter, the system would
respond with “I think so too!”.
Finally, after each completed assignment the system asked
the child to rate how difficult they found the task, after which
they could choose the difficulty of the next task. At this point,
the system would generate appropriate feedback and praise
to promote a growth mindset. The praise that was given by the
system depended on three aspects: (1) whether or not the child
gave a correct hypothesis; (2) the self-reported assignment
difficulty; and (3) the subsequently selected level of difficulty.
Based on these aspects the system labelled the child's attitude
at that point in time as either performance-driven or mastery-driven.

Table 1 Participant demographics

                      Age Mean (SD)   Nr. of participants, Total (girls/boys)
No-Robot condition    7.1 (0.83)      24 (10/14)
Robot condition       7 (0.82)        20 (8/12)
Baseline CAL system   8.8 (0.73)      17 (12/5)
On the one hand, a performance-driven attitude is char-
acterised by wanting to demonstrate competence by avoid-
ing mistakes, thus often shunning difficult or unknown
challenges. Individuals with a fixed mindset often exhibit
performance-driven attitudes. When a child exhibits such
performance-driven behaviours the system offered feedback
to help promote a growth mindset. For example, when a
child made a correct prediction, indicated that they found
the assignment easy, yet still chose an easier next assign-
ment, this was labelled as performance-driven. In this case,
the system would highlight the importance of seeking an ade-
quate challenge by giving the feedback “Gosh, I would have
expected you to choose a more difficult task, because then
you can learn more” or “It is fine if you want to practice more,
but if you want to learn something new we can try a more
difficult task”.
On the other hand, learners with a mastery-driven atti-
tude focus on improving their skills and learning process
through practice, thus often embracing more difficult chal-
lenges. Such attitudes are often associated with individuals
who lean towards a growth mindset. When a child exhibits
such mastery-driven behaviours the system offered feedback
to further strengthen their growth mindset. For example,
when a child made an incorrect prediction, indicated that the
difficulty was okay, and chose the same difficulty for their
next assignment, the system labelled this as a mastery-driven
attitude. In this case, the system offered encouragement to
emphasize the importance of practicing: “We didn’t get it
yet this time, let’s practice some more and we can learn!”.
Similarly, for example, if a child made a correct prediction,
indicated that the assignment was easy, and chose a harder
next exercise, the system would praise the child to empha-
size the importance of seeking a challenge: “Great, you are
choosing a challenge, I like that!” or “Great, during a more
difficult task we might learn something new!”
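The paper describes this labelling by example rather than as a complete decision table; the Python sketch below encodes only the example combinations mentioned above, with hypothetical names and a hypothetical fallback, to illustrate how hypothesis correctness, self-reported difficulty, and the chosen next difficulty could be combined into a label and a matching praise utterance.

def label_attitude(hypothesis_correct, reported_difficulty, next_choice):
    """Label the child's attitude for one assignment.

    reported_difficulty: one of "easy", "okay", "hard"
    next_choice:         one of "easier", "same", "harder"
    Only the example combinations from the paper are encoded; everything else
    falls back to "mastery" here purely for illustration.
    """
    if hypothesis_correct and reported_difficulty == "easy" and next_choice == "easier":
        return "performance"  # avoiding a challenge despite success
    if not hypothesis_correct and reported_difficulty == "okay" and next_choice == "same":
        return "mastery"      # persisting after a mistake
    if hypothesis_correct and reported_difficulty == "easy" and next_choice == "harder":
        return "mastery"      # actively seeking a challenge
    return "mastery"          # hypothetical fallback

PRAISE = {
    "performance": "It is fine if you want to practice more, but if you want to "
                   "learn something new we can try a more difficult task.",
    "mastery": "Great, you are choosing a challenge, I like that!",
}

attitude = label_attitude(True, "easy", "harder")
print(PRAISE[attitude])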
6.5 Measures
The main research question in this study focused on affecting
the mindset of children. In particular, we were interested in
promoting a growth mindset through effort-related praise.
Similar to Park et al. [49], pretest and posttest questionnaires
were used to measure a change in the children’s mindset as a
result of the intervention. The 18-item questionnaire used in
this study was inspired by the questionnaire designed by De
Castella and Byrne [17] who revised the implicit theories of
intelligence scale designed by Dweck [18]. Colleagues from
the ELAN group of the faculty of Behavioural, Management
and Social Sciences of the University of Twente used a part of
the questionnaire from De Castella and Byrne [17] to design
a version that is suitable for young children.
In contrast with the questionnaire presented by De Castella
and Byrne [17], the concept “intelligence” was replaced by
“smart”, as pilot tests showed that this was better understood
by very young children. The following (translated) defini-
tion of smart was given to the children: “Smart means that
you are well able to consider, think up, and thresh out/figure
out.” The questionnaire consisted of items that fall under two
main constructs: items measuring a growth mindset and items
measuring a fixed mindset. Furthermore, additional questions
regarding effort were added to the questionnaire used in this
study, since beliefs about effort are related to mindset.
Children could provide their answers according to a 4-
point Likert scale with the following options: strongly agree,
somewhat agree, somewhat disagree, and strongly disagree.
However, Likert scales are often difficult for very young chil-
dren because they tend to think more dichotomously and have
a tendency to endorse responses at the extreme end of the pre-
sented scales. This can especially be the case if the statements
are related to more ‘fuzzy’ subjects such as feelings, beliefs
or attitudes [44]. Park et al. [49] addressed this by offer-
ing children sets of bipolar statements from which to choose.
However, we were interested in capturing nuanced responses
of children that were not necessarily on either extreme end
of the spectrum.
In several pilot test iterations with our target user group,
we explored different techniques for administering the ques-
tionnaire. Participants in these pilot tests were from several
primary schools visiting the university during school trips,
for whom signed parental consent was available. Firstly, we
presented the questionnaire in the traditional fashion as a
self-administered test, while giving children the option to
ask the experimenter for help or additional explanation. Only
some of the older children were able to complete this ver-
sion of the questionnaire without any issues, as the younger
children had difficulties reading and understanding the ques-
tions. Secondly, we had the experimenter read each statement
of the questionnaire out loud, after which children were
asked whether they strongly agreed, somewhat agreed, some-
what disagreed, or strongly disagreed with the statement. In
this case, some children seemed to be unable to distinguish
between the answer options or were hesitant to commit to
a choice of answer. Finally, the technique which was best
understood by the target group was to first have the experi-
menter read each statement aloud and then ask: “do you agree
or disagree?” We observed that children could quite naturally
answer this dichotomous question. After a child made an ini-
tial choice, the experimenter would subsequently ask: “do
you strongly (dis)agree or somewhat (dis)agree?”
The resulting mindset questionnaire was implemented as
an interactive web form and used as a pretest and posttest.
The full questionnaire is included in Appendix A.
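As an illustration of this two-step procedure, the sketch below shows how the dichotomous choice and the follow-up intensity question could be combined into the 4-point score used in the analysis (1 = strongly disagree, 4 = strongly agree). The function, the prompts, and the example statement are our own illustrative reconstruction, not part of the published instrument.

def administer_item(statement, ask):
    """Two-step oral administration of one questionnaire item.

    `ask` is any function that poses a question and returns the child's answer
    as a string ("agree"/"disagree", then "strongly"/"somewhat").
    """
    agrees = ask(f"{statement} Do you agree or disagree?") == "agree"
    word = "agree" if agrees else "disagree"
    strongly = ask(f"Do you strongly {word} or somewhat {word}?") == "strongly"
    if agrees:
        return 4 if strongly else 3
    return 1 if strongly else 2

# Hypothetical usage with scripted answers instead of a live interview.
answers = iter(["agree", "somewhat"])
score = administer_item("You can always change how smart you are.",
                        lambda question: next(answers))
print(score)  # -> 3 (somewhat agree)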
6.6 Procedures
Due to the long-term nature of this study, we worked together
with the teachers and school management to fit our activities
in with their regular school schedule as best as possible. In
some cases, this meant that activities at individual schools
were moved forward or delayed and that schools followed a
slightly different timeline throughout the study. Additionally,
the total duration of the experiment varied per school due to
holidays and other events. Table 2 shows an overview of the
stages of this study for each school, highlighting the timing
of questionnaires and tasks.
The study took place between December 2016 and May
2017. Around two weeks before the start of the study two
experimenters would do the mindset pretest with the chil-
dren. The mindset posttest was done in the week after the
second task ended. The experimenters would call each child
one by one to a separate room in the school. The experimenter
then read aloud the statements of the mindset questionnaire
and noted down the answers of the children. It took chil-
dren approximately 10 minutes to complete the pretest and
posttest. Following the pretest, the first learning task was
placed in the classrooms for approximately 6–7 weeks. Dur-
ing this period children initiated a total of 260 sessions and
completed a total of 756 assignments. Then, the second task
was placed in the classrooms for approximately 8–10 weeks,
during which a total of 195 sessions were initiated and chil-
dren completed a total of 550 assignments. In all three classes
children progressed through the various levels of difficulty
without many issues, with the majority of children achiev-
ing the highest level in each of the two tasks at some point.
More detailed procedures for each task are discussed in the
following sections.
6.6.1 Task 1: Balance
The balance task (see Fig. 2) was placed in the classroom
after school hours. The next school day an experimenter gave
a short explanation to all children, introducing the various
components of the system: the tablet, headphones, the learn-
ing materials, and the robot. Children were instructed that
the voice of the system/robot was speaking through the head-
phones. The experimenter then handed out the personal RFID badges and
showed children how to scan their badge to initiate the inter-
action. The children were not instructed how often or how
long they should interact with the learning task, and there
was no set schedule. Instead, children were entirely free to
work with the system on their own initiative, as long as it fit
within the lesson schedule of the teacher.
To initiate a learning session the child would put on the
headphones and scan their RFID tag. The system would then
greet the child by speaking through the headphones. If it
was the very first interaction, the system would greet the
child by saying: “Hi [NAME], nice to meet you! Shall we
play together?” If the child had already interacted with the
system before the system would say: “Hi [NAME], it’s nice
to see you again. How did it go last time? Do you think it was
hard, easy, or was it okay?” [CHILD SELECTS ANSWER]
“All right, last time we finished assignment [DIFFICULTY
LEVEL] let’s move on from here.” After every task, the child
would choose whether they wanted to do another task and
whether the next task should be easier, harder, or equally
difficult.
If the child indicated that they did not want to play any-
more, the CAL system said goodbye and logged out the
current user: “Goodbye [NAME], it was nice playing with
you! See you next time!” If the child indicated that they
wanted to continue, the system started a new task according
to the chosen difficulty level. A child could complete a maxi-
mum of four assignments during each interaction. After four
assignments the system said goodbye to the child and logged
them out automatically. It took children approximately 10
minutes to complete four assignments.
6.6.2 Task 2: Ramp
Similar to the first task, the ramp (see Fig. 3) was placed
in the classroom after school hours. The next school day, a
researcher gave an explanation of the new task. Since at this
point children were familiar with the tablet, headphones, and
robot, the explanation focused on introducing the new task.
In contrast with the highly predictable deterministic nature of
the balance assignments, the ramp task would sometimes give
unpredictable results: balls would occasionally bounce off
the sides of the ramp while rolling, which would cause them
to slow down. Especially when racing two otherwise identical
balls, this occasionally resulted in an unexpected difference
in their finishing times. Children were therefore reminded
and encouraged that they could repeat the same race as many
times as they wanted, in order to collect additional evidence
to confirm or refute their initial observations.

Table 2 Timeline for conducting the various stages of this study in each of the three participating schools

                                 School location 1     School location 2     School location 3
                                 No-Robot condition    Robot condition       Baseline CAL
Pretest mindset questionnaire    16/12/2016            15/12/2016            19/1/2017
Task 1—Balance                   9/1/2017–24/2/2017    10/1/2017–24/2/2017   1/2/2017–23/3/2017
  Duration                       30 schooldays         29 schooldays         32 schooldays
Task 2—Ramp                      20/3/2017–24/5/2017   20/3/2017–23/5/2017   18/4/2017–9/6/2017
  Duration                       36 schooldays         35 schooldays         27 schooldays
Posttest mindset questionnaire   30/5/2017             29/5/2017             12/6/2017

The duration mentioned for each task excludes weekends and holidays. In the No-Robot condition children worked with a CAL system that offered praise. In the Robot condition, this praise was delivered by the robot. Additionally, we tested a Baseline CAL system without a robot and without praise.
Children again used their RFID badge to start sessions
with the system on their own initiative. The procedure, the
system utterances, and praise mechanism were the same as
in the first task. Depending on the difficulty level and how
often children would repeat the same experiment, they spent
between 5 and 15 minutes on completing four assignments.
7 Analysis
In the scope of this paper we were interested in the effects
on the mindset of children when extending an autonomous
CAL system with a social robot to deliver the system’s
effort-related praise. A full analysis of other qualitative and
quantitative results from this study will therefore be reported
elsewhere; here we focus on the analysis of the mindset ques-
tionnaires.
The mindset questionnaire was used to gather pretest
and posttest scores for participating children. Answers were
marked according to a 4-point Likert scale with the follow-
ing options: strongly disagree, somewhat disagree, somewhat
agree, and strongly agree. Answers were then converted to a
numerical representation by assigning the respective scores
1, 2, 3 and 4, such that a low score corresponded with dis-
agreement and a high score corresponded with agreement.
To investigate the presence of underlying constructs in the
questionnaire items an Exploratory Factor Analysis (EFA)
was performed using all pretest questionnaire scores from
the three participating classes. Bartlett’s test of sphericity
shows that the data is appropriate for EFA, χ²(153) = 291,
p < 0.001. We used an oblimin factor rotation and a factor
loadings cutoff value of 0.4. Items with a loading
below 0.4 on all factors were dropped from further
analysis. Following this approach, three mutually exclusive
constructs emerged from the data:
Growth The ‘growth’ construct consists of items 1, 4,
8, 13 and 14 which explain 41% of the variance, with
loadings ranging from 0.46 to 0.72. These items are
formulated in such a way that they align with a mind-
set oriented towards growth. They address that one can
change how smart they are, and that one can learn and do
better by working hard on difficult assignments. When a
participant shows high agreement with these items this
corresponds with a growth mindset. A Cronbach’s Alpha
of 0.73 shows an acceptable internal consistency for this
construct.
Fixed The ‘fixed’ construct consists of items 2, 3, 5, 7
and 9 which explain 31% of the variance, with loadings
ranging from 0.41 to 0.77. These items highlight one’s
inability to influence how smart they are. The items cite
innate causes for this lack of influence. This is an attitude
that is characteristic for individuals with a fixed mindset.
Accordingly, when a participant shows a high agreement
with these items this corresponds with a fixed mindset. A
Cronbach’s Alpha of 0.68 shows a questionable internal
consistency for this construct.
Effort Finally, the ‘effort’ construct consists of items 16
and 18 which explain the remaining 28% of the vari-
ance, with respective loadings of 0.99 and 0.43. These
two items describe a preference for spending less effort,
either by working less hard on difficult assignments or by
choosing easier assignments, both of which are telling
of a fixed mindset. When a participant shows a high
agreement with these items this corresponds with a fixed
mindset. A Cronbach’s Alpha of 0.59 shows a poor inter-
nal consistency for this construct. Therefore, this has
been dropped from further analysis. Future iterations of
this questionnaire should expand the number of relevant
items, and may focus on further validation of the con-
struct as a whole.
Only participants who completed both the pretest and
posttest were included in the data set. A score was computed
for each construct by taking the average of the individual
items belonging to that construct, such that a low average
score corresponds to a low agreement with the statements in
that construct and a high score corresponds to a high agree-
ment.
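For reference, an analysis along these lines can be reproduced with standard tooling; the snippet below is a sketch that assumes the Python pandas and factor_analyzer packages and a hypothetical CSV of pretest item scores, and it is not the authors' original analysis code.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity

# One row per child, one column per questionnaire item (scores 1-4),
# e.g. columns "item01" ... "item18"; the file name is hypothetical.
pretest = pd.read_csv("pretest_items.csv")

# Bartlett's test of sphericity: is the correlation matrix suitable for EFA?
chi_square, p_value = calculate_bartlett_sphericity(pretest)

# Exploratory factor analysis with an oblique (oblimin) rotation.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(pretest)
loadings = pd.DataFrame(efa.loadings_, index=pretest.columns)

# Keep items that load at least 0.4 on some factor; drop the rest.
kept_items = loadings[(loadings.abs() >= 0.4).any(axis=1)]

# Construct scores: the mean of the items assigned to each construct
# (item numbers follow the constructs reported above).
growth_items = ["item01", "item04", "item08", "item13", "item14"]
fixed_items = ["item02", "item03", "item05", "item07", "item09"]
growth_score = pretest[growth_items].mean(axis=1)
fixed_score = pretest[fixed_items].mean(axis=1)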
8 Results
A total of 17 children worked with the baseline CAL system,
which we tested in a separate school in parallel with the main
study. For these children we found no significant differences
between their pretest (M = 3.5, SD = 0.4) and posttest
(M = 3.4, SD = 0.5) scores.
Regarding our main study, a total of 24 and 20 children
completed both the pretest and posttest in the No-Robot and
Robot conditions, respectively. Pretest and posttest results
for the growth and fixed constructs are shown in Fig. 8. We
found no significant differences between conditions for the
fixed construct. For the growth construct, a repeated measures
ANOVA showed no significant interaction effects, but did
show significant main effects on the between-subjects vari-
able (the source of the praise) (Repeated measures ANOVA,
F(1, 42) = 7.23, p = 0.01) and the within-subjects variable
(the pretest versus posttest scores) (Repeated measures
ANOVA, F(1, 42) = 4.7, p = 0.036). These results show
that there was a difference between the conditions and that
during the course of the study the feedback and praise led to
an overall increase in growth mindset. However, these results
do not necessarily show that this increase was mediated by
the robot.
To further investigate the growth construct main effects, a
post hoc analysis was performed with Bonferroni corrected
pairwise tests (adjusted Alpha level = 0.0125). Between
subjects, we found no significant difference on the pretest
scores for the No-Robot (M = 3.2, SD = 0.7) and Robot
(M = 3.4, SD = 0.5) conditions. However, we did find
a significant difference on the posttest scores between the
No-Robot (M = 3.3, SD = 0.5) and Robot (M = 3.8,
SD = 0.2) conditions (Wilcoxon rank sum test, U = 376,
p = 0.0011). Within subjects, we found no significant differences
in the No-Robot condition between the pretest
(M = 3.2, SD = 0.7) and posttest (M = 3.3, SD = 0.5)
scores of the growth construct. However, in the Robot condition
we did find a significant increase from the pretest
(M = 3.4, SD = 0.5) to the posttest (M = 3.8, SD = 0.2)
scores (Wilcoxon signed rank test, Z = −3.19, p = 0.0014).
These results show that the group of children who received
praise from the robot saw a significant benefit to their growth
mindset.
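A comparable analysis can be run with off-the-shelf statistics libraries. The sketch below assumes the Python pandas, pingouin, and scipy packages and a hypothetical long-format file of growth-construct scores; it only illustrates the mixed design and the non-parametric post hoc tests and is not intended to reproduce the reported values.

import pandas as pd
import pingouin as pg
from scipy import stats

# One row per child per measurement, with columns "child",
# "condition" ("No-Robot"/"Robot"), "time" ("pre"/"post"), and "growth".
long = pd.read_csv("growth_scores_long.csv")

# Mixed repeated measures ANOVA: condition between subjects, time within.
anova = pg.mixed_anova(data=long, dv="growth", within="time",
                       subject="child", between="condition")

# Reshape to one row per child for the post hoc comparisons
# (Bonferroni-adjusted alpha of 0.05 / 4 = 0.0125).
wide = long.pivot_table(index=["child", "condition"], columns="time",
                        values="growth").reset_index()
robot = wide[wide.condition == "Robot"]
no_robot = wide[wide.condition == "No-Robot"]

# Between subjects at posttest: Wilcoxon rank sum (Mann-Whitney U) test.
u_stat, p_between = stats.mannwhitneyu(no_robot["post"], robot["post"])

# Within subjects in the Robot condition: Wilcoxon signed rank test.
w_stat, p_within = stats.wilcoxon(robot["pre"], robot["post"])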
9 Discussion
Although the results do not necessarily show that an increase
in growth mindset scores was mediated by the presence of the
robot, they do show that the CAL system as a whole has had a
positive effect. Furthermore, we found a significant improve-
ment in the growth mindset in the Robot condition, where no
significant difference was found in the No-Robot condition.
Based on these results we interpret that working with and
receiving praise from the robot had a positive effect on chil-
dren, although this study was unable to uncover exactly what
caused this.
Working with a robot may have all kinds of impact on
how children work with an interactive learning system. In our
previous work we looked at effects on explanation behaviour
[62] and in addition to the mindset results reported here, we
looked at how children’s interactions with the system devel-
oped as they progressed from initial novelty effects towards
sustained use [15]. Results from the study presented in this
paper and related work from Park et al. [49] suggest that
robots who offer implicit or explicit remarks, feedback, and
praise may promote a growth mindset. Promoting such a
growth mindset in young learners has been shown to improve
academic achievement later in life [5], which leads us to
speculate that having a robot in class can potentially have
a positive impact on learning in the long run. This line of
research suggests that robots can be promising tools for edu-
cation.
Other research has shown benefits of promoting a growth
mindset with older learners. First-year high school students,
for instance, have been found to significantly improve
their grades after participating in a single-session mindset
intervention (Yeager et al. [64]). Similar interventions may
continue to help underachieving students throughout high
school, increasing their performance and grades and potentially
lowering the chances of them dropping out (Paunesku et al.
[50]). Besides learners, teachers may also benefit from mind-
set interventions. Seaton [56] found that teachers not only
improved their own mindset, but could also more confidently
apply it in practice in their own teaching. It could be an inter-
esting line of future research to investigate how a robot’s
mindset support may adapt, grow, and evolve with the needs
of learners of all ages.
In pursuit of ecological validity we conducted the lon-
gitudinal study unsupervised and in the wild.

Fig. 8 Pretest and posttest results for the growth and fixed constructs of the mindset questionnaire (scores from 1 to 4 per construct, pretest vs. posttest, No-Robot and Robot conditions). Repeated measures ANOVA showed significant main effects in the growth construct on the source of the praise and on the pre and posttest

By doing so,
we have shown that it is feasible to conduct comparative
HRI studies in real classrooms, capturing real changes in
children’s learning process. As a consequence, however, we
identified two main limitations related to this study. Firstly,
since this study spanned a relatively long period of time we
were unable to follow identical procedures and timelines
in all participating schools, despite our best efforts. As a
consequence, special events such as holidays, sports days,
and school musicals took place at different moments in the
experiment timeline for the different schools. While we do
not expect this to have impacted their mindset directly, it
may have had an influence on the number of sessions chil-
dren initiated throughout the study. Secondly, although we
took care to select similar schools from a similar region,
the results of this study may have been impacted by differ-
ences in educational methods between schools, lesson plans
from individual teachers, or other external factors. That being
said, the schools’ curricula did not explicitly cover the topic
of mindset, and in discussions with the teachers, we found
no indications that would suggest major differences between
classes.
To prevent distractions in the classroom, teachers had
requested that the robot would not make too much noise.
Therefore we did not use the robot’s rich full-body behaviour
animation repertoire, resulting in a somewhat static expe-
rience. When talking with children after the study, some
were disappointed in the robot’s limited movements (e.g. “he
didn’t move his arms or legs” or “he can’t walk”). Addition-
ally, we played all of the robot’s speech through headphones
instead of a speaker, an approach that is not uncommon when
conducting research in classrooms (e.g. see [21,25,32]). We
tried to lessen the potential effects of disembodied speech by
using lip-synchronisation and by explaining beforehand that
the robot talked through the headphones. Although we do
not know how this may have affected the perceived embod-
iment, children did seem to consistently ascribe the voice to
the robot. They often mentioned that it was the robot who
spoke to them, gave instruction, and offered feedback. They
also speculated about aspects of the robot’s voice (e.g. “he
sounds like a boy/girl” or “his voice is like someone my age”).
Some stated explicitly that the robot talked to them through
the headphones. Nobody mentioned that they had found this
to be strange or unpleasant.
For this study we designed a tool for assessing the mind-
set of young children. The ‘effort’ construct, which emerged
from an exploratory factor analysis, showed low internal
consistency and was composed of only two items. It was
therefore dropped from further analysis in this study. A future
version of the questionnaire should focus on further expand-
ing and validating this construct.
Other mindset assessment tools for young children (e.g.
[49]) measure mindset as a singular bipolar dimension that
ranges between a fixed-oriented and growth-oriented mind-
set. However, we find indications in the data that mindset may
instead be considered as two separate unipolar dimensions
that range between a less-fixed to more-fixed mindset and
a less-growth to more-growth mindset. For example, a sin-
gle child may score high on growth mindset and at the same
time also score high on fixed mindset. Our questionnaire was
designed to capture such nuanced situations.
Although the mindset questionnaire has not been rigor-
ously validated as of yet, we have used it in several prior pilot
tests with target users involving a wider range of schools.
Results of these pilot tests were not inconsistent with teach-
ers’ expectations of those children. Therefore, we are fairly
confident that the tool is sufficiently sensitive to capture dif-
ferences between individuals with respect to their fixed and
growth mindsets. The relatively high scores on the growth
construct in the pretest may suggest that the sample from the
target user group may have been skewed towards individu-
als who already exhibit a strong growth mindset. Although
mindset is not an explicit part of the school curriculum, the
Montessori and Freinet teaching methods of the schools that
participated in this study typically have a focus on promoting
learning attitudes associated with a growth mindset, which
may explain why participating children scored so well on the
pretest. In retrospect, these participants may not have been an
accurate representation of the user group who potentially has
the most to gain from an intervention such as ours. It might
therefore be interesting to repeat this study with participants
who initially score lower on growth mindset.
10 Conclusion
This paper describes a longitudinal, in-the-wild, unsupervised
study in which children could interact at their own initiative
with a fully autonomous Computer Aided Learning (CAL)
system situated in their classroom. The system offered effort-
related praise while children worked on the learning task.
To measure changes in children’s mindset before and after
the intervention we constructed a questionnaire which was
administered as a pretest and posttest interview. The ques-
tionnaire consisted of three constructs related to mindset: (1)
growth-related items; (2) fixed-related items; and (3) effort-
related items.
We deployed two versions of the CAL system: a system
that delivered the praise through headphones only, and an
otherwise identical CAL system that was extended with a
social robot to deliver the praise. A total of 44 children inter-
acted with two consecutive learning tasks over the course of
approximately four months. Additionally, we tested a base-
line version of this CAL system with 17 children, over the
same time span, where there was no robot and where chil-
dren received no effort-related praise. Children who worked
with the latter version showed no significant change in their
mindset.
The main research question that guided this work was:
What are the effects on the mindset of children when extend-
ing an autonomous CAL system with a social robot to deliver
effort-related praise? Overall, results showed significant dif-
ferences on the growth construct between the No-Robot and
Robot conditions. The effort-related praise that was delivered
by the social robot had a positive effect on the children’s
growth mindset, whereas the same praise offered by the
otherwise identical regular CAL system did not result in a
significant effect.
The results from this paper offer interesting insights for social roboticists and educational psychologists working on real-world learning interventions. Firstly, we make
a methodological contribution to the field of educational HRI
by showing the feasibility of conducting comparative studies
in real world, longitudinal, unsupervised settings. Secondly,
this study makes an empirical contribution by showing the
potential benefits of using a robot to more effectively accom-
plish the social act of delivering supportive praise to promote
a growth mindset. In a previous study we saw similar results
where a CAL system extended with a social robot performed
better on the social act of eliciting longer and more detailed
verbal explanations [62]. The results from both of these studies lead us to speculate that the value of robots could extend to other social acts in learning. We suggest that future research explore additional instances of social acts in which social robots may play a key role.
Acknowledgements We thank our colleagues Tim Post and Juliëtte
Walma van der Molen for developing and sharing the mindset ques-
tionnaire and helping adapt it for children in our target age group, and
our colleague Betsy van Dijk for proofreading an earlier draft of the
paper. We would also like to thank the participating schools and teach-
ers for the pleasant collaboration throughout this study and we thank
the children for their enthusiasm and motivation during the activities.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of
interest.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing, adap-
tation, distribution and reproduction in any medium or format, as
long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons licence, and indi-
cate if changes were made. The images or other third party material
in this article are included in the article’s Creative Commons licence,
unless indicated otherwise in a credit line to the material. If material
is not included in the article’s Creative Commons licence and your
intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copy-
right holder. To view a copy of this licence, visit http://creativecomm
ons.org/licenses/by/4.0/.
A Mindset questionnaire
The full questionnaire that was used to measure the mindset
of participants during the pretest and posttest is listed below.
Items were answered on a 4-point Likert scale: strongly dis-
agree, somewhat disagree, somewhat agree, strongly agree.
An exploratory factor analysis revealed three mutually exclu-
sive factors present in the data:
Growth (Items 1, 4, 8, 13 and 14) The ‘growth’ construct consists of items formulated to align with a mindset oriented towards growth. These items express that one can change how smart one is, and that one can learn and do better by working hard on difficult assignments. High agreement with these items therefore corresponds to a growth mindset. A Cronbach’s Alpha of 0.73 indicates acceptable internal consistency for this construct.
Fixed (Items 2, 3, 5, 7 and 9) The ‘fixed’ construct consists of items that highlight one’s inability to influence how smart one is, citing innate causes for this lack of influence. This attitude is characteristic of individuals with a fixed mindset; accordingly, high agreement with these items corresponds to a fixed mindset. A Cronbach’s Alpha of 0.68 indicates questionable internal consistency for this construct.
Effort (Items 16 and 18) Finally, the ‘effort’ construct consists of two items that reveal a preference for spending less effort, either by working less hard on difficult assignments or by choosing easier assignments, both of which are indicative of a fixed mindset. High agreement with these items therefore also corresponds to a fixed mindset. A Cronbach’s Alpha of 0.59 indicates poor internal consistency for this construct. Future iterations of this questionnaire should expand the number of relevant items and may focus on further validation of the construct as a whole.
1. I think I can change how smart I am,
2. I think I can’t change how smart I am, because I am born
like this,
3. I think I will always stay this smart, because I can’t
change that,
4. I think I can become smarter step-by-step,
5. I think I will always stay this smart, because that is fixed
in my brain,
6. I think I can change how smart I am by practising with
assignments of increasing difficulty,
7. I think it is fixed how smart I am and there is nothing I
can do to change that,
8. I think I can change how smart I am, by doing my best,
9. I think that my smartness is fixed in my brain, and I can’t
change it,
10. I think I can always change how smart I am,
11. I work harder on difficult assignments because then I
learn the most,
12. I feel dumb when I have to think really hard for an assign-
ment,
13. I do my best for difficult assignments, because then I
learn the most,
14. I do my best for difficult assignments, because then I will
be able to do them better,
15. I work much less hard for difficult assignments, because
I can’t do them anyway,
16. I work less hard for difficult assignments because I prefer
not to do much effort,
17. I prefer to choose more difficult assignments, because
then I can learn something new,
18. I prefer to choose easier assignments, because then I have
to spend less effort.
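As a brief illustration of how the internal consistencies reported above can be computed (this is our own hypothetical sketch in Python, not the analysis code used in the study), the snippet below calculates Cronbach’s Alpha per construct; it assumes one row per participant and columns named item1 … item18 coded from 1 (strongly disagree) to 4 (strongly agree), both of which are our own conventions:

# Illustrative sketch only: Cronbach's alpha per construct, using the
# standard formula alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
# Column names and response coding are assumptions, not from the paper.
import pandas as pd

CONSTRUCTS = {
    "growth": ["item1", "item4", "item8", "item13", "item14"],
    "fixed": ["item2", "item3", "item5", "item7", "item9"],
    "effort": ["item16", "item18"],
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Alpha over the given item columns (one row per participant).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def construct_alphas(responses: pd.DataFrame) -> dict:
    # For comparison, the constructs above report 0.73 (growth),
    # 0.68 (fixed) and 0.59 (effort).
    return {name: cronbach_alpha(responses[cols])
            for name, cols in CONSTRUCTS.items()}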
References
1. Baylor AL, Kim Y (2005) Simulating instructional roles through
pedagogical agents. Int J Artif Intell Educ 15(2):95–115. https://
doi.org/10.1007/BF02504991
2. Baylor AL, Shen E, Warren D, Freire P (2004) Supporting learners
with math anxiety: the impact of pedagogical agent emotional and
motivational support. In: Proceedings of the workshop on social and
emotional intelligence in learning environments at the international
conference on intelligent tutoring systems (ITS 2004). Springer,
Berlin, pp 6–12
3. Belpaeme T, Baxter PE, Read R, Wood R, Cuayáhuitl H, Kiefer
B, Racioppa S, Kruijff-Korbayová I, Athanasopoulos G, Enescu
V, Looije R, Neerincx M, Demiris Y, Ros-Espinoza R, Beck A,
Cañamero L, Hiolle A, Lewis M, Baroni I, Nalin M, Cosi P, Paci G,
Tesser F, Sommavilla G, Humbert R (2012) Multimodal child-robot
interaction: building social bonds. J Hum-Robot Interact 1(2):33–
53. https://doi.org/10.5898/JHRI.1.2.Belpaeme
4. Belpaeme T, Kennedy J, Ramachandran A, Scassellati B, Tanaka F
(2018) Social robots for education: a review. Sci Robot 3(21):1–9.
https://doi.org/10.1126/scirobotics.aat5954
5. Blackwell LS, Trzesniewski KH, Dweck CS (2007) Implicit the-
ories of intelligence predict achievement across an adolescent
transition: a longitudinal study and an intervention. Child Dev
78(1):246–263. https://doi.org/10.1111/j.1467-8624.2007.00995.
x
6. Burke LA, Williams JM (2012) The impact of a thinking skills
intervention on children’s concepts of intelligence. Think Skills
Creat 7(3):145–152. https://doi.org/10.1016/j.tsc.2012.01.001
7. Chandra S, Paradeda R, Yin H, Dillenbourg P, Prada R, Paiva A
(2018) Do children perceive whether a robotic peer is learning
or not? In: Proceedings of the 2018 international conference on
human-robot interaction (HRI 2018). ACM Press, New York, NY,
USA, pp 41–49. https://doi.org/10.1145/3171221.3171274
8. Chang CW, Lee JH, Chao PY, Wang CY, Chen G (2010) Exploring
the possibility of using humanoid robots as instructional tools for
teaching a second language in primary school. Educ Technol Soc
13(2):13–24
9. Charisi V, Davison DP, Wijnen FM, van der Meij J, Reidsma D,
Prescott T, van Joolingen W, Evers V (2015) Towards a child-robot
symbiotic co-development: a theoretical approach. In: Salem M,
Weiss A, Baxter P, Dautenhahn K (eds) Proceedings of the inter-
national symposium on new frontiers in human-robot interaction,
the society for the study of artificial intelligence and simulation of
behaviour (AISB)
10. Charisi V, Gomez E, Mier G, Merino L, Gomez R (2020) Child-
robot collaborative problem-solving and the importance of child’s
voluntary interaction: a developmental perspective. Front Robot AI
7:15. https://doi.org/10.3389/frobt.2020.00015
11. Chevalier P, Li JJ, Ainger E, Alcorn AM, Babovic S, Charisi V,
Petrovic S, Schadenberg BR, Pellicano E, Evers V (2017a) Dia-
logue design for a robot-based face-mirroring game to engage
autistic children with emotional expressions. In: Proceedings of
the 9th international conference on social robotics (ICSR 2017).
Springer, Cham, pp 546–555. https://doi.org/10.1007/978-3-319-
70022-9_54
12. Chevalier P, Martin JC, Isableu B, Bazile C, Tapus A (2017b)
Impact of sensory preferences of individuals with autism on the
recognition of emotions expressed by two robots, an avatar, and
a human. Auton Robots 41(3):613–635. https://doi.org/10.1007/
s10514-016-9575-z
13. Coninx A, Baxter P, Oleari E, Bellini S, Bierman B, Blanson
Henkemans O, Cañamero L, Cosi P, Enescu V, Ros Espinoza R,
Hiolle A, Humbert R, Kiefer B, Kruijff-Korbayová I, Looije R,
Mosconi M, Neerincx M, Paci G, Patsis G, Pozzi C, Sacchitelli F,
Sahli H, Sanna A, Sommavilla G, Tesser F, Demiris Y, Belpaeme
T (2015) Towards long-term social child-robot interaction: using
multi-activity switching to engage young users. J Hum-Robot Inter-
act 5(1):32. https://doi.org/10.5898/JHRI.5.1.Coninx
14. Davison DP, Charisi V, Wijnen FM, Papenmeier A, van der Meij
J, Reidsma D, Evers V (2016) Design challenges for long-term
interaction with a robot in a science classroom. In: Proceedings of
the workshop on long-term child-robot interaction at the interna-
tional conference on robot and human interactive communication
(RO-MAN 2016). IEEE Robotics and Automation Society
15. Davison DP, Wijnen FM, Charisi V, van der Meij J, Evers
V, Reidsma D (2020) Working with a social robot in school:
a long-term real-world unsupervised deployment. In: Proceed-
ings of the international conference on human-robot interaction
(HRI 2020), Cambridge, UK, pp 63–72. https://doi.org/10.1145/
3319502.3374803
16. Davison DP, Wijnen FM, van der Meij J, Reidsma D, Evers V
(2019) Designing a social robot to support children’s inquiry learn-
ing: a contextual analysis of children working together at school.
Int J Soc Robot 11(3):1–25. https://doi.org/10.1007/s12369-019-
00555-6
17. De Castella K, Byrne D (2015) My intelligence may be more
malleable than yours: the revised implicit theories of intelligence
(self-theory) scale is a better predictor of achievement, motivation,
and student disengagement. Eur J Psychol Educ 30(3):245–267.
https://doi.org/10.1007/s10212-015-0244-y
18. Dweck C (2000) Self-theories: their role in motivation, personality,
and development. Psychology Press, Florence
19. Dweck CS (2006) Mindset: the new psychology of success. Ran-
dom House, New York
20. Dweck C, Leggett E (1988) A social-cognitive approach to moti-
vation and personality. Psychol Rev 95(2):256–273
21. Gordon G, Spaulding S, Westlund JK, Lee JJ, Plummer L, Martinez
M, Das M, Breazeal C (2016) Affective personalization of a social
robot tutor for children’s second language skills. In: Proceedings
of the AAAI conference on artificial intelligence, pp 3951–3967
22. Gulz A (2005) Social enrichment by virtual characters—
differential benefits. J Comput Assist Learn 21(6):405–418. https://
doi.org/10.1111/j.1365-2729.2005.00147.x
23. Gunderson EA, Gripshover SJ, Romero C, Dweck CS, Goldin-
Meadow S, Levine SC (2013) Parent praise to 1- to 3-year-olds
predicts children’s motivational frameworks 5 years later. Child
Dev 84(5):1526–1541. https://doi.org/10.1111/cdev.12064
24. Hood D, Lemaignan S, Dillenbourg P (2015) When children teach
a robot to write: an autonomous teachable humanoid which uses
simulated handwriting. In: Proceedings of the international con-
ference for human-robot interaction (HRI 2015). ACM Press, New
York, pp 83–90. https://doi.org/10.1145/2696454.2696479
25. Hyun EJ, Kim SY, Jang S, Park S (2008) Comparative study of
effects of language instruction program using intelligence robot
and multimedia on linguistic ability of young children. In: Proceed-
ings of the international conference on robot and human interactive
communication (RO-MAN 2008), pp 187–192. https://doi.org/10.
1109/ROMAN.2008.4600664
26. Inhelder B, Piaget J (1958) The growth of logical thinking from
childhood to adolescence: an essay on the construction of formal
operational structures, vol 84. Basic Books, New York
27. Johnson WL, Rickel JW, Lester JC (2000) Animated pedagogical
agents: face-to-face interaction in interactive learning environ-
ments. Int J Artif Intell Educ 11:47–78
28. Kamins ML, Dweck CS (1999) Person versus process praise and
criticism: implications for contingent self-worth and coping. Dev
Psychol 35(3):835–847. https://doi.org/10.1037/0012-1649.35.3.
835
29. Koestner R, Zuckerman M, Koestner J (1987) Praise, involvement,
and intrinsic motivation. J Pers Soc Psychol 53(2):383–390. https://
doi.org/10.1037/0022-3514.53.2.383
30. Kopp S, Krenn B, Marsella S, Marshall AN, Pelachaud C, Pirker H,
Thórisson KR, Vilhjálmsson H (2006) Towards a common frame-
work for multimodal generation: the behavior markup language.
In: Proceedings of the international conference on intelligent vir-
tual agents (IVA 2006), vol 4133. Springer, Berlin, Heidelberg, pp
205–217. https://doi.org/10.1007/11821830
31. Kory JM, Breazeal C (2014) Storytelling with robots: learning
companions for preschool children’s language development. In:
Proceedings of the international conference on robot and human
interactive communication (RO-MAN 2014). IEEE, pp 643–648.
https://doi.org/10.1109/ROMAN.2014.6926325
32. Kory-Westlund JM, Gordon G, Spaulding S, Lee JJ, Plummer
L, Martinez M, Das M, Breazeal C (2016) Lessons from teach-
ers on performing HRI studies with young children in schools.
In: Proceedings of the international conference on human-robot
interaction (HRI 2016), pp 383–390. https://doi.org/10.1109/HRI.
2016.7451776
33. Kory-Westlund JM, Jeong S, Park HW, Ronfard S, Adhikari A,
Harris PL, DeSteno D, Breazeal CL (2017) Flat vs. expressive
storytelling: young children’s learning and retention of a social
robot’s narrative. Front Hum Neurosci 11:295. https://doi.org/10.
3389/fnhum.2017.00295
34. Kose-Bagci H, Ferrari E, Dautenhahn K, Syrdal DS, Nehaniv CL
(2009) Effects of embodiment and gestures on social interaction in
drumming games with a humanoid robot. Adv Robot 23(14):1951–
1996. https://doi.org/10.1163/016918609X12518783330360
35. Kulik CLC, Kulik JA (1991) Effectiveness of computer-
based instruction: an updated analysis. Comput Hum Behav
7(1–2):75–94. https://doi.org/10.1016/0747-5632(91)90030-5
36. Lazonder AW (2014) Inquiry learning. In: Spector JM, Merrill
MD, Elen J, Bishop MJ (eds) Handbook of research on educational
communications and technology. Springer, New York, pp 453–464.
https://doi.org/10.1007/978-1-4614-3185-5
37. Lee KM, Jung Y, Kim J, Kim SR (2006) Are physically embodied
social agents better than disembodied social agents?: the effects of
physical embodiment, tactile interaction, and people’s loneliness in
human-robot interaction. Int J Hum Comput Stud 64(10):962–973.
https://doi.org/10.1016/j.ijhcs.2006.05.002
38. Leite I, Martinho C, Paiva A (2013) Social robots for long-term
interaction: a survey. Int J Soc Robot 5(2):291–308. https://doi.
org/10.1007/s12369-013-0178-y
39. Lemaignan S, Jacq A, Hood D, Garcia F, Paiva A, Dillenbourg
P (2016) Learning by teaching a robot: the case of handwriting.
IEEE Robot Autom Maga 23(2):56–66. https://doi.org/10.1109/
MRA.2016.2546700
40. Leyzberg D, Spaulding S, Toneva M, Scassellati B (2012) The
physical presence of a robot tutor increases cognitive learning
gains. In: Proceedings of the annual meeting of the cognitive sci-
ence society, vol 34
41. Li J (2015) The benefit of being physically present: a survey of
experimental works comparing copresent robots, telepresent robots
and virtual agents. Int J Hum Comput Stud 77:23–37. https://doi.
org/10.1016/j.ijhcs.2015.01.001
42. Li J, Davison D, Alcorn A, Williams A, Dimitrijevic SB, Petro-
vic S, Chevalier P, Schadenberg B, Ainger E, Pellicano L, Evers
V (2020) Non-participatory user-centered design of accessible
teacher-teleoperated robot and tablets for minimally verbal autistic
children. In: Proceedings of the international conference on perva-
sive technologies related to assistive environments (PETRA 2020).
Association for Computing Machinery, Corfu, Greece, pp 51–59.
https://doi.org/10.1145/3389189.3393738
43. Mayer RE (2008) Learning and instruction, 2nd edn. Pearson,
NJ. https://doi.org/10.1016/0959-4752(95)90021-7
44. Mellor D, Moore KA (2014) The use of Likert scales with children.
J Pediatr Psychol 39(3):369–379. https://doi.org/10.1093/jpepsy/
jst079
45. Mueller CMC, Dweck CSC (1998) Praise for intelligence can
undermine children’s motivation and performance. J Pers Soc Psy-
chol 75(1):33–52. https://doi.org/10.1037/0022-3514.75.1.33
46. Mumm J, Mutlu B (2011) Designing motivational agents: the role
of praise, social comparison, and embodiment in computer feed-
back. Comput Hum Behav 27(5):1643–1650. https://doi.org/10.
1016/j.chb.2011.02.002
47. Nicholls JG (1984) Achievement motivation: conceptions of abil-
ity, subjective experience, task choice, and performance. Psychol
Rev 91(3):328–346. https://doi.org/10.1037/0033-295X.91.3.328
48. O’Leary K, O’Leary S (1977) Classroom management: The suc-
cessful use of behavior modification. Pergamon, New York
49. Park HW, Rosenberg-Kima R, Rosenberg M, Gordon G, Breazeal
C (2017) Growing growth mindset with a social robot peer. In:
Proceedings of the international conference on human-robot inter-
action (HRI 2017). ACM Press, New York, pp 137–145. https://
doi.org/10.1145/2909824.3020213
50. Paunesku D, Walton GM, Romero C, Smith EN, Yeager DS, Dweck
CS (2015) Mind-set interventions are a scalable treatment for aca-
demic underachievement. Psychol Sci 26(6):784–793. https://doi.
org/10.1177/0956797615571017
51. Peng H, Ma S, Spector JM (2019) Personalized adaptive learning:
an emerging pedagogical approach enabled by a smart learning
environment. Smart Learn Environ 6:9. https://doi.org/10.1186/
s40561-019-0089-y
52. Reidsma D, Charisi V, Davison DP, Wijnen FM, van der Meij J,
Evers V, Cameron D, Fernando S, Moore R, Prescott T, Mazzei
D, Pieroni M, Cominelli L, Garofalo R, de Rossi D, Vouloutsi V,
Zucca R, Grechuta K, Blancas M, Verschure P (2016) The EASEL
project: towards educational human-robot symbiotic interaction.
In: Proceedings of the international conference on living machines,
vol 9793. Springer, Cham, pp 297–306. https://doi.org/10.1007/
978-3-319-42417-0_27
53. Reidsma D, van Welbergen H (2013) AsapRealizer in practice—a
modular and extensible architecture for a BML realizer. Entertain
Comput 4(3):157–169. https://doi.org/10.1016/j.entcom.2013.05.
001
54. Schadenberg BR, Heylen DK, Evers V (2017) Affect bursts to con-
strain the meaning of the facial expressions of the humanoid robot
Zeno. In: Proceedings of the workshop on social interaction and
multimodal expression for socially intelligent robots at the interna-
tional conference on robot and human interactive communication
(RO-MAN 2017), vol 2059. CEUR, pp 30–39
55. Schadenberg BR, Reidsma D, Heylen DK, Evers V (2020) Dif-
ferences in spontaneous interactions of autistic children in an
interaction with an adult and humanoid robot. Front Robot AI 7:28.
https://doi.org/10.3389/frobt.2020.00028
56. Seaton FS (2018) Empowering teachers to implement a growth
mindset. Educ Psychol Pract 34(1):41–57. https://doi.org/10.1080/
02667363.2017.1382333
57. van der Meij H, van der Meij J, Harmsen R (2015) Animated
pedagogical agents effects on enhancing student motivation and
learning in a science inquiry learning environment. Educ Tech Res
Dev 63(3):381–403. https://doi.org/10.1007/s11423-015-9378-5
58. van Joolingen WR, de Jong T (1997) An extended dual search
space model of scientific discovery learning. Instr Sci 25(5):307–
346. https://doi.org/10.1023/A:1002993406499
59. van Joolingen WR, de Jong T, Dimitrakopoulou A (2007) Issues in
computer supported inquiry learning in science. J Comput Assist
Learn 23(2):111–119. https://doi.org/10.1111/j.1365-2729.2006.
00216.x
60. van Straten CL, Peter J, Kühne R (2020) Child-robot relationship
formation: a narrative review of empirical research. Int J Soc Robot
12(2):325–344. https://doi.org/10.1007/s12369-019-00569-0
61. van Waterschoot J, Bruijnes M, Flokstra J, Reidsma D, Davison
DP, Theune M, Heylen D (2018) Flipper 2.0: a pragmatic dialogue
engine for embodied conversational agents. In: Proceedings of the
international conference on intelligent virtual agents (IVA 2018).
ACM Press, Sydney, Australia, pp 43–50
62. Wijnen FM, Davison DP, Reidsma D, van der Meij J, Charisi V,
Evers V (2019) Now we’re talking: learning by explaining your
reasoning to a social robot. Trans Hum-Robot Interact 9(1):1–29.
https://doi.org/10.1145/3345508
63. Woolf BP (2010) Building intelligent interactive tutors: student-
centered strategies for revolutionizing e-learning. Morgan Kauf-
mann Publishers/Elsevier, Amsterdam. https://doi.org/10.1007/
BF02680460
64. Yeager DS, Hulleman CS, Hinojosa C, Lee HY, O’Brien J, Romero
C, Paunesku D, Schneider B, Flint K, Roberts A, Trott J, Greene D,
Walton GM, Dweck CS (2016) Using design thinking to improve
psychological interventions: the case of the growth mindset dur-
ing the transition to high school. J Educ Psychol 108(3):374–391.
https://doi.org/10.1037/edu0000098
65. Zaraki A, Mazzei D, Giuliani M, De Rossi D (2014) Designing
and evaluating a social gaze-control system for a humanoid robot.
IEEE Trans Hum-Mach Syst 44(2):157–168. https://doi.org/10.
1109/THMS.2014.2303083
Publisher’s Note Springer Nature remains neutral with regard to juris-
dictional claims in published maps and institutional affiliations.