Belpaeme et al., Sci. Robot. 3, eaat5954 (2018) 15 August 2018
Social robots for education: A review
Tony Belpaeme1,2*, James Kennedy2, Aditi Ramachandran3, Brian Scassellati3, Fumihide Tanaka4
Social robots can be used in education as tutors or peer learners. They have been shown to be effective at increasing
cognitive and affective outcomes and have achieved outcomes similar to those of human tutoring on restricted
tasks. This is largely because of their physical presence, which traditional learning technologies lack. We review the
potential of social robots in education, discuss the technical challenges, and consider how the robot’s appearance
and behavior affect learning outcomes.
Virtual pedagogical agents and intelligent tutoring systems (ITSs)
have been used for many years to deliver education, with compre-
hensive reviews available for each field (1, 2). The use of social
robots has recently been explored in the educational domain, with
the expectation of similarly positive benefits for learners (3–5). A
recent survey of long-term human-robot interaction (HRI) high-
lighted the increasing popularity of using social robots in educa-
tional environments (6), and restricted surveys have previously been
conducted in this domain (7, 8).
In this paper, we present a review of social robots used in educa-
tion. The scope was limited to robots that were intended to deliver
the learning experience through social interaction with learners, as
opposed to robots that were used as pedagogical tools for science,
technology, engineering, and math (STEM) education. We identi-
fied three key research questions: How effective are robot tutors at
achieving learning outcomes? What is the contribution made by the
robot’s appearance and behavior? And what are the potential roles
of a robot in an educational setting? We support our review with
data gleaned from a statistical meta-analysis of published literature.
We aim to provide a platform for researchers to build on by high-
lighting the expected outcomes of using robots to deliver education
and by suggesting directions for future research.
Benefits of social robots as tutoring agents
The need for technological support in education is driven by demo-
graphic and economic factors. Shrinking school budgets, growing
numbers of students per classroom, and the demand for greater
personalization of curricula for children with diverse needs are
fueling research into technology-based support that augments the
efforts of parents and teachers. Most commonly, these systems take
the form of a software system that provides one-on-one tutoring
support. Social interaction enhances learning between humans, in
terms of both cognitive and affective outcomes (9, 10). Research has
suggested that some of these behavioral influences also translate to
interactions between robots and humans (3, 11). Although robots
that do not exhibit social behavior can be used as educational tools
to teach students about technology [such as in (12)], we limited our
review to robots designed specifically to support education through
social interactions.
Because virtual agents (presented on laptops, tablets, or phones)
can offer some of the same capabilities but without the expense of
additional hardware, the need for maintenance, and the challenges
of distribution and installation, the use of a robot in an educational
setting must be explicitly justified. Compared with virtual agents,
physically embodied robots offer three advantages: (i) they can be
used for curricula or populations that require engagement with the
physical world, (ii) users show more social behaviors that are bene-
ficial for learning when engaging with a physically embodied system,
and (iii) users show increased learning gains when interacting with
physically embodied systems over virtual agents.
Robots are a natural choice when the material to be taught
requires direct physical manipulation of the world. For example,
tutoring physical skills, such as handwriting (13) or basketball
free throws (14), may be more challenging with a virtual agent,
and this approach is also taken in many rehabilitation- or therapy-
focused applications (15). In addition, certain populations may
require a physically embodied system. Robots have already been
proposed to aid individuals with visual impairments (16) and for
typically developing children under the age of two (17) who show
only minimal learning gains when provided with educational con-
tent via screens (18).
In addition, often there is an expectation for robot tutors to be
able to move through dynamic and populated spaces and manipu-
late the physical environment. Although not always needed in the
context of education, there are some scenarios where the learning
experience benefits from the robot being able to manipulate objects
and move autonomously, such as when supporting physical experi-
mentation (19) or moving to the learner rather than the learner
moving to the robot. These challenges are not exclusive to social
robotics and robot tutors, but the added elements of having the robot
operate near and with (young) learners add complexities that are
often disregarded in navigation and manipulation.
Physical robots are also more likely to elicit from users social
behaviors that are beneficial to learning (20). Robots can be more
engaging and enjoyable than a virtual agent in cooperative tasks
(21–23) and are often perceived more positively (22, 24, 25). Importantly for tutoring systems, physically present robots elicit significantly more compliance with their requests, even challenging ones, than a video representation of the same robot does (26).
Last, physical robots have enhanced learning and affected later
behavioral choice more substantially than virtual agents. Compared
with instructions from virtual characters, videos of robots, or audio-
only lessons, robots have produced more rapid learning in cognitive
puzzles (27). Similar results have been demonstrated when coaching
users to select healthier snacks (24) and when helping users continue
a 6-week weight-loss program (28). A comprehensive review (25) con-
cluded that the physical presence of a robot led to positive perceptions
1Ghent University, Ghent, Belgium. 2University of Plymouth, Plymouth, UK. 3Yale University, New Haven, CT 06520-8285, USA. 4University of Tsukuba, Tsukuba, Japan. *Corresponding author. Email:
Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
and increased task performance when compared with virtual agents
or robots displayed on screens.
Technical challenges of building robot tutors
There are a number of challenges in using technology to support
education. Using a social robot adds to this set of challenges because
of the robot’s presence in the social and physical environment and
because of the expectations the robot creates in the user. The social
element of the interaction is especially difficult to automate: Although
robot tutors can operate autonomously in restricted contexts, fully
autonomous social tutoring behavior in unconstrained environments
remains elusive.
Perceiving the social world is a first step toward being able to act
appropriately. Robot tutors should be able to not only correctly in-
terpret the user’s responses to the educational content offered but
also interpret the rapid and nuanced social cues that indicate task
engagement, confusion, and attention. Although automatic speech
recognition and social signal processing have improved in recent
years, sufficient progress has not been made for all populations.
Speech recognition for younger users, for example, is still insuffi-
ciently robust for most interactions (29). Instead, alternative input
technologies, such as touch-screen tablets or wearable sensors, are used to read responses from the learner and can serve as a proxy for detecting engagement and tracking student performance (30–32). Robots can also use explicit models of disengagement in a
given context (33) and strategies, such as activity switching, to sus-
tain engagement over the interaction (34). Computational vision has
made great strides in recent years but is still limited when dealing
with the range of environments and social expressions typically found
in educational and domestic settings. Although advanced sensing
technologies for reading gesture, posture, and gaze (35) have found
their way into tutoring robots, most social robot tutors continue to
be limited by the degree to which they can accurately interpret the
learner’s social behavior.
Armed with whatever social signals can be read from the student,
the robot must choose an action that advances the long-term goals
of the educational program. However, this can often be a difficult
choice, even for experienced human instructors. Should the instructor
press on and attempt another problem, advance to a more challenging
problem, review how to solve the current problem, offer a hint, or
even offer a brief break from instruction? There are often conflicting
educational theories in human-based instruction, and whether or not
these same theories hold when considering robot instructors is an
open question. These choices are also present in ITSs, but the explicit
agentic nature of robots often introduces additional options and, at
times, complications. Choosing an appropriate emotional support
strategy based on the affective state of the child (36), assisting with
a meta-cognitive learning strategy (37), deciding when to take a break
(31), and encouraging appropriate help-seeking behavior (4) have
all been shown to increase student learning gains. Combining these
actions with appropriate gestures (38), appropriate and congruent
gaze behavior (39), expressive behaviors and attention-guiding
behaviors (11), and timely nonverbal behaviors (3) also positively
affects student recall and learning. However, merely increasing the
amount of social behavior for a robot does not lead to increased
learning gains: Certain studies have found that social behavior may
be distracting (40, 41). Instead, the social behavior of the robot must
be carefully designed in conjunction with the interaction context
and task at hand to enhance the educational interaction.
Last, substantial research has focused on personalizing interac-
tions to the specific user. Within the ITS community, computational
techniques such as dynamic Bayesian networks, fuzzy decision trees,
and hidden Markov models are used to model student knowledge
and learning. Similar to on-screen tutoring systems, robot tutors use
these same techniques to help tailor the complexity of problems to
the capabilities of the student, providing more complex problems
only when easier problems have been mastered (42–44). In addition
to the selection of personalized content, robotic tutoring systems
often provide additional personalization to support individual learn-
ing styles and interaction preferences. Even straightforward forms
of personalization, such as using a child’s name or referencing per-
sonal details within an educational setting, can enhance user percep-
tion of the interaction and are important factors in maintaining
engagement within learning interactions (45, 46). Other affective
personalization strategies have been explored to maintain engage-
ment during a learning interaction by using reinforcement learning
to select the robot’s affective responses to the behavior of children
(47). A field study showed that students who interacted with a robot
that simultaneously demonstrated three types of personalization
(nonverbal behavior, verbal behavior, and adaptive content pro-
gression) showed increased learning gains and sustained engagement
when compared with students interacting with a nonpersonalized
robot (48). Although progress has been made in constituent tech-
nologies of robot tutors—from perception to action selection and
production of behaviors that promote learning—the integration of
these technologies and balancing their use to elicit prosocial behavior
and consistent learning still remain open challenges.
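To make the student-modeling techniques mentioned above concrete, the following is a minimal sketch of Bayesian knowledge tracing, a widely used special case of the hidden Markov models cited in the ITS literature. The parameter values (slip, guess, and learning probabilities) and the mastery threshold are illustrative assumptions, not values taken from any of the cited systems.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(skill mastered) after observing one answer.

    p_slip: probability of answering wrongly despite mastery.
    p_guess: probability of answering correctly without mastery.
    p_learn: probability of acquiring the skill after a practice step.
    """
    if correct:
        evidence = p_know * (1 - p_slip)
        p_obs = evidence + (1 - p_know) * p_guess
    else:
        evidence = p_know * p_slip
        p_obs = evidence + (1 - p_know) * (1 - p_guess)
    posterior = evidence / p_obs          # Bayes step on the observation
    # Learning transition: the student may acquire the skill through practice.
    return posterior + (1 - posterior) * p_learn

p = 0.2  # prior probability of mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    # A tutor could move to harder problems once p exceeds, say, 0.95.
```

A robot tutor using such a model would track one `p` per skill and select the next problem accordingly, which is the essence of the adaptive content progression described above.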
To support our review, we used a meta-analysis of the literature
on robots for education. In this, three key questions framed the
meta-analysis and dictated which information was extracted:
1. Efficacy. What are the cognitive and affective outcomes when
robots are used in education?
2. Embodiment. What is the impact of using a physically em-
bodied robot when compared with alternative technologies?
3. Interaction role. What are the different roles the robot can
take in an educational context?
For the meta-analysis, we used published studies extracted from
the Google Scholar, Microsoft Academic Search, and CiteSeerX
databases by using the following search terms: robot tutor, robot
tutors, socially assistive robotics (with manual filtering of those
relevant to education), robot teacher, robot assisted language
learning, and robot assisted learning. The earliest published work
appeared in 1992, and the survey cutoff date was May 2017. In
addition, proceedings of prominent social HRI journals and con-
ferences were manually searched for relevant material: Interna-
tional Conference on Human-Robot Interaction, International
Journal of Social Robotics, Journal of Human-Robot Interaction,
International Conference on Social Robotics, and the Interna-
tional Symposium on Robot and Human Interactive Communica-
tion (RO-MAN).
The selection of papers was based on four additional criteria:
1) Novel experimental evaluations or analyses should be presented.
2) The robot should be used as the teacher (i.e., the robot is an
agent in the interaction) rather than the robot being used as an ed-
ucational prop or a learner with no intention to educate [e.g., (49)].
3) The work must have included a physical robot, with an educa-
tive intent. For example, studies considering “coaches” that sought
to improve motivation and compliance, but did not engage in edu-
cation [e.g., (50)], were not included, whereas those that provided
tutoring and feedback were included [e.g., (15)].
4) Only full papers were included. Extended abstracts were omit-
ted because these often contained preliminary findings, rather than
complete results and full analyses.
We retained 101 papers for analysis and excluded 12 papers for
various reasons (e.g., the paper repeated results from an earlier pub-
lication). The analyzed papers together contain 309 study results (51).
To compare outcomes of the different studies, we first divided
the outcomes of an intervention into either affective or cognitive.
Cognitive outcomes focus on one or more of the following compe-
tencies: knowledge, comprehension, application, analysis, synthesis,
and evaluation (52–54). Affective outcomes refer to qualities that are
not learning outcomes per se, for example, the learner being atten-
tive, receptive, responsive, reflective, or inquisitive (53). The meta-
analysis contained 99 (33.6%) data points on cognitive learning
outcomes and 196 (66.4%) data points on affective learning out-
comes; 14 study results did not contain a comparative experiment
on learning outcomes.
Cognitive outcomes are typically measured through pre- and
posttests of student knowledge, whereas affective outcomes are
more varied and can include self-reported measures and observa-
tions by the experimenters. Table 1 contains the most common
methods for measuring cognitive and affective outcomes reported
in the literature.
Most studies focused on children (179 data points; 58% of the
sample; mean age, 8.2 years; SD, 3.56), whereas adults (≥18 years
old) were a lesser focus of research in robot tutoring (98 data points;
32% of the sample; mean age, 30.5 years; SD, 17.5). For 29 studies (9%),
both children and adults were used, or the age of the participants was
not specified.
If the results reported an effect size expressed as Cohen’s d, then
this was used unaltered. In cases where the effect size was not reported
or if it was expressed in a measure other than Cohen’s d, then an
online calculator (55) [see also (56)] was used if enough statistical
information was present in the paper (typically participant numbers,
means, and SDs are sufficient).
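The conversion described above can be sketched as follows: recovering Cohen's d from the descriptive statistics that the papers usually report (group sizes, means, and SDs), which is what an online effect-size calculator does. The numbers in the example are hypothetical, chosen only to show the calculation.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical posttest scores: robot condition vs. on-screen condition.
d = cohens_d(mean1=7.4, sd1=2.1, n1=20, mean2=5.9, sd2=2.3, n2=20)
# d falls in the "medium effect" range by the conventional thresholds.
```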
We captured the following data gleaned from the publications:
the study design, the number of conditions, the number of partici-
pants per condition, whether participants were children or adults,
participant ages (mean and SD), the robot used, the country in which
the study was run, whether the study used a within or between design,
the reported outcomes (affective or cognitive, with details on what
was measured exactly), the descriptive statistics (where available
mean, SD, t, and F values), the effect size as Cohen’s d, whether the
study involved one robot teaching one person or one robot teaching
many, the role of the robot (presenter, teaching assistant, teacher,
peer, or tutor), and the topic under study (embodiment of the robot,
social character of the robot, the role of the robot, or other).
The studies in our sample reported more on affective outcomes
than cognitive outcomes (Fig. 1A). This is due to the relative ease
with which a range of affective outcomes can be assessed by using
questionnaires and observational studies, whereas cognitive outcomes
require administering a controlled knowledge assessment before and
after the interaction with the robot, of which typically only one is
reported per study.
Figure2B shows the countries where studies were run. Robots for
learning research, perhaps unsurprisingly, happen predominantly
in East Asia (Japan, South Korea, and Taiwan), Europe, and the
United States. An exception is the research in Iran on the use of
robots to teach English in class settings.
Table 1. Common measures for determining cognitive and affective outcomes in robots for learning.

Cognitive:
- Learning gain, measured as the difference between pre- and posttest scores
- Posttest administered either immediately after exposure to the robot or with a delay
- Correction for varying initial knowledge, e.g., using normalized learning gain (77)
- Difference in test completion time
- Number of attempts needed for a correct response

Affective:
- Persistence, measured as the number of attempts made or time spent with the robot
- Number of interactions with the system, such as utterances or responses
- Coding of the learner's emotional expressions; can be automated using face analysis software (47)
- Godspeed questionnaire, measuring the user's perception of robots (78)
- Tripod survey, measuring the learner's perspective on teaching, environment, and engagement (79)
- Immediacy, measuring the psychological availability of the robot teacher (3, 10)
- Evolution of time between answers, e.g., to indicate fatigue (31)
- Coding of video recordings of participants' responses
- Coding or automated recording of eye gaze behavior (to code attention, for example)
- Subjective rating of the robot's teaching and the learning experience (15)
- Foreign language anxiety questionnaire (80)
- KindSAR interactivity index, a quantitative measure of children's interactions with a robot (81)
- Basic empathy scale, a self-report measure of empathy (82)
- Free-form feedback or interviews
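The normalized learning gain listed in Table 1 corrects for differing pretest scores by expressing the gain as a fraction of the improvement still possible, g = (post − pre) / (max − pre). A minimal sketch, with hypothetical scores:

```python
def normalized_gain(pre, post, max_score):
    """Fraction of the possible improvement actually achieved."""
    return (post - pre) / (max_score - pre)

# Two hypothetical learners with the same raw gain of 2 points out of 10:
g_novice = normalized_gain(pre=2, post=4, max_score=10)  # 0.25
g_expert = normalized_gain(pre=7, post=9, max_score=10)  # higher: less room remained
```

The point of the measure is visible in the example: the same raw gain counts for more when the learner started closer to the ceiling.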
Extracting meaningful statistical data from the published studies
is not straightforward. Of the 309 results reported in 101 pub-
lished studies, only 81 results contained enough data to calculate
an effect size, highlighting the need for more rigorous reporting of
data in HRI.
Efficacy of robots in education
The efficacy of robots in education is of primary interest, and here,
we discuss the outcomes that might be expected when using a robot
in education. The aim is to provide a high-level overview of the
effect size that might be expected when comparing robots with a
variety of control conditions, grouping a range of educational
scenarios with many varying factors between studies (see Fig. 3).
More specific analyses split by individual factors will be explored
in subsequent sections.
Learning effects are divided into cognitive and affective out-
comes. Across all studies included in the meta-review, we have
37 results that compared the robot with an alternative, such as
an ITS, an on-screen avatar, or human tutoring. Of these, the aggre-
gated mean cognitive outcome effect size (Cohen’s d weighted by N)
of robot tutoring is 0.70 [95% confidence interval (CI), 0.66 to 0.75]
from 18 data points, with a mean of N = 16.9 participants per data
point. The aggregated mean affective outcome effect size (Cohen’s d
weighted by N) is 0.59 (95% CI, 0.51 to 0.66) from 19 data points,
with a mean of N = 24.4 students per data point. Many studies using
robots do not compare learning against an alternative, such as computer-based or human tutoring, but instead against
other versions of the same robot with different behaviors. The
limited number of studies that did compare a robot against an alter-
native offers a positive picture of the contribution to learning made
by social robots, with a medium effect size for affective and cogni-
tive outcomes. Furthermore, positive affective outcomes did not
imply positive cognitive outcomes, or vice versa. In some studies,
introducing a robot improved affective outcomes while not nec-
essarily leading to significant cognitive gains [e.g., (57)].
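The aggregation used above, a mean effect size weighted by each study's sample size N, can be sketched in a few lines. The (d, N) pairs below are illustrative, not the actual data points from the meta-analysis.

```python
def weighted_mean_d(effects):
    """N-weighted mean of Cohen's d.

    effects: list of (d, n) pairs, one per study result,
    where n is the number of participants behind that result.
    """
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Hypothetical study results: (effect size, participants).
studies = [(0.9, 12), (0.5, 30), (0.7, 20)]
d_bar = weighted_mean_d(studies)
# Larger studies pull the aggregate toward their estimates.
```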
Human tutors provide a gold standard benchmark for tutor-
ing interactions. Trained tutors are able to adapt to learner needs
and modify strategies to maximize learning (58). Previous work
(59) has suggested that human tutors produce a mean cognitive
outcome effect size (Cohen’s d) of 0.79, so the results observed
when using a robot are in a similar region. However, social robots
are typically deployed in restricted scenarios: short, well-defined
lessons delivered with limited adaptation to individual learners or
flexibility in curriculum. There is no suggestion yet that robots
have the capability to tutor in a general sense as well as a human
can. Comparisons between robots and humans are rare in the liter-
ature, so no meta-analysis data were available to compare the
cognitive learning effect size.
Robot appearance
Because the positive learning outcomes are driven by the physical
presence of the robot, the question remains of what exactly it is
about the robot’s appearance that promotes learning. A wide range
of robots have been used in the surveyed studies, from small toy-
like robots to full-sized android robots. Figure 2A shows the most
used robots in the published studies.
The most popular robot in the studies we analyzed is the Nao
robot, a 54-cm-tall humanoid by SoftBank Robotics Europe, available with 14, 21, or 25 degrees of freedom (see Fig. 4B). The two
latter versions of Nao have arms, legs, a torso, and a head. They can
walk, gesture, and pan and tilt their head. Nao has a rich sensor suite
and an on-board computational core, allowing the robot to be fully
autonomous. The dominance of Nao for HRI can be attributed to its
wide availability, appealing appearance, accessible price point, tech-
nical robustness, and ease of programming. Hence, Nao has become
an almost de facto platform for many studies in robots for learning.
Another robot popular as a tutor is the Keepon robot, a consumer-
grade version of the Keepon Pro research robot. Keepon is a 25-cm-tall
snowman-shaped robot with a yellow foam exterior without arms
and legs (see Fig.4C). It has four degrees of freedom to make it pan,
roll, tilt, and bop. Originally sold as a novelty for children, it can be
used as a research platform after some modification. Nao and Keepon
offer two extremes in the design space of social robots, and hence, it
is interesting to compare learning outcomes for both.
Comparing Keepon with Nao, the respective cognitive learning
gain is d = 0.56 (N = 10; 95% CI, 0.53 to 0.58) and d = 0.76 (N = 8;
95% CI, 0.52 to 1.01); therefore, both show a medium-sized effect.
However, we note that direct comparisons between different robots
are difficult with the available data, because no studies used the same
experimental design, the same curriculum, and the same student
population with multiple robots. Furthermore, different robots have
tended to be used at different times, becoming popular in studies
when that particular hardware model was first made available and
decreasing in usage over time. Because the complexity of the exper-
imental protocols has tended to increase, direct comparison is not
possible at this point in time.
What is clear from surveying the different robot types is that all
robots have a distinctly social character [except for the Heathkit
Fig. 1. An overview of data from the meta-analysis. (A) Type of learning outcome studied. (B) Role of the robot in the interaction. (C) Number of learners per robot in studies. (D) Division between children and adults (≥18 years old). (E) Age distribution for children. (F) Age distribution for adults.
HERO robot used in (60)]. All robots have humanoid features—such
as a head, eyes, a mouth, arms, or legs—setting the expectation that the
robot has the ability to engage on a social level. Although there are no
data on whether the social appearance of the robot is a requirement
for effective tutoring, there is evidence that the social and agentic
nature of the robots promotes secondary responses conducive to
learning (61, 62). The choice of robot very often depends on practical
considerations and whether the learners feel comfortable around the
robot. The weighted average height of the robots is 62 cm; the shortest
robot in use is the Keepon at 25 cm, and the tallest is the RoboThespian
humanoid at 175 cm. Shorter robots are often preferred when teach-
ing young children.
Robot behavior
To be effective educational agents, the behavior of social robots must
be tailored to support various aspects of learning across different
learners and diverse educational contexts. Several studies focused on
understanding critical aspects of educational interactions to which
robots should respond, as well as determining both what behaviors
social robots can use and when to deliver these behaviors to affect
learning outcomes.
Our meta-review shows that almost any strategy or social behavior
of the robot aimed at increasing learning outcomes has a positive
effect. We identified the influence of robot behaviors on cognitive
outcomes (d = 0.69; N = 12; 95% CI, 0.56 to 0.83) and affective out-
comes (d = 0.70; N = 32; 95% CI, 0.62 to 0.77).
Similar to findings in the ITS community, robots that personalize
what content to provide based on user performance during an inter-
action can increase cognitive learning gains (43, 44). In addition to
the adaptive delivery of learning material, social robots can offer
socially supportive behaviors and personalized support for learners
within an educational context. Personalized social support, such as
using a child’s name or referring to previous interactions (45, 46), is
the low-hanging fruit of social interaction. More complex prosocial
behavior, such as attention-guiding (11), displaying congruent gaze
behavior (39), nonverbal immediacy (3), or showing empathy with
the learner (36), not only has a positive impact on affective outcomes
but also results in increased learning.
However, just as human tutors must at times sit quietly and allow
students the opportunity to concentrate on problem solving, robot
tutors must also limit their social behavior at appropriate times based
on the cognitive load and engagement of the student (40). The social
behavior of the robot must be carefully designed in conjunction with
the interaction context and task at hand to enhance the educational
interaction and avoid student distraction.
It is possible that the positive cognitive and affective learning out-
comes of robot tutors are not directly caused by the robot having a
physical presence, but rather the physical presence of the robot pro-
motes social behaviors in the learner that, in turn, foster learning and
create a positive learning experience. Robots have been shown to have
a positive impact on compliance (26), engagement (21–23), and con-
formity (20), which, in turn, are conducive to achieving learning gains.
Hence, a perhaps valuable research direction is to explore what it is
about social robots that affects the first-order outcomes of engage-
ment, persuasion, and compliance.
Robot role
Social robots for education include a variety of robots having differ-
ent roles. Beyond the typical role of a teacher or a tutor, robots can
also support learning through peer-to-peer relationships and can
support skill consolidation and mastery by acting as a novice. In this
section, we provide an overview of the different roles a robot can
adopt and what their educational benefits are.
Robot as tutor or teacher
As a tutor or teacher, robots provide direct curriculum support
through hints, tutorials, and supervision. These types of educational
robots, including teaching assistant robots (63), have the longest
history of research and development, often targeting curricular
domains for young children. Early field studies placed robots into
classrooms to observe whether they would have any qualitative
impact on the learners’ attitude and progress, but current research
tends toward controlled experimental trials in both laboratory
settings and classrooms (64).
Fig. 3. Histograms of effect sizes (Cohen’s d) for all cognitive and affective
outcomes of robot tutors in the meta-analysis. These combine comparisons
between robots and alternative educational technologies but also comparisons
between different implementations of the robot and its tutoring behavior. In the
large majority of results, adding a robot or adding supportive behavior to the robot
improves outcomes.
Fig. 2. Diversity of robots in education. (A) Types of robots used in the studies.
(B) Nations where the research studies were run.
A commercial tutor robot called IROBI (Yujin Robotics) was
released in the early 2000s. Designed to teach English, IROBI was
shown to enhance both concentration on learning activities and
academic performance compared with other teaching technology,
such as audio material and a web-based application (65).
The focus on younger children links robot education research with other scientific areas, such as language development and developmental psychology (66). Building on earlier work that studied socialization between toddlers and robots in a nursery school (67), a fully autonomous robot was deployed in classrooms and was shown to significantly improve the vocabulary skills of 18- to 24-month-old toddlers (68). Much of the work in which the robot is used as a tutor focuses on one-to-one interactions, because these offer the greatest potential for personalized education.
In some cases, the robot is used as a novel channel through which a lecture is delivered. Here, the robot is not so much interacting with the learners as acting as a teacher or an assistant to the teacher (69). The value of the robot in this case lies in improving attention and motivation in the learners, while delivery and assessment are handled by the human teacher. Delivery is then often one to many, with the robot addressing an entire group of learners (33, 63, 69).
Robot as peer
Robots can also be peers or learning companions for humans. Not only is a peer potentially less intimidating than a tutor or teacher, but peer-to-peer interactions can also have significant advantages over tutor-to-student interactions. Robovie was the first
fully autonomous robot to be introduced into an elementary school
(70). It was an English-speaking robot targeting two grades (first
and sixth) of Japanese children. Through field trials conducted over
2 weeks, improvements in English language skills were observed in
some children. In one case, longer periods of attention on learning
tasks, faster responses, and more accurate responses were shown with
a peer robot compared with an identical-looking tutor robot (19). A
long-term primary school study showed that a peer-like humanoid robot able to personalize the interaction could increase children’s learning of novel subjects (48). Often, the robot is presented as a more knowledgeable peer, guiding the student along a learning trajectory that is neither too easy nor too challenging. However, the role of these robots sometimes becomes ambiguous (tutor versus peer), and it is difficult to rank one role above the other in general. Learning companions (71), which offer motivational support but do not otherwise tutor, are also successful examples of peer-like robots.
Robot as novice
Considerable educational benefits can also be obtained from a robot that takes the role of a novice, allowing the student to take on the role of an instructor, a role that typically improves confidence while at the same time producing learning gains. This is an instance of learning by teaching, well known in human education and also referred to as the protégé effect (72): the effort the learner makes to teach the robot has a direct impact on their own learning outcomes.
The care-receiving robot (CRR) was the first robot designed
with the concept of a teachable robot for education (73). A small
humanoid robot introduced into English classes improved the
vocabulary learning of 3- to 6-year-old Japanese children (5). The
robot was designed to make deliberate errors in English vocabulary
but could be corrected through instruction by the children. In addi-
tion, CRR was shown to engage children more than alternative tech-
nology, which eventually led to the release of a commercial product
based on the principle of a robot as a novice (74).
This novice role can also be used to teach motor skills. The CoWriter
project explored the use of a teachable robot to help children improve
their handwriting skills (13). A small humanoid robot in conjunction
with a touch tablet helped children who struggled with handwriting
to improve their fine motor skills. Here, the children taught the robot, which initially had very poor handwriting, and in the process of doing so, the children reflected on their own writing and showed improved motor skills (13). This suggests that presenting robots as novices has the potential to develop metacognitive skills in learners, because teaching the material requires a higher level of understanding of it, as well as an understanding of the internal representations of their robot partner.
In our meta-analysis, the robot was predominantly used as a tutor (48%), followed by a role as teacher (38%). In only 9% of studies was the robot presented as a peer or novice (Fig. 1B). The robot was often used to offer one-to-one interactions (65%), with the robot used in a one-to-many teaching scenario in only 30% of the studies (Fig. 1C). In 5%, the robot had mixed interactions, whereby, for example, it first taught more than one student and then had one-on-one interactions during a quiz.
Although an increasing number of studies confirm the promise of social robots for education and tutoring, this Review also lays bare a number of challenges for the field. Robots for learning, and social robotics in general, demand a tightly integrated, interdisciplinary effort: introducing these technologies into educational practice involves both solving technical challenges and changing educational practice.
With regard to the technical challenges, building a fluent and
contingent interaction between social robots and learners requires
the seamless integration of a range of processes in artificial intelli-
gence and robotics. Starting with the input to the system, the robot
needs a sufficiently correct interpretation of the social environment
Fig. 4. Illustrative examples of social robots for learning. (A) iCat robot teaching young children to play chess (76). (B) Nao robot supporting a child to improve her handwriting (13). (C) Keepon robot tutoring an adult in a puzzle game (27). (D) Pepper robot providing motivation during English classes for Japanese children (74).
for it to respond appropriately. This requires significant progress in
constituent technical fields, such as speech recognition and visual
social signal processing, before the robot can access the social envi-
ronment. Speech recognition, for example, is still insufficiently
robust to allow the robot to understand spoken utterances from
young children. Although these shortcomings can be resolved by
using alternative input media, such as touch screens, this does place
a considerable constraint on the natural flow of the interaction. For
robots to be autonomous, they must make decisions about which
actions to take to scaffold learning. Action selection is a challenging
domain at best and becomes more difficult when dealing with a
pedagogical environment, because the robot must have an under-
standing of the learner’s ability and progress to allow it to choose
appropriate actions. Finally, the generation of verbal and nonverbal
output remains a challenge, with the orchestrated timing of verbal
and nonverbal actions a prime example. In summary, social interac-
tion requires the seamless functioning of a wide range of cognitive
mechanisms. Building artificial social interaction requires the artifi-
cial equivalent of these cognitive mechanisms and their interfaces,
which is why artificial social interaction is perhaps one of the most
formidable challenges in artificial intelligence and robotics.
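To make the action-selection problem concrete, the sketch below illustrates one widely used family of learner models, Bayesian knowledge tracing (as used in, e.g., (44)): the robot maintains a per-skill mastery probability, updates it after each observed answer, and selects the least-mastered skill to practice next. The parameter values, skill names, and mastery threshold are illustrative assumptions, not taken from any of the reviewed systems.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian knowledge tracing step: update the estimated
    probability that the learner has mastered a skill, given one
    observed answer (correct or not). All parameters are illustrative."""
    if correct:
        evidence = p_know * (1 - p_slip)            # knew it and did not slip
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip                  # knew it but slipped
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # The practice opportunity itself may cause learning.
    return posterior + (1 - posterior) * p_learn

def choose_action(skills, threshold=0.95):
    """Pick the least-mastered skill still below the mastery threshold;
    return None when every skill is considered mastered."""
    unmastered = {s: p for s, p in skills.items() if p < threshold}
    if not unmastered:
        return None
    return min(unmastered, key=unmastered.get)

# Hypothetical learner model with two arithmetic skills.
skills = {"addition": 0.6, "subtraction": 0.3}
target = choose_action(skills)                      # -> "subtraction"
skills[target] = bkt_update(skills[target], correct=True)
```

Even this simplified model shows why pedagogical action selection is hard: the update hinges on slip, guess, and learning-rate parameters that must themselves be estimated per learner and per skill.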
Introducing social robots in the school curriculum also poses a
logistical challenge. The generation of content for social robots for
learning is nontrivial, requiring tailor-made material that is likely to
be resource-intensive to produce. Currently, the value of the robot
lies in tutoring very specific skills, such as mathematics or hand-
writing, and it is unlikely that robots can take up the wide range of
roles a teacher has, such as pedagogical and carer roles. For the time
being, robots are mainly deployed in elementary school settings. Al-
though some studies have shown the efficacy of tutoring adolescents
and adults, it is unclear whether the approaches that work well
for younger children transfer to tutoring older learners.
Introducing robots might also carry risks. For example, studies of ITSs have shown that children often do not make the best use of on-demand support and either rely too much on the help function or avoid using help altogether, both resulting in suboptimal learning.
Although strategies have been explored to mitigate this particular
problem in robots (4), there might be other problems specific to
social robots that still need to be identified and for which solutions
will be needed.
Social robots have, in the broadest sense, the potential to become part of the educational infrastructure, just as paper, whiteboards, and computer tablets have. Next to the functional dimension, robots
also offer unique personal and social dimensions. A social robot has
the potential to deliver a learning experience tailored to the learner,
supporting and challenging students in ways unavailable in current
resource-limited educational environments. Robots can free up pre-
cious time for human teachers, allowing the teacher to focus on
what people still do best: providing a comprehensive, empathic, and
rewarding educational experience.
Next to the practical considerations of introducing robots in edu-
cation, there are also ethical issues. How far do we want the educa-
tion of our children to be delegated to machines, and social robots
in particular? Overall, learners are positive about their experience with
robots for learning, but parents and teaching staff adopt a more
cautious attitude (75). There is much to gain from using robots, but
what do we stand to lose? Might robots lead to an impoverished
learning experience where what is technologically possible is prior-
itized over what is actually needed by the learner?
Notwithstanding these concerns, robots show great promise when teaching restricted topics, with effect sizes on cognitive outcomes almost matching those of human tutoring. This is remarkable, because our
meta-analysis gathered results from a wide range of countries using
different robot types, teaching approaches, and deployment contexts.
Although the use of robots in educational settings is limited by tech-
nical and logistical challenges for now, the benefits of physical
embodiment may lift robots above competing learning technolo-
gies, and classrooms of the future will likely feature robots that
assist a human teacher.
1. N. C. Krämer, G. Bente, Personalizing e-Learning. The social effects of pedagogical agents.
Educ. Psychol. Rev. 22, 71–87 (2010).
2. J. A. Kulik, J. D. Fletcher, Effectiveness of intelligent tutoring systems: A meta-analytic
review. Rev. Educ. Res. 86, 42–78 (2016).
3. J. Kennedy, P. Baxter, E. Senft, T. Belpaeme, in Proceedings of the International Conference
on Social Robotics (Springer, 2015), pp. 327–336.
4. A. Ramachandran, A. Litoiu, B. Scassellati, in Proceedings of the 11th ACM/IEEE Conference
on Human-Robot Interaction (IEEE, 2016), pp. 247–254.
5. F. Tanaka, S. Matsuzoe, Children teach a care-receiving robot to promote their learning: Field experiments in a classroom for vocabulary learning. J. Hum. Robot Interact. 1, 78–95 (2012).
6. I. Leite, C. Martinho, A. Paiva, Social robots for long-term interaction: A survey.
Int. J. Soc. Robot. 5, 291–308 (2013).
7. J. Han, Robot-Aided Learning and r-Learning Services (INTECH Open Access Publisher).
of robots in education. J. Technol. Educ. Learning 1, 1–7 (2013).
9. J. Gorham, The relationship between verbal teacher immediacy behaviors and student
learning. Commun. Educ. 37, 40–53 (1988).
10. P. L. Witt, L. R. Wheeless, M. Allen, A meta‐analytical review of the relationship between
teacher immediacy and student learning. Commun. Monogr. 71, 184–207 (2004).
11. M. Saerbeck, T. Schut, C. Bartneck, M. D. Janse, Expressive robots in education: Varying
the degree of social supportive behavior of a robotic tutor, in Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems, CHI’10 (ACM, 2010), pp. 1613–1622.
12. V. Girotto, C. Lozano, K. Muldner, W. Burleson, E. Walker, Lessons learned from
in-school use of rtag: A robo-tangible learning environment, in Proceedings of the 2016
CHI Conference on Human Factors in Computing Systems (ACM, 2016), pp. 919–930.
13. D. Hood, S. Lemaignan, P. Dillenbourg, When children teach a robot to write: An
autonomous teachable humanoid which uses simulated handwriting, in Proceedings of the
10th ACM/IEEE International Conference on Human-Robot Interaction (ACM, 2015), pp. 83–90.
14. A. Litoiu, B. Scassellati, Robotic coaching of complex physical skills, in Proceedings of the 10th
ACM/IEEE International Conference on Human-Robot Interaction (ACM, 2015), pp. 211–212.
15. J. Fasola, M. Mataric, A socially assistive robot exercise coach for the elderly.
J. Hum. Robot Interact. 2, 3–32 (2013).
16. A. Kulkarni, A. Wang, L. Urbina, A. Steinfeld, B. Dias, in The Eleventh ACM/IEEE
International Conference on Human Robot Interaction (IEEE Press, 2016), pp. 461–462.
17. B. Scassellati, J. Brawer, K. Tsui, S. N. Gilani, M. Malzkuhn, B. Manini, A. Stone, G. Kartheiser,
A. Merla, A. Shapiro, D. Traum, L. Petitto, Teaching language to deaf infants with a robot
and a virtual human, in Proceedings of the ACM CHI Conference on Human Factors in
Computing Systems, 21 to 26 April 2018, Montréal, Canada (ACM, 2018).
18. R. A. Richert, M. B. Robb, E. I. Smith, Media as social partners: The social nature of young
children’s learning from screen media. Child Dev. 82, 82–95 (2011).
19. C. Zaga, M. Lohse, K. P. Truong, V. Evers, The effect of a robot’s social character on
children’s task engagement: Peer versus tutor, in International Conference on Social
Robotics (Springer, 2015), pp. 704–713.
20. J. Kennedy, P. Baxter, T. Belpaeme, Comparing robot embodiments in a guided discovery
learning interaction with children. Int. J. Soc. Robot. 7, 293–308 (2015).
21. C. D. Kidd, C. Breazeal, Effect of a robot on user perceptions, in Proceedings of the 2004
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004 (IROS 2004) (IEEE,
2004), vol. 4, pp. 3559–3564.
22. J. Wainer, D. J. Feil-Seifer, D. A. Shell, M. J. Mataric, in Proceedings of the 16th IEEE
International Symposium on Robot and Human interactive Communication, RO-MAN
(IEEE, 2007), pp. 872–877.
23. H. Köse, P. Uluer, N. Akalın, R. Yorgancı, A. Özkul, G. Ince, The effect of embodiment in sign language tutoring with assistive humanoid robots. Int. J. Soc. Robot. 7, 537–548 (2015).
24. A. Powers, S. Kiesler, S. Fussell, C. Torrey, Comparing a computer agent with a humanoid
robot, in Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot
Interaction (IEEE, 2007), pp. 145–152.
25. J. Li, The benefit of being physically present: A survey of experimental works comparing
copresent robots, telepresent robots and virtual agents. Int. J. Hum. Comput. Stud. 77,
23–37 (2015).
26. W. A. Bainbridge, J. W. Hart, E. S. Kim, B. Scassellati, The benefits of interactions with
physically present robots over video-displayed agents. Int. J. Soc. Robot. 3, 41–52 (2011).
27. D. Leyzberg, S. Spaulding, M. Toneva, B. Scassellati, The physical presence of a robot
tutor increases cognitive learning gains, in Proceedings of the 34th Annual Conference of
the Cognitive Science Society, CogSci 2012 (2012), pp. 1882–1887.
28. C. D. Kidd, C. Breazeal, A robotic weight loss coach, in Proceedings of the National
Conference on Artificial Intelligence (MIT Press, 2007), vol. 22, pp. 1985–1986.
29. J. Kennedy, S. Lemaignan, C. Montassier, P. Lavalade, B. Irfan, F. Papadopoulos, E. Senft,
T. Belpaeme, Child speech recognition in human-robot interaction: Evaluations and
recommendations, in Proceedings of the 2017 ACM/IEEE International Conference on
Human-Robot Interaction (ACM/IEEE, 2017), pp. 82–90.
30. P. Baxter, R. Wood, T. Belpaeme, A touchscreen-based ‘sandtray’ to facilitate, mediate and
contextualise human-robot social interaction, in Proceedings of the 7th Annual ACM/IEEE
International Conference on Human-Robot Interaction (ACM, 2012), pp. 105–106.
31. A. Ramachandran, C.-M. Huang, B. Scassellati, Give me a break! Personalized timing
strategies to promote learning in robot-child tutoring, in Proceedings of the 2017 ACM/
IEEE International Conference on Human-Robot Interaction (ACM, 2017), pp. 146–155.
32. D. Szafir, B. Mutlu, Pay attention! Designing adaptive agents that monitor and improve
user engagement, in Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, CHI’12 (ACM, 2012), pp. 11–20.
33. I. Leite, M. McCoy, D. Ullman, N. Salomons, B. Scassellati, Comparing models of
disengagement in individual and group interactions, in Proceedings of the 10th Annual
ACM/IEEE International Conference on Human-Robot Interaction (ACM, 2015), pp. 99–105.
34. A. Coninx, P. Baxter, E. Oleari, S. Bellini, B. Bierman, O. B. Henkemans, L. Cañamero, P. Cosi,
V. Enescu, R. Ros Espinoza, A. Hiolle, R. Humbert, B. Kiefer, Towards long-term social
child-robot interaction: Using multi-activity switching to engage young users.
J. Hum. Robot Interact. 5, 32–67 (2016).
35. S. Lemaignan, F. Garcia, A. Jacq, P. Dillenbourg, From real-time attention assessment to
“with-me-ness” in human-robot interaction, in Proceedings of the 11th ACM/IEEE
International Conference on Human-Robot Interaction (IEEE, 2017).
36. I. Leite, G. Castellano, A. Pereira, C. Martinho, A. Paiva, Empathic robots for long-term
interaction. Int. J. Soc. Robot. 6, 329–341 (2014).
37. A. Ramachandran, C.-M. Huang, E. Gartland, B. Scassellati, Thinking aloud with a tutoring
robot to enhance learning, in Proceedings of the 2018 ACM/IEEE International Conference
on Human-Robot Interaction (ACM, 2018), pp. 59–68.
38. C.-M. Huang, B. Mutlu, Modeling and evaluating narrative gestures for humanlike robots,
in Proceedings of the Robotics: Science and Systems Conference, RSS’13 (2013).
39. C.-M. Huang, B. Mutlu, The repertoire of robot behavior: Enabling robots to achieve
interaction goals through social behavior. J. Hum. Robot Interact. 2, 80–102 (2013).
40. J. Kennedy, P. Baxter, T. Belpaeme, The robot who tried too hard: Social behaviour of a
robot tutor can negatively affect child learning, in Proceedings of the 10th ACM/IEEE
International Conference on Human-Robot Interaction (ACM, 2015), pp. 67–74.
41. E. Yadollahi, W. Johal, A. Paiva, P. Dillenbourg, When deictic gestures in a robot can harm
child-robot collaboration, in Proceedings of the 17th ACM Conference on Interaction
Design and Children (ACM, 2018), pp. 195–206.
42. G. Gordon, C. Breazeal, Bayesian active learning-based robot tutor for children’s word-reading
skills, in Proceedings of the 29th AAAI Conference on Artificial Intelligence, AAAI-15 (2015).
43. D. Leyzberg, S. Spaulding, B. Scassellati, Personalizing robot tutors to individual learning
differences, in Proceedings of the 9th ACM/IEEE International Conference on Human-Robot
Interaction (ACM, 2014).
44. T. Schodde, K. Bergmann, S. Kopp, Adaptive robot language tutoring based on Bayesian
knowledge tracing and predictive decision-making, in Proceedings of the 2017 ACM/IEEE
International Conference on Human-Robot Interaction (ACM, 2017), pp. 128–136.
45. J. Janssen, C. van der Wal, M. Neerincx, R. Looije, Motivating children to learn arithmetic
with an adaptive robot game, in Proceedings of the Third international conference on Social
Robotics (ACM, 2011), pp. 153–162.
46. O. A. Blanson Henkemans, B. P. Bierman, J. Janssen, M. A. Neerincx, R. Looije,
H. van der Bosch, J. A. van der Giessen, Using a robot to personalise health education for
children with diabetes type 1: A pilot study. Patient Educ. Couns. 92, 174–181 (2013).
47. G. Gordon, S. Spaulding, J. K. Westlund, J. J. Lee, L. Plummer, M. Martinez, M. Das,
C. Breazeal, Affective personalization of a social robot tutor for children’s second
language skills, in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence
(AAAI, 2016), pp. 3951–3957.
48. P. Baxter, E. Ashurst, R. Read, J. Kennedy, T. Belpaeme, Robot education peers in a
situated primary school study: Personalisation promotes child learning. PLOS ONE 12,
e0178126 (2017).
49. D. Leyzberg, E. Avrunin, J. Liu, B. Scassellati, Robots that express emotion elicit better
human teaching, in Proceedings of the 6th International Conference on Human-Robot
Interaction (ACM, 2011), pp. 347–354.
50. C. D. Kidd, “Designing for long-term human-robot interaction and application to weight
loss,” thesis, Massachusetts Institute of Technology (2008).
51. The meta-analysis data are available at
52. B. Bloom, M. Engelhart, E. Furst, W. Hill, D. Krathwohl, Taxonomy of Educational Objectives:
The Classification of Educational Goals. Handbook I: Cognitive Domain
(Donald McKay, 1956).
53. D. Krathwohl, B. Bloom, B. Masia, Taxonomy of Educational Objectives: The Classification of
Educational Goals. Handbook II: The Affective Domain (Donald McKay, 1964).
54. D. R. Krathwohl, A revision of Bloom’s taxonomy: An overview. Theory Pract. 41, 212–218 (2002).
56. M. W. Lipsey, D. B. Wilson, Practical Meta-Analysis (Sage Publications, Inc, 2001).
57. C.-M. Huang, B. Mutlu, Learning-based modeling of multimodal behaviors for humanlike
robots, in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot
Interaction (ACM, 2014), pp. 57–64.
58. B. S. Bloom, The 2 sigma problem: The search for methods of group instruction as
effective as one-to-one tutoring. Educ. Res. 13, 4–16 (1984).
59. K. VanLehn, The relative effectiveness of human tutoring, intelligent tutoring systems,
and other tutoring systems. Educ. Psychol. 46, 197–221 (2011).
60. T. W. Draper, W. W. Clayton, Using a personal robot to teach young children.
J. Genet. Psychol. 153, 269–273 (1992).
61. M. Imai, T. Ono, H. Ishiguro, Physical relation and expression: Joint attention for
human-robot interaction. IEEE Trans. Ind. Electron. 50, 636–643 (2003).
62. B. Mutlu, J. Forlizzi, J. Hodgins, A storytelling robot: Modeling and evaluation of
human-like gaze behavior, in Humanoid Robots, 2006 6th IEEE-RAS International
Conference (IEEE, 2006), pp. 518–523.
63. Z.-J. You, C.-Y. Shen, C.-W. Chang, B.-J. Liu, G.-D. Chen, A robot as a teaching assistant in
an English class, in Proceedings of the Sixth International Conference on Advanced Learning
Technologies (IEEE, 2006), pp. 87–91.
64. T. Belpaeme, P. Vogt, R. Van den Berghe, K. Bergmann, T. Göksun, M. De Haas, J. Kanero,
J. Kennedy, A. C. Küntay, O. Oudgenoeg-Paz, F. Papadopoulos, Guidelines for designing
social robots as second language tutors. Int. J. Soc. Robot. 10, 1–17 (2018).
65. J.-H. Han, M.-H. Jo, V. Jones, J.-H. Jo, Comparative study on the educational use of home
robots for children. J. Inform. Proc. Syst. 4, 159–168 (2008).
66. J. Movellan, F. Tanaka, I. Fasel, C. Taylor, P. Ruvolo, M. Eckhardt, The RUBI project:
A progress report, in Proceedings of the Second ACM/IEEE International Conference on
Human-Robot Interaction (ACM, 2007).
67. F. Tanaka, A. Cicourel, J. R. Movellan, Socialization between toddlers and robots at an
early childhood education center. Proc. Natl. Acad. Sci. U.S.A. 104, 17954–17958 (2007).
68. J. R. Movellan, M. Eckhardt, M. Virnes, A. Rodriguez, Sociable robot improves toddler
vocabulary skills, in Proceedings of the 4th ACM/IEEE International Conference on
Human-Robot Interaction (ACM, 2009), pp. 307–308.
69. M. Alemi, A. Meghdari, M. Ghazisaedy, Employing humanoid robots for teaching English
language in Iranian junior high-schools. Int. J. Humanoid Robot. 11, 1450022 (2014).
70. T. Kanda, T. Hirano, D. Eaton, H. Ishiguro, Interactive robots as social partners and peer
tutors for children: A field trial. Hum. Comput. Interact. 19, 61–64 (2004).
71. N. Lubold, E. Walker, H. Pon-Barry, Effects of voice-adaptation and social dialogue on
perceptions of a robotic learning companion, in The Eleventh ACM/IEEE International
Conference on Human Robot Interaction (IEEE Press, 2017), pp. 255–262.
72. C. C. Chase, D. B. Chin, M. A. Oppezzo, D. L. Schwartz, Teachable agents and the protégé
effect: Increasing the effort towards learning. J. Sci. Educ. Technol. 18, 334–352 (2009).
73. F. Tanaka, T. Kimura, The use of robots in early education: A scenario based on ethical
consideration, in Proceedings of the 18th IEEE International Symposium on Robot and
Human Interactive Communication (IEEE, 2009), pp. 558–560.
74. F. Tanaka, K. Isshiki, F. Takahashi, M. Uekusa, R. Sei, K. Hayashi, Pepper learns together
with children: Development of an educational application, in IEEE-RAS 15th International
Conference on Humanoid Robots, HUMANOIDS 2015 (IEEE, 2015), pp. 270–275.
75. J. Kennedy, S. Lemaignan, T. Belpaeme, The cautious attitude of teachers towards social
robots in schools, in Proceedings of the Robots 4 Learning Workshop at RO-MAN 2016 (2016).
76. I. Leite, A. Pereira, G. Castellano, S. Mascarenhas, C. Martinho, A. Paiva, Social robots in
learning environments: A case study of an empathic chess companion, in Proceedings of
the International Workshop on Personalization Approaches in Learning Environments (2011).
77. R. R. Hake, Interactive-engagement vs. traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. Am. J. Phys. 66, 64–74 (1998).
78. C. Bartneck, D. Kulić, E. Croft, S. Zoghbi, Measurement instruments for the
anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of
robots. Int. J. Soc. Robot. 1, 71–81 (2009).
79. R. F. Ferguson, The Tripod Project Framework (Tripod, 2008).
80. M. Alemi, A. Meghdari, M. Ghazisaedy, The impact of social robotics on L2 learners’ anxiety and attitude in English vocabulary acquisition. Int. J. Soc. Robot. 7, 523–535 (2015).
81. M. Fridin, Storytelling by a kindergarten social assistive robot: A tool for constructive
learning in preschool education. Comput. Educ. 70, 53–64 (2014).
82. D. Jolliffe, D. P. Farrington, Development and validation of the basic empathy scale.
J. Adolesc. 29, 589–611 (2006).
Acknowledgments: We are grateful to E. Ashurst for support in collecting the data for the
meta-analysis. Funding: This work is partially funded by the H2020 L2TOR project (688014),
Japan Society for the Promotion of Science KAKENHI (15H01708), and NSF award 1139078.
Author contributions: All authors contributed equally to the manuscript; T.B. and J.K.
contributed to the meta-analysis. Competing interests: J.K. is a research scientist at Disney
Research. Data and materials availability: The meta-analysis data are available at https://
Submitted 31 March 2018
Accepted 23 July 2018
Published 15 August 2018
Citation: T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, F. Tanaka, Social robots for
education: A review. Sci. Robot. 3, eaat5954 (2018).
DOI: 10.1126/scirobotics.aat5954
... Robots were first introduced in educational contexts as tools for programming or explaining technology, but as robot technology advanced humanoid robots are nowadays also used as embodied social agents in education (Angel-Fernandez & Vincze, 2018;Belpaeme et al., 2018;Benitti, 2012;Mubin et al., 2013). The motive for using humanoid robots in educational environments is that they are supposed to increase students' motivation, engagement, and concentration (Keane et al., 2017;Pandey & Gelin, 2017). ...
... Hence, it has become interesting to explore the possibilities and limitations of using social robots for teaching and learning. When the robot is used as an embodied social agent, it can be designed to act in the role of a peer, learning companion, tutor, teaching assistant, or teacher (Belpaeme et al., 2018;Pandey & Gelin, 2017;Sharkey, 2016;Woo et al., 2021), with varying levels of involvement in the learning task (Mubin et al., 2013). Social robots have also been assigned the role of learning companion, as a peer or a tutee, often in combination with the theory of learning by teaching (Pandey & Gelin, 2017). ...
... However, it seemed difficult to practice well-developed communicative skills with the robot. Results that are consistent with previous research (Belpaeme et al., 2018). Despite this, it emerged that the robot has advantages, including the fact that it asks other/better/more questions than a human learning companion does. ...
Full-text available
The idea of using social robots for teaching and learning has become increasingly prevalent and robots are assigned various roles in different educational settings. However, there are still few authentic studies conducted over time. Our study explores teachers' perceptions of a learning activity in which a child plays a digital mathematics game together with a humanoid robot. The activity is based on the idea of learning-by-teaching where the robot is designed to act as a tutee while the child is assigned the role of a tutor. The question is how teachers perceive and talk about the robot in this collaborative child-robot learning activity? The study is based on data produced during a 2-years long co-design process involving teachers and students. Initially, the teachers reflected on the general concept of the learning activity, later in the process they participated in authentic game-play sessions in a classroom. All teachers' statements were transcribed and thematically coded, then categorized into two different perspectives on the robot: as a social actor or didactic tool. Activity theory was used as an analytical lens to analyze these different views. Findings show that the teachers discussed the activity’s purpose, relation to curriculum, child-robot collaboration, and social norms. The study shows that teachers had, and frequently switched between, both robot-perspectives during all topics, and their perception changed during the process. The dual perspectives contribute to the understanding of social robots for teaching and learning, and to future development of educational robot design.
... Robots' social behaviour is also an important factor in their performance: for example, expressive robots narrating stories to preschool children have an effect on children's recollection of stories that is comparable to expressive humans, and better than static, inexpressive humans [9]. When used in an educational context, social robots have also been shown to have a positive effect on learning outcomes, even at a very young age [4]. Even when used in schools, social robots usually interact with children one-to-one, especially when they act as tutors. ...
... Even when used in schools, social robots usually interact with children one-to-one, especially when they act as tutors. While our work, analogously to Belpaeme et al.'s [4] focuses on one-toone interaction to support learning, we propose the use of a social robot in an informal, playful context. However, children are not independent users of technology, and as such, parental expectations and concerns must also be taken in consideration: while an exploratory study suggest a generally positive attitude towards storytelling robots for children [19], the attitude of parents towards technology has a strong cultural component and can also change over time. ...
... Robotic systems have been effectively employed in educational applications with the aim of increasing engagement and social interaction among youngsters, rehabilitation or therapy, as well as enhancing the overall learning experience [1]. In particular, there are many examples in the existing literature where the use of robots has made the educational experience more engaging and enjoyable, thus supporting knowledge retention, and leading to an overall positive perception of the experience, e.g., [2], [3], [4]. ...
... In this section, we briefly discuss relevant literature on the use of robotics systems in education. Robot systems are increasingly being used in education as tutors or peer learners, given their ability to increase cognitive engagement and, in some tasks, be as effective as human tutoring [1]. Examples of social robots used in education are Keepon and Dragonbot, which are both animal-like, and human-like robots such as NAO, Wakamaru, and Robovie [10]. ...
This paper describes the methodology and outcomes of a series of educational events conducted in 2021 that leveraged robot swarms to educate high-school and university students about epidemiological models and how they can inform societal and governmental policies. With a specific focus on the COVID-19 pandemic, the events consisted of 4 online and 3 in-person workshops where students had the chance to interact with a swarm of 20 custom-built brushbots, small-scale vibration-driven robots optimized for portability and robustness. Through the analysis of data collected during a post-event survey, this paper shows how the events positively impacted the students' views on using the scientific method to guide real-world decision making, as well as their interest in robotics.
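The epidemiological models taught at these workshops can be illustrated with a minimal discrete-time SIR (susceptible-infected-recovered) simulation. This sketch is a standard textbook formulation, not the exact model used at the events; the parameter values below are illustrative assumptions only.

```python
# Minimal discrete-time SIR model: each step, a fraction of susceptibles
# becomes infected (proportional to contact rate beta) and a fraction of
# the infected recovers (rate gamma). Fractions of the population, not counts.

def simulate_sir(s0, i0, r0, beta, gamma, steps):
    """Return a list of (S, I, R) population fractions over `steps` steps."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative run: basic reproduction number R0 = beta / gamma = 3.
history = simulate_sir(s0=0.99, i0=0.01, r0=0.0, beta=0.3, gamma=0.1, steps=200)
```

Lowering `beta` (e.g. through distancing policies) flattens the infection peak, which is the kind of policy connection the workshops aimed to convey.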
... Participants noted that enhancing access could have implications for improving disaster relief, for instance, by providing ambulance services. RAS could also help those in remote areas to access basic services, with examples ranging from how "early childhood remote diagnosis and consultation may reduce mortality" to delivering medical supplies, blood, or vaccines [38], or improving education [30]. Furthermore, RAS could facilitate environmental conservation and research in inaccessible locations [39]. ...
Robotics and autonomous systems are reshaping the world, changing healthcare, food production and biodiversity management. While they will play a fundamental role in delivering the UN Sustainable Development Goals, the associated opportunities and threats are yet to be considered systematically. We report on a horizon scan evaluating the impact of robotics and autonomous systems on all Sustainable Development Goals, involving 102 experts from around the world. Robotics and autonomous systems are likely to transform how the Sustainable Development Goals are achieved, through replacing and supporting human activities, fostering innovation, enhancing remote access and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions and reducing freedom and privacy through inadequate governance. Although predicting future impacts of robotics and autonomous systems on the Sustainable Development Goals is difficult, thoroughly examining technological developments early is essential to prevent unintended detrimental consequences. Additionally, robotics and autonomous systems should be considered explicitly when developing future iterations of the Sustainable Development Goals to avoid reversing progress or exacerbating inequalities.
... Social robots stand to advance human capabilities and wellbeing across a wide span of domains like education (Mubin, Stevens, Shahid, Al Mahmud, & Dong, 2013;Belpaeme, Kennedy, Ramachandran, Scassellati, & Tanaka, 2018) and healthcare (Broekens, Heerink, Rosendal, et al., 2009;Breazeal, 2011;Cifuentes, Pinto, Céspedes, & Múnera, 2020). For social robots to be successfully integrated into the society (especially those designed as sociable partners (Breazeal, 2004)), they are expected to behave in accordance with human social norms (Bartneck & Forlizzi, 2004); failure to do so can risk interaction breakdowns (Porfirio, Sauppé, Albarghouthi, & Mutlu, 2018;Mutlu & Forlizzi, 2008). ...
To enable natural and fluid human-robot interactions, robots need not only to communicate with humans through natural language, but also to do so in a way that complies with the norms of human interaction, such as politeness norms. Doing so is particularly challenging, however, in part due to the sensitivity of such norms to a host of different contextual and intentional factors. In this work, we explore computational models of context-sensitive human politeness norms, using explainable machine learning models to demonstrate the value of both speaker intention and task context in predicting adherence to indirect speech norms. We argue that this type of model, if integrated into a robot cognitive architecture, could be highly successful at enabling robots to predict when they themselves should similarly adhere to these norms.
... A service robot with social interactive features is an autonomous robot that interacts and communicates with humans by following the social behaviors and norms expected by its users (Bartneck and Forlizzi 2004). Robots of this kind are developed for application domains such as health care (Edwards et al. 2018), education (Belpaeme et al. 2018), entertainment (Pérula-Martínez et al. 2017), and caretaking (Moyle et al. 2018). Most users in these application domains prefer human-like interaction abilities in human-robot interaction, since these provide a seamless bond between robots and users (Tapus et al. 2007; Yuan and Li 2017). ...
Service robots with social interactive features are developed to cater to demand in various application domains. These robots often need to approach users to accomplish typical day-to-day services. Hence, the approaching behavior of a service robot is a crucial factor in developing social interactivity between users and the robot. In this regard, a robot should be capable of maintaining proper proxemics at the termination position of an approach, which improves the comfort of users. Proxemics preferences of humans depend on physical user behavior as well as personal factors. Therefore, this paper proposes a novel method to adapt the termination position of an approach based on physical user behavior and user feedback. The physical behavior of a user is perceived by the robot through analyzing the user's skeletal joint movements. These parameters are taken as inputs for a fuzzy neural network that determines the appropriate interpersonal distance. The preference of a user is learnt by modifying the internal parameters of the fuzzy neural network based on user feedback. A user study has been conducted to compare the behavior of the proposed system with existing approaches. The outcomes of the user study confirm a significant improvement in user satisfaction due to the adaptation toward users based on feedback.
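The feedback-driven adaptation loop can be sketched in miniature. This is a much-simplified stand-in for the paper's fuzzy neural network: a single adaptive stopping distance per user, nudged by scalar comfort feedback. All class names, constants, and the feedback encoding are illustrative assumptions, not values from the study.

```python
# Simplified proxemics adaptation: the robot keeps one preferred stopping
# distance and adjusts it from user feedback after each approach
# (+1 = "too close, back off", -1 = "too far, come closer", 0 = "comfortable").

class ProxemicsAdapter:
    def __init__(self, initial_distance_m=1.2, learning_rate=0.05,
                 min_d=0.45, max_d=3.6):
        self.distance = initial_distance_m       # current preferred distance (m)
        self.lr = learning_rate                  # step size per feedback signal
        self.min_d, self.max_d = min_d, max_d    # roughly Hall's personal/social zones

    def update(self, feedback):
        """Adjust the stopping distance and clamp it to the allowed range."""
        self.distance += self.lr * feedback
        self.distance = max(self.min_d, min(self.max_d, self.distance))
        return self.distance

adapter = ProxemicsAdapter()
for _ in range(5):          # user repeatedly signals "too close"
    adapter.update(+1)
```

The real system replaces this scalar update with learned fuzzy-rule parameters conditioned on skeletal-joint features, but the adapt-from-feedback structure is the same.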
The inclusion of technologies such as telepractice and virtual reality in the field of communication disorders has transformed the approach to providing healthcare. This research article proposes the employment of a similarly advanced technology, social robots, by providing a context and scenarios for their potential implementation as supplements to stuttering intervention. The use of social robots has shown potential benefits for all age groups in the field of healthcare. However, such robots have not yet been leveraged to aid people who stutter. We offer eight scenarios involving social robots that can be adapted for stuttering intervention with children and adults. The scenarios in this article were designed by human-robot interaction (HRI) and stuttering researchers and revised according to feedback from speech-language pathologists (SLPs). The scenarios specify extensive details that are amenable to clinical research. A general overview of stuttering, technologies used in stuttering therapy, and social robots in healthcare is provided as context for the treatment scenarios supported by social robots. We propose that existing stuttering interventions can be enhanced by placing state-of-the-art social robots as tools in the hands of practitioners, caregivers, and clinical scientists.
The use of AI and robots in library and information science is garnering attention due to early applications and their potential to contribute to the digital transformation of the information professions. This paper assesses the challenges and opportunities for LIS education in these topics. To achieve this aim, this paper reviews the curriculum, through subject descriptions, of five ALIA-accredited LIS courses in Australia and the ALIA foundation knowledge documentation. Content analysis is employed to identify and assess the framing of AI, robotics and related themes in the documentation. Findings indicate only one subject mentions AI to position subject content and none mention robotics. An analysis of the framing of related themes, such as digital technology, data, and information ethics, is discussed. Findings also indicate multiple areas for the inclusion of these topics within the five categories of the ALIA foundation knowledge, while allowing for differentiation among programmatic and institutional foci. This paper argues that some form of integration of these topics into LIS professional education will be necessary in order to meet future skills needs. This paper concludes with opportunities for LIS education in Australia.
In this paper, we examine the process of designing robot-performed iconic hand gestures in the context of a long-term study into second language tutoring with children of approximately 5 years old. We explore four factors that may relate to their efficacy in supporting second language tutoring: the age of participating children; differences between gestures for various semantic categories, e.g. measurement words, such as small, versus counting words, such as five; the quality (comprehensibility) of the robot’s gestures; and spontaneous reenactment or imitation of the gestures. Age was found to relate to children’s learning outcomes, with older children benefiting more from the robot’s iconic gestures than younger children, particularly for measurement words. We found no conclusive evidence that the quality of the gestures or spontaneous reenactment of said gestures related to learning outcomes. We further propose several improvements to the process of designing and implementing a robot’s iconic gesture repertoire.
In current youth-care programs, children with needs (mental health, family issues, learning disabilities, and autism) receive support from youth and family experts as one-to-one assistance at schools or hospitals. Occasionally, social robots have featured in such settings in support roles in one-to-one interaction with the child. In this paper, we suggest the development of a symbiotic framework for real-time Emotional Support (ES) with social robots and Knowledge Graphs (KGs). By augmenting a domain-specific corpus from the literature on ES for children (between the ages of 8 and 12) and providing scenario-driven context, including the history of events, we suggest developing an experimental knowledge-aware ES framework. The framework both guides the social robot in providing ES statements to the child and assists the expert in tracking and interpreting the child's emotional state and related events over time.
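The knowledge-aware structure proposed above can be illustrated with a toy knowledge graph of (subject, relation, object) triples, traversed to select a support statement. The triples, relations, and template text here are invented for illustration; the actual framework would derive them from the domain-specific corpus and event history.

```python
# Toy knowledge-graph-guided emotional support: walk from the child's
# observed emotion to its trigger, then to a support strategy, and
# render a templated statement for the robot to speak.

KG = {
    ("child", "felt", "anxious"),
    ("anxious", "triggered_by", "exam"),
    ("exam", "support_strategy", "reassurance"),
}

TEMPLATES = {
    "reassurance": ("It sounds like the exam was stressful. "
                    "You worked hard, and one test does not define you."),
}

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for (s, r, o) in KG if s == subject and r == relation]

def support_statement():
    emotion = query("child", "felt")[0]
    trigger = query(emotion, "triggered_by")[0]
    strategy = query(trigger, "support_strategy")[0]
    return TEMPLATES[strategy]
```

Because the graph persists across sessions, the same traversal also lets an expert inspect which events and emotions led to each robot utterance.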
This paper describes research aimed at supporting children's reading practice using a robot designed to interact with children as their reading companion. We use a learning-by-teaching scenario in which the robot has a similar or lower reading level compared to the children and needs help and extra practice to develop its reading skills. The interaction is structured around the robot reading to the child and sometimes making mistakes, as the robot is considered to be in the learning phase. The child corrects the robot by giving it instant feedback. To understand what kind of behavior can be more constructive to the interaction, especially in helping the child, we evaluated the effect of a deictic gesture, namely pointing, on the child's ability to find reading mistakes made by the robot. We designed three types of mistakes corresponding to different levels of reading mastery. We tested our system in a within-subject experiment with 16 children. We split the children into high and low reading proficiency groups, even though they were all beginners. For the high reading proficiency group, we observed that pointing gestures were beneficial for recognizing some types of mistakes that the robot made. For the earlier-stage group of readers, pointing helped them find mistakes arising from a mismatch between text and illustrations. However, surprisingly, for this same group of children, the deictic gestures were disruptive when recognizing mismatches between text and meaning.
Children with insufficient exposure to language during critical developmental periods in infancy are at risk for cognitive, language, and social deficits [55]. This is especially difficult for deaf infants, as more than 90% are born to hearing parents with little sign language experience [48]. We created an integrated multi-agent system involving a robot and a virtual human designed to augment language exposure for 6- to 12-month-old infants. Human-machine design for infants is challenging, as most screen-based media are unlikely to support learning [33]. While robots are presently incapable of the dexterity and expressiveness required for signing, even if such capability existed, developmental questions would remain about the capacity of artificial agents to engage infants in language. Here we engineered the robot and avatar to provide visual language and effect socially contingent, human-like conversational exchange. We demonstrate the successful engagement of our technology through case studies of deaf and hearing infants.
In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects and early studies regarding the effectiveness of social robots as second language tutors.
The benefit of social robots for supporting child learning in an educational context over an extended period of time is evaluated. Specifically, the effect of personalisation and adaptation of robot social behaviour is assessed. Two autonomous robots were embedded within two matched classrooms of a primary school for a continuous two-week period without experimenter supervision, acting as learning companions for the children for familiar and novel subjects. Results suggest that while children in both the personalised and non-personalised conditions learned, children interacting with a robot that personalised its behaviours showed increased learning of a novel subject, with indications that this benefit extended to other class-based performance. Additional evidence was obtained suggesting increased acceptance of the personalised robot peer over a non-personalised version. These results provide the first evidence in support of peer-robot behavioural personalisation having a positive influence on learning when embedded in a learning environment for an extended period of time.
An increasing number of human-robot interaction (HRI) studies are now taking place in applied settings with children. These interactions often hinge on verbal exchange to effectively achieve their goals. Great advances have been made in adult speech recognition, and it is often assumed that these advances will carry over to the HRI domain and to interactions with children. In this paper, we evaluate a number of automatic speech recognition (ASR) engines under a variety of conditions inspired by real-world social HRI settings. Using the data collected, we demonstrate that there is still much work to be done in ASR for child speech, with interactions relying solely on this modality still out of reach. However, we also make recommendations for child-robot interaction design in order to maximise the capability that does currently exist.
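Comparisons between ASR engines like those above are conventionally made with word error rate (WER): the word-level edit distance between the engine's hypothesis and a reference transcript, divided by the reference length. A minimal self-contained implementation (the metric itself, not the paper's evaluation code):

```python
# Word error rate via dynamic-programming edit distance over words.
# Substitutions, insertions, and deletions each count as one error.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # delete all remaining ref words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # insert all remaining hyp words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution in four reference words:
# wer("the robot says hello", "the robot say hello") == 0.25
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is not unusual for noisy child speech.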
Thinking aloud, while requiring extra mental effort, is a metacognitive technique that helps students navigate complex problem-solving tasks. Social robots, with an embodied immediacy that fosters engaging and compliant interactions, are a unique platform for delivering problem-solving support such as thinking aloud to young learners. In this work, we explore the effects of a robot platform and the think-aloud strategy on learning outcomes in the context of a one-on-one tutoring interaction. Results from a 2x2 between-subjects study (n=52) indicate that both the robot platform and the use of the think-aloud strategy promoted learning gains for children. In particular, the robot platform effectively enhanced immediate learning gains, measured right after the tutoring session, while the think-aloud strategy improved persistent gains as measured approximately one week after the interaction. Moreover, our results show that a social robot strengthened students' engagement and compliance with the think-aloud support while they performed cognitively demanding tasks. Our work indicates that robots can support metacognitive strategy use to effectively enhance learning and contributes to the growing body of research demonstrating the value of social robots in novel educational settings.
A common practice in education to accommodate the short attention spans of children during learning is to provide them with non-task breaks for cognitive rest. Holding great promise to promote learning, robots can provide these breaks at times personalized to individual children. In this work, we investigate personalized timing strategies for providing breaks to young learners during a robot tutoring interaction. We build an autonomous robot tutoring system that monitors student performance and provides break activities based on a personalized schedule according to performance. We conduct a field study to explore the effects of different strategies for providing breaks during tutoring. By comparing a fixed timing strategy with a reward strategy (break timing personalized to performance gains) and a refocus strategy (break timing personalized to performance drops), we show that the personalized strategies promote learning gains for children more effectively than the fixed strategy. Our results also reveal immediate benefits in enhancing efficiency and accuracy in completing educational problems after personalized breaks, showing the restorative effects of the breaks when administered at the right time.
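The three break-timing strategies compared above can be sketched as a single decision function over recent performance. The window-based performance representation, threshold, and interval below are illustrative assumptions, not the study's actual parameters.

```python
# Break-timing strategies for a robot tutor:
#   "fixed"   - break on a fixed schedule, regardless of performance
#   "reward"  - break after a performance gain between windows
#   "refocus" - break after a performance drop between windows

def should_break(strategy, step, recent_scores, interval=10, delta=0.2):
    """recent_scores: (previous_window_accuracy, current_window_accuracy)."""
    prev, curr = recent_scores
    if strategy == "fixed":
        return step % interval == 0
    if strategy == "reward":
        return curr - prev >= delta
    if strategy == "refocus":
        return prev - curr >= delta
    raise ValueError(f"unknown strategy: {strategy}")
```

A tutoring loop would call this after each problem, e.g. `should_break("refocus", step, (0.8, 0.5))` triggers a restorative break right when accuracy slips.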
In this paper, we present an approach to adaptive language tutoring in child-robot interaction. The approach is based on a dynamic probabilistic model that represents the inter-relations between the learner's skills, her observed behaviour in the tutoring interaction, and the tutoring action taken by the system. Implemented in a robot language tutor, the model enables the robot to trace the learner's knowledge and to decide which skill to teach next and how to address it in a game-like tutoring interaction. Results of an evaluation study are discussed, demonstrating how participants in the adaptive tutoring condition successfully learned foreign-language words.
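The skill-tracing idea behind such dynamic probabilistic models can be illustrated with Bayesian Knowledge Tracing (BKT), a standard member of this model family rather than the authors' exact formulation; the slip, guess, and learn probabilities below are illustrative.

```python
# One Bayesian Knowledge Tracing step: update P(skill known) from a
# correct/incorrect answer via Bayes' rule, then add the chance that the
# learner acquired the skill during the step.

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated probability that the learner knows the skill."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

# Trace one skill over a short answer sequence.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

A tutor built on such a trace can pick the next skill to teach, e.g. the one whose estimated `p` is lowest, which is the "decide which skill to teach next" step described above.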