Deriving Design Principles for Educational Chatbots from Empirical Studies on Human–Chatbot Interaction

Authors: Hyojung Jung, Jinju Lee, Chaeyeon Park
Korean title: 교육용 챗봇 설계 원리 도출: 휴먼-챗봇 상호작용에 관한 실증연구를 토대로
Source: Journal of Digital Contents Society 21(3), 2020.3, pp. 487-493 (7 pages)
Publisher: Digital Contents Society
URL: http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE09321283
APA Style: Hyojung Jung, Jinju Lee, Chaeyeon Park (2020). Deriving Design Principles for Educational Chatbots from Empirical Studies on Human–Chatbot Interaction. Journal of Digital Contents Society, 21(3), 487-493.
Accessed: Dankook University Jukjeon Campus (220.149.***.10), 2020/07/09 11:12 (KST)
Copyright 2020 The Digital Contents Society
http://www.dcs.or.kr
pISSN: 1598-2009
eISSN: 2287-738X
JDCS
Journal of Digital Contents Society
Vol. 21, No. 3, pp. 487-493, Mar. 2020
Deriving Design Principles for Educational Chatbots from
Empirical Studies on Human–Chatbot Interaction
Hyojung Jung1 · Jinju Lee2* · Chaeyeon Park3
1Assistant Professor, College of General Education, Dankook University, Gyeonggi-do 16890, South Korea
2Doctoral Course, Department of Educational Technology, Hanyang University, Seoul 04763, South Korea
3Master’s Course, Department of Educational Technology, Hanyang University, Seoul 04763, South Korea
[Abstract]
This study derives design principles according to the role of chatbots through a systematic review of educational chatbots. We
propose design principles that should be considered, depending on the role of the chatbot. When designing a chatbot that plays
the role of a tutor, it is necessary to consider the Live emotion principle, Modality principle, and Extraneous principle. When
designing a chatbot that acts as an evaluator, the Bot effect principle should be considered. When developing a chatbot that acts
as a responder, the Gender principle and Modality principle should be considered. In the case of a chatbot that plays the role of
a moderator, it is necessary to consider the Neutral emotion principle, and in the case of a chatbot that plays the role of peer
learner, the Modality principle (voice), the Imitation principle, and the Neutral emotion principle should be considered. In the
future, it is necessary to study the method of contents presentation and the differentiated role of educational chatbots.
Key word : Chatbot, Chatbot-Mediated Learning (CML), Design Principles
http://dx.doi.org/10.9728/dcs.2020.21.3.487
This is an Open Access article distributed under
the terms of the Creative Commons Attribution
Non-CommercialLicense(http://creativecommons
.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial
use, distribution, and reproduction in any medium, provided the
original work is properly cited.
Received 25 January 2020; Revised 15 March 2020
Accepted 25 March 2020
*Corresponding Author; Jinju Lee
Tel: +82-2-2220-1128
E-mail: jinju.a.lee@gmail.com
Ⅰ. Introduction
Chatbots are computer programs that help humans
communicate with computers through text or voice interactions.
With the proliferation of Massive Open Online Courses (MOOCs)
and the widespread use of messaging apps, the need for chatbots
in education is increasing. There are three reasons for introducing
chatbots. First, customer management costs can be lowered [11].
Second, they can shorten the time within which a response is
provided to the customer, can support the service 24 hours a day,
and can improve user satisfaction through customized
consultation. Third, it is possible to improve the product or
service by collecting information about the customer’s needs
during the conversation with the chatbot. We may expect similar benefits in education. Used for educational purposes, chatbot technology can make feedback to learners more efficient and available around the clock, increasing learner satisfaction. In addition, learning support may be optimized by collecting a variety of information about the learners. However, while chatbot technology is
evolving, its integration into education tends to be rather sluggish
[1]. There is a lack of research on the design principles to
consider when developing an educational chatbot. This study
aims to promote the development of educational chatbots by
setting out the principles to be considered in designing
educational chatbots, based on systematic analysis.
RQ1: How have chatbots been incorporated into empirical
studies on human–chatbot interaction?
RQ2: What implications for educational chatbots can be
derived from the studies?
Ⅱ. Theoretical Background
2-1 Expectations and Roles of Chatbots
A chatbot is a computer program to simulate human
conversation via text or voice interaction [19]. Other terms for
chatbots include talkbots, chatterbots, conversational agents,
artificial conversational entities, and a conversational system.
Efforts have been made to introduce chatbots or similar
technologies in the education field, and related terms include a
pedagogical agent or intelligent pedagogical agent (IPA),
intelligent tutoring systems (ITS), and Artificial Intelligence
Markup Language (AIML) -based chatbot. In the context of
technology-mediated learning [2], chatbot-mediated learning
(CML) contributes to motivation, self-directed learning, and
individual learning by providing learners with individual learning
environments that enhance the learning process and its outcomes.
More specifically, chatbots can influence the learner's learning process by changing the way in which information is found and communicated. First, rather than passively receiving content, learners can take the lead by asking their own questions. Second, chatbots can effectively support the learning process in large classrooms or in large online courses such as MOOCs, which may lower learner dissatisfaction and dropout rates. Third, chatbots can help learners make the right judgments by providing optimal information at the right time, and can provide continuous feedback to learners and teachers.
Application area      | Role of bot  | Goal
Demonstration         | Peer learner | Demonstration partner
Ideation              | Peer learner | Provide peer feedback
Fitness               | Peer learner | Fitness companion
Q&A                   | Guide        | Website navigation
Survey                | Guide        | Record responses
Information retrieval | Guide        | Search support
Customer service      | Guide        | Customer agent

Table 1. Empirical studies of chatbots
In general, chatbots are responsible for providing guidance,
answering questions, or facilitating specific actions as coaches or
colleagues (Table 1). In the educational context, the role of the
chatbot can be set in various ways, which can be divided into five roles (see Table 2): tutors, who guide and support the learning process of individual learners; evaluators, who check the learner's progress and diagnose performance; responders, who answer learners' questions; moderators, who mediate between instructors and learners through interaction with learners; and peer learners, who engage in everyday conversation.
Educational role of chatbot | Details
Tutor        | Provide individual and personalized support
Evaluator    | Assess learner's progress and performance
Responder    | Answer questions related to the learning task
Moderator    | Be a communication channel between instructor and learner
Peer learner | Be an interlocutor for common dialogue and conversation

Table 2. Educational roles of a chatbot
2-2 Principles of Chatbot Design
The following principles, derived from the chatbot design and development guidelines of Facebook [8], Intercom [12], and Microsoft [17], should be considered when designing chatbots.
The principles in table 3 provide guidelines on how to interact
with chatbots from the UI or UX standpoint, but do not provide a
standard on the purpose for which it should be used. In order to
actively use a chatbot in an educational context, design and
development guidelines should be prepared from the viewpoint of
teaching and learning.
Category | Principle | Source
Consistency | Use the UI components of the chat platform uniformly | [12]
Consistency | Optimize for all users and usage | [12]
Shortening | Support a way to solve problems faster | [17]
Shortening | Provide buttons and button-type replies to help quick selection in limited circumstances | [8], [17]
Feedback | Minimize the waiting process and make the user aware of the waiting state | [8], [17]
Feedback | Provide notifications in appropriate situations | [17]
Conversation | Organize the flow of words and contexts naturally, and maintain the standards of dialogue | [8], [12], [17]
Conversation | Ask questions carefully and check the user's intentions | [8]
Conversation | Provide appropriate humor | [8]
Problem response | Provide opportunities to respond to failures | [8], [17]
Problem response | Provide the ability to go back and cancel | [8], [12]
Recognition | Let users know clearly how to use the chatbot | [8]
Recognition | Make the chatbot's UI components intuitively recognizable | [8], [17]

Table 3. Chatbot design principles (examples)
NOTE: [8] Facebook, [12] Intercom, [17] Microsoft
Hints for deriving chatbot design principles can be found in research on conversational agents (CAs). Traditional research has focused mainly on agent support, voice, and appearance (see Table 4). Future research should include empirical and qualitative studies of the changes brought about by an agent's participation, as well as research into the agent's role.
Principle | Contents | Reference
Personification principle | The learner learns better when the agent is represented in a personalized rather than a non-personalized way. | [4], [13]
Voice principle | The learner learns better when exposed to a human voice rather than a machine voice. |
Image principle | The learner learns better when the speaker's face appears on the screen (image-present) than when it does not (no-image). |

Table 4. Research related to conversational agents
Ⅲ. Methodology
In order to establish an empirical ground from which to derive
design principles for educational chatbots, we first explored
previous chatbot studies and summarized their findings. From
there we extracted several implications for a chatbot design that is
suitable in an educational context. The review process began by
identifying the relevant research papers from Social Science
Citation Index (SSCI) and Science Citation Index Expanded
(SCIE) journals, which are of high quality and impact.
Conference proceedings and conceptual papers were excluded
from the search. Research papers published since 2005 were
collected using the keywords “conversational agent”, “chatbot”,
“pedagogical agent”, “conversational system”, “dialog system”,
“chatterbot”, “chat bot”, “chat-bot”, and “intelligent pedagogical
agent”. After the search process, we screened the articles by
distinguishing empirical studies that focused on interactions
between humans and chatbots. A total of seven studies from six
articles were reviewed.
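As an illustration only (the authors describe their screening narratively; the record fields and sample data below are hypothetical), the inclusion criteria above can be sketched as a simple filter over candidate records:

```python
# Illustrative sketch of the review's screening criteria.
# Record fields and sample data are hypothetical, not from the paper.

KEYWORDS = {
    "conversational agent", "chatbot", "pedagogical agent",
    "conversational system", "dialog system", "chatterbot",
    "chat bot", "chat-bot", "intelligent pedagogical agent",
}

def passes_screening(record):
    """Apply the review's inclusion criteria to one candidate record."""
    return (
        record["year"] >= 2005                          # published since 2005
        and record["venue_index"] in {"SSCI", "SCIE"}   # high-impact journals only
        and record["type"] == "journal article"         # proceedings/conceptual papers excluded
        and record["empirical"]                         # empirical human-chatbot studies only
        and any(k in record["keywords"] for k in KEYWORDS)
    )

candidates = [
    {"year": 2016, "venue_index": "SSCI", "type": "journal article",
     "empirical": True, "keywords": {"chatbot"}},
    {"year": 2003, "venue_index": "SSCI", "type": "journal article",
     "empirical": True, "keywords": {"pedagogical agent"}},        # too early
    {"year": 2015, "venue_index": "SCIE", "type": "proceedings",
     "empirical": True, "keywords": {"conversational agent"}},     # excluded type
]

included = [r for r in candidates if passes_screening(r)]
print(len(included))  # only the first candidate survives screening
```

The real screening additionally involved manual reading to judge whether a study focused on human-chatbot interaction, which no keyword filter can replace.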
Ⅳ. Findings
4-1 Research question 1: How have chatbots been
incorporated into empirical studies on
human-chatbot interaction?
To answer the research question, we organized the review
findings into two sets; one sorted by chatbot feature and the other
by research variables and results. Basic information on each study
was included in the first set (see Table 5). Of the seven studies reviewed, all were conducted in a higher education setting except for that of Corti and Gillespie (2016) [6], which was in an open setting, and that of van der Meij, van der Meij, and Harmsen (2015) [21], which was at a secondary school. The articles covered target knowledge in a varied range of disciplines, such as healthy eating behavior [3], the circulatory system [9], instructional planning [14], and kinematics [21]. The chatbots used in the studies also differed from each other.
The chatbot features examined in the studies were mostly
variations of delivery types (or representation types). They
included expressions made by chatbots (e.g., facial expression,
emotional expression, empathetic expression), the gender of the
chatbots (i.e., male and female), modality (e.g., voice, text), and
other representation types (e.g., head movement). A few studies
incorporated instructional features into chatbots by providing
prompts and feedback [9] and motivational scaffolding [21].
Ref. no. | Setting | Participants | Context | Target knowledge | Chatbot type | Chatbot feature
[3] | Higher education | 144 | - | Healthy eating behavior | Embodied conversational agent (GRETA) | Various presentation types; facial expression; emotional expression; modality
[6] | - | 108 adults | Lab experiment | - | Artificial conversational agent (Cleverbot) | Modality
[9] | Higher education | 123 undergrads | MetaTutor learning environment | Circulatory system | Four pedagogical agents (Gavin the guide, Mary the monitor, Pam the planner, Sam the strategizer) | Prompt and feedback
[14] | Higher education | 142 college students | Computer literacy course | Instructional planning | Pedagogical agent as a learning companion (PAL) | Gender difference; emotional expression
[14] | Higher education | 56 pre-service teachers | Introductory educational technology course | Instructional planning | Pedagogical agent as a learning companion (PAL) | Gender difference; empathic expression
[18] | Higher education | 60 undergrads | Common dialogue | - | Embodied conversational agent (ECA) | Facial expression; head movement
[21] | Secondary school | 61 third-years | Inquiry learning | Kinematics | Animated pedagogical agent (APA) | Motivational scaffolding; modality

Table 5. Articles reviewed, sorted by chatbot features
The major findings of the studies are listed in Table 6. Overall, the results showed a tendency for participants to project their human-to-human interaction practices onto their human-to-chatbot interactions, especially when the chatbot was designed to be more human-like. In detail, participants reported more positive outcomes when the chatbots expressed or represented emotion than when they interacted with chatbots designed to exhibit neutral emotion [3], [14], [18]. They also exhibited social stereotyping towards gendered chatbots [14]. In the case of modality, though the results were not perfectly consistent, participants seemed to understand a text-based chatbot better than a speaking chatbot [3], while they showed more human-like interaction with the latter [6], [18].
4-2 Research question 2: What implications for
educational chatbots can be derived from the
studies?
From the review, we reorganized the findings with similar attributes and characteristics. Explanations for each attribute were then elaborated in the learning context.

Ref. no. | Intervention | Dependent variable | Result
[3] | Presentation type: neutral expression; neutral expression (human); voice only; text only; consistent expression; inconsistent expression | Perception (likelihood of following, ease of understanding, trustworthiness, helpfulness, likeability, quality of evidence, convincingness); memory performance | Ease of understanding: text > neutral, human, voice. Trustworthiness: neutral, text, voice > human. Helpfulness: neutral, human > voice. Likeability: neutral, human > voice. Memory performance: voice, human, text > neutral; consistent > neutral, inconsistent
[6] | Screen: text vs. voice. Awareness: participants informed that their interlocutor is a chatbot vs. not informed | Intersubjective effort | Voice > text; informed > not informed
[9] | Prompt and feedback vs. no prompt and no feedback | Achievement emotions; personality; agent response; pre-test; post-test | Relationship between trait emotions (anger, anxiety) and personality (agreeableness, conscientiousness, neuroticism) for agent-directed emotions (enjoyment, pride, boredom, neutral); no significant relationship between personality and trait emotion on learning gain
[14] | Emotional expression: positive, negative, neutral. Gender of agent: male, female | Social judgement; interest; self-efficacy; learning | Social judgement: positive, neutral > negative; positive male > positive female. Interest: positive male > positive female
[14] | Empathetic response: responsive, nonresponsive. Gender of agent: male, female | Social judgement; interest; self-efficacy; learning | Social judgement: male > female. Interest: responsive > nonresponsive. Self-efficacy: responsive > nonresponsive
[18] | Interaction mode: written input, spoken input. Subject groups: science, humanities | User attitude | Spoken input produces a warmer attitude and richer language use; this effect is more evident in the humanities group
[21] | Time: pre-intervention, during intervention 1, during intervention 2, after intervention. Condition: visible agent with voice, voice only, no agent. Student gender: boy, girl | Task relevance change; self-efficacy over time; agent appraisal; pre-test; post-test | Self-efficacy: boy > girl; no main effect for condition. Agent appraisal: girl > boy. Learning: with condition and gender fixed, students made significant progress over time; the benefit of the agent group over the control group is doubtful

Table 6. Summary of results in the articles reviewed

The implications are as follows.
- Live emotion: chatbots are better when designed to display consistent facial expressions or positive emotional expressions.
- Neutral emotion: a chatbot with a neutral emotional expression is more acceptable for persuasion.
- Modality: written text is better for delivering information or guiding a process; spoken text is better for affective support.
- Extraneous: too many animated or visual graphics have a detrimental effect on performance.
- Gender: people project social gender stereotypes according to the chatbot's gender, and value information from a chatbot differently depending on its gender representation.
- Bot effect: a chatbot can perform work that is repetitive and requires accuracy better than a human can.
- Imitation: more human-like chatbots drive more human-like interactions and establish a trusting relationship when giving information.
After extracting the implications, they were matched with each role of the educational chatbot (i.e., tutor, evaluator, responder, moderator, peer learner); see Table 7.
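To make the matching concrete, the role-to-principle assignments described above can be read as a simple lookup; the sketch below merely restates that matching in code, with illustrative (not the authors') names:

```python
# Role-to-principle mapping, restating the matching described in the paper.
# Dictionary keys and function names are illustrative, not from the paper.
ROLE_PRINCIPLES = {
    "tutor":        ["Live emotion", "Modality", "Extraneous"],
    "evaluator":    ["Bot effect"],
    "responder":    ["Gender", "Modality (text)"],
    "moderator":    ["Neutral emotion"],
    "peer learner": ["Modality (voice)", "Imitation", "Neutral emotion"],
}

def principles_for(role: str) -> list[str]:
    """Return the design principles to consider for a given chatbot role."""
    key = role.strip().lower()
    if key not in ROLE_PRINCIPLES:
        raise ValueError(f"Unknown chatbot role: {role!r}")
    return ROLE_PRINCIPLES[key]

print(principles_for("Tutor"))  # ['Live emotion', 'Modality', 'Extraneous']
```

A designer building, say, a peer-learner chatbot would thus start from voice modality, human-like imitation, and neutral emotional expression, rather than applying all seven principles indiscriminately.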
Ⅴ. Discussion
This study derives design principles according to the role of a
chatbot by using a systematic review of recently published
literature on educational chatbots. This approach can be expected
to help in the design and development of educational chatbots in
situations where there is insufficient chatbot development and
related research in an educational context. The findings of this
study can be summarized as follows.
5-1 Key result
In order to derive design principles for educational chatbots, we examined how the seven studies investigated the effects of chatbot appearance characteristics, such as facial expression, gender, and modality, on the learning process and performance. Overall, chatbots that express emotion rather than remaining neutral, and text-based rather than speech-based interactions, contributed more to learning. The design principles derived from
contribute more to learning. The design principles derived from
this are the Live emotion principle, Neutral emotion principle,
Modality principle, Extraneous principle, Gender principle, Bot
effect principle, Imitation principle, and so on. In addition, this
study matched design principles to be considered according to the
role of chatbot when designing an educational chatbot. When
designing a chatbot that plays the role of a tutor, it is necessary to
consider the Live emotion principle, Modality principle, and
Extraneous principle. When designing a chatbot that acts as an
evaluator, the Bot effect principle should be considered. When
developing a chatbot that acts as a responder, the Gender
principle and Modality principle should be considered. In the case
of a chatbot that plays the role of a moderator, it is necessary to
consider the Neutral emotion principle, and in the case of a
chatbot that plays the role of peer learner, the Modality principle
(voice), the Imitation principle, and the Neutral emotion principle
should be considered. In this study, we explored some principles
for educational chatbots based on previous studies, but most of
them were related to the appearance characteristics of chatbots. In
the future, research is needed on chatbots' content presentation methods and differentiated roles.
5-2 Areas for further study
As mentioned above, there are relatively few studies on the
principles to be considered in the design of educational chatbots
and the appropriate design principles according to the role of the
chatbots. Related research needs to be actively conducted in the
future, and research on suitable design principles is required
according to the purpose and role of the chatbot.
Educational role of chatbot | Implication from the studies
Tutor | Live emotion: chatbots are better when designed to display consistent facial expressions or positive emotional expressions. Modality: written text is better for delivering information or guiding a process; spoken text is better for affective support. Extraneous: too many animated or visual graphics have a detrimental effect on performance
Evaluator | Bot effect: a chatbot can perform work that is repetitive and requires accuracy better than a human can
Responder | Gender: people project social gender stereotypes onto the chatbot's gender, and value information from a chatbot differently depending on its gender representation. Modality (text): written text is better for delivering information
Moderator | Neutral emotion: a chatbot with a neutral emotional expression is more acceptable for persuasion and establishing a trusting relationship than for giving information
Peer learner | Modality (voice): spoken input produces a warmer attitude and richer language use. Imitation: more human-like chatbots drive more human-like interaction. Neutral emotion: a chatbot with a neutral emotional expression is more acceptable for persuasion and establishing a trusting relationship than for giving information

Table 7. Implications from the review for educational chatbots

Prior studies have found it difficult to reach consensus on the characteristics of educationally effective chatbots, but learners want to learn with more human-like and emotional chatbots. Although
this may be beneficial in terms of motivation, further research is
needed to determine whether it will have significant effects on
learning outcomes. In addition, it is necessary to study the
differences between education through chatbots and through other
educational methods, and in short- and long-term settings.
It is also necessary to study how the instructor's role and the instructor-learner interaction change with the educational use of chatbots. Research is also
required on the side effects of using chatbots and the degree of
acceptance according to learners’ characteristics; for example,
study of how the chatbot’s performance varies according to a
learner’s ability to use a computer, propensity to cooperate,
learning style, and learning level. There is also a need for research
on the cost-effectiveness of educational use. It is also necessary to discuss in which educational contexts and for which educational purposes a chatbot is most effective, and whether a cost-effectiveness analysis justifies introducing one.
5-3 Limitation
This study has some limitations. First of all, although some
papers have educational contexts, they include cases that are not
for educational purposes, so it is hard to say that they derive
principles entirely for educational chatbots. Since this study did not examine the gray literature, such as theses, ongoing research, non-indexed academic journals, and research reports, there is a possibility of publication bias. It is also difficult to avoid language bias, because only papers in English were included. However, this study addressed the under-examined area of educational chatbots, and it offers value in attempting to derive differentiated design principles. Developing chatbots with various educational purposes and roles will require collaborative efforts among diverse experts.
Acknowledgements
This work was supported by a National Research Foundation of Korea grant funded by the Korean Government (KRF-2019-S1A5A8-036708).
References
[1] 6 Ways Artificial Intelligence and Chatbots Are Changing Education. Chatbots Magazine. Available: https://chatbotsmagazine.com/six-ways-a-i-and-chatbots-are-changing-education-c22e2d319bbf
[2] Alavi, M., Leidner, D. E, “Research commentary: Technology-mediated learning: A call for greater depth and breadth of research”, Information Systems Research, Vol. 12, No. 1, pp. 1-10, 2001.
[3] Berry, D., Butler, L., de Rosis, F, “Evaluating a realistic
agent in an advice-giving task”, International Journal of
Human-Computer Studies, Vol. 63, No. 3, pp. 304-327,
2005.
[4] Bodemer, D., Ploetzner, R., Feuerlein, I., Spada, H, “The
active integration of information during learning with
dynamic and interactive visualisations”, Learning and
Instruction, Vol. 14, No. 3, pp. 325-341, 2004.
[5] Chatbots Infographic - Key Statistics 2017. Available: https://www.bevytechnologies.com/infographic-chatbots-key-statistics-2017
[6] Corti, K., Gillespie, A, “Co-constructing intersubjectivity
with artificial conversational agents: People are more
likely to initiate repairs of misunderstandings with agents
represented as human”, Computers in Human Behavior,
Vol. 58, pp. 431-442, 2016.
[7] Eisman, E., López, V., Castro, J, “A framework for
designing closed domain virtual assistants”, Expert
Systems with Applications, Vol. 39, No. 3, pp. 3135-3144,
2012.
[8] Facebook for Developers: Design Principles. Available: https://developers.facebook.com/docs/messenger-platform/introduction/general-best-practices/
[9] Harley, J., Carter, C., Papaioannou, N., Bouchet, F., Landis, R., Azevedo, R., Karabachian, L, “Examining the predictive relationship between personality and emotion traits and students' agent-directed emotions: towards emotionally-adaptive agent-based learning environments”, User Modeling and User-Adapted Interaction, Vol. 26, No. 2-3, pp. 177-219, 2016.
[10] Hasler, B., Tuchman, P., Friedman, D, “Virtual research
assistants: Replacing human interviewers by automated
avatars in virtual worlds”, Computers in Human Behavior,
Vol. 29, No. 4, pp. 1608-1616, 2013.
[11] Hayashi, Y., Ono, K, in 2013 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), pp. 120-125, IEEE Press, 2013.
[12] Intercom: Principles of Bot design: Inside Intercom.
Available: https://blog.intercom.io/principles-bot-design/
[13] Kester, L., Kirschner, P., van Merriënboer, J, “The
management of cognitive load during complex cognitive
skill acquisition by means of computer-simulated problem
solving”, British Journal of Educational Psychology, Vol.
75, No. 1, pp. 71-85, 2005.
[14] Kim, Y., Baylor, A., Shen, E, “Pedagogical agents as learning companions: the impact of agent emotion and gender”, Journal of Computer Assisted Learning, Vol. 23, No. 3, pp. 220-234, 2007.
[15] Louvet, J., Duplessis, G., Chaignaud, N., Vercouter, L.,
Kotowicz, J, “Modeling a collaborative task with social
commitments”, Procedia Computer Science, Vol. 112, pp.
377-386, 2017.
[16] Mell, J., Lucas, G., & Gratch, J, “An effective conversation
tactic for creating value over repeated negotiations”, In: the
2015 International Conference on Autonomous Agents and
Multiagent Systems, pp. 1567-1576. International
Foundation for Autonomous Agents and Multiagent
Systems, Istanbul, 2015.
[17] Microsoft: Principles of bot design. Available: https://docs.microsoft.com/en-us/azure/bot-service/bot-service-design-principles?view=azure-bot-service-4.0
[18] Novielli, N., de Rosis, F., Mazzotta, I, “User attitude
towards an embodied conversational agent: Effects of the
interaction mode”, Journal of Pragmatics, Vol. 42, No. 9,
pp. 2385-2397, 2010.
[19] Rouse, M, “What is chatbot?” [Online]. Available: https://searchcustomerexperience.techtarget.com/definition/chatbot
[20] Turunen, M., Hakulinen, J., Ståhl, O., Gambäck, B.,
Hansen, P., Rodríguez Gancedo, M., de la Cámara, R.,
Smith, C., Charlton, D., Cavazza, M, “Multimodal and
mobile conversational Health and Fitness Companions”,
Computer Speech & Language, Vol. 25, No. 2, pp.
192-209, 2011.
[21] van der Meij, H., van der Meij, J., Harmsen, R, “Animated
pedagogical agents effects on enhancing student
motivation and learning in a science inquiry learning
environment”, Educational Technology Research and
Development, Vol. 63, No. 3, pp. 381-403, 2015.
[22] Xu, K., Lombard, M, “Persuasive computing: Feeling peer
pressure from multiple computer agents”, Computers in
Human Behavior, Vol. 74, pp. 152-162, 2017.
정효정(Hyojung Jung)
이진주(Jinju Lee)
박채연(Chaeyeon Park)
... To address these issues, promoting engagement through gender sensitivity is crucial (MR1). As Jung et al. (2020) noted, gender stereotyping in CA interactions can significantly impact how information is received and valued by users, especially in the computer science domain. Faenza et al. (2021) highlight the persistence of the digital gender gap, attributing it significantly to entrenched stereotypes that paint the computer science sector as dominantly masculine and unwelcoming to women, thereby deterring their participation. ...
... Establishing Credible and Relatable Interactions (MR4) is crucial to a supportive learning environment. As Jung et al. (2020) mentioned, imitating human-like traits in chatbots can foster more authentic interactions and build a trusting relationship, which is crucial for engaging learners effectively. As Latham (2022) points out, trust in the CA is vital for mitigating negative feelings such as frustration and enhancing the learning experience. ...
... Enhancing User Empowerment (MR6) plays a crucial role. As outlined by Jung et al. (2020) and Karra and Lasfar (2021), CAs can offer tailored support and feedback like a real teacher, fostering student empowerment. This empowerment is further enhanced by their ability to provide instant corrective feedback, generate automatic scoring, and assist with revisions throughout the learning process, as discussed by Zhang and Aslan (2021). ...
Conference Paper
In our study, we delve into gender disparities within online computer science courses, investigating the impact of gender stereotypes on learners' choices and performance. Drawing on Stereotype Threat Theory, we pinpoint psychological barriers hindering inclusivity and propose conversational agents as a design intervention to address these challenges. Our conversational agent prototype, developed and evaluated with a seventh-grade class, aims to dismantle gender stereotypes, motivate girls to pursue computer science, and contribute to broader societal goals of gender equality in the IT field. Utilizing a design science approach, our findings provide actionable insights for platform providers to engage underrepresented users. In addition, our research contributes valuable design knowledge for conversational agents, specifically tailored to support girls in computer science education.
... These simulations are integrated into educational settings to enhance both technical and non-technical skill acquisition (Komasawa & Yokohira, 2023), allowing students to experience realistic scenarios that may be difficult or impossible to replicate in a traditional classroom, providing a deep understanding of the subject matter through immersive learning experiences. Additionally, according to Jung et al. (2020) and Kuhail et al. (2023), AI chatbots can simulate the social dynamics of human interaction, enhancing the learning experience through simulated social learning environments and engaging students in meaningful dialogues akin to those between peers, making the educational process more dynamic and closely mirroring the interactive nuances of peer learning. ...
Chapter
Online learning has become fundamental to modern academic and professional development. Amidst its widespread adoption, there is increasing integration of artificial intelligence (AI) to enhance the learning experience. Understanding student engagement within these AI-powered digital platforms is crucial, as it directly influences learning outcomes and satisfaction. This chapter provides a narrative review of key theories and models essential for analyzing engagement in virtual learning contexts. Particularly, it focuses on constructivist learning theory, social learning theory, cognitive load theory, flow theory, technology acceptance model, self-determination theory, cognitive theory of multimedia learning, and feedback intervention theory. By examining these frameworks through an epistemological lens, the chapter explores how knowledge acquisition, cognitive processing, and social learning principles interact within AI-enhanced educational contexts. The insights reported here can serve as a guide for optimizing AI to maximize student involvement and educational efficacy.
... The advent of AI-driven chatbots, exemplified by ChatGPT and Gemini, has catalyzed a substantial disruption within the sphere of education, profoundly impacting conventional pedagogical methods, evaluation procedures, and the monitoring of academic progress. The versatility inherent in AI chatbots is evident in their diverse applications (Ram et al., 2018; Vaidyam et al., 2019; Wollny et al., 2021; Xu et al., 2017), which have been rigorously explored within the academic domain (Hobert, 2019; Hobert & von Wolff, 2019; Jung et al., 2020; Pérez-Marín, 2021; Pérez et al., 2020; Smutny & Schreiberova, 2020; Strzelecki & ElArabawy, 2024; Winkler & Söllner, 2018; Wollny et al., 2021). ...
Article
The widespread popularity of ChatGPT and other AI chatbots has sparked debate within the scientific community, particularly regarding their impact on academic integrity among students. While several studies have examined AI's role in education, a significant gap remains concerning how AI chatbot usage affects students' perceptions of academic integrity. This study aims to address this gap through rigorous quantitative techniques to explore the dynamics of student interactions with AI chatbots and assess whether this engagement diminishes academic integrity in higher education. Using a non-experimental design, the research investigates the causal relationship between AI chatbot usage and academic integrity, focusing on eight latent variables identified in the literature. A stratified sampling technique was employed to collect a representative sample of 594 participants via a 5-point Likert scale survey from four Southern Asian countries. The dataset underwent extensive statistical analysis using Structural Equation Modeling (SEM) techniques. The findings establish significant links between motivations for using AI chatbots and a decline in academic integrity. The study identifies a behavioral link between academic integrity and pedagogical limitations, highlighting traditional classroom-based pedagogy as the most impactful factor influencing students' motivation to engage with AI chatbots. This research not only quantitatively addresses ethical concerns related to AI in academia but also offers insights into user behavior by assigning distinct weights to post-usage behavioral factors, differentiating it from earlier studies that treated these factors equally.
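The stratified sampling procedure described in this abstract can be sketched in a few lines. This is an illustrative toy, not the study's instrument: the country codes, stratum sizes, and sampling fraction are all hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(rows, key, frac, seed=42):
    """Draw the same fraction from every stratum defined by `key`,
    so each subgroup is proportionally represented in the sample."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for row in rows:
        strata[row[key]].append(row)
    sample = []
    for group in strata.values():
        k = round(len(group) * frac)        # per-stratum quota
        sample.extend(rng.sample(group, k)) # without replacement
    return sample

# Hypothetical respondent pool spanning four countries, loosely
# mirroring the four-country design described in the abstract.
pool = [{"id": i, "country": c}
        for i, c in enumerate(["BD", "IN", "PK", "LK"] * 250)]

picked = stratified_sample(pool, key="country", frac=0.1)
print(len(picked))  # 25 respondents per country, 100 in total
```

Stratifying before sampling guards against one country dominating the sample, which a simple random draw over the pooled population cannot guarantee.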
... In education, Griol and Callejas proposed a modular architecture to integrate chatbots into multimodal applications for education, featuring the ability to easily adapt technical and pedagogical content [10]. Farah et al. proposed a technical blueprint for integrating task-oriented agents in education along with a proof-of-concept implementation [5], while Jung et al. proposed a set of chatbot design principles derived from a literature review of empirical studies [12]. ...
Chapter
Driven by the rising popularity of chatbots such as ChatGPT, there is a budding line of research proposing guidelines for chatbot design, both in general and specifically for digital education. Nevertheless, few researchers have focused on providing conceptual tools to frame the chatbot design process itself. In this paper, we present a model to guide the design of educational chatbots. Our model aims to structure participatory design sessions in which different stakeholders (educators, developers, and learners) collaborate in the ideation of educational chatbots. To validate our model, we conducted an illustrative study in which 25 software design students took part in a simulated participatory design session. Students were divided into eight groups, assigned the role of one of the different stakeholders, and instructed to use our model. The results of our qualitative analysis suggest that our model helped structure the design process and align the contributions of the various stakeholders.
... Academic integrity tools, like Turnitin, have introduced AI detectors to identify ChatGPT-generated content (Chechitelli, 2023), but this hasn't deterred the increasing use of such AI-based platforms. The versatility inherent in AI chatbots is evident in their diverse applications (Ram et al., 2018; Vaidyam et al., 2019; Wollny et al., 2021; Xu et al., 2017), which have been rigorously explored within the academic domain (Hobert, 2019; Hobert & von Wolff, 2019; Jung et al., 2020; Pérez-Marín, 2021; Pérez et al., 2020; Smutny & Schreiberova, 2020; Winkler & Söllner, 2018; Wollny et al., 2021). ...
Article
ChatGPT has been a revolutionizing AI Chatbot, that has impacted academia significantly. Although there has been a number of studies over the influence of AI and education, a focus towards why students use it is minimal. This study explores factors affecting student use of ChatGPT through a triangulation approach, blending exploratory, qualitative, and quantitative analyses. Through exploration of factors via prior literature and analysis of narratives by the respondents of qualitative analysis to measuring their relationships quantitatively using SEM techniques, the study identifies core reasons to use ChatGPT by students of higher education. The findings highlight the significance of six out of seven studied exogenous variables, demonstrating strong positive associations with students' intention to use ChatGPT and their actual engagement with the platform. These empirical results validate the importance of these factors in shaping students' interactions with ChatGPT, also revealing a previously undiscovered factor that enriches existing knowledge. By substantiating qualitative insights and establishing a robust quantitative model, it enhances students' comprehension of ChatGPT adoption in educational contexts, contributing significantly to the field's knowledge base and works as a guide for policymakers to effectively design strategies in order to get maximum benefit from the tool's usage in academia.
... The results concluded that there is a good relationship between chatbots and e-commerce: integrating a chatbot into e-commerce increases customer trust and satisfaction, and thereby the likelihood of a purchase decision, because the chatbot responds quickly, is unaffected by mood, works around the clock, can recommend a suitable product, displays all product data quickly, and promptly meets customer needs. Statistically significant differences were found between customers by age, in favor of the younger age groups; however, the more complex the tasks, problems, and inquiries the customer faces, the lower the trust in and use of the chatbot, so not all dimensions showed statistically significant differences when compared with one another. The study presented several research recommendations and their application. Keywords: artificial intelligence - interactive chatbots - chatbots - e-commerce - customer satisfaction - purchase intention and decision. (Jung et al., 2020) ...
Article
Innovative technologies are a part and parcel of modern education. Being integrated in the education process, neural network technologies are of particular interest now. Chatbots are used for different purposes: to gain and consolidate knowledge, to test knowledge, to study foreign languages, to organize the education process, etc. The use of generative AI contributes to education efficiency, improves skills and students’ motivation. One of the most popular and promising generative neural networks is ChatGPT. The use of this chatbot got mixed reviews, and this article examines the possibilities of using chatbots in education. We attempt to use ChatGPT as a support tool for compiling a training manual on developing listening skills for EFL (English as a foreign language) students. We analyze the advantages and disadvantages of using ChatGPT for compiling teaching materials and outline recommendations on how to use this tool efficiently. We found that, in general, the use of ChatGPT optimizes the process of preparing training materials for intermediate-level students. However, it is important to take into account a number of features such as the language register, the genre and text type, semantic breakdown of the text, the risk of neural network making an error, etc. The article concludes that the chatbot, if used correctly, can become an effective virtual assistant for an EFL teacher.
Article
Modern Software Engineering thrives with innovative tools that aid developers in creating better software grounded on quality standards. Software bots are an emerging and exciting trend in this regard, supporting numerous software development activities. As an emerging trend, few studies describe and analyze different bots in software development. This research presents a systematic literature review covering the state of the art of applied and proposed bots for software development. Our study spans literature from 2003 to 2022, with 82 different bots applied in software development activities, covering 83 primary studies. We found four bot archetypes: chatbots which focus on direct communication with developers to aid them, analysis bots that display helpful information in different tasks, repair bots for resolving software defects, and development bots that combine aspects of other bot technologies to provide a service to the developer. The primary benefits of using bots are increasing software quality, providing useful information to developers, and saving time through the partial or total automation of development activities. However, drawbacks are reported, including limited effectiveness in task completion, high coupling to third-party technologies, and some prejudice from developers toward bots and their contributions. We discovered that including Bots in software development is a promising field of research in software engineering that has yet to be fully explored.
Article
Our goal is to design software agents able to collaborate with a user on a document retrieval task. To this end, we studied a corpus of human–human collaborative document retrieval tasks involving a user and an expert. Starting with a scenario built from the analysis of this corpus, we adapt it for a human-machine collaborative task. We propose a model based on social commitments to link the task itself (collaborative document retrieval) and the interaction with the user that our assistant agent has to manage. Then, we specify some steps of the scenario with our model. The notion of triggers in our model implements the deliberative process of the assistant agent.
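The commitment-plus-trigger idea in this abstract can be illustrated with a toy sketch. This is not the authors' formalism: the event names, the `Commitment` fields, and the retrieval scenario below are hypothetical, chosen only to show how triggers can map observed events to commitment updates.

```python
from dataclasses import dataclass

@dataclass
class Commitment:
    """A social commitment: `debtor` owes `action` to `creditor`."""
    debtor: str
    creditor: str
    action: str
    state: str = "active"   # active -> fulfilled (or cancelled)

class AssistantAgent:
    """Toy deliberation loop: named triggers react to observed
    events by creating or discharging commitments."""
    def __init__(self):
        self.commitments = []
        self.triggers = {}   # event name -> handler

    def on(self, event, handler):
        self.triggers[event] = handler

    def observe(self, event, **info):
        handler = self.triggers.get(event)
        if handler:
            handler(self, **info)

agent = AssistantAgent()

# When the user requests documents, the agent commits to retrieving them.
agent.on("user_request",
         lambda a, query: a.commitments.append(
             Commitment("agent", "user", f"retrieve:{query}")))

# When results are delivered, the matching commitment is fulfilled.
def delivered(a, query):
    for c in a.commitments:
        if c.action == f"retrieve:{query}" and c.state == "active":
            c.state = "fulfilled"
agent.on("results_delivered", delivered)

agent.observe("user_request", query="deep learning")
agent.observe("results_delivered", query="deep learning")
print([(c.action, c.state) for c in agent.commitments])
```

Keeping commitments as explicit state, rather than implicit in the dialogue, is what lets the agent deliberate about what it still owes the user at any point in the interaction.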
Article
The current study examined the relationships between learners’ (N = 123) personality traits, the emotions they typically experience while studying (trait studying emotions), and the emotions they reported experiencing as a result of interacting with four pedagogical agents (agent-directed emotions) in MetaTutor, an advanced multi-agent learning environment. Overall, significant relationships between a subset of trait emotions (trait anger, trait anxiety) and personality traits (agreeableness, conscientiousness, and neuroticism) were found for four agent-directed emotions (enjoyment, pride, boredom, and neutral) though the relationships differed between pedagogical agents. These results demonstrate that some trait emotions and personality traits can be used to predict learners’ emotions directed toward specific pedagogical agents (with different roles). Results provide suggestions for adapting pedagogical agents to support learners’ (with certain characteristics; e.g., high in neuroticism or agreeableness) experience of adaptive emotions (e.g., enjoyment) and minimize their experience of non-adaptive emotions (e.g., boredom). Such an approach presents a scalable and easily implementable method for creating emotionally-adaptive, agent-based learning environments, and improving learner-pedagogical agent interactions in order to support learning.
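The trait-to-emotion relationships this study reports are correlational, which can be sketched with a plain Pearson correlation. The scores below are invented for illustration; they are not the study's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings: trait anxiety vs. boredom directed at one agent.
trait_anxiety = [2.0, 3.5, 4.0, 1.5, 3.0, 4.5]
agent_boredom = [1.8, 3.2, 4.1, 1.2, 2.9, 4.3]
print(round(pearson_r(trait_anxiety, agent_boredom), 2))
```

A strong positive r here would mirror the paper's claim that certain trait emotions predict the emotions learners direct at specific agents, which is what makes trait measures usable as inputs for emotionally-adaptive agent behavior.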
Article
This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most “intersubjective effort” toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction.
Conference Paper
Automated negotiation research focuses on getting the most value from a single negotiation, yet real-world settings often involve repeated serial negotiations between the same parties. Repeated negotiations are interesting because they allow the discovery of mutually beneficial solutions that don't exist within the confines of a single negotiation. This paper introduces the notion of Pareto efficiency over time to formalize this notion of value-creation through repeated interactions. We review literature from human negotiation research and identify a dialog strategy, favors and ledgers, that facilitates this process. As part of a longer-term effort to build intelligent virtual humans that can train human negotiators, we create a conversational agent that instantiates this strategy, and assess its effectiveness with human users, using the established Colored Trails negotiation testbed. In an empirical study involving a series of repeated negotiations, we show that humans are more likely to discover Pareto optimal solutions over time when matched with our favor-seeking agent. Further, an agent that asks for favors during early negotiations, regardless of whether these favors are ever repaid, leads participants to discover more joint value in later negotiations, even under the traditional definition of Pareto optimality within a single negotiation. Further, agents that match their words with deeds (repay their favors) create the most value for themselves. We discuss the implications of these findings for agents that engage in long-term interactions with human users. Copyright © 2015, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
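The Pareto-optimality notion this paper builds on is the standard dominance check over joint payoffs, which is easy to sketch. The payoff tuples below are hypothetical, not the Colored Trails testbed's data; the point is only that summing payoffs over repeated rounds can expose non-dominated outcomes no single round offers.

```python
def dominates(a, b):
    """Outcome a Pareto-dominates b if no party is worse off under a
    and at least one party is strictly better off."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(outcomes):
    """Keep the outcomes not dominated by any other outcome."""
    return [o for o in outcomes
            if not any(dominates(p, o) for p in outcomes if p != o)]

# Hypothetical joint payoffs (agent, human) summed over two serial
# negotiations; trading favors across rounds can unlock pairs like
# (7, 7) that neither single round offers on its own.
joint_outcomes = [(6, 6), (8, 4), (4, 8), (5, 5), (7, 7)]
print(pareto_front(joint_outcomes))
```

Here (5, 5) and (6, 6) drop out because (7, 7) makes both parties better off, while the asymmetric trades (8, 4) and (4, 8) survive: that surviving set is the Pareto front the favor-and-ledger strategy helps negotiators reach over time.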
Article
This study focuses on the design and testing of a motivational animated pedagogical agent (APA) in an inquiry learning environment on kinematics. The aim of including the APA was to enhance students’ perceptions of task relevance and self-efficacy. Given the under-representation of girls in science classrooms, special attention was given to designing an APA that would appeal to the female students. A review of the literature suggested that the best design solution would be an agent who was female, young, attractive, and “cool”. An experiment compared three conditions: agent (image and voice), voice (no image), and control (no image and no voice). The research question was whether students’ motivation and knowledge changed over time as they worked in the inquiry learning environment, and whether condition and gender affected such changes. Participants were 61 third-year students (mean age 14.7 years) from a secondary school. Gender was distributed evenly within and across conditions. A significant main effect of time on self-efficacy was found, with self-efficacy beliefs increasing significantly for both boys and girls. In addition, there was a significant interaction between time, condition, and gender for self-efficacy. About halfway during training, girls’ self-efficacy beliefs significantly increased in both experimental conditions and decreased in the control condition. For boys the opposite pattern was found. Girls also gave higher appraisals for the agent. Students in all three conditions realized significant knowledge gains, which did not differ by gender. The discussion critically considers the need for, and design of motivational scaffolding in inquiry learning environments.
Article
We describe how the interaction mode with an embodied conversational agent (ECA) affects the users’ perception of the agent and their behavior during interaction, and propose a method to recognize the social attitude of users towards the agent from their verbal behavior. A corpus of human–ECA dialogues was collected with a Wizard-of-Oz study in which the input mode of the user moves was varied (written vs. speech-based). After labeling the corpus, we evaluated the relationship between input mode and social attitude of users towards the agent. The results show that, by increasing naturalness of interaction, spoken input produces a warmer attitude of users and a richer language: this effect is more evident for users with a background in humanities. Recognition of signs of social attitude is needed for adapting the ECA's verbal and nonverbal behavior.
Article
The potential of emotional interaction between human and computer has recently interested researchers in human–computer interaction. The instructional impact of this interaction in learning environments has not been established, however. This study examined the impact of emotion and gender of a pedagogical agent as a learning companion (PAL) on social judgements, interest, self-efficacy, and learning. Two experiments investigated separately the effects of a PAL's emotional expression and empathetic response. Experiment 1 focused on emotional expression (positive vs. negative vs. neutral) and gender (male vs. female) with a sample of 142 male and female college students in a computer literacy course. Experiment 2 investigated the impact of empathetic response (responsive vs. non-responsive) and gender with 56 pre-service teachers. Overall, the results yielded main and interaction effects of PAL emotion and gender on the dependent variables. In particular, the PAL's empathetic response had a positive impact on learner interest and self-efficacy; PAL gender had a positive impact on recall. The findings imply that the emotion and the gender of the digital learning companion could be utilized to optimize college students' motivation and learning.
Article
The aim of this study was to empirically evaluate an embodied conversational agent called GRETA in an effort to answer two main questions: (1) What are the benefits (and costs) of presenting information via an animated agent, with certain characteristics, in a ‘persuasion’ task, compared to other forms of display? (2) How important is it that emotional expressions are added in a way that is consistent with the content of the message, in animated agents? To address these questions, a positively framed healthy eating message was created which was variously presented via GRETA, a matched human actor, GRETA's voice only (no face) or as text only. Furthermore, versions of GRETA were created which displayed additional emotional facial expressions in a way that was either consistent or inconsistent with the content of the message. Overall, it was found that although GRETA received significantly higher ratings for helpfulness and likability, presenting the message via GRETA led to the poorest memory performance among users. Importantly, however, when GRETA's additional emotional expressions were consistent with the content of the verbal message, the negative effect on memory performance disappeared. Overall, the findings point to the importance of achieving consistency in animated agents.