The impact ofarticial intelligence
onlearner–instructor interaction inonline
learning
Kyoungwon Seo1* , Joice Tang2, Ido Roll3, Sidney Fels4 and Dongwook Yoon2
Abstract

Artificial intelligence (AI) systems offer effective support for online learning and teaching, including personalizing learning for students, automating instructors' routine tasks, and powering adaptive assessments. However, while the opportunities for AI are promising, the impact of AI systems on the culture of, norms in, and expectations about interactions between students and instructors is still elusive. In online learning, learner–instructor interaction (inter alia, communication, support, and presence) has a profound impact on students' satisfaction and learning outcomes. Thus, identifying how students and instructors perceive the impact of AI systems on their interaction is important for identifying any gaps, challenges, or barriers that prevent AI systems from achieving their intended potential or that risk the safety of these interactions. To address this need for forward-looking decisions, we used Speed Dating with storyboards to analyze the authentic voices of 12 students and 11 instructors on diverse use cases of possible AI systems in online learning. Findings show that participants envision that adopting AI systems in online learning can enable personalized learner–instructor interaction at scale, but at the risk of violating social boundaries. Although AI systems were positively recognized for improving the quantity and quality of communication, for providing just-in-time, personalized support in large-scale settings, and for improving the feeling of connection, there were concerns about responsibility, agency, and surveillance issues. These findings have implications for the design of AI systems that ensure explainability, human-in-the-loop practices, and careful data collection and presentation. Overall, the contributions of this study include the design of AI system storyboards that are technically feasible and positively support learner–instructor interaction, the capture of students' and instructors' concerns about AI systems through Speed Dating, and practical implications for maximizing the positive impact of AI systems while minimizing the negative ones.

Keywords: Artificial intelligence, Boundary, Learner–instructor interaction, Online learning, Speed dating
Open Access © The Author(s), 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
RESEARCH ARTICLE
Seo et al. Int J Educ Technol High Educ (2021) 18:54
https://doi.org/10.1186/s41239-021-00292-9

*Correspondence: kwseo@seoultech.ac.kr
1 Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul 01811, Korea. Full list of author information is available at the end of the article.
Introduction
The opportunities for artificial intelligence (AI) in online learning and teaching are broad (Anderson et al., 1985; Baker, 2016; Roll et al., 2018; Seo et al., 2020b; VanLehn, 2011), ranging from personalized learning for students and automation of instructors' routine tasks to AI-powered assessments (Popenici & Kerr, 2017). For example, AI tutoring systems can provide personalized guidance, support, or feedback by tailoring learning content to student-specific learning patterns or knowledge levels (Hwang et al., 2020). AI teaching assistants help instructors save time by answering students' simple, repetitive questions in online discussion forums, so that instructors can dedicate the saved time to higher-value work (Goel & Polepeddi, 2016). AI analytics allows instructors to understand students' performance, progress, and potential by decrypting their clickstream data (Roll & Winne, 2015; Fong et al., 2019; Seo et al., 2021; Holstein et al., 2018).
While the opportunities for AI are promising, students and instructors may perceive the impact of AI systems negatively. For instance, students may perceive indiscriminate collection and analysis of their data through AI systems as a privacy breach, as illustrated by the Facebook–Cambridge Analytica data scandal (Chan, 2019; Luckin, 2017). The behavior of AI agents that do not take into account the risk of data bias or algorithmic bias can be perceived by students as discriminatory (Crawford & Calo, 2016; Murphy, 2019). Instructors worry that relying too much on AI systems might compromise students' ability to learn independently, solve problems creatively, and think critically (Wogu et al., 2018). It is therefore important to examine how students and instructors perceive the impact of AI systems in online learning environments (Cruz-Benito et al., 2019).
The AI in Education (AIEd) community is increasingly exploring the impact of AI systems in online education. For example, Roll and Wylie (2016) call for more involvement of AI systems in the communication between students and instructors, and in education applications outside the school context. Zawacki-Richter and his colleagues (2019) conducted a systematic review of AIEd publications from 2007 to 2018 and found a lack of critical reflection on the ethical impact and risks of AI systems on learner–instructor interaction. Popenici and Kerr (2017) investigated the impact of AI systems on learning and teaching, and uncovered potential conflicts between students and instructors, such as privacy concerns, changes in power structures, and excessive control. All of these studies called for more research into the impact of AI systems on learner–instructor interaction, which will help identify any gaps, issues, or barriers preventing AI systems from achieving their intended potential.
Indeed, learner–instructor interaction plays a crucial role in online learning. Kang and Im (2013) demonstrated that factors of learner–instructor interaction, such as communication, support, and presence, improve students' satisfaction and learning outcomes. Learner–instructor interaction further affects students' self-esteem, motivation to learn, and confidence in facing new challenges (Laura & Chapman, 2009). Less is known, however, about how introducing AI systems in online learning will affect learner–instructor interaction. Guilherme (2019, p. 7) predicted that AI systems would have "a deep impact in the classroom, changing the relationship between teacher and student." More work is needed to understand how and why various forms of AI systems affect learner–instructor interaction in online learning (Felix, 2020).
Considering the findings in the literature and the areas for further research, the present study aimed to identify how students and instructors perceive the impact of AI systems on learner–instructor interaction in online learning. To this end, we used Speed Dating, a design method that allows participants to quickly interact with and experience the concepts and contextual dimensions of multiple AI systems without any technical implementation (Davidoff et al., 2007). In Speed Dating, participants are presented with various hypothetical scenarios via storyboards while researchers conduct interviews to understand the participants' immediate reactions (Zimmerman & Forlizzi, 2017). These interviews provided rich opportunities to understand the way students and instructors perceive the impact of AI systems on learner–instructor interaction and the boundaries beyond which AI systems are perceived as "invasive."
The study offers several unique contributions. First, as part of the method, we designed storyboards that can be used to facilitate further research on AI implications for online learning. Second, the study documents the main promises and concerns of AI in online learning, as perceived by both students and instructors in higher education. Last, we identify practical implications for the design of AI-based systems in online learning. These include emphases on explainability, human-in-the-loop practices, and careful data collection and presentation.
This paper is organized as follows. The next section provides the theoretical framework and background behind this research by describing the main aspects of learner–instructor interaction and AI systems in education. The "Materials and methods" section describes the methodological approach followed in this research, including the storyboards used to collect data, the participants, the study procedure, and the qualitative analysis performed. The "Findings" section presents the results obtained and the main findings related to the research question. Finally, the "Discussion and conclusion" section provides an overview of the study's conclusions, limitations, and future research.
Background
This paper explores the impact of AI systems on learner–instructor interaction in online learning. We first propose a theoretical framework based on studies of learner–instructor interaction in online learning. We then review the AI systems currently in use in online learning environments.
Theoretical framework
Interaction is paramount for successful online learning (Banna et al., 2015; Nguyen et al., 2018). Students exchange information and knowledge through interaction and construct new knowledge from this process (Jou et al., 2016). Moore (1989) classified these interactions in online learning into three types: learner–content, learner–learner, and learner–instructor. These interactions help students become active and more engaged in their online courses (Seo et al., 2021; Martin et al., 2018), and by doing so strengthen their sense of community, which is essential for the continuous usage of online learning platforms (Luo et al., 2017).
Martin and Bolliger (2018) found that the learner–instructor interaction is the most important among Moore's three types of interactions. Instructors can improve student engagement and learning by providing a variety of communication channels, support, encouragement, and timely feedback (Martin et al., 2018). Instructors can also enhance students' sense of community by engaging and guiding online discussions (Shackelford & Maxwell, 2012; Zhang et al., 2018). Collectively, learner–instructor interaction has a significant impact on students' satisfaction and achievement in online learning (Andersen, 2013; Kang & Im, 2013; Walker, 2016).
The five-factor model of learner–instructor interaction offers a useful lens for interpreting interactions between students and the instructor in online learning (see Table 1; Kang, 2010). Robinson et al. (2017) found that communication and support are key factors of learner–instructor interaction for designing meaningful online collaborative learning. Richardson et al. (2017) added that the perceived presence during learner–instructor interaction positively influences student motivation, satisfaction, learning, and retention in online courses. Kang and Im (2013) synthesized these findings by showing that communication, support, and presence are the three factors that matter most for improving students' achievement and satisfaction. Thus, in this study, we focused on communication, support, and presence between students and instructors.

Table 1 The five-factor model of learner–instructor interaction in online learning environments, adapted from Kang and Im (2013)

Communication: Instructional communication (Q&A) between learners and the instructor about topics directly related to learning contents.
Support: Instructional management by the instructor, including supplying learning materials and providing feedback directly related to learning contents.
Presence: Perceived connectivity between students and instructors during the online learning process.
Guidance: Guidance by the instructor through encouragement and positive reactions that are not directly related to learning contents.
Social intimacy: Social interaction by the instructor, such as introductions, greetings, and exchanges of personal information that are not directly related to learning contents.

AI systems are likely to affect the way learner–instructor interaction occurs in online learning environments (Guilherme, 2019). If students and instructors have strong concerns about the impact of AI systems on their interactions, then they will not use such systems, in spite of perceived benefits (Felix, 2020). To the best of our knowledge, empirical research on the impact of AI systems on learner–instructor interaction remains limited, and Misiejuk and Wasson (2017) have called for more work in this area.
Articial intelligence inonline learning
ere are a variety of AI systems that are expected to affect learner–instructor inter-
action in online learning. For example, Goel and Polepeddi (2016) developed an AI
teaching assistant named Jill Watson to augment the instructor’s communication
with students by autonomously responding to student introductions, posting weekly
announcements, and answering routine, frequently asked questions. Perin and Lauter-
bach (2018) developed an AI scoring system that allows faster communication of grades
between students and the instructor. Luckin (2017) showed AI systems that support
both students and instructors by providing constant feedback on how students learn and
Table 1 The five-factor model of learner–instructor interaction in online learning environments,
adapted from Kang and Im (2013)
Factor of learner–
instructor
interaction
Denition
Communication Instructional communication (Q & A) between learners and the instructor about topics
directly related to learning contents
Support Instructional management by the instructor, including supporting learning materials and
providing feedbacks directly related to learning contents
Presence Perceived connectivity between students and instructors during the online learning
process
Guidance Guidance by the instructor through providing encouragement and positive reactions that
are not directly related to learning contents
Social intimacy Social interaction by the instructor, such as introduction, greetings, and exchange of
personal information that are not directly related to learning contents
Page 5 of 23
Seoetal. Int J Educ Technol High Educ (2021) 18:54
the progress they are making towards their learning goals. Ross etal. (2018) developed
online adaptive quizzes to support students by providing learning contents tailored to
each student’s individual needs, which improved student motivation and engagement.
Heidicker etal. (2017) showed that virtual avatars allow several physically separated
users to collaborate in an immersive virtual environment by increasing sense of pres-
ence. Aslan and her colleagues (2019) developed AI facial analytics to improve instruc-
tors’ presence as a coach in technology-mediated learning environments. When looking
at these AI systems, in-depth insight into how students and instructors perceive the AI’s
impact is important (Zawacki-Richter etal., 2019).
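To make the first of these examples concrete, the sketch below shows the kind of routine-question loop a Jill Watson-style teaching assistant might run: reuse answers from previous terms when a new question is close enough, and escalate to a human otherwise. The toy corpus, the Jaccard scoring, and the 0.5 threshold are illustrative assumptions on our part, not details of the actual Georgia Tech system.

```python
# Hypothetical sketch: answer routine questions from a bank of past Q&A,
# escalate novel ones. Corpus, scoring, and threshold are assumptions.

PAST_QA = {
    "when is assignment 1 due": "Assignment 1 is due Friday at 11:59 pm.",
    "where do i find the lecture recordings": "Recordings are posted on Canvas.",
}

def jaccard(a: str, b: str) -> float:
    """Similarity between two questions as token-set overlap."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def answer(question: str, threshold: float = 0.5):
    best_q = max(PAST_QA, key=lambda q: jaccard(q, question))
    if jaccard(best_q, question) >= threshold:
        return PAST_QA[best_q]  # routine question: answer autonomously
    return None                 # novel question: escalate to the instructor

print(answer("when is assignment 1 due?"))  # close match, canned answer
print(answer("can i get an extension?"))    # no match, None (escalate)
```

The escalation branch matters as much as the matching: it is what keeps such an assistant from improvising answers to questions it has never seen.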
The recent introduction of commercial AI systems for online learning has demonstrated the complex impact of AI on learner–instructor interaction. For instance, Proctorio (Proctorio Inc., USA), a system that aims to prevent cheating by monitoring students and their computer screens during an exam, seems like a fool-proof plan for monitoring students in online learning, but students complain that it increases their test-taking anxiety (McArthur, 2020). The idea of being recorded by Proctorio distracts students and creates an uncomfortable test-taking atmosphere. In a similar vein, although Squirrel AI (Squirrel AI Learning Inc., China) aims to provide adaptive learning by adjusting itself automatically to the best method for an individual student, there is a risk that this might restrict students' creative learning (Beard, 2020). These environments have one thing in common: unlike educational technologies that merely mediate interactions between instructors and students, AI systems have more autonomy in the way they interpret data, infer learning, and, at times, make instructional decisions.
In what follows, we describe Speed Dating with storyboards, an exploratory research method that allows participants to quickly experience different forms of AI systems possible in the near future, which we used to examine the impact of those systems on learner–instructor interaction ("Materials and methods"). Findings offer new insights on students' and instructors' boundaries, such as when AI systems are perceived as "invasive" ("Findings"). Lastly, we discuss how our findings provide implications for future AI systems in online learning ("Discussion and conclusion").
Materials andmethods
e goal of this study is to gain insight on students’ and instructors’ perception of the
impact of AI systems on learner–instructor interaction (inter alia, communication, sup-
port, and presence; Kang & Im, 2013) in online learning. e study was conducted amid
the COVID-19 pandemic, thus students and instructors have heightened awareness
about the importance of online learning and fresh experiences from the recent online
courses. Our aim was not to evaluate specific AI technologies, but instead, to explore
areas where AI systems positively contribute to learner–instructor interaction and
where more attention is required.
We used Speed Dating with storyboards, an exploratory research method that allows participants to experience a number of possible AI systems in the form of storyboards, to prompt participants to critically reflect on the implications of each AI area (Zimmerman & Forlizzi, 2017). Exposure to multiple potential AI areas that are likely to be available in the future helps participants shape their own perspectives and evaluate the AI systems in a more nuanced way (Luria et al., 2020). We first created a set of eleven four-cut storyboards for the comprehensive and diverse use cases of possible AI systems in online learning (see the "Creating storyboards" section), and then used these storyboards to conduct Speed Dating with student and instructor participants (see the "Speed dating" section). Overall, we address the following research question:

How do students and instructors perceive the impact of AI systems on learner–instructor interaction (inter alia, communication, support, and presence) in online learning?
Creating storyboards
To create AI system storyboards that are technically feasible and positively support learner–instructor interaction, we ran an online brainwriting activity (Linsey & Becker, 2011) in which we asked a team of designers to come up with scenarios about possible AI systems in online learning. We recruited six designers from our lab (four faculty members and two PhD candidates) with an average of 15.4 years (SD = 4.7 years) of design experience in human–computer interaction (HCI). Each team member wrote down scenarios in a Google Slides file and then passed it on to another team member. This process was repeated four times, until all designers agreed that the scenarios of AI systems were technically feasible and supported learner–instructor interaction in online learning.
As the initial scenarios were made by HCI designers, in order to validate their technical feasibility and positive impact on learner–instructor interaction, we conducted additional interviews with six AI experts with an average of 10.8 years (SD = 7.8 years) of research experience and 8 years (SD = 6.2 years) of teaching experience (see Appendix A, Table 7, for details). The first two authors conducted semi-structured interviews with the AI experts using a video conferencing platform (i.e., Zoom). We showed each scenario to the AI experts and asked the following questions: "Can you improve this scenario to make it technically feasible?" and "Can you improve this scenario to have a positive impact on learner–instructor interaction based on your own online teaching experience?" After showing all the scenarios, the following question was asked: "Do you have any research ideas that can be used as a new scenario?" The scenarios were modified to reflect the opinions of the AI experts and the AIEd literature. The interviews lasted around 41 min on average (SD = 7.3 min). Each AI expert was compensated 100 Canadian dollars for their time. The process was cleared by the Research Ethics Board.
As shown in Table2, we ended up with 11 scenarios which support learner–instructor
interaction (i.e., communication, support, and presence) in online learning. Scenarios
were categorized by the first two authors with reference to the learner–instructor inter-
action factors as defined in Table1 (see “eoretical framework” section). For example,
although the AI Teaching or Grading Assistant scenarios could be considered systems
of support for the instructor, “support” within the learner–instructor interaction frame-
work refers to support for the student. erefore, since the scenarios illustrate increased
or expedited communication between students and instructors rather than direct sup-
port for students, AI Teaching and Grading Assistant scenarios are categorized as sys-
tems for communication. We note that these categories are not definitive, and scenarios
Page 7 of 23
Seoetal. Int J Educ Technol High Educ (2021) 18:54
may have interleaving aspects of several learner–instructor interaction factors. How-
ever, the final categories in Table2 refer to the factors that best define the respective
scenarios.
Seven scenarios (Scenarios 1, 3, 5, 6, 8, 9, and 11) reflect the state-of-the-art AI systems identified in the "Artificial intelligence in online learning" section. The following four scenarios were created based on research ideas from the AI experts: AI Companion (Scenario 2), AI Peer Review (Scenario 4), AI Group Project Organizer (Scenario 7), and AI Breakout Room Matching (Scenario 10). These 11 final scenarios were not meant to exhaust all AI systems in online learning or to systematically address all topics, but rather to probe a range of situations that shed light on the realities that present themselves with the utilization of AI systems in online learning.

Table 2 Factors of learner–instructor interaction, scenario titles, and scenario summaries

Communication
1. AI Teaching Assistant (Goel & Polepeddi, 2016): AI answers student questions before, during, or after online courses based on answers to questions gathered in previous courses.
2. AI Companion (Woolf et al., 2010): AI emotionally supports students who are concerned about their grades and workload, and provides assistance when students use language related to self-destructive behavior.
3. AI Grading Assistance (Perin & Lauterbach, 2018): AI helps TAs quickly grade assignments by offering suggestions that they can choose to accept or change for each question.
4. AI Peer Review: AI normalizes peer review grades by keeping students' holistic profiles in mind, and by comparing each student's history of peer reviews with others' as well as with peer reviews from previous iterations of the course.

Support
5. AI Analytics (Luckin, 2017): AI provides an analysis of students' clickstream, quiz, login/logout, and eye-tracking data to instructors.
6. Intelligent Suggestions (Luckin, 2017): AI suggests study materials and strategies to students based on an analysis of students' clickstream and quiz performance data.
7. AI Group Project Organizer: AI helps write meeting minutes using speech recognition, suggests action plans from group discussions through text summarization, and gives editing tips based on assignment data from previous iterations of the course.
8. Adaptive Quiz (Ross et al., 2018): AI provides students with a personalized set of exercise problems that suits their level of knowledge.

Presence
9. Virtual Avatar (Heidicker et al., 2017): AI communicates facial expressions and body language through a virtual avatar, without explicitly using a student's camera feed.
10. AI Breakout Room Matching: AI matches students in breakout rooms in a way that optimizes discussion by analyzing microphone data (e.g., frequency, length, and tone).
11. AI Facial Analytics (Aslan et al., 2019): AI gauges students' emotions without sharing videos, and notifies the instructor in real time when a specific student seems especially distressed or unengaged.
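As a concrete illustration of what a scenario like the Adaptive Quiz (Scenario 8) implies technically, here is a minimal sketch of an item selector that keeps exercise difficulty near a running ability estimate. The question bank, the Elo-style update, and the constants are our own illustrative assumptions, not the storyboard's specification.

```python
import random

# Illustrative sketch of an adaptive item selector (Scenario 8).
# Question bank, Elo-style ability update, and constants are assumptions.

QUESTIONS = [
    {"id": "q1", "difficulty": -1.0},
    {"id": "q2", "difficulty": 0.0},
    {"id": "q3", "difficulty": 1.2},
]

def pick_next(ability, asked):
    """Choose the unanswered question whose difficulty best matches ability."""
    candidates = [q for q in QUESTIONS if q["id"] not in asked]
    return min(candidates, key=lambda q: abs(q["difficulty"] - ability))

def update_ability(ability, difficulty, correct, k=0.4):
    """Elo-style update: move the estimate by (observed - expected) score."""
    expected = 1 / (1 + 10 ** (difficulty - ability))
    return ability + k * ((1.0 if correct else 0.0) - expected)

ability, asked = 0.0, set()
for _ in range(3):
    q = pick_next(ability, asked)
    asked.add(q["id"])
    correct = random.random() < 0.5  # stand-in for the student's actual answer
    ability = update_ability(ability, q["difficulty"], correct)
    print(q["id"], "ability ->", round(ability, 2))
```

The point of the sketch is the coupling: each answer nudges the ability estimate, which in turn changes which problem is offered next.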
We generated four-cut storyboards based on the scenarios in Table 2. Figure 1 shows an illustrated example of a storyboard detailing the scenario through captions. We stylized the characters in a single visual style and as flat cartoon shades in order to reduce gender and ethnic cues and enable participants to put themselves in the shoes of the characters in each storyboard (Truong et al., 2006; Zimmerman & Forlizzi, 2017). The full set of storyboards can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e.

Fig. 1 A storyboard example of Scenario 8, Adaptive Quiz in Table 2

Table 3 Summary of the students' information

ID   Major                           Year level   Age   Gender
S1   Economics                       4            21    M
S2   Biology                         4            21    W
S3   Sociology                       3            20    W
S4   Behavioural Neuroscience        4            20    W
S5   Psychology                      4            21    W
S6   Computer Science                2            19    M
S7   Nursing                         2            20    W
S8   Computer Engineering            3            21    M
S9   Business and Computer Science   4            21    W
S10  Computer Science                5            22    M
S11  Civil Engineering               4            20    M
S12  Philosophy                      2            18    W
Speed dating
Participants
Next, we conducted a Speed Dating activity with the storyboards. We recruited 12 students (see Table 3) and 11 instructors (see Table 4) for the activity. For diversity, we recruited students from 11 different majors and instructors from nine different subjects. Students and instructors had a minimum of three months of online learning or teaching experience due to the COVID-19 pandemic. Overall, students had at least one year of university experience and instructors had at least three years of teaching experience. We required students and instructors to have online learning and teaching experience, respectively, so as to control for the expected and experienced norms of learner–instructor interaction within online university classes. Conversely, we did not require participants to have knowledge of AI systems, as we wanted their perspective on the intended human–AI interactions and their potential effects as illustrated. Previous studies showed that Speed Dating works well without any prior knowledge of or experience with AI systems, so no special knowledge or experience was required to participate in this study (Luria et al., 2020; Zimmerman & Forlizzi, 2017). Each participant was compensated with 25 Canadian dollars for their time.
Procedure
We conducted semi-structured interviews with participants using a video conferencing platform (i.e., Zoom). We designed the interview questions to capture how the participants perceive the AI systems illustrated in the storyboards (see Appendix B). Participants read each of the storyboards aloud and then expressed their perceptions of AI in online learning. Specifically, we asked participants to critically reflect on how incorporating the AI system into an online course would affect learner–instructor interaction, and whether they would like to experience its effect. We also asked them to choose which AI systems would work well and which would not, to capture their holistic point of view regarding perceived affordances and drawbacks. The entire interview lasted around 50.9 min (SD = 10.7 min), with 3–5 min spent sharing each storyboard and probing participants on its specific implications.

Table 4 Summary of the instructors' information

ID   Teaching subject               Teaching experience   Average class size   Age   Gender
I1   Political Science              20 years              150                  56    M
I2   Chinese Language and Culture   6 years               30                   34    W
I3   Chinese Language and Culture   5 years               30                   31    W
I4   Business Analytics             5 years               60                   37    M
I5   Korean Language                17 years              30                   42    W
I6   Chinese Language               12 years              25                   45    W
I7   Occupational Therapy           45 years              50                   51    W
I8   Physics                        22 years              180                  56    M
I9   Computer Science               27 years              170                  52    M
I10  Computer Science               3 years               130                  28    W
I11  Chemistry                      14 years              430                  37    M
Data analysis
Each interview was audio recorded and transcribed for analysis. We used a Reflexive Thematic Analysis approach (Braun & Clarke, 2006; Nowell et al., 2017). After a period of familiarization with the interview data, the first two authors began by generating inductive codes, with an initial round of semantic codes related to intriguing statements or phrases in the data. The two authors then coded each transcript by highlighting and commenting on data items through Google Docs, independently identifying patterns that arose through extended examination of the dataset. Any conflicts regarding such themes were resolved through discussion between the two authors. Later, through a deductive approach guided by the learner–instructor interaction factors adapted from Kang and Im (2013), data were coded and collated into themes in a separate word document. An example of our codes can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e. We then held three iterative discussions with all five authors present, which yielded recurrent topics and themes by organizing the data around significant thematic units; the final six major themes were derived from twelve codes. The themes, which describe the impact of AI systems, were as follows: (1) Quantity and Quality, (2) Responsibility, (3) Just-in-time Support, (4) Agency, (5) Connection, and (6) Surveillance. The findings below are presented according to these themes.
Findings
The central theme of participants' responses, which stood out repeatedly in our study, was that adopting AI systems in online learning can enable personalized learner–instructor interaction at scale, but at the risk of violating social boundaries. Participants were concerned that AI systems could create responsibility, agency, and surveillance issues in online learning if they violated social boundaries in any factor of learner–instructor interaction (i.e., communication, support, and presence). Table 5 summarizes the perceived benefits and concerns of students and instructors about the impact of AI systems on learner–instructor interaction, noted with (+) and (−) respectively. Each quote indicates whether the response came from a student ("S") or an instructor ("I").

Table 5 Summary of the students' and instructors' perceptions of AI systems in online learning; (+) indicates a perceived benefit and (−) indicates a perceived concern

Communication: Quantity & Quality
  (+) Students believe that the anonymity afforded by AI would make them less self-conscious and, as a result, allow them to ask more questions.
  (+) Instructors believe that AI could help answer simple, repetitive questions, which would allow them to focus on more meaningful communication with students.
Communication: Responsibility
  (−) Students worry that AI could give unreliable answers and negatively impact their grades.
  (−) Instructors predicted conflicts between students and the instructor due to AI-based misunderstandings or misleadingness.
Support: Just-in-time support
  (+) Students believe that AI would support personalized learning experiences, particularly with studying and group projects.
  (+) Instructors believe AI could be effectively leveraged to help students receive just-in-time personalized support.
Support: Agency
  (−) Students perceived that canned and standardized support from AI might have a negative influence on their ability to learn effectively.
  (−) Instructors are wary of the fact that too much support from AI could take away students' opportunities for exploration and discovery.
Presence: Connection
  (+) Students believe that AI can address privacy concerns and support learner–instructor connections by providing social interaction cues without personal camera information.
  (+) Instructors believe that the addition of AI would help them become more aware of students' needs.
Presence: Surveillance
  (−) Students are uncomfortable with the measurement of their unconscious behavior, such as eye tracking or facial expression analysis, because it feels like surveillance.
  (−) Instructors were negative about relying on AI interpretation to understand students' social interaction cues.
Communication
In online learning environments, communication refers to questions and answers between students and the instructor about topics directly related to learning contents, such as instructional materials, assignments, discussions, and exams (Kang & Im, 2013). Students and instructors expect that AI systems will positively impact the quantity and quality of communication between them, but also that they bear the risk of causing miscommunication and responsibility issues, as described below.
Quantity andquality
Students believe that the anonymity afforded by AI would make them less self-conscious
and, as a result, allow them to ask more questions. In online learning environments, stu-
dents are generally afraid to ask questions to their instructors during class, primarily
because they “worry that someone already asked it” (S4) or “don’t want to seem dumb
by instructors or peers” (S10). Students perceive that the anonymity from both an AI
Teaching Assistant (Scenario 1) and an AI Companion (Scenario 2) would make them
“less afraid to ask questions” (S10), “wouldn’t feel bad about wasting the professor’s time”
(S11), and would be “less distracting to class” (S12). Bluntly put, participant S11 stated:
“If it’s a dumb question, I’ve got an AI to handle it for me. e AI won’t judge me. e AI
is not thinking like, wow, what an idiot.” S5 expanded on this idea, mentioning that ask-
ing questions to an AI removes self-consciousness that typically exists in instructional
communications: “… you don’t feel like you’re bothering a person by asking the ques-
tions. You can’t really irritate an AI, so you can ask as many as you need to.” As a result,
Table 5 Summary of the students’ and instructors’ perceptions of AI systems in online learning
( +) indicates perceived benet and ( ) indicates perceived concern
Factor of learner–
instructor
interaction
The impact of AI systems Students’ perceptions Instructors’ perceptions
Communication Quantity & Quality ( +) Students believe that
the anonymity afforded
by AI would make them
less self-conscious and, as
a result, allow them to ask
more questions
( +) Instructors believe that AI
could help answer simple,
repetitive questions, which
would allow them to focus
on more meaningful com-
munication with students
Responsibility ( ) Students worry that
AI could give unreliable
answers and negatively
impact their grades
( ) Instructors predicted
conflicts between students
and the instructor due to
AI-based misunderstandings
or misleadingness
Support Just-in-time support ( +) Students believe that AI
would support personal-
ized learning experiences,
particularly with studying
and group projects
( +) Instructors believe
AI could be effectively
leveraged to help students
receive just-in-time person-
alized support
Agency ( ) Students perceived that
canned and standardized
support from AI might
have a negative influence
on their ability to learn
effectively
( ) Instructors are wary of the
fact that too much support
from AI could take away
students’ opportunities for
exploration and discovery
Presence Connection ( +) Students believe that
AI can address privacy
concerns and support
learner–instructor con-
nections by providing
social interaction cues
without personal camera
information
( +) Instructors believe that
the addition of AI would
help them become more
aware of students’ needs
Surveillance ( ) Students are uncom-
fortable with the measure-
ment of their unconscious
behavior, such as eye
tracking or facial expres-
sion analysis, because it
feels like surveillance
( ) Instructors were negative
about relying on AI inter-
pretation to understand
students’ social interaction
cues
Page 12 of 23
Seoetal. Int J Educ Technol High Educ (2021) 18:54
all 12 students answered that AI systems would nudge them to ask more questions in
online learning.
Instructors believe that AI could help answer simple, repetitive questions, which would allow them to focus on more meaningful communication with students. Answering repetitive questions from students takes a huge amount of time (I11). Instructors reflected that the time saved from tedious tasks, such as answering administrative questions, could allow course teams to focus on more content-based questions (I10). Because an AI Teaching Assistant (Scenario 1) answers students' repetitive questions, and AI Grading Assistance (Scenario 3) and AI Peer Review (Scenario 4) enable fast feedback loops, instructors can communicate more meaningfully with students by helping to "focus more on new questions" (I6) or "use their time for more comprehensive or more irregular questions" (I4). As well-stated by I10: "I think it allows us time to have conversations that are more meaningful… in some ways you're choosing quality over quantity. The more time I have, the more time, I can do things like answer emails or answer things on Piazza, things that actually will communicate with the student."
Responsibility
Although students believe AI systems would improve the quantity and quality of instructional communication, they worry that AI could give unreliable answers and negatively impact their grades. For example, S4 worried: "I just want to make sure it's a really reliable source, because if the AI is answering questions from students, and then they're going to apply that answer to the way they do their work in the future, and it might be marked wrong. Then it's hard to go to the instructor and say, oh, this answer was what was given to me, but you said it was wrong." Most students (10 out of 12) feel that the lack of explainability would make the AI hard to blame, despite the fact that it may hold a position of responsibility in some situations, such as answering questions where its answers should be considered as truth. S9 said: "Whereas with AI and just intelligent systems that you don't fully understand the back end to in a sense, it's harder to decipher the reasoning behind the answer or why they gave that answer." In particular, students are concerned about how instructors would react if something went wrong because they trusted the AI. S11 expects: "I can see a lot of my fellow engineering students finding more room to argue for their marks. I can see people not being as willing to accept their fate with this kind of system."
Instructors predicted conflicts between students and the instructor due to AI-based misunderstandings or misleadingness. For example, a conflict could arise from potential discrepancies between answers from the AI, the instructor, and human TAs. As expressed by I4: "Students will argue that, oh AI is wrong. I demand a better assessment right? So, you can say that easily for the AI. But for the authoritative figure like TA and instructor, maybe it's hard to do that." Similarly, I6 argued a conflict could stem from the opposite direction: "If an AI gives students a great suggestion, if the instructor and TA decided to regrade, it would just be a lot of trouble." Several instructors (five out of 11) also worried about conflicts that could arise from the quality of responses. I1 said: "The concern is the quality of the response, given that there can be ambiguity in the way the students post questions. My concern here is that the algorithm may respond incorrectly or obliquely." I8 also cautioned about AI-based misunderstandings or misleadingness: "If you have a conversation in person, you can always clarify misunderstandings or things like that. I don't think a machine can do that yet. So there's a bit of a potential for misunderstandings so misleading the students."
Support
In online learning environments, support refers to the instructor's instructional management for students, such as providing feedback, explanations, or recommendations directly related to what is being taught (Kang & Im, 2013). Students and instructors expect a positive impact from AI systems in terms of enabling just-in-time personalized support for students at scale, but they expect a negative impact in that excessive support could reduce student agency and ownership of learning.
Just-in-time support
Students believe that AI would support personalized learning experiences, particularly with studying and group projects. Ultimately, all 12 students felt that AI could help them work to their strengths, mainly in scenarios regarding instructor-independent activities like studying (Scenarios 5, 6, and 8) and group projects (Scenario 7). Students like S2, S3, and S9 focused on how adaptive technologies could make studying more effective and efficient, as it would "allow [them] to fully understand the concept of what [they're] learning" and "allows for them to try and focus on where they might be weaker." In some cases, the sense of personalization led students to describe the systems as if they could fulfill roles as members of the course team. For example, S1 referred to the Adaptive Quiz system (Scenario 8) as a potential source of guidance: "I think being able to have that quiz to help me, guide me, I'm assuming it would help me." Likewise, S5 described the presence of an AI Group Project Organizer (Scenario 7) as "having a mentor with you, helping you do it," which would help students "focus more on maybe just researching things, writing their papers, whatever they need to do for the project."
Instructors believe AI could be effectively leveraged to help students receive just-in-time personalized support. I1 said that "one of the best learning mechanisms is to be confronted right away with the correct answer or the correct way of finding the right answer" when doing quizzes and assignments. Many instructors (10 out of 11) expressed approval of AI-based Intelligent Suggestions (Scenario 6) and the Adaptive Quiz system (Scenario 8). All 11 instructors appreciated how the immediate feedback afforded by AI could help students study and effectively understand gaps in their knowledge, particularly at times when instructors would be unavailable. Similarly, I4 and I11 appreciated that AI could support students who would otherwise be learning asynchronously. For example, AI systems could be supportive of student engagement "because the students are getting real-time answers, particularly in an online world where they may not be in the same time zone, this is a synchronous type [of] learning event for them where they could be doing it when they're studying" (I11).
Agency
Despite the fact that students appreciated the support that they could potentially receive from AI, students perceived that canned and standardized support might have a negative influence on their ability to learn effectively. For example, S11 shared how he felt the usage of systems that collect engagement data would "over standardize" the learning process by prescribing how an engaged student would or should act. He likened some of the AI examples to "helicopter parenting," expressing that guidance, whether from an AI or a parent, can set an arbitrary pace for a student to follow, despite the fact that the learning experience should involve "learning about yourself and going at your own pace." Several other students (four out of 12) were concerned with the potential effect of a system like the AI Group Project Organizer (Scenario 7), citing concerns that students "wouldn't put that much effort" into their group projects because "it might just end up AI doing all the work for them" (S2). Similarly, S6 focused on how AI could detract from the fact that experiences with schoolwork can help students later in life: "… I think it's like giving them a false sense of security in the sense that like, I'm so used to doing projects with this AI helper that when I go into the real world, I'm not going to be ready. I'm just not going to be prepared for it."
Instructors are similarly wary of the fact that too much support from AI could take away students' opportunities for exploration and discovery. Many instructors (nine out of 11) were concerned that students could lose opportunities to learn new skills or learn from their mistakes. Responding to the AI Group Project Organizer (Scenario 7), I7 stressed that she wouldn't want to standardize inconsistent group projects, since part of an instructor's job is "helping people understand how group work is conducted… [and] if you're just laying on a simple answer, you miss that opportunity." Similarly, other instructors (five out of 11), primarily those in humanities-based fields, were concerned "it may take the creativity away from the students," since students' projects "can be hugely different from each other, yet equally good," and suggestions based on historical data could steer students towards certain directions (I6). I4 even expressed that he currently tries "not to share previous work because [he] thinks that restricts their creativity." After experiencing all the storyboards related to AI-powered support, I11 posed a vital question: "At what stage is it students' work and what stage is it the AI's algorithm?"
Presence
In online learning environments, presence refers to a factor that makes students and instructors perceive each other's existence during the learning process (Kang & Im, 2013). Students and instructors expect the impact of AI systems to be positive in terms of giving them a feeling of improved connectivity, and negative in terms of increasing the risk of surveillance problems.
Connection
Students believe that AI can address privacy concerns and support learner–instructor connections by providing social interaction cues without personal camera information. Many students (10 out of 12) stated that they don't want to turn on their camera in online courses, even though turning off the camera adversely affects their presence in class, because they have concerns like "looking like a mess" (S1), "just in my pajamas" (S2), and "feeling too invasive" (S4). Specifically, S9 stated that turning on the camera "makes you more anxious and conscious of what you're doing and as a result, it deters from you engaging with the content." In this sense, most students (11 out of 12) liked the Virtual Avatar system (Scenario 9), where AI communicates student facial expressions and body language to the instructor via a virtual avatar. Students expect that this will make them "feel more comfortable going to lecture" (S2), "feel less intrusive for at home learning" (S4), and "showcase much more of their expression or confusion or understanding" (S10). Overall, many students (nine out of 12) appreciated the potential of AI systems as "it solves the problem of not needing to show your actual face, but you can still get your emotions across to the instructor" (S10).
Instructors believe that the addition of AI would help them become more aware of students' needs. Many instructors (10 out of 11), particularly those who taught larger undergraduate courses, stated that students tend to turn off their cameras in online learning spaces, "so something that you really, really miss from teaching online is reading body language" (I10). Instructors generally expressed that AI systems like the Virtual Avatar (Scenario 9) and AI Facial Analytics (Scenario 11) could be helpful, because they would allow students to share their body language and facial expressions without directly sharing their video feed. I4 appreciated that AI Facial Analytics could automate the process of looking at students' faces "to see if they got it." Similarly, I5 liked that a Virtual Avatar could give "any sign that someone is listening," as "it's sometimes very tough, especially if [she's] making a joke." Furthermore, I4 emphasized that turning on the camera can be helpful not just for the instructor but also for students' own accountability, since "if students don't turn on the camera, it's very likely that they are going to do something else." Overall, instructors appreciated AI's ability to provide critical information for understanding how students are doing and how they feel in online courses.
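The privacy argument both groups make here has a concrete architectural reading: inference can run on the student's device, so that only a coarse, rate-limited signal ever leaves it. The sketch below illustrates that split; `capture_frame` and `classify_expression` are hypothetical stand-ins for a local camera read and an on-device expression model, not components of any system in the storyboards.

```python
import random, time

# Illustrative sketch only: expression inference stays on the student's
# device and only a coarse, rate-limited label is shared.

LABELS = ["engaged", "confused", "neutral"]

def capture_frame():
    """Stand-in for reading one frame from the local camera."""
    return object()  # placeholder; a real client would return pixels

def classify_expression(frame):
    """Stand-in for an on-device expression model."""
    return random.choice(LABELS)

def presence_signal(consented, interval=5.0, ticks=3):
    """Yield at most one coarse label per interval, and only with consent."""
    if not consented:
        return  # nothing is captured, let alone transmitted
    for _ in range(ticks):
        frame = capture_frame()           # the frame never leaves this function
        yield classify_expression(frame)  # only the discrete label is shared
        time.sleep(interval)

for label in presence_signal(consented=True, interval=0.0):
    print("avatar shows:", label)
```

Whether even this coarse signal feels acceptable is exactly what the next subsection complicates.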
Surveillance
Although AI can strengthen the connection between students and instructors, students are uncomfortable with the measurement of their unconscious behavior, such as eye tracking or facial expression analysis, because it feels like surveillance. All 12 students discussed how they would be anxious about being represented by unconscious eye-tracking data. S1 professed: "I don't really know what my eyes are doing. I think it might just make me a little nervous when it comes to taking quizzes or tests and all that. I might be scared that I might have accidentally cheated." S12 additionally spoke about how it would make her more anxious when sending emails or asking questions, out of concern that instructors would judge her based on her unconscious behavior before attending to her questions. Notably, most students (10 out of 12) felt uncomfortable with AI Facial Analytics (Scenario 11). For example, S6 was concerned that facial expression is "something that happens [that] might be outside of your control," so AI might miss the nuance of authentic human emotion and flatten and simplify it in a way that might cause more confusion. In a similar vein, S11 said: "The nuances of social interaction is something that should be left up to humans and not guided because it's innately something that, that's what makes us human is the social interaction portion." Overall, students did not want AI measures of unconscious behavior, such as eye tracking or facial expression analysis, even if there are positive aspects.
Instructors were negative about relying on AI interpretation to understand students' social interaction cues. All instructors felt uncomfortable with collecting private data, such as students' eye movements and facial expressions, through AI, because "not all the students feel comfortable sharing their private information with the instructor" (I2, I5). Additionally, I9 was concerned that AI Facial Analytics might force students to smile to get a good engagement score, which could adversely affect online learning itself. In this sense, many instructors (nine out of 11) declined to use AI systems that rely on eye tracking and facial expression analysis in their online courses. Furthermore, I6 would rather "choose to rely on my own kind of sense of the classroom dynamic instead of AI systems," because she believed that the social relationship between students and instructors should be authentic. Other instructors stated they "don't have time to check all of the interface[s]," or would have trouble "knowing that that data is accurately reflecting, [that] the student is responding to [their] content" rather than extraneous stimulation in their personal environments (I3, I7). Overall, instructors were uncomfortable with AI giving detailed information about how students engage with their online courses, and they wanted to understand these social interaction cues for themselves.
In summary, students and instructors expect that AI systems will benefit learner–instructor interaction in online learning by improving the quantity and quality of communication, enabling just-in-time personalized support for students at scale, and giving them a feeling of improved connectivity. At the same time, however, students and instructors were concerned that AI systems could create responsibility, agency, and surveillance issues in online learning if they violated social boundaries. These boundaries, beyond which AI is perceived negatively, are discussed in the next section.
Discussion andconclusion
Our research question focused on examining how students and instructors perceive the
impact of AI systems on learner–instructor interaction (inter alia, communication, support,
and presence) in online learning. Although the growing body of AIEd research has been
conducted to investigate the useful functionalities of AI systems (Seo etal., 2020b; Popenici
& Kerr, 2017; Zawacki-Richter etal., 2019), little has been done to understand students’ and
instructors’ concerns on AI systems. Recent use of AI systems in online learning showed
that careless application can cause surveillance and privacy issues (Lee, 2020), which makes
students feel uncomfortable (Bajaj & Li, 2020). In this study, we found that students and
instructors perceive the impact of AI systems as double-edged swords. Consequently,
although AI systems have been positively recognized for improving the quantity and quality
of communication, for providing just-in-time, personalized support for large-scale students,
and for improving the feeling of connection, there were concerns about responsibility,
agency, and surveillance issues. In fact, what students and instructors perceive negatively
often stemmed from the positive aspects of AI systems. For example, students and instruc-
tors appreciated AI’s immediate communication, but at the same time they were concerned
about AI-based misunderstandings or misleadingness. Although students and instructors
valued the just-in-time, personalized support of AI, they feared that AI would limit their
ability to learn independently. Students and instructors valued the social interaction cues
provided by AI, but they are uncomfortable with the loss of privacy due to AI’s excessive
data collection. As shown in Table6, this study provides rich opportunities to identify the
boundaries beyond which AI systems are perceived as “invasive.
First, although AI systems improve instructional communication due to the anonymity they provide for students, students were concerned about responsibility issues that could arise when AI's unreliable and unexplained answers lead to negative consequences. For instance, when communicating with an AI Teaching Assistant, the black-box nature of the AI system leaves no way for students to check whether the answers from the AI are right or wrong (Castelvecchi, 2016). Accordingly, students believe they would have a hard time deciphering the reasoning behind an AI's answer. This can result in serious responsibility issues if students apply an AI's answers to their tests but instructors mark them as wrong. As well, students would find more room to argue for their marks because of AI's unreliability.
Acknowledging that AI systems cannot always provide the right answer, a potential solution to this problem is to ensure the system is explainable. Explainability refers to the ability to offer human-understandable justifications for the AI's output or procedures (Gunning, 2017). Explainability gives students the opportunity to check for themselves whether an AI's answer is right or wrong, and in doing so can make AI more reliable and responsible (Gunning, 2017). Explainability should be the boundary that determines students' trust and acceptance of AI systems. How to ensure the explainability of AI systems in the online learning communication context will be an interesting research topic. For example, instead of providing unreliable answers that may mislead or confuse students, AI systems should connect students to relevant sources of information that students can navigate on their own.
Second, while AI systems enable some degree of personalized support, there is a risk of over-standardizing the learning process by prescribing how an engaged student would or should act. Despite the fact that students appreciate the support they could potentially receive from AI systems, students also worry that canned and standardized support would have a negative influence on their agency over their own learning. Instructors are similarly wary of the fact that too much support from AI systems could take away students' opportunities for exploration and discovery. Many instructors were concerned that students could lose opportunities to learn new skills or learn from their mistakes.
A solution to mediate this challenge may be to keep instructors involved. e role of AI sys-
tems in online education should not be to reduce learning to a set of canned and standardized
procedures that reduce the student agency, but rather to enhance human thinking and aug-
ment the learning process. In practice, adaptive support is often jointly enacted by AI systems
and human facilitators, such as instructors or peers (Holstein et al., 2020). In this context, Baker (2016, p. 603) tried to reconcile humans with AI systems by combining "stupid tutoring systems and intelligent humans." AI systems can process large amounts of information quickly, but they do not respond well to complex contexts. Humans cannot process information as quickly as AI systems can, but they are flexible and intelligent across a variety of contexts. When AI systems bring human beings into the decision-making loop and try to inform them, humans can learn more efficiently and effectively (Baker, 2016). Human-in-the-loop design is therefore a promising way to ensure students' perceived agency in online learning. How to balance artificial and human intelligence to promote students' agency is an important research direction (e.g., Goldilocks conditions for human–AI interaction; Seo et al., 2020a).
Third, even though AI strengthens the perceived connection between students and instructors, students were uncomfortable with the measurement of their unconscious behavior, such as facial expression analysis or eye tracking, because it feels like surveillance. While most students liked the Virtual Avatar system (Scenario 9), in which AI simply delivers students' facial expressions and body language to the instructor via an avatar, students declined to use the AI Facial Analytics system (Scenario 11), which might miss the nuance of social interaction by flattening and simplifying it in a way that could cause more confusion. Interpreting social interaction from unconscious behavior could be the boundary beyond which AI systems are perceived as "invasive." Students felt uncomfortable being represented by their unconscious behavior because they did not know what their gaze or face was doing. Stark (2019) described facial recognition as the plutonium of AI: "[facial recognition] is dangerous, racializing, and has few legitimate uses; facial recognition needs regulation and control on par with nuclear waste." Students complained about their presence being represented by the AI system's interpretation. In a similar vein, instructors felt negatively about the AI system's involvement in interpreting the meaning of student behavior.
Establishing clear, simple, and transparent data norms and agreements about the nature of the data being collected from students, and about what kinds of data are acceptable to present to instructors, is an important consideration for future research (Ferguson, 2019; Tsai et al., 2020).
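One way to operationalize such norms is an explicit, per-signal consent policy that is checked before any student data is collected or shown to an instructor. The sketch below is a minimal illustration of that idea; the signal names, visibility levels, and conservative defaults are assumptions made for this example rather than a specification from the study.

```python
from enum import Enum
from typing import Dict

class Visibility(Enum):
    NOT_COLLECTED = 0         # signal is never captured
    STUDENT_ONLY = 1          # captured only for the student's own dashboard
    INSTRUCTOR_AGGREGATE = 2  # shown to instructors only in aggregate
    INSTRUCTOR_FULL = 3       # shown to instructors per student

# Each student grants visibility per signal; defaults are conservative,
# so unconscious-behavior signals stay off unless the student opts in.
DEFAULT_POLICY: Dict[str, Visibility] = {
    "quiz_scores": Visibility.INSTRUCTOR_FULL,
    "video_watch_time": Visibility.INSTRUCTOR_AGGREGATE,
    "facial_expression": Visibility.NOT_COLLECTED,
    "eye_tracking": Visibility.NOT_COLLECTED,
}

def may_show_to_instructor(policy: Dict[str, Visibility], signal: str, aggregate: bool) -> bool:
    """Check a student's consent policy before presenting a signal."""
    level = policy.get(signal, Visibility.NOT_COLLECTED)
    if aggregate:
        return level in (Visibility.INSTRUCTOR_AGGREGATE, Visibility.INSTRUCTOR_FULL)
    return level is Visibility.INSTRUCTOR_FULL

# Example: per-student facial data stays hidden unless the student opts in.
print(may_show_to_instructor(DEFAULT_POLICY, "facial_expression", aggregate=False))  # False
```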
While this study revealed important findings and implications for using AI systems in online learning, we recognize some limitations that should be considered when interpreting the results. First, although this study attempted to capture various forms of AI systems in online learning based on ideation by HCI designers and AI experts, other kinds of AI systems might exist, and different AI systems might offer different insights. As such, further studies could be conducted with different kinds of AI systems. Next, students' and instructors' perceptions of AI systems may vary across disciplines. In the current study, we recruited students and instructors from diverse majors and subjects. Although this helped us generalize our findings across participants with diverse backgrounds, there is more room to investigate how students and instructors in different disciplines perceive AI systems differently. In our findings, we anecdotally observed that instructors in humanities-based fields were more concerned about rapport with students and students' creativity than instructors in other disciplines. To fully investigate this, future research should consider the different learner–instructor interaction needs of participants from different majors (e.g., engineering vs. humanities).
Another limitation is that the study was conducted by having participants read storyboards rather than directly interact with AI systems. This might have limited participants' perceptions of the AI systems. If participants had continuous, direct interactions with AI systems in the real world, their perceptions might change. As such, future researchers should examine students' responses to direct exposure to AI systems. This can be accomplished in a variety of ways. For example, one could conduct a lab experiment using virtual reality, the Wizard-of-Oz method, or the user enactment method to see how students actually respond to AI systems. It would also be meaningful to conduct a longitudinal study to understand whether and/or how student perceptions change over time.
Theoretical implications
This study provides theoretical implications for a learner–instructor interaction framework by highlighting and mapping key AI-related ethical issues (i.e., responsibility, agency, and surveillance) in online learning environments. Researchers have called for clear ethical guidelines for future research to prevent AI systems from accidentally harming people (Loi et al., 2019). Although several ethical frameworks and professional codes of conduct have been developed to mitigate the potential dangers and risks of AI in education, significant debates continue about their specific impact on students and instructors (Williamson & Eynon, 2020). The results of this study increase our understanding of the boundaries that determine students' and instructors' trust in and acceptance of AI systems, and they provide a theoretical background for designing AI systems that positively support learner–instructor interaction in a variety of learning situations.
Practical implications
This study has practical implications for both students and instructors. Interestingly, most of the negative perceptions of AI systems stemmed from students' unrealistic expectations of and misunderstandings about them. An AI system's answer is nothing more than the output of an algorithm trained on accumulated data, yet students typically expect the AI system to be accurate. These misconceptions can be barriers to the effective use of AI systems by students and instructors. To address this, it is important to foster AI literacy in students and instructors without a technical background (Long & Magerko, 2020). For example, recent studies have published guides on how to incorporate AI into K-12 curricula (Touretzky et al., 2019), and researchers are exploring how to engage young learners in creative programming activities involving AI (Zimmermann-Niefield et al., 2019).
Furthermore, to minimize the negative impact of AI systems on learner–instructor interaction, it is important to address the tensions that arise when AI systems violate the boundaries between students and instructors (e.g., responsibility, agency, and surveillance issues). We proposed that future AI systems should ensure explainability, keep humans in the loop, and collect and present data carefully. By doing so, AI systems can be more closely integrated into future online learning. It is important to note that the present study does not argue that AI systems will replace the role of human instructors. Rather, in the online learning of the future, AI systems and humans will work closely together, and for this, it is important to use these systems with consideration of their perceived affordances and drawbacks.
Appendix A: summary oftheAI experts’ information
Appendix B: speed dating interview script
1. Introduction
Hello, thank you for taking the time for this interview today. We're really looking forward to learning from your experience with online learning.
Today, we'll be discussing a set of 11 storyboards that are related to AI systems for online courses. When reading the storyboards, try to think about them in the context of your discipline and experiences. Our goal is to reveal your perceptions of AI in online learning.
For your information, the interview will take about 60 min. The interview will be audio-recorded but will be kept confidential and de-identified.
2. For each storyboard
Do you think this AI system supports learner–instructor interaction? Yes, no, or
do you feel neutral? Why?
[When the participant is a student] Would the incorporation of this AI system
into your courses change your interaction with the instructor?
[When the participant is an instructor] Would incorporating this AI system into
the course change how you interact with students?
Do you have any reservations or concerns about this AI system?
3. After examining all storyboards (capturing participants’ holistic point of view)
Of the storyboards shown today, which AI systems do you think would work well
in your online classroom? Why? Also, which ones wouldn’t work well?
How do you think the adoption of AI would affect the relationship between stu-
dents and the instructor?
4. Conclusion
Do you have any final comments?
Thank you for taking the time to interview with us today. We really appreciate your participation in our study and your willingness to share your expertise. Your insights were really helpful.
Acknowledgements
The authors would like to thank all students, instructors, and AI experts for their great support and inspiration.
Authors’ contributions
KS: conceptualization, methodology, investigation, writing—original draft, visualization, project administration; JT:
conceptualization, methodology, investigation, data curation, writing—original draft, project administration; IR: writ-
ing—review and editing, conceptualization; SF: writing—review and editing, supervision, project administration, funding
acquisition; DY: writing—review and editing, conceptualization, supervision, project administration. KS and JT contributed equally. All authors read and approved the final manuscript.
Funding
This study was financially supported by Seoul National University of Science & Technology.
Availability of data and materials
The full set of storyboards and an example of our codes can be viewed at https://osf.io/3aj5v/?view_only=bc5fa97e6f7d46fdb66872588ff1e22e.
Declarations
Competing interests
The authors declare that they have no competing interests.
Author details
1 Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, 232 Gongneung-ro, Gongneung-dong, Nowon-gu, Seoul 01811, Korea. 2 Department of Computer Science, The University of British Columbia, Vancouver, Canada. 3 Faculty of Education in Science and Technology, Technion-Israel Institute of Technology, Haifa, Israel. 4 Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada.
Received: 20 April 2021 Accepted: 29 July 2021
References
Andersen, J. C. (2013). Learner satisfaction in online learning: An analysis of the perceived impact of learner-social media and
learner–instructor interaction. Doctoral dissertation. East Tennessee State University, Tennessee.
Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228(4698), 456–462.
Aslan, S., Alyuz, N., Tanriover, C., Mete, S. E., Okur, E., D’Mello, S. K., & Arslan Esme, A. (2019). Investigating the impact of a
real-time, multimodal student engagement analytics technology in authentic classrooms. In: Proceedings of the 2019
CHI conference on human factors in computing systems (pp. 1–12).
Bajaj, M., & Li, J. (2020). Students, faculty express concerns about online exam invigilation amidst COVID-19 outbreak. Retrieved February 8, 2021, from https://www.ubyssey.ca/news/Students-express-concerns-about-online-exams/
Baker, R. S. (2016). Stupid tutoring systems, intelligent humans. International Journal of Artificial Intelligence in Education,
26(2), 600–614.
Banna, J., Lin, M. F. G., Stewart, M., & Fialkowski, M. K. (2015). Interaction matters: Strategies to promote engaged learning
in an online introductory nutrition course. Journal of Online Learning and Teaching/MERLOT, 11(2), 249.
Beard, A. (2020). Can computers ever replace the classroom? Retrieved January 10, 2021, from https://www.theguardian.com/technology/2020/mar/19/can-computers-ever-replace-the-classroom
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538(7623), 20.
Chan, R. (2019). The Cambridge Analytica whistleblower explains how the firm used Facebook data to sway elections. Business Insider. Retrieved from https://www.businessinsider.com/cambridge-analytica-whistleblower-christopher-wylie-facebook-data-2019-10
Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313.
Cruz-Benito, J., Sánchez-Prieto, J. C., Therón, R., & García-Peñalvo, F. J. (2019). Measuring students’ acceptance to AI-driven
assessment in eLearning: Proposing a first TAM-based research model. In: International conference on human–com-
puter interaction (pp. 15–25). Springer, Cham.
Davidoff, S., Lee, M. K., Dey, A. K., & Zimmerman, J. (2007). Rapidly exploring application design through speed dating. In:
International conference on ubiquitous computing (pp. 429–446). Springer, Berlin, Heidelberg.
Felix, C. V. (2020). The role of the teacher and AI in education. In: International perspectives on the role of technology in
humanizing higher education. Emerald Publishing Limited.
Ferguson, R. (2019). Ethical challenges for learning analytics. Journal of Learning Analytics, 6(3), 25–30.
Fong, M., Dodson, S., Harandi, N. M., Seo, K., Yoon, D., Roll, I., & Fels, S. (2019). Instructors desire student activity, literacy,
and video quality analytics to improve video-based blended courses. In Proceedings of the Sixth (2019) ACM Confer-
ence on Learning@ Scale (pp. 1–10).
Goel, A. K., & Polepeddi, L. (2016). Jill Watson: A virtual teaching assistant for online education. Georgia Institute of
Technology.
Guilherme, A. (2019). AI and education: The importance of teacher and student relations. AI & Society, 34(1), 47–54.
Gunning, D. (2017). Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), nd Web,
2(2).
Heidicker, P., Langbehn, E., & Steinicke, F. (2017). Influence of avatar appearance on presence in social VR. In: 2017 IEEE
symposium on 3D user interfaces (3DUI) (pp. 233–234). IEEE.
Holstein, K., Hong, G., Tegene, M., McLaren, B. M., & Aleven, V. (2018). The classroom as a dashboard: Co-designing wear-
able cognitive augmentation for K-12 teachers. In: Proceedings of the 8th international conference on learning analytics
and knowledge (pp. 79–88).
Holstein, K., Aleven, V., & Rummel, N. (2020). A conceptual framework for human–AI hybrid adaptivity in education. In:
International conference on artificial intelligence in education (pp. 240–254). Springer, Cham.
Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in
Education. Computers and Education: Artificial Intelligence, 1, 100001.
Jou, M., Lin, Y. T., & Wu, D. W. (2016). Effect of a blended learning environment on student critical thinking and knowledge
transformation. Interactive Learning Environments, 24(6), 1131–1147.
Kang, M. S. (2010). Development of learners’ perceived interaction model and scale between learner and instructor in e-learning
environments. Doctoral dissertation. Korea University, Korea.
Kang, M., & Im, T. (2013). Factors of learner–instructor interaction which predict perceived learning outcomes in online
learning environment. Journal of Computer Assisted Learning, 29(3), 292–301.
Laura, R. S., & Chapman, A. (2009). The technologisation of education: Philosophical reflections on being too plugged in.
International Journal of Children’s Spirituality, 14(3), 289–298.
Lee, S. (2020). Proctorio CEO releases student's chat logs, sparking renewed privacy concerns. Retrieved February 8, 2021, from https://www.ubyssey.ca/news/proctorio-chat-logs/
Linsey, J. S., & Becker, B. (2011). Effectiveness of brainwriting techniques: comparing nominal groups to real teams. In:
Design creativity 2010 (pp. 165–171). Springer.
Loi, D., Wolf, C. T., Blomberg, J. L., Arar, R., & Brereton, M. (2019). Co-designing AI futures: Integrating AI ethics, social com-
puting, and design. In: Companion publication of the 2019 on designing interactive systems conference 2019 companion
(pp. 381–384).
Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020
CHI conference on human factors in computing systems (pp. 1–16).
Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1(3), 1–3.
Luo, N., Zhang, M., & Qi, D. (2017). Effects of different interactions on students’ sense of community in e-learning environ-
ment. Computers & Education, 115, 153–160.
Luria, M., Zheng, R., Huffman, B., Huang, S., Zimmerman, J., & Forlizzi, J. (2020). Social boundaries for personal agents in the
interpersonal space of the home. In: Proceedings of the 2020 CHI conference on human factors in computing systems
(pp. 1–12).
Martin, F., & Bolliger, D. U. (2018). Engagement matters: Student perceptions on the importance of engagement strate-
gies in the online learning environment. Online Learning, 22(1), 205–222.
Martin, F., Wang, C., & Sadaf, A. (2018). Student perception of helpfulness of facilitation strategies that enhance instructor
presence, connectedness, engagement and learning in online courses. The Internet and Higher Education, 37, 52–65.
McArthur, A. (2020). Students struggle with online test proctoring systems. Retrieved January 10, 2021, from https://universe.byu.edu/2020/12/17/students-struggle-with-online-test-proctoring-systems/
Misiejuk, K., & Wasson, B. (2017). State of the field report on learning analytics. Centre for the Science of Learning & Technol-
ogy (SLATE), University of Bergen.
Moore, M. G. (1989). Three types of interaction. American Journal of Distance Education, 3(2), 1–7.
Murphy, R. F. (2019). Artificial intelligence applications to support K–12 teachers and teaching. RAND Corporation. https://doi.org/10.7249/PE315
Nguyen, T. D., Cannata, M., & Miller, J. (2018). Understanding student behavioral engagement: Importance of student
interaction with peers and teachers. The Journal of Educational Research, 111(2), 163–174.
Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness crite-
ria. International Journal of Qualitative Methods, 16(1), 1609406917733847.
Perin, D., & Lauterbach, M. (2018). Assessing text-based writing of low-skilled college students. International Journal of
Artificial Intelligence in Education, 28(1), 56–78.
Popenici, S. A., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher educa-
tion. Research and Practice in Technology Enhanced Learning, 12(1), 22.
Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in
the online environment: A meta-analysis. Computers in Human Behavior, 71, 402–417.
Robinson, H., Kilgore, W., & Warren, S. (2017). Care, communication, support: Core for designing meaningful online collaborative learning. Online Learning Journal. https://doi.org/10.24059/olj.v21i4.1240
Roll, I., & Winne, P. H. (2015). Understanding, evaluating, and supporting self-regulated learning using learning analytics.
Journal of Learning Analytics, 2(1), 7–12.
Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial
Intelligence in Education, 26(2), 582–599.
Roll, I., Russell, D. M., & Gašević, D. (2018). Learning at scale. International Journal of Artificial Intelligence in Education, 28(4),
471–477.
Ross, B., Chase, A. M., Robbie, D., Oates, G., & Absalom, Y. (2018). Adaptive quizzes to increase motivation, engagement
and learning outcomes in a first year accounting unit. International Journal of Educational Technology in Higher
Education, 15(1), 30.
Seo, K., Fels, S., Kang, M., Jung, C., & Ryu, H. (2020a). Goldilocks conditions for workplace gamification: How narrative
persuasion helps manufacturing workers create self-directed behaviors. Human–Computer Interaction. 1–38.
Seo, K., Fels, S., Yoon, D., Roll, I., Dodson, S., & Fong, M. (2020b). Artificial intelligence for video-based learning at scale. In
Proceedings of the Seventh ACM Conference on Learning@ Scale (pp. 215–217).
Seo, K., Dodson, S., Harandi, N. M., Roberson, N., Fels, S., & Roll, I. (2021). Active learning with online video: The impact of
learning context on engagement. Computers & Education, 165, 104132.
Shackelford, J. L., & Maxwell, M. (2012). Contribution of learner–instructor interaction to sense of community in graduate
online education. MERLOT Journal of Online Learning and Teaching, 8(4), 248–260.
Stark, L. (2019). Facial recognition is the plutonium of AI. XRDS: Crossroads, the ACM Magazine for Students, 25(3), 50–55.
Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know
about AI?. In: Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 9795–9799).
Truong, K. N., Hayes, G. R., & Abowd, G. D. (2006). Storyboarding: an empirical determination of best practices and effec-
tive guidelines. In: Proceedings of the 6th conference on designing interactive systems (pp. 12–21).
Tsai, Y. S., Whitelock-Wainwright, A., & Gašević, D. (2020). The privacy paradox and its implications for learning analytics. In:
Proceedings of the tenth international conference on learning analytics & knowledge (pp. 230–239).
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems.
Educational Psychologist, 46(4), 197–221.
Walker, C. H. (2016). The correlation between types of instructor-student communication in online graduate courses and stu-
dent satisfaction levels in the private university setting. Doctoral dissertation. Carson-Newman University, Tennessee.
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media
and Technology, 45(3), 223–235.
Wogu, I. A. P., Misra, S., Olu-Owolabi, E. F., Assibong, P. A., Udoh, O. D., Ogiri, S. O., & Damasevicius, R. (2018). Artificial intel-
ligence, artificial teachers and the fate of learners in the 21st century education sector: Implications for theory and
practice. International Journal of Pure and Applied Mathematics, 119(16), 2245–2259.
Woolf, B. P., Arroyo, I., Muldner, K., Burleson, W., Cooper, D. G., Dolan, R., & Christopherson, R. M. (2010). The effect of moti-
vational learning companions on low achieving students and students with disabilities. In: International conference
on intelligent tutoring systems (pp. 327–337). Springer, Berlin, Heidelberg.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence
applications in higher education–where are the educators? International Journal of Educational Technology in Higher
Education, 16(1), 39.
Zhang, C., Chen, H., & Phang, C. W. (2018). Role of instructors’ forum interactions with students in promoting MOOC
continuance. Journal of Global Information Management (JGIM), 26(3), 105–120.
Zimmerman, J., & Forlizzi, J. (2017). Speed dating: Providing a menu of possible futures. She Ji: The Journal of Design, Economics, and Innovation, 3(1), 30–50.
Zimmermann-Niefield, A., Turner, M., Murphy, B., Kane, S. K., & Shapiro, R. B. (2019). Youth learning machine learning
through building models of athletic moves. In Proceedings of the 18th ACM international conference on interaction
design and children (pp. 121–132).
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.