Jeong-Bae Son*, Natasha Kathleen Ružić and Andrew Philpott
Artificial intelligence technologies and
applications for language learning and
teaching
https://doi.org/10.1515/jccall-2023-0015
Received June 14, 2023; accepted August 21, 2023; published online September 15, 2023
Abstract: Artificial intelligence (AI) is changing many aspects of education and is
gradually being introduced to language education. This article reviews the literature
to examine the main trends and common findings in relation to AI technologies and
applications for second and foreign language learning and teaching. With special
reference to computer-assisted language learning (CALL), the article explores nat-
ural language processing (NLP), data-driven learning (DDL), automated writing
evaluation (AWE), computerized dynamic assessment (CDA), intelligent tutoring
systems (ITSs), automatic speech recognition (ASR), and chatbots. It contributes to
discussions on understanding and using AI-supported language learning and
teaching. It suggests that AI will be continuously integrated into language education
and AI technologies and applications will have a profound impact on language
learning and teaching. Language educators need to ensure that AI is effectively used
to support language learning and teaching in AI-powered contexts. More rigorous
research on AI-supported language learning and teaching is recommended to
maximise second and foreign language learning and teaching with AI.
Keywords: artificial intelligence; AI applications; natural language processing;
computer-assisted language learning; AI-supported language teaching
1 Introduction
Artificial intelligence (AI) is the ability of computer systems to perform tasks that
normally require human intelligence (Encyclopedia Britannica, 2021; Oxford Refer-
ence, 2021). From a technical perspective, AI is a computer technology that enables
*Corresponding author: Jeong-Bae Son, University of Southern Queensland, Springfield Central,
Australia, E-mail: jeong-bae.son@usq.edu.au. https://orcid.org/0000-0001-5346-5483
Natasha Kathleen Ružić, Institute for Migration and Ethnic Studies, Zagreb, Croatia,
E-mail: natasha.ruzic@imin.hr. https://orcid.org/0000-0002-6706-5429
Andrew Philpott, Kwansei Gakuin University, Nishinomiya, Japan, E-mail: andrewphilpott83@gmail.com.
https://orcid.org/0000-0002-8056-4775
computer systems to simulate human intelligence (Liang et al., 2021; Pokrivcakova,
2019). It is changing many workplaces and institutions and continues to become more
integrated into education (e.g., Mindzak & Eaton, 2021; Naffi et al., 2022; Srinivasan, 2022; Zhang & Aslan, 2021). Kukulska-Hulme et al. (2020) classified the impact of AI on educational contexts in terms of learning for AI, learning about AI, and learning with AI. They categorised learning with AI into system-facing applications of AI (supporting institutions' administrative functions such as marketing and finance), student-facing applications of AI, and teacher-facing applications of AI. Among them, AI applications that are accessible and can be used for learning and teaching should be of interest to students and teachers.
This article reviews the literature in order to examine the main trends and
common findings of studies conducted in relation to AI in language education and
contribute to discussions on understanding and using AI-supported language
learning and teaching. While it is essential to discuss big data, data mining, deep
learning, and machine learning in understanding AI, the focus of the article is spe-
cifically on AI technologies and applications for second language (L2) and foreign
language (FL) learning and teaching. The authors searched for keywords associated with AI in computer-assisted language learning (CALL)-related journals published in 2010–2022, such as CALICO Journal, Computer Assisted Language Learning, Language Learning & Technology, ReCALL, System, Computers & Education, Computers and Education – Artificial Intelligence, and Interactive Learning Environments. They also searched books and book chapters directly related to AI in language learning and teaching. In addition, they checked the bibliographies of recent articles and chapters for other relevant pieces.
The article explores seven categories of AI technologies and applications for
language education, considering the coverage and focus of studies published in the
selected journals and books. The categories are interrelated and can be combined or
expanded in line with the nature of AI. They include natural language processing
(NLP), data-driven learning (DDL), automated writing evaluation (AWE), comput-
erized dynamic assessment (CDA), intelligent tutoring systems (ITSs), automatic
speech recognition (ASR), and chatbots. The article presents each of the categories
and then future directions, followed by a conclusion.
2 AI in language learning and teaching
2.1 Natural language processing
NLP allows machines to understand human language and is used to make AI a
valuable tool for language learning. It offers the utility of machine translation (MT),
wherein a source language is automatically converted to a target language (see Lee,
2023, for a review of MT). Previous research on the application of NLP has focused on
methods for assisting learning and learner feedback (Esit, 2011), improving the
analysis of learner input for a variety of tasks and the optimal processing architec-
ture (Amaral et al., 2011), and the potential for computer systems to automatically
generate activities using methods commonly used by teachers, such as questioning
(Chinkina et al., 2020). NLP also informs researchers and educators about learning processes: how various textual aspects can influence learners, leading to better text selection and construction at different stages of learning (e.g., Monteiro & Kim, 2020), and how different devices can be used to improve learner outcomes (e.g., Pérez-Paredes et al., 2018).
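To illustrate the kind of learner-language analysis that NLP makes possible, the following sketch tags a learner sentence with lemmas and part-of-speech labels using the open-source spaCy library; the model, the sentence, and the output format are illustrative assumptions rather than tools used in the studies reviewed here.

```python
import spacy

# A minimal sketch of NLP-based learner-input analysis (not a system from the studies above).
# Assumes spaCy is installed and the small English model has been downloaded:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

learner_sentence = "She have went to the library yesterday."  # hypothetical learner input
doc = nlp(learner_sentence)

for token in doc:
    # Lemma and part-of-speech information of this kind underlies feedback,
    # text selection, and automatic question generation.
    print(f"{token.text:<10} lemma={token.lemma_:<8} pos={token.pos_:<6} tag={token.tag_}")
```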
Pérez-Paredes et al. (2018) surveyed 230 teachers in Spain and the UK to examine the teachers' use and perceptions of NLP technologies as open educational resources (OERs). They found that the teachers' knowledge about the technologies was generally low and the technologies were under-used, while the most recognised tools were online dictionaries and spell checkers. Nevertheless, the teachers showed positive attitudes towards using OER NLP technologies. Pérez-Paredes et al. suggested that language teachers should be educated on the benefits of using NLP technologies and be supported to develop their skills for using them. Chinkina et al.
(2020), on the other hand, compared the results of two studies on the use of
computer-generated wh-questions. Their results showed that the computer-
generated questions were of a similar quality to those designed by a human
teacher.
2.2 Data-driven learning
DDL is facilitated by the use of corpora. It is encouraged through students' own investigation of patterns naturally occurring in their target language. Language corpora provide learners with authentic linguistic data (Pérez-Paredes, 2022). Researchers have attempted to examine how to improve DDL in practice, such as by using corpus data for essay writing correction (Tono et al., 2014; Wu, 2021), scientific report writing (Crosthwaite & Steeples, 2022), and extensive reading (Hadley & Charles, 2017); using big data with an inclusive approach (Godwin-Jones, 2021); incorporating DDL into mobile-assisted language learning (MALL) (Pérez-Paredes et al., 2019); and assisting teachers in incorporating DDL into their lessons (Crosthwaite et al., 2021).
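At its simplest, DDL rests on concordancing: retrieving every occurrence of a word or pattern from a corpus so that learners can induce its typical behaviour from authentic examples. The sketch below, a minimal illustration assuming the NLTK library and a tiny invented corpus rather than the large corpora used in the studies cited in this section, prints a keyword-in-context (KWIC) display for one search term.

```python
from nltk.text import Text

# A toy "corpus" for illustration only; real DDL draws on large corpora
# such as the BNC or COCA, typically through dedicated concordancers.
sample = (
    "the results depend on the sample size . "
    "learners depend heavily on dictionaries . "
    "success may depend on sustained practice ."
)
tokens = sample.split()

# Keyword-in-context display: each line shows the search word with its
# surrounding co-text, the raw material for learners' pattern induction.
Text(tokens).concordance("depend", width=60)
```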
Tono et al. (2014) examined the types of words and phrases that were more suited to using corpus data to assist in error corrections. They identified three main categories of writing errors their undergraduate students made: misformation, addition, and omission. When using corpus data, misformation errors were the most difficult to correct while addition and omission errors were efficiently resolved. Tono et al. asserted that corpora are a useful tool, but an overdependency on the tool for all forms of writing correction is not advised. In a different context, Hadley and Charles (2017) found that a DDL approach to improving the reading speed and lexicogrammatical knowledge of low-proficiency students resulted in smaller improvements than those achieved by students who did not use the DDL approach. They suggested that a data-directed approach to DDL, providing more scaffolding, structure, and assistance, would be better suited to low-proficiency learners. Wu (2021) investigated the way in which seven students of English in Chinese Taiwan used a corpus to assist in identifying collocation patterns in essay writing. She found that the students varied in their ability to use the various affordances available and underlined the importance of learner training.
Crosthwaite et al. (2021) investigated the methods that nine Indonesian secondary school trainee teachers used to incorporate DDL corpus tools into their lesson plans. They found that, while the teachers generally incorporated DDL into their lesson plans, they did not have the required skills and instructional knowledge to use the tools for directed pedagogical purposes and tasks. They emphasised the need for experts to use corpora, especially at the school level. With a focus on scientific report writing, Crosthwaite and Steeples (2022) investigated 14–15-year-old female students' use of DDL after completing training on the use of DDL and corpus data. They found that the students' metalinguistic knowledge was not notably developed but their productive knowledge was improved. The students were more inclined to use non-corpus applications when looking for definitions and whole passages. After three months, the students reported that they rarely used the corpora, while seven students indicated their preference for and use of non-corpora online tools such as Google.
Boulton and Vyatkina (2021) identified that, between 2016 and 2019, almost 200 empirical studies on DDL were published and that most of these studies concluded that DDL was advantageous for language learning. They pointed out the need for more theory-driven research on DDL and the lack of replication studies. In a systematic review of DDL in CALL research published in five journals during 2011–2015, on the other hand, Pérez-Paredes (2022) reported that only 4.3% of the 759 articles published in the journals discussed DDL or corpora use in language learning, and the majority (69%) of the articles investigated learners' attitudes towards DDL. The review highlighted teacher training, which is imperative to the successful uptake of DDL, and technical problems, which should be addressed to normalise DDL.
2.3 Automated writing evaluation
AWE provides students with feedback on their written work (Lee, 2020; Li et al., 2017;
Link et al., 2022; Liu & Kunnan, 2016; Ranalli, 2018; Zhai & Ma, 2022). It allows students
to gain valuable information on the types of errors they make (Link et al., 2014).
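Commercial AWE systems rely on large-scale NLP and machine-learned scoring models. The sketch below is only a toy illustration, under the assumption that even surface measures (length, lexical variety, long sentences) can be turned into the kind of instant, learner-facing feedback that the studies in this section investigate; it does not represent how any of the systems named here work.

```python
import re

def surface_feedback(essay: str) -> dict:
    """Toy AWE-style feedback based on surface measures only (illustrative)."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay.lower())
    return {
        "word_count": len(words),
        "avg_sentence_length": round(len(words) / max(len(sentences), 1), 1),
        "type_token_ratio": round(len(set(words)) / max(len(words), 1), 2),
        "long_sentences": sum(
            1 for s in sentences if len(s.split()) > 25  # flag sentences over 25 words
        ),
    }

# Hypothetical learner text, not data from any of the studies reviewed.
print(surface_feedback("The experiment was good. It show that plants grow faster with light."))
```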
Chukharev-Hudilainen and Saricaoglu (2016) developed an automated causal
discourse analyser and examined its accuracy in evaluating essays written by seven
English as a second language (ESL) students at an American university. Their find-
ings supported the use of automated causal discourse analysers as an effective tool
for assisting writers. In a study of the use of an automated causal discourse evalu-
ation tool, Saricaoglu (2019) found that there was no improvement in 31 ESL students’
written causal explanations and pointed out the importance of pedagogical choices,
teacher training, and combined feedback from the tool and the teacher. Lee et al.
(2013) also found that the combination of teacher feedback and essay critiquing
system feedback received more positive acceptance than essay critiquing system
feedback only. In another study of 75 Turkish EFL university students' use of AWE,
Han and Sari (2022) found that their experimental group, which used both automated
feedback and teacher feedback, demonstrated greater gains than those in the
teacher-only feedback group.
In a study exploring postsecondary teachers' use and perceptions of Grammarly
(https://www.grammarly.com/), Koltovskaia (2023) highlighted that teacher attitudes
and skills were highly influential in the use of AWE. Similarly, Link et al. (2014) found
that the implementation of AWE was impacted by teachers' perceptions, abilities,
and desires to strive for best practices. They argued that a benefit for students using
AWE tools outside the classroom was the improvement of student autonomy. This
point was supported by Barrot (2023) who investigated the use of automated written
corrective feedback (AWCF) offered by Grammarly and asserted that AWCF sup-
ported autonomous learning. Wang et al.’s (2013) study also supported the benefits of
AWE for autonomous learning and improved learner accuracy. In using a mobile
version of Grammarly, Dizon and Gayed (2021) found that 31 Japanese EFL students
improved their grammar use and lexical richness as a result of using Grammarly
although there was no real difference in their syntactic complexity or fluency.
Wilken (2018) investigated two Chinese university students' use of AWE over a four-week period and discovered that the students valued the identification of errors with first language (L1) glossed feedback and could use the feedback to find errors on their own. She suggested that AWE developers need to build on their resources by expanding selections and providing the option to use the L1 or only the L2. The option for self-correction was also supported by Harvey-Scholes (2018) and Godwin-Jones (2022) as a positive aspect of assisting students working without the presence of the teacher.
Jiang and Yu (2022) examined a group of Chinese EFL students' experiences with automated feedback in their writing activities and emphasised the need for developing students' awareness of resources and strategies for using automated feedback. They found that explicit error feedback provided greater assistance than generic feedback. Chen et al. (2022) also found that their four Chinese EFL students were highly motivated to improve their AWE scores and spent time attending to language errors such as word selection and grammatical issues.
In a study of 24 Chinese EFL learners' engagement with automated feedback using eye-tracking, Liu and Yu (2022) emphasised the importance of explicit feedback. Burstein et al. (2016) suggested the development of AWE that breaks genres into component subconstructs, focusing on the development of core competencies in writing a variety of genres. Cotos and Pendar (2016) investigated AWE feedback for discourse in research articles and asserted the need to extend AWE to consider contextual information and sequencing and to develop meaning-oriented feedback systems. Feng and Chukharev-Hudilainen's (2022) study also focused on genre-specific writing, using a specialised corpus. Their study reported that using a genre-based AWE system improved 13 Chinese EFL students' use of linguistic features and rhetorical moves for communicative goals in their writing.
Shi et al. (2022) examined the use of evidence in argumentative writing with 29
Chinese EFL students and the Virtual Writing Tutor (https://virtualwritingtutor.com/).
They found that collaborative discourse focusing on AWE feedback helped the
students improve their writing while the students appeared to pass through three
main phases during the process: trustful, sceptical, and critical. Wambsganss et al.
(2022) expanded the use of AWE for argumentative writing with automated feed-
back and social comparison nudging. They found no significant difference in their
participants’writing ability improvements between the automated feedback group
and the non-automated feedback group; however, the group with social comparison
nudging wrote higher quality texts containing convincing arguments. They specu-
lated that the social comparison nudging facilitated psychological processes such as
adapting to norms and comparisons.
2.4 Computerized dynamic assessment
CDA provides learners with automatic mediations (Ebadi & Saeedian, 2015) and
allows learners to analyse language-related issues (Kamrood et al., 2021; Tianyu & Jie,
2018). In CDA, corrective feedback (CF) has been commonly discussed as a key topic.
CF assists students in gaining feedback on their errors while assisting teachers in
gaining a deeper understanding of their students' ability levels (Ebadi & Rahimi,
2019). The ability of computers to provide appropriate and effective CF has been of
interest to researchers with the added benefit of an online version of CF being able to
be accessed by many students at the same time. In a small-scale study, Ebadi and
Rahimi (2019) used Google Docs (https://docs.google.com/) as a writer collaboration
tool in their mixed approach to online dynamic assessment with three EFL university
students. Their students reported positive views of the dynamic assessment process
although they showed some difficulties in writing more challenging texts.
In an intelligent CALL (ICALL) environment where insights from computational linguistics and NLP are integrated, Ai (2017) investigated the use of graduated CF (i.e., feedback progressing from general and implicit to specific and explicit) with six students learning Chinese at an American university. His ICALL system for the Chinese language could track learners' microgenetic changes as they worked through iterations of graduated CF to complete an English-to-Chinese translation task. He found that the graduated approach to CF was effective in helping the students self-identify and self-correct a range of grammatical issues (e.g., punctuation, grammatical objects, verb complement) although there were some instances in which the ICALL system failed to provide effective graduated CF and an onsite tutor provided necessary remedies to the students.
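The logic of graduated CF described by Ai (2017) can be pictured as a ladder of prompts that a system climbs only as far as a learner needs. The sketch below is a simplified, hypothetical rendering of that idea; the prompts, the target structure, and the checking function are invented for illustration and do not reproduce Ai's ICALL system.

```python
# Hypothetical graduated corrective feedback: prompts move from implicit to explicit
# and stop as soon as the learner's revision is acceptable.
PROMPTS = [
    "Something in your sentence needs another look.",          # most implicit
    "Check the verb phrase in your sentence.",
    "The verb complement is not correct here.",
    "Use 'finish doing', not 'finish to do', after 'finish'.",  # most explicit
]

def is_acceptable(answer: str) -> bool:
    # Stand-in for the system's real linguistic analysis of the learner's attempt.
    return "finish doing" in answer.lower()

def mediate(revisions: list[str]) -> str:
    """Walk through learner revisions, revealing one more explicit prompt each time."""
    for revision, prompt in zip(revisions, PROMPTS):
        if is_acceptable(revision):
            return "Accepted: " + revision
        print("Feedback:", prompt)
    return "Explicit correction provided by the tutor."

# Simulated sequence of learner attempts (invented for illustration).
print(mediate([
    "I finish to do my homework.",
    "I finish to doing my homework.",
    "I finish doing my homework.",
]))
```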
Zhang and Lu (2019) investigated the use of a CDA listening test with 19 students
learning Chinese at an American university and found that the diagnostic language
assessment was successful not only for assessment but also for helping teachers
facilitate more individualised support for students. The assessment offered flexi-
bility in location and timeframe for taking the test. With a different focus on CF, Gao
and Ma (2019) examined two different forms of computer-automated metalinguistic
CF in drills with 117 intermediate level EFL students at a Chinese university. They
reported that those who were in CF groups performed better than those who were in
a no-feedback group while no significant effect of the CF was transferred to subse-
quent writing tasks. Yang and Qian (2020), on the other hand, conducted a study of the use of CDA as a teaching and assessment method to promote Chinese EFL students' reading comprehension and reported that, after four weeks of learning, those who were taught using CDA performed more efficiently than those who were taught using conventional teaching methods.
2.5 Intelligent tutoring systems
ITSs are computer systems designed to provide personalised and interactive
instruction to students without intervention from a human teacher. They have been the most common application of AI in language education (Liang et al., 2021). When used in
an EFL context, they aim to support FL learning effectively and efficiently (e.g., Choi,
2016). They can be used as supplements to traditional approaches to education or as
standalone applications for self-study. They can be used in any educational context
with learners of any age (e.g., Xu et al., 2019). They leverage human obsession with
digital technology to provide encapsulated learning experiences (Mohamed & Lamia,
2018). There are various types of ITSs (e.g., Bibauw et al., 2019; Heift, 2010) and some
use AI and machine learning algorithms to adapt to the needs of users (Jiang, 2022).
ITSs can provide personalised experiences to users by assessing ability, detecting errors, and providing CF, and they can deliver activities specifically targeted at what students need to work on, such as pronunciation, vocabulary, or grammar (e.g., Amaral & Meurers, 2011; Choi, 2016). They can also provide
a situational context for users. For example, they can provide cultural information
related to the language being studied. Choi (2016) argued that an ICALL tutoring
system can support the acquisition of grammatical concepts. Xu et al. (2019) con-
ducted a meta-analysis to investigate the effectiveness of using ITSs for students in
K-12 classrooms and found that ITSs produced a larger effect size on reading
comprehension when compared to traditional instruction. Future research is
encouraged to develop more advanced ITSs with more sophisticated NLP to provide
more individualised targeted feedback.
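One way to picture the personalisation at the core of an ITS is a loop that records a learner's errors by skill area and serves the next exercise from the weakest area. The sketch below is a deliberately minimal, hypothetical version of that idea; production systems use far richer learner models and NLP-based error diagnosis.

```python
import random

# Hypothetical exercise bank keyed by skill area (illustrative content only).
EXERCISES = {
    "grammar": ["Choose the correct tense: She ___ (go) to school yesterday."],
    "vocabulary": ["Pick the best synonym for 'rapid'."],
    "pronunciation": ["Record yourself saying: 'thirty-three thirsty travellers'."],
}

# Simple learner model: error counts per skill area.
errors = {"grammar": 4, "vocabulary": 1, "pronunciation": 2}

def next_exercise() -> tuple[str, str]:
    """Select an exercise from the skill area with the most recorded errors."""
    weakest_skill = max(errors, key=errors.get)
    return weakest_skill, random.choice(EXERCISES[weakest_skill])

def record_result(skill: str, correct: bool) -> None:
    """Update the learner model after each attempt."""
    errors[skill] = max(errors[skill] - 1, 0) if correct else errors[skill] + 1

skill, item = next_exercise()
print(f"Targeting {skill}: {item}")
record_result(skill, correct=True)
```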
2.6 Automatic speech recognition
ASR is a technology that uses AI and machine learning techniques to recognise spoken language and convert it into written text. It is commonly used in software applications that utilise voice recognition and speech-to-text, such as intelligent personal assistants (IPAs), automatic transcribers, and notetaking apps (e.g., Evers & Chen, 2022). ASR is also used on smartphones: a user dictates a message into a phone, and the phone understands the language and performs an action based on it. ASR has progressed rapidly over the last decade, becoming more accurate and more widely implemented in a broad range of industries (Daniels & Iwago, 2017). In a review of
technology types and their effectiveness, Golonka et al. (2014) stated that the
measurable impact of technology on FL learning largely came from studies on ASR.
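For readers unfamiliar with how ASR reaches classroom applications, the sketch below shows one common route: sending recorded audio to a cloud recognition engine of the kind discussed by Daniels and Iwago (2017). It assumes the third-party SpeechRecognition package for Python, an internet connection, and a hypothetical audio file, and it is not the setup used in any particular study reviewed here.

```python
import speech_recognition as sr  # assumes the SpeechRecognition package is installed

recognizer = sr.Recognizer()

# "learner_reading.wav" is a hypothetical recording of a learner reading a sentence aloud.
with sr.AudioFile("learner_reading.wav") as source:
    audio = recognizer.record(source)

try:
    # Sends the audio to a cloud speech engine and returns its best transcription.
    transcript = recognizer.recognize_google(audio, language="en-US")
    print("Recognised:", transcript)
except sr.UnknownValueError:
    print("The speech could not be recognised.")  # a frustration noted in the literature
except sr.RequestError as error:
    print("The recognition service could not be reached:", error)
```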
ASR has generated a great deal of interest in the field of CALL (e.g., Ahn & Lee,
2016; Chen, 2011; de Vries et al., 2015; van Doremalen et al., 2016). The literature (e.g.,
Chen et al., 2023; Moussalli & Cardoso, 2020; Tai & Chen, 2023) shows that IPAs have
great potential to be used as a tool for L2/FL learning. Learners can practice as
much as they like in an anxiety-reduced environment (Tai & Chen, 2023). In a study
on the evaluation of an IPA, Dizon (2020) found that the use of Alexa (https://
developer.amazon.com/alexa) led to an improvement in L2 speaking proficiency.
Similarly, Chen et al. (2023) claimed that Google Assistant (https://assistant.google.
com/) could be useful for speaking and listening. IPAs are generally accurate at understanding users' commands (e.g., Daniels & Iwago, 2017; Dizon et al., 2022). It is the immediate feedback and natural language use that make ASR in IPAs beneficial for L2/FL development and improvement.
The use of ASR in messaging apps, software, and websites supports the
improvement of L2 pronunciation as users receive immediate, personalised, and
autonomous feedback (e.g., Chen, 2011; Dai & Wu, 2023; Dizon, 2017; McCrocklin,
2016, 2019). Bashori et al. (2022) investigated two EFL learning websites that use ASR
to provide different types of feedback. Compared to the control group, the treat-
ment group, which used the ASR-based websites, improved not only their pro-
nunciation skills but also their receptive vocabulary. Evers and Chen (2022)
presented a practical approach to utilising ASR technology for pronunciation
practice. Their EFL students read aloud into the notetaking app Speechnotes
(speechnotes.co), which transcribed their speech into text. When the transcription was finished, they reviewed their mistakes. They indicated that reviewing their mistakes by themselves, or especially with someone else, was beneficial. Evers and
Chen’s study showed how a combination of peer feedback and technology feedback
using ASR could improve learners' pronunciation.
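The read-aloud-and-review activity described by Evers and Chen (2022) can be approximated by comparing an ASR transcript with the target sentence and pointing out the words the engine did not recognise. The sketch below does this with a simple word-level comparison; it is an assumption about how such feedback might be generated, not the procedure used by Speechnotes or by the study itself.

```python
from difflib import SequenceMatcher

def pronunciation_feedback(target: str, transcript: str) -> list[str]:
    """Return target words that the ASR transcript did not match (word-level only)."""
    target_words = target.lower().split()
    heard_words = transcript.lower().split()
    matcher = SequenceMatcher(None, target_words, heard_words)
    missed = []
    for op, i1, i2, _, _ in matcher.get_opcodes():
        if op in ("replace", "delete"):  # words heard differently or not at all
            missed.extend(target_words[i1:i2])
    return missed

# Hypothetical target sentence and ASR output (not data from the studies cited).
target = "I would like a glass of water please"
transcript = "I would like a grass of water please"
print("Review these words:", pronunciation_feedback(target, transcript))
```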
The integration of ASR into apps and software allows the learning experience
to become interactive, engaging, and enjoyable, which in turn supports L2/FL
motivation (Moussalli & Cardoso, 2020; Tai & Chen, 2023). IPAs such as Alexa and
Google Assistant provide students with opportunities for conversation (e.g., Chen
et al., 2023; Dizon, 2017). In Evers and Chen’s (2022) study, students showed positive
attitudes towards using ASR-based software to work on their pronunciation.
McCrocklin (2016) suggested that positive attitudes also lead to autonomous
learning as students enjoy ASR-based activities and can do them by themselves.
Teachers also had positive perceptions about the use of ASR-based software to
improve L2 speaking performance in van Doremalen et al.’s (2016) study. In addi-
tion, ASR can be incorporated into games and simulations designed for language
learning and can make the environment immersive (e.g., Morton et al., 2012).
Forsyth et al. (2019) reported that their students liked interacting with an animated
chatting system. When students feel comfortable communicating with an ASR
system, the system can reduce the students' anxiety, increase their willingness to communicate, and have a positive impact on their L2/FL motivation (Ayedoun et al., 2019; Chen et al., 2023; Tai & Chen, 2023).
Another benefit of ASR is that it can personalise learning content according to a
learner’s needs and goals. Chen et al. (2023) found that Google Assistant was good for
individualised learning as learners could control the pace and content based on their
needs. Related to accented speech, Spring and Tabuchi (2022) reported that Japanese EFL
students could improve their vowel-related pronunciation as practicing with the ASR
system allowed them to focus on their pronunciation mistakes and correct the mistakes.
In a different context, Walker et al. (2011) showed how non-native English-speaking
nurses could use a nurse-patient simulator to practice speaking English in a no-risk
environment. ASR can also be useful for testing purposes. For example, Cox and Davies
(2012) examined the use of oral tests that used ASR to assess the speaking abilities of EFL
learners. They found that the tests could be used to predict speaking ability and could therefore be useful in specific situations such as student class placement. Forsyth et al. (2019) argued that it would be feasible to use ASR-based systems, such as an animated agent, for conversation-based assessment.
A few concerns are also noted in the literature. For example, it can be harder for low-level learners to be understood by an IPA (e.g., Dizon, 2017), and, if learners have trouble communicating their commands, they often give up (e.g., Dizon et al., 2022). McCrocklin (2019) also reported that some students were frustrated when ASR-based software did not understand their utterances. Even though Cox and Davies (2012) did not find any gender bias, it is possible that some ASR-based software is more accurate for L2 learners who have specific accents compared to others. In addition, Daniels and Iwago (2017) warned about privacy
concerns while explaining that it is not clear what data IPAs store, where the data
are stored, and how the data are used. Researchers call for future research that
examines which systems are most effective (Evers & Chen, 2022) and how a range of
non-native English speakers with various accents can benefit from ASR systems
(Bashori et al., 2022; Chen et al., 2023).
2.7 Chatbots
A chatbot, also known as a bot, chatterbot, dialogue system, conversational agent,
virtual assistant, or virtual agent, is a software application that interacts with users
via chat (Bibauw et al., 2019; Coniam, 2014; Wang et al., 2021) and stimulates human
conversations by asking and answering questions via text or audio (Kim et al., 2021).
Chatbots are commonly found on companies’websites in a range of industries
such as marketing, healthcare, technical support, customer service, and education,
providing targeted services to website visitors (Fryer et al., 2020; Wang et al., 2021).
Generally, a user asks the chatbot a question, and the chatbot interprets the input,
processes the user’s intent, and then provides a programmed response to the user
(Kim et al., 2021; Smutny & Schreiberova, 2020). Chatbots commonly perform form-filling tasks, such as collecting information to confirm someone's identity or details about a problem or an item a user wants to purchase, and then direct the user to an answer or prepare the information for a human to review.
Chatbots have been around since the 1960s, when Weizenbaum (1966) developed ELIZA, a psychotherapist bot, and they have developed considerably since then. Other notable chatbots include ALICE and Cleverbot. Web-based chatbots have been utilised for several decades and are commonly integrated into messenger apps such as Facebook Messenger (https://www.messenger.com/) (Smutny & Schreiberova, 2020). Chatbots can also have human-like appearances and social, life-like characteristics (e.g., Replika [https://replika.com/]), which can emotionally immerse users in the experience using text, audio, and other visual cues (Ayedoun et al., 2019). These days, chatbots use techniques such as NLP, pattern matching, and neural machine translation to achieve their goals (Huang et al., 2018; Smutny & Schreiberova, 2020).
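Pattern matching, the oldest of the techniques just mentioned and the one behind ELIZA, can be shown in a few lines: each rule pairs a regular expression with a response template. The sketch below is a deliberately small, hypothetical example of that mechanism, not a reconstruction of ELIZA or of any chatbot studied in this section.

```python
import re

# Each rule maps a pattern in the user's turn to a templated reply.
RULES = [
    (r"\bi want to (.+)", "Why do you want to {0}?"),
    (r"\bi feel (.+)", "How long have you felt {0}?"),
    (r"\bmy (\w+) is (.+)", "Tell me more about your {0}."),
]

def reply(user_turn: str) -> str:
    """Return the first matching templated response, or a fallback prompt."""
    for pattern, template in RULES:
        match = re.search(pattern, user_turn.lower())
        if match:
            return template.format(*match.groups())
    return "Can you say more about that?"

print(reply("I want to practise ordering food"))  # -> Why do you want to practise ordering food?
print(reply("My pronunciation is terrible"))      # -> Tell me more about your pronunciation.
```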
Interest in chatbots is rising due to their potential to support L2 and FL learning
in interesting ways (Wang et al., 2021). For learning EFL, Huang et al. (2017) developed
a dialogue-based chatbot called GenieTutor to target specific areas of language
learning interest, such as ordering food, or just to chat freely about any topic. For
learning a range of languages, the Mondly chatbot (https://app.mondly.com/) was
designed as an additional component of a language learning platform. A chatbot has
unlimited patience, can instantly respond to requests using natural language, can
lower learners’anxiety, which encourages willingness to communicate and self-
correction if mistakes are made, can focus on specific topics and areas of interest, and
does not require a human teacher or interlocutor (Bibauw et al., 2019; Coniam, 2014;
Fryer et al., 2020). Students can practice aspects of language that they might not feel
comfortable practicing with a human or practice recently learnt language (Fryer
et al., 2020).
Goda et al. (2014) showed that the use of a chatbot prior to a group discussion led
to an increase in student output and supported the awareness of critical thinking
skills. Kim et al. (2021) found positive results when using a chatbot. They specifically
found that using an AI bot via text or voice prior to completing speaking tasks led to
improved speaking performance. The voice-based chatbot led to greater perfor-
mance than the text-based chatbot and the face-to-face condition. Ayedoun et al.
(2019) argued that, if a chatbot has the ability to perform communication strategies, it
can encourage willingness to communicate. In a different context, Coniam (2014)
found that chatbots generally provided grammatically acceptable answers to ques-
tions. If chatbots can provide teachers with logs of conversations between chatbots
and students, the teachers will be able to identify the students' errors from the logs
and plan lessons to fix the errors.
Negative results have also been reported in the literature. There is a concern
about the novelty effect of using chatbots to support language learning. For example,
Fryer et al. (2017) compared students' interest in tasks in an FL course between
completing a task with a human and completing a task with a chatbot. Their results
suggested that a chatbot could provide initial interest due to its novelty, but student
interest dropped quickly. Smutny and Schreiberova (2020) criticised chatbots for being too mechanical in their behaviour and lacking important communication components. Coniam (2014) also criticised a number of English language chatbots for providing answers that lacked meaning and were not grammatically accurate. Empirical studies that examine the impact of chatbots on L2 and FL learning are still lacking (Kim et al., 2021). Bibauw et al. (2019) called for studies that have more partici-
pants and occur over long periods of time. Smutny and Schreiberova (2020)
suggested that future research should aim to provide guidelines for teachers to
integrate chatbots into their teaching and conduct a content analysis of learners’
conversations with chatbots.
ChatGPT (https://chat.openai.com/) has recently generated great interest in
various fields. It produces detailed written responses to requests for information
based on vast databases. While ChatGPT has a significant issue with factual accuracy
(e.g., Vincent, 2022), its impact on education is being discussed by many educators
and researchers (e.g., Illingworth, 2023; Liu et al., 2023; Loble, 2023). Through a pilot
study of the use of ChatGPT for writing an academic paper, Zhai (2022) reported that
the text written by the AI chatbot was coherent and informative, and suggested that education should focus on improving students' creativity and critical thinking. If carefully planned and used, ChatGPT might offer a rich opportunity for
language teachers to enhance language teaching and create an engaging language
learning experience for their students.
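For teachers curious about how such a tool might be scripted into a practice activity, the sketch below shows one hypothetical way to request a role-play turn from a chat model through OpenAI's Python library. The model name, the prompt, and the library interface (which has changed across library versions) are assumptions, and the sketch is not drawn from any of the studies cited above.

```python
import openai  # assumes the openai package (v0.x interface) and a valid API key

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

# A hypothetical role-play prompt for restaurant-ordering practice.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a patient English conversation partner. Play a waiter, "
                    "keep replies short, and gently recast the learner's errors."},
        {"role": "user", "content": "Hello, I want order the chicken soup, please."},
    ],
)

print(response["choices"][0]["message"]["content"])
```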
3 Future directions
The number of studies on the use of AI in language education is increasing. The
studies generally explore AI technologies or applications with specific types of AI
algorithms or systems (e.g., Pikhart, 2020). Recent studies (e.g., Chen et al., 2023;
Moussalli & Cardoso, 2020; Wang et al., 2022) have reported that language learners
show positive attitudes towards AI tools for language learning. AI can provide instant
feedback and flexibility in learning environments. By using AI, learners can become
more independent in their learning and have more opportunities to learn outside
the classroom (Srinivasan, 2022). In terms of language skills, the most common skill
investigated in AI-related CALL research has been writing (Liang et al., 2021).
In a review of studies on the use of AI in English language learning and teaching
published between 2015 and 2021, Sharadgah and Sa’di (2022) pointed out gaps in the
literature, including inherent issues related to body language, gestures, expressions,
emotions, and translation, a lack of elaborate descriptions of teaching materials used for learning driven by AI, and uncertainties about what can be considered under the realm of AI. Therefore, there is a strong need for more rigorous research in various contexts. Research on AI teaching assistants (e.g., Kim et al., 2020) and AI-based facial expression recognition (e.g., Gao et al., 2021) is being reported, but there is still a long way to go.
There are also concerns that language teachers are not yet prepared for AI (e.g.,
Kessler, 2021). In addition, there are ethical issues we need to consider when research
on AI is conducted with data from learners and teachers. Future research and practice
should address the potential and challenges of the pedagogical and technical devel-
opment of AI and the effective use of AI.
4 Conclusion
This literature review indicates that AI will be continuously developed and inte-
grated into CALL. There will be more discussions on technical requirements and
pedagogical responsibilities for the use of AI in language learning and teaching.
Language educators need to ensure that AI is effectively used to support language
learning and teaching in AI-powered contexts with a clear understanding of what
needs to be considered in the implementation of AI-supported language learning and
teaching. They need to be prepared to use AI technologies and applications and to
support learning experiences in specific contexts. They also need to ask the question
of how to deal with human skills such as critical thinking, collaboration, and crea-
tivity in their practices in AI environments. Researchers are recommended to
respond to the need for more rigorous research on AI technologies and applications
for L2 and FL learning and teaching.
References
Ahn, T. Y., & Lee, S.-M. (2016). User experience of a mobile speaking application with automatic speech
recognition for EFL learning. British Journal of Educational Technology,47(4), 778–786.
Ai, H. (2017). Providing graduated corrective feedback in an intelligent computer-assisted language
learning environment. ReCALL,29(3), 313–334.
Amaral, L. A., & Meurers, D. (2011). On using intelligent computer-assisted language learning in real-life
foreign language teaching and learning. ReCALL,23(1), 4–24.
Amaral, L., Meurers, D., & Ziai, R. (2011). Analyzing learner language: Towards a flexible natural language
processing architecture for intelligent language tutors. Computer Assisted Language Learning,24(1),
1–16.
Ayedoun, E., Hayashi, Y., & Seta, K. (2019). Adding communicative and affective strategies to an embodied
conversational agent to enhance second language learners’willingness to communicate.
International Journal of Artificial Intelligence in Education,29,29–57.
Barrot, J. S. (2023). Using automated written corrective feedback in the writing classrooms: Effects on L2
writing accuracy. Computer Assisted Language Learning,36(4), 584–607.
Bashori, M., van Hout, R., Strik, H., & Cucchiarini, C. (2022). “Look, I can speak correctly”: Learning
vocabulary and pronunciation through websites equipped with automatic speech recognition
technology. Computer Assisted Language Learning. Advance online publication. https://doi.org/10.
1080/09588221.2022.2080230
Bibauw, S., François, T., & Desmet, P. (2019). Discussing with a computer to practice a foreign language:
Research synthesis and conceptual framework of dialogue-based CALL. Computer Assisted Language
Learning,32(8), 1–51.
Boulton, A., & Vyatkina, N. (2021). Thirty years of data-driven learning: Taking stock and charting new
directions over time. Language Learning & Technology,25(3), 66–89.
Burstein, J., Elliot, N., & Molloy, H. (2016). Informing automated writing evaluation using the lens of genre:
Two studies. CALICO Journal,33(1), 117–141.
Chen, H. H.-J. (2011). Developing and evaluating an oral skills training website supported by automatic
speech recognition technology. ReCALL,23(1), 59–78.
Chen, Z., Chen, W., Jia, J., & Le, H. (2022). Exploring AWE-supported writing process: An activity theory
perspective. Language Learning & Technology,26(2), 129–148.
Chen, H. H.-J., Yang, C. T. Y., & Lai, K. K. W. (2023). Investigating college EFL learners’perceptions toward
the use of Google Assistant for foreign language learning. Interactive Learning Environments,31(3),
1335–1350.
Chinkina, M., Ruiz, S., & Meurers, D. (2020). Crowdsourcing evaluation of the quality of automatically
generated questions for supporting computer-assisted language teaching. ReCALL,32(2), 145–161.
Choi, I.-C. (2016). Efficacy of an ICALL tutoring system and process-oriented corrective feedback. Computer
Assisted Language Learning,29(2), 334–364.
Chukharev-Hudilainen, E., & Saricaoglu, A. (2016). Causal discourse analyzer: Improving automated
feedback on academic ESL writing. Computer Assisted Language Learning,29(3), 494–516.
Coniam, D. (2014). The linguistic accuracy of chatbots: Usability from an ESL perspective. Text & Talk,35(5),
545–567.
Cotos, E., & Pendar, N. (2016). Discourse classification into rhetorical functions for AWE feedback. CALICO
Journal,33(1), 92–116.
Cox, T. L., & Davies, R. S. (2012). Using automatic speech recognition technology with elicited oral response
testing. CALICO Journal,29(4), 601–618.
Crosthwaite, P., & Steeples, B. (2022). Data-driven learning with younger learners: Exploring
corpus-assisted development of the passive voice for science writing with female secondary school
students. Computer Assisted Language Learning. Advance online publication. https://doi.org/10.1080/
09588221.2022.2068615
Crosthwaite, P., Luciana, & Wijaya, D. (2021). Exploring language teachers’lesson planning for
corpus-based language teaching: A focus on developing tpack for corpora and DDL. Computer
Assisted Language Learning. Advance online publication. https://doi.org/10.1080/09588221.2021.
1995001
Dai, Y., & Wu, Z. (2023). Mobile-assisted pronunciation learning with feedback from peers and/or
automatic speech recognition: A mixed-methods study. Computer Assisted Language Learning,
36(5–6), 861–884.
Daniels, P., & Iwago, K. (2017). The suitability of cloud-based speech recognition engines for language
learning. The JALT CALL Journal,13(3), 229–239.
de Vries, B. P., Cucchiarini, C., Bodnar, S., Strik, H., & van Hout, R. (2015). Spoken grammar practice and
feedback in an ASR-based CALL system. Computer Assisted Language Learning,28(6), 550–576.
Dizon, G. (2017). Using intelligent personal assistants for second language learning: A case study of Alexa.
TESOL Journal,8(4), 811–830.
Dizon, G. (2020). Evaluating intelligent personal assistants for L2 listening and speaking development.
Language Learning & Technology,24(1), 16–26.
Dizon, G., & Gayed, J. (2021). Examining the impact of Grammarly on the quality of mobile L2 writing. The
JALT CALL Journal,17(2), 74–92.
Dizon, G., Tang, D., & Yamamoto, Y. (2022). A case study of using Alexa for out-of-class, self-directed
Japanese language learning. Computers and Education: Artificial Intelligence,3, 100088.
Ebadi, S., & Rahimi, M. (2019). Mediating EFL learners’academic writing skills in online dynamic
assessment using Google Docs. Computer Assisted Language Learning,32(5–6), 527–555.
Ebadi, S., & Saeedian, A. (2015). The effects of computerized dynamic assessment on promoting at-risk
advanced Iranian EFL students’reading skills. Issues in Language Teaching,4(2), 1–26.
Encyclopedia Britannica. (2021). Artificial intelligence. https://www.britannica.com/technology/artificial-
intelligence
Esit, Ö. (2011). Your verbal zone: An intelligent computer-assisted language learning program in support of
Turkish learners’vocabulary learning. Computer Assisted Language Learning,24(3), 211–232.
Evers, K., & Chen, S. (2022). Effects of an automatic speech recognition system with peer feedback on
pronunciation instruction for adults. Computer Assisted Language Learning,35(8), 1869–1889.
Feng, H.-H., & Chukharev-Hudilainen, E. (2022). Genre-based AWE system for engineering graduate
writing: Development and evaluation. Language Learning & Technology,26(2), 58–77.
Forsyth, C. M., Luce, C., Zapata-Rivera, D., Jackson, G. T., Evanini, K., & So, Y. (2019). Evaluating English
language learners’conversations: Man vs. machine. Computer Assisted Language Learning,32(4),
398–417.
Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest
in a language course: An experimental comparison of chatbot and human task partners. Computers
in Human Behavior,75, 461–468.
Fryer, L. K., Coniam, D., Carpenter, R., & Lăpușneanu, D. (2020). Bots for language learning now: Current
and future directions. Language Learning & Technology,24(2), 8–22.
Gao, J., & Ma, S. (2019). The effect of two forms of computer-automated metalinguistic corrective
feedback. Language Learning & Technology,23(2), 65–83.
Gao, Y., Tao, X., Wang, H., Gang, Z., & Lian, H. (2021). Artificial intelligence in language education:
Introduction of Readizy. Journal of Ambient Intelligence and Humanized Computing. Advance online
publication. https://doi.org/10.1007/s12652-021-03050-x
Goda, Y., Yamada, M., Matsukawa, H., Hata, K., & Yasunami, S. (2014). Conversation with a chatbot before
an online EFL group discussion and the effects on critical thinking. Journal of Information Systems
Education,13(1), 1–7.
Godwin-Jones, R. (2021). Big data and language learning: Opportunities and challenges. Language Learning & Technology,25(1), 4–19.
Godwin-Jones, R. (2022). Partnering with AI: Intelligent writing assistance and instructed language
learning. Language Learning & Technology,26(2), 5–24.
Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S. (2014). Technologies for foreign
language learning: A review of technology types and their effectiveness. Computer Assisted Language
Learning,27(1), 70–105.
Hadley, G., & Charles, M. (2017). Enhancing extensive reading with data-driven learning. Language Learning & Technology,21(3), 131–152.
Han, T., & Sari, E. (2022). An investigation on the use of automated feedback in Turkish EFL students’
writing classes. Computer Assisted Language Learning. Advance online publication. https://doi.org/10.
1080/09588221.2022.2067179
Harvey-Scholes, C. (2018). Computer-assisted detection of 90 % of EFL student errors. Computer Assisted
Language Learning,31(1–2), 144–156.
Heift, T. (2010). Developing an intelligent language tutor. CALICO Journal,27(3), 443–459.
Huang, J.-X., Kwon, O.-W., Lee, K.-S., & Kim, Y.-K. (2018). Improve the chatbot performance for the DB-CALL
system using a hybrid method and a domain corpus. In P. Taalas, J. Jalkanen, L. Bradley &
S. Thouësny (Eds.), Future-proof CALL: Language learning as exploration and encounters –short papers
from EUROCALL 2018 (pp. 100–105). Research-publishing.net.
Huang, J.-X., Lee, K.-S., Kwon, O.-W., & Kim, Y.-K. (2017). A chatbot for a dialogue-based second language
learning system. In K. Borthwick, L. Bradley & S. Thouësny (Eds.), CALL in a climate of change: Adapting
to turbulent global conditions –short papers from EUROCALL 2017 (pp. 151–156). Research-
publishing.net.
Illingworth, S. (2023). ChatGPT: Students could use AI to cheat, but it’s a chance to rethink assessment
altogether. The Conversation. https://theconversation.com/chatgpt-students-could-use-ai-to-cheat-
but-its-a-chance-to-rethink-assessment-altogether-198019
Jiang, R. (2022). How does artificial intelligence empower EFL teaching and learning nowadays? A review
on artificial intelligence in the EFL context. Frontiers in Psychology,13, 1049401.
Jiang, L., & Yu, S. (2022). Appropriating automated feedback in L2 writing: Experiences of Chinese EFL
student writers. Computer Assisted Language Learning,35(7), 1329–1353.
Kamrood, A. M., Davoudi, M., Ghaniabadi, S., & Amirian, S. M. R. (2021). Diagnosing L2 learners’
development through online computerized dynamic assessment. Computer Assisted Language
Learning,34(7), 868–897.
Kessler, G. (2021). Current realities and future challenges for CALL teacher preparation. CALICO Journal,
38(3), i–xx.
Kim, H.-S., Kim, N. Y., & Cha, Y. (2021). Is it beneficial to use AI chatbots to improve learners’speaking
performance? The Journal of Asia TEFL,18(1), 161–178.
Kim, J., Merrill, K., Xu, K., & Sellnow, D. D. (2020). My teacher is a machine: Understanding students’
perceptions of AI teaching assistants in online education. International Journal of Human-Computer
Interaction,36, 1902–1922.
Koltovskaia, S. (2023). Postsecondary L2 writing teachers’use and perceptions of Grammarly as a
complement to their feedback. ReCALL,35(3), 290–304.
Kukulska-Hulme, A., Beirne, E., Conole, G., Costello, E., Coughlan, T., Ferguson, R., FitzGerald, E., Gaved, M.,
Herodotou, C., Holmes, W., Mac Lochlainn, C., Nic Giollamhichil, M., Rienties, B., Sargent, J.,
Scanlon, E., Sharples, M., & Whitelock, D. (2020). Innovating pedagogy 2020: Open University innovation
report 8. The Open University. https://www.open.ac.uk/blogs/innovating/
Lee, C. (2020). A study of adolescent English learners’cognitive engagement in writing while using an
automated content feedback system. Computer Assisted Language Learning,33(1–2), 26–57.
Lee, S.-M. (2023). The effectiveness of machine translation in foreign language education: A systematic
review and meta-analysis. Computer Assisted Language Learning,36(1–2), 103–125.
Lee, C., Cheung, W. K. W., Wong, K. C. K., & Lee, F. S. L. (2013). Immediate web-based essay critiquing
system feedback and teacher follow-up feedback on young second language learners’writings: An
experimental study in a Hong Kong secondary school. Computer Assisted Language Learning,26(1),
39–60.
Li, Z., Feng, H.-H., & Saricaoglu, A. (2017). The short-term and long-term effects of AWE feedback on ESL
students’development of grammatical accuracy. CALICO Journal,34(3), 355–375.
Liang, J.-C., Hwang, G.-J., Chen, M.-R. A., & Darmawansah, D. (2021). Roles and research foci of artificial
intelligence in language education: An integrated bibliographic analysis and systematic review
approach. Interactive Learning Environments. Advance online publication. https://doi.org/10.1080/
10494820.2021.1958348
Link, S., Dursun, A., Karakaya, K., & Hegelheimer, V. (2014). Towards best ESL practices for implementing
automated writing evaluation. CALICO Journal,31(3), 323–344.
Link, S., Mehrzad, M., & Rahimi, M. (2022). Impact of automated writing evaluation on teacher feedback,
student revision, and writing improvement. Computer Assisted Language Learning,35(4), 605–634.
Liu, D., Bridgeman, A., & Miller, B. (2023). As uni goes back, here’s how teachers and students can use
ChatGPT to save time and improve learning. The Conversation. https://theconversation.com/as-uni-
goes-back-heres-how-teachers-and-students-can-use-chatgpt-to-save-time-and-improve-learning-
199884
Liu, S., & Kunnan, A. J. (2016). Investigating the application of automated writing evaluation to Chinese
undergraduate English majors: A case study of WriteToLearn. CALICO Journal,33(1), 71–91.
Liu, S., & Yu, G. (2022). L2 learners’engagement with automated feedback: An eye-tracking study.
Language Learning & Technology,26(2), 78–105.
Loble, L. (2023). The rise of ChatGPT shows why we need a clearer approach to technology in schools. The
Conversation. https://theconversation.com/the-rise-of-chatgpt-shows-why-we-need-a-clearer-
approach-to-technology-in-schools-199596
McCrocklin, S. M. (2016). Pronunciation learner autonomy: The potential of automatic speech recognition.
System,57,25–42.
McCrocklin, S. (2019). Learners’feedback regarding ASR-based dictation practice for pronunciation
learning. CALICO Journal,36(2), 119–137.
Mindzak, M., & Eaton, S. E. (2021). Artificial intelligence is getting better at writing, and universities should
worry about plagiarism. The Conversation. https://theconversation.com/artificial-intelligence-is-
getting-better-at-writing-and-universities-should-worry-about-plagiarism-160481
Mohamed, H., & Lamia, M. (2018). Implementing flipped classroom that used an intelligent tutoring
system into learning process. Computers & Education,124,62–76.
Monteiro, K., & Kim, Y. (2020). The effect of input characteristics and individual differences on L2
comprehension of authentic and modified listening tasks. System,94, 102336.
Morton, H., Gunson, N., & Jack, M. (2012). Interactive language learning through speech-enabled virtual
scenarios. Advances in Human-Computer Interaction,2012, 389523.
Moussalli, S., & Cardoso, W. (2020). Intelligent personal assistants: Can they understand and be
understood by accented L2 learners? Computer Assisted Language Learning,33(8), 865–890.
Naffi, N., Davidson, A.-L., Boch, A., Nandaba, B. K., & Rougui, M. (2022). AI-powered chatbots, designed
ethically, can support high-quality university teaching. The Conversation. https://theconversation.
com/ai-powered-chatbots-designed-ethically-can-support-high-quality-university-teaching-172719
Oxford Reference. (2021). Artificial intelligence. https://www.oxfordreference.com/view/10.1093/oi/
authority.20110803095426960
Pérez-Paredes, P. (2022). A systematic review of the uses and spread of corpora and data-driven learning
in CALL research during 2011–2015. Computer Assisted Language Learning,35(1–2), 36–61.
Pérez-Paredes, P., Guillamón, C. O., & Jiménez, P. A. (2018). Language teachers’perceptions on the use of
OER language processing technologies in MALL. Computer Assisted Language Learning,31(5–6),
522–545.
Pérez-Paredes, P., Guillamón, C. O., Vyver, J. V., Meurice, A., Jiménez, P. A., Conole, G., & Hernándezd, P. S.
(2019). Mobile data-driven language learning: Affordances and learners’perception. System,84,145–159.
Pikhart, M. (2020). Intelligent information processing for language education: The use of artificial
intelligence in language learning apps. Procedia Computer Science,176, 1412–1419.
Pokrivcakova, S. (2019). Preparing teachers for the application of AI-powered technologies in foreign
language education. Journal of Language and Cultural Education,7(3), 135–153.
Ranalli, J. (2018). Automated written corrective feedback: How well can students make use of it? Computer
Assisted Language Learning,31(7), 653–674.
Saricaoglu, A. (2019). The impact of automated feedback on L2 learners’written causal explanations.
ReCALL,31(2), 189–203.
Sharadgah, T. A., & Sa’di, R. A. (2022). A systematic review of research on the use of artificial intelligence in
English language teaching and learning (2015–2021): What are the current effects? Journal of
Information Technology Education: Research,21, 337–377.
Shi, Z., Liu, F., Lai, C., & Jin, T. (2022). Enhancing the use of evidence in argumentative writing through
collaborative processing of content-based automated writing evaluation feedback. Language Learning & Technology,26(2), 106–128.
Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the
Facebook Messenger. Computers & Education,151, 103862.
Spring, R., & Tabuchi, R. (2022). The role of ASR training in EFL pronunciation improvement: An in-depth
look at the impact of treatment length and guided practice on specific pronunciation points. CALL-EJ,
23(3), 163–185. http://www.callej.org/journal/23-3/Spring-Tabuchi2022.pdf
Srinivasan, V. (2022). AI & learning: A preferred future. Computers and Education: Artificial Intelligence,3,
100062.
Tai, T. Y., & Chen, H. H. J. (2023). The impact of Google Assistant on adolescent EFL learners’willingness to
communicate. Interactive Learning Environments,31(3), 1485–1502.
Tianyu, Q., & Jie, Z. (2018). Computerized dynamic assessment and second language learning:
Programmed mediation to promote future development. Journal of Cognitive Education and
Psychology,17(2), 198–213.
Tono, Y., Satake, Y., & Miura, A. (2014). The effects of using corpora on revision tasks in L2 writing with
coded error feedback. ReCALL,26(2), 147–162.
van Doremalen, J., Boves, L., Colpaert, J., Cucchiarini, C., & Strik, H. (2016). Evaluating automatic speech
recognition-based language learning systems: A case study. Computer Assisted Language Learning,
29(4), 833–851.
Vincent, J. (2022). AI-generated answers temporarily banned on coding Q&A site stack overflow. Verge.
https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-
banned-stack-overflow-llms-dangers
Walker, N. R., Cedergren, H., Trofimovich, P., & Gatbonton, E. (2011). Automatic speech recognition for
CALL: A task-specific application for training nurses. Canadian Modern Language Review,67(4),
459–479.
Wambsganss, T., Janson, A., & Leimeister, J. M. (2022). Enhancing argumentative writing with automated
feedback and social comparison nudging. Computers & Education,191, 104644.
Wang, J., Hwang, G.-W., & Chang, C.-Y. (2021). Directions of the 100 most cited chatbot-related human
behavior research: A review of academic publications. Computers and Education: Artificial Intelligence,
2, 100023.
Wang, X., Pang, H., Wallace, M. P., Wang, Q., & Chen, W. (2022). Learners’perceived AI presences in
AI-supported language learning: A study of AI as a humanized agent from community of inquiry.
Computer Assisted Language Learning. Advance publication. https://doi.org/10.1080/09588221.2022.
2056203
Wang, Y.-J., Shang, H.-F., & Briody, P. (2013). Exploring the impact of using automated writing evaluation in
English as a foreign language university students’writing. Computer Assisted Language Learning,
26(3), 234–257.
Weizenbaum, J. (1966). ELIZA –A computer program for the study of natural language communication
between man and machine. Communications of the ACM,9(1), 36–45.
Wilken, J. L. (2018). Perceptions of L1 glossed feedback in automated writing evaluation: A case study.
CALICO Journal,35(1), 30–48.
Wu, Y.-j. (2021). Discovering collocations via data-driven learning in L2 writing. Language Learning & Technology,25(2), 192–214.
Xu, Z., Wijekumar, K., Ramirez, G., Hu, X., & Irey, R. (2019). The effectiveness of intelligent tutoring systems
on K-12 students’reading comprehension: A meta-analysis. British Journal of Educational Technology,
50(6), 3119–3137.
Yang, Y., & Qian, D. D. (2020). Promoting L2 English learners’reading proficiency through computerized
dynamic assessment. Computer Assisted Language Learning,33(5–6), 628–652.
Zhai, X. (2022). ChatGPT user experience: Implications for education. Social science Research Network
(SSRN). https://doi.org/10.2139/ssrn.4312418
Zhai, N., & Ma, X. (2022). Automated writing evaluation (AWE) feedback: A systematic investigation of
college students’acceptance. Computer Assisted Language Learning,35(9), 2817–2842.
Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions.
Computers and Education: Artificial Intelligence,2, 100025.
Zhang, J., & Lu, X. (2019). Measuring and supporting second language development using computerized
dynamic assessment. Language and Sociocultural Theory,6(1), 92–115.
Bionotes
Jeong-Bae Son
University of Southern Queensland, Springfield Central, Australia
jeong-bae.son@usq.edu.au
https://orcid.org/0000-0001-5346-5483
Jeong-Bae Son, PhD, teaches Applied Linguistics and TESOL courses and supervises doctoral students at
the University of Southern Queensland in Australia. His areas of specialisation are computer-assisted
language learning and language teacher education. He is the President of the Asia-Pacific Association for
Computer-Assisted Language Learning (APACALL) and Editor of the APACALL Book Series. Details of his
research can be found on his website at <https://drjbson.com/>.
Natasha Kathleen Ružić
Institute for Migration and Ethnic Studies, Zagreb, Croatia
natasha.ruzic@imin.hr
https://orcid.org/0000-0002-6706-5429
Natasha Kathleen Ružić, PhD, works at the Institute for Migration and Ethnic Studies in Zagreb, Croatia, as
a Senior Expert Advisor, and lectures in computer-mediated communication at the FHS in Zagreb. Her
research interests include educational outcomes for migrants and computer-assisted language learning.
Andrew Philpott
Kwansei Gakuin University, Nishinomiya, Japan
andrewphilpott83@gmail.com
https://orcid.org/0000-0002-8056-4775
Andrew Philpott, PhD, is an Applied Linguistics researcher and EFL instructor based at Kwansei Gakuin
University in Japan. His areas of research interest include L2 motivation, gamification, and computer-
assisted language learning.