Reporting the potential risk of using AI in higher Education: Subjective
perspectives of educators
Marcel Pikhart a,*, Liqaa Habeb Al-Obaydi b
a Centre for Basic and Applied Research, Faculty of Informatics and Management, University of Hradec Kralove, Czech Republic
b English Department, College of Education for Human Sciences, University of Diyala, Diyala, Iraq
ARTICLE INFO
Keywords:
Risks of AI
AI in higher education
Articial intelligence
Higher education
Teachers and AI
ABSTRACT
This study explores the potential risks associated with the use of AI in higher education, based on the views of university teachers from the Czech Republic and Iraq. A total of 40 respondents, including 28 females and 12 males aged between 32 and 54, participated in the study. All participants were university teachers specializing in EFL, psychology, ICT, and foreign languages, and they reported using the internet and AI daily or several times a week for their professional activities. The qualitative research, grounded in a phenomenological approach, involved guided interviews that were recorded, transcribed, and analyzed using LIWC-22 software to identify common themes and sentiments. The findings reveal significant concerns about privacy risks, academic integrity, and the validity of AI-generated data. Respondents expressed fears over data misuse, unauthorized access, and the potential for AI to facilitate plagiarism and undermine critical thinking. While AI is seen as beneficial for personalized learning and language training, there are apprehensions about its impact on the role of teachers and the potential for job displacement. The study also highlights the limitations of AI in replicating human interaction and addressing students' emotional and behavioral engagement. Overall, the sentiment among respondents is predominantly negative, with calls for ethical guidelines, critical evaluation, and the preservation of human elements in education. The research underscores the need for further studies involving larger and more diverse samples, including students, to comprehensively understand the implications of AI in education.
1. Introduction
There is still some doubt regarding the long-term effects of artificial intelligence (AI) in education because it is a relatively new technology. Using AI presents a number of difficulties, including the necessity for cautious implementation, pedagogical ramifications, and ethical issues, in addition to possible harm (Sumakul et al., 2022). While "risk" can be used to describe the difficulties of implementing AI in EFL environments, it is neither the only nor often the most appropriate word to employ. Because it can make people feel anxious or afraid, the researchers chose the word "risk" to draw attention to the potential consequences of these threats (Nadjiba & Belmekki, 2024). Challenges, considerations, implications, and pitfalls are other terms and expressions that have been employed in place of risk in previous studies (Guo & Wang, 2024; Pan & Wang, 2025; Wang et al., 2025).
There are a number of possible challenges when integrating AI into the teaching of English as a foreign language (EFL). One of the main worries is the possibility of students becoming unduly dependent on AI-powered resources, such as essay generators and translation applications (Al-Obaydi, Pikhart, & Tawafak, 2023). This over-reliance may hamper the development of important linguistic abilities like critical thinking, creativity, and autonomous problem-solving. AI makes information more accessible, which in turn raises worries about plagiarism and impairs critical thinking skills (Walczak & Cellary, 2023). For example, students may get used to AI translating text instantly, which could discourage them from exerting the mental effort necessary to understand meaning on their own (Dwivedi et al., 2023). Social and emotional challenges in learning are also reported by Bin-Hady et al. (2024).
Moreover, there are serious ethical issues with the application of AI in EFL. AI systems are educated on enormous datasets, which can contain biases that reinforce inequality or preconceptions in the classroom (Michel-Villarreal et al., 2023). For instance, a writing tool driven by AI may produce content that perpetuates negative cultural biases (Kohnke et al., 2023) or gender stereotypes (Javier & Moorhouse, 2023).
Concerns regarding cybersecurity challenges, privacy, and the exploitation of personal information are also raised by the fact that using AI tools frequently entails the collecting and processing of student data (Gilbert & Gilbert, 2024).

* Corresponding author.
E-mail addresses: marcel.pikhart@uhk.cz (M. Pikhart), liqaa.en.hum@uodiyala.edu.iq (L.H. Al-Obaydi).
Contents lists available at ScienceDirect
Computers in Human Behavior Reports
journal homepage: www.sciencedirect.com/journal/computers-in-human-behavior-reports
https://doi.org/10.1016/j.chbr.2025.100693
Received 24 February 2025; Received in revised form 28 April 2025; Accepted 9 May 2025
Computers in Human Behavior Reports 18 (2025) 100693
2451-9588/© 2025 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Likewise, concerns regarding the future of teaching are raised by AI's growing presence in education. AI has the potential to greatly change the character of education or even result in job displacement, despite the fact that it can provide teachers with invaluable help (Kasneci et al., 2023). Even though AI applications can be highly beneficial in preparing students, teachers are faced with a challenging task in modifying the current teaching techniques and assessments to improve students' cognitive, creative, and critical thinking abilities (Klimova et al., 2024).
It is vital to remember that these are possible concerns, and how AI is employed and deployed will determine its actual influence on EFL instruction. By carefully weighing these issues and creating appropriate AI-powered resources, educators may maximize AI's potential advantages while reducing its risks. The primary focus of this study will be on the use of AI in educational settings, specifically examining the possible risks of integrating AI into English as a Foreign Language (EFL) settings. By focusing on university teachers' views, the study narrows the scope and collects future visions from teachers who are directly involved in teaching EFL and are familiar with the difficulties and possibilities that artificial intelligence has to offer, making the topic understandable and interesting for the intended audience. In addition to identifying the risk-related concerns, the study will offer useful data that can be applied to reduce those risks and guide the ethical incorporation of AI into university-level EFL instruction. The word "risk" draws attention to possible difficulties and unfavorable outcomes, which is why educators, researchers, and legislators should focus on them.
The rationale behind this research stems from the highly intuitive nature of AI technology, which often requires minimal professional training for users (Ali, 2023; Klimova et al., 2024; Kohnke et al., 2023; Tawafak, Al-Obaydi, & Pikhart, 2023). Despite its ease of use, there is a significant gap in understanding how to utilize AI efficiently in educational contexts. This lack of comprehensive training and understanding poses potential risks, as educators and students may not be fully aware of the best practices for integrating AI into their teaching and learning processes. Consequently, the potential for misuse or over-reliance on AI tools becomes a pressing concern. The contribution of this study could be to add a wider perspective on the topic of the use of AI as it is subjectively seen by the users of the technology, the educators themselves. The strength of the study could also lie in its novelty; there are no similar studies available yet that deal with exactly the same topic.
It is critical to address teachers' subjective perceptions of risk while utilizing AI, since these perceptions have a direct impact on their readiness to accept and incorporate AI tools into the classroom. Teachers' resistance to implementing AI could impede the use of technology in the classroom if they have ethical difficulties, data privacy concerns, or fear losing their jobs (Alwaqdani, 2024). Moreover, the current deployment of AI in educational settings can be likened to conducting an experiment with students and teachers. Given the novelty of AI technology, there is insufficient data available to thoroughly evaluate its potential risks and benefits. This research aims to fill this gap by providing a detailed list of potential risks and pitfalls associated with the use of AI in higher education. By doing so, it seeks to inform educators, researchers, and policymakers about the possible challenges and guide the ethical and effective incorporation of AI into university-level EFL instruction.
In contrast to research that looks at broad worries about AI's effects on education, this viewpoint emphasizes how specific educators' worries, like ethical concerns or facing moral quandaries, can have a direct impact on their acceptance and efficient use of AI technologies. By focusing on subjective risk, this method offers practical ways to empower educators and foster trust while advancing a more comprehensive understanding of the psychological and social obstacles to AI adoption. This realization helps to ensure that AI integration is inclusive and sustainable by bridging the gap between theoretical discussions and real-world implementation. Thus, the new results highlight the significance of addressing educators' subjective views of risk, which are frequently overlooked in the body of research that focuses largely on the pedagogical, ethical, or technical elements of integrating AI. The findings of this study are intended to serve as a catalyst for further empirical studies and long-term research. By highlighting the potential risks and offering practical recommendations, this paper aims to encourage a more cautious and informed approach to AI integration in education. It underscores the importance of ongoing research and data collection to better understand the implications of AI and to develop strategies that maximize its benefits while minimizing its drawbacks.
1.1. Research questions
To address these issues, the following research questions were formulated.
1. What are the primary risks university teachers subjectively perceive
and identify regarding the integration of AI in education?
2. How do university teachers subjectively perceive the impact of AI on
their roles, the privacy and validity of data, equality, engagement,
student assessments, critical thinking and creativity, and
cybersecurity?
1.2. Literature review: current trends of research
In order to provide a more thorough understanding of AI's risks, it is important to focus on the trends of research that dealt with risk in relation to AI use. Regarding the first trend, the majority of articles are found to address both the advantages and the difficulties and/or challenges of this technology (AlAfnan et al., 2023; Alwaqdani, 2024; Bae et al., 2024; Barrot, 2023; Chan & Hu, 2023; Derakhshan & Ghiasvand, 2024; Kasneci et al., 2023; Kohnke et al., 2023; Michel-Villarreal et al., 2023; Teng, 2024). By highlighting these two essential sides, a sophisticated discussion is facilitated, showcasing AI's potential but also recognizing its challenges and limitations. This tactic encourages responsible performance and thoughtful decision-making (Barrot, 2023). While talking about the challenges guarantees that possible issues are dealt with in advance, emphasizing the benefits of AI technologies supports their acceptance (Derakhshan & Ghiasvand, 2024). By minimizing risks and maximizing advantages, this well-rounded strategy promotes a more deliberate and moral application of AI across a range of specializations (Gokcearslan et al., 2024). The adoption of AI technologies can also be justified by highlighting the advantages, while talking about the difficulties guarantees that any possible problems are dealt with early on.
Although this method presents a balanced perspective, pointing out AI's advantages and difficulties could have disadvantages. It can result in equivocation, overgeneralization, and the omission of subtle nuances, underplaying important risks or benefits (Silva & Janes, 2023). Giving readers too much information might lead to cognitive overload. By introducing bias into the selection of emphasized features, the urgency to address major hazards may be diminished, and they may not receive the essential attention. Research like Sumakul et al. (2022) demonstrates how crucial it is to carefully balance these factors to prevent important ideas from being overlooked or minimized. According to these studies, there are a number of advantages of AI for language learners, such as individualized learning experiences catered to each learner's needs (Ali et al., 2023), instant feedback to aid in rapid improvement (Banihashem et al., 2024), and interactive resources such as chatbots that reduce the anxiety associated with making errors (Kohnke et al., 2023). AI also improves accessibility, accommodating various learning preferences and special education needs (Pikhart et al., 2024). AI is a useful tool for improving the language learning process because it can help in summarizing, revising, and supporting different references (Barrot, 2023). In contrast to these benefits, there are disadvantages to AI; these include less human engagement, the
possibility of inaccurate feedback, worries about data privacy, and an inability to provide emotional and social reactions (Bin-Hady et al., 2024).
The second trend of research focuses on what is called SWOT, which stands for strengths, weaknesses, opportunities, and threats. The SWOT analysis is an essential technique for evaluating AI risks and creating mitigation plans since it methodically identifies these dangers (Gilbert & Gilbert, 2024). It promotes the ethical and efficient application of AI technology in a variety of sectors, including education, by empowering researchers and practitioners to proactively address external obstacles. SWOT analysis of AI entails assessing the technology's advantages, disadvantages, opportunities, and threats. The internal benefits of AI, such as automation and efficiency, as well as its drawbacks, like high costs and ethical issues, are better understood by stakeholders thanks to this analytical approach. It also identifies external opportunities for expansion and creativity, as well as dangers that can impede AI's advancement, such as cybersecurity problems and legal concerns (Gilbert & Gilbert, 2024). The "Threats" facet of this paradigm places special emphasis on identifying outside variables that can obstruct the ethical or successful use of AI technologies. Given that AI systems usually need to gather and handle enormous volumes of personal data, data privacy and security vulnerabilities rank among the most serious challenges. Significant ethical and legal issues could arise from these systems' susceptibility to data breaches, illegal access, and misuse of private information (Chollet, 2019). Learning platforms driven by AI, for instance, can unintentionally expose student data to online dangers, jeopardizing security and confidentiality. In the SWOT paradigm, technology vulnerabilities are identified as external threats. Cyber-attacks like malware, phishing, and prompt injection can affect AI systems, jeopardizing their dependability and functionality (Barredo Arrieta et al., 2020). These dangers highlight the importance of including strong security measures and continuing research in order to protect against technical vulnerabilities. Informed decision-making and strategic planning are made possible by doing SWOT analysis on AI, which maximizes its potential while reducing its hazards (Giray et al., 2024).
The third trend of research focuses only on the risks of using AI in educational contexts, and such studies are very few. Although the amount of research on the advantages and difficulties of AI in education is increasing, it is less frequent to find studies that address only the risks. Nonetheless, some studies do address these challenges. For instance, the Velvetech article (2024) focuses on concerns including security, data privacy, and the possibility of inaccurate assessments. The study conducted by Denecke et al. (2023) also addresses the bias and data protection concerns associated with AI-based technologies in higher education.

In order to preserve a secure and productive learning environment, it is essential to comprehend and mitigate the risks related to AI in educational settings. Teachers can preserve the quality of education, guarantee fair assessment, and safeguard students' private information by taking proactive measures to address these concerns. By minimizing potential negative effects and enhancing positive ones, this well-rounded strategy enables the ethical and responsible integration of AI in education.
2. Methodology
2.1. Research design
The study involved university teachers from the Czech Republic and Iraq, selected based on their experience with AI in education and their willingness to participate. This diverse sample aimed to capture a wide range of perspectives and cultural contexts. Recruiting university instructors from Iraq and the Czech Republic offers important comparative information on how AI risk is perceived in various educational and sociopolitical environments. Because these factors greatly influence attitudes, adoption patterns, and ethical issues around AI in education, it is imperative to investigate how AI risk is perceived across such contexts. Regional differences in technology infrastructure, cultural attitudes toward AI, and legal frameworks affect how risks like data privacy, ethical issues, and employment displacement are seen and handled. Participants were recruited through university networks and professional associations, ensuring a balanced representation of disciplines and teaching backgrounds.
2.2. Participants
The interviews were conducted with 70 respondents, including 35 individuals from Iraq and 35 from the Czech Republic. The group consisted of 45 females and 25 males, aged between 32 and 54 (see Table 1). All participants were university teachers specializing in EFL, psychology, ICT, and foreign languages. They reported using the internet and AI daily or at least several times a week for their professional activities. Each respondent expressed interest in the interview, and the interviews lasted between 30 and 45 min. The interviews were recorded, transcribed, and subsequently analyzed. The LIWC-22 software was utilized for data and sentiment analysis.
The participants were carefully selected to ensure a diverse representation of gender, age, and professional background. The interviews were designed to capture in-depth insights into their experiences and perspectives on the use of AI in education. Each session was meticulously documented to maintain accuracy and reliability in the analysis. The use of LIWC-22 software allowed for a comprehensive examination of the data, providing valuable insights into the respondents' sentiments and attitudes towards AI. This rigorous approach ensured that the findings were robust and reflective of the participants' genuine views.
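LIWC-22 itself is proprietary software, but its core mechanic, counting words that fall into psychologically meaningful dictionary categories and reporting each category as a share of total words, can be sketched in a few lines. The mini-dictionaries and the sample transcript below are invented for illustration only and are not taken from the study or from the real LIWC-22 lexicon:

```python
import re
from collections import Counter

# Illustrative mini-dictionaries; the real LIWC-22 categories
# contain thousands of validated entries each.
CATEGORIES = {
    "negative_emotion": {"fear", "concerned", "worried", "anxiety", "risk"},
    "positive_emotion": {"beneficial", "helpful", "opportunity", "useful"},
    "cognitive": {"think", "understand", "evaluate", "doubt", "unknown"},
}

def liwc_style_scores(transcript: str) -> dict:
    """Return each category's hit rate as a percentage of total words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter()
    for word in words:
        for category, vocab in CATEGORIES.items():
            if word in vocab:
                counts[category] += 1
    total = len(words)
    return {cat: round(100 * counts[cat] / total, 1) for cat in CATEGORIES}

sample = ("I am concerned about privacy risk, and I doubt "
          "students understand how their data is used.")
print(liwc_style_scores(sample))
# → {'negative_emotion': 12.5, 'positive_emotion': 0.0, 'cognitive': 12.5}
```

A transcript dominated by negative-emotion hits, as in this sketch, corresponds to the "predominantly negative" sentiment the authors report from the LIWC-22 output.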
2.3. Instrument
A qualitative method of gathering participant data is based on a phenomenological approach (Sohn et al., 2017). It focuses on lived experiences: data are collected via semi-structured interviews to allow for more detailed and in-depth responses exploring the intentionality of teachers' experiences with AI, and the data are then analyzed to identify common themes and patterns, ultimately providing a comprehensive understanding of how teachers perceive and interact with AI in their educational practices. Since it allows researchers to engage in certain tasks that can clarify and explain complex phenomena, such as various aspects of the human social experience, it provides a theoretical instrument for educational research (Alhazmi & Kaufmann, 2022).
Each participant provided informed consent, and their confidentiality was maintained throughout the study. Data were collected via guided interviews, recorded, and subsequently analyzed to identify common themes and insights. The research was approved by the Ethics Committee of the BLINDED, no. 3/2024.
The interview questions are clustered around the topics numbered 1
to 9. The teachers were interviewed and their answers were recorded.
Following are the interview questions.
Table 1
Sample profile (Iraq and Czech Republic).

Respondents   Iraq   Percentage   Czechia   Percentage
Gender
Male          15     43 %         10        29 %
Female        20     57 %         25        71 %
Total         35     100 %        35        100 %
2.4. Interview questions

1. How can the use of AI in education create and increase a number of risks to the privacy of students and teachers?
2. How do you think AI might impact the role of teachers in the future, and how could the use of AI lead to job displacement of teachers?
3. How can the use of AI raise concerns about academic integrity and dishonesty in education?
4. How much is the data gained from AI valid and reliable?
5. How useful is replicating human interaction with AI interaction for the students' oral language abilities?
6. How much can AI be used as an adequate tool for language assessment?
7. How much is AI-provided information biased, erroneous, and untrustworthy for you?
8. Do you think that using AI in education may regard or disregard students' emotional, cognitive, and behavioral engagements?
9. What are possible cybersecurity risks for students and teachers when using AI?
3. Results
The results of the study are qualitative in nature, as a quantitative
study would need more respondents. Therefore, no quantitative data
were generated as the research sample would not be representative of a
quantitative study. The following ndings could be an impetus for
further studies, this time quantitative, which will be needed due to the
impact of AI on education, teaching and learning. Moreover, the results
are clustered into one group despite the cultural and geographical dis-
crepancies between the two groups of respondents. However, again, any
distinction in results between the two groups would not be statistically
relevant and hard to justify.
The rst question dealt with data protection and privacy as the re-
spondents were asked how can the use of AI in education create and
increase a number of risks to the privacy of students and teachers. The
responses to the interview question reveal a signicant concern about
privacy risks associated with the use of AI in education. Many in-
dividuals express fear and anxiety over the potential misuse, unautho-
rized access, and data breaches that AI could facilitate. A common theme
is the excessive data collection by AI systems and the ease with which
this information can be shared, raising serious privacy issues. There is
also a notable lack of transparency regarding where data is collected and
who has access to it, which exacerbates these concerns. Ethical issues are
another prominent topic, with several responses emphasizing the need
for ethical design, transparent policies, and strict adherence to regula-
tions to mitigate privacy risks. However, there is a clear indication that
many people lack detailed knowledge about these risks, highlighting the
need for more research and awareness on the subject. The level of
concern varies among individuals; while some are highly worried about
the implications of AI on privacy, others are more indifferent, believing
that they do not provide personal data to AI systems and thus are not at
risk. One of the participants comment that the knowledge of AIs
capabilities and previous occurrences have made me extremely
concerned about possible privacy violations, data misuse, and
illegal data sharing. Additionally, the potential for AI to enable
hacking and intrusive surveillance is a recurring theme, with concerns about AI reinforcing biases and exposing data to third parties. Overall, the sentiment is predominantly negative, with many expressing serious ethical worries and fear about AI's impact on privacy. A few responses are neutral, indicating a lack of detailed knowledge or indifference towards the risks, while very few show a positive outlook, focusing on careful use and the belief that not providing personal data mitigates risks. Table 2 illustrates how participants' language shows differing levels of awareness and worry regarding the privacy risks related to AI in education. It emphasizes how important focused awareness and ethical standards are in order to properly handle these problems.
The second set of questions was focused on the potential changes in the education system (How do you think AI might impact the role of teachers in the future, and how could the use of AI lead to job displacement of teachers?). The responses to the question about AI's impact on the role of teachers in the future reveal a mix of concerns and hopes. Many respondents believe that while AI might change the role of teachers, it won't completely replace them. There is a consensus that AI will lead to a different kind of teaching environment, where teachers might become more like facilitators or operators of technology rather than traditional educators. One of the participants expresses his fear, saying: "I'm quite concerned about how AI will affect our future. We could only become facilitators of technology-enhanced learning experiences rather than the only suppliers of knowledge." This shift is seen as both an opportunity and a threat, with some fearing that the human element of teaching could be lost, which is crucial for healthy human development and effective learning. Several responses highlight the potential for AI to enhance the role of teachers by supporting personalized and interactive learning. AI could make teaching easier by automating administrative tasks and providing tailored educational experiences for students. However, there is also a significant concern about job displacement, with some respondents worried that AI could substitute teachers in certain areas, leading to job losses. This fear is compounded by the uncertainty about how AI will develop and be implemented in education, and specifically by the lack of direct interaction that will prevail. The sentiment among the responses is mixed. On one
hand, there is optimism about the benefits AI can bring to education, such as improved engagement and personalized learning. On the other hand, there is apprehension about the potential negative impacts, including job displacement and the reduction of teachers to mere operators of technology. Some respondents express a neutral stance, acknowledging the changes AI will bring but uncertain about whether these changes will be positive or negative. Overall, the responses indicate a recognition that AI will significantly impact the role of teachers, but there is a strong belief that teachers will still be needed for their unique ability to inspire and connect with students on a personal level. The future of teaching in the age of AI is seen as a balance between leveraging technology for educational benefits and maintaining the essential human elements of teaching. Table 3 summarizes the varied viewpoints regarding AI's influence on the role of educators. The LIWC-22 analysis exhibits a mixture of optimism and trepidation, illustrating the opportunities and difficulties AI poses for educators.
Table 2
The analysis of the responses to the first question about data protection and privacy, summarized into four main LIWC-22 results.

LIWC-22 Category       | Findings                                                                                                   | Linguistic Indicators                                   | Sentiment/Focus
Affective Processes    | High degrees of worry and anxiety over hacking, breaches, and misuse of data                               | Phrases like "fear", "concerned", and "violations"      | Mostly negative
Cognitive Processes    | Lack of thorough understanding of the risks associated with AI, despite requests for increased transparency | Words that convey doubt, such as "lack" and "unknown"   | Neutral to negative
Ethical Considerations | Prioritizing ethical AI design, openness, and risk-reduction measures                                      | Moral terminology such as "ethical" and "regulations"   | Worried but helpful
Perception of Risk     | Diverse answers, some very worried, and some unconcerned or hopeful                                        | A combination of neutral and emotive phrases            | Mixed feelings

The third question focused on ethical considerations, asking the respondents how the use of AI can raise concerns about academic
integrity and dishonesty in education. The responses to the question
about how AI might raise concerns about academic integrity and
dishonesty in education highlight several key issues. A signicant
number of respondents emphasize that AI already raises concerns about
academic integrity, particularly regarding plagiarism and the lack of
proper citations. Many students and teachers are aware of these prob-
lems, and there is a widespread belief that AI is being misused to create
nal theses and papers, leading to academic dishonesty. Several re-
sponses point out that AI can undermine critical thinking and creativity,
as students may rely too heavily on AI-generated content instead of
developing their own ideas. This over-reliance on AI can lead to a lack of
engagement and a decline in the quality of education as one of the
teachers mention I can observe my studentsreactions to AI. Due to
their excessive use, they have a cursory comprehension of the
material since they pay more attention to the results than the
fundamental ideas. There is also a concern that AI can facilitate
automated cheating, making it easier for students to fabricate data
sources and submit work that is not their own. The sentiment among the
responses is predominantly negative, with many expressing serious
concerns about the impact of AI on academic integrity. Some re-
spondents acknowledge that while AI can be benecial, it also poses
signicant risks if not used ethically. There is a call for clear rules and
guidelines to manage the use of AI in education, with some universities
already taking steps to address these issues by issuing codes of conduct
and instructing students to use AI ethically. Overall, the responses
indicate a recognition that AI can signicantly impact academic integ-
rity and honesty in education. There is a strong belief that more super-
vision, specic guidelines, and ethical use of AI are necessary to mitigate
these risks and ensure that the integrity of education is maintained.
Table 4 lists the most common worries regarding academic integrity in
connection with the application of AI in the classroom. With an
emphasis on ethical accountability and the necessity of formal rules to
successfully handle these issues, the LIWC-22 study reveals an overall
negative sentiment.
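LIWC-22 itself is proprietary software, but the style of analysis reported throughout this section, counting how many words of each response fall into psychologically meaningful categories, can be sketched with a few lines of dictionary-based matching. The category word lists below are illustrative stand-ins assembled from the linguistic indicators in the tables; they are not the actual LIWC-22 dictionaries.

```python
# Minimal sketch of dictionary-based category counting in the spirit of
# LIWC-style analysis. The word lists are illustrative stand-ins drawn
# from the indicators reported above, NOT the proprietary LIWC-22
# dictionaries.
import re
from collections import Counter

CATEGORIES = {
    "affective": {"fear", "concerned", "violations", "worry", "anxiety"},
    "cognitive_doubt": {"lack", "unknown", "doubt", "uncertain"},
    "ethical": {"ethical", "regulations", "integrity", "guidelines"},
}

def tag_response(text: str) -> Counter:
    """Return per-category token counts for one interview response."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts: Counter = Counter()
    for token in tokens:
        for category, words in CATEGORIES.items():
            if token in words:
                counts[category] += 1
    return counts

sample = "I am concerned about data misuse and the lack of ethical regulations."
print(tag_response(sample))
# e.g. Counter({'ethical': 2, 'affective': 1, 'cognitive_doubt': 1})
```

In the real LIWC-22 workflow these counts are normalized by response length and mapped onto validated summary variables; the sketch only shows the underlying word-matching idea.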
The fourth question focused on a crucial topic: the validity and reliability of information from a subjective perspective. The responses to the question about the validity and reliability of data gained from AI reveal a general scepticism and concern. Many respondents express a lack of trust in AI-generated data, emphasizing that it is often unreliable and requires thorough verification. There is a common belief that AI can produce incorrect or faulty information, including non-existent references and figures, which makes it academically risky to rely on without cross-checking with other sources, a point mentioned by many teachers, such as this one who confirmed: "Before using AI-generated content for academic reasons, I must make sure to double-check it using credible sources as a routine procedure." Several responses highlight that the reliability of AI data depends on the specific AI tool used, with estimates of accuracy ranging from 20 % to 80 %. However, even those who acknowledge some level of reliability stress the importance of verifying the information. Some respondents mention that AI is excellent for text generation and brainstorming ideas but not suitable for providing valid and reliable data for academic or research purposes. The sentiment among the responses is predominantly negative, with many expressing serious doubts about the validity and reliability of AI-generated data. There are concerns about AI's potential to produce biased information and the difficulty of evaluating the accuracy of the data it generates. A few responses are more neutral or slightly positive, recognizing that while AI can sometimes provide valid information, it often generates "stupidities" that need to be identified and filtered out. Overall, the responses indicate a cautious approach to using AI-generated data, with a strong emphasis on the need for verification and critical evaluation to ensure its validity and reliability. The respondents' cautious attitude toward the legitimacy and dependability of AI-generated data is reflected in Table 5. Their concerns, particularly the perceived hazards, and their emphasis on information verification for academic purposes are highlighted in the LIWC-22 analysis.
The fth topic is related to human communication and human-
computer interaction. The question of the interview was as follows:
How much is replicating the human interaction by AI interaction useful
for the students oral language abilities? The responses to the question
about the usefulness of AI in replicating human interaction for students
oral language abilities reveal a range of opinions. Many respondents
believe that AI is benecial for training students in oral language skills.
They highlight that AI can facilitate decent conversations with chatbots,
providing a valuable tool for practising language, especially for students
who have limited access to native speakers. This can be particularly
advantageous in English as a Foreign Language (EFL) contexts, where AI
can mimic real communication and offer feedback on grammar, vo-
cabulary, and pronunciation. However, there are also concerns about
the limitations and potential risks of relying on AI for human interaction.
One respondent argues that AI cannot fully replicate the complexity
of human interaction, which involves more than just exchanging
information. They emphasize the importance of human elements,
Table 3
The analysis of the replies to the second question, with an emphasis on the four primary categories based on the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Uncertainty surrounding the application of AI, worries about job displacement, and the loss of human connection | Phrases such as "concerned," "fear," and "apprehension" | Mostly negative
Cognitive Processes | Understanding how AI is changing teaching positions, particularly those of technology-enhanced learning facilitators | Words that convey analytical thought, such as "shift" and "change" | Analytical to neutral
Optimism | Hope that AI can automate administrative work and facilitate tailored, interactive learning | Positive use of words like "enhance" and "support" | From neutral to positive
Perception of Risk | Concerns about losing their jobs and turning teachers into AI system operators; conflicting opinions about the effects of AI | Terms that are both neutral and emotionally charged | Mixed feelings
Table 4
The analysis of the responses to the third question, highlighting the four main categories based on the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Serious issues with automated cheating, plagiarism, and unethical behavior | Words like "concern," "misuse," and "dishonesty" are used | Mostly negative
Cognitive Processes | Realization that an excessive dependence on AI is compromising creativity and critical thinking | Analytical phrases such as "engage," "rely," and "ideas" | Analytical to negative
Ethical Considerations | Demands for student supervision, norms of behavior, and explicit ethical guidelines | Moral terminology such as "integrity," "rules," and "guidelines" | Constructive and considerate
Perception of Risk | The idea that ethical-use norms could mitigate the risks of academic dishonesty, even though AI raises them | Emotional and neutral phrases mixed together | Mixed feelings
such as assessing the psychological state of the speaker, which AI cannot imitate. There is a fear that over-reliance on AI could lead to a loss of essential human interaction skills and negatively impact individuals and society. The sentiment among the responses is mixed. On one hand, there is optimism about the benefits of AI for language training, with many seeing it as a useful tool for improving oral language abilities. On the other hand, there is apprehension about the potential dangers of AI replacing human interaction, with some respondents expressing serious concerns about the implications for human development and social interaction. Overall, the responses indicate that while AI can be a valuable tool for enhancing students' oral language abilities, it is crucial to balance its use with genuine human interaction to ensure a well-rounded development of language skills. The varied viewpoints on how AI might improve oral language skills are captured in Table 6. The LIWC-22 analysis highlights the vital need to integrate AI with actual human interaction, revealing a balance between excitement about AI's promise and scepticism about its limitations.
A very important issue regarding the deployment of AI in education is testing and assessment, as addressed in question 6. The respondents were asked to what extent AI can be used as an adequate tool for language assessment. The responses reveal a range of opinions, with many expressing scepticism and concern. A significant number of respondents believe that AI, while quick and efficient, lacks the human touch that is crucial for effective assessment. They argue that AI does not take into consideration individual differences between students, such as their mental and emotional states, and can be very strict and demotivating because it never makes mistakes and does not tolerate them. Several responses highlight that teachers' feedback is irreplaceable, emphasizing the importance of human interaction, understanding, and the ability to provide nuanced feedback that AI cannot replicate. There is a concern that AI's impersonal nature and strictness might not be well received by students, who may prefer more human interaction in their assessments. On the other hand, some respondents acknowledge the benefits of AI in terms of efficiency and objectivity. They appreciate that AI can save teachers time by automating the assessment process, allowing them to focus on other tasks. However, even those who recognize these advantages often stress the need for a balanced approach, combining AI with traditional, human-led assessments to ensure a comprehensive evaluation of students' abilities. The sentiment among the responses is mixed, with a slight leaning towards scepticism about AI's adequacy for language assessment. While some see potential benefits, the prevailing view is that AI should not replace human assessors but rather complement them to enhance the overall assessment process. Overall, the responses indicate that while AI can be a useful tool for language assessment, it is not adequate on its own. The human element remains essential for providing personalized, empathetic, and holistic feedback that AI currently cannot offer. The cautious yet fair opinions of participants on AI's role in language assessment are shown in Table 7. The LIWC-22 analysis highlights the vital necessity of human engagement in the assessment process while revealing both hope about AI's effectiveness and caution regarding its limitations.
The seventh topic was related to equality, focusing on the biased, erroneous, and untrustworthy nature of information provided by AI. The responses to the question about the bias, errors, and trustworthiness of AI-provided information reveal a strong consensus that AI-generated data is often unreliable. Most respondents express a lack of trust in AI, citing personal experiences and studies that highlight its tendency to produce biased and erroneous information. There is a common belief
Table 5
The analysis of the responses to the fourth question, highlighting the four main categories based on the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Doubt and anxiety over biased and untrustworthy AI-generated data | Words like "doubts," "concern," and "risky" are used | Mostly negative
Cognitive Processes | The necessity of regular cross-checking of AI-generated data and the emphasis on verification | Analytical phrases like "check," "verify," and "accurate" | Careful and analytical
Ethical Considerations | Urges responsible use of AI, emphasizing academic honesty and responsibility in data validation | Words such as "integrity," "responsibility," and "evaluation" | Concerned and constructive
Perception of Risk | Diverse opinions regarding the accuracy of AI, along with assessments of its shortcomings | Use of neutral and skeptical phrases in combination | Mixed feelings
Table 6
The analysis of the responses to the fifth question, highlighting the four main groups identified by the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Conflicting feelings: concerns about the loss of human interaction and hope for AI's advantages in language teaching | Phrases like "fear," "concern," and "helpful" | Mixed feelings
Cognitive Processes | Understanding AI's value for language use and its limitations in simulating intricate human interaction | Analysis-related words such as "practice," "feedback," and "assess" | Both analytical and evaluative
Social Processes | Human interaction is essential for the development of holistic communication | Relational terminology such as "human," "interaction," and "society" | Constructive to negative
Perception of Risk | Worries about how over-dependence on AI may affect human growth and social abilities | A combination of neutral and emotional phrases | Critical and cautious
Table 7
The examination of the answers to the sixth question, emphasizing the four primary categories determined by the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Concerns over AI's rigidity, impersonality, and inability to comprehend emotions in evaluation | Terms like "strict," "demotivating," and "concern" | Mostly negative
Cognitive Processes | Acknowledgement of AI's effectiveness and impartiality while maintaining the requirement for human input | Analytical phrases like "automating," "efficient," and "nuanced" | Both analytical and evaluative
Social Processes | Stressing the importance of teachers in providing individualized and sympathetic feedback | Relational concepts such as "feedback," "interaction," and "understanding" | From negative to positive
Perception of Risk | Opinions vary: some are skeptical that AI will completely replace human evaluators, while others see its usefulness | Terms that are both neutral and emotionally charged | Mixed feelings
among them that AI can perpetuate and amplify existing biases present in the data it is trained on, leading to untrustworthy outputs. Several responses emphasize the importance of verifying AI-generated information, as it can often be misleading or incorrect. There is a recognition among the teachers that while AI has the potential to process information objectively, it frequently falls short, especially when dealing with complex or nuanced topics. Cultural biases are also mentioned as a significant concern, with AI sometimes reflecting the biases inherent in the data it processes. The sentiment among the responses is predominantly negative, with many expressing serious doubts about the reliability of AI-provided information. Some respondents acknowledge that AI can handle simple information management tasks but stress the need for critical evaluation and double-checking of its outputs. A few responses are more neutral, recognizing the potential for AI to be useful with revisions and corrections, but overall, the prevailing view is one of scepticism. Overall, the responses indicate a cautious approach to using AI-generated information, with a strong emphasis on the need for critical thinking and verification to ensure its accuracy and reliability. Strong doubts regarding AI's dependability are reflected in Table 8, along with demands for ethical responsibility and critical analysis. The LIWC-22 analysis emphasizes the difficulties of correcting bias and inaccuracies in AI-generated information, as well as the need for caution.
Another important topic was emotional, cognitive, and behavioural engagement and AI's relation to it. The responses to the question about whether AI in education regards or disregards students' emotional, cognitive, and behavioural engagement reveal a predominant concern that AI tends to disregard these important aspects. Many respondents believe that AI encourages isolation rather than engagement, particularly in terms of emotional and behavioural interactions. Most respondents argue that AI does not take into account students' emotions, personal lives, and other important personality traits, which are crucial for effective learning and human communication. Several responses highlight that AI's inability to consider human emotions and personal differences is a significant drawback. They emphasize that while AI can be efficient and objective, it lacks the human touch that is essential for fostering emotional and cognitive engagement. This limitation is seen as a major threat to the quality of education and the overall development of students. However, some respondents suggest that the impact of AI on engagement depends on how it is used. They acknowledge that AI can be beneficial in cooperative activities and certain cognitive tasks, but stress the importance of combining AI with human interaction to ensure a balanced approach. There is a recognition that while AI can support some aspects of education, it cannot fully replace the role of human teachers in addressing students' emotional and behavioural needs. The sentiment among the responses is predominantly negative, with many expressing serious concerns about AI's ability to engage students on an emotional and personal level. A few responses are more neutral or conditional, suggesting that the effectiveness of AI depends on its application and the balance between human and machine learning. Overall, the responses indicate a strong belief that while AI can be a useful tool in education, it is not sufficient on its own to address the full spectrum of students' engagement needs. The human element remains essential for providing the emotional and cognitive support that AI currently cannot offer. Table 9 highlights the common worry that, although AI can help with some educational components, it cannot take the place of human connection when it comes to addressing behavioural and emotional engagement. The LIWC-22 analysis emphasizes the necessity of integrating AI and human teaching techniques in a balanced manner.
And nally, the interview asked about the possible cybersecurity
risks AI can present to teachers and students. The responses to the
question about possible cybersecurity risks for students and teachers
when using AI highlight a range of concerns. Many respondents show
their worries about the risks related to privacy issues, data breaches, and
unauthorized data sharing. There is a common belief that AI appli-
cations can expose sensitive information, making it vulnerable to
hacking, phishing, and other cyber-attacks. Some respondents
mention specic threats such as malware attacks, system vulnera-
bilities, and prompt injection attacks, which can compromise the
security of personal and institutional data. Several responses point
out that AIs extensive data collection capabilities pose signicant
risks, as it can gather and store large amounts of personal infor-
mation. This data can be targeted by cybercriminals, leading to
potential misuse by corporations, politicians, or malicious actors.
There is also a concern about AIs ability to imitate sounds and images,
which could be used for deceptive purposes, raising fears among users.
The sentiment among the responses is predominantly negative, with
many expressing serious concerns about the cybersecurity risks associ-
ated with AI. Some respondents acknowledge that while they are not
experts, they are aware of the potential threats and are seeking guidance
on how to mitigate these risks. A few responses are more neutral,
Table 8
The analysis of the responses to the seventh question, highlighting the four main categories based on the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | High levels of mistrust and scepticism regarding the accuracy of information produced by AI | Words like "bias," "untrustworthy," and "doubts" are used | Mostly negative
Cognitive Processes | Stressing the importance of critical thinking, verification, and meticulous assessment of AI results | Analytical phrases like "check," "evaluate," and "verify" | Both analytical and evaluative
Ethical Considerations | Calls for responsibility amid worries that AI would reinforce cultural biases and exacerbate injustices | Use of ethical terms such as "integrity," "fairness," and "biases" | From negative to positive
Perception of Risk | Awareness of the dangers of depending on AI for complicated or nuanced subjects, which results in its cautious application | Terms that are both neutral and emotionally charged | Critical and cautious
Table 9
The examination of the answers to the eighth question, emphasizing the four primary categories determined by the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | AI's potential to promote loneliness and disregard interpersonal interaction | Phrases like "essential," "concern," and "isolation" | Mostly negative
Cognitive Processes | AI's effectiveness in collaborative activities is acknowledged, but its capacity to handle individual variances is limited | Analytical phrases like "lacks," "beneficial," and "efficient" | From neutral to analytical
Social Processes | A focus on the human component needed for comprehensive emotional and behavioural involvement | Relational concepts such as "communication," "personal," and "interaction" are used | From negative to positive
Perception of Risk | Calls for balanced methods and worries that reliance on AI would undermine full engagement | Terms that are both neutral and emotionally charged | Critical and cautious
suggesting that the risks are still being studied and understood, but overall, the prevailing view is one of caution and apprehension. Overall, the responses indicate a strong awareness of the cybersecurity risks posed by AI, with a focus on the need for robust security measures and ongoing research to address these challenges. There is a recognition that while AI can offer significant benefits, it also requires careful management to ensure the safety and privacy of students and teachers. Table 10 shows the common worries regarding AI-related cybersecurity threats, highlighting the necessity of cautious handling and more robust defenses. A cautious but positive approach to tackling these issues is captured in the LIWC-22 analysis.
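The section-level judgements reported above ("predominantly negative", "mixed feelings") amount to aggregating per-response sentiment labels into a dominant category. A toy illustration of that aggregation step, using invented labels rather than the study's actual coded data, might look like:

```python
# Toy aggregation of per-response sentiment labels into an overall
# judgement such as "predominantly negative". The labels below are
# invented for illustration; they are not the study's coded data.
from collections import Counter

labels = ["negative", "negative", "mixed", "negative", "neutral",
          "negative", "mixed", "positive", "negative", "negative"]

dist = Counter(labels)
shares = {label: round(100 * n / len(labels)) for label, n in dist.items()}
dominant = dist.most_common(1)[0][0]

print(shares)    # percentage share of each label
print(dominant)  # the label used to summarize the overall sentiment
```

With six of ten responses labelled negative, the summary would read "predominantly negative", mirroring the phrasing used in the results above.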
4. Discussion
The results show serious issues with cybersecurity threats, privacy, biases, academic integrity, dependability, and engagement. The interpretation of these findings highlights the necessity of proactive, human-centered frameworks that direct the incorporation of AI in the classroom. University teachers express their worries about the risks associated with using AI in education and have pointed out a number of risks linked specifically to its employment in higher education. These include, but are not limited to, data privacy issues, where sensitive information may be misused and compromised due to the extensive collection and analysis of student data. Additionally, job displacement is also mentioned by teachers. While biases in AI algorithms can lead to unequal treatment of particular student groups, an over-reliance on AI can erode critical thinking and problem-solving abilities. Ethical issues about the appropriate application of AI in education are essential to preserving the integrity of, and confidence in, the educational system.
In relation to the research questions, and in response to the first research question, university teachers identify several primary risks associated with the integration of AI in education. Privacy risks are a major concern, with fears of improper use of data, illegal access, and data breaches facilitated by AI, as also concluded by Gilbert and Gilbert (2024). The excessive data collection and lack of transparency about data usage exacerbate these worries. In more detail, the responses to the questions reveal significant concerns, particularly regarding privacy risks. Many individuals express fear over potential misuse, unauthorized access, and data breaches facilitated by AI (Atlantic Council, 2024). Ethical issues are also prominent, with calls for ethical design, transparent policies, and strict regulations (AlAfnan et al., 2023; Kazi et al., 2023). The sentiment is predominantly negative, with many expressing serious ethical worries and fear about AI's impact on privacy.
The answer to the second research question is as follows: teachers perceive AI as having a profound impact on their roles, potentially transforming them into technological facilitators as opposed to conventional teachers. While AI can enhance personalized learning, there are concerns about job displacement and the loss of essential human interaction, as concluded by Brusilovsky (2023). The importance of combining AI with human elements is emphasized to ensure effective engagement and assessment. The increasing significance of incorporating AI into the educational landscape is underscored by this change in role perception. In order to automate administrative processes, customize learning, and offer data-driven insights into student performance, teachers are increasingly viewed as facilitators who use AI. Nevertheless, this change also presents difficulties, such as the requirement for professional development to give educators the know-how to use AI tools efficiently (Ali, 2023; Klimova et al., 2024). Furthermore, there are worries about preserving the human aspect in education, because a greater dependence on AI may result in fewer opportunities for deep connections between teachers and students. To guarantee comprehensive and engaging learning, it is imperative to strike a balance between the advantages of AI and the maintenance of vital human relationships. Consequently, even though AI has the potential to improve educational performance, these issues must be addressed, and instructors must be assisted in adjusting to their changing roles. Teachers also worry about AI's potential to perpetuate existing biases (Michel-Villarreal et al., 2023), affecting equality and fairness in education. Overall, there is a strong emphasis on the need for ethical guidelines, critical evaluation, and robust cybersecurity measures to mitigate the risks associated with AI in education, as reported by Ulven and Wangen (2021). AI is seen by most teachers as changing the teaching environment, though it may provide many services to them. The sentiment is mixed, with both optimism about AI's benefits and apprehension about its potential negative impacts. Many believe AI will not completely replace teachers but will significantly alter their roles (Crompton & Burke, 2023; Tawafak et al., 2024).
Academic integrity is another significant issue, as AI is seen to facilitate plagiarism and undermine critical thinking and creativity, in line with Walczak and Cellary (2023). Teachers also express scepticism about the validity and reliability of AI-generated data, citing its potential for bias and errors. Cybersecurity risks, including data breaches, phishing, and unauthorized data sharing, are highlighted as serious threats posed by AI, as reported by Ulven and Wangen (2021). There is a call for clear guidelines to manage AI use in education. The sentiment is predominantly negative, with many expressing serious concerns about AI's impact on academic integrity. Similarly, there is general scepticism about the validity and reliability of AI-generated data (Lam & Le, 2024), with many respondents emphasizing the need for thorough verification due to the potential for biased and erroneous information.
AI's usefulness in replicating human interaction for language training is acknowledged, especially for students with limited access to native speakers (Derakhshan et al., 2023; Nutprapha, 2023). However, there are concerns about AI's inability to fully replicate the complexity of human interaction, which is crucial for effective learning. The sentiment is mixed, with optimism about AI's benefits but concerns about its potential risks. When it comes to language assessment, AI is seen as efficient but lacking the human touch needed for effective evaluation. There is a call for a balanced approach, combining AI with human-led assessments, with a focus on cognitive abilities (Al-Obaydi, Pikhart, & Tawafak, 2023).
The responses also highlight significant concerns about the bias, errors, and trustworthiness of AI-generated information. Many express a lack of trust in AI, citing its tendency to produce biased and erroneous information when using technology (Dwork & Minow, 2022; Tawafak, Al-Obaydi, & Pikhart, 2023). Frequently, this mistrust originates from situations in which AI systems have reinforced preexisting social
Table 10
The analysis of the responses to the ninth question, highlighting the four main categories based on the LIWC-22 results.

LIWC-22 Category | Findings | Linguistic Indicators | Sentiment/Focus
Affective Processes | Data breaches, hacking, privacy violations, and harmful data abuse are all concerns | Words like "vulnerable," "fear," and "threats" are used | Mostly negative
Cognitive Processes | Understanding AI's massive data collection and the system weaknesses that call for mitigation | Analytical phrases like "compromise," "assess," and "risks" | Cautious and analytical
Ethical Considerations | Stressing the value of strong security protocols and ethical AI system management | Use of ethical terms such as "secure," "responsibility," and "mitigate" | Conscious and constructive
Perception of Risk | Calls for proactive action, combined with a strong understanding of the cybersecurity threats associated with AI | Terms that are both neutral and emotionally charged | Critical and cautious
prejudices. Biases in AI may originate from the data used to train these systems, which may contain historical biases and injustices. Additionally, problems with the algorithms' comprehension of context, or the absence of human supervision during crucial decision-making processes, can lead to inaccuracies in AI outputs (MIT Technology Review, 2019). Implementing ethical standards, being transparent in AI development, and conducting ongoing monitoring to detect and reduce biases are all crucial to resolving these problems. Furthermore, educating the public and raising awareness of AI's potential and constraints can contribute to a more thoughtful and circumspect approach to its use. Trust and acceptance can be increased by reducing the possibility of biased and incorrect AI outputs through fusing technological breakthroughs with human judgment and ethical considerations. There is a strong emphasis on the need for critical evaluation and verification.
Additionally, AI is perceived to disregard students' emotional and behavioural engagement, encouraging isolation rather than fostering meaningful interactions, in line with the results of Lo et al. (2024). Combining AI with human interaction is recommended to ensure a balanced approach. Finally, cybersecurity risks associated with AI are a major concern, including privacy issues, data breaches, and unauthorized data sharing. The sentiment is mostly negative, with serious concerns about these risks. Overall, the responses indicate a cautious approach to using AI in education, with a strong emphasis on the need for ethical guidelines, critical evaluation, and the preservation of human elements in teaching and learning. Moreover, to allay these worries and guarantee the responsible and safe application of AI in education, strong security protocols, constant observation, and compliance with data protection laws are necessary (Atlantic Council, 2024; GDPR Advisor, 2023).
The ndings highlight the signicance of ethics and openness in AI
policy. Privacy issues draw attention to the necessity of regulations that
guarantee safe data processing and guard against abuse or illegal access.
Strict evaluation procedures are necessary to address bias and reliability
concerns in order to validate AI-generated outputs and prevent biased
results. The results support a balanced incorporation of AI tools into
curriculum design, fusing their efciency and personalization advan-
tages with the essential human component for behavioral, emotional,
and cognitive engagement. In order to overcome AIs limitations, cur-
riculum frameworks can include AI into collaborative activities or lan-
guage teaching projects while making sure students learn critical
thinking and verication techniques. These theoretical ramications
collectively imply that a human-centric strategy, stressing security,
ethics, and comprehensive student development, must direct AIs
participation in education. By taking into account these factors, curric-
ulum designers and AI policymakers may manage risks while utilizing
AIs revolutionary promise for inclusive and successful education.
In conclusion, deploying AI in education carries risks for both teachers
and students. Concerns about data privacy, biases in AI algorithms,
cybersecurity issues, and the possible decline in meaningful human
engagement are the key problems. To guarantee that AI is applied
sensibly and successfully in educational settings, it is imperative to
recognize and manage these risks.
5. Conclusion
The major contribution and novelty of this study lie in highlighting
and revealing significant concerns among university teachers
regarding the integration of AI in education, and in providing a list
of the major concerns expressed by these educators. Privacy risks,
including data misuse and unauthorized access, are major issues, along
with fears of AI facilitating plagiarism and undermining academic
integrity. Teachers express scepticism about the validity and reliability
of AI-generated data, citing potential biases and errors. While AI is seen
as beneficial for personalized learning, there are apprehensions about
job displacement and the loss of essential human interaction. Cybersecurity
risks, such as data breaches and phishing, are also highlighted.
Overall, the sentiment is predominantly negative, with calls for ethical
guidelines, critical evaluation, and the preservation of human elements
in education to mitigate these risks.
Of course, these issues reflect only the subjective opinions and perspectives
of the educators who were interviewed; however, the results can be
understood as a starting point for further, more in-depth and
quantitative studies on the topic.
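Such a quantitative follow-up could begin with simple frequency summaries of the risk themes coded in interview transcripts. The sketch below is purely illustrative and is not the authors' LIWC-22 pipeline; the theme labels and the sample data are hypothetical.

```python
# Illustrative sketch: tallying hypothetical coded risk themes across
# interview transcripts to report how many respondents raised each theme.
from collections import Counter

# Each transcript is represented by the set of risk themes coded in it
# (hypothetical example data, not the study's actual coding).
coded_transcripts = [
    {"privacy", "plagiarism", "job_displacement"},
    {"privacy", "data_validity"},
    {"plagiarism", "data_validity", "cybersecurity"},
    {"privacy", "cybersecurity"},
]

# Count, per theme, the number of transcripts in which it appears.
theme_counts = Counter(theme for transcript in coded_transcripts
                       for theme in transcript)
n = len(coded_transcripts)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}/{n} respondents ({count / n:.0%})")
```

With a larger sample, the same per-respondent proportions could feed directly into descriptive statistics or group comparisons across countries or disciplines.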
The study also highlights several ideas for mitigating the risks
identified. To reduce the risks involved in implementing AI in the
educational environment, teachers should place a high priority on
professional development so they can successfully integrate AI tools
while keeping ethical issues such as bias and data protection in mind.
To create deep connections with students, teachers can strike a balance
between AI automation and individualized help, highlighting the value
of preserving human engagement. By staying aware of new trends in AI
and rigorously assessing these technologies, teachers can ensure that AI
tools improve learning outcomes without jeopardizing student safety
and equity. Ultimately, a careful and well-informed implementation
allows teachers to maximize AI's advantages while lowering its risks.
5.1. Research limitations
Naturally, this qualitative research has serious limitations. It was
conducted on a small sample of 40 teachers from two countries; a much
larger sample would be necessary to obtain more reliable data. However,
to capture a picture of the current situation, this sample size seems
sufficient for qualitative research. Further studies should follow,
including students as well as teachers, and more detailed quantitative
data should be obtained to provide a comprehensive understanding of
the impact of AI in education. These limitations therefore do not
undermine the results or the importance of the research as it was
conducted.
5.2. Future lines of research
Future lines of research should focus on conducting similar studies to
investigate users' perceptions and experiences regarding the use of AI,
particularly its ethical implications and cybersecurity risks. These
studies are crucial for understanding the broader impact of AI in
education and ensuring that its implementation is both effective and
responsible. By expanding the scope to include a larger and more diverse
sample, as well as incorporating quantitative data, researchers can gain
a more comprehensive understanding of the challenges and opportunities
presented by AI. Additionally, involving students in these studies
will provide valuable insights into their experiences and concerns,
further informing the development of ethical guidelines and best
practices for AI in education.
CRediT authorship contribution statement
Marcel Pikhart: Writing – review & editing, Writing – original draft.
Liqaa Habeb Al-Obaydi: Writing – review & editing, Writing – original
draft.
Declaration of competing interest
The authors declare that they have no known competing financial
interests or personal relationships that could have appeared to influence
the work reported in this paper.
Acknowledgements
This research is part of the Excellence 2025 project at the Faculty of
Informatics and Management, University of Hradec Kralove, Czech
Republic.
M. Pikhart and L.H. Al-Obaydi
Computers in Human Behavior Reports 18 (2025) 100693
9
Data availability
All data generated by the research are presented in the article.
References
Al-Obaydi, L. H., Pikhart, M., & Tawafak, R. (2023). Online assessment in language
teaching environment through essays, oral discussion, and multiple-choice
questions. Computer-Assisted Language Learning Electronic Journal (CALL-EJ), 24(2),
175–197.
AlAfnan, M. A., Dishari, S., Jovic, M., & Lomidze, K. (2023). ChatGPT as an educational
tool: Opportunities, challenges, and recommendations for communication, business
writing, and composition courses. Journal of Artificial Intelligence and Technology, 3
(2), 60–68.
Alhazmi, A. A., & Kaufmann, A. (2022). Phenomenological qualitative methods applied
to the analysis of cross-cultural experience in novel educational social contexts.
Frontiers in Psychology, 13, Article 785134. https://doi.org/10.3389/
fpsyg.2022.785134
Ali, J. K. M. (2023). Benefits and challenges of using ChatGPT: An exploratory study on
English language program. University of Bisha Journal for Humanities, 2(2), 629–641.
Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. Q. (2023). Impact of
ChatGPT on learning motivation: Teachers' and students' voices. Journal of English
Studies in Arabia Felix, 2(1), 41–49. https://doi.org/10.56540/jesaf.v2i1.51
Alwaqdani, M. (2024). Investigating teachers' perceptions of artificial intelligence tools
in education: Potential and difficulties. Education and Information Technologies.
https://doi.org/10.1007/s10639-024-12903-9
Atlantic Council. (2024). AI in cyber and software security: What's driving opportunities
and risks?. Retrieved from https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/ai-in-cyber-and-software-security-whats-driving-opportunities-and-risks/.
Bae, H., Jaesung, H., Park, J., Woong Choi, G., & Moon, J. (2024). Pre-service teachers'
dual perspectives on generative AI: Benefits, challenges, and integration into their
teaching and learning. Online Learning, 28(3), 131–156. https://doi.org/10.24059/
olj.v28i3.4543
Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024).
Feedback sources in essay writing: Peer-generated or AI-generated feedback?
International Journal of Educational Technology in Higher Education, 21(1). https://doi.
org/10.1186/s41239-024-00455-4. Article 23.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A.,
… Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts,
taxonomies, opportunities and challenges toward responsible AI. Information Fusion,
58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012.
Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials.
Assessing Writing, 57, Article 100745. https://doi.org/10.1016/j.asw.2023.100745
Bin-Hady, W. R. A., Ali, J. K. M., & Al-humari, M. A. (2024). The effect of ChatGPT on
EFL students' social and emotional learning. Journal of Research in Innovative
Teaching & Learning, 17(2), 243–255. https://doi.org/10.1108/JRIT-02-2024-0036
Brusilovsky, P. (2023). AI in education, learner control, and human-AI collaboration.
International Journal of Artificial Intelligence in Education, 34(1), 122–135. https://doi.
org/10.1007/s40593-023-00356-z
Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits,
and challenges in higher education. International Journal of Educational Technology in
Higher Education, 20, 43. https://doi.org/10.1186/s41239-023-00411-8
Chollet, F. (2019). On the measure of intelligence. arXiv preprint arXiv:1911.01547.
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of
the field. International Journal of Educational Technology in Higher Education, 20(1),
22. https://doi.org/10.1186/s41239-023-00392-8
Derakhshan, A., Eslami, Z. R., & Shakki, F. (2023). Comparing compliments in Face-to-
Face vs. online interactions among Iranian speakers of Persian. Pragmatics and
Society. https://doi.org/10.1075/ps.22102.der
Derakhshan, A., & Ghiasvand, F. (2024). Is ChatGPT an evil or an angel for second
language education and research? A phenomenographic study of research-active EFL
teachers' perceptions. International Journal of Applied Linguistics, 1–19. https://doi.
org/10.1111/ijal.12561
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K.,
Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H.,
Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I.,
Brooks, L., Buhalis, D., … Wright, R. (2023). So what if ChatGPT wrote it?
Multidisciplinary perspectives on opportunities, challenges and implications of
generative conversational AI for research, practice and policy. International Journal
of Information Management, 71, Article 102642.
Dwork, C., & Minow, M. (2022). Distrust of artificial intelligence: Sources & responses
from computer science & law. Dædalus, 151(2), 309–321. https://doi.org/10.1162/
daed_a_01918
GDPR Advisor. (2023). GDPR compliance in the education sector: Protecting student
data in learning environments. Retrieved from https://www.gdpr-advisor.com/gdpr-compliance-in-the-education-sector-protecting-student-data-in-learning-environments/.
Gilbert, C., & Gilbert, M. A. (2024). The impact of AI on cybersecurity defence
mechanisms: Future trends and challenges. Global Scientific Journals, 12(9), 427–441.
Giray, L., Jacob, J., & Gumalin, D. L. (2024). Strengths, weaknesses, opportunities, and
threats of using ChatGPT in scientific research. International Journal of Technology in
Education (IJTE), 7(1), 40–58. https://doi.org/10.46328/ijte.618
Gokcearslan, S., Tosun, C., & Erdemir, Z. G. (2024). Benefits, challenges, and methods of
artificial intelligence (AI) chatbots in education: A systematic literature review.
International Journal of Technology in Education. https://doi.org/10.46328/ijte.600
Guo, Y., & Wang, Y. (2024). Exploring the effects of artificial intelligence application on
EFL students' academic engagement and emotional experiences: A mixed-methods
study. European Journal of Education, 60(1), Article e12812. https://doi.org/
10.1111/ejed.12812
Javier, D. R. C., & Moorhouse, B. L. (2023). Developing secondary school English
language learners' productive and critical use of ChatGPT. TESOL Journal. https://
doi.org/10.1002/tesj.755
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F.,
… Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large
language models for education. Learning and Individual Differences, 103, Article
102274.
Kazi, T., Islam, M. S., Sezan, S. B. K., Sanad, Z. A., & Ataur, A. J. (2023). Impact of
ChatGPT on academic performance among Bangladeshi undergraduate students.
International Journal of Renewable and Sustainable Energy, 10(2023), 18–28. https://
doi.org/10.55529/ijrise.35.18.28
Klimova, B., Pikhart, M., & Al-Obaydi, L. H. (2024). Exploring the potential of ChatGPT
for foreign language education at the university level. Frontiers in Psychology, 15,
Article 1269319. https://doi.org/10.3389/fpsyg.2024.1269319
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and
learning. RELC Journal, 1–14. https://doi.org/10.1177/00336882231162868
Lam, N. T. H., & Le, T. N. D. (2024). Stakeholders' perceptions of ChatGPT in teaching
and learning English paragraph writing at Van Lang University. Asia CALL Online
Journal, 15(2), 42–59.
Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., &
Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher
education as explained by ChatGPT. Education Sciences, 13(9). https://doi.org/
10.3390/educsci13090856
MIT Technology Review. (2019). This is how AI bias really happens – and why it's so hard to
fix. Retrieved from MIT Technology Review.
Nadjiba, M., & Belmekki, A. (2024). Striking the balance of AI use in EFL education:
Maximizing benefits, and minimizing risks. ATRAS Journal.
Nutprapha, K. (2023). Using AI-powered speech recognition technology to improve
English pronunciation and speaking skills. Journal of Educational Technology, 14(2),
45–60. https://doi.org/10.1080/xxxxxxx.2023.456789
Pan, Z., & Wang, Y. (2025). From technology-challenged teachers to empowered
digitalized citizens: Exploring the profiles and antecedents of teacher AI literacy in
the Chinese EFL context. European Journal of Education, 60(1), 1–16. https://doi.org/
10.1111/ejed.70020
Pikhart, M., Klimova, B., & Al-Obaydi, L. H. (2024). Exploring university students'
preferences and satisfaction in utilizing digital tools for foreign language learning.
Frontiers in Education, 9, Article 1412377. https://doi.org/10.3389/
feduc.2024.1412377
Silva, A. de O., & Janes, D. dos S. (2023). Challenges and opportunities of artificial
intelligence in education in a global context. Review of Artificial Intelligence in
Education, 4(00), e1. https://doi.org/10.37497/rev.artif.intell.education.v4i00.1
Sohn, B. K., Thomas, S. P., Greenberg, K. H., & Pollio, H. R. (2017). Hearing the voices of
students and teachers: A phenomenological approach to educational research.
Qualitative Research in Education, 6(2), 121–148. https://doi.org/10.17583/
qre.2017.2374
Sumakul, D. T., Hamied, F. A., & Sukyadi, D. (2022). Artificial intelligence in EFL
classrooms: Friend or foe? LEARN Journal: Language Education and Acquisition
Research Network, 15(1), 232–256.
Tawafak, R. M., Al-Obaydi, L. H., Klimova, B., & Pikhart, M. (2023). Technology
integration of using digital game play for enhancing EFL college students' behavior
intention. Contemporary Educational Technology, 15(4), ep452. https://doi.org/
10.30935/cedtech/13454
Tawafak, R. M., Al-Obaydi, L. H., & Pikhart, M. (2023). Competency categorization and
roles of online teachers from the perspective of university students. Frontiers in
Psychology. https://doi.org/10.3389/fpsyg.2023.1009000
Tawafak, R. M., Al-Obaydi, L. H., Pikhart, M., & Namaziandost, E. (2024). Risk-taking,
TAM model, and technology integration: Impact on EFL college students' behavioral
intentions. Applied Research on English Language, 13(2). https://doi.org/10.22108/
are.2024.140729.2241
Teng, M. F. (2024). A systematic review of ChatGPT for English as a Foreign Language
writing: Opportunities, challenges, and recommendations. International Journal of
TESOL Studies, 6(3), 36–57. https://doi.org/10.58304/ijts.20240304.
Ulven, J. B., & Wangen, G. (2021). A systematic review of cybersecurity risks in higher
education. Future Internet, 13(2), 39. https://doi.org/10.3390/fi13020039
Velvetech. (2024). Risks and concerns of using AI in education. Velvetech. Retrieved from
https://www.velvetech.com/ai-in-education-risks-concerns.
Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of
widespread access to Generative AI. Economic and Business Review, 9, 71–100.
Wang, X., Gao, Y., Wang, Q., & Zhang, P. (2025). Fostering engagement in AI-assisted
Chinese EFL classrooms: The role of classroom climate, AI literacy, and resilience.
European Journal of Education, 60(1), Article e12874. https://doi.org/10.1111/
ejed.12874