Open Access
© The Author(s) 2023. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
RESEARCH ARTICLE
Chanand Hu Int J Educ Technol High Educ (2023) 20:43
https://doi.org/10.1186/s41239-023-00411-8
International Journal of Educational Technology in Higher Education
Students’ voices ongenerative AI:
perceptions, benets, andchallenges inhigher
education
Cecilia Ka Yuk Chan1* and Wenjie Hu1
Abstract
This study explores university students’ perceptions of generative AI (GenAI) technologies, such as ChatGPT, in higher education, focusing on familiarity, their willingness to engage, potential benefits and challenges, and effective integration. A survey of 399 undergraduate and postgraduate students from various disciplines in Hong Kong revealed a generally positive attitude towards GenAI in teaching and learning. Students recognized the potential for personalized learning support, writing and brainstorming assistance, and research and analysis capabilities. However, concerns about accuracy, privacy, ethical issues, and the impact on personal development, career prospects, and societal values were also expressed. According to John Biggs’ 3P model, student perceptions significantly influence learning approaches and outcomes. By understanding students’ perceptions, educators and policymakers can tailor GenAI technologies to address needs and concerns while promoting effective learning outcomes. Insights from this study can inform policy development around the integration of GenAI technologies into higher education. By understanding students’ perceptions and addressing their concerns, policymakers can create well-informed guidelines and strategies for the responsible and effective implementation of GenAI tools, ultimately enhancing teaching and learning experiences in higher education.
Highlights
- This study focuses on the integration of generative AI (GenAI) technologies, like ChatGPT, into higher education settings.
- University students’ perceptions of generative AI technologies in higher education were explored, including familiarity, potential benefits, and challenges.
- A survey of 399 undergraduate and postgraduate students from various disciplines in Hong Kong revealed a generally positive attitude towards GenAI in teaching and learning.
- Insights from this study can inform policy development around the integration of GenAI technologies into higher education, helping to create well-informed guidelines and strategies for responsible and effective implementation.
*Correspondence:
Cecilia.Chan@cetl.hku.hk
1 University of Hong Kong, Hong Kong, China
Keywords: ChatGPT, Generative AI, Student perception, AI literacy, Risks, Advantages,
Holistic competencies
Generative Articial Intelligence
Generative AI (GenAI) encompasses a group of machine learning algorithms designed to generate new data samples that mimic existing datasets. One of the foundational techniques in GenAI is the Variational Autoencoder (VAE), a type of neural network that learns to encode and decode data in a way that maintains its essential features (Kingma & Welling, 2013). Another popular GenAI method is Generative Adversarial Networks (GANs), which consist of two neural networks working in competition to generate realistic data samples (Goodfellow et al., 2014). GenAI models use advanced algorithms to learn patterns and generate new content such as text, images, sounds, videos, and code. Some examples of GenAI tools include ChatGPT, Bard, Stable Diffusion, and DALL-E. The ability of these tools to handle complex prompts and produce human-like output has spurred research and interest into the integration of GenAI in various fields such as healthcare, medicine, education, media, and tourism.
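The adversarial setup behind GANs can be illustrated with a deliberately tiny, stdlib-only sketch (a toy illustration, not the architecture of any production GenAI tool): a linear generator G(z) = a·z + b tries to mimic samples from a fixed Gaussian, while a logistic-regression discriminator D(x) = sigmoid(w·x + c) tries to tell real from fake, with both updated by manually derived gradient ascent on the standard GAN objectives. All parameter values and data here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp the input to avoid overflow in math.exp.
    return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

# "Real" data: samples from N(4, 1); the generator must learn to mimic them.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator G(z) = a*z + b maps noise z ~ N(0, 1) to a fake sample.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.0, 0.0

lr = 0.05
z0 = [random.gauss(0.0, 1.0) for _ in range(500)]  # fixed noise for evaluation
mean_before = sum(a * z + b for z in z0) / len(z0)

for step in range(3000):
    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    x_real = real_sample()
    x_fake = a * random.gauss(0.0, 1.0) + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: ascend log D(G(z)), i.e. try to fool the discriminator.
    z = random.gauss(0.0, 1.0)
    x_g = a * z + b
    d_g = sigmoid(w * x_g + c)
    grad_x = (1 - d_g) * w          # d log D(x) / d x at x = G(z)
    a += lr * grad_x * z
    b += lr * grad_x

mean_after = sum(a * z + b for z in z0) / len(z0)
print(f"fake mean before: {mean_before:.2f}, after: {mean_after:.2f}")
```

After training, the mean of the generated samples drifts from near 0 towards the real data mean of 4, which is the competitive dynamic the GAN framework formalizes.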
ChatGPT, for example, has caused a surge of interest in the use of GenAI in higher education since its release in November 2022 (Hu, 2023). It is a conversational AI system developed by OpenAI: an autoregressive large language model (with more than 175 billion parameters) that has been pre-trained on a large corpus of text data. It can generate human-like responses to a wide range of text-based inputs. The model has been trained on a diverse range of texts, including books, articles, and websites, allowing it to understand user input, generate responses, and maintain coherent conversations on a wide range of topics. There has been much discussion of its potential to transform disciplinary practices such as medical writing (Biswas, 2023; Kitamura, 2023), surgical practice (Bhattacharya et al., 2023), and health care communications (Eggmann et al., 2023), as well as to enhance higher education teaching and learning (e.g., Adiguzel et al., 2023; Baidoo-Anu & Ansah, 2023).
Benets andchallenges ofusing generative AI inhigher education
One of the key uses of GenAI in higher education is enhancing students’ learning experience through its ability to respond to user prompts with highly original output. Text-to-text AI generators can provide writing assistance to students, especially non-native English-speaking students (Chan & Lee, 2023), by enabling them to brainstorm ideas and get feedback on their writing through applications such as ChatGPT (Atlas, 2023), while text-to-image AI generators such as DALL-E and Stable Diffusion can serve as valuable tools for teaching technical and artistic concepts in arts and design (Dehouche & Dehouche, 2023). GenAI tools are also believed to be useful research aids for generating ideas, synthesizing information, and summarising vast amounts of text data to help researchers analyse data and compose their writing (Berg, 2023; Chan & Zhou, 2023), contributing to efficiency in publication (Kitamura, 2023; van Dis et al., 2023). Another opportunity in which GenAI can bring benefits is learning assessment (Crompton & Burke, 2023). Tools such as the Intelligent Essay Assessor are used to grade students’ written work and provide feedback on their performance (Landauer, 2003). Mizumoto and Eguchi (2023) examined the reliability and accuracy of ChatGPT as an automated essay scoring tool, and the results show that ChatGPT shortened the time needed for grading, ensured consistency in scoring, and was able to provide immediate scores and feedback on students’ writing skills. Such research demonstrates that GenAI has the potential to transform the teaching and learning process as well as improve student outcomes in higher education.
On the other hand, there have been concerns about the limitations of GenAI and issues related to ethics, plagiarism, and academic integrity. Kumar’s (2023) analysis of AI-generated responses to academic writing prompts shows that the text output, although mostly original and relevant to the topics, contained inappropriate references and lacked the personal perspectives that AI is generally incapable of producing. For second language learners, constructing appropriate prompts poses a challenge in itself, as it requires a certain level of linguistic skill; and overreliance on GenAI tools may compromise students’ genuine efforts to develop writing competence (Warschauer et al., 2023). In addition, the content produced by GenAI may be biased, inaccurate, or harmful if the dataset on which a model was trained contains such elements (Harrer, 2023). AI-generated images, for example, may contain nudity or obscenity and can be created for malicious purposes such as deepfakes (Maerten & Soydaner, 2023). GenAI tools are not able to assess the validity of content or determine whether the output they generate contains falsehoods or misinformation, so their use requires human oversight (Lubowitz, 2023). Furthermore, since AI-generated output cannot be detected by most plagiarism checkers, it is difficult to determine whether a given piece of writing is the author’s original work (Peres et al., 2023). According to Chan (2023a), “it raises the question of what constitutes unethical behaviour in academic writing including plagiarism, attribution, copyrights, and authorship in the context of AI-generated content”—an AI-plagiarism. As Zhai (2022) cautions, the use of text-to-text generators such as ChatGPT may compromise the validity of assessment practices, particularly those involving written assignments. Hence, the widespread use of GenAI can pose a serious threat to academic integrity in higher education. In Chan and Tsi’s (2023) study, there was particular concern about the development of holistic competencies such as creativity and critical thinking. The benefits of GenAI underline the potential of the technology as a valuable learning tool for students, while its limitations and challenges show a need for research into how GenAI can be effectively integrated into the teaching and learning process. Thus, the research questions for this study are:
1. How familiar are university students with GenAI technologies like ChatGPT?
2. What are the potential benefits and challenges associated with using GenAI in teaching and learning, as perceived by university students?
3. How can GenAI be effectively integrated into higher education to enhance teaching and learning outcomes?
Student perceptions oftheuse ofGenAI inhigher education
User acceptance is key to the successful uptake of technological innovations (Davis, 1989). John Biggs emphasized the importance of student perception in his 3P (Presage–Process–Product) model of teaching and learning (Biggs, 2011). According to Biggs, students’ perceptions of their learning environment, their abilities, and the teaching strategies used have a significant impact on their approach to learning (Biggs, 1999), which in turn influences their learning outcomes. Students who perceive the learning environment (such as curriculum content, teaching methods, assessment methods, learning resources, learning context, and student support services) positively and feel confident about their abilities are more likely to adopt a deep approach to learning, which involves seeking understanding and making connections between concepts. On the other hand, students who have a negative perception of their learning environment or doubt their abilities may adopt a surface approach to learning, where they focus on memorizing facts and meeting minimum requirements (Biggs, 2011). In a learning environment, the way students perceive a technological innovation such as GenAI, including their views, concerns, and experiences of the technology, can have an impact on their willingness to utilise the tool and, consequently, the extent to which the tool is integrated into the learning process. A large proportion of research into tertiary students’ perceptions in this area focuses on AI in general and on chatbots that are not necessarily powered by GenAI, while students’ views and experiences of GenAI tools specifically remain relatively underexplored. Research into student perceptions of AI/GenAI typically investigates students’ attitudes, their experiences of AI, and factors influencing their perceptions such as gender, discipline, age, and year of study.
Attitudes towards AI and experiences of AI
Research into the use of AI in language classrooms shows that students found AI tools such as chatbots and Plot Generator useful for enhancing language acquisition by providing assistance with grammar, guiding them in generating ideas, and helping them communicate in the target language (Bailey et al., 2021; Sumakul et al., 2020). AI KAKU, a GenAI tool based on the GPT-2 language model, was implemented in English language lessons with Japanese students and was perceived to be easy to use and able to assist students to express themselves in English (Gayed et al., 2022), while the use of AI-based chatbots for learning support improved students’ learning achievement, self-efficacy, learning attitude, and learning motivation (Essel et al., 2022; Lee et al., 2022). A study of the use of chatbots in business education also reported favourable user feedback, with students citing positive learning experiences due to chatbots’ responsiveness, interactivity, and confidential learning support (Chen et al., 2023). Most students agreed that AI has a profound impact on their disciplines and future careers (e.g., Bisdas et al., 2021; Gong et al., 2019; Sit et al., 2020) and expressed an intention to utilise AI in their learning and future practice (e.g., Bisdas et al., 2021; Lee et al., 2022), and thus viewed the integration of AI as an essential part of university curricula (e.g., Abdelwahab et al., 2022; Bisdas et al., 2021; Gong et al., 2019; Yüzbaşioğlu, 2021).
Students who had a good understanding of AI were also found to express a low level of anxiety about AI in Dahmash et al.’s (2020) study. However, Jeffrey’s (2020) study found conflicting beliefs among college students: students who had a high level of understanding of and information about AI and believed that AI could benefit them personally also expressed concerns about the impact of AI on human jobs. In Dahmash et al.’s (2020) and Gong et al.’s (2019) research, the choice of radiology as a future career was associated with the impact of AI: the number of medical students indicating radiology as their specialty choice increased when the potential impact of AI was not a consideration. Among the concerns and drawbacks regarding the use of AI, as perceived by students, are limited human interaction (e.g., Bisdas et al., 2021; Essel et al., 2022), potential data leakage (e.g., Bisdas et al., 2021), absence of emotional connection (Chen et al., 2023), breach of ethics (e.g., Gillissen et al., 2022; Jha et al., 2022), and reduced job opportunities or increased demands in job practices (Ghotbi et al., 2022; Gong et al., 2019; Park et al., 2020).
Frequency of use/time spent on AI tools
Research examining the relationship between frequency of AI use and student perceptions of AI is inconclusive. For example, Yildiz Durak’s (2023) study of 86 students at a university in Turkey reported no correlation between chatbot usage frequency and visual design self-efficacy, course satisfaction, chatbot usage satisfaction, or learner autonomy. The finding suggests that frequency of use alone is not a meaningful factor, while satisfaction with use can affect users’ self-efficacy. In contrast, Bailey et al. (2021) found that the amount of time spent on chatbot use in a second language writing class was positively associated with students’ confidence in using the target language and their perception of task value.
Use of methodology
Most of the research into student perceptions of AI/GenAI employs a quantitative survey design (e.g., Bisdas et al., 2021; Dahmash et al., 2020; Gherhes & Obrad, 2018; Yüzbaşioğlu, 2021). Some studies incorporated open-ended survey questions (e.g., Hew et al., 2023; Jeffrey, 2020) and semi-structured interviews (e.g., Gillissen et al., 2022; Mokmin & Ibrahim, 2021; Park et al., 2020) to gather students’ free responses and to probe their views on the research topic in addition to their responses to survey questions. For example, Park et al.’s (2020) study consisted of two stages: semi-structured interviews were conducted face-to-face or by telephone in Stage 1, followed by an Internet-based survey in Stage 2. Studies that examined the impact of AI and student perceptions typically adopted an experimental design using a pretest-intervention-posttest approach and the administration of a questionnaire to examine student perceptions (e.g., Essel et al., 2022; Lee et al., 2022). Qualitative research is relatively rare, as only the views of a small number of students can be explored with such an approach. For example, Sumakul et al.’s (2020) and Terblanche et al.’s (2022) studies based on semi-structured interviews involved eight and 20 students respectively. In contrast, surveys are more effective for reaching a large population of respondents from different geographical locations, as shown in previous studies such as Bisdas et al. (2021), Dahmash et al. (2020), and Gong et al. (2019).

Although there has been a considerable amount of research into AI in general, as shown in the review in this section, there is currently a lack of investigation into how students perceive GenAI. In view of the unprecedented interest in GenAI at present, there is a need to examine university students’ attitudes towards GenAI and their experience of using it in order to gain insights into how it can be integrated into higher education to enhance teaching and learning.
Methodology
In this study, we used a survey design to collect data from university students in Hong Kong, exploring their use and perceptions of GenAI in teaching and learning. The survey was administered via an online questionnaire consisting of both closed-ended and open-ended questions in order to reach a large population of respondents. The initial questionnaire was developed by drawing upon similar studies and existing questionnaires on teachers’ and students’ perceptions of educational technologies in higher education. To ensure the relevance and clarity of the questionnaire items, pilot studies were conducted prior to formal data collection, and the questionnaire was modified based on feedback from the pilot study. The final version of the instrument comprises a pool of 26 items employing a 5-point Likert scale ranging from “Strongly agree” to “Strongly disagree,” as well as 3 open-ended questions to gather additional insights and perspectives from the respondents. Topics covered in the survey encompassed students’ knowledge of GenAI technologies like ChatGPT, the incorporation of AI technologies in higher education, potential challenges related to AI technologies, and the influence of AI on teaching and learning.

Data were gathered through an online survey targeting students from all post-secondary educational institutions to ensure that the results represented the needs and values of all participants. A convenience sampling method was employed to select respondents based on their availability and willingness to partake in the study. Participants were recruited through an online platform and given an informed consent form before completing the survey. Participation was completely voluntary, and the responses were anonymous.
A total of 399 undergraduate and postgraduate students from various disciplines at six universities in Hong Kong completed the survey. Descriptive analysis was used to analyze the survey data, and a thematic analysis approach was applied to examine the responses to the open-ended questions in the survey. As the total number of responses was manageable (n = 387), two coders manually generated codes. After reading the entire dataset, each coder was assigned the same subset of 50 responses to identify potential themes. In cases where the coders disagreed, they discussed the discrepancies and reached an agreement. Finally, a codebook was created based on consensus and used to code the remaining responses.
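The descriptive analysis of Likert items described above can be sketched in a few lines of stdlib Python; the item names and responses below are hypothetical stand-ins, not the study’s data.

```python
import statistics

# Hypothetical 5-point Likert responses (1 = Strongly disagree, 5 = Strongly agree)
# for two survey items; the actual study analysed 26 such items from 399 students.
responses = {
    "limitations_awareness": [5, 4, 4, 3, 5, 4, 4, 5, 3, 4],
    "over_reliance_concern": [3, 2, 4, 3, 2, 3, 4, 2, 3, 3],
}

# Per-item mean and sample standard deviation, as reported in the results tables.
summary = {
    item: (round(statistics.mean(vals), 2), round(statistics.stdev(vals), 2))
    for item, vals in responses.items()
}

for item, (m, sd) in summary.items():
    print(f"{item}: Mean = {m}, SD = {sd}")
```

This mirrors the Mean/SD columns of Tables 2 to 4; thematic analysis of the open-ended responses, by contrast, was done manually by two coders and is not reducible to a computation like this.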
Results
Demographic information
Participants in this study were from ten faculties (Architecture, Arts, Business, Dentistry, Education, Engineering, Law, Medicine, Science, and Social Sciences) of six universities in Hong Kong, comprising 204 males (51.1%) and 195 females (48.9%). There were 177 undergraduate students (44.4%) and 222 postgraduate students (55.6%). Over half of them (55.4%, n = 221) were enrolled in STEM fields, mainly from the Faculty of Engineering (33.1%) and the Faculty of Science (14.5%), while non-STEM students primarily majored in Arts (14.8%, n = 59), Business (13.3%, n = 53), and Education (7.5%, n = 30). Additionally, 66.7% of participants reported having used GenAI technologies in a general context (not specifically for teaching and learning) at least once. Specifically, 21.8% reported rarely using them, 29.1% sometimes, 9.8% often, and 6.0% always. Table 1 shows the demographic information.
Knowledge ofgenerative AI technologies
As illustrated in Table 2, participants had a generally good understanding of GenAI
technologies, with mean scores ranging from 3.89 to 4.15. Specifically, students had
the highest mean score for the statement “I understand generative AI technologies
like ChatGPT have limitations in their ability to handle complex tasks” (Mean = 4.15,
SD = 0.82) and the lowest mean score for the emotional intelligence and empathy
considerations(Mean = 3.89, SD = 0.97), indicating that while they generally understand
GenAI technologies has limitations, they may not be fully aware of the potential risks
arise from the lack of emotional intelligence and empathy.
Moreover, the data showed a moderate positive correlation between their knowl-
edge of GenAI technologies and frequency of use(r = 0.1, p < 0.05). Specifically, regard-
ing their agreement on if GenAI technologies like ChatGPT may generate factually
inaccurate output, students who never or rarely use GenAI technologies (Mean = 3.99,
SD = 0.847) were significantly different (t = 2.695, p < 0.01) from students who have used
them at least sometimes (Mean = 4.22 SD = 0.829).
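The kinds of tests reported in this section, a Pearson correlation and a two-sample t-test (shown here in its Welch, unequal-variances form), can be sketched with stdlib Python; the helper functions and all data below are illustrative assumptions, not the authors’ code or dataset.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def welch_t(xs, ys):
    # Welch's two-sample t statistic (does not assume equal variances).
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical agreement scores (1-5) that GenAI output can be factually
# inaccurate, split by usage group as in the comparison reported above.
never_rarely = [4, 4, 3, 4, 5, 3, 4, 4]
at_least_sometimes = [5, 4, 5, 4, 5, 5, 4, 5]
t = welch_t(at_least_sometimes, never_rarely)

# Hypothetical per-student (knowledge score, usage frequency) pairs.
knowledge = [3.5, 4.0, 4.2, 3.8, 4.5, 3.9]
frequency = [1, 2, 3, 2, 4, 2]
r = pearson_r(knowledge, frequency)

print(f"t = {t:.2f}, r = {r:.2f}")
```

In practice a library routine (e.g., SciPy’s t-test and correlation functions) would also return the p-values quoted in the text; the sketch only reproduces the test statistics themselves.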
Table 1 Demographic information

Characteristic                                               n      %
Sex
  Male                                                     204   51.1
  Female                                                   195   48.9
Academic level
  Undergraduate                                            177   44.4
  Postgraduate                                             222   55.6
Major
  STEM                                                     221   55.4
  Non-STEM                                                 173   43.4
Have you ever used generative AI technologies like ChatGPT?
  Never                                                    133   33.3
  Rarely                                                    87   21.8
  Sometimes                                                116   29.1
  Often                                                     39    9.8
  Always                                                    24    6.0
Table 2 Knowledge of generative AI technologies

- I understand generative AI technologies like ChatGPT have limitations in their ability to handle complex tasks (Mean = 4.15, SD = 0.82)
- I understand generative AI technologies like ChatGPT can generate output that is factually inaccurate (Mean = 4.10, SD = 0.85)
- I understand generative AI technologies like ChatGPT can generate output that is out of context or inappropriate (Mean = 4.03, SD = 0.83)
- I understand generative AI technologies like ChatGPT can exhibit biases and unfairness in their output (Mean = 3.93, SD = 0.92)
- I understand generative AI technologies like ChatGPT may rely too heavily on statistics, which can limit their usefulness in certain contexts (Mean = 3.93, SD = 0.93)
- I understand generative AI technologies like ChatGPT have limited emotional intelligence and empathy, which can lead to output that is insensitive or inappropriate (Mean = 3.89, SD = 0.97)
Willingness touse generative AI technologies
Overall, the findings suggest that students have a positive attitude toward GenAI technologies. They would like to integrate GenAI technologies like ChatGPT into their learning practices (Mean = 3.85, SD = 1.02) as well as their future careers (Mean = 4.05, SD = 0.96). Specifically, students highly value the perceived usefulness of these technologies in providing unique insights (Mean = 3.74, SD = 1.08) and personalized feedback (Mean = 3.61, SD = 1.06). Additionally, they find these technologies user-friendly, as they are available 24/7 (Mean = 4.12, SD = 0.83) and offer anonymous support services (Mean = 3.77, SD = 0.99).

Moreover, the correlation analysis shows that students’ willingness to use GenAI technologies is positively correlated with both knowledge of GenAI (r = 0.189, p < 0.001) and frequency of use (r = 0.326, p < 0.001), indicating that students who are more knowledgeable about these technologies and use them more frequently are more likely to use them in the future (Tables 3, 4).
Concerns aboutgenerative AI technologies
In contrast to their willingness, the descriptive statistics show that students expressed only moderate agreement with statements of concern about GenAI. They gave the lowest rating to the statement that people will become over-reliant on GenAI technologies (Mean = 2.89, SD = 1.13), and the highest rating to the statement that these technologies could affect the value of university education (Mean = 3.18, SD = 1.16).
Table 3 Willingness to use generative AI technologies

- I envision integrating generative AI technologies like ChatGPT into my teaching and learning practices in the future (Mean = 3.85, SD = 1.02)
- Students must learn how to use generative AI technologies well for their careers (Mean = 4.05, SD = 0.96)
- I believe generative AI technologies such as ChatGPT can improve my digital competence (Mean = 3.70, SD = 0.96)
- I believe generative AI technologies such as ChatGPT can help me save time (Mean = 4.20, SD = 0.82)
- I believe AI technologies such as ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself (Mean = 3.74, SD = 1.08)
- I think AI technologies such as ChatGPT can provide me with personalized and immediate feedback and suggestions for my assignments (Mean = 3.61, SD = 1.06)
- I think AI technologies such as ChatGPT is a great tool as it is available 24/7 (Mean = 4.12, SD = 0.83)
- I think AI technologies such as ChatGPT is a great tool for student support services due to anonymity (Mean = 3.77, SD = 0.99)
Table 4 Concerns about generative AI technologies

- Using generative AI technologies such as ChatGPT to complete assignments undermines the value of university education (Mean = 3.15, SD = 1.17)
- Generative AI technologies such as ChatGPT will limit my opportunities to interact with others and socialize while completing coursework (Mean = 3.06, SD = 1.20)
- Generative AI technologies such as ChatGPT will hinder my development of generic or transferable skills such as teamwork, problem-solving, and leadership skills (Mean = 3.10, SD = 1.23)
- I can become over-reliant on generative AI technologies (Mean = 2.85, SD = 1.13)
Interestingly, there were significant differences in concerns between students who never or rarely used these technologies and the other participants (t = 3.873, p < 0.01). However, no significant correlation was found between students’ concerns and their knowledge of GenAI technologies (r = 0.096, p > 0.05).
The benets andchallenges forstudents’ willingness andconcerns
What are thereasons behindstudents’ willingness toutilise generative AI technologies?
Consistent with the findings from the quantitative data, most participants perceived
GenAI as a valuable tool with numerous benefits and were willing to work with it, pri-
marily on learning, writing and research purposes:
1. Personalized and immediate learning support

When students struggle with assignments, GenAI can act as a virtual tutor, providing personalized learning support and answering their questions immediately. A student from the Faculty of Engineering considered AI “a top student” in their class, because “When I have doubt and couldn’t find other people to help me out, ChatGPT seems like a good option.” Besides immediate answers, customized recommendations and feedback were also valued by students. As one remarked, “It would be useful if ChatGPT could help me find the most effective solution when I am checking my finished homework. This way of using it would help me improve my depth of thinking and understanding.” Feedback on submitted assignments is essential for students’ learning, but it also puts a lot of pressure on teachers, especially with a large number of students. In this case, GenAI may be a solution.

Moreover, AI can also provide learning resources tailored to students’ specific needs. For example, a student majoring in English proposed an innovative learning method, using ChatGPT to learn a second language: “ChatGPT can generate short texts based on the words entered by the user to help students memorize the words.” Some students from the Faculty of Education also assumed that AI can assist them in future teaching, e.g., “I believe that the use of ChatGPT will reduce teachers’ workload for answering questions. I may also use it to generate some lesson plans.” Since GenAI was considered to “improve students’ motivation” and “help students learn better on their own”, it may potentially revolutionize traditional teaching and learning methods in the future.
2. Writing and brainstorming support

GenAI technologies such as ChatGPT can also be used as writing assistants. Sometimes, students find it difficult to generate ideas or find inspiration. In such cases, a participant suggested, “It’ll be convenient to ask ChatGPT some general questions and even get inspired by it.” By inputting a question related to the writing topic, students can use the AI output as a starting point to develop and expand on their ideas. In addition, this virtual assistant is equipped to provide technical support; for example, “it can help with formatting and information retrieval” or “help gather citations”, which improves efficiency.
Furthermore, after writing, students can also use GenAI to enhance their writing skills. As one remarked, “I would use it to help improve my writing (grammar, paraphrasing…), consult some questions or let it give some feedback on my writing.” Especially for non-native English-speaking students who struggle with writing, it can be particularly useful if AI can “help polish articles” and provide personalized feedback on their written texts.
3. Research and analysis support
e role of GenAI technologies in research has also caught the attention of students.
In terms of its ability to acquire, compile, and consolidate information, some participants suggested it can "facilitate literature searching," "summarise readings," and even "generate hypotheses based on data analysis." With a vast amount of data and
knowledge, AI-powered technologies can help researchers always stay up-to-date
with the latest research trends. Moreover, it also contributes to data collection and
analysis. A student noted, “It saves resources in data collection and initial analysis.
We should ride on the initial insights to build our own insights." Since GenAI technol-
ogies are capable of rapidly and effectively processing large amounts of data, students
can directly work on the basis of the preliminary analysis results.
4. Visual and audio multi-media support
In addition to the above-mentioned uses, participants also used GenAI technolo-
gies for creating artworks and handling repetitive tasks. With advances in com-
puter vision, AI-generated artworks have particularly gained attention from STEM
students. A student from the Faculty of Science mentioned, "I mainly played around
with DALL-E, stable diffusion and other AI art technologies, which generate images
based on a prompt.” Similarly, an engineering student “used text-to-image generation
AI like stable diffusion at home to create artwork.” Furthermore, AI technologies can
facilitate "the production of multi-media, including slides, audios, and videos." As a
content creator, "when we have no clue how to visualize stuff, it can offer samples and
insights."
5. Administrative support
Concerning "repetitive or non-creative" tasks, some participants believed that AI would perform well. As one commented, "tedious administrative work will be handled by AI
efficiently.” By accelerating routine repetitive tasks, AI may leave more time for stu-
dents to focus on their studies and research.
What are thereasons behindstudents’ concerns orlack ofconcerns regardinggenerative AI
technologies?
In alignment with the quantitative results, the qualitative data similarly revealed dif-
ferent concerns regarding challenges about GenAI. Some participants were optimistic
about AI's integration in the future. The reasons for this optimism include the willing-
ness mentioned earlier, as well as the belief that GenAI is part of the evolution and trends
of technology. One student stated, “It follows a general revolution of technology, similar
to the public use of computers 40 years ago. I'm very much looking forward to the future
of how such technology can reshape the world." They suggested that as new technologies
emerge, it is better to "positively embrace it" rather than avoid it.
Another reason behind the optimism was the assumption that humans would still maintain control and oversight over GenAI. A participant remarked, "I am not that concerned, as it would lead humans to smartly utilize such AI tools to complete their
tasks in an efficient manner rather than simply being replaced by such tools.” Another
postgraduate student from the Faculty of Arts emphasized that AI is not a replacement
for human skills and expertise: “to my best of knowledge, I feel ChatGPT has not yet had
the creativity and imagination as human beings, nor can it create a thesis for postgradu-
ate students.” At least for now, they believed humans would continue to be in the loop
and have oversight over the GenAI technologies.
However, more than half of the participants still had concerns about the challenges of integrating GenAI technologies, mainly regarding the reliability of the technology itself and its impact:
1. Challenges concerning accuracy and transparency
Currently, GenAI can promptly provide fluent and human-sounding responses, but
its accuracy cannot always be guaranteed. As one student pointed out, "We can-
not predict or accurately verify the accuracy or validity of AI generated information.
Some people may be misled by false information.” Transparency is another significant
concern. For a majority of users, the AI system is complex and opaque, which makes
it difficult to understand how AI comes up with its decisions. “It is always dangerous
to use things you cannot understand," a student noted. As AI-driven conversations
become increasingly popular, remaining a “black box” may become an obstacle to
public trust.
2. Challenges concerning privacy and ethical issues
The use of GenAI also raised privacy and ethical concerns, which were mostly mentioned by students majoring in arts and social science. They were worried that AI would collect personal information from their messages. As a social science student put forward, "AI technologies are too strong so that they can obtain our private information easily." Since these messages will be used to further improve the system, if they are not properly protected, it "can pose privacy and security risks."
Ethically, the plagiarism concern was mentioned numerous times. Plagiarism has long been a critical issue in academia. However, with the rapid development of GenAI technologies, it has become increasingly difficult to identify plagiarized information.
As an art student remarked, “I want to know whether I am dealing with an AI bot or
AI-generated content. Right now, it is somewhat easy to detect, but as the technology
improves, it may not be so easy”.
3. Challenges concerning holistic competencies
Regarding its impact on individuals and personal development, one of the main
issues is over-reliance on AI, which may hinder people’s growth, skills, and intel-
lectual development over time. As one participant commented, “this may lead to a
decrease in critical thinking and make decisions only based on the information that
AI provides to them.” In addition to critical thinking, a student also noted its negative
impact on creativity, “some people may rely too much on AI technologies to generate
ideas causing them to lose the capacity or willingness to think by themselves."
4. Challenges concerning career prospects
Regarding its impact on society as a whole, GenAI also carries risks and drawbacks.
The most frequently mentioned concern is job replacement. As GenAI is transforming the workplace, some jobs that students are preparing for may disappear. A computer science student expressed his concern: "I will probably lose my job in the future due to the advent of ChatGPT." Similarly, a student who majored in social science also mentioned, "AI may replace the job that I'm interested in (e.g., GIS analyst)." Consequently, employers may also raise their recruitment requirements. This development will pose a test for future graduates, since "those who fall behind on this might have difficulty finding employment or catching up."
5. Challenges concerning human values
Another mentioned societal risk relates to the value system. Some participants were worried that "AI could misalign with our human values and becomes a danger to us." For example, it may contribute to social injustice and inequality, as some participants noted: "it may widen the gap between the rich and the poor" and "also be unfair to those students who don't use it." Furthermore, in academic institutions and education, some were concerned that the widespread use of AI might also affect the student–teacher relationship, since students may be "disappointed and lose respect for teachers."
6. Challenges concerning uncertain policies
Last but not least, students also expressed worries regarding the vacuum of institutional policies on the use of GenAI. Since the development of the technology has outpaced regulatory measures, they were concerned about potential governance risks associated with GenAI. As a student noted, "I am cautious. There should be implementation strategies & plans to navigate with these technologies." Uncertain regulations could potentially result in the misuse or unintended consequences of GenAI, which may pose risks to students and society. Even students who acknowledged the positive effects of GenAI believed that a policy is currently necessary. One student pointed out, "A well-balanced usage guideline needs to be in place so that the benefits of the tech can be leveraged." Without institutional guidance, students may feel at a loss for how to appropriately use GenAI in universities.
Discussion
The study of student perceptions of GenAI, such as ChatGPT, in higher education
reveals a complex and nuanced picture of both enthusiasm and concern. The findings
of this study provide an insightful understanding of university students' perceptions. It
is evident that students are generally familiar with GenAI technologies, and their level
of familiarity is influenced by factors such as knowledge about GenAI and frequency
of use. e results also highlight the potential benefits and risks associated with using
GenAI in teaching and learning, which are perceived differently among students based
on their experiences with GenAI technologies. Overall, the participants showed a good
understanding of the capabilities and limitations of GenAI technologies, as well as a
positive attitude towards using these technologies in their learning, research, and future
careers. However, there were also concerns about the reliability, privacy, ethical issues,
and uncertain policies associated with GenAI, as well as its potential impact on personal
development, career prospects, and societal values. Table 5 shows the benefits and con-
cerns of employing GenAI technologies.
The study revealed that students' knowledge of GenAI technologies and frequency of use are positively correlated. This suggests that exposure to these technologies and hands-on experience may help enhance students' understanding and acceptance of GenAI. Also, despite the relative novelty of GenAI for public use, students appear to have knowledge of the technologies and understand their benefits and risks quite well.
Both quantitative and qualitative findings also show that students are generally willing
to use GenAI for their studies and future work, but they have high expectations. For
example, the study found that students perceive GenAI technologies as beneficial for
providing personalized learning support as they expect learning resources tailored to
their needs 24/7. In terms of writing and brainstorming support, students want feedback
to improve writing skills, beyond just grammar checking and brainstorming, similar to
the findings in Atlas's (2023) study. For research and analysis support, students envision
GenAI capabilities to not only facilitate literature searching and summarizing readings
but also to generate hypotheses based on data analysis, enabling them to stay up-to-date
with the latest research trends and build upon initial insights for their own work (Berg, 2023), which would not be expected from previous educational technologies. These findings indicate the potential of GenAI to revolutionize traditional teaching and learning methods by offering tailored assistance, addressing diverse learning needs, promoting efficiency, and fostering self-directed learning.
Despite the positive outlook, the study also reveals challenges concerning GenAI tech-
nologies, with students expressing reservations about over-reliance on the technology,
its potential impact on the value of university education, and issues related to accuracy,
transparency, privacy, and ethics. Students expressed concerns about accuracy and ethical issues, particularly plagiarism, as they face difficulty in determining the originality of work generated by GenAI tools (Peres et al., 2023), which are unable to assess validity or identify falsehoods, thus necessitating human oversight (Lubowitz, 2023). Interestingly,
there is no significant correlation between students’ concerns and their knowledge about
GenAI technologies, suggesting that even those with a good understanding of the tech-
nology may still have reservations, similar to the findings of Dahmash et al. (2020). Additionally, students were apprehensive that GenAI may hinder critical thinking and creativity, and about its impact on job prospects (Ghotbi et al., 2022; Gong et al., 2019; Park et al., 2020) and human values (Gillissen et al., 2022; Jha et al., 2022).
Table 5 Benefits and challenges of generative AI technologies from student perceptions

Student perceptions of GenAI technologies

Benefits related to:
1. Personalized and immediate learning support
2. Writing and brainstorming support
3. Research and analysis support
4. Visual and audio multi-media support
5. Administrative support

Challenges concerning:
1. Accuracy and transparency
2. Privacy and ethical issues
3. Holistic competencies
4. Career prospects
5. Human values
6. Uncertain policies
User acceptance is key to the successful uptake of technological innovations, and stu-
dents are the primary users of educational technologies. By understanding how students
perceive generative AI technologies, educators and policymakers can better understand
how best to integrate these technologies into higher education to enhance teaching and
learning outcomes.
As mentioned, the reasons behind students’ willingness and concerns about GenAI
technologies are multifaceted. On one hand, students are optimistic about the future
integration of these technologies into their academic and professional lives, considering
GenAI as part of the ongoing technological evolution. On the other hand, students have reservations about the reliability of the technology and its broader impact.
Conclusion
In this study, student perceptions of GenAI technologies were investigated. Accord-
ing to Biggs (1999, 2011), student perceptions of their learning environment, abilities,
and teaching strategies significantly influence their learning approach and outcomes,
with positive perceptions leading to a deep learning approach and negative perceptions
resulting in a surface approach. Thus, it is vital to understand student perceptions in the
context of GenAI technologies. By taking students’ perceptions into account, educators
and policymakers can better tailor GenAI technologies to address students’ needs and
concerns while promoting effective learning outcomes.
Understanding students' willingness and concerns regarding the use of GenAI
tools can help educators to better integrate these technologies into the learning process,
ensuring they complement and enhance traditional teaching methods. This integration can lead to improved learning outcomes, as students will be more likely to adopt
a deep approach to learning when they perceive GenAI as a valuable and supportive
resource. Students’ perceptions can provide insights into their level of AI literacy, which
is essential for responsible use of GenAI technologies. By identifying gaps in students’
understanding, educators can develop targeted interventions to improve AI literacy and
prepare students for future employment in an increasingly AI-driven world. As the findings show, students highlight potential risks and concerns, so educators can create guidelines and safeguards that ensure responsible and ethical use of GenAI technologies.
Implications
The diverse range of opinions among the participants highlights some implications that
must be considered to ensure the successful integration of GenAI into higher education.
According to the 3P model proposed by Biggs (2011), three key elements that can influ-
ence learning outcomes include student-dependent factors, teaching-dependent fac-
tors, and interactive impacts from the whole system. With this framework in mind, it
is important for students to develop their AI literacy, which includes understanding the
basics of Generative AI, how it works, its advantages, and disadvantages, as well as differ-
ent uses in higher education. Meanwhile, when using Generative AI, they should ensure
that their use aligns with ethical principles and does not cause any harm to society.
Additionally, as some students expressed concerns about the development of their holistic competencies, teachers can play a vital role in developing students' higher-order skills, perhaps with the help of GenAI, as mentioned in Chan and Tsi (2023). For example, teachers can
encourage students to critically evaluate AI-generated content and distinguish between
reliable and unreliable sources to develop their critical thinking skills. Alternatively, Generative AI
can be used to spark students’ creativity by generating diverse and unpredictable ideas
and prompts. Since holistic competencies may become the most in-demand attributes
for today’s work environment, a focus on competency development in instructional
designs could also relieve students’ anxiety concerning career prospects.
In the foreseeable future, as generative AI may potentially be widely used in formal
academic settings, institutions should also develop policies and provide formal guidance
on the use of Generative AI. Chan (2023b) suggests an AI Ecological Education Policy
Framework to tackle the various implications of AI integration in university teaching
and learning with three dimensions: Pedagogical, Governance, and Operational. Firstly,
institutions should consider providing educational resources and workshops to familiarize students with GenAI technologies and their ethical and societal implications. This would enable students to make informed decisions when using these technologies in
their academic endeavors.
Secondly, the development and implementation of GenAI technologies should prior-
itize transparency, accuracy, and privacy to foster trust and mitigate potential risks. For
example, technical staff could work on explainable AI models that provide clear explana-
tions of their decision-making processes. In addition, robust data protection policies and
practices should be in place to safeguard users’ privacy.
Lastly, higher education institutions should consider rethinking their policies, curricula,
and teaching approaches to better prepare students for a future where GenAI technolo-
gies are prevalent. This may involve fostering interdisciplinary learning, emphasizing
critical thinking and creativity, and cultivating digital literacy and AI ethics education.
In conclusion, this study sheds light on the diverse perspectives of university students
towards GenAI technologies and underscores the need for a balanced approach to inte-
grating these technologies into higher education. By addressing students’ concerns and
maximizing the potential benefits, higher education institutions can harness the power
of GenAI to enhance teaching and learning outcomes while also preparing students for
the future workforce in the AI era.
Limitations andfuture research
This study has several limitations that should be considered when interpreting the find-
ings. First, the sample size was relatively small, which may limit the generalizability of
the results to the broader population of students in Hong Kong. The study's reliance on
self-reported data may also introduce potential biases, as participants could have been
influenced by social desirability or inaccurate recall of their experiences with GenAI
technologies. Furthermore, the cross-sectional design of the study does not allow for
an examination of changes in students’ perceptions over time as their exposure to and
experiences with GenAI technologies evolve. Lastly, since GenAI has not been fully used in formal academic settings, students have had limited exposure to it. This study did not
explore how students were exposed to AI and the actual impact of GenAI on students’
learning outcomes, which would be necessary to provide a more comprehensive under-
standing of the role of these technologies in education.
Page 16 of 18
Chanand Hu Int J Educ Technol High Educ (2023) 20:43
Future research should address these limitations by employing larger, more diverse
samples; using longitudinal designs to track changes in students’ perceptions of genera-
tive AI over time and explore how these technologies are integrated into higher edu-
cation; and examining the relationship between GenAI use and learning outcomes.
Additionally, future research could focus on specific groups of students from different disciplines, academic backgrounds, age groups, or cultural contexts to examine AI literacy.
Overall, there is a need for further research to better understand how best to integrate
generative AI into higher education while minimizing potential risks related to privacy
and security. By exploring these areas, we can ensure that these technologies are used
responsibly and effectively in teaching and learning contexts.
Acknowledgements
The authors wish to thank the students who participated in the survey.
Author contributions
CKYC: Conceptualization, Methodology, Validation, Investigation, Resources, Data curation, Writing—original, Writing—
review & editing, Supervision, Project administration. WH: Methodology, Formal analysis, Investigation, Writing—original,
Visualization. All authors read and approved the final manuscript.
Funding
No funding has been received for this study.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable
request.
Declarations
Competing interests
The authors declare that they have no competing interests.
Received: 27 April 2023 Accepted: 23 June 2023
References
Abdelwahab, H. R., Rauf, A., & Chen, D. (2022). Business students' perceptions of Dutch higher education institutions in preparing them for artificial intelligence work environments. Industry and Higher Education, 37(1), 22–34. https://doi.org/10.1177/09504222221087614
Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429. https://doi.org/10.30935/cedtech/13152
Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. https://digitalcommons.uri.edu/cba_facpubs/548
Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. https://doi.org/10.2139/ssrn.4337484
Bailey, D., Southam, A., & Costley, J. (2021). Digital storytelling with chatbots: Mapping L2 participation and perception patterns. Interactive Technology and Smart Education, 18(1), 85–103. https://doi.org/10.1108/ITSE-08-2020-0170
Berg, C. (2023). The case for generative AI in scholarly practice. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4407587
Bhattacharya, K., Bhattacharya, A. S., Bhattacharya, N., Yagnik, V. D., Garg, P., & Kumar, S. (2023). ChatGPT in surgical practice—A new kid on the block. Indian Journal of Surgery. https://doi.org/10.1007/s12262-023-03727-x
Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 18(1), 57–75.
Biggs, J. B. (2011). Teaching for quality learning at university: What the student does. McGraw-Hill Education (UK).
Bisdas, S., Topriceanu, C.-C., Zakrzewska, Z., Irimia, A.-V., Shakallis, L., Subhash, J., Casapu, M.-M., Leon-Rojas, J., Pinto dos Santos, D., Andrews, D. M., Zeicu, C., Bouhuwaish, A. M., Lestari, A. N., Abu-Ismail, L., Sadiq, A. S., Khamees, A., Mohammed, K. M. G., Williams, E., Omran, A. I., … Ebrahim, E. H. (2021). Artificial intelligence in medicine: A multinational multi-center survey on the medical and dental students' perception. Frontiers in Public Health, 9, 795284. https://doi.org/10.3389/fpubh.2021.795284
Biswas, S. (2023). ChatGPT and the future of medical writing. Radiology, 307(2), e223312. https://doi.org/10.1148/radiol.223312
Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers? https://arxiv.org/abs/2305.02878
Chan, C. K. Y., & Tsi, L. H. Y. (2023). The AI revolution in education: Will AI replace or assist teachers in higher education? [Preprint]. arXiv. https://arxiv.org/abs/2305.01185
Chan, C. K. Y., & Zhou, W. (2023). Deconstructing student perceptions of generative AI (GenAI) through an expectancy value theory (EVT)-based instrument [Preprint]. arXiv. https://arxiv.org/abs/2305.01186
Chan, C. K. Y. (2023a). Is AI changing the rules of academic misconduct? An in-depth look at students' perceptions of 'AI-giarism'. https://arxiv.org/abs/2306.03358
Chan, C. K. Y. (2023b). A comprehensive AI policy education framework for university teaching and learning. https://arxiv.org/abs/2305.00280
Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25, 161–182. https://doi.org/10.1007/s10796-022-10291-4
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8
Dahmash, A. B., Alabdulkareem, M., Alfutais, A., Kamel, A. M., Alkholaiwi, F., Alshehri, S., Zahrani, Y. A., & Almoaiqel, M. (2020). Artificial intelligence in radiology: Does it impact medical students preference for radiology as their future career? BJR Open, 2(1), 20200037. https://doi.org/10.1259/bjro.20200037
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Dehouche, N., & Dehouche, K. (2023). What's in a text-to-image prompt: The potential of Stable Diffusion in visual arts education. https://doi.org/10.48550/arXiv.2301.01902
Eggmann, F., Weiger, R., Zitzmann, N. U., & Blatz, M. B. (2023). Implications of large language models such as ChatGPT for dental medicine. Journal of Esthetic and Restorative Dentistry. https://doi.org/10.1111/jerd.13046
Essel, H. B., Vlachopoulos, D., Tachie-Menson, A., Johnson, E. E., & Baah, P. K. (2022). The impact of a virtual teaching assistant (chatbot) on students' learning in Ghanaian higher education. International Journal of Educational Technology in Higher Education, 19, 57. https://doi.org/10.1186/s41239-022-00362-6
Gayed, J. M., Carlon, M. K. J., Oriola, A. M., & Cross, J. S. (2022). Exploring an AI-based writing assistant's impact on English language learners. Computers and Education: Artificial Intelligence, 3, 100055. https://doi.org/10.1016/j.caeai.2022.100055
Gherhes, V., & Obrad, C. (2018). Technical and humanities students' perspectives on the development and sustainability of artificial intelligence (AI). Sustainability, 10(9), 3066. https://doi.org/10.3390/su10093066
Ghotbi, N., Ho, M. T., & Mantello, P. (2022). Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & Society, 37, 283–290. https://doi.org/10.1007/s00146-021-01168-2
Gillissen, A., Kochanek, T., Zupanic, M., & Ehlers, J. (2022). Medical students' perceptions towards digitalization and artificial intelligence: A mixed-methods study. Healthcare, 10(4), 723. https://doi.org/10.3390/healthcare10040723
Gong, B., Nugent, J. P., Guest, W., Parker, W., Chang, P. J., Khosa, F., & Nicolaou, S. (2019). Influence of artificial intelligence on Canadian medical students' preference for radiology specialty: A national survey study. Academic Radiology, 26(4), 566–577. https://doi.org/10.1016/j.acra.2018.10.007
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661
Harrer, S. (2023). Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. eBioMedicine, 90, 104512. https://doi.org/10.1016/j.ebiom.2023.104512
Hew, K. F., Huang, W., Du, J., & Jia, C. (2023). Using chatbots to support student goal setting and social presence in fully online activities: Learner engagement and perceptions. Journal of Computing in Higher Education, 35, 40–68. https://doi.org/10.1007/s12528-022-09338-x
Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base—analyst note. Reuters. Retrieved from https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
Jeffrey, T. (2020). Understanding college student perceptions of artificial intelligence. Systemics, Cybernetics and Informatics, 18(2), 8–13. https://www.iiisci.org/journal/sci/FullText.asp?var=&id=HB785NN20
Jha, N., Shankar, P. R., Al-Betar, M. A., Mukhia, R., Hada, K., & Palaian, S. (2022). Undergraduate medical students' and interns' knowledge and perception of artificial intelligence in medicine. Advances in Medical Education and Practice, 13, 927–937. https://doi.org/10.2147/AMEP.S368519
Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114
Kitamura, F. C. (2023). ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology, 307(2), e230171. https://doi.org/10.1148/radiol.230171
Kumar, A. H. S. (2023). Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. BEMS Reports, 9(1), 24–30. https://doi.org/10.5530/bems.9.1.5
Landauer, T. K. (2003). Automatic essay assessment. Assessment in Education: Principles, Policy & Practice, 10(3), 295–308. https://doi.org/10.1080/0969594032000148154
Lee, Y.-F., Hwang, G.-J., & Chen, P.-Y. (2022). Impacts of an AI-based chatbot on college students' after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educational Technology Research and Development, 70, 1843–1865. https://doi.org/10.1007/s11423-022-10142-8
Lubowitz, J. H. (2023). ChatGPT, an artificial intelligence chatbot, is impacting medical literature. Arthroscopy, 39(5), 1121–1122. https://doi.org/10.1016/j.arthro.2023.01.015
Maerten, A.-S., & Soydaner, D. (2023). From paintbrush to pixel: A review of deep neural networks in AI-generated art. https://doi.org/10.48550/arXiv.2302.10913
Mizumoto, A., & Eguchi, M. (2023). Exploring the potential of using an AI language model for automated essay scoring. https://doi.org/10.2139/ssrn.4373111
Mokmin, N. A. M., & Ibrahim, N. A. (2021). The evaluation of chatbot as a tool for health literacy education among undergraduate students. Education and Information Technologies, 26, 6033–6049. https://doi.org/10.1007/s10639-021-10542-y
Park, C. J., Yi, P. H., & Siegel, E. L. (2020). Medical student perspectives on the impact of artificial intelligence on the practice of medicine. Current Problems in Diagnostic Radiology, 50(5), 614–619. https://doi.org/10.1067/j.cpradiol.2020.06.011
Peres, R., Shreier, M., Schweidel, D., & Sorescu, A. (2023). On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing. https://doi.org/10.1016/j.ijresmar.2023.03.001
Sit, C., Srinivasan, R., Amlani, A., Muthuswamy, K., Azam, A., Monzon, L., & Poon, D. S. (2020). Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey. Insights into Imaging. https://doi.org/10.1186/s13244-019-0830-7
Sumakul, D. T. Y. G., Hamied, F. A., & Sukyadi, D. (2020). Students' perceptions of the use of AI in a writing class. Advances in Social Science, Education and Humanities Research, 624, 52–57. https://www.atlantis-press.com/article/125970061.pdf
Terblanche, N., Molyn, J., Williams, K., & Maritz, J. (2022). Performance matters: Students' perceptions of artificial intelligence coach adoption factors. Coaching: An International Journal of Theory, Research and Practice, 16(1), 100–114. https://doi.org/10.1080/17521882.2022.2094278
van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614, 224–226. https://doi.org/10.1038/d41586-023-00288-7
Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., & Tate, T. (2023). The affordances and contradictions of AI-generated text for second language writers. https://doi.org/10.2139/ssrn.4404380
Yildiz Durak, H. (2023). Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Education and Information Technologies, 28, 471–488. https://doi.org/10.1007/s10639-022-11149-7
Yüzbaşioğlu, E. (2021). Attitudes and perceptions of dental students towards artificial intelligence. Journal of Dental Education, 85(1), 60–68. https://doi.org/10.1002/jdd.12385
Zhai, X. (2022). ChatGPT user experience: Implications for education. https://doi.org/10.2139/ssrn.4312418
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
... Baidoo-Anu and Owusu Ansah (2023) discuss the cultural implications of AI technologies in educational contexts. Celik et al. (2022) explore the role of AI in promoting sustainability in educational institutions, and Chan and Hu (2023) analyse the influence of AI-driven social media platforms on student engagement. Chen et al. (2023) present findings on AI applications in mental health support for students, highlighting their potential benefits. ...
... Such an understanding is crucial for educators who wish to leverage GenAI effectively in their teaching practices. For instance, knowing how algorithms operate can help educators make informed decisions about which AI tools to adopt and how to integrate them into their instructional designs (Chan & Hu, 2023; Chen et al., 2023). Without this knowledge, educators may inadvertently adopt technologies that do not align with their pedagogical goals or that may even introduce biases into the learning environment (Aydin & Karaarslan, 2023; Celik et al., 2022). ...
... This strand encompasses a variety of studies examining the integration of AI tools in the classroom, with particular emphasis on their capabilities for enhancing teaching and learning processes (Chan & Hu, 2023; Rachha & Seyman, 2023). GenAI tools like ChatGPT can facilitate personalised learning experiences (Baidoo-Anu & Owusu Ansah, 2023), fostering greater engagement and understanding among students. ...
Article
Full-text available
The research landscape surrounding Generative Artificial Intelligence (GenAI) and education is rapidly expanding, characterised by a dynamic array of themes and sub-themes. This paper aims to construct a comprehensive taxonomy that categorises the current literature on the integration of GenAI in educational settings. To do so, a systematic analysis was conducted first, which filtered and selected 30 pieces of literature. Within this literature, 369 phrases were identified, which culminated in the development of 5 overarching themes and 38 sub-themes. These themes within the systematic review ran parallel to a taxonomy that was developed from them, which subsequently revealed a tension between them. Emphasising an interpretivist approach, this research acknowledges the subjective nature of knowledge formation and interpretation, enhancing understanding of the complex interplay between GenAI and educational practices, with a predominant focus on GenAI in higher education. Unlike previous literature reviews, this paper presents a subsequent taxonomy derived from the systematic review, which holds an original narrative: that a critical tension exists between technical discussions of GenAI and the pedagogical realities faced by educators. This taxonomy presents evidence that supports the notion that the fledgling field of 'GenAI and education' research has two developing strands: the technical and the pedagogical. Not only are these two strands of foci emerging within the literature, but there is also a growing disconnect or void between the two. Without addressing this almost 'siloed' growth, conversations about GenAI's role in education risk becoming overly abstract, lacking practical relevance for educators. By illuminating this tension, this research invites further exploration into how educators can navigate the evolving landscape of GenAI in their classrooms.
... At the same time, concerns about recent AI developments, particularly generative AI (genAI), have complicated discussions about how to integrate and benefit from these new digital technologies (e.g., Fassbender, 2025). Issues such as academic integrity, AI dependence and privacy questions (e.g., Chan & Hu, 2023) have distracted the field from critically and systematically engaging with conditions necessary to make the opportunities of these innovations accessible. While many of these conditions, such as engaging in professional learning and ensuring access, are typically understood to be necessary to adopt any new digital technology, AI introduces new considerations for use in education. ...
... The sub-category "Resources" (in 13 studies) also related to efficient processing of tasks that freed up resources. This included (basic) questions (Chan & Hu, 2023; Dai et al., 2023), preparing course exercises (Exintaris et al., 2023), searching for customized materials (Jin et al., 2023), and creating lesson plans (Mohamed, 2023). Specifically, freeing up teachers' time changed the nature of what they could do with students, such as: "ChatGPT created space for more intellectually stimulating and substantial exchanges between the supervisor and student, which propelled ...
... Both subcategories signal issues related to the introduction of AI, ensuring that all students have the opportunity to engage with these tools (Chan & Hu, 2023; Ebadi & Amini, 2022). For instance, a student from a Hong Kong university highlighted that "Ethical dilemma includes ensuring that the technology is not used to discriminate against individuals or groups, and that it does not reinforce bias or stereotypes" (Chan, 2023, p. 15). ...
Preprint
Full-text available
There is significant discussion about the opportunities and issues associated with the use of artificial intelligence in higher education. However, issues such as academic integrity and privacy continue to dominate the conversation. This has limited how well instructors and higher education institutions can identify the conditions necessary to support AI use to benefit all students. The present synthesis of qualitative evidence explored the available evidence from 2019-2024 to consider the opportunities and conditions of AI use in higher education learning. The result is five synthesized findings addressing this issue at both institutional and learning design levels, considering subcategories ranging from access to interactions to future learning. Based on the synthesized findings, the AIMED model is proposed, addressing the conditions necessary to create opportunities for AI-tool use in higher education for all students. Implications and future research are explored.
... It is also important to emphasize a balanced approach, using chatbots as supplementary tools rather than replacements for traditional writing instruction (Ali, 2023). Further research is needed to investigate the long-term impact of chatbots on writing skill development, explore the effectiveness of different chatbot approaches, and address concerns regarding reliability and potential misuse (Chan & Hu, 2023). Ultimately, the goal is to leverage the potential benefits of chatbots while mitigating the risks, ensuring that this technology is used to enhance writing instruction in a positive and meaningful way. ...
... The ability to write effectively is particularly important in academic settings, where students are expected to demonstrate their understanding of complex concepts and communicate their ideas clearly and concisely in written assignments, essays, and research papers (Almalki, 2016). This demand for strong writing skills extends beyond the academic realm, as effective writing is also essential for success in the professional world, where clear and concise communication is crucial for building relationships, conveying information, and achieving goals (Chan & Hu, 2023). ...
... Third, the nature of the intervention, which focused on specific tasks and activities rather than long-term or comprehensive learning support, may not have demonstrated the broader potential of LLMs to aid in sustained skill development. Both Chan and Hu [45] and Stöhr et al. [46] highlight that students' familiarity with AI tools is positively correlated with their willingness to use them and their ability to recognize potential benefits, such as personalized learning and improved efficiency. The moderate increase in students' familiarity with AI and GenAI observed in this study suggests that the intervention did not provide sufficient exposure to foster the level of familiarity necessary for students to fully appreciate the tools' potential in addressing their learning challenges. ...
... Finally, there were some students expressing their skepticism or mixed experiences, particularly due to concerns about potential inaccuracies in LLM-generated solutions. Similar results have been reported in other studies, where students appreciated AI tools for their ability to clarify complex concepts, assist with problem-solving, improve efficiency, and enhance engagement, while also acknowledging challenges such as occasional inaccuracies and limited reliability [23,45,48,49]. ...
Article
Full-text available
This study investigates the integration of large language models (LLMs) alongside computer algebra systems (CAS) in a mathematics laboratory for civil engineering students, examining their combined impact on problem‐solving and inquiry‐driven learning. The intervention was designed using the integrate LLMs alongside CAS (ILAC) approach, which structures the inquiry process into key phases, guiding students through exploration, hypothesis testing, and solution validation. Six structured activities were implemented and assessed using quantitative and qualitative methods. Findings reveal that LLMs enhanced conceptual understanding, clarified methodologies, and assisted with command syntax, while CAS ensured computational accuracy and result validation. Many students critically cross‐verified LLM‐generated results with CAS, though some relied solely on LLMs, highlighting the need for better guidance on tool usage. While LLMs fostered engagement, skepticism remained regarding their ability to address deeper mathematical deficiencies. The intervention led to moderate improvements in students' familiarity with AI tools, though its short duration and the use of general‐purpose LLMs limited perceived usefulness. To maximize educational benefits, future implementations should consider longer interventions, advanced training in prompt engineering, and tailored AI solutions.
... This gap is particularly pronounced in developing nations, where understanding student perceptions and contextual challenges, such as disparities between public and private universities, remains critical for ethical and effective integration (Habibi et al., 2023; Lund & Wang, 2023). While prior studies have explored student attitudes, knowledge, and acceptance of AI tools, limited research examines how perceived usefulness, ease of use, and risk directly shape attitudes and usage behaviors, especially in Bangladesh (Chan & Hu, 2023b; Haglund, 2023; Shoufan, 2023). The number of studies further diminishes when a comparative analysis between public and private universities is considered, especially in the context of a developing country like Bangladesh (Naher et al., 2023a; Niloy et al., 2023). ...
Article
Full-text available
ChatGPT’s capability to provide immediate responses to student queries positions it as a potentially transformative educational tool. Nevertheless, its impact on Bangladeshi university students remains a subject of debate. This cross-sectional study examines ChatGPT usage among Bangladeshi university students and its determinants using the Technology Acceptance Model (TAM). Data from 729 students across five public and four private universities were analyzed via inferential statistics and Structural Equation Modeling (SEM). Results indicate perceived usefulness, ease of use, and perceived risk significantly influence ChatGPT adoption. Private university students’ usage was primarily driven by ease of use and perceived usefulness, while perceived risk and attitude showed no significant impact. In contrast, public university students’ usage was strongly influenced by perceived usefulness and existing knowledge, with perceived risk negatively affecting attitudes. Public university students perceived higher risks and lower ease of use than private peers. SEM highlighted ease of use as the strongest positive predictor in private institutions, while existing knowledge was more influential in public ones. The findings suggest structured training, awareness campaigns, and safety policies could mitigate risks and enhance ethical adoption. Public universities require targeted interventions to address risk perceptions, whereas private institutions benefit from emphasizing ChatGPT’s usability and academic value.
... In order to provide a more thorough understanding of AI's risks, it is important to focus on the trends of research that dealt with risk in relation to AI use. In relation to the first trend, it is found that the majority of articles address both the advantages and the difficulties and/or challenges of this technology (AlAfnan et al., 2023; Alwaqdani, 2024; Bae et al., 2024; Barrot, 2023; Chan & Hu, 2023; Derakhshan & Ghiasvand, 2024; Kasneci et al., 2023; Kohnke et al., 2023; Michel-Villarreal et al., 2023; Teng, 2024). By highlighting these two essential sides, a sophisticated discussion is facilitated, showcasing AI's potential but also recognizing its challenges and limitations. ...
Article
Full-text available
This study explores the potential risks associated with the use of AI in higher education, based on the views of university teachers from the Czech Republic and Iraq. A total of 40 respondents, including 28 females and 12 males aged between 32 and 54, participated in the study. All participants were university teachers specializing in EFL, psychology, ICT, and foreign languages, and they reported using the internet and AI daily or several times a week for their professional activities. The qualitative research, grounded in a phenomenological approach, involved guided interviews that were recorded, transcribed, and analyzed using LIWC-22 software to identify common themes and sentiments. The findings reveal significant concerns about privacy risks, academic integrity, and the validity of AI-generated data. Respondents expressed fears over data misuse, unauthorized access, and the potential for AI to facilitate plagiarism and undermine critical thinking. While AI is seen as beneficial for personalized learning and language training, there are apprehensions about its impact on the role of teachers and the potential for job displacement. The study also highlights the limitations of AI in replicating human interaction and addressing students' emotional and behavioral engagement. Overall, the sentiment among respondents is predominantly negative, with calls for ethical guidelines, critical evaluation, and the preservation of human elements in education. The research underscores the need for further studies involving larger and more diverse samples, including students, to comprehensively understand the implications of AI in education.
... The integration of Artificial Intelligence (AI) into teaching and learning has become a significant area of interest in higher education among scholars in the last decade and it is significantly reshaping educational paradigms, particularly in second language academic writing (Crompton & Burke, 2023;Huang et al., 2023;Malik et al., 2023;De La Vall & Araya, 2023;Guo & Wang, 2024). This interest is motivated by the potential benefits of AI powered writing tools within the language classroom, as highlighted by a growing number of studies exploring how AI influences students' writing practices (Barrot, 2023;Chan & Hu, 2023;Malik et al., 2023). ...
Article
The integration of Artificial Intelligence (AI) such as ChatGPT into teaching and learning has become a significant area of interest in higher education as it has significantly reshaped students' writing practices. Studies suggest that it has the potential to dismantle language barriers and foster multilingualism and linguistic inclusivity in multilingual writing classrooms (Malik et al., 2023). However, while the use of AI has been proven to be a significant tool in enhancing students' communicative competence (Hasanein & Sobaih, 2023), particularly for students from non-native English-speaking backgrounds, there are widespread concerns among the academic community that AI tools may negatively impact academic integrity, creativity, and critical thinking skills in academic writing (Neumann et al., 2023). To address these concerns, instead of discouraging the use of AI, we conducted a 12-week intervention academic writing course in which we integrated ChatGPT. The purpose of the study was to explore its use as a writing support tool as well as a strategy to foster students' responsible use of the tool. At the end of the course, focus group interviews were carried out with students to explore their experiences with the use of ChatGPT as a writing support tool. The findings of the focus group interviews gave us pedagogical insights into how educators can leverage the affordances of AI in the teaching of academic writing and how AI can be used to enhance equality and level writing expectations for students who come from non-native English-speaking backgrounds.
Article
Recent advancements in Artificial Intelligence (AI), particularly AI chatbots like ChatGPT, have sparked significant discussions in higher education regarding their impact on teaching and academic integrity. This study examines the integration of ChatGPT into a philosophy of education course for first‐year Education students at a Swedish university. Using a sociocultural approach and the concept of mediating devices, the study investigates how ChatGPT can be effectively integrated into higher education to support academic literacy and the challenges and opportunities it presents. The empirical material was collected through semi‐structured focus group interviews, observational notes, and students' assignments. The findings suggest that when used intentionally with clear pedagogical goals, ChatGPT can support academic literacy development. Its ability to generate coherent text, summarise complex concepts, and provide initial structures for academic writing proved particularly beneficial for first‐year students. However, limitations such as inaccuracies and lack of nuance provided opportunities for students to engage critically with the generated material and develop strategies to address these shortcomings. The findings highlight that both the teacher's guidance and the social aspects of learning, such as group discussions and collaborative work, were essential in leveraging ChatGPT's strengths and addressing its limitations. Therefore, this paper advocates for a balanced approach that emphasises pedagogical grounding and responsible AI integration to ensure the ethical and effective use of AI tools. Understanding the interplay between ChatGPT's integration and the social dynamics of teaching is crucial for assessing its impact on students' learning experiences.
Article
Integrating Artificial Intelligence (AI) in university education, especially the ChatGPT model, has aroused growing interest in the academic community. However, a comprehensive understanding of the impact and effectiveness of this technology in the educational process is needed. In this context, this article aimed to conduct a systematic review of the impact of ChatGPT Generative Artificial Intelligence on university teaching from 2019 to 2024. The PRISMA protocol was adopted to search databases such as Scopus and SciELO, and a selection of relevant studies was obtained that included 18 original articles that met the inclusion and exclusion criteria. Four main categories related to ChatGPT in university teaching were identified: the impact of ChatGPT, the potential of ChatGPT, the perception of students and teachers about using ChatGPT, and the effectiveness of ChatGPT. It was concluded that there is a growing interest in studying the impact of the ChatGPT model in higher education, which emphasizes the relevance and need for further research to understand its impact and areas that require further attention and development.
Article
Full-text available
This study examines the relationship between student perceptions and their intention to use generative artificial intelligence (GenAI) in higher education. With a sample of 405 students participating in the study, their knowledge, perceived value, and perceived cost of using the technology were measured by an Expectancy-Value Theory (EVT) instrument. The scales were first validated and the correlations between the different components were subsequently estimated. The results indicate a strong positive correlation between perceived value and intention to use generative AI, and a weak negative correlation between perceived cost and intention to use. As we continue to explore the implications of GenAI in education and other domains, it is crucial to carefully consider the potential long-term consequences and the ethical dilemmas that may arise from widespread adoption.
Article
Full-text available
This study aimed to explore the experiences, perceptions, knowledge, concerns, and intentions of Generation Z (Gen Z) students with Generation X (Gen X) and Generation Y (Gen Y) teachers regarding the use of generative AI (GenAI) in higher education. A sample of students and teachers were recruited to investigate the above using a survey consisting of both open and closed questions. The findings showed that Gen Z participants were generally optimistic about the potential benefits of GenAI, including enhanced productivity, efficiency, and personalized learning, and expressed intentions to use GenAI for various educational purposes. Gen X and Gen Y teachers acknowledged the potential benefits of GenAI but expressed heightened concerns about overreliance, ethical and pedagogical implications, emphasizing the need for proper guidelines and policies to ensure responsible use of the technology. The study highlighted the importance of combining technology with traditional teaching methods to provide a more effective learning experience. Implications of the findings include the need to develop evidence-based guidelines and policies for GenAI integration, foster critical thinking and digital literacy skills among students, and promote responsible use of GenAI technologies in higher education.
Article
Full-text available
Chatbot usage is evolving rapidly in various fields, including higher education. The present study's purpose is to discuss the effect of a virtual teaching assistant (chatbot) that automatically responds to a student's question. A pretest–posttest design was implemented, with the 68 participating undergraduate students being randomly allocated to scenarios representing a 2 × 2 design (experimental and control cohorts). Data was garnered utilizing an academic achievement test and focus groups, which allowed more in-depth analysis of the students' experience with the chatbot. The results of the study demonstrated that the students who interacted with the chatbot performed better academically compared with those who interacted with the course instructor. In addition, the focus group data garnered from the experimental cohort illustrated that they were confident about the chatbot's integration into the course. The present study essentially focused on the learning of the experimental cohort and their view regarding interaction with the chatbot. This study contributes to the emerging artificial intelligence (AI) chatbot literature on improving student academic performance. To our knowledge, this is the first study in Ghana to integrate a chatbot to engage undergraduate students. This study provides critical information on the use and development of virtual teaching assistants using a zero-coding technique, which is the most suitable approach for organizations with limited financial and human resources.
Article
Full-text available
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
Article
Full-text available
This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for a full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings of this study show that in 2021 and 2022, publications rose nearly two to three times the number of previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged. The findings show that research was conducted in six of the seven continents of the world. The lead in the number of publications has shifted from the US to China. Another new trend is in researcher affiliation: prior studies showed a lack of researchers from departments of education, whereas this has now become the most dominant department. Undergraduate students were the most studied students at 72%. Similar to the findings of other studies, language learning was the most common subject domain. This included writing, reading, and vocabulary acquisition. In examining whom AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. In answering the overarching question of how AIEd was used in HE, grounded coding was used. Five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. This systematic review revealed gaps in the literature to be used as a springboard for future researchers, including new tools such as ChatGPT.
Article
Full-text available
Objective This article provides an overview of the implications of ChatGPT and other large language models (LLMs) for dental medicine. Overview ChatGPT, an LLM trained on massive amounts of textual data, is adept at fulfilling various language-related tasks. Despite its impressive capabilities, ChatGPT has serious limitations, such as occasionally giving incorrect answers, producing nonsensical content, and presenting misinformation as fact. Dental practitioners, assistants, and hygienists are not likely to be significantly impacted by LLMs. However, LLMs could affect the work of administrative personnel and the provision of dental telemedicine. LLMs offer potential for clinical decision support, text summarization, efficient writing, and multilingual communication. As more people seek health information from LLMs, it is crucial to safeguard against inaccurate, outdated, and biased responses to health-related queries. LLMs pose challenges for patient data confidentiality and cybersecurity that must be tackled. In dental education, LLMs present fewer challenges than in other academic fields. LLMs can enhance academic writing fluency, but acceptable usage boundaries in science need to be established. Conclusions While LLMs such as ChatGPT may have various useful applications in dental medicine, they come with risks of malicious use and serious limitations, including the potential for misinformation. Clinical significance Along with the potential benefits of using LLMs as an additional tool in dental medicine, it is crucial to carefully consider the limitations and potential risks inherent in such artificial intelligence technologies.
Article
Since its maiden release into the public domain on November 30, 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool, ChatGPT, took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. The extraordinary abilities of ChatGPT to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This is an exploratory study that synthesizes recent extant literature to offer some potential benefits and drawbacks of ChatGPT in promoting teaching and learning. Benefits of ChatGPT include, but are not limited to, the promotion of personalized and interactive learning and the generation of prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. The paper also highlights some inherent limitations of ChatGPT, such as generating wrong information, biases in training data that may augment existing biases, and privacy issues. The study offers recommendations on how ChatGPT could be leveraged to maximize teaching and learning. Policy makers, researchers, educators, and technology experts could work together and start conversations on how these evolving generative AI tools could be used safely and constructively to improve education and support students' learning.