Discover Computing (2025) 28:62 | https://doi.org/10.1007/s10791-025-09567-5
Research
Understanding therole ofAI inMalaysian higher education curricula:
ananalysis ofstudent perceptions
ShahazwanMatYuso1· Anwar FarhanMohamadMarzaini2· LijieHao3· ZamzamiZainuddin4· MohdHelmeBasal5
Received: 2 January 2025 / Accepted: 21 April 2025
© The Author(s) 2025 OPEN
Abstract
This study explores the role of Artificial Intelligence (AI) in higher education curricula, focusing on student perceptions and engagement with AI tools through the lens of key variables: perceived utility, satisfaction, content quality, and perceived credibility. The rapid adoption of AI technologies, such as intelligent tutoring systems and adaptive learning platforms, offers significant promise in enhancing personalized learning, improving content delivery, and streamlining administrative processes. However, successfully integrating AI into higher education requires an understanding of how students perceive these tools and how their perceptions influence adoption. The study employs a quantitative cross-sectional survey design and utilizes Partial Least Squares Structural Equation Modelling (PLS-SEM) to examine the relationships between the identified predictors and students' use of AI in their academic experiences, based on data from 306 participants. The results reveal that satisfaction has the strongest impact on students' use of AI in higher education curricula, highlighting the importance of providing user-friendly, reliable, and effective platforms. Perceived utility also plays a significant role, though its effect size is moderate, indicating that while students recognize AI's potential benefits in achieving academic goals, other factors also contribute to their willingness to adopt these tools. Content quality and perceived credibility, while positively correlated with AI adoption, were found to have weaker and statistically non-significant effects. This suggests that students may prioritize functionality and user experience over content quality and trust in AI systems. The study's findings provide valuable insights for educators and policymakers aiming to optimize AI adoption in higher education curricula, emphasizing the need for comprehensive strategies that address multiple factors influencing student perceptions.
Keywords: Artificial Intelligence · Satisfaction · Perceived utility · Content quality · Perceived credibility
1 Introduction
The rapid integration of Artificial Intelligence (AI) into higher education is transforming how curricula are designed, delivered, and received. Educational institutions increasingly leverage AI-based tools to enhance learning experiences, promote personalized education, and improve operational efficiency. These technologies, including adaptive learning platforms, intelligent tutoring systems, and generative AI applications, promise to revolutionize teaching and learning by aligning educational content with students' needs, thus boosting engagement and knowledge retention [2]. In the context of the Malaysian education system, AI is gradually being incorporated to support both educators and students. The Malaysian higher education sector is undergoing rapid digital transformation, driven by government initiatives such as the Malaysia Education Blueprint 2015–2025, which emphasises the integration of emerging technologies in higher education [35]. Universities and colleges are adopting AI-driven solutions to automate administrative tasks, develop smart assessment methods, and provide real-time feedback to learners [6]. Additionally, AI-powered language processing tools are assisting students in overcoming language barriers, particularly in a multilingual society like Malaysia.
However, while AI presents numerous opportunities, challenges such as digital infrastructure gaps, data privacy concerns, and the need for educator upskilling remain significant hurdles to widespread adoption. Institutions face the complex task of embedding AI meaningfully into curricula while maintaining academic integrity and aligning with pedagogical standards. For instance, while AI tools can greatly improve content delivery and customization, they often face scepticism regarding their credibility and reliability, affecting their perceived utility among students. AI-based tools are increasingly used for delivering academic content and assessment, yet students frequently question the reliability and trustworthiness of AI-generated feedback. This concern is further intensified by the opacity of AI algorithms, which might hinder students' trust and affect their willingness to rely on these systems for critical academic tasks [5, 13]. These limitations may impact students' perception of the overall quality and effectiveness of AI as a learning tool, ultimately influencing their engagement and satisfaction with AI-integrated curricula.
In addition, although AI's adaptive capabilities are expected to enhance personalized learning experiences, the quality and reliability of AI-generated content are frequently questioned. Almufarreh [7] highlights that while AI can customize learning paths, it may lack the contextual understanding to provide quality feedback and nuanced academic insights. This potential limitation may impact students' perception of the overall quality and effectiveness of AI as a learning tool, which in turn could affect their engagement and satisfaction with AI-integrated curricula. Perceived utility also presents a nuanced gap, as students' perceptions of AI's educational value are not uniform across disciplines or educational contexts. For instance, Jacobs et al. [24] found that the perceived utility of AI tools varies significantly based on students' familiarity with technology and their academic field, suggesting that perceived utility is influenced not only by AI's technical capabilities but also by users' baseline expectations and academic requirements. Hence, Saihi et al. [55] stated that to maximize the potential of AI, universities must ensure these technologies are seen as valuable and credible, and as enhancing the educational experience. Neglecting these concerns could lead to low engagement with AI tools and a diminished return on institutional investments in AI.
Despite the growing integration of Artificial Intelligence (AI) in higher education, a research gap persists, particularly regarding how students perceive and interact with AI-enabled tools across key dimensions: curricular integration, satisfaction, perceived credibility, content quality, and perceived utility. Existing studies [3, 31, 55] frequently highlight AI's potential to enrich educational experiences and its operational benefits, but empirical research remains limited on how these dimensions influence students' long-term acceptance and effective utilisation of AI [31]. One notable gap lies in understanding the perceived credibility of AI tools in educational settings. Empirical studies that examine how perceived credibility affects the adoption and satisfaction levels of AI-driven educational tools are scarce, highlighting the need for further investigation into how AI transparency and reliability enhance students' trust in these tools [55]. Furthermore, most research on AI in education has been conducted in Western contexts, where digital literacy levels, institutional infrastructure, and student attitudes toward AI differ from those in Malaysia [18, 31].
To date, localised empirical research on student perceptions of AI integration in Malaysian higher education remains scarce, particularly regarding perceived utility, satisfaction, content quality, and credibility. This gap contrasts sharply with the predominance of Western-centric studies in the field and highlights the urgent need for Malaysia-specific data to inform AI curriculum development. Ensuring that integration strategies are grounded in contextually relevant evidence is essential [31, 33, 67]. The unique socio-cultural and academic landscape of Malaysian higher education necessitates localised research that examines AI adoption, student trust in AI-generated content, and how institutional policies impact students' engagement with AI-driven learning tools. Also, by focusing on the Malaysian context, this study offers a nuanced understanding of how AI adoption aligns with national educational policies, institutional digital strategies, and student expectations. Lastly, while there is substantial literature on general student satisfaction with e-learning tools, limited research specifically addresses satisfaction with AI-based platforms. Many existing studies focus on the functional aspects of satisfaction, such as ease of use or accessibility, rather than delving into satisfaction derived from AI-specific attributes like adaptive learning and predictive capabilities [31]. Given that satisfaction is a critical factor in technology adoption, further research is needed to assess how AI-driven learning
experiences align with students’ expectations and contribute to positive academic outcomes. The findings will inform
policymakers, educators, and AI developers on how to design AI-enabled curricula that cater to students’ learning
preferences and concerns, ultimately leading to more effective and engaging AI integration in Malaysian universities.
2 Literature review
2.1 Theoretical framework of the study
The Technology Acceptance Model (TAM), developed by Davis [17], is a widely recognized theoretical framework in educational research that explains how individuals adopt and use technology. TAM posits that Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) are the primary determinants of technology acceptance. In the field of education, TAM has been extensively applied to investigate students' and educators' acceptance of emerging technologies, including e-learning platforms, artificial intelligence (AI), and digital learning tools, by examining how their perceived benefits influence usage behaviour [50]. Within the context of AI integration in higher education, the Perceived Utility of AI (PUAI) is conceptually aligned with Perceived Usefulness (PU) in TAM, referring to an individual's belief that AI enhances learning efficiency, problem-solving capabilities, and overall academic performance. According to TAM, when individuals perceive AI as beneficial for their educational activities, they are more likely to adopt and integrate it into their learning and teaching practices, which here refers to the Use of AI in Higher Education Curricula (UAIHEC).

In this study, UAIHEC is operationalized as the extent to which students and educators incorporate AI technologies into their academic activities. Based on the theoretical foundation of TAM, PUAI is expected to serve as a key predictor of UAIHEC, as individuals who perceive AI as a valuable educational tool are more inclined to engage with it. The underlying rationale is that when AI is perceived to enhance learning outcomes and facilitate academic processes, its adoption within higher education curricula will increase. Consequently, this study hypothesizes that PUAI has a positive influence on UAIHEC, thereby reinforcing TAM's core proposition that perceived benefits significantly shape technology adoption in educational settings. TAM's applicability continues to be validated in recent AI adoption research. For instance, Mustofa et al. [43] extended the TAM by incorporating ethics and trust to study university students' adoption of AI tools, revealing that Perceived Usefulness remained a significant predictor of attitude, whereas Perceived Ease of Use was not statistically influential due to students' high familiarity with technology. Furthermore, the inclusion of ethical considerations in this model highlighted that beyond usability, factors such as perceived fairness and trustworthiness of AI systems can directly shape students' attitudes and willingness to use AI [43].
The Expectation-Confirmation Model (ECM), originally proposed by Bhattacherjee [12], is a theoretical model widely used in educational technology research to explain continuance intention, that is, the decision to continue using a particular technology. ECM posits that users' satisfaction with technology is shaped by their expectations before use and the confirmation of those expectations after use [54]. In this study, Satisfaction (SA) is defined as the degree to which students and educators feel content with AI's performance in educational contexts. Additionally, expectation is influenced by the Perceived Credibility of AI (PCAI) and the Content Quality of AI (CQAI). Research indicates that perceived credibility significantly influences user expectations within the Expectation-Confirmation Model, particularly in technology and information-based contexts [57]. Studies have shown that when users perceive a source as credible, they develop higher expectations about the information or service provided, leading to greater satisfaction upon confirmation [26], ultimately influencing satisfaction and continued technology adoption. Another key concept correlated with expectation is content quality. High-quality content has been shown to positively influence user satisfaction and expectations. Many researchers posit that when the quality of content meets or surpasses user expectations, it results in positive user satisfaction and a greater likelihood of continued engagement [28].

Since expectations play a key role in determining satisfaction and future adoption, PCAI and CQAI are included as antecedents of SA. Therefore, PCAI and CQAI are expected to influence UAIHEC through expectation confirmation, as users who perceive AI as credible and as delivering high-quality content are more likely to integrate it into their learning environments. Empirical findings further support ECM's focus on satisfaction and confirmation. For example, research on e-learning continuance consistently shows that user satisfaction is one of the strongest predictors of ongoing technology use, often outweighing initial perceived usefulness once students have actual usage experience [7]. In the context of AI-based educational tools, when students' expectations are met or exceeded, such as through accurate, relevant content and reliable performance, they report higher satisfaction and a greater likelihood of continuing to use the AI system
[7]. Thus, aligning AI outcomes with student expectations (e.g., delivering quality content and credible information) is
crucial for sustaining long-term adoption, as predicted by ECM.
2.2 Development ofArtificial Intelligence ineducation
Articial Intelligence (AI) has gradually emerged in education, with its origins tracing back to the mid-twentieth century.
The integration of this tool has garnered signicant attention over the past two decades. In Malaysia, it demonstrated
a growing commitment to the integration of AI in higher education, aligning with the national agenda for digital trans-
formation. The Malaysian Ministry of Higher Education (MOHE) has recognized AI as a critical enabler for improving
teaching, learning, and administrative processes in universities.
Several studies have explored AI integration in Malaysian education. For example, Mustapha etal. [42] assessed the
AI knowledge and competency of 81 vocational teacher trainers in Malaysia’s Design and Technology program. Using
surveys and thematic analysis, the ndings revealed low AI knowledge and skills among respondents, suggesting the
need for more AI-related courses in vocational training. The study recommended revising the TVET curriculum to include
AI-based content, highlighting the importance of equipping vocational students with AI competencies to meet future
industry demands. Another study by Ng [46] examined factors of AI tools aecting Malaysian undergraduate students’
academic writing prociency. Key factors analysed were personalized learning, feedback mechanisms, usage frequency,
and hedonic motivation. Using a survey of 200 students, the ndings revealed that personalized learning, feedback
mechanisms, and hedonic motivation signicantly enhance academic writing prociency, with feedback mechanisms
having the greatest impact. The study emphasized that students found AI tool feedback to provide valuable support,
playing a crucial role in improving their writing skills.
Additionally, a study by Wang and Shi [65] explored the acceptance and use of AI in history education among Malaysian university students. Utilising the Unified Theory of Acceptance and Use of Technology (UTAUT) framework, a survey of 512 students from four Malaysian universities was analysed with Partial Least Squares Structural Equation Modelling. The study found a significant influence of behavioural intention to use AI on actual use, while facilitating conditions had no significant effect on usage behaviour. The model explained 43.7% of the variance in behavioural intention, extending UTAUT's application to humanities education and providing practical insights for integrating AI tools in history education. Interestingly, other Malaysian studies suggest that supportive conditions can matter for AI acceptance. For instance, Foroughi et al. [20] examined university students' intention to use ChatGPT in Malaysia and found that facilitating conditions (e.g., access to technology and guidance) positively impacted their behavioural intention to adopt the AI chatbot [19]. This discrepancy indicates that the influence of external support and infrastructure may vary depending on the specific educational context and type of AI tool, highlighting the need to tailor integration strategies accordingly.
The transformative potential of generative AI in Malaysian higher education has also been highlighted. Saman et al. [56] focused on AI's ability to personalize learning, foster digital inclusion, and empower educators. Tools like ChatGPT, Copilot, Gemini, and Claude can tailor learning experiences through customized feedback, expand educational access for diverse learners, and enhance teaching effectiveness by supporting content creation and collaborative projects. Likewise, Amdan et al. [10] found that AI-powered tools can offer personalized instruction, intelligent tutoring, and interactive simulations, while also automating tasks like grading and providing predictive analytics to enhance efficiency in STEM classrooms. Their study highlights AI's potential to improve student mastery, boost motivation and autonomy, and enable teachers to offer more personalized support.
In a broader context, AI integration in education has been recognized globally. For example, a study by Chisom et al. [15] highlights AI's contributions in overcoming language barriers, improving literacy, fostering digital skills, and supporting teachers through professional development and administrative tools in Africa. The study found that AI integration in education has helped bridge gaps in access and quality, promoting inclusive education across continents. Key applications of AI include adaptive learning platforms, virtual tutors, and intelligent content delivery systems that cater to diverse student needs. Ma et al. [32] further examined how cultural backgrounds influence students' attitudes and intentions toward using AI in higher education, comparing Chinese and international students. Using TAM and survey data analyzed via SEM, the research identified significant differences between the two groups. International students showed a stronger correlation between perceived ease of use and their attitudes and intention to adopt AI compared to Chinese students. The findings highlight the impact of cultural background and prior technology exposure on AI perceptions, emphasizing the need for tailored educational strategies such as language-specific support and user-friendly interfaces to accommodate diverse perspectives. These insights guide educators and institutions in effectively integrating AI into teaching and learning.
In parallel, the emergence of advanced generative AI tools (like ChatGPT) in education has spurred large-scale investigations into student adoption patterns worldwide. Ravšelj et al. [53] conducted a global survey of over 23,000 higher education students across 109 countries, revealing that students widely utilize ChatGPT for tasks such as brainstorming ideas and summarizing content. Notably, while many students found ChatGPT useful for enhancing their learning efficiency, there were prevalent concerns regarding its reliability and the potential for academic misconduct [53]. This global study underscores a common theme: students perceive clear benefits of AI for learning, yet they simultaneously call for oversight and improvements to ensure content quality and ethical use. Similarly, a regional study in Sweden by Stöhr et al. [60] found broad awareness and generally positive attitudes toward AI chatbots among university students. However, it also documented significant apprehension among certain groups (e.g., female and humanities students) about AI's impact on learning and assessment fairness. These empirical insights from different parts of the world emphasize that successful AI integration in curricula requires not only leveraging its perceived utility but also addressing students' trust in and comfort with the technology.
Jain and Jain [25] provide a complementary perspective by exploring the role of AI in transforming teaching and learning in higher education. Their study examines how AI can improve learning experiences both inside and outside the classroom, making education more accessible and efficient. Focusing on higher education institutions in Udaipur, India, and using structured questionnaires to gather teachers' perceptions, the findings reveal that AI significantly enhances students' learning capabilities and holds great potential for future growth in the higher education sector. The study also identifies challenges in implementing AI and provides valuable insights for educators and educational policy development.
2.3 The role ofArtificial Intelligence inhigher education curricula
The role of AI in higher education has generated signicant interest among scholars, educators, and policymakers, given
AI’s potential to transform learning experiences, streamline administrative processes, and improve student outcomes.
AI’s applications in higher education span a broad range of functionalities, including intelligent tutoring systems, predic-
tive analytics for student performance, and content customization [31]. However, despite its promise, the integration
of AI into higher education requires critical examination, particularly regarding its eectiveness in fostering authentic
engagement and educational value.
One of the primary promises of AI in education is its ability to provide personalized learning experiences. Intelligent tutoring systems and adaptive learning platforms analyze data to adjust content delivery based on individual learning styles, helping students master material at their own pace [39, 61]. By responding to individual needs, these technologies aim to enhance engagement and improve academic outcomes, yet questions arise regarding the depth and quality of such personalization [33, 38]. Critics argue that while AI can identify gaps in knowledge and adaptively present content, it may lack the capacity for nuanced interpretation and feedback, which are essential for critical thinking development. Thus, while AI enhances efficiency, its efficacy in supporting deep learning processes remains contentious.
In addition to personalization, AI plays a substantial role in administrative tasks, including predictive analytics to
identify at-risk students and optimize resource allocation. This capacity for data-driven insights has been applauded for
its potential to enhance institutional decision-making. However, ethical concerns regarding data privacy, algorithmic
bias, and the transparency of AI-driven predictions present substantial barriers to widespread acceptance. Galdames
[21] underscores that while AI can streamline administrative processes, it may also introduce issues related to student
privacy and accountability in decision-making, challenging institutions to balance innovation with ethical responsibility.
Ensuring that AI systems are transparent, fair, and secure is essential for maintaining trust among stakeholders.
Finally, the advent of generative AI (e.g., large language models like ChatGPT) brings both opportunities and challenges to higher education curricula. These tools can assist in content creation, provide real-time feedback, and facilitate independent learning, but they also raise questions about academic integrity and the role of human educators. The key challenge lies in integrating AI in ways that complement and enhance human instruction rather than replace it. This necessitates ongoing dialogue and research into best practices for AI-assisted education, ensuring that the technology is used to enrich learning outcomes while upholding educational values.
2.4 Perceived utility ofAI inhigher education curricula
Central to the successful adoption of AI in education is the construct of Perceived Utility. This construct refers to
stakeholders’ beliefs about the usefulness and effectiveness of AI in improving educational outcomes. According to
Davis [17], perceived utility (analogous to perceived usefulness) is the degree to which users believe a technology
will enhance their performance. Applied to the education system, the Perceived Utility of Artificial Intelligence (PUAI) reflects how students, educators, and administrators view the value of AI technologies in supporting teaching, learning, and operational efficiency. Lijie et al. [31] stated that PUAI encompasses several dimensions, including academic utility, referring to benefits such as personalised learning, adaptive assessments, and intelligent tutoring systems. Moreover, Saxena et al. [58] explain that PUAI also includes operational utility, which streamlines administrative tasks like enrolment, grading, and resource allocation. This construct also highlights research utility, which is the capability of AI to enhance data analysis and foster interdisciplinary research. Through these dimensions, PUAI can directly influence UAIHEC. Venkatesh et al. [64] posited that positive perceptions of a technology's usefulness enhance its integration into institutional practices. Recent studies have confirmed this relationship in educational contexts. For example, Rana et al. [51] found that clear communication of AI's benefits within an institution enhanced adoption rates; institutions with strong messaging about AI's advantages experienced higher adoption levels among faculty and students. Similarly, Ng et al. [44] found that the positive effects of PUAI correlated with the successful implementation of AI tools, leading to better academic performance.

Consistent with these findings, numerous empirical studies have reaffirmed the importance of perceived utility in student acceptance of AI. Kanont et al. [27] observed that the expected benefits and Perceived Usefulness of generative AI tools significantly influenced Thai university students' attitudes and intentions to adopt those tools. In another study, Al-Abdullatif and Alsubaie [4] similarly found that students' intentions to use ChatGPT were strongly linked to the perceived value the AI provided to their learning process, especially among students with higher AI literacy. These cases highlight that when learners clearly recognize the value added by AI, whether through improved learning outcomes, efficiency, or support, their propensity to integrate such technologies into their studies increases significantly.
H1 PUAI has a significant and positive impact on UAIHEC.
2.5 Satisfaction andcontent quality ofAI inhigher education curricula
Other critical factors influencing the success of AI integration include satisfaction with how students, educators, and
administrators perceive and respond to AI-driven tools and content quality with the relevance, accuracy, and peda-
gogical effectiveness of AI-generated content. According to Almufarreh [7], satisfaction refers to the overall content-
ment of stakeholders regarding the use of AI tools in learning and teaching processes. In educational technology,
satisfaction is a multidimensional construct involving perceptions of ease of use, effectiveness, and even emotional
engagement with AI-based systems. Similarly, content quality relates to the characteristics of AI-generated or AI-
curated educational materials, including their accuracy, relevance, interactivity, and adaptability to learners’ needs
[68]. High-quality content is critical for fostering trust in AI systems and ensuring meaningful learning experiences.
Many studies have highlighted a strong, positive relationship between satisfaction and content quality in AI-driven
educational contexts. Most ndings consistently show that high-quality content drives higher levels of satisfaction among
students and educators. For example, Chen etal. [14] found that students were more likely to recommend AI learning
tools when they perceived the content as accurate and pedagogically eective. Similarly, Basri [11] observed that student
satisfaction increased signicantly in online courses where AI tools curated well-organized, research-based content.
Further, recent analytics-driven studies provide quantitative evidence of this link. Almufarreh [7], using a PLS-
SEM approach, confirmed that the content quality of AI outputs and the perceived utility of AI tools were significant
predictors of students’ satisfaction with those tools, together accounting for a substantial portion of the variance in
satisfaction. In practical terms, students are much more satisfied when AI systems deliver accurate, relevant content
and tangibly aid their learning objectives. Moreover, research on AI-driven tutoring systems has noted that enhancing
content clarity and alignment with course objectives can boost student satisfaction and engagement levels [8]. These
findings reinforce that high-quality, pedagogically sound AI content is a prerequisite for positive user experiences,
which in turn encourages continued use and peer recommendations of AI-based tools in education.
H2 SA has a significant and positive impact on UAIHEC.

H3 CQAI has a significant and positive impact on UAIHEC.
2.6 Perceived credibility and trustworthiness of AI tools in higher education curricula

The perceived credibility and trustworthiness of AI tools is another crucial construct that drives the success of AI integration in education. Perceived credibility refers to users' belief in the accuracy, reliability, and competence of AI tools. Ou et al. [49] note that credibility in technology can be divided into surface credibility (appearance-based trust) and earned credibility (trust established through performance). In the context of AI, factors like content accuracy, clarity, and alignment with academic standards contribute to perceived credibility [30]. Trustworthiness, on the other hand, encompasses users' confidence in AI systems to perform consistently, securely, and ethically. It includes perceptions of fairness, privacy, and the absence of bias in AI operations. Mayer et al. [34] propose a framework for trust that includes ability, integrity, and benevolence, all of which are critical in evaluating AI education tools.

Several studies suggest that credibility and trustworthiness are interdependent in influencing adoption. For instance, Shin [59] noted that accurate and reliable AI systems earned credibility, which in turn fostered trust among users. Similarly, transparent and fair operations increased trustworthiness and reinforced users' perceptions of system credibility. In educational settings, Mogavi et al. [40] found that students were more willing to adopt AI-powered learning platforms when they trusted the tools to deliver high-quality, credible content, illustrating how trust and perceived content credibility work together to encourage usage.

Empirical evidence across contexts further underscores the importance of trust in AI's outputs for driving adoption. Abdalla [1] found that higher education students' trust in ChatGPT was a significant positive predictor of their continued use of the AI, whereas concerns (i.e., perceived risks) had an adverse effect on their usage intentions. This highlights that when AI tools demonstrate reliability and transparency and align with users' values (thereby earning trust), students are far more inclined to embrace them in their academic routines. In a similar vein, Mustofa et al. [43] emphasize that incorporating trust-building measures and ethical safeguards in AI tools can enhance student attitudes and acceptance, suggesting that credibility and trustworthiness need to be proactively cultivated. Notably, Malaysian-based research on AI adoption also underscores infrastructure and support as components of credibility; when students feel that an AI platform is well-supported and endorsed by their institution, their confidence in using it grows [20]. Ultimately, fostering a high degree of trust in AI systems, by ensuring consistent performance, accuracy, and ethical design, is essential for their sustainable integration into higher education.
H4 PCAI has a significant and positive impact on UAIHEC.
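Taken together, H1–H4 specify a single structural equation in which the four perceptions jointly predict usage. As a compact restatement (our notation, not the authors'; the β are standardized path coefficients and ζ is the structural residual):

```latex
\mathrm{UAIHEC} \;=\; \beta_{1}\,\mathrm{PUAI} \;+\; \beta_{2}\,\mathrm{SA} \;+\; \beta_{3}\,\mathrm{CQAI} \;+\; \beta_{4}\,\mathrm{PCAI} \;+\; \zeta,
\qquad H_{1}\text{--}H_{4}:\ \beta_{i} > 0 .
```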
3 Methodology
3.1 Participants
Following the guidelines provided by G*Power and the "10-times rule," as suggested by Hair et al. [22], the required sample size for this study was calculated to be 85 participants, aimed at examining predictors of AI adoption in higher education curricula. Additionally, to account for potential non-response [37], an additional 25% was included, leading to the invitation of 21 more participants.
Since the instruments used in this study had not previously been validated within this research context, a Confirmatory Factor Analysis (CFA) was deemed necessary to ensure the instruments' validity. In accordance with Kline [29], who recommends a minimum sample size of 200 for CFA in Structural Equation Modeling (SEM), a total of 200 participants were specifically recruited for this validation process before the main study. Consequently, a total of 306 participants were targeted, comprising 200 for the CFA and 106 for the main study.
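The arithmetic behind these figures can be made explicit. A minimal sketch of the numbers reported above (our illustration; it assumes the 25% buffer was rounded down):

```python
# Sample-size arithmetic reported above (illustrative only).
base_n = 85                  # minimum N from G*Power / the 10-times rule [22]
buffer = int(0.25 * base_n)  # 25% non-response allowance -> 21 extra invitations
main_n = base_n + buffer     # 106 participants for the main study
cfa_n = 200                  # separate validation sample for CFA, per Kline [29]
total_n = main_n + cfa_n     # 306 participants targeted overall

print(buffer, main_n, total_n)  # 21 106 306
```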
The decision to recruit this sample size was further supported by the need to ensure adequate statistical power and model stability in SEM analyses. Wolf et al. [66] highlighted that sample size requirements for SEM vary widely, ranging from 30 to 460 cases, depending on model complexity and desired statistical power. Therefore, securing a sample of 306 participants aligns with these recommendations, balancing model complexity with sufficient statistical power.
Participants were recruited via simple random sampling from public universities, facilitated through group emails and networks involving colleagues who are educators. Data collection spanned 2 months, with ethical approval secured from the relevant ethics committee at Universiti Malaya. An introductory email provided students with
information about the study’s purpose, emphasized the voluntary nature of participation, and assured confidentiality
of responses. Ultimately, 306 valid responses were obtained and were utilized for subsequent analysis in this research.
3.2 Instruments
In this study, five key instruments were employed: the Use of AI in Higher Learning Curricula Scale, Satisfaction Scale,
Perceived Credibility of AI Tools Scale, Content Quality of AI Tools Scale, and Perceived Utility of AI Tools Scale. To
enhance both the feasibility and validity of these instruments for respondents whose primary language is not English,
all instruments were presented in a bilingual format. While the core content of the tools remained intact, specific contextual references to "AI learning tools" were adapted. A rigorous content validation process was undertaken,
which included blind back-translation, expert review, and pilot testing. Two English translation experts carried out
the blind back-translation to ensure accuracy. Subsequently, a panel of six subject-matter experts evaluated the
content for alignment and validity.
For the pilot test, 200 participants who were not part of the main study were recruited, and Confirmatory Factor Analysis (CFA) was employed to evaluate the validity of the instruments. For the independent variables, 10 satisfaction items, 5 perceived credibility items, 5 content quality items, and 7 perceived utility items were all adapted from Almufarreh [7]. The dependent variable, the Use of AI in Higher Learning Curricula Scale, comprised 10 items, as referenced by Ng et al. [45]. Each instrument was structured around a 5-point Likert scale, ranging from "Strongly Disagree" to "Strongly Agree."
During the CFA for the pilot test, all items exceeded the threshold value (0.7), and thus all items were retained for the main study. As recommended by Hair et al. [22], reliability and validity were confirmed through Cronbach's alpha and composite reliability values exceeding 0.70. Factor loadings also met the minimum threshold of 0.70, while the AVE for each construct surpassed 0.50, indicating satisfactory convergent validity. For divergent validity, the HTMT ratio of correlations was kept below 0.90, and the square root of the AVE for each construct was greater than its correlations with other constructs. Furthermore, each item's factor loading within its designated construct was higher than any loading it had on related constructs [22].
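These thresholds can be computed directly from standardized indicator loadings. The sketch below is our illustration, not the SmartPLS implementation; note that SmartPLS can report the rho_a variant of composite reliability, which may differ slightly from the classical rho_c formula used here:

```python
import numpy as np

def composite_reliability(loadings):
    """Classical composite reliability (rho_c) from standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: the mean squared standardized loading of a construct's items."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Example with the SA loadings later reported in Table 3:
sa_loadings = [0.878, 0.888, 0.862, 0.855, 0.865]
print(round(average_variance_extracted(sa_loadings), 3))  # 0.756, matching Table 3
print(round(composite_reliability(sa_loadings), 3))       # rho_c ~ 0.94 (rho_a is typically lower)
```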
Following the criteria mentioned, only items that met the specified thresholds were retained for each construct. The final Confirmatory Factor Analysis was conducted separately for each instrument, revealing a strong model fit across the board, as indicated by an SRMR ≤ 0.08 and NFI ≥ 0.9. Each construct demonstrated satisfactory Cronbach's alpha and composite reliability values above 0.7, factor loadings above 0.7, and AVE values exceeding 0.5. Additionally, HTMT values were all below 0.9, confirming the validity and reliability of the instruments for the main study. A comprehensive list of retained items is provided in the Appendix.
3.3 Procedure
Before administering the research instruments to the student participants, the researcher obtained ethical approval from the relevant ethics committee to ensure compliance with ethical standards. Data collection lasted 2 months, during which the instruments were administered to gather sufficient responses. In the initial phase of analysis, all instruments underwent validation through separate Confirmatory Factor Analyses (CFA) to confirm their suitability and reliability within this specific research context. This validation step was crucial to establish the robustness of the measurement tools used in assessing the Use of AI in Higher Learning Curricula.
Upon confirming instrument validity, the researcher employed Structural Equation Modelling (SEM) to analyse the data. SEM was utilised to evaluate both the measurement model, examining the relationships among observed and latent variables within each construct, and the structural model, to identify and analyse the predictive factors of the Use of AI in Higher Learning Curricula. This dual approach in SEM allows for a comprehensive assessment of both measurement accuracy and the underlying structural relationships, offering insights into which factors most significantly contribute to the Use of AI in Higher Learning Curricula. Such a rigorous approach ensures that the findings can be confidently interpreted in light of the study's objectives, establishing a reliable basis for understanding the predictors of AI adoption in higher education.
3.4 Data analysis
This study utilizes a quantitative cross-sectional survey design, which offers several advantages, including the ability
to generalize findings, describe the prevalence and characteristics of the target cohort, and test hypothesized rela-
tionships at a specific point in time. The sample includes 200 responses for the initial Confirmatory Factor Analysis
(CFA) of the instruments, followed by an additional 106 responses for the main analysis, ensuring adequate data for
robust statistical testing.
To test the validity and reliability of the instruments and to examine the hypothesized model, Partial Least Squares
Structural Equation Modelling (PLS-SEM) was conducted using Smart-PLS 4.0. This approach allows for comprehensive
testing of the CFA, the measurement model, and the structural model, providing a nuanced understanding of each
component’s contribution. The CFA focused on key model fit indices, including assessments of internal reliability,
convergent validity, and discriminant validity for each instrument, ensuring that the constructs measured align with
the study’s theoretical framework.
In assessing the measurement model, the five reflective constructs were evaluated holistically for internal reliability, as well as for convergent and discriminant validity [22]. This evaluation enhances the model's robustness, facilitating the examination of the relationships among latent variables. Once the instruments were validated, the structural model was tested, focusing on the relationships between the latent constructs and their impact on the Use of AI in Higher Education Curricula (UAIHEC). The structural model's significance was assessed through bootstrapping, with indices such as p-values, t-values, and path coefficients (β) guiding the interpretation. A confidence interval excluding zero was also required to verify the significance of each hypothesized effect.
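For readers unfamiliar with the procedure, bootstrapping re-estimates the model on many resamples of the respondents and reads significance off the resulting percentile intervals. The sketch below illustrates only that general logic on synthetic data, using an OLS regression as a simplified stand-in for the full PLS estimation; it is not the SmartPLS algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic standardized scores; columns stand in for PUAI, SA, CQAI, PCAI.
n = 106
X = rng.normal(size=(n, 4))
y = X @ np.array([0.2, 0.5, 0.1, 0.05]) + rng.normal(scale=0.5, size=n)  # "UAIHEC"

def estimate_paths(X, y):
    # OLS stand-in for the structural regression of UAIHEC on its predictors.
    return np.linalg.lstsq(X, y, rcond=None)[0]

n_boot = 5000
boot = np.empty((n_boot, 4))
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    boot[b] = estimate_paths(X[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
# A path is treated as significant at the 5% level when its 95% CI excludes
# zero, mirroring the confidence-interval criterion described above.
for name, l, h in zip(["PUAI", "SA", "CQAI", "PCAI"], lo, hi):
    print(f"{name}: 95% CI [{l:.3f}, {h:.3f}]")
```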
To gain deeper insights into the predictive capacity of the model, the R² value (explaining the variance in UAIHEC) and the f² effect size for each predictor were analysed. This analysis provides an understanding of both the cumulative and individual contributions of the predictors to UAIHEC. Detailed interpretations and implications of these findings will be discussed in the following section, with a focus on how each predictor's effect size informs our understanding of AI integration in higher education settings. This approach underscores the study's commitment to rigorously exploring key factors influencing AI adoption in educational curricula.
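For reference, the f² effect size mentioned here is conventionally computed by comparing the model's explained variance with and without the predictor of interest:

```latex
f^{2} \;=\; \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}}
```

Common benchmarks treat f² values of roughly 0.02, 0.15, and 0.35 as small, medium, and large effects, respectively.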
4 Results
4.1 Demographic information
Following the validation of the individual instruments through Confirmatory Factor Analysis (CFA), this section presents the findings from both the measurement model and the structural model. The results are based on data collected from the 106 participants who were involved in the main study. Before interpreting the results of the PLS-SEM, the demographic information is presented in Table 1.
Table1 provides a comprehensive overview of the demographic characteristics of the respondents in this study. A
total of 106 participants were surveyed, with a higher representation of females (67%) compared to males (33%). In
terms of academic level, the majority of respondents were pursuing a Bachelor’s degree (57%), followed by Diploma
students (28%), Master’s degree students (9%), and Doctor of Philosophy candidates (4%). Age-wise, nearly half
(49%) of the participants were aged 23–27, while 24% were in the 18–22 age bracket, 14% were aged 28–32, 10%
were between 33–37years old, and only 2% were aged 38 or above. When considering fields of study, the largest
groups were Business and Management (23%), Education (20%), and Social Sciences (15%), followed by Health and
Medicine (10%), Art and Humanities (9%), Communication and Media (8%), and Law and Public Policy (6%), with 5%
from other majors. In terms of AI tools used, all respondents (100%) reported using ChatGPT, while 43% had experi-
ence with Gemini, 34% with Microsoft Copilot, 21% with Perplexity, and 12% with DALL-E. Some respondents also
reported using Claude (10%), Midjourney (8%), and Bard (6%), reflecting a broad spectrum of AI adoption. Regarding
the usage time of AI tools, the majority of respondents (58%) had been using AI for more than 1year, while 21% had
used it for 10–12months, 10% for 7–9months, 7% for 3–6months, and only 3% for less than 3months. Additionally,
the study was conducted in 5 renowned public universities in Klang Valley, Malaysia. These institutions play a vital role
in higher education and contribute significantly to academic research and technological advancements in Malaysia.
This demographic data highlights a diverse and representative sample of students encompassing various academic
disciplines and differing levels of AI experience across different stages of education.
4.2 Normality anddescriptive statistic
The target population for this study consisted of students from universities in Selangor who are exposed to AI in
the higher education curricula. Using simple random sampling [41, 62], a total of 106 valid responses were col-
lected. Before conducting the main analysis, the dataset was assessed for normality by evaluating skewness and
kurtosis values. According to theoretical guidelines, skewness and kurtosis values should fall within the range of
− 2 to + 2 [48]. The results confirmed the data’s normality, with kurtosis values ranging from − 0.452 (UAIHLC10) to
1.282 (SA4) and skewness values ranging from − 1.16 (UAIHLC4) to − 0.019 (UAIHLC10). Additionally, descriptive
statistics, including mean (x) and standard deviation (SD), were reported for all items (Table2). The mean values
Table 1 Demographic information of respondents

Demographic Clarification Number Percentage
Gender Male 35 33
Female 71 67
University University A 35 33
University B 23 22
University C 15 14
University D 19 18
University E 14 13
Academic level Diploma 30 28
Bachelor’s Degree 61 57
Master’s Degree 10 9
Doctor of Philosophy 5 4
Age 18–22 26 24
23–27 52 49
28–32 15 14
33–37 11 10
38 or above 2 2
Field of study Art and humanities 10 9
Social sciences 16 15
Business and management 25 23
Education 22 20
Health and medicine 11 10
Law and public policy 7 6
Communication and media 9 8
Other majors 6 5
AI tools used ChatGPT 106 100
Gemini 46 43
Microsoft Copilot 36 34
DALL-E 13 12
Perplexity 23 21
Claude 11 10
Midjourney 9 8
Bard 7 6
Usage time Less than 3months 4 3
3–6months 7 7
7–9months 11 10
10–12months 22 21
More than 1year 62 58
The mean values demonstrated moderate to high levels of agreement, ranging from 3.3491 (PCAI4) to 4.066 (UAIHEC4), with standard deviations generally below 1.1, reflecting moderate variability [36]. These results indicate that the data are suitable for subsequent analyses, including PLS-SEM.
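This kind of per-item screening is straightforward to reproduce. A minimal sketch (the file name and column layout are hypothetical; scipy's kurtosis defaults to excess kurtosis, which is the form the ± 2 rule of thumb refers to):

```python
import pandas as pd
from scipy.stats import kurtosis, skew

df = pd.read_csv("survey_items.csv")  # hypothetical file: one column per Likert item

screening = pd.DataFrame({
    "mean": df.mean(),
    "sd": df.std(),
    "skewness": df.apply(skew),
    "kurtosis": df.apply(kurtosis),   # Fisher (excess) kurtosis by default
})
print(screening.round(3))

# Flag any item outside the +/- 2 band cited from [48]
flagged = screening[(screening["skewness"].abs() > 2) | (screening["kurtosis"].abs() > 2)]
print(flagged if len(flagged) else "All items within the +/- 2 normality band")
```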
4.3 Measurement andstructural model
In this study, all constructs, namely the Use of AI in Higher Education Curricula (UAIHEC), Satisfaction (SA), Perceived Credibility of AI Tools (PCAI), Content Quality of AI Tools (CQAI), and Perceived Utility of AI Tools (PUAI), were modelled as first-order reflective constructs, each measured directly through reflective indicators. The measurement model was assessed using Partial Least Squares Structural Equation Modelling (PLS-SEM) via SmartPLS 4.0. Following Hair et al. [22], internal consistency reliability was confirmed through Cronbach's Alpha and Composite Reliability (CR), both exceeding 0.70. Convergent validity was supported by factor loadings above 0.70 and Average Variance Extracted (AVE) values above 0.50. Discriminant validity was confirmed through the Fornell–Larcker criterion, cross-loading analysis, and the Heterotrait–Monotrait (HTMT) ratio, with all values within acceptable thresholds. As such, no second-order constructs
Table 2 Mean, SD, kurtosis, skewness

UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence

Item x̄ SD Skewness Kurtosis
UAIHEC1 3.8396 1.13084 − 0.886 0.215
UAIHEC2 3.7264 1.08262 − 0.855 0.353
UAIHEC3 3.6698 0.92295 − 0.476 0.446
UAIHEC4 4.066 1.0073 − 1.16 1.07
UAIHEC5 3.7547 1.00296 − 0.526 − 0.02
UAIHEC6 3.5943 1.03995 − 0.464 − 0.184
UAIHEC7 3.717 0.98324 − 0.443 − 0.246
UAIHEC8 3.7547 0.97406 − 0.495 − 0.137
UAIHEC9 3.5755 0.97539 − 0.373 − 0.069
UAIHEC10 3.5189 0.91788 − 0.019 − 0.452
SA1 3.9623 0.84992 − 0.591 0.346
SA2 3.8774 0.86962 − 0.555 0.216
SA3 3.9906 0.86735 − 0.874 1.273
SA4 3.8491 0.90283 − 0.882 1.282
SA5 3.717 0.96367 − 0.638 0.325
PCAI1 3.4906 0.98798 − 0.275 − 0.002
PCAI2 3.4811 0.90744 − 0.255 0.288
PCAI3 3.434 0.94636 − 0.287 0.226
PCAI4 3.3491 0.9466 − 0.206 0.1
PCAI5 3.434 0.98579 − 0.299 − 0.331
CQAI1 3.7453 0.94679 − 0.564 0.096
CQAI2 3.5094 0.94864 − 0.369 0.023
CQAI3 3.6698 0.90208 − 0.483 0.256
CQAI4 3.7736 0.91841 − 0.659 0.751
CQAI5 3.783 0.98566 − 0.887 0.78
PUAI1 3.3868 1.00067 − 0.379 − 0.063
PUAI2 3.4571 0.89902 − 0.638 0.631
PUAI3 3.4906 0.85351 − 0.579 0.754
PUAI4 3.6981 0.89623 − 0.657 0.479
PUAI5 4 0.82808 − 0.718 0.773
PUAI6 3.6792 0.91075 − 0.703 0.737
PUAI7 3.8774 0.8586 − 0.68 0.962
were modelled in this study, and all constructs were treated as unidimensional reflective structures, aligned with their theoretical underpinnings.
Prior to applying the PLS-SEM technique, data normality was examined. The results indicated that skewness and kurtosis values fell within the recommended − 2 to + 2 range, affirming the approximate normality of the data. Furthermore, the variance inflation factors (VIF) for UAIHEC, PUAI, SA, CQAI, and PCAI were all below 5, confirming the absence of collinearity issues among the latent variables. Table 3 subsequently presents the measurement model, outlining indices related to reliability and convergent validity.
As presented in Table3, all constructs demonstrated high internal consistency, with Cronbach’s alpha values ranging
from 0.889 to 0.929 and composite reliability values ranging from 0.897 to 0.933. In terms of convergent validity, the AVE
values for all variables exceeded the threshold of 0.5. Furthermore, all item factor loadings were above 0.70, indicating
no concerns regarding convergent validity for the instruments utilised in this study.
Divergent validity was assessed using the HTMT ratio, the Fornell–Larcker criterion, and cross-loadings. The results of the HTMT analysis for the variables are detailed in Table 4.
The results indicate that all HTMT values (Table 4) for the constructs and their dimensions are below the threshold of 0.90, providing preliminary evidence for the divergent validity of the instruments.
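For context, the HTMT statistic compared against the 0.90 cut-off is conventionally defined, for constructs i and j, as the average heterotrait correlation relative to the geometric mean of the average monotrait correlations:

```latex
\mathrm{HTMT}_{ij} \;=\; \frac{\bar{r}_{ij}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}}
```

where r̄ᵢⱼ is the mean correlation between the indicators of constructs i and j, and r̄ᵢᵢ, r̄ⱼⱼ are the mean correlations among each construct's own indicators.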
Table 3 Reliability and convergent validity of the instruments

UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence, CA Cronbach's Alpha, CR Composite Reliability, AVE Average Variance Extracted

Factors Items Loading CA CR AVE
UAIHEC UAIHEC1 0.791 0.929 0.933 0.610
UAIHEC2 0.712
UAIHEC3 0.773
UAIHEC4 0.831
UAIHEC5 0.848
UAIHEC6 0.786
UAIHEC7 0.797
UAIHEC8 0.761
UAIHEC9 0.764
UAIHEC10 0.737
PUAI PUAI1 0.784 0.919 0.923 0.674
PUAI2 0.825
PUAI3 0.776
PUAI4 0.835
PUAI5 0.848
PUAI6 0.848
PUAI7 0.828
SA SA1 0.878 0.919 0.920 0.756
SA2 0.888
SA3 0.862
SA4 0.855
SA5 0.865
CQAI CQAI1 0.812 0.889 0.897 0.692
CQAI2 0.801
CQAI3 0.867
CQAI4 0.826
CQAI5 0.853
PCAI PCAI1 0.836 0.899 0.902 0.714
PCAI2 0.903
PCAI3 0.899
PCAI4 0.747
PCAI5 0.830
Also, the assessment of divergent validity must take into account the Fornell–Larcker criterion. The outcomes of the Fornell–Larcker analysis are presented in Table 5. As per Hair et al. [22], the square root of the AVE for each construct should exceed the highest correlation between that construct and any other construct within the model, thereby confirming divergent validity.
As shown in Table5, all measured values meet the established criteria, further conrming the divergent validity of
the instruments. Besides, the analysis of item cross-loadings indicates that the factor loadings within their respective
constructs are consistently higher than those with other related constructs. This nding provides further evidence that
there are no issues with divergent validity in the context of this study.
Cross-loading is identified when a factor's item-loading values on its respective construct are greater than its cross-loadings on other constructs [23]. The values in bold in Table 6 indicate the highest loading for each item, confirming that items load more strongly on their intended constructs than on others. For example, CQAI1 has the highest loading value on its construct CQAI (0.812) compared to its cross-loadings on PCAI (0.489), PUAI (0.67), SA (0.68), and UAIHEC (0.633). Similarly, PCAI2 loads the highest on PCAI (0.903) compared to its cross-loadings on CQAI (0.568), PUAI (0.597), SA (0.521), and UAIHEC (0.52). These results demonstrate strong discriminant validity, as items consistently exhibit higher loadings on their respective constructs than on other constructs. This supports the robustness of the measurement model for subsequent structural equation modelling analyses.
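This rule can also be checked mechanically. The sketch below is illustrative (not the authors' procedure): it assumes a pandas DataFrame shaped like Table 6, indexed by item with one column per construct, plus a hypothetical mapping from each item to its intended construct.

import pandas as pd

def crossloading_violations(loadings: pd.DataFrame, item_to_construct: dict) -> list:
    # Flag items whose highest loading is NOT on their intended construct.
    violations = []
    for item, row in loadings.iterrows():
        intended = item_to_construct[item]
        if row.idxmax() != intended:
            violations.append((item, intended, row.idxmax(), row.max()))
    return violations

# Worked example with the CQAI1 row of Table 6:
row = pd.DataFrame(
    {"CQAI": [0.812], "PCAI": [0.489], "PUAI": [0.670], "SA": [0.680], "UAIHEC": [0.633]},
    index=["CQAI1"],
)
print(crossloading_violations(row, {"CQAI1": "CQAI"}))  # [] -> no violation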
According to Table7, PUAI has a positive and signicant eect on UAIHEC (β = 0.209, t = 2.21, p = 0.027 < 0.05). Addi-
tionally, the condence interval (0.038 to 0.401) does not include zero, further conrming the signicance of this rela-
tionship. Therefore, H1 is supported. SA also exhibits a positive and signicant impact on UAIHEC (β = 0.531, t = 6.751,
p = 0.000 < 0.05). The condence interval (0.37 to 0.68) excludes zero, reinforcing the result. Hence, H2 is supported. For
CQAI, the eect on UAIHEC is positive but not signicant (β = 0.136, t = 1.872, p = 0.061 > 0.05). The condence interval
(-0.019 to 0.276) includes zero, indicating no statistical signicance. Thus, H3 is not supported. Similarly, PCAI shows a
weak and non-signicant eect on UAIHEC (β = 0.041, t = 0.635, p = 0.525 > 0.05). The condence interval (− 0.074 to 0.162)
includes zero, conrming the insignicance of this relationship. Consequently, H4 is not supported.
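SmartPLS obtains these t statistics and confidence intervals by bootstrapping. As a generic illustration of that logic (not the authors' exact procedure), the sketch below summarises a path coefficient given beta, the full-sample estimate, and boot_paths, a hypothetical array of coefficients re-estimated on B bootstrap resamples.

import numpy as np

def bootstrap_summary(beta: float, boot_paths: np.ndarray, alpha: float = 0.05) -> dict:
    se = boot_paths.std(ddof=1)              # bootstrap standard error
    t_stat = beta / se                       # t = beta / SE(beta)
    lo, hi = np.percentile(boot_paths, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return {
        "beta": beta,
        "t": t_stat,
        "ci": (lo, hi),
        "significant": not (lo <= 0.0 <= hi),  # CI excluding zero -> significant
    }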
The R² value represents the proportion of variance in endogenous latent variables that is explained by exogenous variables. According to Hair et al. [23], R² ≥ 0.67 indicates substantial predictive relevance, 0.33 ≤ R² < 0.67 signifies moderate predictive relevance, 0.19 ≤ R² < 0.33 denotes weak predictive relevance, and R² < 0.19 indicates very weak or no predictive relevance. Based on these criteria, Fig. 1 demonstrates that the R² value for UAIHEC is 0.710. This indicates that 71.0% of the variance in UAIHEC is explained by PUAI, SA, CQAI, and PCAI within the model. Consequently, the variance of UAIHEC explained by these four factors demonstrates substantial predictive relevance.
Table 4 Discriminant validity (HTMT value)
UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
CQAI PCAI PUAI SA UAIHEC
CQAI
PCAI 0.752
PUAI 0.870 0.782
SA 0.788 0.645 0.841
UAIHEC 0.770 0.644 0.813 0.876
Table 5 Discriminant validity (Fornell–Larcker value)
UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
CQAI PCAI PUAI SA UAIHEC
CQAI 0.832
PCAI 0.668 0.845
PUAI 0.792 0.703 0.821
SA 0.724 0.590 0.780 0.870
UAIHEC 0.714 0.590 0.758 0.817 0.781
To gain deeper insights into which predictor has the most substantial impact on UAIHEC, it is essential to examine the f² values. The f² effect size of each predictor represents its unique contribution to the R² value of the dependent variable when included in the model, as compared to when it is excluded. According to Hair et al. [23], 0.02 ≤ f² < 0.15 indicates a small effect size, 0.15 ≤ f² < 0.35 indicates a medium effect size, and f² ≥ 0.35 indicates a large effect size. Table 8 presents the f² values of each specific predictor on UAIHEC, highlighting their respective contributions to the model.
Table 6 Cross-loading
UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
CQAI PCAI PUAI SA UAIHEC
CQAI1 0.812 0.489 0.67 0.68 0.633
CQAI2 0.801 0.627 0.59 0.47 0.487
CQAI3 0.867 0.625 0.657 0.577 0.564
CQAI4 0.826 0.532 0.628 0.554 0.554
CQAI5 0.853 0.532 0.726 0.687 0.692
PCAI1 0.621 0.836 0.641 0.582 0.546
PCAI2 0.568 0.903 0.597 0.521 0.52
PCAI3 0.587 0.899 0.62 0.513 0.489
PCAI4 0.496 0.747 0.535 0.442 0.464
PCAI5 0.535 0.83 0.568 0.42 0.465
PUAI1 0.662 0.724 0.784 0.589 0.543
PUAI2 0.614 0.561 0.825 0.55 0.576
PUAI3 0.637 0.665 0.776 0.514 0.552
PUAI4 0.64 0.541 0.835 0.655 0.656
PUAI5 0.696 0.487 0.848 0.751 0.671
PUAI6 0.632 0.587 0.848 0.709 0.668
PUAI7 0.674 0.525 0.828 0.679 0.667
SA1 0.584 0.488 0.642 0.878 0.691
SA2 0.681 0.486 0.662 0.888 0.72
SA3 0.637 0.509 0.696 0.862 0.695
SA4 0.607 0.475 0.674 0.855 0.7
SA5 0.638 0.603 0.714 0.865 0.745
UAIHEC1 0.601 0.495 0.632 0.702 0.791
UAIHEC10 0.455 0.468 0.522 0.537 0.737
UAIHEC2 0.494 0.351 0.549 0.584 0.712
UAIHEC3 0.504 0.473 0.563 0.555 0.773
UAIHEC4 0.605 0.463 0.62 0.752 0.831
UAIHEC5 0.64 0.51 0.672 0.737 0.848
UAIHEC6 0.553 0.47 0.597 0.628 0.786
UAIHEC7 0.62 0.486 0.637 0.694 0.797
UAIHEC8 0.529 0.393 0.55 0.566 0.761
UAIHEC9 0.532 0.494 0.55 0.566 0.764
Table 7 Hypotheses testing
UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
Path coecient T statistics P values CI Results
H1: PUAI UAIHEC 0.209 2.21 0.027 (0.038, 0.401) Supported
H2: SA UAIHEC 0.531 6.751 0.000 (0.37,0.68) Supported
H3: CQAI UAIHEC 0.136 1.872 0.061 (-0.019, 0.276) Not supported
H4: PCAI UAIHEC 0.041 0.635 0.525 (-0.074, 0.162) Not supported
Table8 highlights the f2 eect sizes for the predictors on UAIHEC, thus showing their contributions to the model’s explana-
tory power. SA emerges as the most signicant predictor, with an f2 value of 0.361. This indicates a large eect size and under-
scores its substantial inuence on UAIHEC. PUAI demonstrates a medium eect size, with an f2 value of 0.037. This suggests
Fig. 1 Structural model. UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
Table 8 Predictive relevance of f²
UAIHEC Use of Artificial Intelligence in Higher Education Curricula, SA Satisfaction, PCAI Perceived Credibility of Artificial Intelligence, CQAI Content Quality of Artificial Intelligence, PUAI Perceived Utility of Artificial Intelligence
UAIHEC
CQAI 0.022
PCAI 0.002
PUAI 0.037
SA 0.361
a modest yet meaningful contribution to the dependent variable. Similarly, CQAI has a small effect size of 0.022, reflecting a limited impact. In contrast, PCAI has an f² value of 0.002, signifying a negligible effect and minimal influence on UAIHEC. These findings emphasise the dominant role of SA in predicting UAIHEC, while the contributions of PUAI and CQAI are modest and PCAI's impact is negligible.
The predictive relevance of the model was evaluated using the blindfolding technique in SmartPLS 4, following the guidelines of Hair et al. [22], to determine the Q² value. The Q² value reflects the ability of a latent variable to predict the associated dependent variable within the model. A high Q² value indicates strong predictive relevance of the latent variable for the dependent variable, whereas a low Q² value suggests limited predictive capacity. As shown in Table 9, the Q² value for UAIHEC exceeds 0 (0.691), confirming the predictive validity of the endogenous construct in the model.
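For reference, blindfolding derives Q² from prediction errors on systematically omitted data points (a standard formulation, not reproduced from this article):

Q² = 1 − (Σ_D SSE_D) / (Σ_D SSO_D)

where, for each omission distance D, SSE_D is the sum of squared prediction errors for the omitted points and SSO_D the sum of squared observations; any value above zero indicates predictive relevance for the endogenous construct.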
5 Discussion
The ndings from this study provide crucial insights into students’ perceptions and engagement with AI tools in higher
education, emphasising the roles of satisfaction, perceived credibility, content quality, and perceived utility. These results
contribute to the broader discourse on AI adoption by rearming and extending prior research, highlighting a nuanced
interplay between these factors and their impact on students’ willingness to embrace AI in academic settings. This aligns
with prior research by Dash etal. [16], which emphasized that self-ecacy, interaction, and content quality are critical deter-
minants of user satisfaction and engagement in e-learning settings, highlighting the importance of addressing these factors
for successful AI adoption in educational environments [16].
The study conrms that AI tools oer personalized learning pathways and improve content delivery eciency, yet their
impact depends heavily on how well they are integrated into pedagogical frameworks. This is consistent with ndings by
Rapanta etal. [52], who stressed the need for a balanced approach between technology and pedagogy for meaningful
learning experiences [52]. The variance explained in this study (R2 = 0.710) highlights the strong combined inuence of the
identied predictors in which perceived utility, satisfaction, content quality, and perceived credibility inuenced students’
engagement with AI tools. This high variance suggests a complex interplay between these factors, as demonstrated by Bond
etal. [13], who noted that technology adoption is rarely inuenced by a single predictor, but rather a combination of factors
inuencing behavioural, aective, and cognitive engagement [13].
Perceived utility of AI (PUAI) emerged as a significant predictor of students' engagement with AI tools, showcasing its central role in determining technology adoption in educational settings. This is consistent with the Technology Acceptance Model (TAM) as discussed by Lijie et al. [31], which identifies perceived usefulness as a critical determinant of users' intention to use technology. The findings align with Abu Talib et al. [2], who highlighted that perceived utility drives technology adoption, particularly when it enhances performance and efficiency. The study revealed that students perceive AI tools as useful primarily due to their ability to automate routine tasks, enhance content delivery, and adapt learning experiences to individual needs. However, the modest effect size of PUAI in this study highlights its partial role in influencing AI adoption, suggesting that other factors such as emotional and experiential dimensions are equally important [16, 63].
To enhance AI's perceived utility, institutions must demonstrate its real-world relevance by integrating practical features, such as AI-powered career guidance or subject-specific simulations, that directly address students' academic and professional aspirations [55]. This is supported by Alwi and Ahmad Khan [9], who found that perceived ease of use (PEOU) and perceived usefulness (PU) significantly mediate technology readiness and decision-making in AI adoption among students.
Satisfaction (SA) was the strongest influence on the use of AI in higher education curricula, emphasizing its pivotal role in fostering engagement. This finding aligns with the Expectation-Confirmation Model (ECM), which posits that satisfaction arises when users' expectations are met or exceeded by their experiences [47]. Dash et al. [16] further emphasized that user satisfaction is a key driver of e-learning success, influencing user intention and long-term engagement. In educational contexts, satisfaction with AI tools is closely tied to their reliability, ease of use, and alignment with students' learning objectives. For example, AI platforms that deliver timely feedback, enable seamless navigation, and adapt dynamically to learners' needs are more likely to garner positive responses. These findings align with Zou et al. [70], who emphasize that addressing technical challenges and providing a positive user experience are essential to fostering satisfaction and engagement with AI tools.
Table 9 Predictive relevance of Q²
Q²predict
UAIHEC 0.691
Contrary to expectations, the content quality of AI (CQAI) did not significantly impact students' use of AI in higher education curricula. This finding deviates from prior research. For example, Almufarreh [7] emphasizes content quality as a cornerstone of effective digital learning tools. A plausible explanation for this result is that students might take content quality for granted, assuming it to be a baseline feature of any educational tool. Consequently, its influence might be overshadowed by more dynamic factors such as usability or perceived utility [13, 16]. Future research should explore how specific dimensions of content quality, such as contextual relevance, depth, and adaptability, might impact students' perceptions of AI tools. For example, AI platforms that tailor content to regional or cultural contexts might resonate more effectively with diverse student demographics [55, 69].
Interestingly, perceived credibility of AI (PCAI) had no significant influence on students' engagement with AI in higher education curricula, contradicting established literature that positions trust and credibility as fundamental to technology adoption [55]. This finding may stem from students' limited understanding of AI technologies, making it challenging to critically evaluate the credibility of AI-generated content. This aligns with Almufarreh [7], who suggested that credibility concerns are secondary to immediate functionality for most students. Institutions should prioritize transparency and accountability in AI systems by providing accessible explanations of AI processes, such as data sourcing and algorithmic decision-making, to foster greater trust in AI-generated outputs [6]. Additionally, Dash et al. [16] emphasized that incorporating mechanisms for verifying the credibility of AI outputs, such as citations for AI-generated content or peer-reviewed validation, could further enhance students' confidence in these tools.
Lastly, a key theoretical contribution of this study is its reinforcement of the Technology Acceptance Model (TAM) and the Expectation-Confirmation Model (ECM). Specifically, perceived utility (PUAI) emerged as a critical determinant of AI adoption, supporting Lijie et al. [31] and Abu Talib et al. [2], who found that perceived usefulness significantly predicts user intention. However, the findings of the present study extend TAM by demonstrating that perceived utility alone is insufficient to drive AI adoption, as emotional and experiential factors also play a substantial role [63]. This aligns with Bond et al. [13], who emphasised that technology adoption is a complex process influenced by cognitive, affective, and behavioural engagement. Therefore, institutions must enhance AI's perceived utility by incorporating features that address both academic and professional aspirations, such as AI-powered career guidance or adaptive learning models [55].
Overall, this study contributes to the body of knowledge by refining our understanding of AI adoption in educational settings. By demonstrating the complex interplay between perceived utility, satisfaction, content quality, and credibility, this research extends existing theoretical frameworks, offering a more comprehensive view of student engagement with AI. These insights have direct implications for educators and policymakers seeking to optimize AI integration in curricula, ensuring that AI tools are not only functional but also engaging and trustworthy. Hence, future studies should further examine how demographic and contextual factors mediate these relationships, enabling a more tailored approach to AI implementation in higher education.
6 Conclusion
This study emphasises the importance of AI adoption in higher education (HE), particularly within the Malaysian context. The study recognizes that student perceptions are influenced by a dynamic interplay of satisfaction, perceived utility, content quality, and credibility. While satisfaction and perceived utility of AI emerged as the strongest predictors of the use of AI in Malaysian higher education curricula, this study highlights the need for continuous refinement of AI tools to address students' evolving expectations and the unique characteristics of Malaysian HE. Institutions in Malaysia should not only focus on the technical features of AI systems but also invest in training programs to elevate both student and faculty competency in using these technologies effectively, keeping in mind the diverse linguistic, cultural, and socio-economic contexts of Malaysian learners.
Furthermore, integrating AI into Malaysian pedagogical frameworks requires addressing potential ethical and equity concerns specific to the country. Ensuring that AI tools are accessible and inclusive is vital to prevent exacerbating existing disparities between urban and rural education settings and between public and private institutions. Further, Malaysian institutions should establish robust feedback mechanisms to gather real-time insights from students and educators, enabling iterative improvements to AI systems. Such participatory approaches can foster a sense of ownership, collaboration, and alignment with the nation's educational aspirations outlined in the Malaysia Education Blueprint.
Last but not least, the study highlights the necessity of blending AI with human judgement to preserve the cultural and empathetic aspects of education unique to Malaysia's multicultural society. While AI streamlines learning processes, human oversight remains critical to ensure the quality, relevance, and ethical use of AI-generated content in line with Malaysian values. Hence, expanding research to include the perspectives of Malaysian educators, policymakers, and administrators could provide a more comprehensive understanding of the challenges and opportunities associated with AI integration in the country. Finally, cross-cultural and interdisciplinary studies within Southeast Asia and beyond could shed light on how contextual factors specific to Malaysia, such as language diversity, infrastructure gaps, and government policies, shape AI adoption. This would offer valuable insights into the global education system while positioning Malaysia as an innovative, inclusive education leader.
Acknowledgements Not applicable.
Author contributions All authors: conception and design. Shahazwan, Anwar, and Lijie: wrote the main manuscript text. Zamzami and Mohd Helme: advisors and proofreaders. All authors reviewed the manuscript.
Funding The authors received no financial support for the research, authorship, and/or publication of this article.
Data availability The datasets generated and/or analysed during the current study are not publicly available because they were provided by a collaborating institution and contain sensitive information with usage restrictions. However, they are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate This study was conducted in accordance with the ethical guidelines and regulations established by the University of Malaya Research Ethics Committee (UMREC). Data were collected after obtaining ethical approval from UMREC, under reference number UM.TNC2/UMREC_2261. In addition, all participants approved the informed consent form before taking part in the self-reported survey.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Appendix: Items retained after CFA
Factor Item Description
UAIHEC UAIHEC1 Artificial intelligence is relevant to my everyday life in higher education curricula
UAIHEC2 I am curious about discovering new AI technologies in higher education curricula
UAIHEC3 I am confident I will perform well on AI learning tools in higher education curricula
UAIHEC4 I will continue to use AI learning tools in the future
UAIHEC5 I will keep myself updated with the latest AI learning tools in higher education curricula
UAIHEC6 I often try to recommend and explain AI learning tools to my classmates or friends
UAIHEC7 I try to work with my classmates to complete tasks and projects using AI learning tools
UAIHEC8 I know how to use AI learning tools (e.g., Siri, chatbot)
UAIHEC9 I can evaluate AI learning tools for different situations
UAIHEC10 I can compare the differences between AI learning tools
SA SA1 The use of AI tools for education greatly enhances my learning
SA2 The practice of reviewing content and material for education enhances my learning
SA3 It is helpful to be able to contact the AI tools
SA4 AI tools greatly enhanced my ability to learn
SA5 The information obtained from AI tools is valuable
PCAI PCAI1 The content and material generated and provided by AI tools in education are believable
PCAI2 The content and material generated and provided by AI tools in education are accurate
PCAI3 The content and material generated and provided by AI tools in education are trustworthy
PCAI4 The content and material generated and provided by AI tools in education are free from any bias
PCAI5 The content and material generated and provided by AI tools in education are complete
CQAI CQAI1 The content generated by the AI tools is easy to understand
CQAI2 The content generated by the AI tools is new
CQAI3 The content generated by the AI tool is refreshing
CQAI4 The content generated by the AI tools is popular
CQAI5 The content generated by the AI tools is relevant for users
PUAI PUAI1 I am confident in the quality and accuracy of AI tools
PUAI2 In AI tools, I received what I paid for
PUAI3 The educational materials provided by AI tools were adequate
PUAI4 Having AI tools gave me more control over my learning goals
PUAI5 I learned something by using AI tools that I did not know before
PUAI6 The information I received has influenced how I will manage my educational goals in the future
PUAI7 What I learned can help reduce my chances of failing exams
References
1. Abdalla RA. Higher education students’ trust and use of ChatGPT: empirical evidence. Int J Technol Enhanced Learn. 2025;17(1):81–105.
2. Abu Talib M, Bettayeb AM, Omer RI. Analytical study on the impact of technology in higher education during the age of COVID-19:
systematic literature review. Educ Inf Technol. 2021;26(6):6719–46.
3. Ahmad K, Iqbal W, El-Hassan A, Qadir J, Benhaddou D, Ayyash M, Al-Fuqaha A. Data-driven artificial intelligence in education: a comprehensive review. IEEE Trans Learn Technol. 2024;17:12–31. https://doi.org/10.1109/tlt.2023.3314610.
4. Al-Abdullatif AM, Alsubaie MA. ChatGPT in learning: assessing students' use intentions through the lens of perceived value and the influence of AI literacy. Behav Sci. 2024;14(9):845.
5. Al-Emran M, Abu-Hijleh B, Alsewari AA. Examining the impact of Generative AI on social sustainability by integrating the information system success model and technology-environmental, economic, and social sustainability theory. Educ Inf Technol. 2024. https://doi.org/10.1007/s10639-024-13201-0.
6. Al-Emran M, AlQudah AA, Abbasi GA, Al-Sharafi MA, Iranmanesh M. Determinants of using AI-based chatbots for knowledge sharing:
evidence from PLS-SEM and fuzzy sets (fsQCA). IEEE Trans Eng Manage. 2023;71:4985–99.
7. Almufarreh A. Determinants of students' satisfaction with AI tools in education: a PLS-SEM-ANN approach. Sustainability. 2024;16(13):5354.
8. Almusfar LA. Improving learning management system performance: a comprehensive approach to engagement, trust, and adaptive learning. IEEE Access. 2025. https://doi.org/10.1109/ACCESS.2025.3550288.
9. Alwi NH, Ahmad Khan BN. Technology readiness and adoption of artificial intelligence among accounting students in Malaysia. Int J Religion. 2024;5(10):4029–38. https://doi.org/10.61707/e30gnv95.
10. Amdan MAB, Janius N, Kasdiah MAHB. Concept paper: efficiency of artificial intelligence (AI) tools for STEM education in Malaysia. Int J Sci Res Arch. 2024;12(2):553–9.
11. Basri WS. Enhancing AI auto efficacy: role of AI knowledge, information source, behavioral intention and information & communications technology learning. Profesional de la información. 2024;33(3):1–16.
12. Bhattacherjee A. Understanding information systems continuance: an expectation-confirmation model. MIS Q. 2001;25(3):351. https://doi.org/10.2307/3250921.
13. Bond M, Buntins K, Bedenlier S, Zawacki-Richter O, Kerres M. Mapping research in student engagement and educational technology
in higher education: a systematic evidence map. Int J Educ Technol High Educ. 2020;17:1–30.
14. Chen X, Zou D, Xie H, Cheng G, Liu C. Two decades of artificial intelligence in education. Educ Technol Soc. 2022;25(1):28–47.
15. Chisom ON, Unachukwu CC, Osawaru B. Review of AI in education: transforming learning environments in Africa. Int J Appl Res Soc
Sci. 2023;5(10):637–54.
16. Dash G, et al. COVID-19 and E-learning adoption in higher education: a multi-group analysis and recommendation. Sustainability. 2022;14(8799):1–20. https://doi.org/10.3390/su14148799.
17. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 1989;13:319–40.
18. Djokic I, Milicevic N, Djokic N, Maleic B, Kalas B. Students' perceptions of the use of artificial intelligence in educational services. Amfiteatru Econ. 2024;26(65):294–310.
19. Faraon M, Rönkkö K, Milrad M, Tsui E. International perspectives on artificial intelligence in higher education: an explorative study of students' intention to use ChatGPT across the Nordic countries and the USA. Educ Inf Technol. 2025. https://doi.org/10.1007/s10639-025-13492-x.
20. Foroughi B, Senali MG, Iranmanesh M, Khanfar A, Ghobakhloo M, Annamalai N, Naghmeh-Abbaspour B. Determinants of intention
to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA. Int J Human Comput Interact. 2024;40(17):4501–20.
21. Galdames IS. Impact of artificial intelligence on higher education: a literature review. In: International conference in information
technology and education. Cham: Springer Nature Switzerland; 2024. p. 373–92
22. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 8th ed.; 2019. www.cengage.com/highered.
23. Hair JF, Tomas G, Hult M, Ringle CM, Sarstedt M. A primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). 3rd ed.
Sage Publishing; 2022.
24. Jacobs J, Scornavacco K, Clevenger C, Suresh A, Sumner T. Automated feedback on discourse moves: teachers’ perceived utility of a
professional learning tool. Educ Tech Res Dev. 2024;72(3):1307–29.
25. Jain S, Jain R. Role of artificial intelligence in higher education—an empirical investigation. IJRAR-Int J Res Anal Rev. 2019;6(2):144–50.
26. Johnston AC, Warkentin M. The influence of perceived source credibility on end user attitudes and intentions to comply with recommended IT actions. J Org End User Comput. 2010;22(3):1–21. https://doi.org/10.4018/joeuc.2010070101.
27. Kanont K, Pingmuang P, Simasathien T, Wisnuwong S, Wiwatsiripong B, Poonpirome K, Khlaisang J. Generative-AI, a learning assistant? Factors influencing higher-Ed students' technology acceptance. Electr J e-Learn. 2024;22(6):18–33.
28. Kim E-D, Chae M-S. An empirical study on the differences of relationship between content quality factors and user satisfaction on mobile contents based on user characteristics. J Korea Acad Ind Cooper Soc. 2013;14(4):1957–68. https://doi.org/10.5762/KAIS.2013.14.4.1957.
29. Kline RB. Principles of structural equation modeling. 2nd ed. Guilford Press; 2005.
30. Lee S, Song KS. Teachers’ and students’ perceptions of AI-generated concept explanations: implications for integrating generative AI in
computer science education. Comput Educ Artif Intell. 2024;7: 100283.
31. Lijie H, Mat Yusoff S, Mohamad Marzaini AF. Influence of AI-driven educational tools on critical thinking dispositions among university students in Malaysia: a study of key factors and correlations. Educ Inf Technol. 2024;30:1–25.
32. Ma D, Akram H, Chen IH. Artificial intelligence in higher education: a cross-cultural examination of students' behavioral intentions and attitudes. Int Rev Res Open Distrib Learn. 2024;25(3):134–57.
33. Mat Yusoff S, Lijie H, Marzaini AFM, Basal MH. An investigation of the theory of planned behavior in predicting Malaysian secondary school teachers' use of ICT during teaching and learning sessions. J Nusantara Stud (JONUS). 2024;9(1):97–120.
34. Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. Acad Manag Rev. 1995;20(3):709–34.
35. Ministry of Education Malaysia. Malaysia Education Blueprint 2015–2025 (Higher Education). Putrajaya: Ministry of Education Malaysia;
2015.
36. Mishra P, Pandey CM, Singh U, Gupta A, Sahu C, Keshri A. Descriptive statistics and normality tests for statistical data. Ann Card Anaesth.
2019;22(1):67–72.
37. Mitchell ML, Jolley JM. Research design explained. Wadsworth: Cengage Learning; 2013.
38. Mizan NAM, Norman H. Pre-university students' perception in using generative AI: a study at a Malaysian private university. Int J Acad Res Bus Soc Sci. 2024. https://doi.org/10.6007/IJARBSS/v14-i8/22455.
39. Mizan NA, Norman H. Persepsi Pelajar Pra-Universiti dalam Menggunakan AI Generatif: Kajian di Universiti Swasta Malaysia [Pre-university students' perceptions of using generative AI: a study at a Malaysian private university]. Jurnal Pendidikan Bitara UPSI. 2024;17(2):138–49.
40. Mogavi RH, Deng C, Kim JJ, Zhou P, Kwon YD, Metwally AHS, et al. ChatGPT in education: a blessing or a curse? A qualitative study exploring early adopters' utilization and perceptions. Comput Hum Behav Artif Humans. 2024;2(1):100027.
41. Moser CA. Quota sampling. J R Stat Soc Ser A Gen. 1952;115(3):411–23.
42. Mustapha R, Rosly NI, Yasin AA, Lambin R, Saad F, Kashean S. Knowledge and competency of vocational teacher trainees in the field of artificial intelligence (AI): a case study in a Malaysian public university. In: AIP conference proceedings, vol. 2750, no. 1. AIP Publishing; 2024.
43. Mustofa RH, Kuncoro TG, Atmono D, Hermawan HD. Extending the Technology acceptance model: the role of subjective norms, ethics,
and trust in AI Tool adoption among students. Comput Educ Artif Intell. 2025;8: 100379.
44. Ng DTK, Lee M, Tan RJY, Hu X, Downie JS, Chu SKW. A review of AI teaching and learning from 2000 to 2020. Educ Inf Technol.
2023;28(7):8445–501.
45. Ng DTK, Wu W, Leung JKL, Chiu TKF, Chu SKW. Design and validation of the AI literacy questionnaire: the affective, behavioral, cognitive and ethical approach. Br J Edu Technol. 2024;55(3):1082–104. https://doi.org/10.1111/bjet.13411.
46. Ng SZ. The effects of AI tools on undergraduates' academic writing proficiency in Malaysia (Doctoral dissertation, UTAR); 2024.
47. Ngo TTA, Tran TT, An GK, Nguyen PT. ChatGPT for educational purposes: investigating the impact of knowledge management factors on student satisfaction and continuous usage. IEEE Trans Learn Technol. 2024. https://doi.org/10.1109/TLT.2024.3383773.
48. Noel DD, Justin KGA, Alphonse AK, Désiré LH, Dramane D, Nafan D, Malerba G. Normality assessment of several quantitative data transformation procedures. Biostat Biom Open Access J. 2021;10:51–65.
49. Ou M, Zheng H, Zeng Y, Hansen P. Trust it or not: understanding users' motivations and strategies for assessing the credibility of AI-generated information. New Media Soc. 2024. https://doi.org/10.1177/14614448241293154.
50. Puiu S, Udriștioiu MT. The behavioral intention to use virtual reality in schools: a technology acceptance model. Behav Sci. 2024;14(7):615. https://doi.org/10.3390/bs14070615.
51. Rana RL, Adamashvili N, Tricase C. The impact of blockchain technology adoption on tourism industry: a systematic literature review.
Sustainability. 2022;14(12):7383.
52. Rapanta C, et al. Balancing technology, pedagogy and the new normal: post-pandemic challenges for higher education. Postdig Sci Educ. 2021. https://doi.org/10.1007/s42438-021-00249-1.
53. Ravšelj D, Keržič D, Tomaževič N, Umek L, Brezovar N, Iahad AN, et al. Higher education students' perceptions of ChatGPT: a global study of early reactions. PLoS ONE. 2025;20(2):e0315011.
54. Sae-Tae K, Ling J, Wang Q. The impact of user addiction on continuance intention to use streaming platforms: incorporating expectation confirmation model and personality traits. Front Commun. 2024. https://doi.org/10.3389/fcomm.2024.1410975.
55. Saihi A, Ben-Daya M, Hariga M. The moderating role of technology proficiency and academic discipline in AI-chatbot adoption within higher education: insights from a PLS-SEM analysis. Educ Inf Technol. 2024;30:1–39.
56. Saman HM, Noor SM, Isa CMM, Lian OC, Narayanan G. Embracing artificial intelligence as a catalyst for change in reshaping Malaysian higher education in the digital era: a literature review. In: International conference on innovation & entrepreneurship in computing, engineering & science education (InvENT 2024). Atlantis Press; 2024. p. 633–43.
57. Sauls M. Perceived credibility of information on internet health forums (Doctoral dissertation, Clemson University). Clemson University TigerPrints; 2018. https://tigerprints.clemson.edu/all_dissertations/2110.
58. Saxena AK, García V, Amin MR, Salazar JMR, Dey S. Structure, objectives, and operational framework for ethical integration of artificial intelligence in educational. Sage Sci Rev Educ Technol. 2023;6(1):88–100.
59. Shin D. User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of fairness, accountability, transparency, and explainability. J Broadcast Electron Media. 2020;64(4):541–65.
60. Stöhr C, Ou AW, Malmström H. Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Comput Educ Artif Intell. 2024;7:100259.
61. Talan T, Kalınkara Y. The role of artificial intelligence in higher education: ChatGPT assessment for anatomy course. Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi [International Journal of Management Information Systems and Computer Science]. 2023;7(1):33–40.
62. Tian S, Zhang J, Chen L, Liu H, Wang Y. Random sampling-arithmetic mean: A simple method of meteorological data quality control based
on random observation thought. IEEE Access. 2020;8:226999–7013.
63. Vázquez-Parra JC, Henao-Rodríguez C, Lis-Gutiérrez JP, Palomino-Gámez S, Suárez-Brito P. Perception of AI tool adoption and training: initial validation using GSEM method. Appl Comput Inf. 2024. https://doi.org/10.1108/aci-09-2024-0370.
64. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quart. 2003;27:425–78.
65. Wang F, Shi X. Understanding AI acceptance and usage in history education: an application of the UTAUT model among Malaysian higher education students. Preprints; 2024. https://doi.org/10.20944/preprints202411.1542.v1.
66. Wolf EJ, Harrington KM, Clark SL, Miller MW. Sample size requirements for structural equation models: an evaluation of power, bias, and
solution propriety. Educ Psychol Measur. 2013;73(6):913–34.
67. Yingsoon GY, Zhang S, Chua NA, Chen Y, Xiaoyao T. Embracing cultural dimensions in AI-enhanced sustainability education: tailoring pedagogies for a global learner community. In: Rethinking the pedagogy of sustainable development in the AI era. IGI Global Scientific Publishing; 2025. p. 37–60.
68. Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int J Educ Technol High Educ. 2019;16(1):1–27.
69. Zhao Y, Zhao M, Shi F. Integrating moral education and educational information technology: a strategic approach to enhance rural teacher
training in universities. J Knowl Econ. 2023;15:1–41.
70. Zou X, Su P, Li L, Fu P. AI-generated content tools and students’ critical thinking: insights from a Chinese university. IFLA J. 2024;50(2):228–41.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.