© The Author(s) 2024. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
RESEARCH ARTICLE
Delckeretal. Int J Educ Technol High Educ (2024) 21:18
https://doi.org/10.1186/s41239-024-00452-7
International Journal of Educational
Technology in Higher Education
First-year students' AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education
Jan Delcker1* , Joana Heil1, Dirk Ifenthaler1,2, Sabine Seufert3 and Lukas Spirgi3
Abstract
The influence of Artificial Intelligence on higher education is increasing. As important drivers for student retention and learning success, generative AI-tools like translators, paraphrasers and, most recently, chatbots can support students in their learning processes. The perceptions and expectations of first-year students related to AI-tools have not yet been researched in depth. The same can be stated about the necessary requirements and skills for the purposeful use of AI-tools. This research examines the relationship between first-year students' knowledge, skills and attitudes and their use of AI-tools for their learning processes. Analysing the data of 638 first-year students revealed that attitudes towards AI significantly explain the intended use of AI-tools. Additionally, the perceived benefits of AI-technology are predictors for students' perception of AI-robots as cooperation partners for humans. Educators in higher education must facilitate students' AI-competencies and integrate AI-tools into instructional designs. As a result, students' learning processes will be improved.
Keywords: Artificial Intelligence, Higher education, Learning process, AI tool, Chatbot
*Correspondence: delcker@uni-mannheim.de
1 University of Mannheim, L4, 1, 68161 Mannheim, Germany
2 Curtin University, Perth, Australia
3 University of St. Gallen, St. Gallen, Switzerland

Introduction
AI-robots are agents programmed to fulfill tasks traditionally done by humans (Dang & Liu, 2022). The number of interactions between humans and AI-robots is increasing, which is a strong indicator of the integration of AI-technology into the lives of humans (Kim et al., 2022). A popular example is the deployment of chatbots on a website. These AI-robots can guide users and respond to basic user requests (Larasati et al., 2022). The technology behind semi-automated and fully automated human-like task fulfillment is based on AI-methods and AI-algorithms (Gkinko & Elbanna, 2023). These AI-methods and -algorithms form the main programming characteristics of AI-robots (Woschank et al., 2020). The features lead to an increasing similarity in the performance of humans and AI-robots (Byrd et al., 2021). Additionally, the appearance and behavior of AI-robots are becoming more human-like (Hildt, 2021). While most machines are easily distinguishable from humans, AI-robots might be hard to identify (Desaire et al., 2023), and the ability to identify AI-robots is one of the many challenges accompanying these new technologies. As a result, humans even start to attribute AI-robots with human-like understanding, as well as mental capacities (Roesler et al., 2021).
Accordingly, new and changing demands in humans' digital competencies are required to deal with the various applications of AI-robots in all sectors of human life (Seufert & Tarantini, 2022). One of these fields is higher education, which is strongly affected by the introduction of AI-technology and AI-robots (Ouyang et al., 2022; Popenici & Kerr, 2017). Future applications for AI-technology can be found at all levels of higher education (Ocaña-Fernández et al., 2019). On the student level, virtual AI teaching assistants (Kim et al., 2020; Liu et al., 2022) and intelligent tutoring systems (Azevedo et al., 2022; Latham, 2022) have the capability to guide individual learner paths (Brusilovsky, 2023; Rahayu et al., 2023). Educators might implement automated grading and assessment tools (Heil & Ifenthaler, 2023; Celik et al., 2022) or create educational content with generative AI (Bozkurt & Sharma, 2023; Kaplan-Rakowski et al., 2023). The administration of higher education institutions has to adapt its policies to the new technology (Chan, 2023), while incorporating learning analytics tools to improve study conditions, reduce drop-out rates, and adapt study programs (Aldowah et al., 2019; Ifenthaler & Yau, 2020; Ouyang et al., 2023; Tsai et al., 2020). These developments are embedded into national policy-making processes, such as creating ethics guidelines (Jobin et al., 2019) and competence frameworks (Vuorikari et al., 2022) for AI-technology.
According to recent studies, first-year students enter institutions of higher learning with various perceptions and expectations about university life, for instance, in terms of social aspects, learning experiences, and academic support (Houser, 2004). While students' generic academic skills appear to be well-established for coping with higher education requirements, their competencies related to AI seem to be limited (Ng et al., 2023).
As of now, there are no conceptual frameworks that cover the use of human-like AI-technology, focusing on first-year students within the context of higher education. Thus, this study targets this research gap. For this purpose, seven functionalities of AI-tools have been conceptualized for their application in the context of higher education. This conceptualization is a helpful differentiation to analyze the intent and frequency of use, as well as possible indicators that might affect intent and frequency of use. As a result, implications for the further implementation of AI-tools in higher education learning processes will be derived.
Background
First-year students
First-year students' perceptions and expectations and how they cope with academic requirements in higher education have been identified as important factors for learning success and student retention (Mah & Ifenthaler, 2018; Tinto, 1994; Yorke & Longden, 2008). Several studies identified a mismatch between first-year students' perceptions and academic reality (Smith & Wertlieb, 2005). Furthermore, research indicates that many first-year students do not know what is expected at university and are often academically unprepared (Mah & Ifenthaler, 2017; McCarthy & Kuh, 2006). Students' preparedness is particularly relevant concerning generic skills such as academic competencies, which they should possess when entering university (Barrie, 2007). Numerous aspects, including sociodemographic features, study choices, cognitive ability, motivation, personal circumstances, and academic and social integration, have been linked to first-year students' learning success and retention in higher education (Bean & Eaton, 2020; Sanavi & Matt, 2022). Mah and Ifenthaler (2017) identified five academic competencies for successful degree completion: time management, learning skills, self-monitoring, technology proficiency, and research skills. Accordingly, coping with academic requirements is an important driver of student retention in higher education (Thomas, 2002). Moreover, students' perceptions of their first year can affect student success (Crisp et al., 2009).
More recently, it has been argued that competencies related to AI are an important driver for student retention and learning success (Bates et al., 2020; Mah, 2016; Ng et al., 2023). Nonetheless, first-year students' perceptions, expectations, and academic competencies for coping with academic requirements related to AI-tools have not yet been researched in depth.
Conceptualization ofAI‑tools inhigher education
Dang and Liu (2022) propose a differentiation of AI-robots, which is also used in this study. They categorize AI-robots into "mindful" (AI-robots with increasingly human characteristics) and "mindless" (AI-robots with machine characteristics) tools. The so-called mindful AI-robots can perform more complex tasks, react to the prompts of the users in a more meaningful way, and are designed to act and look like humans. On the other hand, mindless AI-robots perform less complex tasks and appear more like machines. In the following, a short overview of AI-tools is provided, including their main functionality and examples of practical use in higher education learning processes:
Mindless AI-robots
1) Translation text generators: These tools use written text as input and translate the text into a different language. Translation text generators can help to quickly translate text into the language a student is most familiar with or to translate into a language that is required by the assignment. Many study programs require students to hand in (some) papers in a language different from the study program's language (Galante, 2020). Two of the most prominent translation text generators are Google Translate and DeepL (Martín-Martín et al., 2021).
2) Summarizing/rephrasing text generators: These tools use written text as input and can change the structure of the text. On the one hand, they are used to extract critical information, keywords, or main concepts from structured text, reducing the complexity of the input text. In this way, they help the user focus on the input text's most important aspects, allowing them to get a basic understanding of complex frameworks. Summarizing text, such as research literature or lecture slides, is an important learning strategy in the context of higher education (Mitsea & Drigas, 2019). On the other hand, these text generators can rephrase text input, an important task when writing research papers: in most cases, written research assignments include a theoretical chapter based on existing research literature. Students must rephrase and restructure existing research literature to show their understanding of concepts and theories (Aksnes et al., 2019). Quillbot is an example of such a rephrasing tool (Fitria, 2021).
3) Writing assistants: Writing assistants can enhance the quality of written text. These tools automatically check for grammar and spelling mistakes while the text is being created. Furthermore, these tools can give recommendations to the writer to improve the language used: they can provide suggestions for alternative formulations to avoid colloquial language and unnecessary repetition. Writing assistants are usually part of word processors (e.g., Microsoft Word), but standalone programs or extensions such as Grammarly also exist (Koltovskaia, 2020).
4) Text generators: These tools can automatically generate written text. Text generators take short prompts as input and produce text based on this input. The output text is mainly used for blog entries, text-based social media posts, or Twitter messages. They can be differentiated from chatbots as they cannot produce more complex pieces of text. WriteSonic is such a text generator tool (Almaraz-López et al., 2023).
Mindful AI-robots
5) Chatbots: Chatbots are applications that simulate human interactions (Chong et al., 2021). In the context of business, they are generally used to answer customer questions automatically. In education, these chatbots help to guide learners through online environments or administrative processes. With the release of ChatGPT, a new kind of chatbot was introduced. These chatbots can produce various output formats, including working algorithms, presentations, or pictures, based on prompts that are very similar to human interactions (Almaraz-López et al., 2023; Fauzi et al., 2023; Fuchs, 2023). Students can use chatbots to automatically produce content of the kind traditionally required in instructional designs, especially in final assessments.
6) Virtual avatars: Virtual avatars are digital representations of living beings. They can be used in online classroom settings to represent teachers and learners alike. In these classroom settings, virtual representations, such as Synthesia, have been shown to improve students' learning performance compared to classes without virtual representation (Herbert & Dołżycka, 2022).
7) Social-humanoid robots: These tools not only simulate human behavior and perform human tasks, but in many cases, social-humanoid robots are also built close to human complexity, featuring hands, legs, and faces (van Pinxteren et al., 2019). They can perform human-like mimicry to various degrees. Currently, these social-humanoid robots are used as servers in restaurants and are tested in medical and educational institutions (Henschel et al., 2021).
AI‑competencies andAI‑ethics
The European DigComp Framework 2.2 is a comprehensive framework that organizes different components of digital competencies deemed essential for digitally competent citizens (Vuorikari et al., 2022). Within this framework, AI literacy can be found in three dimensions: knowledge, proficiency, and attitudes. Basic ideas about the functionality and application areas of AI-technology are allocated to the knowledge dimension. This dimension also holds theoretical knowledge about AI laws and regulations, such as the European data protection regulation. The ability of a person to take advantage of AI and use it to improve various aspects of their life can be found in the proficiency dimension. Successfully deploying AI-technology to solve problems requires the capability to choose adequate tools and consequently control these chosen tools. Competent citizens must be able to form an opinion on AI-technology's benefits, risks, and disadvantages. This allows them to participate in political and social decision-making processes. Through a meta-analysis of guidelines, Jobin et al. (2019) identify eleven ethical principles which must be considered when working with AI, such as transparency, justice, fairness and trust. Hands-on examples are the guidelines by Diakopoulos et al. (2016) as well as Floridi et al. (2018). The attitude dimension holds these competencies. As with many technological advancements, higher education will be one of the main drivers for facilitating digital AI-competencies (Cabero-Almenara et al., 2023; Ehlers & Kellermann, 2019).
Furthermore, AI-technology will change the various learning processes within higher education (Kim et al., 2022). This includes the perspectives of educators (Kim et al., 2022), learners (Zawacki-Richter et al., 2019), and administration alike (Leoste et al., 2021). Although research indicates these impacts, research on AI-robots in higher education is scarce, mainly because higher education institutions rarely use the different applications broadly (Kim et al., 2022; Lim et al., 2023).
The functionalities of the different tools offer students various potential applications for learning processes. Following the Unified Theory of Acceptance and Use of Technology (UTAUT), the intent to use new digital tools as well as the actual usage of technology might be influenced by the expectation of performance, the expectation of effort, social influence, and facilitating conditions (Venkatesh et al., 2003). Strzelecki (2023) states that the assumptions made by UTAUT also hold for AI-tools, more specifically ChatGPT, although he could not identify a significant effect from facilitating conditions. In accordance with the DigComp 2.2 framework, this study focuses on students' attitudes, proficiency, and knowledge regarding AI-technology as additional constructs influencing the intent to use and actual usage of AI-tools.
Furthermore, the study builds on the considerations by Dang and Liu (2022) and examines which constructs influence students' perception of AI-technology as competition and cooperation for humans: research in the field of AI uncovers a range of possible outcomes from increasing AI integration into human society (Einola & Khoreva, 2023). Some argue that AI-technology will compete with humans in the workplace, leading to massive job losses (Zanzotto, 2019) and the deskilling of human workers (Li et al., 2023). On the other hand, AI has the potential to be a cooperation partner for humans by automating processes (Bhargava et al., 2021; Joksimovic et al., 2023) or relieving humans from physical and psychological stress (Raisch & Krakowski, 2021).
Hypotheses
This research project aims to better understand first-year students' perceptions as well as the intended and de facto use of AI-tools. While AI-competencies are understood as an essential driver for learning success and student retention (Ng et al., 2023), the following hypotheses emerge from the research gaps identified for the context of higher education:

Hypothesis 1 The underlying constructs of AI-competencies (skills, attitude, knowledge) have a positive effect on the intention to use AI-robots, while the intention to use AI-robots has a positive effect on the actual use of AI-robots.
Hypothesis 2a Students’ AI-competencies and the perceived benefits of AI-technology
are predictors for students’ perception of AI-robots as cooperation partners for humans.
Hypothesis 2b Students’ AI-competencies and the perceived risks of AI-technology
are predictors for students’ perception of AI-robots as competition for humans.
Method
Data collection andparticipants
An online questionnaire was designed to collect data from first-year students at a German and a Swiss university. Potential participants were asked to take part in the survey through an e-mail, which was sent through the universities' e-mail systems. In total, N = 638 first-year students participated in the survey. On average, they were 20.62 years old, with a standard deviation of 2.25 years. Of the N = 638 students, N = 309 identified as male, N = 322 as female, and N = 7 as non-binary. The lowest average use of the mindless tools could be found in paraphrasing and summarizing tools (M = 1.13, SD = 1.51). The use of online writing assistants was slightly higher (M = 1.94, SD = 1.76), and the highest average usage could be found in online translation tools (M = 3.53, SD = 1.18). The average use of mindless robots was relatively low (M = 2.2, SD = 1.05). The willingness to use the robots ranged from the lowest in virtual avatars (M = 2.23, SD = 1.13) to the highest in online translation tools (M = 3.16, SD = 1.17).
Instrument
The online questionnaire consists of three parts. The instrument's first part comprises questions regarding knowledge, skills, and attitudes regarding AI-technology (Vuorikari et al., 2022). The different AI-robots are presented in part 2 of the questionnaire. For each tool, current and intended usage was gathered, following the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). The items were formulated to match the different tools with those tasks that are relevant for students, such as writing assignments or preparing for exams. In addition, ethical considerations for each tool were prompted (Vuorikari et al., 2022). The actual use of the robots by the participants was evaluated with a 6-point Likert scale and their potential willingness to use them with the help of a 5-point Likert scale. The third part of the instrument summarizes items that collect demographic data. The instrument can be found in Additional file 1.
Analysis
A path analysis was conducted based on the factors of AI-competence, taken from the DigComp 2.2 framework (skills, attitude, knowledge), in combination with the UTAUT model's assumption that the intention to use technology influences the actual use of AI-tools. A visualization of the model can be found in Fig. 1. The path analysis was done with RStudio, more specifically, the package lavaan (Rosseel, 2012).
Multiple linear regression analyses were conducted in RStudio to answer Hypotheses 2a and 2b.
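The logic of this path model can also be sketched outside of lavaan: in a recursive path model, each path coefficient is the standardized regression weight of one structural equation. The following Python sketch illustrates this with synthetic data; the variable names and effect sizes are illustrative only and do not reproduce the study data or the lavaan estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 638  # sample size matching the study

# Synthetic data for illustration only (not the study data); effect sizes
# loosely mimic the reported path coefficients.
skills = rng.standard_normal(n)
attitude = rng.standard_normal(n)
knowledge = rng.standard_normal(n)
intention = 0.26 * attitude - 0.10 * skills + rng.standard_normal(n)
use = 0.33 * intention + rng.standard_normal(n)

def standardized_coefs(y, *predictors):
    """OLS coefficients on z-scored variables, i.e. standardized betas."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

# Structural equations of the recursive path model:
#   Intention ~ Skills + Attitude + Knowledge
#   Use ~ Intention
b_intention = standardized_coefs(intention, skills, attitude, knowledge)
b_use = standardized_coefs(use, intention)
print("Intention ~ skills, attitude, knowledge:", np.round(b_intention, 2))
print("Use ~ intention:", np.round(b_use, 2))
```

Dedicated SEM software such as lavaan additionally estimates all equations jointly and provides global fit indices (CFI, TLI, RMSEA); the sketch above only recovers the individual path coefficients.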
Results
Hypothesis 1: theinuence ofskills, attitude, andknowledge ontheintended use
ofAI‑tools
e model has a relative well fit, with a non-significant chi-square (3, 638) = 7.3, p = 0.06,
and the fit Comparative Fit Index (CFI) = 0.96, above the respective cut-off value of 0.95.
e Tucker-Lewis Index (TLI) = 0.91 is slightly lower than 0.95. e RMSEA = 0.05 is
below 0.08.
e results indicate a significant positive influence of attitude (ß = 0.26, p < 0.01) and
a significant negative influence of skills (β = 0.1, p = 0.02) on the intention to use the
tools. Knowledge seems to have no significant impact (β = 0.06, p = 0.19). Further-
more, the intention to use the AI-tools significantly predicts their actual use (β = 0.33,
p < 0.01, R2 = 0.11). e path analysis is shown in Fig.1.
Hypothesis 2a: perceived benefits as indicators for AI as cooperation partner
A multiple linear regression was conducted to analyze the influencing factors on students' rating of AI as cooperation partners. Concerning students' rating of AI as a cooperation opportunity, the influence of AI-competence and the perceived benefits of AI were included in the analysis. Both factors are significant predictors and explain 15.41% of the variation in the estimation of AI as a cooperation possibility for humans, F(2, 635) = 57.84, p < 0.01. Both AI-competence, β = 0.22, p < 0.01, t(637) = 5.9, and perceived benefits, β = 0.27, p < 0.01, t(637) = 7.2, are significant predictors.
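The reported variance explanation and F-statistic follow from standard ordinary-least-squares quantities. A minimal sketch with synthetic data (the coefficients below merely mimic the reported betas; the variable names are illustrative and this is not the study data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 638  # sample size matching the study

# Illustrative synthetic data: two predictors of the "AI as cooperation" rating
ai_competence = rng.standard_normal(n)
perceived_benefits = rng.standard_normal(n)
cooperation = 0.22 * ai_competence + 0.27 * perceived_benefits + rng.standard_normal(n)

# OLS with an intercept column
X = np.column_stack([np.ones(n), ai_competence, perceived_benefits])
beta, *_ = np.linalg.lstsq(X, cooperation, rcond=None)

# R^2 from residual and total sums of squares
resid = cooperation - X @ beta
ss_res = resid @ resid
ss_tot = ((cooperation - cooperation.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

# F-statistic for the overall regression with k predictors
k = 2
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
print(f"R^2 = {r2:.3f}, F({k}, {n - k - 1}) = {f_stat:.1f}")
```

With effect sizes of this magnitude, the sketch yields an R² of roughly the order reported in the study, illustrating how a seemingly modest share of explained variance can still produce a clearly significant F-test at N = 638.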
Fig. 1 Path analysis: skills, attitudes, and knowledge as predictors for intended and de facto use of AI-tools

Hypothesis 2b: perceived risks as indicators for AI as competition
A multiple linear regression was conducted to analyze the influencing factors on students' rating of AI as a competitor for humans. When considering the influence of perceived risks and AI-competence on students' rating of AI as competition, both factors explain 2.26% of the variation in the dependent factor, F(2, 635) = 7.33, p < 0.01. While AI-competence is a significant predictor, β = 0.09, t(637) = 10.2, p < 0.01, perceived risk is not, β = 0.03, t(637) = 1.64, p = 0.10.
Discussion
Findings
The analyzed data provides insights into the actual use and implementation of AI-tools in students' learning processes in their entry phase. So far, mindless AI-tools are favored by the participants compared to mindful tools. These mindless AI-tools provide useful functionalities regarding tasks that can be considered typical for higher education programs, such as written papers, presentations, or reports (Flores et al., 2020; Medland, 2016). These functionalities include translations (Einola & Khoreva, 2023) or summaries (Fitria, 2021). The analysis results show that the intention to use these tools is affected by students' perceived skills, knowledge, and attitudes (Venkatesh et al., 2003). A positive attitude has a positive effect on the intended use of AI-tools. A positive attitude includes a general interest in and openness towards AI-technology, but also a strong interest in a critical discussion about AI-technology. Students' curiosity about the new technology leads to factual testing and might give students a better understanding of what the AI-tools have to offer them in practice, reflecting on the challenges and opportunities of AI-technology. The findings of the path analysis indicate that proficiency in controlling the tools does not have a positive effect on the intended use. This result can be explained through the aforementioned importance of attitude towards AI-technology (Almaraz-López et al., 2023; Vuorikari et al., 2022). Students' curiosity about the new technology might outweigh their perceived need for a distinct AI proficiency.
Additionally, many AI-tools can be easily accessed and give the impression of being easy to use. The same argument holds for the construct of knowledge. The students' intention to use AI-tools for learning processes appears to be independent of their theoretical knowledge of the systems' internal functionalities. While this knowledge might help students to better understand the results they receive from AI-tools or increase their ability to formulate adequate prompts (Zamfirescu-Pereira et al., 2023), the absence of theoretical knowledge does not present itself as a barrier to the intended use.
Implications
These findings have important implications for the further implementation of AI-tools in higher education learning processes (Heil & Ifenthaler, 2023; Celik et al., 2022; Kaplan-Rakowski et al., 2023; Latham, 2022; Liu et al., 2022; Ocaña-Fernández et al., 2019). At first glance, using AI-tools does not require prior practical and theoretical training from students. At the same time, students might not be able to fully apprehend the possibilities of AI-tools or effectively use them to improve their learning processes (Alamri et al., 2021; Børte et al., 2023). Educators should, therefore, integrate these tools into their instructional design practices and pair them with additional practices to facilitate the students' AI-competencies (Lindfors et al., 2021; Sailer et al., 2021; Zhang et al., 2023). As a result, students will be able to use AI-tools to improve their learning processes, while simultaneously being able to critically reflect on the input, output, and influence of the respective AI-tools.
The results of Hypotheses 2a and 2b show a significant effect of AI-competence and the perceived benefits of AI-tools on the expected cooperation potential of AI-technology (Bhargava et al., 2021; Raisch & Krakowski, 2021). Instructional designers and other stakeholders in higher education need to provide best-practice examples of how AI-tools can be used to positively influence learning processes if they want to facilitate the usage of the respective tools.
Limitations andoutlook
ChatGPT was not yet openly accessible when the data for this survey was collected. e
overall usage of AI tools has likely increased since ChatGPT was introduced to a broader
user base (Strzelecki, 2023). e presence of ChatGPT in media and scientific discussions
might have led students to look into other AI-tools, such as DeepL (Einola & Khoreva,
2023) or Quillbot (Fitria, 2021) as well. e composition of the student sample also limits
the study’s results. While the University in Switzerland is more open towards the usage of
AI technology, policymakers in German universities tend to be more restrictive towards the
use of AI (von der Heyde etal., 2023). To overcome the limitation of the sample size, future
studies will include students from a broader range of academic years. As a result, the gener-
alizability of the result will be improved.
The present discussion about ChatGPT and the influence of AI-tools in general on higher education underlines the need to educate learners about AI and their respective AI-competencies (Almaraz-López et al., 2023; Chong et al., 2021; Fauzi et al., 2023). A second study is currently being conducted to analyze how the introduction of ChatGPT to the public sphere has changed students' attitudes toward AI and their use of AI-tools, both intended and factual. It can be assumed that the powerful tool leads to an increasing awareness of AI, as well as broad usage across different study programs and for various tasks within higher education programs. Further studies should include additional research approaches to collect additional data about students' experiences and usage of AI-tools, such as a think-aloud study or interviews with students. These approaches give insights into the teaching strategies which might help students to facilitate AI-competencies and improve their learning outcomes through AI-tools. An example of such a strategy is a class that teaches students to write scientific texts with the support of ChatGPT. A comprehensive understanding of necessary competencies and pedagogical concepts is the foundation for holistic AI literacy programs. These programs need to be accessible for all students and flexible enough to adapt to different levels of prior knowledge and learning preferences. Another important task for ongoing research projects is the analysis of the relationship between AI-competencies, pedagogical concepts and the learning outcomes of students, especially regarding the different tools which might be used in the future. Additionally, longitudinal studies might be best suited to gather detailed data throughout AI-supported learning processes.
Conclusion
The increasing capabilities of AI-tools offer a wide range of possible applications in higher education institutions. Once the gap between theoretical potential and applicable solutions is closed, multiple stakeholders, such as administrators, educators and students, will be able to benefit from individualized learning paths, automated feedback or data-based decision-making processes. Lately, an increasing amount of research has been published to close this gap. The introduction of ChatGPT to the general public has fueled the discussions about AI-technology, especially in the field of higher education. One of the challenges involved in the implementation of AI into learning processes is the facilitation of students' AI-competencies. Students need the practical skills, theoretical knowledge and comprehensive attitudes to unlock the potential of AI-technology for their learning processes. Educators and higher education institutions have the responsibility to create safe learning environments which foster points of contact with AI as well as possibilities to actively engage with AI. These learning environments must provide students with access to relevant AI-tools and must be founded on holistic legal frameworks and regulations.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s41239-024-00452-7.
Additional file 1. AI-Competence Instrument.
Acknowledgements
Not applicable.
Author contributions
All authors participated in planning the study, designing the data collection tools, collecting and analyzing data for the
study. The first author (corresponding author) led the writing up process, with contributions from the second and third
authors. All authors read and approved the final manuscript.
Funding
Not applicable.
Availability of data and materials
The data supporting this study’s findings are available on request from the corresponding author. The data are not
publicly available due to privacy or ethical restrictions.
Declarations
Competing interests
The authors declare no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Received: 22 September 2023 Accepted: 1 March 2024
References
Aksnes, D. W., Langfeldt, L., & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open, 9(1), 215824401982957. https://doi.org/10.1177/2158244019829575
Alamri, H. A., Watson, S., & Watson, W. (2021). Learning technology models that support personalization within blended learning environments in higher education. TechTrends, 65(1), 62–78. https://doi.org/10.1007/s11528-020-00530-3
Aldowah, H., Al-Samarraie, H., & Fauzy, W. M. (2019). Educational data mining and learning analytics for 21st century higher education: A review and synthesis. Telematics and Informatics, 37, 13–49. https://doi.org/10.1016/j.tele.2019.01.007
Almaraz-López, C., Almaraz-Menéndez, F., & López-Esteban, C. (2023). Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward artificial intelligence. Education Sciences, 13(6), 609. https://doi.org/10.3390/educsci13060609
Azevedo, R., Bouchet, F., Duffy, M., Harley, J., Taub, M., Trevors, G., Cloude, E., Dever, D., Wiedbusch, M., Wortha, F., & Cerezo, R. (2022). Lessons learned and future directions of MetaTutor: Leveraging multichannel data to scaffold self-regulated learning with an intelligent tutoring system. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2022.813632
Barrie, S. C. (2007). A conceptual framework for the teaching and learning of generic graduate attributes. Studies in Higher Education, 32(4), 439–458. https://doi.org/10.1080/03075070701476100
Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education, 17(1), 42. https://doi.org/10.1186/s41239-020-00218-x
Bean, J. P., & Eaton, S. B. (2020). A psychological model of college student retention. https://api.semanticscholar.org/CorpusID:224937248
Bhargava, A., Bester, M., & Bolton, L. (2021). Employees' perceptions of the implementation of robotics, artificial intelligence, and automation (RAIA) on job satisfaction, job security, and employability. Journal of Technology in Behavioral Science, 6(1), 106–113. https://doi.org/10.1007/s41347-020-00153-8
Børte, K., Nesje, K., & Lillejord, S. (2023). Barriers to student active learning in higher education. Teaching in Higher Education, 28(3), 597–615. https://doi.org/10.1080/13562517.2020.1839746
Bozkurt, A., & Sharma, R. (2023). Generative AI and prompt engineering: The art of whispering to let the genie out of the algorithmic world. Asian Journal of Distance Education, 18, i–vi. https://doi.org/10.5281/zenodo.8174941
Brusilovsky, P. (2023). AI in education, learner control, and human-AI collaboration. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-023-00356-z
Byrd, K., Fan, A., Her, E., Liu, Y., Almanza, B., & Leitch, S. (2021). Robot vs human: Expectations, performances and gaps in off-premise restaurant service modes. International Journal of Contemporary Hospitality Management, 33(11), 3996–4016. https://doi.org/10.1108/IJCHM-07-2020-0721
Cabero-Almenara, J., Gutiérrez-Castillo, J. J., Guillén-Gámez, F. D., & Gaete-Bravo, A. F. (2023). Digital competence of higher education students as a predictor of academic success. Technology, Knowledge and Learning, 28(2), 683–702. https://doi.org/10.1007/s10758-022-09624-8
Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66(4), 616–630. https://doi.org/10.1007/s11528-022-00715-y
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3
Chong, T., Yu, T., Keeling, D. I., & de Ruyter, K. (2021). AI-chatbots on the services frontline addressing the challenges and opportunities of agency. Journal of Retailing and Consumer Services, 63, 102735. https://doi.org/10.1016/j.jretconser.2021.102735
Crisp, G., Palmer, E., Turnbull, D., Nettelbeck, T., Ward, L., LeCouteur, A., Sarris, A., Strelan, P., & Schneider, L. (2009). First year student expectations: Results from a university-wide student survey. Journal of University Teaching and Learning Practice, 6(1), 16–32. https://doi.org/10.53761/1.6.1.3
Dang, J., & Liu, L. (2022). Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior, 134, 107300. https://doi.org/10.1016/j.chb.2022.107300
Desaire, H., Chua, A. E., Isom, M., Jarosova, R., & Hua, D. (2023). Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools. Cell Reports Physical Science, 4(6), 101426. https://doi.org/10.1016/j.xcrp.2023.101426
Diakopoulos, N., Friedler, S., Arenas, M., et al. (2016). Principles for accountable algorithms and a social impact statement for algorithms. FATML. http://www.fatml.org/resources/principles-for-accountable-algorithms
Ehlers, U., & Kellermann, S. A. (2019). Future skills: The future of learning and higher education. Results of the International Future Skills Delphi Survey.
Einola, K., & Khoreva, V. (2023). Best friend or broken tool? Exploring the co-existence of humans and artificial intelligence in the workplace ecosystem. Human Resource Management, 62(1), 117–135. https://doi.org/10.1002/hrm.22147
Fauzi, F., Tuhuteru, L., Sampe, F., Ausat, A. M. A., & Hatta, H. R. (2023). Analysing the role of ChatGPT in improving student productivity in higher education. Journal on Education, 5(4), 14886–14891. https://doi.org/10.31004/joe.v5i4.2563
Fitria, T. N. (2021). QuillBot as an online tool: Students' alternative in paraphrasing and rewriting of English writing. Englisia: Journal of Language, Education, and Humanities, 9(1), 183. https://doi.org/10.22373/ej.v9i1.10233
Flores, M. A., Brown, G., Pereira, D., Coutinho, C., Santos, P., & Pinheiro, C. (2020). Portuguese university students' conceptions of assessment: Taking responsibility for achievement. Higher Education, 79(3), 377–394. https://doi.org/10.1007/s10734-019-00415-2
Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds & Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Fuchs, K. (2023). Exploring the opportunities and challenges of NLP models in higher education: Is ChatGPT a blessing or a curse? Frontiers in Education. https://doi.org/10.3389/feduc.2023.1166682
Galante, A. (2020). Pedagogical translanguaging in a multilingual English program in Canada: Student and teacher perspectives of challenges. System, 92, 102274. https://doi.org/10.1016/j.system.2020.102274
Gkinko, L., & Elbanna, A. (2023). The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. International Journal of Information Management, 69, 102568. https://doi.org/10.1016/j.ijinfomgt.2022.102568
Heil, J., & Ifenthaler, D. (2023). Online assessment in higher education: A systematic review. Online Learning. https://doi.org/10.24059/olj.v27i1.3398
Henschel, A., Laban, G., & Cross, E. S. (2021). What makes a robot social? A review of social robots from science fiction to a home or hospital near you. Current Robotics Reports, 2(1), 9–19. https://doi.org/10.1007/s43154-020-00035-0
Herbert, C., & Dołżycka, J. D. (2022). Personalized avatars without agentic interaction: Do they promote learning performance and sense of self in a teaching context? A pilot study. In A. González-Briones, A. Almeida, A. Fernandez, A. El Bolock, D. Durães, J. Jordán, & F. Lopes (Eds.), Highlights in practical applications of agents, multi-agent systems, and complex systems simulation. The PAAMS Collection. PAAMS 2022 (pp. 169–180). Springer. https://doi.org/10.1007/978-3-031-18697-4_14
Hildt, E. (2021). What sort of robots do we want to interact with? Reflecting on the human side of human-artificial intelligence interaction. Frontiers in Computer Science. https://doi.org/10.3389/fcomp.2021.671012
Houser, M. L. (2004). We don't need the same things! Recognizing differential expectations of instructor communication behavior for nontraditional and traditional students. The Journal of Continuing Higher Education, 52(1), 11–24. https://doi.org/10.1080/07377366.2004.10400271
Ifenthaler, D., & Yau, J. Y.-K. (2020). Utilising learning analytics to support study success in higher education: A systematic review. Educational Technology Research and Development, 68(4), 1961–1990. https://doi.org/10.1007/s11423-020-09788-z
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Joksimovic, S., Ifenthaler, D., Marrone, R., De Laat, M., & Siemens, G. (2023). Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review. Computers and Education: Artificial Intelligence, 4, 100138. https://doi.org/10.1016/j.caeai.2023.100138
Kaplan-Rakowski, R., Grotewold, K., Hartwick, P., & Papin, K. (2023). Generative AI and teachers' perspectives on its implementation in education. Journal of Interactive Learning Research, 34(2), 313–338.
Kim, J., Lee, H., & Cho, Y. H. (2022). Learning design to support student-AI collaboration: Perspectives of leading teachers for AI in education. Education and Information Technologies, 27(5), 6069–6104. https://doi.org/10.1007/s10639-021-10831-6
Kim, J., Merrill, K., Xu, K., & Sellnow, D. D. (2020). My teacher is a machine: Understanding students' perceptions of AI teaching assistants in online education. International Journal of Human-Computer Interaction, 36(20), 1902–1911. https://doi.org/10.1080/10447318.2020.1801227
Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44, 100450. https://doi.org/10.1016/j.asw.2020.100450
Larasati, P. D., Irawan, A., Anwar, S., Mulya, M. F., Dewi, M. A., & Nurfatima, I. (2022). Chatbot helpdesk design for digital customer service. Applied Engineering and Technology, 1(3), 138–145. https://doi.org/10.31763/aet.v1i3.684
Latham, A. (2022). Conversational intelligent tutoring systems: The state of the art. In A. E. Smith (Ed.), Women in engineering and science (pp. 77–101). Springer. https://doi.org/10.1007/978-3-030-79092-9_4
Leoste, J., Jõgi, L., Õun, T., Pastor, L., López, S. M. J., & Grauberg, I. (2021). Perceptions about the future of integrating emerging technologies into higher education—The case of robotics with artificial intelligence. Computers, 10(9), 110. https://doi.org/10.3390/computers10090110
Li, C., Zhang, Y., Niu, X., Chen, F., & Zhou, H. (2023). Does artificial intelligence promote or inhibit on-the-job learning? Human reactions to AI at work. Systems, 11(3), 114. https://doi.org/10.3390/systems11030114
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790
Lindfors, M., Pettersson, F., & Olofsson, A. D. (2021). Conditions for professional digital competence: The teacher educators' view. Education Inquiry, 12(4), 390–409. https://doi.org/10.1080/20004508.2021.1890936
Liu, J., Zhang, L., Wei, B., & Zheng, Q. (2022). Virtual teaching assistants: Technologies, applications and challenges. In Humanity driven AI (pp. 255–277). Springer International Publishing. https://doi.org/10.1007/978-3-030-72188-6_13
Mah, D.-K. (2016). Learning analytics and digital badges: Potential impact on student retention in higher education. Technology, Knowledge and Learning, 21(3), 285–305. https://doi.org/10.1007/s10758-016-9286-8
Mah, D.-K., & Ifenthaler, D. (2017). Academic staff perspectives on first-year students' academic competencies. Journal of Applied Research in Higher Education, 9(4), 630–640. https://doi.org/10.1108/JARHE-03-2017-0023
Mah, D.-K., & Ifenthaler, D. (2018). Students' perceptions toward academic competencies: The case of German first-year students. Issues in Educational Research, 28, 120–137.
Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations' COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871–906. https://doi.org/10.1007/s11192-020-03690-4
McCarthy, M., & Kuh, G. D. (2006). Are students ready for college? Phi Delta Kappan, 87(9), 664–669. https://doi.org/10.1177/003172170608700909
Medland, E. (2016). Assessment in higher education: Drivers, barriers and directions for change in the UK. Assessment & Evaluation in Higher Education, 41(1), 81–96. https://doi.org/10.1080/02602938.2014.982072
Mitsea, E., & Drigas, A. (2019). A journey into the metacognitive learning strategies. International Journal of Online and Biomedical Engineering (iJOE), 15(14), 4. https://doi.org/10.3991/ijoe.v15i14.11379
Ng, D. T. K., Su, J., & Chu, S. K. W. (2023). Fostering secondary school students' AI literacy through making AI-driven recycling bins. Education and Information Technologies. https://doi.org/10.1007/s10639-023-12183-9
Ocaña-Fernández, Y., Valenzuela-Fernández, L. A., & Garro-Aburto, L. L. (2019). Artificial intelligence and its implications in higher education. Propósitos y Representaciones. https://doi.org/10.20511/pyr2019.v7n2.274
Ouyang, F., Wu, M., Zheng, L., Zhang, L., & Jiao, P. (2023). Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. International Journal of Educational Technology in Higher Education, 20(1), 4. https://doi.org/10.1186/s41239-022-00372-4
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893–7925. https://doi.org/10.1007/s10639-022-10925-9
Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. https://doi.org/10.1186/s41039-017-0062-8
Rahayu, N. W., Ferdiana, R., & Kusumawardani, S. S. (2023). A systematic review of learning path recommender systems. Education and Information Technologies, 28(6), 7437–7460. https://doi.org/10.1007/s10639-022-11460-3
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
Roesler, E., Manzey, D., & Onnasch, L. (2021). A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Science Robotics, 6(58), eabj5425. https://doi.org/10.1126/scirobotics.abj5425
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software. https://doi.org/10.18637/jss.v048.i02
Sailer, M., Schultz-Pernice, F., & Fischer, F. (2021). Contextual facilitators for learning activities involving technology in higher education: The C-model. Computers in Human Behavior, 121, 106794. https://doi.org/10.1016/j.chb.2021.106794
Sanavi, S., & Matt, J. (2022). The influence of the first-year seminar participation on student retention. Journal of Education and Training Studies, 10(4), 90. https://doi.org/10.11114/jets.v10i4.5669
Seufert, S., & Tarantini, E. (2022). Gestaltung der digitalen Transformation in Schulen: Ein Reifegradmodell für die Berufsbildung. MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, 49(Schulentwicklung), 301–326. https://doi.org/10.21240/mpaed/49/2022.07.15.X
Smith, J. S., & Wertlieb, E. C. (2005). Do first-year college students' expectations align with their first-year experiences? NASPA Journal, 42(2), 153–174. https://doi.org/10.2202/1949-6605.1470
Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interactive Learning Environments. https://doi.org/10.1080/10494820.2023.2209881
Thomas, L. (2002). Student retention in higher education: The role of institutional habitus. Journal of Education Policy, 17(4), 423–442. https://doi.org/10.1080/02680930210140257
Tinto, V. (1994). Leaving college: Rethinking the causes and cures of student attrition. University of Chicago Press. https://doi.org/10.7208/chicago/9780226922461.001.0001
Tsai, Y.-S., Rates, D., Moreno-Marcos, P. M., Muñoz-Merino, P. J., Jivet, I., Scheffel, M., Drachsler, H., Delgado Kloos, C., & Gašević, D. (2020). Learning analytics in European higher education—Trends and barriers. Computers & Education, 155, 103933. https://doi.org/10.1016/j.compedu.2020.103933
van Pinxteren, M. M. E., Wetzels, R. W. H., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019). Trust in humanoid robots: Implications for services marketing. Journal of Services Marketing, 33(4), 507–518. https://doi.org/10.1108/JSM-01-2018-0045
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
von der Heyde, M., Goebel, M., Zoerner, D., & Lucke, U. (2023). Integrating AI tools with campus infrastructure to support the life cycle of study regulations. Proceedings of European University, 95, 332–344.
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2, The Digital Competence Framework for Citizens—With new examples of knowledge, skills and attitudes. Publications Office of the European Union. https://doi.org/10.2760/115376
Woschank, M., Rauch, E., & Zsifkovits, H. (2020). A review of further directions for artificial intelligence, machine learning, and deep learning in smart logistics. Sustainability, 12(9), 3760. https://doi.org/10.3390/su12093760
Yorke, M., & Longden, B. (2008). The first-year experience of higher education in the UK—Final report. The Higher Education Academy.
Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023). Why Johnny can't prompt: How non-AI experts try (and fail) to design LLM prompts. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–21. https://doi.org/10.1145/3544548.3581388
Zanzotto, F. M. (2019). Viewpoint: Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64, 243–252. https://doi.org/10.1613/jair.1.11345
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0
Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education, 20(1), 49. https://doi.org/10.1186/s41239-023-00420-7
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Jan Delcker is a postdoctoral researcher at the Chair of Learning, Design and Technology at the University of Mannheim, Germany. Jan's research focuses on the transformation of educational institutions through the implementation of digital technology.
Joana Heil is a PhD candidate at the Chair of Learning, Design and Technology at University of Man-
nheim, Germany. The design and development of online assessment, adaptive feedback and learning ana-
lytics are the focus of Joana’s research.
Dirk Ifenthaler is Professor and Chair of Learning, Design and Technology at University of Mannheim,
Germany and UNESCO Deputy Chair on Data Science in Higher Education Learning and Teaching at Curtin
University, Australia. Dirk’s research focuses on the intersection of cognitive psychology, educational tech-
nology, data analytics, and organisational learning.
Sabine Seufert is Professor of Business Education and heads the Institute for Educational Management and Technologies at the University of St. Gallen. Her research focuses on digital transformation and artificial intelligence in education.
Lukas Spirgi is a research associate and PhD student at the Institute for Educational Management and Technologies at the University of St. Gallen. He conducts research in the field of digital transformation and artificial intelligence in education.
... Sanusi et al. (2022) adopt a similar approach, integrating the ethics of AI as a competence dimension that bridges the other dimensions of their model, namely learning, teamwork, and knowledge competence. Based on a systematic literature review as well as expert interviews, Delcker et al. (2024) developed a framework of AI competence in the context of education, including the subcomponents of theoretical knowledge, legal framework and ethics, implications of AI, attitude towards AI, teaching and learning with AI and ongoing professionalisation as the cornerstone of a competent approach to AI. This framework is designed modularly and can be adapted according to the target group. ...
... Example item for section two: 'If I used the AI tool shown in the video, I would achieve greater learning success'. In addition, students' general AI competence was assessed through a modular survey by Delcker et al. (2024) covering different dimensions of AI competence, with the selected sub-categories for this context being theory, laws and regulations, the impact of AI, and attitudes towards AI (18 items; Cronbachs' α = .84). ...
... The findings support our first hypothesis (1a), revealing significant differences across the four dimensions of AI competence (theory, regulations, impact and attitude) (Delcker et al., 2024). Interestingly, the students showed the strongest AI competence in the 'attitude' dimension. ...
Article
Full-text available
Artificial intelligence represents an emerging frontier for higher education institutions, potentially personalising learning, automating tasks, and supporting student outcomes. This study examines international students’ perceptions of the recent proliferation of generative artificial intelligence tools in the context of academic learning and assessment. The study involved N = 223 students from three different higher education institutions located in Australia, Germany and Italy. The focus was on the student’s competence in artificial intelligence and their perception of six different generative artificial intelligence tools concerning learning and assessment. The findings suggest that the dimensions of competence in artificial intelligence vary considerably and that students from different countries have a comparable level of competence in artificial intelligence. Further findings indicate that the expected support of generative artificial intelligence tools for learning and assessment is perceived differently. This study highlights the need for increased pedagogical attention to artificial intelligence, bridging the gap between students' enthusiasm and technical knowledge. It suggests that effective integration of generative artificial intelligence tools should also prioritise the development of critical thinking and comprehension skills over content generation.
... This exploratory study shows possibilities of ChatGPT for supporting educational leaders as they address challenges in their schools (Delcker et al., 2024;Duha, 2023;Kalla et al., 2023). Findings suggest that AI may support school leaders as they consider a variety of ways to address complex issues that may involve contrasting interests among teachers, students, or parents. ...
... Findings from this study further support understandings that, as ChatGPT continues to evolve as a transformative tool (Delcker et al., 2024;Kalla et al., 2023), human wisdom, expertise, and experience are necessary for complex, contextspecific problem solving. This finding supports findings in the literature that tasks requiring "creativity, emotion, knowledge transfer, and social interaction" (Long & Magerko, 2020, p. 4) also require a human component. ...
Article
Full-text available
Educational leaders are faced with multi-faceted dilemmas that place decision-making at the heart of their day-to-day work. For support, they often turn to collaborative networks of experienced educators, such as Project ECHO, for solutions to address challenges they encounter while working in the field. The availability of generative AI technology, however, may have potential for informing the solutions to complex educational dilemmas as well, saving time and offering efficiency for problem solving. This study compares solutions for three problems of practice (PoP) that were generated through ChatGPT software and those proposed by the educational leaders in TeleED, an online professional development platform. The problem of practice in this study refers to complex, real-world challenges or problems shared by educational leaders. Findings indicate that solutions to these PoPs showed similarities, suggesting that both ChatGPT and the educational leaders in TeleED highlight ideas such as documentation and peer mentoring to address challenges. This reflects ChatGPT’s ability to generate applicable solutions to complex problems that somewhat mirror solutions posed by leaders. However, educational leaders offered solutions that were nuanced, context specific, and contained explanations of resources and experiences that enhanced discussions. Findings suggest that ChatGPT may have utility for enhancing decision-making but not replacing the human element.
... The other major theme in this period is the emergence of artificial intelligence, with a range of studies published from students' intention to use AI (e.g., Delcker et al., 2024) and their AI acceptance (e.g., Zhang et al., 2023), to gamified robots (e.g., Yang et al., 2023), intelligent tutoring systems (e.g., Zheng et al., 2024), and predictive learning analytics (e.g., Ouyang et al., 2023). Seven systematic reviews exploring the role of AI in higher education have been published so far in this period (e.g., Crompton & Burke, 2023;Salas-Pilco et al., 2022), which is not surprising given the findings of Bond et al. 's (2024) meta review of AI finding 66 reviews were published between 2018 and 2024 solely on higher education alone. ...
Article
Full-text available
In celebrating the 20th anniversary of the International Journal of Educational Technology in Higher Education (IJETHE), previously known as the Revista de Universidad y Sociedad del Conocimiento (RUSC), it is timely to reflect upon the shape and depth of educational technology research as it has appeared within the journal, in order to understand how IJETHE has contributed to furthering scholarship, and to provide future directions to the field. It is particularly important to understand authorship patterns in terms of equity and diversity, especially in regard to ensuring wide-ranging geographical and gender representation in academic publishing. To this end, a content and authorship analysis was conducted of 631 articles, published in RUSC and IJETHE from 2010 to June 2024. Furthermore, in order to contribute to ongoing efforts to raise methodological standards of secondary research being conducted within the field, an analysis of the quality of evidence syntheses published in IJETHE from 2018 to June 2024 was conducted. Common themes in IJETHE have been students’ experience and engagement in online learning, the role of assessment and feedback, teachers’ digital competencies, and the development and quality of open educational practices and resources. The authorship analysis revealed gender parity and an increasingly international identity, although contributions from the Middle East, South America and Africa remain underrepresented. The findings revealed a critical need for enhanced efforts to raise the methodological rigour of EdTech evidence syntheses, and suggestions are provided for how IJETHE can help move the field forwards. Key future research areas include educator professional development, the impact of digital tools on learning outcomes and engagement, the influence of social and contextual factors, the application of AI tools to support learning, and the use of multimodal data to analyse student learning across diverse contexts.
... Behovet for nye undervisningsmetoder (Walter, 2024) og innpass av KI-verktøy i undervisningsdesign med mål om å forbedre laeringsprosessene for studentene, ble også trukket frem (Delcker et al., 2024;Ørevik & Skjelbred, 2023). Samtidig understreket flere av artiklene viktigheten av både holdninger og opplaering blant pedagoger (Walter, 2024) og studenter (Chiu, 2024) for å bli klar for å arbeide i et samfunn der bruk av skriveroboter stadig blir mer aktuelt. ...
Article
Full-text available
Høsten -22 ble OpenAI sin skriverobot ChatGPT offentlig tilgjengelig. I tiden etterpå er flere skriveroboter basert på kunstig intelligens (KI) etablert. Vi søker i denne studien å kaste lys over hvordan tilgangen på disse hjelpemidlene påvirker studenters lærings¬prosess og veileders rolle. Artikkelen baserer seg på en spørreundersøkelse og fokusgruppeintervjuer blant vitenskapelige ansatte ved norske UH-institusjoner. Studien viser at skriveroboter skaper både utfordringer og mulighetsrom. Mulighetene knyttes særlig til et begrep som «sparringspartner» der roboten kan være en samtalepartner som aldri går lei og tilbyr komplementære ferdigheter i forhold til en ordinær veileder. Utfordringene knyttes særlig til ukritisk bruk av skriverobotenes forslag. Under forutsetning av at studentene forholder seg kritisk til skriverobotenes forslag, og bevarer sin selvstendighet i forhold til den teksten som er under produksjon, argumenterer studien for at læringsprosessen kan styrkes. Veilederens rolle blir imidlertid annerledes når teknologien kan hjelpe med noe av det som veiledere tidligere har måttet hjelpe studentene med.
... To encourage positive attitudes, institutions should provide clear information on AI's benefits and limitations (Pataranutaporn et al., 2021). Workshops, seminars, and training can help clarify AI and address misconceptions (Delcker et al., 2024). Showcasing case studies and offering hands-on experience with AI tools can further build students' confidence (Zou et al., 2023). ...
Article
Full-text available
Introduction Academic engagement of Chinese college students has received increasing research attention due to its impact on students' mental health and wellbeing. The emergence of artificial intelligence (AI) technologies marked the beginning of a new era in education, offering innovative tools and approaches to enhance learning, although these can be viewed from both positive and negative perspectives. This study uses the Theory of Planned Behavior (TPB) as a theoretical framework to analyze the mediating roles of students' attitudes toward AI, perceived social norms, perceived behavioral control, and intention to use AI technologies in the relationship between students' academic engagement and mental health. Methods The study involved a total of 2,423 Chinese college students with a mean age of approximately 20.53 ± 1.51 years. The survey was conducted through Questionnaire Star, using a secure website designed specifically for the study. Hayes' PROCESS Macro (Version 4.2) Model 80 in SPSS 29.0 was used: a multivariate regression analysis with a chain mediation model that allows multiple mediators to be tested sequentially. The analysis explored the direct and indirect effects of students' engagement (X) on mental health (Y) through a series of mediators: attitude toward AI (M1), subjective norm (M2), perceived behavioral control over AI use (M3), and AI use behavioral intention (M4). Results The direct positive relationship between engagement and mental health (β = 0.0575; p < 0.05), together with key mediating factors such as perceived behavioral control (β = 0.1039; p < 0.05) and AI use behavioral intention (β = 0.0672; p < 0.05), highlights the potential of AI tools in enhancing students' well-being. However, the non-significant mediating effects of attitude toward AI (β = 0.0135) and subjective norms (β = –0.0005) suggest that more research is needed to fully understand the nuances of these relationships.
Discussion Overall, the study contributes to the growing body of literature on the role of AI in education and offers practical implications for improving mental health support in academic settings.
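The chain-mediation logic described in that abstract can be illustrated in miniature. The sketch below is not the authors' SPSS/PROCESS analysis: it simulates data with invented effect sizes for one plausible chain (engagement → perceived behavioral control → AI use intention → mental health) and recovers the path coefficients with ordinary least squares, the regression machinery PROCESS builds on.

```python
import numpy as np

# Hypothetical simulation of a sequential-mediation chain; all effect
# sizes here are invented for illustration, not taken from the study.
rng = np.random.default_rng(0)
n = 2000
engagement = rng.normal(size=n)                                    # X
control = 0.5 * engagement + rng.normal(scale=0.5, size=n)         # M3
intention = 0.6 * control + rng.normal(scale=0.5, size=n)          # M4
mental_health = (0.3 * engagement + 0.4 * intention
                 + rng.normal(scale=0.5, size=n))                  # Y

def ols(y, *xs):
    """Least-squares coefficients: intercept first, then one slope per predictor."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(control, engagement)[1]                        # X -> M3
b = ols(intention, engagement, control)[2]             # M3 -> M4, adjusting for X
c = ols(mental_health, engagement, intention)[2]       # M4 -> Y, adjusting for X
direct = ols(mental_health, engagement, intention)[1]  # direct X -> Y

indirect = a * b * c  # indirect effect carried through the mediator chain
print(f"direct={direct:.3f}, indirect={indirect:.3f}")
```

In a real analysis the indirect effect would additionally be tested with bootstrap confidence intervals, which is what the PROCESS macro automates.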
... They also show that excessive use of ChatGPT may negatively affect students' academic performance and memory, which corresponds with the findings of this study when marking students' formative essays. Consequently, although students do not need prior practical and theoretical expertise to use GenAI tools, they may still lack a complete understanding of the potential of such tools or the ability to use them to enhance their learning processes successfully (25). Although students held varied opinions regarding the advantages of GenAI during the autumn term, more than 80% of students across all classes in the spring term acknowledged the use of GenAI tools for learning. ...
... It entails that adequate infrastructure and technical support can facilitate the effective use of these tools in secondary education (Lee, 2023). Hence, it is crucial to understand the practical aspects of integrating ICT tools into educational environments and how institutions can invest in the necessary infrastructure and provide ongoing technical support to ensure students have the resources to use ICT tools effectively (Delcker et al., 2024). Thus, by addressing facilitating conditions, educators can create an environment that supports the integration of ICT tools, promoting the development of students' academic skills and improving their performance. ...
Article
Full-text available
In this digital era, teachers are expected to integrate information and communication technology (ICT) tools to enhance traditional teaching methods with modern technology. Therefore, this systematic review addressed research questions regarding the challenges, the role of stakeholders, and strategies that enable effective integration of ICT tools in secondary education. It analysed 54 scholarly and peer-reviewed publications between 2008 and 2023 sampled from electronic databases (Web of Science, PubMed, Scopus, and ERIC). The main findings indicated that effective integration of ICT tools should address professional development, improve knowledge and skills, and resolve systemic challenges. Results highlight the significant role of policymakers in nurturing an innovative, collaborative, and supportive school environment. They additionally indicated that school leaders should ensure the availability, accessibility, reliability, compatibility, and security of ICT tools. Similarly, teachers should assist students in changing attitudes, beliefs, and practices towards ICT tools by supporting and facilitating practical and meaningful skills-development opportunities. Students, in turn, should embrace a supportive, collaborative, and innovative school environment. Hence, recommendations include exploring, amid the proliferation of AI, the long-term effects of ICT integration on educational outcomes, sustainability, and adaptability in secondary schools.
... With the help of AI tools, humans have developed their cognitive abilities to the fullest. For example, research indicates that eliminating the gap between theoretical possibilities and practical solutions enables various stakeholders, including administrators, educators, and students, to reap the benefits of personalized learning pathways, automated feedback mechanisms, and data-driven decision-making processes (Delcker et al. 2024). In addition, the European Parliament's stated priority is "to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. ...
Article
Full-text available
Unveiling the cognitive patterns that underpin linguistic expressions, conceptual metaphor serves not only as an effective means for speakers to convey their values but also as a crucial tool for listeners to comprehend unfamiliar topics. This study undertakes a corpus-based analysis of conceptual metaphor expressions within the European Union’s Artificial Intelligence Act. Utilizing a corpus derived from the European Union Artificial Intelligence Act and employing both Conceptual Metaphor Theory and Critical Metaphor Analysis Theory, this research examines metaphors in terms of their types, orientations, and underlying rationales. The study identifies the most frequently used semantic domains of Journey, Human, War, and Object metaphors, indicating that the overall orientations are characterized by Tool, Dependency, Human, and Risk, reflecting both the aspirations and concerns of humanity. This study addresses a gap in metaphor research regarding the European Union’s Artificial Intelligence Act, offering valuable insights for policymakers and AI developers in understanding and shaping public perception of AI technologies.
... Among the paths of educational innovation are the personalisation of learning and Education 5.0 (Cortés et al., 2018; Delcker et al., 2024; Rane et al., 2023), as well as the use of generative AI. The history of AI is not linear, being the product of years of mathematical and computational development. ...
Chapter
Full-text available
Digital technologies such as Artificial Intelligence (AI), and particularly generative ones, enable new narratives, new types of employment, and ethical dilemmas that are debated within academia. AI demands new literacies among educational agents. As has been recognised, digital tools in university contexts can foster learning; at the same time, through teachers' mediation, they can also contribute to the development of critical thinking by problematising the ethical component of incorporating AI tools into the design and delivery of disciplinary and research activities.
Article
Artificial Intelligence (AI) has brought about significant changes in our lives, making AI literacy a crucial endeavor for the future. Despite its growing importance in academia, there is limited empirical research on its impact on college students’ higher order thinking skills (HOTS). The present study systematically and comprehensively explores the relationship between college students’ AI literacy and HOTS using the 3P (presage–process–product) model. In this model, students’ AI literacy represents the presage factors, behavioral engagement and peer interaction serve as the process factors, and HOTS is the product factor. We gathered data from a survey of 260 college students and utilized structural equation modeling to analyze the relationships between the 3P factors. The results showed that both AI usage and AI evaluation directly influenced HOTS and also indirectly affected HOTS through the mediating role of behavioral engagement and peer interaction. Conversely, AI awareness and AI ethics showed no direct influence on HOTS, although AI awareness affected HOTS through the mediation of peer interaction. The results of the study have several theoretical and practical implications. From a theoretical perspective, this study incorporates AI literacy, behavioral engagement, peer interaction, and HOTS within the 3P model framework, shedding light on their interrelations. On a practical note, the results emphasize the need to consider AI literacy, behavioral engagement, and peer interaction when designing courses to enhance HOTS in the era of AI.
Article
Full-text available
Artificial intelligence (AI) education has gained popularity, and educators are developing activities to enhance students' AI literacy and promote collaboration in problem-solving. While current approaches using simulations and games can improve students' AI knowledge, they may not adequately prepare them for higher-level cognitive tasks. Only a few studies have explored the use of maker education to develop students' AI literacy. This case study employed a mixed-method approach and integrated AI into maker education to enhance students' motivation, career interest, confidence, collaboration, and AI literacy across low to high cognitive domains. The study involved 35 secondary school students in an AI maker program, where AI-driven recycling bins were employed as a project-based learning intervention. The results demonstrated a positive impact on students' motivation, AI literacy, and collaboration. The study provides design principles and an instructional design framework to assist future educators in creating meaningful maker-based learning experiences in AI education. It highlights the potential of using maker education to enhance students' AI literacy and offers guidance to educators on developing effective AI maker activities. The article also discusses theoretical contributions and practical implications for future research.
Article
Full-text available
While artificial intelligence (AI) has been integral to daily life for decades, the release of open generative AI (GAI) such as ChatGPT has considerably accelerated scholars’ interest in the impact of GAI in education. Both the promises and the fears of GAI have become apparent. This quantitative study explored teachers’ perspectives on GAI and its potential implementation in education. A diverse group of teachers (N = 147) completed a validated survey sharing their views on GAI technology in terms of its use, integration, potential, and concerns. Overall, the teachers expressed positive perspectives towards GAI regardless of their teaching style. The findings of the study suggest that the more frequently teachers used GAI, the more positive their perspectives became. The teachers believed that GAI could enhance their professional development and could be a valuable tool for students. Although no guarantee exists that teachers’ perspectives translate into actions, previous research shows that technology integration and diffusion are highly dependent on teachers’ initial views. The findings of this study have implications for how GAI may be integrated into teaching and learning practices.
Article
Full-text available
Over the past few years, there has been a significant increase in the utilization of artificial intelligence (AI)-based educational applications in education. As pre-service teachers’ attitudes towards educational technology that utilizes AI have a potential impact on the learning outcomes of their future students, it is essential to know more about pre-service teachers’ acceptance of AI. The aims of this study are (1) to discover what factors determine pre-service teachers’ intentions to utilize AI-based educational applications and (2) to determine whether gender differences exist within the determinants that affect those behavioral intentions. A sample of 452 pre-service teachers (325 female) participated in a survey at one German university. Based on a prominent technology acceptance model, structural equation modeling, measurement invariance, and multigroup analysis were carried out. The results demonstrated that eight out of nine hypotheses were supported; perceived ease of use (β = 0.297***) and perceived usefulness (β = 0.501***) were identified as the primary factors predicting pre-service teachers’ intention to use AI. Furthermore, the latent mean differences indicated that two constructs, AI anxiety (z = − 3.217**) and perceived enjoyment (z = 2.556*), differed significantly by gender. In addition, it is noteworthy that the paths from AI anxiety to perceived ease of use (p = 0.018*) and from perceived ease of use to perceived usefulness (p = 0.002**) are moderated by gender. This study confirms, on the basis of the Technology Acceptance Model 3, the determinants influencing German pre-service teachers’ behavioral intention to use AI-based applications in education. Furthermore, the results demonstrate how essential it is to address gender-specific aspects in teacher education, given the high percentage of female pre-service teachers in general. This study contributes to the state of the art in AI-powered education and teacher education.
Article
Full-text available
User control and human-AI collaboration are two related directions of research in the modern stream of work on human-centered AI. The field of AI in education was an early pioneer in this area of research, but now it lags behind the work on user control and human-AI collaboration in other areas of AI. This paper attempts to motivate further research on learner control and human-AI collaboration in educational applications of AI by presenting a review of the current work and comparing it with similar work in the field of recommender systems.
Article
Full-text available
The capabilities of generative AI in education, serving as a co-creator, highlight the crucial role of prompt engineering for optimal interactions between humanity and Large Language Models (LLMs) that utilize Natural Language Processing (NLP). Generative AI's potential lies in responding to well-crafted prompts, making them essential for unleashing its capabilities in generating authentic content. To optimize this process, prompt engineers, encompassing various stakeholders in education, must grasp how language nuances impact generative AI's responses. By approaching a conversational generative AI strategically, with clear purpose, tone, role, and context, a prompt-based conversational pedagogy can be established, enabling communication and interaction that facilitate teaching and learning effectively. Such approaches are crucial for harnessing generative AI's power while ensuring meaningful and contextually relevant interactions. Keywords: prompt engineering, prompt design for AI, co-creation with AI, generative artificial intelligence, AI in education (AIEd). Highlights What is already known about this topic: • Generative AI's emergence in large language models has vast implications, as it closely mimics human language and comprehension. • Prompt engineering is optimizing AI language model's responses by formulating effective and specific inputs. • AI language model's efficacy depends on algorithms, training data, and prompt quality. What this paper contributes: • This paper posits that co-creation involving generative AI presents a potent approach in the field of education, underscoring the significance of human-machine interaction facilitated by carefully crafted prompts. • This paper emphasizes the importance of educators developing prompt engineering skills to harness the full potential of generative AI in educational contexts effectively. 
Implications for theory, practice and/or policy: • Prompt engineers should understand how language nuances impact generative AI's capabilities, enabling authentic and well-tailored content generation for effective teaching and learning interactions. • Approaching generative AI strategically, with clear purpose, tone, role, and context, fosters a prompt-based conversational pedagogy.
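The "purpose, tone, role, and context" elements named above can be made concrete with a small template. Everything in this sketch (the function name, field wording, and example values) is illustrative rather than taken from the article:

```python
def build_prompt(role: str, purpose: str, tone: str, context: str, task: str) -> str:
    """Assemble a structured prompt from the four elements a prompt engineer
    is advised to make explicit: role, context, purpose, and tone."""
    return (
        f"You are {role}. {context} "
        f"Your goal: {purpose}. Respond in a {tone} tone.\n\n"
        f"{task}"
    )

# Hypothetical usage in a tutoring scenario.
prompt = build_prompt(
    role="an experienced academic writing tutor",
    purpose="help a first-year student outline an essay",
    tone="supportive and concise",
    context="The student is new to academic writing conventions.",
    task="Suggest a three-part outline for an essay on AI in education.",
)
print(prompt)
```

Keeping these elements as separate parameters, rather than free-typing each prompt, is one simple way to make the "prompt-based conversational pedagogy" the article describes repeatable across lessons.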
Article
Full-text available
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
Article
Full-text available
ChatGPT has enabled access to artificial intelligence (AI)-generated writing for the masses, initiating a culture shift in the way people work, learn, and write. The need to discriminate human writing from AI is now both critical and urgent. Addressing this need, we report a method for discriminating text generated by ChatGPT from (human) academic scientists, relying on prevalent and accessible supervised classification methods. The approach uses new features for discriminating (these) humans from AI; as examples, scientists write long paragraphs and have a penchant for equivocal language, frequently using words like "but," "however," and "although." With a set of 20 features, we built a model that assigns the author, as human or AI, at over 99% accuracy. This strategy could be further adapted and developed by others with basic skills in supervised classification, enabling access to many highly accurate and targeted models for detecting AI usage in academic writing and beyond.
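The feature-based detection strategy described above can be sketched in miniature. The hedge-word rule, threshold, and example texts below are invented for illustration and are far simpler than the 20-feature supervised model the article reports; they only demonstrate the core idea that equivocal connectives ("but", "however", "although") can serve as a discriminating feature.

```python
import re

HEDGES = {"but", "however", "although"}

def hedge_features(text: str) -> float:
    """Frequency of equivocal connectives per 100 words, one of the
    features the article reports as characteristic of scientists' prose."""
    words = re.findall(r"[a-z']+", text.lower())
    return 100 * sum(w in HEDGES for w in words) / max(len(words), 1)

def classify(text: str, threshold: float = 1.0) -> str:
    """Toy rule: texts rich in hedging connectives are labelled 'human'.
    A real detector would train a supervised model over many features."""
    return "human" if hedge_features(text) >= threshold else "ai"

human_like = ("The effect was significant, but the sample was small. "
              "However, although replication is needed, the trend is clear.")
ai_like = "The effect was significant. The sample was small. Replication is needed."

print(classify(human_like), classify(ai_like))
```

In the article's actual pipeline such counts would be one column in a feature matrix fed to a standard supervised classifier; the single-threshold rule here just makes the feature's discriminating power visible.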
Article
ChatGPT is an AI tool that assists with writing, learning, and solving assessments, and can do so in a conversational way. The purpose of the study was to develop a model that examined the predictors of adoption and use of ChatGPT among higher-education students. The proposed model was based on a previous theory of technology adoption. Seven predictors were selected to build a model that predicted the behavioral intention and use behavior of ChatGPT. The partial least squares method of structural equation modeling was used for data analysis. The model was found to be reliable and valid, and the results were based on self-reported data from 534 students at a Polish state university. Nine out of ten proposed hypotheses were confirmed by the results. Habit was found to be the best predictor of behavioral intention, followed by performance expectancy and hedonic motivation. The dominant determinant of use behavior was behavioral intention, followed by personal innovativeness. The research highlights the need for further examination of how AI tools can be adopted in learning and teaching.