Claudia Nerdel’s research while affiliated with Technical University of Munich and other places


Publications (81)


Development and Validation of a Short AI literacy test (AILIT-S) for University Students
  • Article

June 2025 · 5 Reads · Computers in Human Behavior: Artificial Humans

Daniel S. Schiff · Claudia Nerdel

How AI literacy correlates with affective, behavioral, cognitive and contextual variables: A systematic review
  • Preprint
  • File available

May 2025 · 118 Reads

Highlights:
  • Synthesizes 31 empirical studies that applied six AI literacy instruments in 14 countries, mapping correlations with 88 affective, behavioral, cognitive, and contextual variables
  • Finds consistent, medium-to-strong correlations between AI literacy and AI self-efficacy, positive AI attitudes, and digital competencies
  • Uncovers that self-assessment scales show systematically higher correlations than a performance-based test, hinting at metacognitive bias in self-reports
  • Studies draw mainly from university samples, especially health disciplines, pointing to the need for studies in K-12, workplaces, and under-represented fields

This systematic review maps the empirical landscape of AI literacy by examining its correlations with a diverse array of affective, behavioral, cognitive, and contextual variables. Building on the review of AI literacy scales by Lintner (2024), we analyzed 31 empirical studies that applied six of those AI literacy scales, covering 14 countries and a range of participant groups. Our findings reveal robust correlations of AI literacy with AI self-efficacy, positive AI attitudes, motivation, and digital competencies, and negative correlations with AI anxiety and negative AI attitudes. Personal factors such as age appear largely uncorrelated with AI literacy. The review also reveals measurement challenges regarding AI literacy: discrepancies between self-assessment scales and performance-based tests suggest that metacognitive biases such as the Dunning-Kruger effect may inflate certain correlations with self-assessment AI literacy scales. Despite these challenges, the robust findings provide a solid foundation for future research.
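Correlations synthesized across studies, as in this review, are commonly pooled with a Fisher z transform weighted by sample size. The sketch below is illustrative only: the correlations and sample sizes are made up, and the review's actual synthesis method may differ.

```python
import math

def pool_correlations(rs, ns):
    """Pool Pearson correlations across studies via the Fisher z transform,
    weighting each study by n - 3 (the inverse variance of z)."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z per study
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # back-transform pooled z to r

# Hypothetical correlations of AI literacy with AI self-efficacy from three studies
print(round(pool_correlations([0.45, 0.52, 0.38], [200, 150, 320]), 3))  # → 0.434
```

The n - 3 weighting follows from the sampling variance of Fisher's z, 1/(n - 3), so larger studies pull the pooled estimate toward their value.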


Development and Validation of a Short AI literacy test (AILIT-S) for University Students

March 2025 · 555 Reads

Highlights:
  • Developed a 10-item AI literacy test based on a validated 28-item test with data from 1,465 university students across Germany, the UK, and the US
  • Demonstrates reliability and validity of the short version of the test
  • AILIT-S enables efficient assessment of AI literacy at the group level in under 5 minutes but is not intended for individual assessment

Fostering AI literacy is an important goal in higher education in many disciplines. Assessing AI literacy can inform researchers and educators on current AI literacy levels and provide insights into the effectiveness of learning and teaching in the field of AI. It can also inform decision-makers and policymakers about the successes and gaps with respect to AI literacy within certain institutions, populations, or countries, for example. However, most of the available AI literacy tests are quite long and time-consuming. A short test of AI literacy would instead enable efficient measurement and facilitate better research and understanding. In this study, we develop and validate a short version of an existing validated AI literacy test. Based on a sample of 1,465 university students across three Western countries (Germany, UK, US), we select a subset of items according to content validity, coverage of different difficulty levels, and ability to discriminate between participants. The resulting short version, AILIT-S, consists of 10 items and can be used to assess AI literacy in under 5 minutes. While the shortened test is less reliable than the long version, it maintains high construct validity and has high congruent validity. We offer recommendations for researchers and practitioners on when to use the long or short version.
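Selecting the most discriminating items for a short form, one of the criteria named above, can be sketched with the corrected item-total correlation from classical test theory. This is a generic illustration on simulated responses, not the authors' actual selection procedure, which also weighed content validity and difficulty coverage.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 500, 28

# Simulated binary responses: ability-driven, so good items track the total score
ability = rng.normal(size=(n_students, 1))
difficulty = rng.normal(size=(1, n_items))
responses = (ability - difficulty + rng.normal(size=(n_students, n_items)) > 0).astype(int)

# Corrected item-total correlation: each item vs. the total score excluding that item
totals = responses.sum(axis=1)
disc = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(n_items)
])

# Keep the 10 most discriminating items (content balance would be checked separately)
short_form = np.argsort(disc)[::-1][:10]
print(sorted(short_form.tolist()))
```

Excluding the item from its own total avoids the inflation that comes from correlating an item with a sum that contains it.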


Fig. 2 Distribution of the rating scores of teachers, experts, and the LLM agent, averaged across all six dimensions of feedback quality.
Fig. 3 Average score and standard deviation of the scoring of the feedback texts generated by teachers, experts, and the LLM agent in each rating category. Significant differences are marked by * (p < 0.05) and ** (p < 0.01).
Fig. 4 Distribution of the number of words in each feedback text written by the teachers, experts, and the LLM agent.
Table: Overview of the multidimensional criteria applied to assess the feedback, categorized into content-related (C) and language-related (L) aspects.
Table: Overview of all correlation values; these results suggest that humans and LLMs struggled with different types of feedback texts, with limited alignment in their strengths.
Towards Adaptive Feedback with AI: Comparing the Feedback Quality of LLMs and Teachers on Experimentation Protocols

February 2025 · 121 Reads

Effective feedback is essential for fostering students' success in scientific inquiry. With advancements in artificial intelligence, large language models (LLMs) offer new possibilities for delivering instant and adaptive feedback. However, this feedback often lacks the pedagogical validation provided by real-world practitioners. To address this limitation, our study evaluates and compares the feedback quality of LLM agents with that of human teachers and science education experts on student-written experimentation protocols. Four blinded raters, all professionals in scientific inquiry and science education, evaluated the feedback texts generated by (1) the LLM agent, (2) the teachers, and (3) the science education experts using a five-point Likert scale based on six criteria of effective feedback: Feed Up, Feed Back, Feed Forward, Constructive Tone, Linguistic Clarity, and Technical Terminology. Our results indicate that LLM-generated feedback shows no significant difference from that of teachers and experts in overall quality. However, the LLM agent's performance lags in the Feed Back dimension, which involves identifying and explaining errors within the student's work context. Qualitative analysis highlighted the LLM agent's limitations in contextual understanding and in the clear communication of specific errors. Our findings suggest that combining LLM-generated feedback with human expertise can enhance educational practices by leveraging the efficiency of LLMs and the nuanced understanding of educators.
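A rating comparison like the one described, Likert scores per feedback source with significance flags at p < 0.05 and p < 0.01, can be sketched with Welch's t-test. The ratings below are hypothetical, and the paper's actual statistical test is not specified here.

```python
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples
    with possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical five-point ratings on the "Feed Back" dimension
llm      = [3, 3, 4, 2, 3, 3, 4, 3, 2, 3]
teachers = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]
t, df = welch_t(llm, teachers)
print(round(t, 2), round(df, 1))  # → -3.35 18.0
```

The resulting t would then be compared against the t distribution with df degrees of freedom to obtain the p-value; Welch's variant is a safer default than Student's t when the two sources' rating variances differ.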


A Multinational Assessment of AI Literacy among University Students in Germany, the UK, and the US

February 2025 · 93 Reads · 3 Citations

Computers in Human Behavior: Artificial Humans


Exemplary aspects of the proposed framework for science education using MLLMs.
Taking the next step with generative artificial intelligence: The transformative role of multimodal large language models in science education

January 2025 · 163 Reads · 35 Citations

Learning and Individual Differences

[...] · Claudia Nerdel

The integration of Artificial Intelligence (AI), particularly Large Language Model (LLM)-based systems, in education has shown promise in enhancing teaching and learning experiences. However, the advent of Multimodal Large Language Models (MLLMs) like GPT-4 Vision, capable of processing multimodal data including text, sound, and visual inputs, opens a new era of enriched, personalized, and interactive learning landscapes in education. This paper derives a theoretical framework for integrating MLLMs into multimodal learning. This framework serves to explore the transformative role of MLLMs in central aspects of science education by presenting exemplary innovative learning scenarios. Possible applications for MLLMs range from content creation to tailored support for learning, fostering engagement in scientific practices, and providing assessments and feedback. These applications are not limited to text-based and unimodal formats but can be multimodal, thus increasing personalization, accessibility, and potential learning effectiveness. Despite the many opportunities, challenges such as data protection and ethical considerations become salient, calling for robust frameworks to ensure responsible integration. This paper underscores the necessity for a balanced approach in implementing MLLMs, where the technology complements rather than supplants the educators' roles, ensuring an effective and ethical use of AI in science education. It calls for further research to explore the nuanced implications of MLLMs for educators and to extend the discourse beyond science education to other disciplines. Through developing a theoretical framework for the integration of MLLMs into multimodal learning and exploring the associated potentials, challenges, and future implications, this paper contributes to a preliminary examination of the transformative role of MLLMs in science education and beyond.


KI in gymnasialer und beruflicher Lehrkräftebildung an der TUM: Interdisziplinäres und fachdidaktische Umsetzungsbeispiel [AI in Academic-Track and Vocational Teacher Education at TUM: An Interdisciplinary, Subject-Didactic Implementation Example]

December 2024 · 74 Reads

AI technologies are not only changing science teaching through disciplinary applications such as protein structure analysis, but are also transforming learning itself. This contribution provides insight into the content and didactic approaches of a seminar at the Technical University of Munich (TUM) for students training to teach biology at academic-track secondary schools (Gymnasium) and to teach health and nursing at vocational schools. The seminar aims to prepare future teachers to teach AI-specific topics. The use of AI as a learning technology is addressed in the seminar by way of worked examples.


AI Advocates and Cautious Critics: How AI Attitudes, AI Interest, Use of AI, and AI Literacy Build University Students' AI Self-Efficacy

December 2024 · 637 Reads · 15 Citations

Computers and Education: Artificial Intelligence

This study investigates how cognitive, affective, and behavioral variables related to artificial intelligence (AI) build AI self-efficacy among university students. Based on these variables, we identify three meaningful student groups, which can guide educational initiatives. We recruited 1,465 undergraduate and graduate students from the United States, the United Kingdom, and Germany and measured their AI self-efficacy, AI literacy, interest in AI, attitudes towards AI, and AI use. Using a path model, we examine the correlations and paths among these variables. Results reveal that AI usage and positive AI attitudes significantly predict interest in AI, which, together with AI literacy, in turn enhances AI self-efficacy. Moreover, using Gaussian Mixture Models, we identify three groups of students: 'AI Advocates,' 'Cautious Critics,' and 'Pragmatic Observers,' each exhibiting unique patterns of AI-related cognitive, affective, and behavioral traits. Our findings demonstrate the necessity of educational strategies that not only focus on AI literacy but also aim to foster students' AI attitudes, usage, and interest to effectively promote AI self-efficacy. Furthermore, we argue that educators who aim to design inclusive AI educational programs should take into account the distinct needs of the different student groups identified in this study.
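The Gaussian Mixture Model step, with the number of groups chosen by BIC as in the paper's 1-to-6-component comparison, can be sketched with a minimal one-dimensional EM implementation. The data below are simulated from three latent groups; the authors' actual model operates on multiple AI-related variables, so this is only a structural illustration.

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """EM for a 1-D Gaussian mixture with quantile-based initialization.
    Returns the final log-likelihood and the number of free parameters."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    ll = np.log(dens.sum(axis=1)).sum()
    return ll, 3 * k - 1  # k means + k variances + (k - 1) free weights

# Simulated 1-D scores from three latent student groups (means 1.5, 3.0, 4.5)
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(m, 0.4, 300) for m in (1.5, 3.0, 4.5)])

# BIC = -2 log L + p log n; the lowest value indicates the preferred model
bics = {k: -2 * ll + p * np.log(len(x))
        for k in range(1, 5)
        for ll, p in [fit_gmm_1d(x, k)]}
print(min(bics, key=bics.get))  # component count with the lowest BIC
```

BIC trades fit against the log-penalized parameter count, so adding a component is only rewarded when it buys a substantial likelihood gain, which is what makes it usable for choosing the number of student groups.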


A Multinational Assessment of AI Literacy among University Students in Germany, the UK, and the US

October 2024 · 1,189 Reads · 2 Citations

Highlights:
  • Assessed 1,465 university students across Germany, the UK, and the US to measure AI literacy and related variables
  • Compares AI literacy, AI self-efficacy, interest in AI, attitudes towards AI, AI use, and prior learning experiences between countries
  • German students show higher AI literacy; UK students have more negative attitudes; US students report greater AI self-efficacy
  • Provides an AI literacy test validated for cross-national research

AI literacy is one of the key competencies that university students, as future professionals and citizens, need for their lives and careers in an AI-dominated world. Cross-national research on AI literacy can generate critical insights into trends and gaps needed to improve AI education. In this study, we assessed the AI literacy of 1,465 students in Germany, the UK, and the US using a knowledge test previously validated in Germany. We additionally measured AI self-efficacy, interest in AI, attitudes towards AI, AI use, and students' prior learning experiences. Our IRT analysis demonstrates that the AI literacy test remains effective in measuring AI literacy across different languages and countries. Our findings indicate that the majority of students have a foundational level of AI literacy as well as relatively high levels of interest and positive attitudes related to AI. Students in Germany tend to have a higher level of AI literacy compared to their peers in the UK and US, whereas students in the UK tend to have more negative attitudes towards AI, and US students are more likely to have high AI self-efficacy. Based on these results, we offer recommendations for educators on how to take into account differences in affective variables such as attitudes and prior experiences to create effective learning opportunities. By validating an existing test instrument across different countries, we provide an instrument and data which can serve as orientation for future research.
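The IRT analysis mentioned above can be illustrated with the two-parameter logistic (2PL) item response function, a standard IRT model. The item parameters below are hypothetical; the paper's actual model specification is not given here.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of answering an item correctly
    for ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item (a = 1.2, b = 0.5) evaluated at three ability levels
for theta in (-1.0, 0.5, 2.0):
    print(round(p_correct(theta, 1.2, 0.5), 2))  # → 0.14, 0.50, 0.86
```

Cross-national validation in this framework typically checks that the estimated a and b parameters behave comparably across language versions (measurement invariance), so the same test can be compared across countries.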


Figure 4 BIC and ABIC for Gaussian Mixture Models (GMM) with 1 to 6 components.
Definitions of Key Constructs Related to AI Self-Efficacy, AI Literacy, AI Interest, AI Attitudes, and AI Use.
Correlations between AI literacy, AI self-efficacy, AI interest, positive and negative attitude towards AI, and use of AI.
Fit indices of the path model for the total sample and country-specific subsamples from the United States, the United Kingdom, and Germany.
AI Advocates and Cautious Critics: How AI Attitudes, AI Interest, Use of AI, and AI Literacy Build University Students' AI Self-Efficacy

October 2024 · 1,250 Reads · 1 Citation

Highlights:
  • Validated a path model showing that AI use and positive AI attitudes significantly predict AI interest, which, along with AI literacy, enhances AI self-efficacy
  • Uncovers and describes three groups of students: AI Advocates, Cautious Critics, and Pragmatic Observers
  • Describes socio-demographic characteristics of the student groups regarding AI
  • Recommends strategies to promote student AI self-efficacy in light of cognitive, affective, and behavioral factors

This study investigates how cognitive, affective, and behavioral variables related to artificial intelligence (AI) build AI self-efficacy among university students. Based on these variables, we identify three meaningful student groups. We recruited 1,465 undergraduate and graduate students from the United States, the United Kingdom, and Germany and measured their AI self-efficacy, AI literacy, interest in AI, attitudes towards AI, and AI use. Using a path model, we examine the correlations and paths among these variables. Results reveal that AI usage and positive attitudes significantly predict interest in AI, which, together with AI literacy, in turn enhances AI self-efficacy. Moreover, using Gaussian Mixture Models, we identify three stable and distinct groups of students: 'AI Advocates,' 'Cautious Critics,' and 'Pragmatic Observers,' each exhibiting unique patterns of AI-related cognitive, affective, and behavioral traits. Our findings demonstrate the necessity of educational strategies that not only focus on AI literacy but also aim to foster students' attitudes, usage, and interest to effectively promote AI self-efficacy. Furthermore, we argue that educators who aim to design inclusive AI educational programs should take into account the distinct needs of the different student groups identified here.


Citations (38)


... However, research on how to foster these competencies effectively is still in its infancy. To evaluate different methods of learning and instruction, and progress made toward AI literacy policy goals expressed by governments and corporations around the world, effective assessment of AI literacy is crucial (Hornberger et al., 2023; Hornberger et al., 2025). ...

Reference:

Development and Validation of a Short AI literacy test (AILIT-S) for University Students
A Multinational Assessment of AI Literacy among University Students in Germany, the UK, and the US

Computers in Human Behavior: Artificial Humans

... Similarly, integrating acoustic information has been found to reinforce temporal modeling and improve robustness to ambiguous textual or visual cues [57]. In educational contexts, multimodal approaches have demonstrated improvements in learner engagement, feedback personalization, and cognitive load management, particularly when integrated with adaptive feedback systems [58]. These findings suggest that multimodal systems are not merely additive in capability but fundamentally more expressive and cognitively aligned with human information processing. ...

Taking the next step with generative artificial intelligence: The transformative role of multimodal large language models in science education

Learning and Individual Differences

... The use of AI, analytics, and automation raises questions about data governance, algorithmic bias, and privacy. Institutions must establish robust policies to ensure that smart administration adheres to ethical standards and regulatory compliance [33]. The upfront investment in digital infrastructure and staff training can be substantial-particularly for institutions in developing countries. ...

AI Advocates and Cautious Critics: How AI Attitudes, AI Interest, Use of AI, and AI Literacy Build University Students' AI Self-Efficacy

Computers and Education: Artificial Intelligence

... AI literacy was measured using the AI literacy test instrument developed by Hornberger et al. (2023) and validated with samples in the US, UK, and Germany (Hornberger et al., 2024). The instrument builds on Long and Magerko's (2020) conceptualization of AI literacy and consists of 27 4-choice items and one sorting item (arranging the steps of machine learning in order). ...

A Multinational Assessment of AI Literacy among University Students in Germany, the UK, and the US

... Wang et al. (2023) highlight how HEIs' AI capabilities directly affect students' self-efficacy and creativity, suggesting that a technologically advanced learning environment can empower students to engage more effectively with AI tools. This context sets the stage for Bewersdorff et al. (2024), who emphasize the importance of positive attitudes towards AI and AI literacy in fostering students' interest and self-efficacy in AI. This finding indicates that a supportive mindset and foundational knowledge are crucial for effective AI integration in education. ...

AI Advocates and Cautious Critics: How AI Attitudes, AI Interest, Use of AI, and AI Literacy Build University Students' AI Self-Efficacy

... According to Lai et al. (2024), social influence refers to the extent to which individuals perceive that those who are important to them believe they should use AI to support their study. Rocha et al. (2024) highlighted that social influence can play an important role in adopting any technology. Alvi (2021) in his study, also used social influence to find out AI use. ...

Exploring Influences on ICT Adoption in German Schools: A UTAUT-Based Structural Equation Model

Journal of Learning for Development

... Perceived efficacy refers to teachers' beliefs about the ability of AR to enhance student learning and engagement in Chemistry. This perception significantly influences their willingness to adopt and sustain AR in their instructional practices (Ripsam & Nerdel, 2024). Teachers who view AR as a valuable tool that improves students' comprehension and increases their level of participation are more likely to incorporate and utilize it in their classes (Mazzuco, Krassmann, Reategui & Gomes, 2022; Zhang, Li, Huang, Feng & Luo, 2020). ...

Teachers’ attitudes and self-efficacy toward augmented reality in chemistry education

... In the field of education, artificial intelligence (AI) has the potential to enhance teaching and learning experiences through the development of language models (Bewersdorff et al., 2024;Lee et al., 2024;Leinonen et al., 2023;Wu et al., 2023). AI-based language learning applications provide supplementary exercises and materials on topics such as pronunciation correction, vocabulary development, and language practice in areas where students require improvement (Wang & Jiao, 2021). ...

Taking the Next Step with Generative Artificial Intelligence: The Transformative Role of Multimodal Large Language Models in Science Education

... AESs have exhibited higher inter-rater reliability compared to human raters for objectively measurable features, while human raters have shown superior performance in evaluating subjective aspects [28]. Cross-linguistic and cross-cultural comparisons have revealed varying degrees of agreement between AESs and human raters across different L1 backgrounds, suggesting potential bias in automated systems [33]. Agreement levels also vary based on test-takers' proficiency levels [9]. ...

Assessing student errors in experimentation using artificial intelligence and large language models: A comparative study with human raters

Computers and Education: Artificial Intelligence

... Finally, for a more comprehensive demographic analysis, we included five single-choice questions on AI literacy [33] that assessed participants' understanding of AI functionalities and limitations, the Affinity for Technology Interaction (ATI) scale [74], and a short Big-5 personality questionnaire [62]. ...

What do university students know about Artificial Intelligence? Development and validation of an AI literacy test

Computers and Education Artificial Intelligence