Fig. 2
Source publication
The introduction of Artificial Intelligence (AI), specifically Generative AI (GenAI), has significantly transformed the higher education landscape. Despite the opportunities GenAI offers to students, it poses significant challenges for academic integrity. Thus, it is crucial for higher education institutions (HEIs) to balance the use of GenAI for e...
Contexts in source publication
Context 1
... comprehensive policy should include regulations for the use of GenAI tools, clearly communicate the roles and responsibilities of various stakeholders for their responsible use, and outline procedures for detecting academic dishonesty and the penalties and sanctions for such misconduct, among others. For example, Aalto University's recent academic integrity policy introduced detailed guidelines for the responsible use of AI tools for students and other stakeholders (Aalto University, 2023). Similarly, the new integrity policy introduced by The University of Sydney offers detailed guidelines for AI use by students in assessable work and outlines the consequences of academic dishonesty (The University of Sydney, 2023). ...
Context 2
... educators, and institutions. This framework emphasizes collaborative learning, critical thinking, and AI literacy for students; innovative pedagogical and assessment approaches and development programs for educators; and comprehensive AI policy development and review, a culture of academic integrity, and technological infrastructure for institutions (Fig. 2). The proposed framework offers a foundational approach for HEIs to proactively engage with multiple stakeholders to maintain academic integrity among students and to ensure that education remains a process that genuinely reflects students' learning through the ethical use of GenAI tools. However, it is important to note that ...
Citations
... Moreover, in the absence of proactive measures and guidance, there is a risk that students will carry habitual use of these tools into practice as registered pharmacists without considering the impact on organisational requirements and patient safety [8]. Therefore, tailored training programs that equip students with the key skills to evaluate and use GenAI are required to foster responsible integration [9]. ...
Background
Generative artificial intelligence (GenAI) has significant potential implications for pharmacy education, but its ethical, practical, and pedagogical implications have not been fully explored.
Aim
This international study evaluated pharmacy students’ acceptance and use of GenAI tools using the Extended Unified Theory of Acceptance and Use of Technology (UTAUT).
Method
A cross-sectional survey of pharmacy students from nine countries during the first half of 2024 assessed GenAI usage patterns, curricular integration, and acceptance via the Extended UTAUT framework. After appropriate translation and cultural adaptation, exploratory factor analysis (EFA) identified key adoption factors.
Results
A total of 2009 responses were received. ChatGPT and Quillbot were the tools most frequently utilised. EFA identified three key dimensions: Utility-Driven Adoption, Affordability and Habitual Integration, and Social Influence. Students rated performance and effort expectancy highly, highlighting their perceived usefulness and ease of use of GenAI tools. In contrast, habit and price value received lower ratings, indicating barriers to habitual use and affordability concerns. Gender disparities were noted, with males demonstrating significantly higher acceptance ( p < 0.001). Additionally, country-specific differences were evident, as Malaysia reported a high performance expectancy, while Egypt exhibited low facilitating conditions. Over 20% indicated an over-reliance on GenAI for assignments, raising ethical concerns. Significant gaps were observed, such as limited ethical awareness—only 10% prioritised legal and ethical training—and uneven curricular integration, with 60% reporting no formal exposure to Generative AI.
Conclusion
Findings reveal critical gaps in ethical guidance, equitable access, and structured GenAI integration in pharmacy education. A proactive, context-specific strategy is essential to align technological innovation with pedagogical integrity.
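For readers less familiar with the exploratory factor analysis step described in the Method, the sketch below shows how a three-factor solution might be extracted from five-point Likert responses in Python with the factor_analyzer package. The data, item names, and factor labels are illustrative assumptions only; they are not the study's instrument, dataset, or analysis code.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(42)

# Hypothetical survey: 300 respondents answering 9 Likert items (1-5).
items = [f"item_{i}" for i in range(1, 10)]
responses = pd.DataFrame(rng.integers(1, 6, size=(300, 9)), columns=items)

# Extract three factors with varimax rotation, mirroring the three
# dimensions reported above (utility, affordability/habit, social influence).
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(responses)

# Loadings show which items cluster on which factor; variance figures
# indicate how much each factor explains.
loadings = pd.DataFrame(fa.loadings_, index=items,
                        columns=["Factor1", "Factor2", "Factor3"])
print(loadings.round(2))
print("Proportional variance:", fa.get_factor_variance()[1].round(2))

In practice the factor count would be chosen from scree plots or parallel analysis, and the items would come from the translated UTAUT instrument; the random data here will not reproduce the study's reported structure.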
... While AI can support learning, institutions must take proactive measures to ensure it does not replace essential cognitive processes. By integrating critical thinking-focused AI applications, enhancing academic integrity awareness, and fostering an environment where AI is used as an assistive rather than a dominant tool, institutions can help students develop sustainable learning habits that prepare them for the evolving demands of the workforce (Ateeq et al., 2024; Rasul et al., 2024). Supporting students with time management training, mental health resources, and flexible learning options can further mitigate the negative effects of AI overreliance. ...
This chapter investigates the perceived erosion of critical academic skills among 745 university students due to dependency on AI tools. The survey measured six key constructs: AI Dependency (AID), Cognitive Offloading (CO), Motivational Decline (MD), Academic Skills Erosion (ASE), Academic Integrity Awareness (AIA), and External Pressures (EP), using a five-point Likert scale. Path analysis was employed to examine the interrelationships among these constructs. The results revealed a strong positive relationship between AID and both CO and MD, indicating that increased reliance on AI leads to reduced cognitive engagement and diminished academic motivation. Additionally, CO and MD were positively associated with ASE, meaning that students who offload cognitive tasks and experience motivational decline are more likely to exhibit deteriorating academic skills. While AIA had a weak negative relationship with AID, EP showed a moderate positive association with AID, highlighting the role of academic stress in driving AI reliance.
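As an illustration of how the path model described in this chapter could be specified, the following sketch uses the semopy package in Python with synthetic composite scores for the six constructs. The variable names follow the abbreviations above, but the data, model syntax, and any resulting estimates are assumptions for demonstration, not the chapter's actual model or results.

import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 745  # same sample size as reported; the values themselves are random

# Hypothetical composite scores (1-5) for each construct.
cols = ["AID", "CO", "MD", "ASE", "AIA", "EP"]
scores = pd.DataFrame(rng.uniform(1, 5, size=(n, len(cols))), columns=cols)

# Paths implied by the reported relationships: dependency driven by
# integrity awareness and external pressures; offloading and motivational
# decline driven by dependency; skills erosion driven by offloading and decline.
model_desc = """
AID ~ AIA + EP
CO ~ AID
MD ~ AID
ASE ~ CO + MD
"""

model = semopy.Model(model_desc)
model.fit(scores)
print(model.inspect())  # path coefficients, standard errors, p-values

Because the inputs here are random, the fitted coefficients will be near zero; with the real survey data the signs and magnitudes would mirror the relationships summarised above.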
... In recent years, the world has witnessed a drastic rise in the use of mobile devices such as smartphones, iPads, and mobile phones [3]. The growing number of mobile technologies and their usability, facilitated by the internet, have triggered innovative opportunities for their use in higher education [4]. According to [5], mobile internet penetration is predicted to reach 71% of the world population by 2025. ...
Mobile learning (m-learning) utilizes portable devices like smartphones and tablets for educational purposes and is gaining popularity, particularly in open and distance learning (ODL) contexts. This study explores the usability of mobile learning technologies among undergraduate students in ODL. An online survey questionnaire was administered to undergraduates enrolled in a Business course in Botswana, and a usability conceptual model was developed. The findings revealed that the predominant opportunities of using mobile learning technologies in ODL include improved student interaction and flexible learning schedules. Conversely, challenges included limited battery life, small screens, distractions from phone calls and notifications, and the high cost of internet bundles. To enhance the adoption of mobile learning, the study recommends investments in infrastructure and internet connectivity. Furthermore, it suggests reducing internet costs through initiatives like government subsidies or partnerships with telecom providers. Additionally, the study highlights the importance of training ODL educators and students in the effective use of mobile learning technologies.
... In this respect, the question of originality becomes especially difficult when students do not simply let an AI chatbot write their texts but co-write their assignments together with the AI chatbot (Luo, 2024). Several scholars have called for guidelines and policies regulating the usage of AI chatbots (Farhi et al., 2023; Johnston et al., 2024; Ofem et al., 2024), partly promoting the usage and ongoing enhancement of AI detection tools (Dalalah & Dalalah, 2023; Perkins et al., 2024; Rasul et al., 2024). Several universities have by now introduced corresponding policies, some of which involve the use of AI detection tools and even outright bans of AI chatbots (Luo, 2024). ...
... The use of tools that aim to detect AI-generated texts is discussed as a possible way of tackling fraudulent use of AI chatbots by students (Dalalah & Dalalah, 2023; Perkins et al., 2024; Rasul et al., 2024). Some universities have already implemented policies that entail the use of such tools (Luo, 2024). ...
This study addresses a significant gap in our understanding of how university students actually use AI chatbots such as ChatGPT. While existing literature has explored students' attitudes, the acceptance and adoption of AI tools, as well as potential implications for academic integrity, little is known about students' concrete practices of using AI chatbots. We conducted an exploratory, qualitative study involving focus group interviews with 61 students at a German university. Drawing on sociological practice theory, particularly structuration theory, we show how students use (and refrain from using) AI chatbots reflexively and often in highly elaborate ways. We identify five generalized practices of usage: students use AI chatbots as a support tool (e.g., language editing), as a learning facilitator, as sole author (students simply copy-paste AI-generated text), as first author (students substantially modify AI-generated text), and as second author (students outline their own ideas and arguments and let the AI write them up). Hence, in addition to mere copy-pasting, students often co-author their works with AI chatbots, raising the question of originality. Furthermore, we reveal the related norms and interpretative schemes. Further findings include that the non-usage of AI chatbots has become a source of self-validation and pride for some students. We also find a continued necessity for students to acquire academic competencies. Our findings have significant implications for university policies. Measures such as mandatory disclosure of AI use, reliance on AI detection tools, or outright bans may prove ineffective and problematic, and might even exacerbate existing inequalities.
... This reflected both the fact that a small proportion of students could use GenAI to subvert the validity of assessment and achieve a passing level without putting in the required effort and time themselves (Thompson et al., 2023), and a broader view that, as disruptive technologies develop, assessment would have to fundamentally change. The view that assessment in higher education must be re-evaluated has now received widespread attention in the literature (Bearman et al., 2024; Mao et al., 2024; Rasul et al., 2024; Thanh et al., 2023; Thompson et al., 2023; Xia et al., 2024). This has become even more critical as the abilities of GenAI tools to tackle assessment have grown even further since their first release, including demonstrating the ability to pass high-stakes medical and legal examinations (Head & Willis, 2024). ...
Recent advances in Generative AI (GenAI) are transforming multiple aspects of society, including education and foreign language learning. In the context of English as a Foreign Language (EFL), significant research has been conducted to investigate the applicability of GenAI as a learning aid and the potential negative impacts of new technologies. Critical questions remain about the future of AI, including whether improvements will continue at such a pace or stall and whether there is a true benefit to implementing GenAI in education, given the myriad costs and potential for negative impacts. Apart from the ethical conundrums that GenAI presents in EFL education, there is growing consensus that learners and teachers must develop AI literacy skills to enable them to use and critically evaluate the purposes and outputs of these technologies. However, there are few formalised frameworks available to support the integration and development of AI literacy skills for EFL learners. In this article, we demonstrate how the use of a general, all-purposes framework (the AI Assessment Scale) can be tailored to the EFL writing and translation context, drawing on existing empirical research validating the scale and adaptations to other contexts, such as English for Academic Purposes. We begin by engaging with the literature regarding GenAI and EFL writing and translation, prior to explicating the use of three levels of the updated AIAS for structuring EFL writing instruction which promotes academic literacy and transparency and provides a clear framework for students and teachers.
... For many educators and students, GenAI represents their first meaningful encounter with AI technology (Parker et al., 2024). By providing immediate, personalized feedback and facilitating self-directed learning, GenAI tools hold the potential to alleviate the burden on teachers while aiding students in cultivating essential skills and achieving a deeper understanding of their subjects (Rasul et al., 2024). ...
... Despite the opportunities and benefits, educators and researchers have also voiced significant concerns regarding the impact of GenAI on education. These concerns primarily centre around issues such as academic misconduct (Rasul et al., 2024) and the risk of hindering students' intellectual growth and problem-solving skills (Michel-Villarreal et al., 2023). For instance, Črček and Patekar (2023) found that of the 201 university students in Croatia, 44.7% reported using ChatGPT for university assignments. ...
The launch of ChatGPT and the rapid proliferation of generative AI (GenAI) have brought transformative changes to education, particularly in the field of assessment. This has prompted a fundamental rethinking of traditional assessment practices, presenting both opportunities and challenges in evaluating student learning. While numerous studies have examined the use of GenAI in assessment, no systematic review has been conducted to synthesise the existing empirical evidence on this topic. Systematically reviewing 19 empirical studies published within 10 years, starting in 2014, this study assessed the current state of empirical evidence regarding GenAI in educational assessment practices and the future research directions required to advance this field. The findings were synthesised into four themes: (1) Educators' perceptions of GenAI in assessment practices; (2) Students' perceptions of GenAI in assessment practices; (3) Effectiveness of applying GenAI in assessment practices; and (4) Recommendations for leveraging GenAI in future assessment practices. The first three themes summarise the current empirical evidence, while the fourth theme identifies priorities for future research to guide the effective integration of GenAI into assessment practices.
... Single examinations or essays are increasingly being recognised as untenable for assessing learning (Gorichanaz, 2023). Among the topics raised in the literature on assessment redesign, common themes include student-centred (Hsiao et al., 2023) and authentic (Rasul et al., 2024) assessment, a focus on learning rather than the end product, higher-order thinking skills (Smolansky et al., 2023), and evaluative judgement. This recognition has helped advance frameworks for integrating AI into assessments, such as the AIAS. ...
Recent developments in Generative Artificial Intelligence (GenAI) have created significant uncertainty in education, particularly in terms of assessment practices. Against this backdrop, we present an updated version of the AI Assessment Scale (AIAS), a framework with two fundamental purposes: to facilitate open dialogue between educators and students about appropriate GenAI use and to support educators in redesigning assessments in an era of expanding AI capabilities. Grounded in social constructivist principles and designed with assessment validity in mind, the AIAS provides a structured yet flexible approach that can be adapted across different educational contexts. Building on implementation feedback from global adoption across both the K-12 and higher education contexts, this revision represents a significant change from the original AIAS. Among these changes is a new visual guide that moves beyond the original traffic light system and utilises a neutral colour palette that avoids implied hierarchies between the levels. The scale maintains five distinct levels of GenAI integration in assessment, from "No AI" to "AI Exploration", but has been refined to better reflect rapidly advancing technological capabilities and emerging pedagogical needs. This paper presents the theoretical foundations of the revised framework, provides detailed implementation guidance through practical vignettes, and discusses its limitations and future directions. As GenAI capabilities continue to expand, particularly in multimodal content generation, the AIAS offers a starting point for reimagining assessment design in an era of disruptive technologies.
... To maximize the benefits and minimize the risks associated with GenAI, educators and policymakers should adopt responsible guidelines (Arantes, 2024). Schools should establish policies on the acceptable use of GenAI, especially for assignments and exams, to prevent academic misconduct (Rasul et al., 2024). ...
GenAI is now widely integrated into learning in secondary schools, as it is considered to have the potential to enhance the learning experience for students. However, using GenAI in secondary school settings remains controversial, as its benefits coexist with significant ethical, social, and educational challenges. This review critically examines the role of GenAI in secondary school education, assessing current applications, potential educational benefits, and unique challenges. Through the exploration of case studies and empirical research, the researcher highlights key areas where GenAI can enhance learning while presenting possible drawbacks and ethical dilemmas. The researcher concludes by suggesting guidelines for the responsible integration of GenAI in secondary school education, aiming to capitalize on its potential while addressing critical risks.
There is increasing demand for teamwork and collaboration proficiencies, which require recalibration of workforce capabilities. As artificial intelligence reshapes societal and professional demands, higher education must adapt by equipping students with essential skills for evolving workplace needs. This systematic review aims to map the perspectives explored and the observed effects of using AI in collaborative learning among higher education students. The reviewed 34 studies primarily explored how AI enhances collaborative learning, targeting group performance, collaboration, knowledge building, and social interactions. Findings showed that integrating AI into collaborative learning situations can promote both collaborative and individual learning in various higher education contexts. This review provides an overview of AI's role in collaborative learning settings, highlights current development, and identifies gaps for further research. It also serves as a source for educators, policymakers, and researchers interested in leveraging AI to foster enriching educational experiences.