Approach Generative AI Tools Proactively or Risk
Bypassing the Learning Process in Higher Education
Dorottya Sallai 1,*, Jonathan Cardoso-Silva 2,*, Marcos E. Barreto 3,*, Francesca Panero 3, Ghita Berrada 2, and Sara Luxmoore 2

1 LSE Department of Management, d.sallai@lse.ac.uk
2 LSE Data Science Institute, j.cardoso-silva@lse.ac.uk
3 LSE Department of Statistics, M.E.Barreto@lse.ac.uk
* Corresponding authors
Forthcoming in LSE Public Policy Review, 2024
Abstract
The growing reliance of higher education (HE) students on Generative Artificial
Intelligence (GenAI) tools for learning and assessment risks circumventing rather than
enhancing the learning process if no adequate support and direction are provided.
Reflecting on experience from a UK university, our article explores how students use
GenAI tools in practice. We argue that students rely on GenAI differently for learning than
for assessments and tend to focus more on the output or performance than the learning
journey itself. This raises questions on how GenAI can be successfully integrated into the
curriculum without jeopardising learning. Based on our observations that some students
use GenAI platforms as a substitute for learning rather than as a tool to enhance learning,
our policy recommendations focus on curriculum planning and assessment design.
JEL Keywords:
• Generative AI (Z0)
• Artificial Intelligence (Z0)
• Artificial Intelligence in Education (Z0)
• Higher Education (I23)
• Education Policy (I280)
• Technological Change (O330)
• Technological Impact (O330)
• Technology Adoption (O330)
Introduction
The rise of Generative AI (GenAI) tools and their potential impact on teaching, learning,
and assessment practices has recently been a significant topic of discussion in higher
education (1,2). Since November 2022, when OpenAI introduced ChatGPT, the online conversational AI chatbot that rapidly gained widespread attention worldwide, educators and students have been grappling with the capabilities of this and similar tools later released by other major technology companies, such as Google's Gemini, GitHub's Copilot, Microsoft's Copilot, and Anthropic's Claude.
For the first time, people could easily converse directly with an AI chatbot using natural
language to discuss almost any topic and 'look up' information instead of retrieving it from
search engines, Wikipedia, academic databases, or primary sources (3). Some GenAI-
powered tools can also function as digital personal assistants, helping to auto-complete
paragraphs or programming code, even without the need for explicit conversation or
instructions, and there are GenAI chatbots that can operate as personal tutors (4). Given
the potential for these tools to automatically generate complete essays and other
assignments commonly adopted in higher education, coupled with the fact that GenAI-
generated content often lacks factual accuracy, much of the discourse and policies within
the sector have been centred on ethical considerations and concerns related to academic
misconduct (5).
In this article, we provide policy recommendations on assessment and curriculum design
which reflect how higher education (HE) institutions and educators can adapt to these
challenges as students and teachers use GenAI platforms as part of their learning
journey. These recommendations are inspired by initial insights from GENIAL, a study
we conducted during the 2023/24 academic year to investigate how undergraduate and
postgraduate students from quantitative and qualitative subjects at the London School of
Economics and Political Science (LSE) interacted with Generative AI tools (ChatGPT and
Gemini) in their courses.
We argue that the biggest pedagogical challenge of using GenAI tools in higher education
is that students may use them to replace their learning process and critical skills. The
changes brought by the advent of this technology demand that educators and higher
education institutions rethink their curriculum and assessment design practices and
approach this new era of AI-enabled learning with curiosity, self-reflection and a
commitment to life-long learning. In contrast to the generally rather vague guidelines and
policy documents currently available on AI in HE, our article provides some practical ideas
and actionable recommendations that can make a difference for educators and HE
institutions quickly and efficiently.
The Context of the 2023-24 Academic Year
As GenAI tools grew in popularity, an extensive scholarly debate arose on how to
embrace and use GenAI in educational settings most effectively (6). While GenAI may
create opportunities for increasing administrative efficiency and innovation in university
education, for instance, by improving access to remote learning, asynchronous teaching
delivery, online collaboration, gamification, and student engagement, it also presents
significant challenges, particularly in the areas of academic integrity, equity and the future
of traditional assessment methods like ‘open book’ exams, dissertations or essays (1).
Many scholars warned about the 'death' of the essay even before ChatGPT became freely available to all (7), drawing attention to the rise of academic cheating driven by the increase in online take-home examinations after the Covid-19 pandemic and by the almost parallel emergence of artificial intelligence (AI) (8). Indeed, as Lindebaum and Ramirez
(9) claimed, freely available tools are already giving students the opportunity to rely
entirely on GenAI when writing their assignments. These platforms not only design and
write high-level essays but also paraphrase the text and check it against available
plagiarism tools. The UK higher education sector had already been facing an ongoing
challenge with ‘contract’ cheating through ‘essay mills’ – professional websites that
provide pre-written assignments to students (10). Now, students can rely entirely on a
free language model supplier such as ChatGPT to generate academic work.
On the other hand, some have argued that students’ efficient use of ChatGPT can be
advantageous for learners, teachers, and researchers, especially non-native English
speakers (11). From this perspective, AI can enhance students' linguistic abilities and serve as a teaching assistant rather than a machine that replaces human competence (12).
In this context, lecturers should be aware of the risk of students trusting GenAI tools too
much, potentially distracting them from their learning goals. Lecturers may need to mentor
and guide students more closely in navigating conflicting sources of information (1).
Nevertheless, higher education institutions face many unanswered questions about
adapting their curricula and extracurricular offers to equip future graduates with the
necessary skills to thrive in a future dominated by artificial intelligence (13). Educators
face the challenge of identifying the areas in which their students require new skills and competencies, and which existing skills may become obsolete. It is therefore not enough to think only about integrating AI into the current educational framework; universities must also proactively create an environment in which to explore how AI can enhance human intelligence (13).
Policy reactions from around the world
In addition to academic discussions, policymakers have been working on providing
guidance for dealing with GenAI. For instance, UNESCO's Guidance for Generative AI in
Education and Research (14), published in 2023, emphasises that education and
research practitioners need to use GenAI ethically in their practice. The report states that
GenAI could be useful in minimising the pressure of homework and exams rather than
exacerbating it. However, it also encourages education practitioners and learners alike to
engage critically with GenAI's contents and outputs, as the tool is unreliable and tends to
produce answers that conform to Global North cultural standards and underrepresent
voices from the Global South and Indigenous communities. The document also calls for
GenAI's use to be prevented ‘where it would deprive learners of opportunities to develop
cognitive abilities and social skills through observations of the real world, empirical
practices such as experiments, discussions with other humans, and independent logical
reasoning’. As for assessments, the guidance suggests that GenAI's impact is not simply
a matter of having concerns about learners cheating: the capabilities of GenAI tools
should prompt a ‘rethink [of] what exactly should be learned and to what ends, and how
learning is to be assessed and validated’. Finally, it calls for education practitioners to
have access to well-structured programmes on using GenAI in education (to date, only
Singapore has such a programme (14, p.26)). The Council of Europe (15) mentions
UNESCO's Guidance as a regulatory framework to build upon for its upcoming legally
binding instrument to ensure a human rights-based approach to using AI in education.
In the UK, a Department for Education (DfE) policy paper (16) highlights the potential of
GenAI tools to reduce workloads across the education sector and free up teachers' time,
‘allowing them to deliver excellent teaching’. However, like UNESCO, the DfE also points
out the unreliability, inaccuracies, biases, and copyright and user privacy issues
associated with these tools. The DfE’s document states that, while GenAI tools can make
certain tasks quicker, accessing the tools does not replace the deep subject knowledge
and judgment of a human expert and that it is ‘more important than ever that [the]
education system ensures pupils acquire knowledge, expertise and intellectual
capability’. The document argues that one can only make the most of GenAI tools when
they already possess a solid knowledge base. For example, being proficient in clear
writing and having a good grasp of the subject being addressed are necessary for creating
effective prompts. Additionally, one can only assess the accuracy of the tool's results if
they have a framework for comparison. The DfE concludes that, while the education
sector should certainly make the most of the opportunities offered by the tools, it should
do so through safe and effective use of the tools to continue delivering an excellent
education that prepares pupils to contribute to society and the workplace. The
observations in our study align with the points raised in the UNESCO and DfE documents.
We agree that the impact of students using GenAI tools must be considered for more
effective teaching instruction.
Preliminary Findings from the GENIAL Project
The objective of our research study – GENIAL (Generative AI Tools as a Catalyst for Learning; see https://lse-dsi.github.io/genial) – was to explore how university students in full-time undergraduate and postgraduate courses use GenAI tools in their learning and assessment. The project
launched as a smaller focus group initiative in June 2023 to evaluate the efficacy of code
generation tools. Over the 2023-2024 academic year, as interest grew in the field, the
original initiative evolved into a multidisciplinary research project, investigating the
learning behaviours of around 220 students in four undergraduate and three postgraduate
courses, including quantitative and qualitative subjects. The courses ran in the autumn
and spring terms of 2023-2024 in the LSE Departments of Statistics, Data Science,
Management, and Public Policy. The study's preliminary findings are based on the analysis of student questionnaires, focus group sessions, observational experiments, and chat logs that students created specifically for their courses. Participating students were asked to keep a dedicated chat log for all their course-related GenAI conversations and to share these logs, along with brief reflections on their learning, with the research team through weekly surveys.
In the autumn term of 2023/24 (September to December 2023), the three participating undergraduate courses allocated in-class time for students to work independently on challenging tasks using ChatGPT as an aid, while limiting free web browsing and peer interactions. In the winter term of 2023/24 (January to March 2024), we expanded the number of participating courses from three to six to include a range of qualitative disciplines and to gather more data about students' use of GenAI tools, including their usage outside the classroom and for assessments. Table 1 shows the list of courses
participating in the study each term. We specifically aimed to explore differences in
learning approaches and student perceptions of the usefulness of GenAI tools in different subject areas, as well as at undergraduate and postgraduate levels.
Table 1: Participating courses in GENIAL, by case study and term

Undergraduate courses
  Autumn Term (2023): DS105 – Data for Data Science; DS202 – Data Science for Social Scientists; ST207 – Databases
  Winter Term (2024): DS105 – Data for Data Science; DS202 – Data Science for Social Scientists; MG317 – Leading Organisational Change

Postgraduate courses
  Autumn Term (2023): –
  Winter Term (2024): ST456 – Deep Learning; PP422 – Data Science for Public Policy; MG4B7 – Leading Organisational Change
Figure 1: GenAI tools known to students in the GENIAL study from a)
undergraduate and b) postgraduate courses.
We used various data collection methods to gather reliable and high-quality data. During
the first term, we ran a survey at the end of dedicated in-class activities where students
were asked to work independently and use the chatbots as an aid. In the second term,
we expanded our data collection efforts. We conducted surveys and focus groups, and every week we asked participants to share chat logs related to their learning and
participation in the course, both in and out of the classroom. Furthermore, we obtained
students' assignment submissions and chat logs. While a detailed report on our findings
is forthcoming, we share some of the preliminary insights of this study that have
significantly influenced our thinking and shaped our policy recommendations regarding
the wider issue of the use of AI in Higher Education.
Mixed perceptions of the use of Generative AI tools by students
At the start of the term, we asked students to list the GenAI tools they knew or had used.
OpenAI’s ChatGPT was recognised and used by almost all undergraduates (Figure 1a)
and postgraduates (Figure 1b). Grammarly ranked second, followed by Microsoft’s BingAI
(now Microsoft Copilot) and OpenAI's image generator, DALL·E. A consistent proportion of students across all courses (~80%) reported using these tools for learning, with most stating that GenAI tools made learning easier for them.
However, not all students found GenAI tools beneficial in the classes observed during the autumn term; their perceptions of using GenAI for learning exercises were mixed. After each class with a GENIAL activity, participants were asked to rate the
usefulness of the GenAI chatbot's assistance on their designated tasks. In Figure 2, we
show that the responses of students from one of the participating courses — DS202A
(Data Science for Social Scientists) — produced two main rating modes: one around 3-4
and another around 6-7.
Figure 2: Distribution of all 80 responses from the 29 participating DS202A
students across 8 weeks of the Autumn Term 2023/24 regarding their perceived
helpfulness of GenAI tools.
This mixed perception of the usefulness of GenAI tools can be partly attributed to the
limited time available for in-class activities. Typically, only the final 30 minutes of a 90-
minute class were reserved for independent GenAI-assisted activity work, which limited
the opportunity for deep thinking and experimentation. The design of the classes
sometimes also required students to apply concepts and skills they had just been
exposed to for the first time without the proper time for guided, supervised practice.
However, the mixed response can also be associated with the students' different levels
of comprehension of the new material. Our preliminary analysis suggests that students
benefit most from using GenAI tools when they clearly understand a task's purpose and
have already grasped the basic underlying concepts needed to complete it. This view was
also expressed by a DS105A student in our end-of-term study:
‘As long as you understand what ChatGPT is doing, then it is incredibly useful
to use it as it does all the 'meaningless' work for you. You get the code and
then correct it, which is only possible if you understand the problem.’
During the winter term of 2023/24, when we also collected data about students' usage of GenAI tools
outside of the classroom, we observed that most students used generative AI tools for
task completion and productivity gains. There were few instances of GenAI usage for
exploring or gaining deeper insight into the subject matter itself. Common uses included
summarising required readings to save time and troubleshooting coding errors in
programming-heavy courses. This usage appears to be, more frequently, a coping
mechanism for the pressure of deadlines and limited time for assignments rather than a
desire to learn more about a topic or skill. A DS202W student explained in one of the
focus groups that they would first use ChatGPT to create what seemed like a valid
submission (in this case, programming code) and then worry about understanding what
the code does later:
‘It was like, I solved it first, I got the stress out of the way, and now I can take
my time to learn and understand. Without worrying about, like, “Oh, I have to
submit this assignment, and it’s not working”’
In line with the above, we saw students asking AI tools to explain how a management framework applies to a business case study and then relying on the AI-produced summary when writing their reports, apparently without properly checking it against the course material. We also observed students submitting AI-generated code in their assessments where the code runs and produces something, but the output is unrelated to the task's objectives (a stylised illustration of this failure mode is given at the end of this section). The convincing tone of the GenAI chatbots' responses seems to give students the illusion that the output is always factual and true. We found that students who relied too heavily on chatbots at the beginning of the term produced lower-quality work than they would likely have produced without them. In the next section, we discuss why doing well in assessments or getting good grades does not equal learning, and how we may help students avoid inadvertently bypassing the cognitive processes associated with learning.
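The stylised illustration referred to above is a hypothetical Python sketch; the task, data, and code are invented and are not taken from actual student submissions. The point is that the generated code runs without error, which is easily mistaken for evidence that it answers the question:

    import pandas as pd

    # Invented toy data standing in for a coursework dataset
    df = pd.DataFrame({
        "course": ["DS105", "DS105", "DS202", "DS202"],
        "study_hours": [5, 9, 4, 8],
        "exam_score": [62, 78, 55, 74],
    })

    # Suppose the task asks for the correlation between study hours and exam
    # scores within each course. A pasted-in, AI-generated attempt might
    # instead print the overall mean of every numeric column: it runs and
    # produces output, but the output is unrelated to the task's objective.
    print(df.mean(numeric_only=True))

    # One way of answering what the task actually asked for:
    for course, group in df.groupby("course"):
        print(course, group["study_hours"].corr(group["exam_score"]))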
Discussion
Mindless use of Generative AI tools for assignments hinders the learning
process
Assessment plays a central role in higher education, involving various stakeholders such
as governments, employers, funders, professional bodies, and parents. This creates a
high-pressure environment for students and teachers, significantly impacting students'
study priorities (17). Due to the high stakes, it is unsurprising that this process sometimes
forces students to prioritise creating a ‘final product’ that passes as a good demonstration
of learning rather than engaging deeply with the desired learning outcomes set out by
course leaders. Because GenAI tools can mimic sophisticated language, creating the
illusion of expertise, they can worsen the gap between assessment and learning when
used mindlessly and detached from the learning process.
Students are aware of this conflict and the performative role that GenAI can play when
used for assignments. During a DS202W GENIAL focus group session, a student
explained the reasoning for using ChatGPT and Gemini when working on an assignment
as follows:
‘There is like the “dual-purpose”, so one of them is obviously to get high
grades in the assignment, and the other one is to learn what's happening.’
Students' use of chatbots and GenAI-powered autocomplete tools for assignments should
not be reduced to cheating or laziness. Educators may find their students are genuinely
interested in using them for learning. If we approach this new technology openly and in dialogue with our students, ensuring that assessments are constructively aligned with the learning activities in our courses, we can use it as a teaching opportunity. If used correctly, AI chatbots can boost students' interest and help
them perceive more learning value from the activities and exercises they do as part of a
course (18,19).
It is true, however, that when turning to a GenAI tool for learning support, students may
be persuaded by the chatbot's authoritative tone into believing or ‘learning’ things that are
just outright wrong (20,21). This is a very important and valid concern, as these systems
do not have reliable truth, knowledge or fact-checking mechanisms. When Google incorporated AI Overviews into its Search product, Internet users quickly found that Google Search told them to add 'non-toxic glue' to pizza to make the cheese stickier, and that it was okay, even recommended, for humans to eat a rock once a day (22). Using common sense, it is easy to recognise these responses as illogical, and
it is improbable that they would be directly copied into a more formal document, like an
essay. However, validating GenAI-generated content about a completely unknown
subject is much more challenging. It is easier to be misled when we lack the minimal,
foundational knowledge to validate what we are reading.
For the reasons mentioned above, we argue that the biggest risk of the uncritical use of
GenAI tools is that students inadvertently bypass learning rather than enhance it.
Considering how AI chatbots can significantly impact students' learning outcomes, it is
important for educators to critically reflect on and reevaluate their current curriculum and
assessment design practices (23). These observations are in line with both the UK DfE's
position that students need prior knowledge to use GenAI tools effectively and UNESCO's
suggestion that learners and educators should critically engage with the tools and rethink
what needs to be learned and how learning is assessed.
Generative AI exacerbates pre-existing constraints of higher education
The mindless use of GenAI tools is also symptomatic of underlying, pre-existing issues
within the learning ecosystem of higher education, which are only exacerbated by the
ease of use of such tools to create the content on which students are assessed. This
aligns with the findings of Abbas et al. 2024 (24), who argue that time pressures and
workload encourage students to use GenAI tools for assessments. They also show that
excessive use of ChatGPT may negatively affect students' academic performance and
memory, which we also found in the GENIAL study when marking students’ formative
essays. Consequently, although students do not need prior practical and theoretical
expertise to use GenAI tools, they may still lack a complete understanding of the potential
of AI tools or the ability to use them to enhance their learning processes successfully (25).
Although students held varied opinions regarding the advantages of GenAI during the autumn term, more than 80% of students across all classes in the winter term acknowledged using GenAI for learning. Contrary to our prior assumptions, students in management courses found AI tools somewhat less beneficial than those in quantitative and data science courses.
This was somewhat unexpected, considering that AI tools are commonly seen as a possible threat to the future of essays (7), but not altogether surprising given that essays produced by GenAI chatbots often tend to be generic and unoriginal (26,27). Our
research indicates that students are more inclined to use AI tools when they find the
volume of readings or the complexity of the materials challenging. Conversely, they are
less likely to depend on AI tools when the pace of delivery and the subject matter are
easier for them to follow. As one of the MG317 students stated in one of the weekly
surveys:
‘This week’s content was pretty straightforward, and I haven’t found myself
using AI.’
Given that existing evaluation methods such as open-book exams, problem-solving
questions, critical thinking assignments, case studies, and creative writing tasks are not
‘adequate to confirm students’ learning and performance in the absence of any tool
capable of validating the authorship of the work’ (28), and students are not necessarily
able to judge accurately whether their use of GenAI tools will lead to the positive outcomes they expect or hope for (29), educators need to rethink how they deliver and evaluate
learning (27). This was also underlined by a DS202W participant in one of our focus
groups, who stated:
‘If your question cannot differentiate between a student who actually
understands the content and an AI, that means your question is not good
enough.’
This discussion highlights the importance of educating students on the impacts of GenAI
use and the urgent need for higher education institutions to reform how they evaluate and
measure learning through curriculum design and assessment.
Policy recommendations
This section proposes a few practical recommendations for higher education
professionals and regulators to constructively incorporate GenAI tools into their teaching,
learning, and assessment practices. Based on our initial impressions from running the
GENIAL study, these suggestions are designed to assist educators in maximising the
benefits of GenAI tools while adjusting their teaching and assessment approaches to
minimise any negative impact on their students' learning processes. Higher education
leaders can also use them to understand how to best support faculty and staff in
implementing these practices.
Assessment Design
• Separate the learning process from the assessed ‘product’. Design
assessments with some continuous elements before submission, requiring
documentation of the development of the final output. Educators can require
students to submit their assessments in parts or create in-term submission points
in the form of short live pitches, presentations or online video updates. These
preliminary submissions do not need to be formally marked to be effective, but feedback should be provided on students' adherence to the intended learning path.
• Map out the process your students will follow as they work on their
assignments. Once a decision has been made about the form of the final output (essay, exam, coding project, presentation), write down the steps a student who has
engaged deeply with the course material and mastered the knowledge is expected
to follow to produce the output.
Take, for example, essay assignments. It is reasonable to expect that students 1)
identify the key literature from the course reading list, 2) use an appropriate search
engine, using appropriate keywords, and 3) identify a related bibliography. Then,
we expect the student to 4) read selected references, 5) judge their validity, 6)
summarise the key arguments, and 7) establish connections across all readings.
Explicitly writing those down will help devise strategies for effectively assessing
the learning process, as described in the previous recommendation.
Ideally, these processes should be mapped in a visual format to facilitate drawing
the loops that normally arise (e.g., refining the keywords used for literature search
after reading selected references). The map could be shared with students for maximal transparency, although this is neither necessary nor always wise.
• Delay the adoption of GenAI tools when introducing a new topic or skill. Employ practice activities targeting the understanding of key concepts immediately after they have been introduced. Then add formative practice assignments in which the explicit encouragement to use GenAI tools grows as the complexity and scope of the exercises incrementally increase.
For example, in courses with a programming component, if students are told about the expected level of engagement with GenAI tools as they progress, it becomes more likely that the tools are used as assistants to automate skills already mastered in earlier, simpler exercises, mitigating the risk of bypassing the learning process (a minimal sketch of such a staged plan follows at the end of this list). It is also important to ask for code explanations in order to identify the cognitive process used when working on the assessments.
When advising students on the appropriate use of GenAI tools at different levels of difficulty, it is important to explain the reasoning behind recommending the most suitable use for a particular task. It is even better to discuss and develop these recommendations together with the students.
• Encourage students to track and share their use of GenAI tools to support their individual learning journey. Techniques akin to those employed in language instruction, where students' knowledge and comprehension are assessed at the start of the semester, could be adapted in other fields to establish students' baseline understanding of the subject matter and to evaluate their overall progress by the end of the term, not only against the marking criteria but also against that initial understanding. Integrate in-class comprehension tests at the beginning, middle, and end of the term to measure individual progress, and use them at the end of the term to benchmark students' final grades against their learning journeys. The tracking could also be done through analytics feedback to help with student engagement (30).
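The sketch below illustrates the staged-engagement idea from the 'Delay the adoption' recommendation above. It is a minimal, hypothetical Python example (the stages, skills, and guidance text are invented rather than taken from the GENIAL materials); the point is simply that the expected level of GenAI engagement can be made explicit for each stage of a formative exercise sequence.

    from dataclasses import dataclass

    @dataclass
    class Exercise:
        title: str
        skills_practised: list[str]
        genai_guidance: str  # the level of GenAI use encouraged at this stage

    # Hypothetical staged plan: encouragement to use GenAI grows as the
    # exercises increase in complexity and earlier skills have been mastered.
    WEEKLY_PLAN = [
        Exercise("Read a CSV file and inspect its columns",
                 ["file I/O", "dataframes"],
                 "No GenAI: practise the core skill unaided first."),
        Exercise("Clean and reshape the dataset",
                 ["filtering", "grouping"],
                 "Use GenAI for error messages only; write the logic yourself."),
        Exercise("Build and compare two models",
                 ["modelling", "evaluation"],
                 "GenAI encouraged as an assistant; submit a short explanation "
                 "of what any generated code does and why you kept or changed it."),
    ]

    for week, exercise in enumerate(WEEKLY_PLAN, start=1):
        print(f"Stage {week}: {exercise.title}")
        print(f"  Guidance: {exercise.genai_guidance}")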
Curriculum Planning
• Assume that students are using GenAI tools, even if you do not use them in your course.
• Teach criticism, complexity and productive failures. Teach students the importance of locating primary sources of information and of being critical of GenAI-produced outputs. In coding, aim to teach high-level engineering concepts by critiquing the inconsistencies of functions produced by a model, or by re-prompting a system to produce cleaner and more consistent results (see the first sketch after this list). Similarly, while students can achieve significant results with GenAI, for instance in coding with little skill, preparing them to move beyond simplistic GenAI-favoured solutions is critical. Finally, we should remind students that learning proceeds through engagement and productive failures, whereas productivity goals that can be quickly but mindlessly achieved through GenAI solutions can instead hamper that process.
• Do not rely on temporary GenAI faults. Do not try to outwit the models, or underestimate their problem-solving abilities, by providing partial or confusing problem specifications. The models are constantly evolving and will be able to offer students alternative ways of solving the problem. In coding-based courses, invest in problems that can be solved in parts of increasing complexity, and encourage students to engage with code analysis, debugging, and refactoring supported by GenAI tools (see the second sketch after this list).
• Increase teachers’ literacy in GenAI tools. As students increase their engagement with such tools, teachers need to develop the necessary literacy to guide students in their usage and understanding of AI. As highlighted in UNESCO’s ‘Draft AI competency frameworks for teachers and for school students’ (31), alongside AI Foundations and Applications, the themes which should be taught are Ethics for AI, AI Pedagogy, AI for Professional Development and a Human-centred Mindset. This is supported by the study by Cukurova et al. (2023), which highlights how technical knowledge needs to be complemented by adequate technical support and by plans to minimise workload, address ethical issues, and increase teachers’ trust; without these, teachers’ engagement is likely to be undermined.
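The first sketch referred to in the list above illustrates the critique-and-refactor exercise. It is a hypothetical Python example (the functions are invented, not drawn from any course material): students receive a plausible-looking, GenAI-style function, critique its inconsistencies, and then refactor it (or re-prompt the model) to obtain a cleaner version.

    # A function in the style a chatbot might produce: it "works" on the happy
    # path but fails silently on bad input and mutates the caller's data.
    def process(data):
        out = []
        for item in data:
            try:
                out.append(float(item) * 2)
            except ValueError:
                pass  # invalid entries are silently dropped -- is that intended?
        data.clear()  # surprising side effect on the caller's list
        return out

    # A cleaner refactor students might converge on after critique or
    # re-prompting: a descriptive name, no side effects, and invalid input
    # raising an error instead of disappearing.
    def double_values(values):
        """Return each value parsed as a float and doubled.

        Raises ValueError if any entry cannot be parsed, rather than
        silently discarding it.
        """
        return [float(v) * 2 for v in values]

    print(double_values(["1.5", "2"]))  # [3.0, 4.0]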
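The second sketch addresses the 'parts of increasing complexity' recommendation. Again, this is a hypothetical example invented for illustration: students are given a deliberately flawed first attempt, debug it with or without GenAI support, and then extend it.

    # Part 1 (given, deliberately buggy): count the words in a sentence.
    def word_count(text):
        return len(text.split(","))  # bug: splits on commas, not on whitespace

    # Part 2: students find and fix the bug (with or without a chatbot) and
    # explain why the buggy version still passes some test cases.
    def word_count_fixed(text):
        return len(text.split())

    assert word_count("one,two,three") == 3       # passes, but for the wrong reason
    assert word_count_fixed("one two three") == 3

    # Part 3 (extension): handle punctuation, then compare the refactored
    # solution with a GenAI-suggested one and critique the differences.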
Conclusions
Our initial findings from the GENIAL project validate observations about students' use of GenAI tools that academics and policymakers have also highlighted, and suggest that the mindless use of GenAI tools carries real risks. While these systems can enhance the teaching and learning experience by unlocking new ways to exercise new skills and knowledge, they pose a pedagogical risk. Under the pressure of performing well on assessments and of tight submission deadlines, students might rely on GenAI tools in ways that disregard the intended learning outcomes of structured teaching, inadvertently bypassing the learning process. This mindless use of AI distracts from the real purpose of courses and further widens the gap between assessment and learning.
Educators must recognise that GenAI will impact their teaching practices, even if they do
not incorporate these tools in their courses, and decision-makers within academic
departments and universities need to transition from complete bans on student use of AI
to active engagement. By proactively guiding students to use GenAI tools safely and appropriately as part of their learning process, we can counteract the potential pedagogical distraction these systems pose. Furthermore, GenAI tools are
something students will encounter in settings beyond the university, such as the
workplace. By teaching them how to use these tools responsibly, critically, and safely, we
can prepare them to contribute positively to the workplace and society.
Acknowledgements
The authors acknowledge funding support from the LSE Eden Centre for Education
Enhancement and the LSE Data Science Institute. We are grateful to Casey Kearney,
Leonard Hinckeldey, Maxwell Filip-Tuner, Michael Wiemers, and Isabelle Simonet for
their participation in the discussions on the GENIAL project. We thank Yang Yang,
Jenni Carr, Marina Franchi, and Mark Baltovic for their insightful perspectives on
pedagogy, assessment design and student learning. A special thanks goes out to all the
students who generously shared their data with us.
Author Contributions
Dorottya Sallai: Conceptualisation, Investigation, Writing – original draft, Writing – review &
editing. Jonathan Cardoso-Silva: Conceptualisation, Data curation, Funding Acquisition,
Investigation, Project administration, Writing – original draft, Writing – review & editing. Marcos
E. Barreto: Conceptualisation, Funding Acquisition, Investigation, Project administration, Writing
– original draft, Writing – review & editing. Francesca Panero: Investigation, Writing – original
draft, Writing – review & editing. Ghita Berrada: Writing – original draft, Writing – review & editing.
Sara Luxmoore: Data Curation, Writing – original draft.
Bibliography
1. Krammer SM. Is there a glitch in the matrix? Artificial intelligence and management
education. Management Learning. 2023 Dec 15;13505076231217667.
2. Maslej N, Fattorini L, Perrault R, Parli V, Reuel A, Brynjolfsson E, et al. The AI Index
2024 Annual Report [Internet]. Stanford, CA: AI Index Steering Committee, Institute
for Human-Centered AI, Stanford University; 2024 Apr [cited 2024 Jun 4] p. 1–502.
Available from: https://aiindex.stanford.edu/report/
3. Strzelecki A. Is ChatGPT-like technology going to replace commercial search
engines? LHTN [Internet]. 2024 Apr 4 [cited 2024 Jun 5]; Available from:
https://www.emerald.com/insight/content/doi/10.1108/LHTN-02-2024-0026/full/html
4. Koivisto M. Tutoring Postgraduate Students with an AI-Based Chatbot. Int J Adv Corp
Learn. 2023 Mar 13;16(1):41–54.
5. Chan CKY. A comprehensive AI policy education framework for university teaching
and learning. Int J Educ Technol High Educ. 2023 Jul 7;20(1):38.
6. Barros A, Prasad A, Śliwa M. Generative artificial intelligence and academia:
Implication for research, teaching and service. Management Learning. 2023
Nov;54(5):597–604.
7. Marche S. The college essay is dead. Nobody is prepared for how AI will transform
academia. The Atlantic [Internet]. 2022 Dec 6 [cited 2024 Jun 4]; Available from:
https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-
student-essays/672371
8. Gaumann N, Veale M. AI Providers as Criminal Essay Mills? Large Language Models
meet Contract Cheating Law [Internet]. 2023 [cited 2024 Jun 5]. Available from:
https://osf.io/cpbfd
9. Lindebaum D, Ramirez MF. “Negative” resource review: On the essay-writing
algorithm Essay Genius at https://essaygenius.ai/. AMLE. 2023 Mar
20;amle.2022.0474.
10. Medway D, Roper S, Gillooly L. Contract cheating in UK higher education: A covert
investigation of essay mills. British Educational Res J. 2018 Jun;44(3):393–418.
11. Wang C. Exploring Students’ Generative AI-Assisted Writing Processes: Perceptions
and Experiences from Native and Nonnative English Speakers. Tech Know Learn
[Internet]. 2024 May 30 [cited 2024 Jun 5]; Available from:
https://link.springer.com/10.1007/s10758-024-09744-3
12. Mizumoto A, Eguchi M. Exploring the potential of using an AI language model for
automated essay scoring. Research Methods in Applied Linguistics. 2023
Aug;2(2):100050.
13. Gimpel H, Gutheil N, Mayer V, Bandtel M, Büttgen M, Decker S, et al. (Generative) AI
Competencies for Future-Proof Graduates: Inspiration for Higher Education
Institutions [Internet]. University of Hohenheim; 2024 Feb [cited 2024 Jun 5]. Available
from: https://zenodo.org/doi/10.5281/zenodo.10680210
14. Guidance for generative AI in education and research [Internet]. UNESCO; 2023
[cited 2024 May 13]. 48 p. Available from:
https://unesdoc.unesco.org/ark:/48223/pf0000386693?locale=en
15. Regulating Artificial Intelligence in Education [Internet]. Council of Europe Standing Conference of Ministers of Education; 2023 [cited 2024 Jun 9]. Available from: https://rm.coe.int/regulating-artificial-intelligence-in-education-26th-session-council-o/1680ac9b7c
16. UK Department for Education. Generative artificial intelligence in education [Internet].
UK Department for Education; 2023 [cited 2024 Apr 22]. Available from:
https://www.gov.uk/government/publications/generative-artificial-intelligence-in-
education
17. McConlogue T. Assessment and Feedback in Higher Education: A Guide for
Teachers [Internet]. 1st ed. London, UK: UCL Press; 2020 [cited 2024 Apr 29]. 180 p.
Available from: https://discovery.ucl.ac.uk/id/eprint/10096352/
18. Lee YF, Hwang GJ, Chen PY. Impacts of an AI-based chabot on college students’
after-class review, academic performance, self-efficacy, learning attitude, and
motivation. Education Tech Research Dev. 2022 Oct;70(5):1843–65.
19. Fidan M, Gencel N. Supporting the Instructional Videos With Chatbot and Peer
Feedback Mechanisms in Online Learning: The Effects on Learning Performance and
Intrinsic Motivation. Journal of Educational Computing Research. 2022
Dec;60(7):1716–41.
20. Zhou K, Jurafsky D, Hashimoto T. Navigating the Grey Area: How Expressions of
Uncertainty and Overconfidence Affect Language Models [Internet]. arXiv e-prints.
2023 [cited 2024 Jun 9]. Available from:
https://ui.adsabs.harvard.edu/abs/2023arXiv230213439Z
21. Zhou K, Hwang JD, Ren X, Sap M. Relying on the Unreliable: The Impact of Language
Models’ Reluctance to Express Uncertainty [Internet]. arXiv; 2024 [cited 2024 Jun 9].
Available from: http://arxiv.org/abs/2401.06730
22. McMahon L, Kleinman Z. Glue pizza and eat rocks: Google AI search errors go viral.
BBC News [Internet]. 2024 May 24 [cited 2024 Jun 8]; Available from:
https://www.bbc.co.uk/news/articles/cd11gzejgz4o
23. Manolchev C, Nolan R, Hodgson E. Unlikely allies: ChatGPT and higher education
assessment. JLDHE [Internet]. 2024 Mar 27 [cited 2024 Jun 8];(30). Available from:
https://journal.aldinhe.ac.uk/index.php/jldhe/article/view/1136
24. Abbas M, Jam FA, Khan TI. Is it harmful or helpful? Examining the causes and
consequences of generative AI usage among university students. Int J Educ Technol
High Educ. 2024 Feb 16;21(1):10.
25. Delcker J, Heil J, Ifenthaler D, Seufert S, Spirgi L. First-year students AI-competence
as a predictor for intended and de facto use of AI-tools for supporting learning
processes in higher education. Int J Educ Technol High Educ. 2024 Mar 18;21(1):18.
26. Rudolph J, Tan S, Tan S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? JALT [Internet]. 2023 Jan 25 [cited 2024 Jun 8];6(1). Available from: https://journals.sfu.ca/jalt/index.php/jalt/article/view/689
27. Klyshbekova M, Abbott P. ChatGPT and Assessment in Higher Education: A Magic
Wand or a Disruptor? EJEL. 2024 Feb 9;00–00.
28. Chaudhry IS, Sarwary SAM, El Refae GA, Chabchoub H. Time to Revisit Existing
Student’s Performance Evaluation Approach in Higher Education Sector in a New Era
of ChatGPT — A Case Study. Cogent Education. 2023 Dec 31;10(1):2210461.
29. Graham SS. Post-Process but Not Post-Writing: Large Language Models and a Future for Composition Pedagogy. Composition Studies. 2023;51(1):162–8.
30. Suraworachet W, Zhou Q, Cukurova M. Impact of combining human and analytics
feedback on students’ engagement with, and performance in, reflective writing tasks.
Int J Educ Technol High Educ. 2023 Jan 3;20(1):1.
31. UNESCO. Draft AI competency frameworks for teachers and for school students
[Internet]. UNESCO; 2023. Available from:
https://www.unesco.org/sites/default/files/medias/fichiers/2023/11/UNESCO-Draft-
AI-competency-frameworks-for-teachers-and-school-students.pdf