Abstract

The emergence of OpenAI's ChatGPT has put an intense spotlight on generative AI (Gen-AI) systems and their possible impacts on academic integrity. This commentary concludes that although these technologies are capable of revolutionising academia, the way ChatGPT and other generative AI systems are used could undermine academic integrity. To ensure that the risks to academic integrity are mitigated while the benefits of these systems are maximised, multi-stakeholder efforts are required.
Journal of Responsible Technology 13 (2023) 100060
Available online 20 February 2023
2666-6596/© 2023 The Author(s). Published by Elsevier Ltd on behalf of ORBIT. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).
ChatGPT and the rise of generative AI: Threat to academic integrity?
Damian Okaibedi Eke
School of Computer Science and Informatics, De Montfort University, Leicester, United Kingdom
ARTICLE INFO
Keywords:
ChatGPT
Large language models
OpenAI
Academic integrity
Generative AI
1. Introduction
The emergence of OpenAI's ChatGPT¹ has put an intense spotlight on Generative AI (Gen-AI) systems and their possible impacts on academic integrity. Generative AI systems are designed to generate content or output (such as text, images, audio, simulations, video and code) from the data they are trained on. Although ChatGPT is neither the first Gen-AI system ever developed nor the first by OpenAI, it represents a breakthrough in generative AI technology. In many academic quarters, concerns about academic integrity have been raised (Stokel-Walker, 2022). This is fascinating considering that it is not the first AI-powered text generator. A number of AI text/content generators for diverse content are available, including but not limited to Rytr,² Jasper,³ CopyAI,⁴ Writesonic,⁵ Kafkai,⁶ Copysmith,⁷ Peppertype,⁸ Articoolo,⁹ Article Forge¹⁰ and Copymatic.¹¹ The question then is: what is different about ChatGPT that raises serious concerns?
For a clearer perspective, let us understand what ChatGPT is. ChatGPT is a large language model (LLM) that uses deep learning to generate human-like text in response to prompts. It was released on 30 November 2022 as OpenAI's latest iteration of its large language models capable of having 'intelligent' conversations. It is part of the Generative Pre-trained Transformer (GPT) family of models from the California-based company. Before it came GPT-1, launched in 2018 (Radford et al., 2018), GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020). In 2022, OpenAI released DALL·E 2, a Gen-AI system for generating images from text. However, ChatGPT is different from the previous models in many ways. Most importantly, it is different from GPT-3, which is designed to perform a wide range of natural language processing (NLP) tasks such as language translation, text summarisation, question answering, generation of creative writing (such as poetry or fiction) and generation of high-quality long- or short-form copy (such as blog posts). ChatGPT, on the other hand, is built from the GPT-3 language model and has unique use cases (such as generation of responses in dialogues/conversations, explanation of complex subjects, concepts or themes, and generation of new code or fixing of errors in existing code). Overall, ChatGPT has more use cases than GPT-3 and, as with many other technologies, it has logical malleability, which means that it can be fine-tuned for a variety of language tasks. ChatGPT's capabilities have been hailed as 'scary good' by proponents and described as "prolific and highly effective and still learning" (Gleason, 2022). Additionally, it is
freely available to all users, unlike many AI-powered content generators.
E-mail address: damian.eke@dmu.ac.uk.
https://doi.org/10.1016/j.jrt.2023.100060
¹ https://chat.openai.com/chat
² https://rytr.me/
³ https://www.jasper.ai/
⁴ https://www.copy.ai/?via=start
⁵ https://writesonic.com/
⁶ https://kafkai.com/en/
⁷ https://copysmith.ai/#a_aid=start
⁸ https://www.peppertype.ai/
⁹ http://articoolo.com/
¹⁰ https://www.articleforge.com/
¹¹ https://copymatic.ai/
The inherent capabilities of ChatGPT have been demonstrated in reports that it has successfully passed a law school exam (Choi et al., 2023) and a Master of Business Administration (MBA) exam (Terwiesch, 2023). A judge in Colombia has also admitted that a court decision¹² was informed by ChatGPT.
However, critics have pointed out that, as a large language model, ChatGPT is neither "particularly innovative" nor "revolutionary", because similar systems have been developed in the past. Others have observed that despite its fluent and persuasive texts, the system still "lacks the ability to truly understand the complexity of human language and conversation" (Bogost, 2022). To be fair to the developers, a number of limitations of the system are made clear to users: it is clearly stated that it can occasionally generate incorrect information, produce harmful instructions or biased content, and has limited knowledge because of the data it was trained on. Amidst the number of issues that ChatGPT raises, this commentary explores only whether it undermines academic integrity. It also provides recommendations on how academia can be proactive in responding to the challenges that ChatGPT and Gen-AI systems raise.
2. Threat to academic integrity?
So far, experiences of academics with ChatGPT is that it correctly
answers questions often asked undergraduates and postgraduate stu-
dents (Lock, 2022) including questions requiring coding skills (Scharth,
2022). The general fear is that students as well as researchers can start
outsourcing their writing to ChatGPT. If some early responses to uni-
versity level essay questions are anything to go by, professors and lec-
turers should be worried about the future of essays as a form of
assessment. According to Stokel-Walker (2022), some of the responses
‘‘are so lucid, well-researched and decently referenced’’. Although it has
its limitations and ethical shortcomings (Birhane & Raji, 2022) like so
many other language models (Eliot, 2022; Weidinger et al., 2021), it is a
tool with broader implications for academic integrity.
According to the International Center for Academic Integrity (2021), academic integrity is defined as a commitment to six fundamental values: honesty, trust, fairness, respect, responsibility and courage. As such, when a person uses ChatGPT to generate essays or other forms of written text that are then passed off as original work, it violates the core principles of academic integrity. ChatGPT raises concerns similar to those of the well-documented commercial 'contract cheating' in higher education (Newton, 2018). The difference is that ChatGPT is free and easily accessible to all users. It also offers users the opportunity for interaction: users can tweak their queries to see how different the responses can be. This means that it is possible to generate several different texts/essays and pick the best of the lot. One academic was quoted in a recent Nature commentary (Stokel-Walker, 2022) as saying that at the moment, "it's looking a lot like the end of essays as an assignment for education". The concern in academia, however, is not limited to the tool's open and free availability; it is also rooted in the lack of tools to detect people using this viral chatbot. Turnitin, Unicheck, PlagScan, Noplag and other plagiarism-checker tools are often used to maintain academic integrity, but they were not designed to flag AI-generated text. This gap is, and should be, a source of concern that needs attention. It is also critical to reflect on whether using ChatGPT for academic papers or assignments can constitute plagiarism in the moral sense of "theft of intellectual property". Whose intellectual property is stolen when ChatGPT output is passed off as original work? Who is damaged by this act? While I agree that using ChatGPT without proper acknowledgement goes against the fundamental values of academic integrity, the plagiarism debate is a little more complex.
I also admit that the hype around ChatGPT and its real-life capabilities can either alarm or excite people in academia. The concern in academia goes beyond whether it is a bad or good technology. ChatGPT is the very definition of a disruptive technology. It is here, and it is about to disrupt both the ontology and epistemology of academia, science and teaching. That means that academia is about to reconsider what constitutes knowledge and how it can be acquired. The challenge then becomes: how can this technology be embraced and applied effectively, safely and responsibly? Whether ChatGPT is a morally neutral technology or an existential part of the normative moral order is not the focus of this commentary. This does not mean that ChatGPT does not raise ethical issues beyond academic integrity, or that those concerns do not matter. A number of ethical issues surrounding large language models have already been identified in the literature (Bender et al., 2021), and the emerging stories of the human cost of building ChatGPT raise great concern (Perrigo, 2023). However, these are not the focus of this essay. What is clear from what we know about this technology so far is that it could be used in ways that undermine academic integrity. The question then is: what can academia do about this?
3. What can academia do?
There are a number of things academia needs to do, including but not limited to considering the opportunities and challenges that ChatGPT and other LLMs present, and understanding ways of maximising these opportunities while mitigating the challenges to academic integrity.
3.1. Consider academic opportunities and challenges of ChatGPT
Academia needs to take ChatGPT seriously. By academia, I mean the ecosystem that facilitates the pursuit of research, teaching and scholarship in general. This includes academic and research institutions, academic publishers and funders. ChatGPT and other generative AI systems are revolutionary, and academia needs to be ready to be part of that revolution. It is not sustainable to ban, reject or dismiss the technology. It presents opportunities for teaching, research and innovation. Using ChatGPT can become an efficient and time-saving way of carrying out academic activities. From lesson-plan design, task creation and writing to the provision of inspiration and ideas, ChatGPT can help both teachers and students to improve teaching and learning experiences. It can also be used to improve research. For instance, it can be a tool for quick and easy generation of data for many types of research, and it can serve as an analysis tool as well as a writing assistant for research reports.
However, the responsible use of ChatGPT in academia faces significant challenges, particularly owing to potential misuses that constitute threats to academic integrity. First, its usage without appropriate acknowledgement is currently not reflected in the academic integrity policies or statements of academic institutions and publishers. This needs to change. In addition, many people in academia (researchers, teachers and students) still do not know how to optimally use the system, not to mention use it responsibly. There is a great need for education.
Second, a harmonised and responsible way of acknowledging the use of ChatGPT is yet to be established. A number of research papers have listed ChatGPT as an author (Stokel-Walker, 2023). However, both Nature (Nature, 2023) and Science (Thorp, 2023) have made their stance clear that no LLM can be accepted as a credited author in their journals. The current lack of guidance for users on how to acknowledge the use of ChatGPT raises many concerns.
Third, a tested, validated and accepted tool to identify dishonest use of AI text generators in academia is not yet available. That means it is still easy to pass off an output from ChatGPT as original work without detection. To address this challenge, OpenAI has developed a free tool (the AI Text Classifier¹³) trained to distinguish between AI-written and human-written texts. Unfortunately, this has been described as an 'imperfect tool' by OpenAI, which warned that it should not be used as a primary decision-making tool. How academic institutions and publishers will implement OpenAI's imperfect tool or develop better tools remains unclear.
¹² https://www.theguardian.com/technology/2023/feb/03/colombia-judge-chatgpt-ruling
Fourth, it is important to note that there is a greater concern for institutions in low- and middle-income countries, where Turnitin and other plagiarism tools are yet to be integrated as measures for academic integrity. Technical integration of these tools costs money that many of the institutions in these countries do not have. ChatGPT could thus exacerbate an already documented problem of cheating in these places (Farahat, 2022). It presents a global challenge that requires a solution that can work for everybody: a less expensive, safe, sustainable and responsible solution.
3.2. Consider actionable steps to achieve responsible use of ChatGPT and
other Gen-AI systems in academia
Responsible use of generative AI systems in academia entails the development of implementation approaches that maximise their capabilities while mitigating threats to academic integrity. I believe the first thing for academia to do is to identify the unacknowledged use of AI-generated texts as a form of academic dishonesty. While this is implied in current academic integrity policies, it needs to be made clearer to staff, students and, in the case of journals, potential authors exactly what values are violated when AI-generated texts are presented as original work. Furthermore, there are many ways ChatGPT and other AI text generators can be integrated into academic activities, from assessment and research to teaching, yet knowledge of these approaches remains very low in academia. Generative AI systems are changing the world students are being prepared for. It is therefore the responsibility of the same academic institutions to prepare students for a world that is effectively being revolutionised by LLMs.
Capacity development, for both staff and students, on the diverse use cases of ChatGPT is therefore necessary in relevant institutions. Staff who are expected to identify dishonest uses of the tool should, in particular, know how it works.
Furthermore, for universities to preserve current assessment methods based on written essays, there is a need to create a reliable tool that can detect AI-generated texts. However, designing such a tool and incorporating it into effective and reliable assessment approaches will require substantial funding and the support or buy-in of OpenAI and other creators of these language models. It may also take time to develop, and in the meantime AI-generated texts may already be part of academic assessments. On the other hand, this may be an opportunity to reconsider the future of essay writing as a form of assessment, as Donald (2018) has suggested. There are already calls to fundamentally change assessment methods: a shift from assessing finished essays to assessing the critical thinking involved in the process. Teaching students to become good essay writers is important, but is understanding the process not more important than the finished product? The integration of ChatGPT into the teaching of critical thinking and writing should be a viable consideration. Where essays are absolutely necessary, oral exams can be used more often as a supplement for better assessment.
This is not a problem limited to academic essays and students. There are also risks of this happening across wider academic life: journal and conference papers, reports, blogs, dissertations, books and other forms of academic writing. However, there is an argument to be made that the system could allow people to play to their strengths and increase the quality of their academic outputs. With all its limitations and imperfections, ChatGPT can become an effective learning companion. For instance, it can generate ideas and draft texts that can in turn be refined by users. It is not an authoritative academic voice, nor is it 100% accurate, but it can be a good academic assistant. Devising ways of referencing its use is therefore necessary to ensure its responsible application in academia. For users who want to maintain the tenets of academic integrity before technical tools for identifying cheating are developed, referencing ChatGPT could involve documenting the date of generation and the prompts used, and limiting direct quotation to one paragraph.
Additionally, the possibility of ChatGPT writing or correcting code calls for a reimagining of technical coding assessments. So far, it has proven capable of writing functioning code from custom prompts, which could help students answer basic data structures and algorithms questions. I therefore suggest that an oral interview should be not a supplementary but a major part of the assessment. This will give an opportunity to test the student's knowledge of the code and its functions.
4. Conclusion
I argue that the way ChatGPT and other AI-powered text generators are used could undermine academic integrity. They are also capable of revolutionising academia. It is the responsibility of all of us to ensure that the risks to academic integrity are mitigated while the benefits are maximised. This needs a multi-stakeholder effort, from technical developers and policy makers in academic institutions to publishers, professors, lecturers and students. Academic writing, essay assignments and technical coding assessments may not be dead, but it is time to consider critical changes to ensure sustainable integrity in academia.
In summary, academic institutions need to do a number of things:
• Embrace ChatGPT as an essential part of pedagogy and research.
• Establish ChatGPT training and capacity building for both staff and students to ensure optimal and responsible use. Providing the necessary support and resources to both staff and students can help to mitigate possible risks to academic integrity.
• Review their academic integrity policies and make the necessary changes to reflect current AI trends and possibilities.
• Work with relevant bodies (including but not limited to journal editors and publishers) to co-create effective ways of acknowledging the use of ChatGPT and other AI tools in academic texts.
• Work towards developing cost-effective and trusted tools for identifying possible dishonest use of AI tools in academia globally.
Finally, OpenAI and other large language model creators should be willing to work with academia to achieve responsible use of AI-powered text generators. OpenAI's move to develop the 'imperfect' classifier is a welcome development but not sufficient to address academic integrity concerns. The company's current engagement with educators in the US is also commendable. However, such engagement should be extended to stakeholders in academia in other parts of the world, particularly those from low- and middle-income countries. A multi-stakeholder endeavour is needed to co-create solutions to maintain academic integrity. This may include a redefinition of what constitutes academic achievement and impact, and novel ways of measuring them.
Declaration of Competing Interest
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
¹³ https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/
References
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 610–623). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Birhane, A., & Raji, I. D. (2022). ChatGPT, Galactica, and the progress trap. WIRED. https://www.wired.com/story/large-language-models-critique/ (accessed 20.12.22).
Bogost, I. (2022). ChatGPT is dumber than you think. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/ (accessed 6.2.23).
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. (2023). ChatGPT goes to law school. Available at SSRN.
Donald, A. (2018). Is the time of the assessed essay over? Teaching Perspectives, University of Sussex Business School. https://blogs.sussex.ac.uk/business-school-teaching/2018/11/14/is-the-time-of-the-assessed-essay-over/ (accessed 20.12.22).
Eliot, L. (2022). AI ethics and the future of where large language models are heading. Forbes. https://www.forbes.com/sites/lanceeliot/2022/08/30/ai-ethics-asking-aloud-whether-large-language-models-and-their-bossy-believers-are-taking-ai-down-a-dead-end-path/ (accessed 19.12.22).
Farahat, A. (2022). Elements of academic integrity in a cross-cultural Middle Eastern educational system: Saudi Arabia, Egypt, and Jordan case study. International Journal for Educational Integrity, 18, 1–18. https://doi.org/10.1007/s40979-021-00095-5
Gleason, N. (2022). ChatGPT and the rise of AI writers: How should higher education respond? Times Higher Education Campus. https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond (accessed 19.12.22).
International Center for Academic Integrity. (2021). The fundamental values of academic integrity (3rd ed.). https://academicintegrity.org/resources/fundamental-values (accessed 10.10.22).
Lock, S. (2022). What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans (accessed 20.12.22).
Nature. (2023). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613, 612. https://doi.org/10.1038/d41586-023-00191-1
Newton, P. M. (2018). How common is commercial contract cheating in higher education and is it increasing? A systematic review. Frontiers in Education, 3. https://doi.org/10.3389/feduc.2018.00067
Perrigo, B. (2023). Exclusive: The $2 per hour workers who made ChatGPT safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/ (accessed 6.2.23).
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1, 9.
Scharth, M. (2022). The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive. The Conversation. http://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908 (accessed 20.12.22).
Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays: Should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613, 620–621. https://doi.org/10.1038/d41586-023-00107-z
Terwiesch, C. (2023). Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the operations management course. Mack Institute for Innovation Management, The Wharton School, University of Pennsylvania.
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379, 313. https://doi.org/10.1126/science.adg7879
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., ... Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv:2112.04359.
D.O. Eke
... Fowler (2023) also argues that AI-generated content threatens conventional assessment methods, compelling institutions to either reframe standards or enhance AI detection mechanisms. Yet such interventions are not usually available in less affluent areas, widening current educational disparities (Eke, 2023). So, although AI has the potential to democratise education, unequal access and a lack of support for ethical use highlight the necessity for universal AI literacy initiatives and strong policy structures in schools worldwide. ...
... One of the major challenges is the fear of over-reliance on AI tools and thus undermining the development of essential research skills. Eke (2023) finds students are worried about losing their critical thinking ability or their capacity for independent writing, a concern echoed by Kumar et al. (2024), who warn that excessive use of AI can create a "false confidence" in students' abilities. There are also ethical issues of abuse; students are aware of the fine line between legitimate AI assistance and academic dishonesty, particularly in settings where institutional policies are poorly articulated (Stone, 2023). ...
... However, this model is wanting in accounting for the reservations of students such as Chika and Tunde, whose hesitation is more based on concerns regarding academic integrity, skill atrophy, and insufficient guidance-issues not fully covered by TAM. Eke's (2023) complaint of TAM's limited attention to ethical and contextual issues is especially apt here, since these students' reservations are based on more fundamental suspicions that transcend ease or perceived usefulness. ...
Article
Full-text available
This study examines the application of artificial intelligence (AI) writing tools in academic writing among Nigerian university students, highlighting both the potential benefits and significant challenges. With increasing numbers of people utilising AI-powered tools like ChatGPT, Grammarly, and Quillbot, Nigerian academics and students demonstrate varying degrees of exposure and expertise in using such tools. The research, employing a purposive sampling technique, conducted interviews with 20 Nigerian students to examine their awareness, usage patterns, and reasons for utilising AI in academic research. The respondents were duly informed of their rights within the research and the choice to use it without consequences. Results show that although AI tools facilitate tasks such as data analysis and writing support, concerns about academic integrity, over-dependence, and institutional support issues persist. Interestingly, students report a lack of guidelines and official training, which leads to the ad hoc and potentially unethical use of AI. The paper concludes that systematic AI literacy programs and ethical guidelines are necessary for the responsible and fair integration of AI into Nigerian higher education, thereby assisting students’ learning while promoting academic integrity.
... This technological advancement presents new opportunities for improving the accuracy of medical knowledge, enhancing the efficiency of medical research, and accelerating the translation of scientific insights into clinical practice [4][5][6][7]. However, it simultaneously presents several challenges, including potential academic misconduct facilitated by GAI tools, as well as ethical and regulatory concerns surrounding their use [8]. ...
Article
Full-text available
Objective To assess the knowledge, attitudes, and practices (KAP) of medical stakeholders regarding the use of generative artificial intelligence (GAI) tools. Methods A cross‐sectional survey was conducted among stakeholders in medicine. Participants included researchers, clinicians, and medical journal editors with varying degrees of familiarity with GAI tools. The survey questionnaire comprised 40 questions covering four main dimensions: basic information, knowledge, attitudes, and practices related to GAI tools. Descriptive analysis, Pearson's correlation, and multivariable regression were used to analyze the data. Results The overall awareness rate of GAI tools was 93.3%. Participants demonstrated moderate knowledge (mean score 17.71 ± 5.56), positive attitudes (mean score 73.32 ± 15.83), and reasonable practices (mean score 40.70 ± 12.86). Factors influencing knowledge included education level, geographic region, and attitudes (p < 0.05). Attitudes were influenced by work experience and knowledge (p < 0.05), while practices were driven by both knowledge and attitudes (p < 0.001). Participants from outside China scored higher in all dimensions compared to those from China (p < 0.001). Additionally, 74.0% of participants emphasized the importance of reporting GAI usage in research, and 73.9% advocated for naming the specific tool used. Conclusion The findings highlight a growing awareness and generally positive attitude toward GAI tools among medical stakeholders, alongside the recognition of their ethical implications and the necessity for standardized reporting practices. Targeted training and the development of clear reporting guidelines are recommended to enhance the effective use of GAI tools in medical research and practice.
... However, the inclusion of ChatGPT within academia depicts a multifaceted landscape replete with both opportunities and challenges (Lund and Wang, 2023;Rahman et al., 2023). As this technology pervades educational settings, the central concern of reinforcing academic integrity, rigor and ethical conduct becomes a critical focus (Eke, 2023;Bin-Nashwan et al., 2023a). The adoption of AI tools like ChatGPT in research, scholarly pursuits and academic assistance introduces a complex terrain where the boundaries between original thought, collaboration and automated assistance blur (Cotton et al., 2024;Shiri, 2023;Lo, 2023;Alser and Waisberg, 2023). ...
Article
Purpose The research aims to unravel the dynamics of academic integrity in the ChatGPT era by analyzing critical predictors such as personal best goals (PBG), academic competence and workplace stress. Furthermore, it examines how ChatGPT adoption acts as a moderating factor, potentially influencing the relationship between these predictors and academic integrity, offering a nuanced understanding of its impact in academic settings. Design/methodology/approach Relying on the social cognitive theory, the authors adopt a quantitative approach through an online survey to explore the impact of key variables – PBG, academic competence and workplace stress, as well as ChatGPT adoption on integrity – by analyzing data from responses collected through Academic Social Networking Sites. Findings The study found that PBG related positively to academic integrity among academic staff. However, workplace stress had a negative impact on academic integrity, while academic competence failed to report any effect. With the integration of ChatGPT adoption as a moderator into the model, the authors found the association between PBG and academic integrity altered to be negative. The ChatGPT adoption-moderated interactions of academic competence and workplace stress on academic integrity were significant. Practical implications The findings suggest that institutions should provide training on the ethical and effective use of artificial intelligence (AI) tools like ChatGPT to ensure they support rather than hinder academic integrity. Additionally, organizations should focus on stress management initiatives and fostering a balanced approach to personal goal setting to mitigate the potential negative impacts of ChatGPT adoption on ethical behavior. Originality/value This attempt is both timely and pioneering, addressing a clear gap in the current body of literature. 
As limited studies have examined the role of ChatGPT in academic settings, this work stands out as one of the earliest to investigate how educators and researchers incorporate OpenAI tools into their professional practices.
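The moderation effect reported in the findings above, where ChatGPT adoption flips the sign of the PBG–integrity relationship, can be illustrated with a minimal interaction-term regression. This is a sketch on entirely simulated data: the variable names, coefficients, and sample size below are illustrative assumptions, not the study's actual model or survey responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardised predictor (personal best goals, PBG)
# and a 0/1 indicator of ChatGPT adoption.
pbg = rng.normal(size=n)
adopt = rng.integers(0, 2, size=n).astype(float)

# Simulated outcome: PBG supports integrity on its own (+0.5), but an
# assumed negative interaction (-0.8) reverses its effect for adopters.
integrity = 0.5 * pbg - 0.2 * adopt - 0.8 * pbg * adopt \
    + rng.normal(scale=0.3, size=n)

# Design matrix: intercept, main effects, and the PBG x adoption term.
X = np.column_stack([np.ones(n), pbg, adopt, pbg * adopt])
coef, *_ = np.linalg.lstsq(X, integrity, rcond=None)

# Simple slope of PBG for adopters = main effect + interaction.
slope_non_adopters = coef[1]
slope_adopters = coef[1] + coef[3]
print(f"PBG slope (non-adopters): {slope_non_adopters:.2f}")
print(f"PBG slope (adopters):     {slope_adopters:.2f}")
```

The sign change of the simple slope between the two groups is what a moderated regression of this kind detects.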
... Among these, in terms of the use of ChatGPT to support teachers, studies by Farrokhnia et al. (2024), Skrabut (2023), and Finley (2023) are particularly worth mentioning. While numerous studies have investigated the potential benefits of ChatGPT in education (Farrokhnia et al., 2024; Kasneci et al., 2023; Grassini, 2023; Xia et al., 2023; Rudolph et al., 2023a, b), many researchers have raised concerns regarding the ethical issues associated with its academic use (Grassini, 2023; Halaweh, 2023; Kasneci et al., 2023; Nguyen & Dieu, 2024; Rudolph et al., 2023a, b; Sok & Heng, 2023; Tlili et al., 2023) and hence have emphasized the need for developing a clear academic guideline to ensure proper and responsible utilization of the tool (Eke, 2023). ...
Article
Full-text available
ChatGPT, developed by OpenAI, has recently become the biggest buzzword in academia. Due to its ability to provide instant language support and generate diverse educational resources, it has emerged as a powerful tool in ELT (English language teaching). This study aims to explore ELT teachers’ perception and usage of ChatGPT as a teaching tool in the Bangladeshi EFL context. To do this, a concurrent mixed-method research design was employed using interviews and a survey questionnaire. 54 ELT teachers completed the survey questionnaire and 7 were interviewed, all from the departments of English at 5 different private universities in Bangladesh. The results revealed that ELT teachers used ChatGPT for generating practice tasks, preparing question materials for quizzes or examinations, and providing automated feedback. The teachers highlighted several benefits, such as saving time, having unlimited resources, and easy accessibility, while they noted students’ overdependence and plagiarism, misinterpreted instruction, faulty information, and similar and repetitive structure and language as the potential drawbacks. The teachers opined that both teachers and students need proper training and ethical awareness to use ChatGPT. The findings bring forth valuable insights for teachers, students, and policymakers.
... text, images, audio, code, simulations, and videos, based on the data they have been trained on (Eke, 2023). Recently, there has been growing interest in and extensive application of conversational agents powered by generative AI across diverse disciplines, each harnessing this technology for unique purposes (Aydin & Karaarslan, 2023). ...
Article
The popularity of generative AI chatbots, such as ChatGPT, has sparked numerous studies investigating their use in educational contexts. However, it is important to note that chatbots are not a new phenomenon; researchers have explored conversational agents across diverse fields for decades. Conversational agents engage users in natural language conversations through text or voice interfaces. While these agents have demonstrated potential for enhancing human learning, relatively few studies have assessed their overall effectiveness or the contexts in which they are implemented in education. To address this gap, we systematically reviewed empirical studies published before the emergence of ChatGPT. Given the transformative impact of generative AI technologies, we argue that it is crucial to summarize research on conversational agents conducted prior to this paradigm shift. Understanding the educational applications of earlier chatbots provides valuable context for evaluating and guiding ongoing developments in the era of generative AI. Our review examined 3,045 articles, ultimately selecting 23 studies encompassing 29 implementations published between 2004 and 2019. The findings highlight variations in chatbot interfaces, learning modes, and interactions, with evidence of medium to large effects on learning outcomes and positive usability perceptions. Common limitations included non-random sampling methods and small sample sizes. Future research directions emphasize the importance of addressing contextual, implementation, and methodological considerations to advance the field further.
Article
Full-text available
As artificial intelligence (AI) becomes increasingly integrated into higher education, understanding perceptions across different demographic groups is essential for its effective implementation. This study examines attitudes toward AI among students, lecturers, and academic staff, considering factors such as gender, age, occupation, academic discipline, ethical concerns, and experience level. The findings indicate that while overall perceptions of AI in education are positive, concerns about ethics and uncertainty regarding its role persist. Gender and age differences in AI perceptions are minimal, though female students, educators, and individuals in humanities disciplines express slightly higher ethical concerns. Teachers exhibit greater skepticism, emphasizing the need for transparency, ethical guidelines, and training to build trust. The study also highlights the influence of AI experience on perceptions: frequent users tend to have a more positive outlook, whereas those with advanced expertise engage with AI more selectively, suggesting a shift toward intentional and strategic use.
Article
Full-text available
Background: Increasingly, students are using ChatGPT to assist them in learning and even completing their assessments, raising concerns of academic integrity and loss of critical thinking skills. Many articles suggested educators redesign assessments that are more 'Generative-AI-resistant' and focus on assessing students on higher-order thinking skills. However, there is a lack of articles that attempt to quantify assessments at different cognitive levels to provide empirical insights into ChatGPT's performance at each level, which will affect how educators redesign their assessments.
Objectives: Educators need new information on how well ChatGPT performs in order to redesign future assessments to assess their students in this new paradigm. This paper attempts to fill the gap in empirical research by using spreadsheet modelling assessments, tested under four different prompt engineering settings, to provide new knowledge to support assessment redesign. Our proposed methodology can be applied to other course modules so that educators can derive their respective insights for future assessment designs and actions.
Methods: We evaluated the performance of ChatGPT 3.5 on spreadsheet modelling assessment questions with multiple linked test items categorised according to the revised Bloom's taxonomy. We tested and compared accuracy under four prompt engineering settings, namely Zero-Shot-Baseline (ZSB), Zero-Shot-Chain-of-Thought (ZSCoT), One-Shot (OS), and One-Shot-Chain-of-Thought (OSCoT), to establish how well ChatGPT 3.5 tackled technical questions of different cognitive learning levels under each setting, and which setting is effective in enhancing ChatGPT's performance at each level.
Results: We found that ChatGPT 3.5 performed well up to Level 3 of the revised Bloom's taxonomy using ZSB, and its accuracy decreased as the cognitive level increased. From Level 4 onwards, it did not perform as well, committing many mistakes. ZSCoT achieved modest improvements up to Level 5, making it a possible concern for instructors. OS achieved very significant improvements for Levels 3 and 4, while OSCoT was needed to achieve very significant improvement for Level 5. None of the prompts tested improved the response quality for Level 6.
Conclusions: Educators must be cognizant of ChatGPT's performance on questions at different cognitive levels, and of the enhanced performance obtainable from suitable prompts. To develop students' critical thinking abilities, we provide four recommendations for assessment redesign which aim to mitigate the negative impact on student learning and leverage ChatGPT to enhance learning, considering its performance at different cognitive levels.
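The four prompt engineering settings named in the abstract (ZSB, ZSCoT, OS, OSCoT) can be sketched as simple prompt templates. The template wording, helper function, and example question below are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical templates for the four settings: one-shot variants prepend
# a worked example, chain-of-thought variants append a reasoning cue.
COT_SUFFIX = "Let's think step by step."

def build_prompt(question: str, setting: str, example: str = "") -> str:
    """Assemble a prompt under one of ZSB, ZSCoT, OS, or OSCoT."""
    parts = []
    if setting in ("OS", "OSCoT"):
        # One-shot settings include a single worked example first.
        parts.append(f"Example:\n{example}")
    parts.append(f"Question:\n{question}")
    if setting in ("ZSCoT", "OSCoT"):
        # Chain-of-thought settings ask the model to reason stepwise.
        parts.append(COT_SUFFIX)
    return "\n\n".join(parts)

q = "Build a spreadsheet formula that sums sales above a threshold."
ex = 'Question: ... Answer: =SUMIF(B2:B10, ">100")'
for s in ("ZSB", "ZSCoT", "OS", "OSCoT"):
    print(f"--- {s} ---\n{build_prompt(q, s, ex)}\n")
```

Comparing model accuracy across these four template variants, per cognitive level, is the general shape of the comparison the study reports.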
Article
Full-text available
Generative artificial intelligence (Gen AI) has gained the spotlight within education since large language models became publicly available. Gen AI has demonstrated its ability to generate high-quality academic content and even pass medical exams, and concerns about these capabilities have, at times, overshadowed its potential benefits. This paper explores Gen AI as a training companion in paramedic education and continuing professional development (CPD), highlighting how it can enhance learning, improve accessibility and address individual learner needs while acknowledging potential problems.
Book
Full-text available
Rethinking Language Education in the Age of Generative AI bridges the gap between theory, research, and practice in AI and language education. Through conceptual pieces, empirical studies, and practical applications, this book provides critical insights and implications for reimagining language education in the age of generative AI. The contributors explore a wide range of issues, reflections, and innovations in AI and language education across diverse contexts, including English as a Second Language (ESL), English as a Foreign Language (EFL), foreign language learning, postsecondary pathways programs for international students, and language teacher education programs. Topics examined include critical AI literacy, GenAI-informed second language teaching and assessment, teacher and student perceptions, tool development for language learning, as well as ethical considerations, policies, and guidelines. The book incorporates interdisciplinary perspectives, such as L2/foreign language studies, education, and applied linguistics, as well as global insights from countries like the United States, Canada, South Korea, Thailand, Indonesia, and the Philippines. This book is essential for students and researchers seeking to leverage AI to enhance language teaching and learning in innovative, critical, ethical, and responsible ways.
Article
Full-text available
Introduction: Academic integrity is the expectation that members of the academic community, including researchers, teachers, and students, act with accuracy, honesty, fairness, responsibility, and respect. Academic integrity is an issue of critical importance to academic institutions and has been gaining increasing interest among scholars in the last few years. While contravening academic integrity is known as academic misconduct, cheating is one type of academic misconduct and is generally defined as “any action that dishonestly or unfairly violates rules of research or education”.
Case study: The case study presented in this paper describes the elements of academic misconduct in three Middle Eastern countries (Saudi Arabia, Egypt, and Jordan). Four categories of factors were analyzed, namely personal, cultural traits, contextual, and institutional. Moreover, a comparison of factors of misconduct is conducted across the three countries in order to examine how different learning environments and cultures can affect academic cheating. The study also investigates the role of teachers and administration systems in enforcing integrity policy in educational institutes.
Discussion and evaluation: An evaluation of the main causes of cheating and plagiarism among students in Saudi Arabia, Egypt, and Jordan is conducted by analyzing students’ responses to a 20-question survey. The nonparametric Dunn’s statistical analysis is performed to compare the variance and frequency of factors that may affect academic integrity. The significant results are reported in terms of the Kruskal F statistic and p-value.
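A nonparametric group comparison of the kind reported above can be sketched with SciPy's Kruskal-Wallis test. The scores below are simulated placeholders, not the study's survey responses; a significant result would then be followed by a post hoc test such as Dunn's (available, for example, in the scikit-posthocs package) to locate which country pairs differ.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Hypothetical 1-5 agreement scores on one cheating-related survey
# item for three country samples (illustrative data only).
saudi = rng.integers(1, 6, size=60)
egypt = rng.integers(2, 6, size=60)
jordan = rng.integers(1, 5, size=60)

# Kruskal-Wallis H test: do the three groups share one distribution?
stat, p = kruskal(saudi, egypt, jordan)
print(f"H = {stat:.2f}, p = {p:.4f}")
```

A small p-value here only says that at least one group differs; pairwise conclusions require the post hoc step.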
Article
Full-text available
Contract cheating, where students recruit a third party to undertake their assignments, is frequently reported to be increasing, presenting a threat to academic standards and quality. Many incidents involve payment of the third party, often a so-called “Essay Mill,” giving contract cheating a commercial aspect. This study synthesized findings from prior research to try and determine how common commercial contract cheating is in Higher Education, and test whether it is increasing. It also sought to evaluate the quality of the research evidence which addresses those questions. Seventy-one samples were identified from 65 studies, going back to 1978. These included 54,514 participants. Contract cheating was self-reported by a historic average of 3.52% of students. The data indicate that contract cheating is increasing; in samples from 2014 to present the percentage of students admitting to paying someone else to undertake their work was 15.7%, potentially representing 31 million students around the world. A significant positive relationship was found between time and the percentage of students admitting to contract cheating. This increase may be due to an overall increase in self-reported cheating generally, rather than contract cheating specifically. Most samples were collected using designs which makes it likely that commercial contract cheating is under-reported, for example using convenience sampling, with a very low response rate and without guarantees of anonymity for participants. Recommendations are made for future studies on academic integrity and contract cheating specifically.
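The positive relationship between time and self-reported contract cheating described above amounts to fitting a trend line to yearly prevalence estimates. The numbers below are illustrative only, loosely echoing the 3.52% historic average and the 15.7% recent figure, not the study's 71 samples.

```python
import numpy as np

# Illustrative (not the study's) sample years and self-report rates,
# in percent, rising from a low historic baseline to recent levels.
years = np.array([1980, 1990, 2000, 2010, 2014, 2016, 2018])
rates = np.array([1.0, 2.0, 3.0, 5.0, 9.0, 12.0, 15.7])

# Least-squares slope: percentage-point change per year.
slope, intercept = np.polyfit(years, rates, 1)
print(f"Trend: {slope:.3f} percentage points per year")
```

A positive fitted slope is the simplest version of the "significant positive relationship" the synthesis tests for; the study itself works from many more samples with varying sizes.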
Article
In less than 2 months, the artificial intelligence (AI) program ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool's developer, OpenAI. The program, which automatically creates text based on written prompts, is so popular that it's likely to be "at capacity right now" if you attempt to use it. When you do get through, ChatGPT provides endless entertainment. I asked it to rewrite the first scene of the classic American play Death of a Salesman, but to feature Princess Elsa from the animated movie Frozen as the main character instead of Willy Loman. The output was an amusing conversation in which Elsa, who has come home from a tough day of selling, is told by her son Happy, "Come on, Mom. You're Elsa from Frozen. You have ice powers and you're a queen. You're unstoppable." Mash-ups like this are certainly fun, but there are serious implications for generative AI programs like ChatGPT in science and academia.
Article
At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.