[Figure available from: International Journal for Educational Integrity]
Source publication
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-g...
Similar publications
Background
The application of artificial intelligence (AI) in academic writing has raised concerns regarding accuracy, ethics, and scientific rigour. Some AI content detectors may not accurately identify AI-generated texts, especially those that have undergone paraphrasing. Therefore, there is a pressing need for efficacious approaches or guideline...
Citations
... Simultaneously, the reliability of AI detection tools has emerged as a critical concern, particularly in academic settings. Multiple studies have demonstrated the difficulty in distinguishing between human-authored and AI-generated content (Elkhatat et al., 2023; Liang et al., 2023; Perkins et al., 2023; Perkins, Roe, et al., 2024; Sadasivan et al., 2023; Weber-Wulff et al., 2023). The unreliability of detection systems has serious implications, potentially leading to unjustified accusations and adverse consequences for students' academic and personal lives (Gorichanaz, 2023; Roe, Perkins, & Ruelle, 2024). ...
This research examines the emerging technique of step-around prompt engineering in GenAI research, a method that deliberately bypasses AI safety measures to expose underlying biases and vulnerabilities in GenAI models. We discuss how Internet-sourced training data introduces unintended biases and misinformation into AI systems, which can be revealed through the careful application of step-around techniques. Drawing parallels with red teaming in cybersecurity, we argue that step-around prompting serves a vital role in identifying and addressing potential vulnerabilities while acknowledging its dual nature as both a research tool and a potential security threat. Our findings highlight three key implications: (1) the persistence of Internet-derived biases in AI training data despite content filtering, (2) the effectiveness of step-around techniques in exposing these biases when used responsibly, and (3) the need for robust safeguards against malicious applications of these methods. We conclude by proposing an ethical framework for using step-around prompting in AI research and development, emphasizing the importance of balancing system improvements with security considerations.
... This integration can significantly reduce the time spent drafting and revising academic manuscripts (Pividori & Greene, 2024). The use of AI in academic writing can address needs in the following areas, although the list is not exhaustive: (a) comprehensive literature review and information collection (Campbell & Cox, 2024; Salvagno et al., 2023); (b) idea generation and topic development (Campbell & Cox, 2024; He et al., 2023; Khalifa & Albadawy, 2024); (c) writing assistance and editing (Tran, 2023; Dergaa et al., 2023; Salvagno et al., 2023; Saqib & Zia, 2024; Weber-Wulff et al., 2023); (d) formatting and compliance (Rasmussen et al., 2018); (e) personalization and feedback (Rad et al., 2023); (f) accessibility (Mohammed & 'Nell' Watson, 2019; Salas-Pilco et al., 2022; Du & Daniel, 2024; Ulla et al., 2024); (g) data analysis and visualization (Shahrul & Mohamed, 2024; Wang, Wu et al., 2023); (h) publication preparation and discoverability (Dergaa et al., 2023) (see Figure 1). ...
... AI technologies can adapt to new forms of academic dishonesty as they evolve. Thus, AI detection tools are continually updated to address these challenges, providing a dynamic solution to an ever-changing problem (Saqib & Zia, 2024; Weber-Wulff et al., 2023). Importantly, AI detection tools can be used not only for punitive measures but also to support educational growth (Weber-Wulff et al., 2023). By identifying potential cases of academic dishonesty, educators can offer students opportunities to revise and resubmit their work, fostering a learning environment that emphasizes growth and understanding over punishment (Dusza, 2024). ...
Higher Education is experiencing substantial transformations as Artificial Intelligence (AI) redefines academic and administrative operations. This paper examines AI's paradigm-shifting influence on Higher Education Institutions (HEIs), emphasizing its contribution to improving pedagogical processes and optimizing administrative efficacy. Using a structured methodology, this study's thematic analysis highlights key areas where AI is making an impact. The analysis addresses the positive aspects of using AI in teaching practices and the learning process, its crucial role in the writing of academic papers, its effects on academic honesty, its implementation in administrative work, the responsibilities faced by education leaders in the AI landscape, and the link between AI and the digital divide in higher learning institutions. Further studies may focus on comparative research among diverse academic institutions in different regions, leadership strategies that facilitate the integration of AI in HEIs, and techniques to enhance AI literacy among teachers, staff, and students.
... Furthermore, evidence continues to amass that genAI detectors are better at detecting 100% AI-generated content than humans are (e.g., Perkins et al., 2024b; Weber-Wulff et al., 2023). Such findings suggest that these programs offer some help and at least partial evidence that a student may have used genAI in situations where they have been instructed not to. ...
A ‘two-lane’ (All-or-None) approach to the use of generative artificial intelligence (genAI) is the idea that there should be two categories of assessments in higher education: Lane 1/None, where the use of genAI is prohibited, and Lane 2/All, where any use of genAI is permitted. This idea has been thoughtfully detailed and continues to be debated. Although this idea is generally well-intentioned, in this comment piece I argue that, if implemented, it will promote an impoverished approach to education and educational assessment. One argument often invoked in favour of an All-or-None approach is that genAI use may sometimes be undetectable. Contract cheating (e.g., students outsourcing assessments to ghostwriters) is sometimes undetectable, yet an argument that there should be an All-or-None approach permitting contract cheating in some assessments is clearly absurd. An All-or-None approach to genAI and assessment is also absurd. A middle lane, where genAI use in assessments is allowed with some limitations, is essential.
... and ZeroGPT can accurately detect AI-generated text. Another study with the same research question was conducted by [15], where the authors reveal that existing detectors tend to classify text as human-written, giving contradictory results. AI content detectors are also biased against non-native speakers. ...
With the rise of advanced natural language models like GPT, distinguishing between human-written and GPT-generated text has become increasingly challenging and crucial across various domains, including academia. The long-standing issue of plagiarism has grown more pressing, now compounded by concerns about the authenticity of information, as it is not always clear whether the presented facts are genuine or fabricated. In this paper, we present a comprehensive study of feature extraction and analysis for differentiating between human-written and GPT-generated text. By applying machine learning classifiers to these extracted features, we evaluate the significance of each feature in detection. Our results demonstrate that human and GPT-generated texts exhibit distinct writing styles, which can be effectively captured by our features. Given sufficiently long text, the two can be differentiated with high accuracy.
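As a hedged illustration of the feature-extraction-and-classification pipeline this abstract describes, the sketch below computes three simple stylometric features and fits a logistic-regression classifier. The chosen features (sentence length, type-token ratio, punctuation rate) and the toy corpus are assumptions for exposition, not the paper's actual feature set or data.

```python
# Sketch of a stylometric feature extractor plus classifier for separating
# human-written from GPT-generated text. Features and corpus are illustrative
# assumptions, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stylometric_features(text: str) -> np.ndarray:
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)        # words per sentence
    type_token_ratio = len({w.lower() for w in words}) / max(len(words), 1)
    punct_rate = sum(text.count(c) for c in ",;:") / max(len(words), 1)
    return np.array([avg_sentence_len, type_token_ratio, punct_rate])

# Toy labelled corpus (0 = human-written, 1 = GPT-generated); a real study
# would use thousands of documents per class.
texts = [
    "Honestly, I wasn't sure the experiment would work; it barely did.",
    "We ran it twice. Both runs failed, oddly enough, for different reasons.",
    "The proposed method demonstrates significant improvements across all evaluated benchmarks.",
    "In conclusion, the findings highlight the importance of robust evaluation methodologies.",
]
labels = [0, 0, 1, 1]

X = np.stack([stylometric_features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
# Coefficient magnitudes indicate how strongly each style marker contributes.
print(dict(zip(["sent_len", "ttr", "punct"], clf.coef_[0])))
```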
... The detection of LLM-generated text has become an emerging challenge. Current detection technologies, including commercial tools, often struggle to distinguish between human-written and LLM-generated content (Price and Sakellarios 2023; Walters 2023; Weber-Wulff et al. 2023). These systems frequently misclassify outputs, with a tendency to favour human-written classifications. ...
The remarkable ability of large language models (LLMs) to comprehend, interpret, and generate complex language has rapidly integrated LLM-generated text into various aspects of daily life, where users increasingly accept it. However, the growing reliance on LLMs underscores the urgent need for effective detection mechanisms to identify LLM-generated text. Such mechanisms are critical to mitigating misuse and safeguarding domains like artistic expression and social networks from potential negative consequences. LLM-generated text detection, conceptualized as a binary classification task, seeks to determine whether an LLM produced a given text. Recent advances in this field stem from innovations in watermarking techniques, statistics-based detectors, and neural-based detectors. Human-assisted methods also play a crucial role. In this survey, we consolidate recent research breakthroughs in this field, emphasizing the urgent need to strengthen detector research. Additionally, we review existing datasets, highlighting their limitations and developmental requirements. Furthermore, we examine various LLM-generated text detection paradigms, shedding light on challenges like out-of-distribution problems, potential attacks, real-world data issues, and ineffective evaluation frameworks. Finally, we outline intriguing directions for future research in LLM-generated text detection to advance responsible artificial intelligence. This survey aims to provide a clear and comprehensive introduction for newcomers while offering seasoned researchers valuable updates in the field.
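To make the statistics-based detector family the survey covers concrete, here is a minimal sketch that scores a text by its perplexity under a reference language model and flags unusually predictable text as likely LLM-generated. The choice of GPT-2 as the reference model and the decision threshold are illustrative assumptions; a real detector would calibrate both on labelled data.

```python
# Minimal statistics-based detector sketch: LLM output tends to be more
# predictable (lower perplexity) under a language model than human prose.
# Reference model and threshold are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the reference model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

def looks_llm_generated(text: str, threshold: float = 40.0) -> bool:
    # Hypothetical threshold; calibrate on labelled human/LLM text before use.
    return perplexity(text) < threshold
```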
... Relying on AI-detection methods to force students not to use GenAI is as successful and effective as catching the wind in a net. Many researchers have found these tools generally unreliable (Elkhatat et al., 2023; Li et al., 2023; Liang et al., 2023; Matthews & Volpe, 2023; Sharples, 2022; Weber-Wulff et al., 2023). We strongly believe that no matter how advanced these detection tools claim to be, there will always be ways to outsmart them (using AI too), and GenAI will keep developing and outpacing these detection methods. ...
There is a growing need to upskill higher education (HE) teachers for the effective and responsible integration of generative artificial intelligence (GenAI) in their classrooms. This case study sought to address this growing need by designing and delivering a training course for educators, focusing on the use of ChatGPT as it was the most commonly used tool at the time. The professional development opportunity lasted 5 weeks and covered critical aspects of GenAI use for teaching and learning. Data collected from participants included discussion board entries, written tasks and focus groups. Findings highlight some of the common practices and concerns HE practitioners had regarding the use of GenAI in their practice. The findings also emphasise the importance of providing teachers with customised GenAI training to facilitate its effective integration in HE contexts. Finally, based on the findings of this study, we propose the TPTP Support System for Teachers, built upon four key areas: teacher training, pedagogical support, testing revamp and practice networks. This system aims to guide institutional efforts to facilitate and support educators as they integrate GenAI in HE. Implications for practice or policy: Teacher training is necessary for the effective integration of GenAI in HE contexts. Institutions should provide support in four key areas to facilitate educators’ effective and responsible use of GenAI in HE. The TPTP Support System for Teachers can be leveraged for these planning and support initiatives.
... Some higher education institutions (HEIs) have integrated AI detection applications, such as Turnitin, within virtual learning environments, or use standalone tools like GPTZero (https://gptzero.me/) and WinstonAI (https://gowinston.ai/) to detect AI-generated work (McDonald et al., 2024). However, other institutions remain hesitant due to concerns about the accuracy and reliability of these tools, particularly the risk of false positives (Dalalah & Dalalah, 2023; Saqib & Zia, 2024; Weber-Wulff et al., 2023). Moreover, current detection tools may unfairly target students whose first language is not English, mistakenly identifying their work as AI-generated (Fröhling & Zubiaga, 2021). ...
Generative AI has the potential to transform higher education assessment. This study examines the opportunities and challenges of integrating AI into coursework assessments, highlighting the need to rethink traditional paradigms. A case study is presented that explores AI as an auxiliary learning tool in postgraduate coursework. Students found AI valuable for text generation, proofreading, idea generation, and research but noted limitations in accuracy, detail, and specificity. AI integration offers advantages such as enhancing assessment authenticity, promoting self-regulated learning, and developing critical thinking and problem-solving skills. A holistic approach is recommended, incorporating AI into feedback, adapting assessments to leverage AI’s capabilities, and promoting AI literacy among students and educators. Embracing AI while addressing its challenges can enable effective, equitable, and engaging assessment and teaching practices. Universities are encouraged to strategically integrate AI into teaching and learning, ultimately transforming the educational landscape to better prepare students for an AI-driven world.
... In particular, the focus has been on the curation of detection benchmarks (Uchendu et al. 2021; Li et al. 2024; Wang et al. 2024) and the automation of the detection procedure (Venkatraman, Uchendu, and Lee 2024; Hu, Chen, and Ho 2023; Wang et al. 2023; Mitchell et al. 2023). Yet these detectors can be easily fooled by simple paraphrasing (Krishna et al. 2024) and are not robust to unseen models and domains (Weber-Wulff et al. 2023). These limitations necessitate exploring alternative strategies, such as integrating human-in-the-loop mechanisms, where human evaluators validate or supplement existing detectors. ...
The proliferation of generative models has presented significant challenges in distinguishing authentic human-authored content from deepfake content. Collaborative human efforts, augmented by AI tools, present a promising solution. In this study, we explore the potential of DeepFakeDeLiBot, a deliberation-enhancing chatbot, to support groups in detecting deepfake text. Our findings reveal that group-based problem-solving significantly improves the accuracy of identifying machine-generated paragraphs compared to individual efforts. While engagement with DeepFakeDeLiBot does not yield substantial performance gains overall, it enhances group dynamics by fostering greater participant engagement, consensus building, and the frequency and diversity of reasoning-based utterances. Additionally, participants with higher perceived effectiveness of group collaboration exhibited performance benefits from DeepFakeDeLiBot. These findings underscore the potential of deliberative chatbots in fostering interactive and productive group dynamics while ensuring accuracy in collaborative deepfake text detection. Dataset and source code used in this study will be made publicly available upon acceptance of the manuscript.
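The human-in-the-loop direction raised in the excerpt above, and the group-based detection studied here, can be reduced to a simple decision rule: act on an automated detector's verdict only when it agrees with a majority of human evaluators, and escalate disagreements. The sketch below is a hypothetical illustration; the threshold and voting scheme are assumptions, not the study's protocol.

```python
# Hypothetical human-in-the-loop decision rule combining a detector score
# with majority voting by human evaluators; all values are illustrative.
def hitl_verdict(detector_score: float, human_votes: list[bool],
                 threshold: float = 0.5) -> str:
    detector_flags = detector_score >= threshold            # detector says "AI"
    humans_flag = sum(human_votes) > len(human_votes) / 2   # majority says "AI"
    if detector_flags and humans_flag:
        return "likely machine-generated"
    if not detector_flags and not humans_flag:
        return "likely human-written"
    return "disagreement: escalate for further review"

print(hitl_verdict(0.83, [True, True, False]))  # -> likely machine-generated
```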
... A model, as a sociocultural figure, is a representation. Models can be divided into three types: professional models, non-professional models, and community-organized models (Hassan et al., 2022; Weber-Wulff et al., 2023). ...
This study aimed to examine how the language used in GoSend advertisements taken from YouTube could influence consumer perceptions and behaviour. It offers practical insight into the significance of effective language use, appropriate visual selection, and platform choice in achieving marketing goals. To analyze the advertisement, the researcher used Fairclough’s three-dimensional framework: the text dimension (micro), discourse practice (meso), and socio-cultural practice (macro). The study employed a qualitative method in analyzing the advertising text of Gojek’s YouTube GoSend advertisement #BestSellerGoSend featuring Ariel Noah. The findings indicated that various strategies were used in the advertisement to attract consumer interest. The choice of appealing and persuasive language made customers more inclined to use the GoSend service. The advertisement persuaded customers that, by using the service, goods would arrive quickly, with cheap shipping costs and safe, hassle-free delivery. In addition, the choice of models as visual objects was highly influential in attracting the audience’s attention and building public trust, because the model in this advertisement was a legendary public figure, and the inclusion of old song clips from the band “Noah” served as a special attraction.