Figure - available from: International Journal for Educational Integrity
Artificial intelligence (AI)-generated articles showed reduced AI scores after rephrasing. a The mean AI scores of 50 ChatGPT-generated articles before and after rephrasing; b ChatGPT-generated articles showed lower perplexity scores (computed by GPTZero) than the original articles, although the scores increased after rephrasing; * p < 0.05, ** p < 0.01, *** p < 0.001
Source publication
Background
The application of artificial intelligence (AI) in academic writing has raised concerns regarding accuracy, ethics, and scientific rigour. Some AI content detectors may not accurately identify AI-generated texts, especially those that have undergone paraphrasing. Therefore, there is a pressing need for efficacious approaches or guideline...
Citations
... Fan (2023) identified Grammarly as another important AI tool for providing feedback on writing. Google Translate, DeepL, and Turnitin are AI tools that help EFL learners improve their writing (Gao et al., 2024; Liu et al., 2024; Sun et al., 2022). Similarly, Kim & Kim (2022) focused on the benefits of using AI to develop students' skills, such as problem-solving, creativity, and collaboration. ...
This study was conducted to explore the applications of AI tools such as ChatGPT, Grammarly, Google Translate, Turnitin, and CorpusMate in IELTS essay writing, as well as the challenges of using those tools. The participants were 45 IELTS learners aged 13 to 19 at a foreign language center in the Mekong Delta, Vietnam. The findings indicated that young IELTS learners did not fully recognize the assistance those AI tools could provide in writing their essays. Regarding challenges and limitations, they stated that they did not know many useful AI tools and found it hard to formulate proper instructions when using one. This study gives readers a rather different view of the applications of AI tools in the field of English language teaching and learning.
... Representative detection tools include ZeroGPT, developed specifically to identify AI-generated content and accurately distinguish between AI-generated and human-written texts. Based on DeepAnalyse technology and a training corpus of more than 10 million articles, this tool achieves high accuracy while maintaining a low false-positive rate (Liu et al., 2024). Copyleaks, which focuses on plagiarism detection, has added functionality to identify AI-generated content. ...
Introduction
The widespread application of artificial intelligence in academic writing has triggered a series of pressing legal challenges.
Methods
This study systematically examines critical issues, including copyright protection and academic integrity, through comparative research methods. We establish a risk assessment matrix to quantitatively analyze the risks of AI-assisted academic writing along three dimensions: impact, probability, and mitigation cost, thereby identifying high-risk factors.
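The three-dimensional risk matrix described above can be sketched in code. The specific risks, the 1–5 ratings, and the multiplicative scoring rule below are illustrative assumptions, not values or formulas taken from the study:

```python
# Hypothetical sketch of a risk assessment matrix that scores risks on
# three dimensions: impact, probability, and mitigation cost.
# All entries and the scoring rule are illustrative assumptions.

def risk_score(impact, probability, mitigation_cost):
    """Combine the three dimensions into a single score.

    impact * probability is the classic risk-matrix core; mitigation
    cost is folded in as a further multiplier so that risks which are
    also expensive to mitigate rank higher.
    """
    return impact * probability * mitigation_cost

# Hypothetical risks with (impact, probability, mitigation_cost) on 1-5 scales.
risks = {
    "credibility illusion": (4, 4, 3),
    "implicit plagiarism": (5, 3, 4),
    "data leakage": (5, 2, 5),
}

# Rank risks from highest to lowest score to surface high-risk factors.
ranked = sorted(risks.items(), key=lambda kv: -risk_score(*kv[1]))
for name, dims in ranked:
    print(f"{name}: {risk_score(*dims)}")
```

A weighted sum instead of a product would work equally well; the point is only that each risk factor is rated on all three dimensions and then ranked.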
Results
The findings reveal that AI-assisted writing challenges fundamental principles of traditional copyright law, with judicial practice tending to position AI as a creative tool while emphasizing human agency. Regarding academic integrity, new risks such as "credibility illusion" and "implicit plagiarism" have become prominent in AI-generated content, necessitating adaptive regulatory mechanisms. Research data protection and personal information security face dual data-security challenges that require both technological and institutional innovation.
Discussion
Based on these findings, we propose a three-dimensional regulatory framework of "transparency, accountability, and technical support" and present systematic policy recommendations from the perspectives of institutional design, organizational structure, and international cooperation. The results deepen understanding of the legal attributes of AI creation, promote theoretical innovation in digital-era copyright and academic ethics, and provide practical guidance for academic institutions in formulating AI usage policies.
... There are different experimental results in the literature on the ability of humans to detect AI-generated text. According to [6], [7], AI-generated text can be detected by human readers with 76% accuracy. In the work of [8], 50 medical abstracts were generated using AI tools and compared to 50 human-authored abstracts in terms of plagiarism scores and detectability by human reviewers. ...
... While humans are good at detecting semantic errors, detectors are good at detecting certain statistical differences in the text. [7] analyzed the performance of existing AI content detectors and reported that Originality.ai and ZeroGPT can accurately detect AI-generated text. ...
With the rise of advanced natural language models like GPT, distinguishing between human-written and GPT-generated text has become increasingly challenging and crucial across various domains, including academia. The long-standing issue of plagiarism has grown more pressing, now compounded by concerns about the authenticity of information, as it is not always clear whether the presented facts are genuine or fabricated. In this paper, we present a comprehensive study of feature extraction and analysis for differentiating between human-written and GPT-generated text. By applying machine learning classifiers to these extracted features, we evaluate the significance of each feature in detection. Our results demonstrate that human and GPT-generated texts exhibit distinct writing styles, which can be effectively captured by our features. Given sufficiently long text, the two can be differentiated with high accuracy.
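A minimal sketch of the feature-extraction-plus-classifier pipeline this kind of study describes. The specific features and the suggested classifier are illustrative assumptions, since the paper's exact feature set is not listed here:

```python
# Hypothetical sketch: stylometric features for separating human-written
# from GPT-generated text. Feature choices are illustrative assumptions,
# not the paper's actual method.
import re

def extract_features(text):
    """Map a text to a small numeric feature vector."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words) or 1
    return [
        n_words / max(len(sentences), 1),       # average sentence length
        sum(len(w) for w in words) / n_words,   # average word length
        len(set(words)) / n_words,              # unique-word ratio
    ]

# With labelled samples, any off-the-shelf classifier can be trained on
# these vectors, e.g. scikit-learn's LogisticRegression:
# clf = LogisticRegression().fit([extract_features(t) for t in texts], labels)
print(extract_features("The quick brown fox jumps. It runs far."))
```

As the abstract notes, such features only become discriminative given sufficiently long text; on short snippets the per-feature estimates are too noisy.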
... Discussions are raised about the role of technology in the advancement of knowledge, as well as about ethical biases in the handling of information and in scientific production. Canto-Esquivel et al. (2022), Romero (2023), and Liu et al. (2024) note that, within emerging paradigms, AI offers opportunities for editorial transformation by automating repetitive tasks, improving efficiency, and allowing management to focus on more creative and strategic processes. ...
... In this context, AI emerges as a mediating tool that makes it possible to standardize procedures for reviewing and selecting materials in an objective, data-driven way (Repiso, 2024; Tennant et al., 2017). In particular, it represents a means of optimizing the management of academic journals by automating various tasks in the editorial process (Liu et al., 2024; Penabad-Camacho et al., 2024). One of the main challenges in this area is peer review, which is traditionally costly in terms of time and resources. ...
... This challenge can be addressed with natural language processing (NLP) tools, which make it possible to analyze formal aspects such as coherence, structure, originality, and plagiarism (Penabad-Camacho et al., 2024). However, although these algorithms help automate the review process, human judgment will always be decisive in decision-making (Liu et al., 2024). ...
The article analyzes the challenges of standardizing social science journals, specifically in the field of education, with respect to scientific and editorial quality. The criteria studied are open access, ethics policy, academic inbreeding, the mediation of artificial intelligence, and peer review. Using a qualitative-quantitative approach and applying deductive and inductive methods, articles indexed mainly in Scopus and Web of Science are analyzed. A comparative study of 17 Colombian education journals categorized in Publindex (Call 910 of 2021) is carried out to establish convergences and divergences. The results show that editorial standardization poses several challenges, requiring a full support structure (units, processes, resources) to make it viable. Finally, strategies aimed at strengthening editorial management are identified.
... ZeroGPT, Turnitin, GPT-2 Output Detector (GPT-2 ODD), Copyleaks, GPTZero, Content at Scale, QuillBot, Plagiarism Detector Score (Turnitin), AI Content Detector Device Using Machine Learning Technique, etc. Originality.ai and ZeroGPT excelled at detecting AI-generated articles, with 100% and 96% accuracy, respectively [44]. Turnitin achieved 94% accuracy for ChatGPT-generated articles. ...
The world is currently facing the issue of text authenticity in different areas. The implications of generated text can raise concerns about manipulation. When a photo of a celebrity is posted alongside an impactful message, it can generate outrage, hatred, or other manipulative beliefs. Numerous artificial intelligence tools use different techniques to determine whether a text is artificial-intelligence-generated or authentic. However, these tools fail to accurately handle cases in which a text is written by a person who uses patterns specific to artificial intelligence tools. For these reasons, this article presents a new approach to the issue of deepfake texts. The authors propose methods to determine whether a text is associated with a specific person through their characteristic writing patterns. Each person has their own writing style, which can be identified in the average number of words, the average word length, the ratio of unique words, and the sentiments expressed in the sentences. These features are used to develop a custom writing-style machine learning model named the custom deepfake text model. The model's results show an accuracy of 99%, a precision of 97.83%, and a recall of 90%. A second model, the anomaly deepfake text model, determines whether a text is associated with a specific author by detecting anomalies in the textual characteristics assumed to reflect that author's patterns. The results show an accuracy of 88.9%, a precision of 100%, and a recall of 89.9%. The findings outline the possibility of using the model to determine whether a text is associated with a certain author. The paper positions itself as a starting point for identifying deepfakes at the text level.
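The anomaly idea above can be sketched as a simple outlier check: build a per-author baseline from known texts, then flag a new text whose features deviate too far from it. The z-score rule, the threshold, and the example numbers are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of author-level anomaly detection on stylometric
# features. The z-score rule and threshold are illustrative assumptions.
import statistics

def author_profile(feature_rows):
    """Per-feature (mean, stddev) computed from an author's known texts."""
    columns = list(zip(*feature_rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def is_anomalous(features, profile, z_threshold=3.0):
    """True if any feature lies more than z_threshold standard deviations
    from the author's mean, suggesting a different author wrote the text."""
    return any(
        abs(x - mu) > z_threshold * sigma
        for x, (mu, sigma) in zip(features, profile)
        if sigma > 0
    )

# Example feature rows: (avg words/sentence, avg word length, unique ratio).
known = [(18.0, 4.4, 0.62), (17.5, 4.5, 0.60), (18.4, 4.3, 0.63)]
profile = author_profile(known)
print(is_anomalous((18.1, 4.4, 0.61), profile))  # close to the baseline
print(is_anomalous((9.0, 5.9, 0.95), profile))   # far from the baseline
```

In practice a profile would be built from many more than three texts, and per-feature thresholds would be tuned on held-out data.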
... Recent studies have shown that some AI detectors, like Originality.ai, have a 100% detection rate for AI-generated and AI-rephrased texts (Liu et al., 2024). However, others like Turnitin have shown a 0% misclassification rate for human-written articles but only identified 30% of AI-rephrased articles. ...
... This indicates a significant variance in the effectiveness of different AI detectors. Human reviewers, particularly those with professorial experience, have been able to accurately discriminate at least 96% of AI-rephrased articles (Liu et al., 2024). This suggests that human expertise still plays a crucial role in the detection process. ...
... Therefore, AI detection tools should be combined with traditional plagiarism detectors and human review for better accuracy in identifying AI-generated content. Liu et al. [46] found that human reviewers, combined with AI detectors, effectively identify AI-generated content, even when paraphrased. Their findings highlight the need for human expertise to complement AI detectors, especially in fields where detection accuracy is critical. ...
This study investigates whether assessments fostering higher-order thinking skills can reduce plagiarism involving generative AI tools. Participants completed three tasks of varying complexity in four groups: control, e-textbook, Google, and ChatGPT. Findings show that AI plagiarism decreases as task complexity increases, with higher-order tasks resulting in lower similarity scores and AI plagiarism percentages. The study also highlights the distinction between similarity scores and AI plagiarism, recommending both for effective plagiarism detection. Results suggest that assessments promoting higher-order thinking are a viable strategy for minimizing AI-driven plagiarism.
... Such technologies pose huge challenges while, at the same time, offering unprecedented opportunities for automation and development, particularly in classifying AI-generated versus human-written text. Consequently, this problem has far-reaching repercussions, such as the likelihood of misinformation, cheating, and a loss of public trust in digital information [2]. Increased reliance on AI in content generation necessitates the adoption of effective methods for distinguishing AI-generated text from human-generated text [3]. ...
... Originality AI API. The Originality AI API is a commercial AI text detection service, and some studies have reported its high performance [14,15]. It returns an AI score between 0 and 1, indicating the model's confidence that an input text was AI-generated. ...
Peer review is a critical process for ensuring the integrity of published scientific research. Confidence in this process is predicated on the assumption that experts in the relevant domain give careful consideration to the merits of manuscripts which are submitted for publication. With the recent rapid advancements in the linguistic capabilities of large language models (LLMs), a new potential risk to the peer review process is that negligent reviewers will rely on LLMs to perform the often time consuming process of reviewing a paper. In this study, we investigate the ability of existing AI text detection algorithms to distinguish between peer reviews written by humans and different state-of-the-art LLMs. Our analysis shows that existing approaches fail to identify many GPT-4o written reviews without also producing a high number of false positive classifications. To address this deficiency, we propose a new detection approach which surpasses existing methods in the identification of GPT-4o written peer reviews at low levels of false positive classifications. Our work reveals the difficulty of accurately identifying AI-generated text at the individual review level, highlighting the urgent need for new tools and methods to detect this type of unethical application of generative AI.
... For example, AI technologies could assist with literature review, data cleaning, and data analysis, thereby leaving more time for the "joy of writing" [4]. As AI technologies continue to develop and evolve, the academic community must acknowledge the potential benefits of AI use while ensuring that ethical standards are upheld and the authenticity of human authorship is preserved. ...