Article
Literature Review

A Chat(GPT) about the future of scientific publishing

... The advancement of AI within the medical field has led to substantial transformations [20,21], including assisting with specialized diet support [22,23], prevention of potential allergic reactions [24,25], detection of prescription errors [26], extraction of drug interactions from the literature [27], and, in particular, support for literature reviews [28,29]. As the production of medical evidence continues to grow at an accelerated rate, the need for effective tools to sift through and analyze pertinent information has become critically important. ...
... As the production of medical evidence continues to grow at an accelerated rate, the need for effective tools to sift through and analyze pertinent information has become critically important. In response to this need, AI-powered platforms such as ChatGPT, Bing Chat, and Bard AI have emerged as potential aids in literature reviews [11,12,29]. The findings of this study, however, demonstrate varying degrees of reliability and validity exhibited by the three generative AI chatbots, namely ChatGPT-3.5, ...
... The integration of AI tools in the medical field introduces complex ethical considerations [28,29,31-33]. As evidenced by this study, the inaccuracies and fabrications in AI-generated references could potentially lead to misinterpretation, misinformation, and misguided medical decisions. ...
Article
Full-text available
Background and objectives: Literature reviews are foundational to understanding medical evidence. With AI tools like ChatGPT, Bing Chat and Bard AI emerging as potential aids in this domain, this study aimed to individually assess their citation accuracy within Nephrology, comparing their performance in providing precise references. Materials and methods: We generated a prompt to solicit 20 references in Vancouver style for each of 12 Nephrology topics, using ChatGPT, Bing Chat and Bard. We verified the existence and accuracy of the provided references using PubMed, Google Scholar, and Web of Science. We categorized the validity of the references from the AI chatbots into (1) incomplete, (2) fabricated, (3) inaccurate, and (4) accurate. Results: A total of 199 (83%), 158 (66%) and 112 (47%) unique references were provided by ChatGPT, Bing Chat and Bard, respectively. ChatGPT provided 76 (38%) accurate, 82 (41%) inaccurate, 32 (16%) fabricated and 9 (5%) incomplete references. Bing Chat provided 47 (30%) accurate, 77 (49%) inaccurate, 21 (13%) fabricated and 13 (8%) incomplete references. In contrast, Bard provided 3 (3%) accurate, 26 (23%) inaccurate, 71 (63%) fabricated and 12 (11%) incomplete references. The most common error type across platforms was incorrect DOIs. Conclusions: The field of medicine demands faultless adherence to research integrity; even small errors cannot be tolerated. The outcomes of this investigation draw attention to inconsistent citation accuracy across the different AI tools evaluated. Despite some promising results, the discrepancies identified call for cautious and rigorous vetting of AI-sourced references in medicine. Such chatbots, before becoming standard tools, need substantial refinements to assure unwavering precision in their outputs.
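The four-way validity categorisation used in the study above can be read as a simple decision rule. The sketch below is illustrative only; the function name, the boolean inputs, and the precedence of the checks (existence before completeness before detail matching) are assumptions, since the paper does not state how overlapping cases were resolved:

```python
def classify_reference(exists: bool, complete: bool, details_match: bool) -> str:
    """Map a verification outcome onto the study's four categories.

    exists        -- the cited work could be found in PubMed, Google Scholar,
                     or Web of Science
    complete      -- the citation lists all required Vancouver-style fields
    details_match -- authors, title, journal, year, and DOI match the real record
    """
    if not exists:
        return "fabricated"   # no such publication anywhere
    if not complete:
        return "incomplete"   # required citation fields are missing
    if not details_match:
        return "inaccurate"   # real work, but details (e.g. the DOI) are wrong
    return "accurate"
```

Note that incorrect DOIs, the most common error type reported, would fall under the "inaccurate" branch in this sketch.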
... (bioRxiv preprint; this version posted August 18, 2023; https://doi.org/10.1101/2023.08.17.553688) and the superficiality of the generated content [7], [10], [26-30], [35-39], [12], [40-47], [49], [50], [13], [51], [52], [54], [55], [58-60], [62-64], [14], [66], [67], [72-74], [77], [79], [80], [82], [84], [19], [86], [92], [95], [97-99], [102-105], [21], [106], [107], [110], [111], [113], [115], [117], [120], [125], [126], [23], [130], [132], [136-143], [24], [144-150], [25]. Researchers expressed apprehension about relying on ChatGPT for accurate and in-depth scientific knowledge. ...
... ChatGPT provides superficial knowledge and has been reported to produce very well constructed responses that are completely falsified and misleading, known as "hallucinations" [5], [13], [20-22], [29], [42], [63], [67], [68], [76], [77], [80], [89], [93], [97], [99], [102], [104], [110], [135], [138], [140-143], [151-153]. One very alarming concern is that the tool tends to support such falsified responses with citations that are themselves fake and non-existent [19], [20], [22], [24], [28], [29], [34], [39], [41], [43], [45], [52], [53], [59], [63], [64], [69], [72], [77], [81], [84], [93], [102], [103], [110], [117], [135], [137], [138], [141-144], [149-152], [154-157]. For example, in one study, of the 23 references provided by ChatGPT, only 14 were accurate, 6 appeared to have been completely made up, and 4 existed but were attributed to the wrong author [158]. ...
... This is thought to lead to the rapid production of low-quality scientific articles, highlighting the potential for a decline in the overall quality of research output due to the automated nature of ChatGPT [14], [42], [176], [180], which will facilitate and enhance the growth of predatory journals and paper mills [51], [63], [86], [110], [176], [180]. Other issues surrounding privacy and security [24], [71], [106], [130], [139], [174], and transparency, credibility, and validity were also raised as concerns [17], [19], [21], [24], [41], [67], [70], [71], [93], [106], [110], [114-116], [130], [136], [139], [140], [142], [166], with researchers expressing doubts about the up-to-date nature of the information provided by ChatGPT [7], [17], [21], [34], [63], [73], [79], [110], [111], [142], [157], [180], since it is trained on data only up to November 2021. ...
Preprint
Full-text available
Background ChatGPT has emerged as a valuable tool for enhancing scientific writing. It is the first openly available Large Language Model (LLM) with unrestricted access to its capabilities. ChatGPT has the potential to alleviate researchers’ workload and enhance various aspects of research, from planning to execution and presentation. However, due to the rapid growth of publications and diverse opinions surrounding ChatGPT, a comprehensive review is necessary to understand its benefits, risks, and safe utilization in scientific research. This review aims to provide a comprehensive overview of the topic by extensively examining existing literature on the utilization of ChatGPT in academic research. The goal is to gain insights into the potential benefits and risks of using ChatGPT in scientific research, exploring secure and efficient methods for its application while identifying potential pitfalls to minimize negative consequences. Method The search was conducted in PubMed/MEDLINE, SCOPUS, and Google Scholar, yielding a total of 1279 articles, and concluded on April 23rd, 2023. After full screening of titles/abstracts and removal of duplicates and irrelevant articles, a total of 181 articles were included for analysis. Information collected included publication details, purposes, benefits, risks, and recommendations regarding ChatGPT’s use in scientific research. Results The majority of existing literature consists of editorials expressing thoughts and concerns, followed by original research articles analyzing ChatGPT’s performance in scientific research. The most significant advantage of using ChatGPT in scientific writing is its ability to expedite the writing process, enabling researchers to draft their work more efficiently. It also proves beneficial in improving writing style and proofreading by offering suggestions for sentence structure, grammar, and overall clarity.
Additional benefits identified include support in data analysis, the formulation of protocols for clinical trials, and the design of scientific studies. Concerns mainly revolve around the accuracy and superficiality of the generated content, leading to what is referred to as “hallucinations.” Researchers have also expressed concerns about the tool providing citations to nonexistent sources. Other concerns discussed include authorship and plagiarism issues, accountability, copyright considerations, potential loss of diverse writing styles, privacy and security, transparency, credibility, validity, presence of bias, and the potential impact on scientific progress, such as a decrease in groundbreaking discoveries. Conclusion ChatGPT has the potential to revolutionize scientific writing as a valuable tool for researchers. However, it cannot replace human expertise and critical thinking. Researchers must exercise caution, ensuring the generated content complements their own knowledge. Ethical standards should be upheld, involving knowledgeable human researchers to avoid biases and inaccuracies. Collaboration among stakeholders and training on AI technology are essential for identifying best practices in LLMs use and maintaining scientific integrity.
... With the largest number of publications, academics in the field of medicine responded quickly to the use of this technology. They had foreseen the potential of ChatGPT as a writing aid for healthcare academics and professionals [55,57,58,86,117]. It may be utilized to generate research ideas [105,106], write parts of a research paper [1,57,58,86], construct a draft of a paper, proofread a paper [1], search for and write a literature review from an outline, and manage the references [1,55]. ...
... They had foreseen the potential of ChatGPT as a writing aid for healthcare academics and professionals [55,57,58,86,117]. It may be utilized to generate research ideas [105,106], write parts of a research paper [1,57,58,86], construct a draft of a paper, proofread a paper [1], search for and write a literature review from an outline, and manage the references [1,55]. These findings provide further context for the ten most frequently occurring words shown in Table 2. ...
Article
Full-text available
This study employed text mining analytics to determine the perceptions of academics (e.g., scholars, teachers, educators, and researchers) regarding ChatGPT in research writing. Toward this goal, research articles relating to the use of ChatGPT in research writing were collected from Scopus and Google Scholar. Using keywords and the criteria set forth, 86 peer- and non-peer-reviewed articles published between 2022 and 2023 were selected. All except one paper were published in a scientific journal. It was shown that ChatGPT is a valuable tool for content generation and could provide quick feedback from a text-based inquiry. Support and content generation, publication ethics, and the duality of ChatGPT were the three themes discovered using topic modeling. The top concern of academics was that it could facilitate plagiarism. Overall, the sentiment score was positive, implying that the academic community has a favorable perception of this novel technology. Recommendations and implications are offered.
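The word-frequency step behind findings like the "ten most frequently occurring words" in studies of this kind can be approximated with a few lines of standard-library Python. The tokeniser and the stopword list below are illustrative assumptions, not the study's actual text-mining pipeline:

```python
from collections import Counter
import re

# Hypothetical stopword list for illustration; real pipelines use larger ones.
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "for", "it", "with"}

def top_words(texts, n=10):
    """Return the n most frequent non-stopword tokens across a corpus."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)
```

Topic modeling and sentiment scoring, as used in the study, would sit on top of a tokenisation step like this one.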
... This serves as an appropriate example of recycled powder processing and analysis using the knowledge graph developed in this study. Finally, ChatGPT [6] and BERT training [7] are implemented to enhance the knowledge graph's accuracy. ...
Article
Full-text available
Research on manufacturing components for electric vehicles plays a vital role in their development. Furthermore, significant advancements in additive manufacturing processes have revolutionized the production of various parts. By establishing a system that enables the recovery, processing, and reuse of metal powders essential for additive manufacturing, we can achieve sustainable production of electric vehicles. This approach holds immense importance in terms of reducing manufacturing costs, expanding the market, and safeguarding the environment. In this study, we developed an additive manufacturing system for recycled metal powders, encompassing powder variety, properties, processing, manufacturing, component properties, and applications. This system was used to create a knowledge graph providing a convenient resource for researchers to understand the entire procedure from recycling to application. To improve the graph’s accuracy, we employed ChatGPT and BERT training. We also demonstrated the knowledge graph’s utility by processing recycled 316L stainless steel powders and assessing their quality through image processing. This experiment serves as a practical example of recycling and analyzing powders using the established knowledge graph.
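A knowledge graph of the kind described, linking powders, processes, and properties, is at heart a set of subject-relation-object triples that can be queried by pattern. A minimal sketch, with invented example triples (the paper's actual schema is not given):

```python
# Triple store sketch: each entry is (subject, relation, object).
# The entities and relations below are hypothetical examples.
triples = [
    ("316L stainless steel powder", "is_a", "recycled metal powder"),
    ("recycled metal powder", "used_in", "additive manufacturing"),
    ("316L stainless steel powder", "assessed_by", "image processing"),
]

def query(triples, subject=None, relation=None, obj=None):
    """Return all triples matching the given pattern; None fields match anything."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]
```

In the study, ChatGPT and BERT are used to improve the accuracy of a graph like this; the querying itself is the easy part.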
... Researchers have also questioned the role of generative AI in medical writing (Biswas, 2023b;Hill-Yardin et al., 2023;Kitamura, 2023) recognising its potential, but at the cost of a significant risk, due to inaccurate and false information. These risks are not unique to generative AI-produced content: humans make errors too, but the systematic nature of the errors within AI products gives the largest cause for concern. ...
... This might indicate the technology's current inability to interpret contested and complex arguments and also might reflect the challenges coaching scholars have grappled with themselves over this very issue (Bachkirova and Kauffman, 2009). It must also be acknowledged that, as with Biswas (2023b), Hill-Yardin et al. (2023) and Kitamura (2023), much of the attribution for propositions P1 and P5 not being supported in this present research is due to GPT-4 responses containing fictitious and inaccurate content. It is this aspect which is of most concern. ...
Article
Full-text available
Purpose This study aimed to evaluate the potential of artificial intelligence (AI) as a tool for knowledge synthesis, the production of written content and the delivery of coaching conversations. Design/methodology/approach The research employed the use of experts to evaluate the outputs from ChatGPT's AI tool in blind tests to review the accuracy and value of outcomes for written content and for coaching conversations. Findings The results from these tasks indicate that there is a significant gap between comparative search tools such as Google Scholar, specialist online discovery tools (EBSCO and PsycNet) and GPT-4's performance. GPT-4 lacks the accuracy and detail which can be found through other tools, although the material produced has strong face validity. It argues organisations, academic institutions and training providers should put in place policies regarding the use of such tools, and professional bodies should amend ethical codes of practice to reduce the risks of false claims being used in published work. Originality/value This is the first research paper to evaluate the current potential of generative AI tools for research, knowledge curation and coaching conversations.
... Here are the answers: accessibility problems (the limited number of higher-education institutions, and inequality between cities and areas outside them), unsatisfactory quality of education, graduates lacking the skills to face the dynamics of work and business, low levels of research and innovation, and a lack of industry involvement in the educational production process. These answers are on a par with the analysis of a professor in the true sense (a human being holding the title of professor) [6]. ...
... For the question "What problems do university lecturers in Indonesia face?", these were ChatGPT's five answers: high workload, lack of time for research, poor research support and facilities, limited professional development, and increasing pressure from publication and performance assessment (6). Once again, the author was impressed by the quality of ChatGPT's answers. ...
Article
Full-text available
In April 2023 there was disorder in the world of Indonesian higher education when the Indonesian Ministry of Education and Culture asked Indonesian lecturers to fill in credit scores in an information system. The tight deadline and protracted lecturer-administration issues led to resistance from lecturers in April 2023, to which the Indonesian Ministry of Education and Culture responded by cancelling the policy. This phenomenon is interesting to study using the theory of hyper-capitalism and the knowledge economy of Philip W. Graham, a philosopher grounded in media-political-economy studies. In his series of theories, Graham developed a method of analysis that he called discourse-historical, in which communication, language, and the mass media play a role in a mediation process that produces new values. The research shows that the mass media have been used, and have a major role, in the mediation process related to the Indonesian-style knowledge economy to create new values. However, a question remains whether the Indonesian Ministry of Education and Culture will produce a teaching administration policy that will truly make it easier for Indonesian lecturers to work in education and, in turn, make the dynamics of Indonesian education even better.
... We -as scientists -may therefore wonder: how will the advent of LLMs affect the practice of science? Finding answers to this question is urgent as LLMs are already starting to permeate the academic landscape [9][10][11][12][13][14][15][16][17] . For instance, in 2022, MetaAI released the first science-specific LLM (under the name Galactica) aimed to support researchers in the process of knowledge discovery 18 . ...
Preprint
Full-text available
Large language models (LLMs) are being increasingly incorporated into scientific workflows. However, we have yet to fully grasp the implications of this integration. How should the advent of large language models affect the practice of science? For this opinion piece, we have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate. Schulz et al. make the argument that working with LLMs is not fundamentally different from working with human collaborators, while Bender et al. argue that LLMs are often misused and over-hyped, and that their limitations warrant a focus on more specialized, easily interpretable tools. Marelli et al. emphasize the importance of transparent attribution and responsible use of LLMs. Finally, Botvinick and Gershman advocate that humans should retain responsibility for determining the scientific roadmap. To facilitate the discussion, the four perspectives are complemented with a response from each group. By putting these different perspectives in conversation, we aim to bring attention to important considerations within the academic community regarding the adoption of LLMs and their impact on both current and future scientific practices.
... The development of artificial intelligence-powered language-model chatbots is an emerging field in medicine and surgery. These new generations of chatbots may respond to simple-to-complicated questions in all fields of medicine and research and, consequently, are considered theoretical adjunctive clinical and research tools [1,2]. To date, the studies investigating the accuracy of the Chatbot Generative Pre-trained Transformer (ChatGPT, OpenAI, CA, USA) in theoretical knowledge, medical school examinations, and clinical vignettes have reported encouraging results [3-5]. ...
Article
Full-text available
Objectives To evaluate the ChatGPT-4 performance in oncological board decisions. Methods Twenty medical records of patients with head and neck cancer were evaluated by ChatGPT-4 for additional examinations, management, and therapeutic approaches. The ChatGPT-4 propositions were assessed with the Artificial Intelligence Performance Instrument. The stability of ChatGPT-4 was evaluated through regenerated answers at a 1-day interval. Results ChatGPT-4 provided adequate explanations for cTNM staging in 19 cases (95%). ChatGPT-4 proposed a significantly higher number of additional examinations than practitioners (72 versus 103; p = 0.001). ChatGPT-4 indications of endoscopy–biopsy, HPV research, ultrasonography, and PET–CT were consistent with the oncological board decisions. The therapeutic propositions of ChatGPT-4 were accurate in 13 cases (65%). Most additional examination and primary treatment propositions were consistent throughout the regenerated response process. Conclusions ChatGPT-4 may be an adjunctive theoretical tool in simple oncological board decisions.
... You can ask GPT-4 anything and receive human-like replies to your questions or requests [5]. This includes manuscript writing [6] and assessment of grammar, spelling, and style [7]. While there is growing interest in using AI-synthesized text for academic purposes [8], AI-synthesized text could be a potential tool for researchers in the development of their articles [3]. ...
... It can produce long answers to a single short question, following the rules of academic writing in different languages. After its release, many researchers shifted their focus to studying ChatGPT and its potential applications in various fields, such as healthcare [17-19], the tourism industry [20], academic integrity [21], education [22,23], programming bugs [24], dental medicine [25], global warming [26], medical education [27], and future development [28-33]. ...
Preprint
Full-text available
The Chat Generative Pre-training Transformer (GPT), also known as ChatGPT, is a powerful generative AI model that can simulate human-like dialogues across a variety of domains. However, this popularity has attracted the attention of malicious actors who exploit ChatGPT to launch cyberattacks. This paper examines the tactics that adversaries use to leverage ChatGPT in a variety of cyberattacks. Attackers pose as regular users and manipulate ChatGPT’s vulnerability to malicious interactions, particularly in the context of cyber assault. The paper presents illustrative examples of cyberattacks that are possible with ChatGPT and discusses the realm of ChatGPT-fueled cybersecurity threats. The paper also investigates the extent of user awareness of the relationship between ChatGPT and cyberattacks. A survey of 253 participants was conducted, and their responses were measured on a three-point Likert scale. The results provide a comprehensive understanding of how ChatGPT can be used to improve business processes and identify areas for improvement. Over 80% of the participants agreed that cyber criminals use ChatGPT for malicious purposes. This finding underscores the importance of improving the security of this novel model. Organizations must take steps to protect their computational infrastructure. This analysis also highlights opportunities for streamlining processes, improving service quality, and increasing efficiency. Finally, the paper provides recommendations for using ChatGPT in a secure manner, outlining ways to mitigate potential cyberattacks and strengthen defenses against adversaries.
... ChatGPT shines in helping researchers to collate and focus on pertinent scientific data drawn from a plethora of sources, guiding them in formulating well-grounded arguments and discussions. This ensures a higher quality manuscript that can be completed in a fraction of the time, offering researchers the luxury of focusing more on analysis rather than the arduous data collection task [4]. ...
Article
Full-text available
In the ever-evolving realm of scientific research, this letter underscores the vital role of ChatGPT as an invaluable ally in manuscript creation, focusing on its remarkable grammar and spelling error correction capabilities. Furthermore, it highlights ChatGPT's efficacy in expediting the manuscript preparation process by streamlining the collection and highlighting critical scientific information. By elucidating the aim of this letter and the multifaceted benefits of ChatGPT, we aspire to illuminate the path toward a future where scientific writing achieves unparalleled efficiency and precision.
... It is quicker to state what information ChatGPT will refuse to give: anything relating to "the manufacture or illegal use of weapons or drugs, illegal or violent activities, activities that may cause harm to people or property, or that may be used for criminal or terrorist activities" or that contradicts current law; beyond that, the robot is quite inspired, since, within seconds, ChatGPT can produce a reasoned argument on the need to establish a minimum income, a cover letter, poems, a reminder letter for an unpaid invoice, a chapter-by-chapter summary of the political constitution of Peru, an imaginary dialogue between Sartre and Aristotle, a pastiche of Aesop's Fables. It can explain how quantum entanglement works, give an apple-pie recipe, and write an essay on the relationship between the unconscious and free will; even more impressively, if asked to write a computer program to perform certain tasks, it will provide commented code (Hill et al., 2023). ...
Book
This book aims to categorize and define the approaches of scientific research in an engaging and effective way, so as to make research methodology understandable. It conveys the importance of spreading good, responsible conduct in research, within a framework of respect for the different research methods used in studies across diverse fields, which involve observation techniques and prior consultation of the state of the art, together with what is not usually said colloquially about the conduct of research activities, in a clear and engaging contribution to scientific production.
... Additional challenges emerge due to limitations in both current software and trained professionals in detecting when AI-generated content is present (Weber-Wulff et al., 2023). The social research field regarding AI and ChatGPT is still immature, but studies investigating the influence of AI technologies in various domains, such as scientific publishing, higher education policy development, and academic integrity, have been conducted (Perkins, 2023; Hill-Yardin et al., 2023). ...
Article
Full-text available
This study analyses the discursive representation of Artificial Intelligence (AI) and ChatGPT in UK news media headlines from January to May 2023. A total of 671 headlines were collected and analysed using inductive thematic analysis, theoretically informed by Agenda-Setting theory and Framing theory. The results offer an initial picture of how recent technological advances in the fields of AI have been communicated to the public. The results show that there is a complex and at times paradoxical portrayal of AI in general and ChatGPT as well as other Large Language Models (LLMs), oscillating between promising potential for solving societal challenges while simultaneously warning of imminent and systemic dangers. Further to this, the analysis provides evidence for the claim that media representations of AI are often sensationalised and tend to focus more on warnings and caution to readers, as only a minority of headlines were related to helpful, useful, or otherwise positive applications of AI, ChatGPT, and other Large Language Models (LLMs). These findings underscore the pivotal role of media discourse in shaping public perceptions of AI. The study prompts reflections on news media practices in the United Kingdom and encourages future research to further examine the influence of social, cultural, and political contexts on AI representation during a period of technological change. This research provides relevant insights for policymakers, AI developers, and educators to support public engagement with AI technologies.
... Research also shows that AI text analysis carries the risk of propagating biases and errors from training datasets due to its reliance on machine learning from such data. The lack of transparency in complex neural networks aggravates this concern by hindering the ability to provide clear explanations and accountability [127-129]. However, cautious use under constant human supervision, as carried out here, to avoid AI's well-known and typical "hallucinations" [130-134], can be helpful in mixed and integrated approaches, for example, to facilitate the analysis of vast quantities of textual data. ...
Article
Full-text available
The emergence of glucagon-like peptide-1 receptor agonists (GLP-1 RAs; semaglutide and others) now promises effective, non-invasive treatment of obesity for individuals with and without diabetes. Social media platforms’ users started promoting semaglutide/Ozempic as a weight-loss treatment, and the associated increase in demand has contributed to an ongoing worldwide shortage of the drug associated with levels of non-prescribed semaglutide intake. Furthermore, recent reports emphasized some GLP-1 RA-associated risks of triggering depression and suicidal thoughts. Consistent with the above, we aimed to assess the possible impact of GLP-1 RAs on mental health as being perceived and discussed in popular open platforms with the help of a mixed-methods approach. Reddit posts yielded 12,136 comments, YouTube videos 14,515, and TikTok videos 17,059, respectively. Out of these posts/entries, most represented matches related to sleep-related issues, including insomnia (n = 620 matches); anxiety (n = 353); depression (n = 204); and mental health issues in general (n = 165). After the initiation of GLP-1 RAs, losing weight was associated with either a marked improvement or, in some cases, a deterioration, in mood; increase/decrease in anxiety/insomnia; and better control of a range of addictive behaviors. The challenges of accessing these medications were a hot topic as well. To the best of our knowledge, this is the first study documenting if and how GLP-1 RAs are perceived as affecting mood, mental health, and behaviors. Establishing a clear cause-and-effect link between metabolic diseases, depression and medications is difficult because of their possible reciprocal relationship, shared underlying mechanisms and individual differences. Further research is needed to better understand the safety profile of these molecules and their putative impact on behavioral and non-behavioral addictions.
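The keyword-match counts reported above (e.g. insomnia, n = 620) suggest a simple matching step over the collected comments. A hedged sketch, assuming case-insensitive substring matching (the study's actual matching rules are not specified):

```python
def count_matches(comments, keywords):
    """Count how many comments mention each keyword (case-insensitive substring)."""
    counts = {k: 0 for k in keywords}
    for comment in comments:
        low = comment.lower()
        for k in keywords:
            if k in low:
                counts[k] += 1
    return counts
```

A real pipeline would likely add stemming and synonym lists (e.g. counting "can't sleep" under sleep-related issues), which plain substring matching misses.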
... Chatbots are commonly used in various marketing platforms, websites, or messaging services [1]. The Chatbot Generative Pre-trained Transformer (ChatGPT) was launched on November 30, 2022 by OpenAI (San Francisco, USA) to use algorithms to respond to simple-to-complicated questions [2]. Some reports have shown that ChatGPT is able to pass law, business, or medical school exams [3], and it should be useful in helping practitioners with clinical practice, research, or administrative tasks [4,5]. ...
Article
Full-text available
To study the performance of ChatGPT in the management of laryngology and head and neck (LHN) cases, the history and clinical examination findings of patients consulting at the Otolaryngology-Head and Neck Surgery department were presented to ChatGPT, which was interrogated for differential diagnosis, management, and treatment. The ChatGPT performance was assessed by two blinded board-certified otolaryngologists using the following items of a composite score and the Ottawa Clinic Assessment Tool: differential diagnosis; additional examination; and treatment options. The complexity of clinical cases was evaluated with the Amsterdam Clinical Challenge Scale test. Forty clinical cases were submitted to ChatGPT, accounting for 14 (35%), 12 (30%), and 14 (35%) easy, moderate, and difficult cases, respectively. ChatGPT indicated a significantly higher number of additional examinations compared to practitioners (p = 0.001). There was significant agreement between practitioners and ChatGPT for the indication of some common examinations (audiometry, ultrasonography, biopsy, gastrointestinal endoscopy, or videofluoroscopy). ChatGPT never indicated some important additional examinations (PET–CT, voice quality assessment, or impedance-pH monitoring). ChatGPT showed its highest performance in the proposition of the primary (90%) or the most plausible differential diagnoses (65%), and the therapeutic options (60–68%). The ChatGPT performance in the indication of additional examinations was the lowest. ChatGPT is a promising adjunctive tool in LHN practice, providing extensive documentation about disease-related additional examinations, differential diagnoses, and treatments. ChatGPT is more efficient in diagnosis and treatment than in the selection of the most adequate additional examination.
... Although ChatGPT is a powerful tool, there are still several limitations to be taken into account when using it for book reviewing [13], [14], [18]. These limitations have been considered in some approaches to avoid academic misconduct [19]. ...
Conference Paper
Full-text available
This study evaluates the potential of ChatGPT-4, an artificial intelligence language model developed by OpenAI, as an editing tool for Spanish literary and academic books. The need for efficient and accessible reviewing and editing processes in the publishing industry has driven the search for automated solutions. ChatGPT-4, being one of the most advanced language models, offers notable capabilities in text comprehension and generation. In this study, the features and capabilities of ChatGPT-4 are analyzed in terms of grammatical correction, stylistic coherence, and linguistic enrichment of texts in Spanish. Tests were conducted with 100 literary and academic texts, where the edits made by ChatGPT-4 were compared to those made by expert human reviewers and editors. The results show that while ChatGPT-4 is capable of making grammatical and orthographic corrections with high accuracy and in a very short time, it still faces challenges in areas such as context sensitivity, bibliometric analysis, deep contextual understanding, and interaction with visual content like graphs and tables. However, it is observed that collaboration between ChatGPT-4 and human reviewers and editors can be a promising strategy for improving efficiency without compromising quality. Furthermore, the authors consider that ChatGPT-4 represents a valuable tool in the editing process, but its use should be complementary to the work of human editors to ensure high-caliber editing in Spanish literary and academic books.
... Such an approach has already been taken, and there are several evaluations of the performance of AI chatbots in scientific writing, but most of them focus on medicine and similar fields [49,51,[56][57][58][59][60][61][62][63][70][71][72]. We are not aware of any such test designed specifically for humanities. ...
Article
Full-text available
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software.
... ChatGPT is well known for generating text-based content [6]. However, modern websites usually include images or videos, which GPT cannot generate directly. ...
Conference Paper
In the present information technology era, the desire for personalized individual websites is notably significant [1]. The creation of such personal websites necessitates proficient knowledge of HTML-based programming, a skill predominantly possessed by professional programmers [2]. As an alternative to this limited option, individuals often resort to purchasing websites from service providers, incurring substantial costs and time investments. In order to address this widespread demand for personalized websites, we propose the implementation of an AI program capable of automatically generating websites based on user instructions [3]. Such a program would considerably reduce the financial and temporal expenditures associated with purchasing pre-made websites.
... It can produce long answers to a single short question, following the rules of academic writing in different languages. After its release, many researchers shifted their focus to studying ChatGPT and its potential applications in various fields, such as healthcare [9][10][11], the tourism industry [12], academic integrity [13], education [14,15], programming bugs [16], dental medicine [17], global warming [18], medical education [19], and future development [20][21][22][23][24][25]. ...
Preprint
Full-text available
The Chat Generative Pre-training Transformer (GPT), also known as ChatGPT, is a powerful generative AI model that can simulate human-like dialogues across a variety of domains. However, this popularity has attracted the attention of malicious actors who exploit ChatGPT to launch cyberattacks. This paper examines the tactics that adversaries use to leverage ChatGPT in a variety of cyberattacks. Attackers pose as regular users and manipulate ChatGPT’s vulnerability to malicious interactions, particularly in the context of cyber assault. The paper presents illustrative examples of cyberattacks that are possible with ChatGPT and discusses the realm of ChatGPT-fueled cybersecurity threats. The paper also investigates the extent of user awareness of the relationship between ChatGPT and cyberattacks. A survey of 253 participants was conducted, and their responses were measured on a three-point Likert scale. The results provide a comprehensive understanding of how ChatGPT can be used to improve business processes and identify areas for improvement. Over 80% of the participants agreed that cyber criminals use ChatGPT for malicious purposes. This finding underscores the importance of improving the security of this novel model. Organizations must take steps to protect their computational infrastructure. This analysis also highlights opportunities for streamlining processes, improving service quality, and increasing efficiency. Finally, the paper provides recommendations for using ChatGPT in a secure manner, outlining ways to mitigate potential cyberattacks and strengthen defenses against adversaries.
... In the realm of research, they can be used to generate summaries of complex papers, abstracts, or literature reviews. This function can support researchers by simplifying the process of digesting extensive amounts of information (Hill-Yardin et al., 2023; Sarrison, 2023). ...
Article
Full-text available
As the application of Artificial Intelligence (AI) continues to permeate various sectors, the educational landscape is no exception. Several AI in education (AIEd) applications, like chatbots, present an intriguing array of opportunities and challenges. This paper provides an in-depth exploration of the use and role of AI in education and research, focusing on the benefits (the good) and potential pitfalls (the bad and ugly) associated with the deployment of chatbots and other AIEDs. The opportunities explored include personalised learning, facilitation of administrative tasks, enriched research capabilities, and the provision of a platform for collaboration. These advantages are balanced against potential downsides, such as job displacement, misinformation, plagiarism, and the erosion of human connection. Ethical considerations, particularly concerning data privacy, bias reinforcement, and the digital divide, are also examined. Conclusions drawn from this analysis stress the importance of striking a balance between AI capabilities and human elements in education, as well as developing comprehensive ethical frameworks for AI deployment in educational contexts.
... Although the software is not capable of high-level critical thinking, its progress is nonetheless useful for identifying concepts that converge across the numerous bibliographic sources available on the internet (Hill-Yardin et al., 2023). At the same time, its emergence motivates the need to evolve digital methods and techniques for detecting content written by generative artificial intelligence such as ChatGPT (Hammad, 2023). ...
Preprint
Full-text available
Abstract [PREPRINT]. The advance of Artificial Intelligence (AI) over the last three decades can have a positive or negative impact on the development of societies. At the beginning of 2023, in the field of higher education, this discipline began to resonate with the evolution of ChatGPT. This research has two particular purposes: first, to put this technology to the test by entering into its text prompt a series of questions related to the originality of its output and to academic plagiarism, to be examined afterwards; second, to analyze and discuss its effect on the processes of academic training and production in higher education. Among the results obtained, it was confirmed that this technology is not free of errors, so it is recommended that it be used with caution. Likewise, it is concluded that its negative impact can be neutralized if critical, reflective, values-based training is fostered among students, in order to promote the awareness needed to use this type of technology to their advantage without any harm. Finally, the results of this research have implications for universities seeking to transform their educational and academic models in line with the current demands of knowledge societies.
... The above-cited position statements and regulations show that the international editorial community is aware that the use of AI chatbots to develop semantically rich and grammatically correct texts for submissions poses a large challenge for journal editors, because of the difficulties associated with (i) judging the originality of submissions, (ii) detecting purely AI-generated text and images, (iii) shaping text patterns to reduce similarity and plagiarism, and (iv) the uncertainty created concerning the validity of the propositions and proposed future work. In view of the functional development of AI tools, it is as yet undecided how AI errors, ambiguities, and plagiarism should be detected and filtered out reliably, beyond the current direct-rejection policy preferred by some publishers with regard to the outputs of generative pre-trained transformers (Hill-Yardin et al., 2023). Should an artificial intelligence-based textual document developer software control the goal of its own content generation process, and check the outcome for compliance, trustworthiness, and usefulness? ...
Article
Full-text available
This Extended Editorial has been compiled by the members of the Editorial Board to celebrate the 25th anniversary of the establishment of the Journal of Integrated Design and Process Science, which operates as the Transactions of the Society for Process and Design Science. The paper is divided into three parts. The first part provides a detailed overview of the preliminaries, the objectives, and the periods of operation. It also includes a summary of the current application-oriented professional fields of interest, which are: (i) convergence mechanisms of creative scientific disciplines, (ii) convergence of artificial intelligence, team and health science, (iii) convergence concerning next-generation cyber-physical systems, and (iv) convergence in design and engineering education. The second part includes invited papers, which exemplify domains within the four fields of interest and also represent good examples of science communication. Short synopses of the contents of these representative papers are included. The third part takes into consideration the major changes in scientific research and the academic publication arena, circumscribes the mission and vision as formulated by the current Editorial Board, and elaborates on the planned strategic exploration and utilization domains of interest.
... A growing body of research has been examining the effects of ChatGPT on education and academia in general. At the time of writing, there are two discrete strands of thinking: one that considers ChatGPT as a potential device to enhance learning [35,[37][38][39][40][41][42][43][44] and one that considers its effect on assignment writing and associated student misconduct [32,40,41,[45][46][47][48][49], as well as the integrity of academic writing and publishing in general [50][51][52][53][54][55][56][57][58][59]. ...
Article
Full-text available
The public release of ChatGPT, a generative artificial intelligence language model, caused widespread public interest in its abilities but also concern about the implications of the application for academia, depending on whether it was deemed benevolent (e.g., supporting analysis and the simplification of tasks) or malevolent (e.g., assignment writing and academic misconduct). While ChatGPT has been shown to provide answers of sufficient quality to pass some university exams, its capacity to write essays that require an exploration of value concepts is unknown. This paper presents the results of a study in which ChatGPT-4 (released May 2023) was tasked with writing a 1500-word essay discussing the nature of the values used in the assessment of cultural heritage significance. Based on an analysis of 36 iterations, ChatGPT wrote essays of limited length, reaching only about 50% of the stipulated word count, that were primarily descriptive and without any depth or complexity. The concepts, which are often flawed and suffer from inverted logic, are presented in an arbitrary sequence with limited coherence and without any defined line of argument. Given that it is a generative language model, ChatGPT often splits concepts and uses one or more words to develop tangential arguments. While ChatGPT provides references as tasked, many are fictitious, albeit with plausible authors and titles. At present, ChatGPT has the ability to critique its own work but seems unable to incorporate that critique in a meaningful way to improve a previous draft. Setting aside conceptual flaws such as inverted logic, several of the essays could possibly pass as a junior high school assignment but fall short of what would be expected in senior school, let alone at a college or university level.
... A chatbot is defined as an electronic system that simulates conversations by responding to keywords or phrases. 1 The Chatbot Generative Pre-trained Transformer (ChatGPT) is a new artificial intelligence-powered language model that was developed by OpenAI to use algorithms to respond to simple-to-complicated questions. 2 Version 4.0, ChatGPT-4, was able to pass exams from medical schools, 3 and could help the physician in consultation, scientific, and administrative tasks. [4][5][6] To date, there are no publications about the usefulness of ChatGPT-4 in the editing of scientific manuscripts written by nonnative English researchers. ...
Article
ChatGPT is a new artificial intelligence-powered language model chatbot able to help otolaryngologists in clinical practice and research. We investigated the ability of ChatGPT-4 in the editing of a manuscript in otolaryngology. Four papers were written by a nonnative English otolaryngologist and edited by a professional editing service. ChatGPT-4 was used to detect and correct errors in the manuscripts. Of the 171 errors in the manuscripts, ChatGPT-4 detected 86 (50.3%), including vocabulary (N = 36), determiner (N = 27), preposition (N = 24), capitalization (N = 20), and number (N = 11) errors. ChatGPT-4 proposed appropriate corrections for 72 (83.7%) errors, while some errors were poorly detected (e.g., capitalization [5%] and vocabulary [44.4%] errors). ChatGPT-4 claimed to change something that was already there in 82 cases. ChatGPT demonstrated usefulness in identifying some types of errors but not all. Nonnative English researchers should be aware of the current limits of ChatGPT-4 in the proofreading of manuscripts.
... Another useful application is acting as a virtual tutor; it can break down a complex concept into an easier-to-understand language [21,22]. For research projects, ChatGPT can not only aid in literature review but can also generate innovative ideas in brainstorming sessions [23,24]. In computer science, it can aid students by debugging their code and suggesting programming solutions [25]. ...
Article
Full-text available
ChatGPT is an emerging tool that can be employed in many activities including in learning/teaching in universities. Like many other tools, it has its benefits and its drawbacks. If used properly, it can improve learning, and if used irresponsibly, it can have a negative impact on learning. The aim of this research is to study how ChatGPT can be used in academia to improve teaching/learning activities. In this paper, we study students’ opinions about how the tool can be used positively in learning activities. A survey is conducted among 430 students of an MSc degree in computer science at the University of Hertfordshire, UK, and their opinions about the tool are studied. The survey tries to capture different aspects in which the tool can be employed in academia and the ways in which it can harm or help students in learning activities. The findings suggest that many students are familiar with the tool but do not regularly use it for academic purposes. Moreover, students are skeptical of its positive impacts on learning and think that universities should provide more vivid guidelines and better education on how and where the tool can be used for learning activities. The students’ feedback responses are analyzed and discussed and the authors’ opinions regarding the subject are presented. This study shows that ChatGPT can be helpful in learning/teaching activities, but better guidelines should be provided for the students in using the tool.
... In November 2022, OpenAI (San Francisco, USA) launched the Chatbot Generative Pre-trained Transformer (ChatGPT), which uses algorithms to respond to questions posed by users [2]. Since then, many studies have been conducted to assess the performance of ChatGPT in different areas, such as law, business, or medical school exams, scientific manuscript revisions, or in some clinical fields [3][4][5]. Given its large database, most experts agree on the potential usefulness of ChatGPT as an adjunctive instrument in clinical practice, research, or administrative tasks [5]. ...
Article
Full-text available
Objectives: To evaluate the reliability and validity of the Artificial Intelligence Performance Instrument (AIPI). Methods: Medical records of patients consulting in otolaryngology were evaluated by physicians and ChatGPT for differential diagnosis, management, and treatment. The ChatGPT performance was rated twice using AIPI within a 7-day period to assess test–retest reliability. Internal consistency was evaluated using Cronbach's α. Internal validity was evaluated by comparing the AIPI scores of the clinical cases rated by ChatGPT and 2 blinded practitioners. Convergent validity was measured by comparing the AIPI score with a modified version of the Ottawa Clinical Assessment Tool (OCAT). Interrater reliability was assessed using Kendall's tau. Results: Forty-five patients completed the evaluations (28 females). The AIPI Cronbach's alpha analysis suggested an adequate internal consistency (α = 0.754). The test–retest reliability was moderate-to-strong for items and the total score of AIPI (rs = 0.486, p = 0.001). The mean AIPI score of the senior otolaryngologist was significantly higher compared to the score of ChatGPT, supporting adequate internal validity (p = 0.001). Convergent validity reported a moderate and significant correlation between AIPI and modified OCAT (rs = 0.319; p = 0.044). The interrater reliability reported significant positive concordance between both otolaryngologists for the patient feature, diagnostic, additional examination, and treatment subscores as well as for the AIPI total score. Conclusions: AIPI is a valid and reliable instrument in assessing the performance of ChatGPT in ear, nose and throat conditions. Future studies are needed to investigate the usefulness of AIPI in medicine and surgery, and to evaluate the psychometric properties in these fields.
... This learning and skills development will not only be passed on to our student cohorts but also used to improve our practice and teaching, with applications in marking, report writing, and even session planning. [11][12][13] Summary: AI should be seen as a new tool, rather than a threat to our practice. AI will change how students study and will affect our teaching, learning and assessment. ...
Article
Artificial intelligence (AI), once a subject of science fiction, is now a tangible, disruptive force in teaching and learning. In an educational setting, generative large language models (LLM), such as OpenAI’s ChatGPT, perform and supplement tasks that usually require human thought, such as data analysis, understanding complex ideas, problem-solving, coding and producing written outputs. AI advances are moving quickly. From the emergence of ChatGPT 3.5 in November 2022, we have witnessed the arrival of other progressive language models, like OpenAI’s GPT-4, Google’s Bard AI and Microsoft’s Bing AI. Most recently, AIs gained the ability to access real-time information, analyse images and are becoming directly embedded in many applications.
... The tendency of ChatGPT to provide erroneous/inexistent scientific reference had been already reported in the form of case reports [8,9] and was listed among the possible risks in a systematic review regarding perspectives and concerns of AI in healthcare education, research, and practice [5]. The ChatGPT homepage itself disclaims: "May occasionally generate incorrect information". ...
Purpose: ChatGPT has gained popularity as a web application since its release in 2022. While artificial intelligence (AI) systems' potential in scientific writing is widely discussed, their reliability in reviewing literature and providing accurate references remains unexplored. This study examines the reliability of references generated by ChatGPT language models in the Head and Neck field. Methods: Twenty clinical questions were generated across different Head and Neck disciplines to prompt ChatGPT versions 3.5 and 4.0 to produce texts on the assigned topics. The generated references were categorized as "true," "erroneous," or "inexistent" based on congruence with existing records in scientific databases. Results: ChatGPT 4.0 outperformed version 3.5 in terms of reference reliability. However, both versions displayed a tendency to provide erroneous/non-existent references. Conclusions: It is crucial to address this challenge to maintain the reliability of scientific literature. Journals and institutions should establish strategies and good-practice principles in the evolving landscape of AI-assisted scientific writing.
... Used in several areas, such as medicine, this program can be a tool for recognizing some diseases, facilitating early diagnosis. ChatGPT has shown promise in the dissemination of knowledge, including scientific knowledge, since in most cases it presents correct answers in language accessible to the lay public [2]. NMOSD has been a mysterious condition since it was described more than a century ago by Eugène Devic, for whom it was originally named [3]. ...
Preprint
Full-text available
Introduction: Artificial intelligence (AI) has developed rapidly, and it has been used in medical practice. The Chat Generative Pre-Trained Transformer (ChatGPT) is a recently released open-access AI model that interacts with user inputs in a conversational manner. The objective of this work is to analyze the veracity of the information provided by the software and to compare the responses generated with the current medical literature. Methods: Several questions about neuromyelitis optica spectrum disorders (NMOSD) were sent to ChatGPT on June 19, 2023. We analyzed the veracity of the information it provided and its possible limitations in disseminating information to both physicians and patients. Results: The answers provided by ChatGPT were compatible with the current literature on the topic. With regard to diseases, unlike other websites, ChatGPT proved to be responsible in providing information, clarifying that, as AI software, it is not capable of providing accurate medical diagnoses; it therefore recommends consultation with a health professional. ChatGPT's responses are only as good as the data it is trained on. Conclusions: ChatGPT provided accurate responses and can now offer patients the ability to select and categorize the results of these queries, as well as to pre-specify the language complexity of the output text. We can only speculate about the next steps in the exponential growth of this technology, and how it will transform future care for neuromyelitis optica spectrum disorders.
... Both tweets and user queries contain public concerns about ChatGPT, and mining these latent concerns behind the texts is valuable for academia (Hill-Yardin et al., 2023) and for public daily life. ...
Preprint
The recently released artificial intelligence conversational agent, ChatGPT, has gained significant attention in academia and in everyday life. A multitude of early ChatGPT users eagerly explore its capabilities and share their opinions on it via social media. Both user queries and social media posts express public concerns regarding this advanced dialogue system. To mine public concerns about ChatGPT, a novel Self-Supervised neural Topic Model (SSTM), which formalizes topic modeling as a representation learning procedure, is proposed in this paper. Extensive experiments have been conducted on Twitter posts about ChatGPT and on queries asked by ChatGPT users. Experimental results demonstrate that the proposed approach extracts higher-quality public concerns with improved interpretability and diversity, surpassing the performance of state-of-the-art approaches.
... In recent years, there has been a rapid emergence of large language models (LLMs) like GPT-4 and ChatGPT 1 , which have demonstrated excellent zero-shot capabilities without the need for supervised fine-tuning. These models have shown promising performance in various traditional natural language processing tasks (Jiao et al., 2023b; Kasneci et al., 2023; Hill-Yardin et al., 2023; Šlapeta, 2023; Aydın and Karaarslan, ...). [Figure: example data in Pre-Instruction and Post-Instruction format; different blocks represent textual data from different fields, while the '+' symbol signifies the concatenation operation for the textual data.] ...
Preprint
Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization, through instruction fine-tuning. The fine-tuning data is generally sequentially concatenated from a specific task instruction, an input sentence, and the corresponding response. Considering the locality modeled by the self-attention mechanism of LLMs, these models face the risk of instruction forgetting when generating responses for long input sentences. To mitigate this issue, we propose enhancing the instruction-following capability of LLMs by shifting the position of task instructions after the input sentences. Theoretical analysis suggests that our straightforward method can alter the model's learning focus, thereby emphasizing the training of instruction-following capabilities. Concurrently, experimental results demonstrate that our approach consistently outperforms traditional settings across various model scales (1B / 7B / 13B) and different sequence generation tasks (translation and summarization), without any additional data or annotation costs. Notably, our method significantly improves the zero-shot performance on conditional sequence generation, e.g., up to 9.7 BLEU points on WMT zero-shot translation tasks.
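The Pre-Instruction and Post-Instruction layouts described in this abstract can be illustrated as simple string concatenations. The following is a minimal sketch, not the paper's actual implementation; the function names, the newline separator, and the example sentences are assumptions introduced for illustration.

```python
# Sketch of the two fine-tuning data layouts discussed above.
# Function names, the "\n" separator, and the example text are
# illustrative assumptions, not taken from the paper's code.

def pre_instruction(instruction: str, source: str, response: str) -> str:
    """Conventional layout: the task instruction precedes the input."""
    return f"{instruction}\n{source}\n{response}"

def post_instruction(instruction: str, source: str, response: str) -> str:
    """Proposed layout: the instruction is shifted after the input,
    placing it next to the response under the model's local
    self-attention, so long inputs are less likely to crowd it out."""
    return f"{source}\n{instruction}\n{response}"

sample = post_instruction(
    "Translate the sentence above from German to English.",
    "Der Hund schläft.",
    "The dog is sleeping.",
)
print(sample)
```

With a long input sentence, the conventional layout puts many tokens between the instruction and the response; moving the instruction after the input keeps it adjacent to the response, which is the intuition behind the reported zero-shot gains.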
... At the time of writing, there are two lines of thought: one that considers ChatGPT as a potential tool to enhance student learning [21,[25][26][27][28][29][30][31][32][33] and one that focuses on its ability to aid in assignment writing with the (potentially) concomitant student misconduct [28,29,[34][35][36][37][38][39]. Expanding on this, other papers are concerned with integrity of academic writing and publishing in general [40][41][42][43][44][45][46][47][48][49]. Tools have been developed and are being continually refined to counteract the threat posed by AI-generated text to the integrity of assignments by assessing a block of text as being of human vs AI authorship [50,51]. ...
Preprint
Full-text available
Generative artificial intelligence (AI), in particular large language models such as ChatGPT, has reached public consciousness, with a wide-ranging discussion of its capabilities and suitability for various professions. The extant literature on the ethics of generative AI revolves around its usage and application, rather than the ethical framework of the responses provided. In the education sector, concerns have been raised with regard to the ability of these language models to aid in student assignment writing, with the potentially concomitant student misconduct if such work is submitted for assessment. Based on a series of 'conversations' with multiple replicates, using a range of discussion prompts, this paper examines the capability of ChatGPT to provide advice on how to cheat in assessments. Since its public release in November 2022, numerous authors have developed 'jailbreaking' techniques to trick ChatGPT into answering questions in ways other than the default mode. While the default mode activates a safety awareness mechanism that prevents ChatGPT from providing unethical advice, other modes partially or fully bypass this mechanism and elicit answers that are outside expected ethical boundaries. ChatGPT provided a wide range of suggestions on how to best cheat in university assignments, with some solutions common to most replicates ('plausible deniability,' 'language adjustment of contract-written text'). Some of ChatGPT's solutions to avoid cheating being detected were cunning, if not slightly devious. The implications of these findings are discussed.
... This may assist in finding good keywords and search terms. Any probe to ChatGPT to deliver a summary or overview requires researchers to screen the content to judge its accuracy, as a quality assurance step [11]. Such quality assurance steps can be challenging, since ChatGPT presents facts in a plausible and convincing format. ...
... This can be especially useful for scientific editors who want to identify topics of interest to readers and ensure that the works published in the journal are relevant and at the forefront of the latest trends in the field. (18,19) ChatGPT and scientific editing: based on the broad functionality of ChatGPT, (20) the scientific literature, (21,22,23,24) and the experience of the authors of this article in using it, a series of uses and/or applications of ChatGPT in scientific editing can be systematized: ...
Article
Full-text available
Academic editing is a crucial task to ensure the quality and accuracy of scientific works. However, reviewing and editing large amounts of text can be a daunting and time-consuming task. Artificial intelligence-based language models, such as Chat GPT, have proven useful in detecting and correcting grammatical errors, improving the coherence and clarity of text, and generating additional content. The purpose of this communication is to explore the potential of Chat GPT as a tool for academic editing. That potential rests on its ability to process large amounts of text and understand the structure of language, allowing for error detection, writing-quality improvement, translation, summarization, data analysis, and the identification of emerging trends. Language models like Chat GPT thus have the potential to transform academic editing and improve the quality of scientific works. However, some limitations and challenges that must be addressed to fully harness this emerging technology were identified, and scientific editors should be aware of these limitations.
... While ChatGPT as a generative language model is generally good at collating, extracting and summarising information that it was exposed to in its training data set, its accuracy is based on statistical models of associations during training and their frequency (Elazar et al. 2022). It lacks reasoning ability (Bang et al. 2023) and is thus unable to provide essay-based assignments, let alone academic manuscripts, of an acceptable standard (Fergus, Botha, and Ostovar 2023; Hill-Yardin et al. 2023; Wen and Wang 2023). Moreover, ChatGPT has also been shown to, at least occasionally, suffer from inverted logic (Spennemann 2023a), ultimately providing disinformation to the reader. ...
Preprint
Full-text available
The public release of ChatGPT has resulted in considerable publicity and has led to wide-spread discussion of the usefulness and capabilities of generative AI language models. Its ability to extract and summarise data from textual sources and present them as human-like contextual responses makes it an eminently suitable tool to answer questions users might ask. This paper tested what archaeological literature appears to have been included in ChatGPT's training phase. While ChatGPT offered seemingly pertinent references, a large percentage proved to be fictitious. Using cloze analysis to make inferences on the sources 'memorised' by a generative AI model, this paper was unable to prove that ChatGPT had access to the full texts of the genuine references. It can be shown that all references provided by ChatGPT that were found to be genuine have also been cited on Wikipedia pages. This strongly indicates that the source base for at least some of the data is found in those pages. The implications of this in relation to data quality are discussed.
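The cloze analysis mentioned in the abstract above can be illustrated with a minimal sketch: mask one token of a genuine bibliographic string, ask the model to fill the blank, and measure exact-match recall as a (rough) signal of memorisation. This is not the authors' actual procedure; `toy_model` is a hypothetical stand-in for a real ChatGPT API call, and the references are invented.

```python
# Minimal cloze-probe sketch: mask a token of a known reference string,
# collect the model's completion, and score exact-match recall.

def make_cloze(reference: str, index: int) -> tuple[str, str]:
    """Replace the token at `index` with a blank; return (prompt, answer)."""
    tokens = reference.split()
    answer = tokens[index]
    tokens[index] = "____"
    return " ".join(tokens), answer

def cloze_recall(references, query_model, index=-1) -> float:
    """Fraction of masked tokens the model reproduces exactly."""
    hits = 0
    for ref in references:
        prompt, answer = make_cloze(ref, index)
        hits += query_model(prompt).strip() == answer
    return hits / len(references)

# Toy stand-in model that only "remembers" one (invented) reference.
memorised = "Smith J. Maritime heritage of the Pacific. 1998."
toy_model = lambda prompt: "1998." if "Maritime" in prompt else "2004."

print(cloze_recall([memorised, "Doe A. Another unseen work. 2010."], toy_model))
# → 0.5
```

High recall on masked tokens suggests the reference string was seen during training, while chance-level recall suggests it was not, which is the inference pattern the paper applies to archaeological literature.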
Chapter
One of the themes in the emergence of text- and image-making (multimodal) generative AIs is their value in the learning space, with the vast potential just beginning to be explored by mass humanity. This chapter explores the potential and early use of large language models (LLMs) harnessed for their mass learning, human-friendly conversations, and their efficacies, for self-learning for individuals and groups, based on a review of the literature, system constraints and affordances, and abductive logic. There are insights shared about longitudinal and lifelong learning and foci on co-evolving processes between the human learner and the computing machines and large language models.
Article
This literature review investigates the influence of Chat GPT AI on the effectiveness of the civil engineering curriculum and student performance. The study explores the use of Chat GPT AI in education and emphasizes the significance of rubric assessment in evaluating student achievements. The findings reveal that incorporating Chat GPT AI can significantly enhance the learning process by providing prompt and accurate responses to students' queries. However, the study emphasizes the continued importance of human interaction in the assessment process, as rubric assessment remains crucial for evaluating student performance and fostering motivation for better outcomes. The implications underscore the need to align the curriculum with industry standards and leverage technology to enrich the learning experience. These findings hold potential for educators and policymakers seeking to enhance educational quality and produce highly qualified civil engineering graduates. Highlights: Chat GPT AI: Improving Efficiency - The study explores the impact of Chat GPT AI in civil engineering education, highlighting its potential to enhance the efficiency of the learning process by providing quick and accurate answers to students' questions. Importance of Rubric Assessment - The research emphasizes the crucial role of rubric assessment in measuring student performance and motivating them to achieve better results, highlighting its significance alongside the integration of technology. Adapting Curriculum and Utilizing Technology - The study highlights the importance of adapting the curriculum to meet industry needs and standards, while also leveraging technology, such as Chat GPT AI, to enhance the learning experience and improve the quality of civil engineering education. Keywords: Chat GPT AI, civil engineering education, curriculum efficiency, student performance, rubric assessment.
Article
Purpose: This study aims to systematically analyze the acknowledgment of generative AI tools, particularly ChatGPT, in academic publications. It delves into patterns across multiple dimensions, including geographical distribution, disciplinary affiliations, journals, and institutional representation. Methodology: Using a dataset from the Dimensions database consisting of 1,226 publications from November 2022 to July 2023, the study employs a variety of analytical techniques, including temporal analysis and distribution mapping across fields of research and geographical locations. Findings: Acknowledgments are most frequent from authors affiliated with U.S. institutions, followed by significant contributions from China and India. Fields such as Biomedical and Clinical Sciences are highly represented, along with Information and Computing Sciences. Prominent journals like "The Lancet Digital Health" and preprint platforms including "bioRxiv" frequently feature acknowledgments, suggesting an accelerating role for AI tools in expediting research publication. Limitations: The scope of this study is restricted to the Dimensions database, possibly missing data from other platforms or non-indexed literature. Moreover, the study does not evaluate the quality or ethics of the acknowledgments. Practical implications: The results are instructive for a range of stakeholders, including researchers, academic publishers, and institutions. They offer a foundational understanding that can inform future policies and research on the ethical and transparent use of AI in academia. Originality: This research is the first empirical investigation of its kind to systematically examine acknowledgment patterns concerning generative AI tools, thus filling an identified gap in existing scholarship.
Article
Full-text available
The possibilities for applying artificial intelligence (AI) to the development of program code for automation devices are diverse. Besides generating program code, AI can also be used to optimize existing code, as well as for testing, debugging, and maintaining program code for automation devices. Since software platforms offering AI-assisted code generation are still at an early stage of adoption, their output certainly cannot be used immediately without verification. This paper analyses the generation of program code for a typical automation task using AI-based software platforms. The use of such platforms is becoming one of the most important hallmarks of applying the Industry 4.0 concept.
Conference Paper
Full-text available
Artificial intelligence (AI) has become a central element of the technological research and innovation of our age. Within this paradigm, many innovative developments have been observed, and natural language processing (NLP) technologies have acquired a distinctly prominent position among them. NLP is a critical component of AI, characterized by the ability to understand, interpret, and simulate the semantic and syntactic structure of human language. This literature review focuses comprehensively on the ChatGPT model, which has had a considerable impact on the AI literature. Conceptualized and realized by OpenAI, ChatGPT was trained on a vast body of text using sophisticated deep learning algorithms and is regarded as a revolutionary approach in the NLP literature. While ChatGPT is recognized for its superior language-processing capacity, it also stands out for the technical and ethical problems it brings with it. For this reason, situating ChatGPT in the context of NLP and AI requires an ethical and philosophical assessment that goes beyond technical innovation. This review begins by addressing the model's infrastructure and operation from a technical perspective, then offers an in-depth discussion of application scenarios, potential advantages, and challenges. It also sheds light on the ethical and security issues posed by large-scale AI models, ChatGPT foremost among them, and addresses the responsibilities that accompany the spread of these technologies. Finally, by questioning potential future research paths and paradigms, it speculates on how ChatGPT and similar technological approaches may evolve within the AI literature. This literature review aims to highlight the interdisciplinary importance, in the fields of AI and NLP, not only of technological advances but also of societal, ethical, and security-related themes.
Article
Full-text available
Technological advances in Natural Language Processing have brought forth language models capable of advanced response delivery. For humans, inputting natural language to a software system, and getting natural language as a response, is intuitive to learning and development. Scientific papers are traditionally written manually by human researchers, but with the advent of mainstream Large Language Models, e.g., OpenAI’s ChatGPT, it is of increasing concern to scientists and academics that content in scientific papers may be generated by Artificial Intelligence (AI). Wishing to stop this is a losing attitude, as large-scale generative AI only becomes more powerful and accessible as time progresses. Taking the more tenable position of cautious adaptation, this paper argues that there exists a taxonomy, as yet implicit, in the structure of scientific papers, and that language models can be used by scientific researchers to bootstrap scientific writing. Furthermore, AI can augment their own writing workflow to traverse the academic publishing pipeline with greater efficiency. Despite the shortcomings of language models, e.g., hallucination, when prompted appropriately with sufficient constraints and input data, language models are extremely accurate and efficient content providers. In this work, the canonical scientific paper is broken down into its taxonomy of parts, where it is then considered how each part can benefit from language models, e.g., in generating abstracts and keywords, reformatting sections, theorizing titles, etc. A theoretical system for the implementation of the proposed idea using GPT-4 is provided. Finally, a call for consensus among the academic and scientific communities regarding the use of language models in the scientific writing workflow is established.
Article
Purpose: The purpose of conducting research on the "Application of ChatGPT in Higher Education and Research – A Futuristic Analysis" is to critically examine the evolving role of advanced AI language models like ChatGPT in shaping the future of education and research. This research seeks to anticipate how ChatGPT and similar technologies will impact pedagogy, academic support, and scholarly inquiry in the years ahead, shedding light on their potential benefits and challenges. By analyzing current implementations and forecasting future possibilities, this research aims to inform educators, institutions, and researchers about the transformative opportunities and ethical considerations associated with the integration of AI-driven chatbots and language models in higher education and research settings. Methodology: This is exploratory research and makes use of the information obtained from scholarly articles through Google Scholar and AI-based GPTs to analyse, compare, evaluate, and interpret the concept of application of ChatGPT in Higher Education and Research. Results/Analysis: A systematic analysis is carried out on the futuristic and effective use of ChatGPT for higher education, advanced research, scholarly publication, and possible threats of it on higher education industry. Originality/Value: A systematic analysis is carried out to interpret: (1) the diverse applications of ChatGPT in various academic disciplines, including basic sciences, engineering, health sciences, agriculture, management, and social sciences within higher education, (2) how ChatGPT contributes to different types of research, including exploratory, empirical, and experimental research endeavours. Type of Paper: Exploratory Research.
Article
Full-text available
The emergence of artificial intelligence language services has raised hopes of facilitating the task of publication activity. Members of the academic community have wondered whether chatbots could optimize the process of scientific writing. ChatGPT, a language model capable of, among other things, generating scholarly texts, received particular attention. Cases of academic papers written using ChatGPT have led to a number of publications analyzing the pros and cons of using this neural network. In this paper, we investigate the possibility of using ChatGPT to write an introduction to a scientific paper on a topical issue of Arctic governance. A set of queries to ChatGPT has been developed, based on the logic of the publication format commonly accepted in academia, IMRAD. This format is characterized by structural and functional elements, which served as the logical basis for the queries. The responses received from ChatGPT were analyzed for their compliance with the requirements for a scientific article, according to the IMRAD publication format. The analysis showed that ChatGPT is not able to meet the requirements for publishing a scientific article in the modern scientific publication discourse.
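The query-building approach in the abstract above — one prompt per structural element of IMRAD — can be sketched as follows. The section instructions and wording are illustrative assumptions, not the authors' actual query set.

```python
# Sketch: build one ChatGPT query per IMRAD structural element.
# Instruction texts are illustrative placeholders.

IMRAD_ELEMENTS = {
    "Introduction": "State the research problem and its relevance.",
    "Methods": "Describe the approach used to study the problem.",
    "Results": "Summarise the main findings.",
    "Discussion": "Interpret the findings and note limitations.",
}

def build_queries(topic: str) -> list[str]:
    """One prompt per IMRAD section, anchored to the given topic."""
    return [
        f"{instruction} Topic: {topic}. Section: {section}."
        for section, instruction in IMRAD_ELEMENTS.items()
    ]

for query in build_queries("Arctic governance"):
    print(query)
```

The structure makes the subsequent compliance check straightforward: each response can be evaluated against the requirements of the section whose query produced it.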
Article
Full-text available
Background: ChatGPT is an artificial intelligence-based tool developed by OpenAI (California, USA). This systematic review examines the potential of ChatGPT in patient care and its role in medical research. Methods: The systematic review was done according to the PRISMA guidelines. Embase, Scopus, PubMed, and Google Scholar databases were searched. We also searched preprint databases. Our search aimed to identify all kinds of publications, without any restrictions, on ChatGPT and its application in medical research, medical publishing and patient care. We used the search term “ChatGPT”. We reviewed all kinds of publications including original articles, reviews, editorials/commentaries, and even letters to the editor. Each selected record was analysed using ChatGPT and the responses generated were compiled in a table. The table was converted into a PDF and was further analysed using ChatPDF. Results: We reviewed full texts of 118 articles. ChatGPT can assist with patient enquiries, note writing, decision-making, trial enrolment, data management, decision support, research support, and patient education. But the solutions it offers are usually insufficient and contradictory, raising questions about their originality, privacy, correctness, bias, and legality. Due to its lack of human-like qualities, ChatGPT’s legitimacy as an author is questioned when used for academic writing. ChatGPT-generated content raises concerns about bias and possible plagiarism. Conclusion: Although it can help with patient treatment and research, there are issues with accuracy, authorship, and bias. ChatGPT can serve as a “clinical assistant” and be a help in research and scholarly writing.
Article
Full-text available
Diagnosing systemic lupus erythematosus (SLE) may be difficult in cases of negative results for antinuclear antibodies (ANAs) and anti-double stranded DNA (dsDNA) antibodies, which is known as seronegative SLE. Additionally, in patients with HIV infection, the diagnosis of SLE is complicated by the overlap of symptoms and the possibility of false negative results on antibody tests. Herein, we report the case of a 24-year-old female with HIV infection on anti-retroviral therapy who presented with vesicles and plaques over the malar area and ulcers over the roof of the mouth. Antibody tests for ANAs and dsDNA were negative. She was initially treated for herpes simplex with a secondary infection, but the symptoms did not improve. She ultimately died from acute myocardial infarction while awaiting results of direct immunofluorescence, which revealed the deposition of immunoglobulin (Ig) M, IgG, and C3 along the basement membrane, thus enabling a diagnosis of SLE. Therefore, SLE can be difficult to diagnose in patients with HIV, and other diagnostic criteria should be considered when suspecting SLE and treating these patients. Additionally, we also present our experience with ChatGPT (OpenAI LP, OpenAI Inc., San Francisco, CA, USA) in academic publishing and its pros and cons.