
May Artificial Intelligence Be a Co-Author on an Academic Paper?

Authors: Balat A, Bahşi İ

Abstract

Dear Colleagues, Recently, for an article submitted to the European Journal of Therapeutics, the preliminary examination made with Turnitin indicated that more than 50% of the paper may have been written with artificial intelligence support. However, the authors did not mention this in the article’s materials and methods or explanations section. Fortunately, the article’s out-of-date content and fundamental methodological errors left us no difficulty in making the desk-rejection decision. On the other hand, similar situations that we may encounter later led us to discuss how we would decide on articles written with artificial intelligence support. The general opinion, which we have adopted and which is currently reflected in the literature, is that if artificial intelligence is used while writing an article, how it was used should be described in detail in the methodology. Moreover, we encountered a much more interesting situation during our evaluation. In a few academic studies, we have seen that artificial intelligence is listed as a co-author. On July 06, 2023, using the advanced search in the Web of Science, we found four articles with ChatGPT as an author name [1]. We determined that ChatGPT is the author in one of these articles [2] and the Group Author in three [3-5]. Lee [6] stated that although artificial intelligence tools are much more advanced than search engines, they cannot be authors from the perspective of research ethics because they cannot take responsibility for what they write. Similarly, Goto and Katanoda [7] stated that it is the author’s responsibility to confirm that the texts written by ChatGPT are correct. However, Pourhoseingholi et al. [8] reported that keeping up with technology is inevitable. Additionally, they said that “this action will be more fruitful and practical in extended dimensions when international institutes like ICMJE or COPE come up with the appropriate adjustments and establish robust criteria to scheme the AI authorship”. Most probably, the use of artificial intelligence applications in scientific articles, and whether they can be co-authors of such papers, will be discussed extensively in the near future. We encourage interested authors to submit their ideas to our journal as a letter to the editor. Yours sincerely,
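Readers who wish to repeat the check described above can use a field-tagged query in the Web of Science advanced search. A minimal sketch, assuming the standard AU (author) and PY (publication year) field tags, is:

    AU=(ChatGPT) AND PY=(2022-2023)

The exact number of records returned will depend on the database edition subscribed to and the date on which the search is run.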
Correspondence: Department of Anatomy, Gaziantep University School of Medicine, Gaziantep, Turkey

Editorial

How to Cite: Balat A, Bahşi İ (2023) May Artificial Intelligence Be a Co-Author on an Academic Paper? European Journal of Therapeutics
... This raises the question of the (im)possibility of AI holding authorship of scientific publications. Attributing authorship of scientific publications to AI is a complex and controversial issue that is increasingly debated (7, 9-11). This is a challenging and uncertain problem, but one that needs to be addressed, inasmuch as it is increasingly unavoidable when considering scientific publication. ...
... AI faces several challenges in authorship (1, 5, 7-15, 20). While AI systems may excel at language generation, one of their key limitations is understanding context. ...
Article
Full-text available
Ascribing authorship of scientific publications to artificial intelligence is a complex and controversial issue. However, it is a challenging and uncertain problem that, given the growing development of artificial intelligence-based technologies that go beyond the performance of purely technical tasks and even contribute to the development of aspects such as the incorporation of scientific research information published in languages other than English, also contributing to potential insights in research, is becoming unavoidable when considering scientific publishing. This paper aims to add to this discussion by arguing that, although this is a challenging and even controversial position, it is inevitable and even ethically desirable to accept artificial intelligence, if it subsidizes sufficiently, as a (co-)author of any scientific publication. It is a matter of starting to think about how this attribution can be controlled and achieved with increasing respect for the ethics of scientific publication.
... 8,9 In addition, AI tools have also become widely used for academic support. 10 ChatGPT, a chatbot recently developed by OpenAI, is one of the most widely used AI tools. It is known that ChatGPT can make very important contributions in education, training, and academic studies. ...
Article
Full-text available
Objective: Since 1990, the Journal of Craniofacial Surgery has been an important resource for clinicians and basic scientists. The journal addresses clinical practice, surgical innovations, and educational issues. This study aims to evaluate the contribution of these articles to clinical practice innovations and surgical procedures by analyzing the content of the 25 most cited articles published in the journal. It also aims to demonstrate the potential of artificial intelligence tools in academic content analysis. Methods: All articles published in the Journal of Craniofacial Surgery on June 13, 2024, were searched using the Web of Science Database, and the 25 most cited articles were identified. The full texts of these articles were saved in PDF format and metadata were saved as plain text files. Content analysis of these 25 articles was performed using ChatGPT-4o. Results: As a result of the analysis, some articles stood out in terms of clinical importance. It also appeared that ChatGPT could be used to compare multiple articles. Conclusion: In this study, the authors analyzed the content of the 25 most cited articles published in the Journal of Craniofacial Surgery using ChatGPT-4o. These articles were evaluated according to the criteria of innovations in clinical practice and compliance with surgical procedures. This study presents interesting findings in terms of the use of artificial intelligence tools in academic content analysis. The authors thought that this study could be a source of inspiration for future studies.
... AI could improve cooperation among researchers throughout the article review process (9). AI-driven online platforms facilitate the connection of academics who share similar research interests, allowing them to exchange and engage in discussions on pertinent literature. ...
Article
Full-text available
The advent of artificial intelligence (AI) has brought about a profound transformation in numerous areas, including the field of science writing. With the rising complexity and data-driven nature of scientific research, effective communication of findings and ideas becomes ever more vital. AI has become a potent technology that may aid in producing scientific material, conducting data analysis, and optimizing literature reviews. Nevertheless, it is crucial to acknowledge the constraints of AI in generating scientific material and to comprehend its optimal integration with human expertise. We herein prospectively examined the role of AI in science writing, discussing its possible advantages and difficulties, and emphasizing the significance of upholding human subjectivity in this developing field.
... The ethics of using AI in academic writing is a concern in scientific writing topics. Balat & Bahsi (2023) discussed whether AI tools, no matter how advanced, should be credited as authors. They believe that these tools can't be held accountable for what they write. ...
Article
Full-text available
ChatGPT has emerged as a promising advanced large language model that needs prompt to gain information. However, designing a good prompt is not an easy task for many end-users. Therefore, this study intends to determine the amount of information gained because of varied amounts of information in the prompt. This study used two types of prompts, initial and improved, to query the introduction sections of 327 highly cited articles on traffic safety. The queried introduction sections were then matched with the corresponding human-written introduction sections from the same articles. Similarity tests and text network analysis were used to understand the level of similarities and the content of ChatGPT-generated and human-written introductions. The findings indicate the improved prompts, which have the addition of generic persona and information about the citations and references, changed the ChatGPT's output insignificantly. While the perfect similar contents are supposed to have a 1.0 similarity score, the initial and improved prompt's introduction materials have average similarity scores of 0.5387 and 0.5567, respectively. Further, the content analysis revealed that themes such as statistics, trends, safety measures, and safety technologies are more likely to have high similarity scores, irrespective of the amount of information provided in the prompt. On the other hand, themes such as human behavior, policy and regulations, public perception, and emerging technologies require a detailed level of information in their prompt to produce materials that are close to human-written materials. The prompt engineers can use the findings to evaluate their outputs and improve their prompting skills.
... I have followed with great interest your editorial content [1] which encourages academics to create a common mind, and the writings of our contributing colleagues, and I wanted to share my views and suggestions in order to offer a perspective on the subject. While the focal point of the debate is the question of whether AI can be included in an article as a co-author, it is evident that there are various debates on the periphery. ...
Article
Full-text available
Dear Editor, I have followed with great interest your editorial content [1] which encourages academics to create a common mind, and the writings of our contributing colleagues, and I wanted to share my views and suggestions in order to offer a perspective on the subject. While the focal point of the debate is the question of whether AI can be included in an article as a co-author, it is evident that there are various debates on the periphery. When we discuss the peripheral questions, the answer to the focal question will emerge automatically. Thanks to the computer and internet revolution, we now have the simplest, fastest, and cheapest way to access any data that we have ever known, and this development does not seem to stop. For example, it is argued that the 6G communication network will enter the market in 2030–2040 and that extended reality and augmented reality tools will be integrated into our lives together with the internet of things with smart intelligence [2]. While the easy storage and accessibility of information uploaded to the Internet environment facilitates the production of new data, the production of false information can be uploaded to information repositories and circulated easily, which creates other major problems in itself, such as the use of reliable scientific data [3]. Artificial intelligence (AI) tools, especially large language models (LLMs), such as ChatGPT, which is on the agenda, have entered our lives like "aliens born on Earth" with their ability to access information in millions of different data sets from almost every language and culture. It is obvious that if this super-powered extraterrestrial from this world uses his powers on issues that humans demand in common, it will be described as "Superman", and vice versa, it will be described as the mythological "Erlik", and the current debate is exactly in the middle of these two superheroes. It is true that AI tools can be very useful when we use them to extract vast oceans of data or for various other academic tasks (e.g. automated draft generation, article summarizing, and language translation) [4]. However, at this point, it should be taken into account that the artificial AI tools available today may not be limited to performing the given tasks and may present a world reality that is adorned with “artificial hallucinations” [5]. We may end up fighting an unrelenting force in the production and distribution of misinformation that we lose control over. We should discuss the responsibility for the control of products that will be obtained using artificial intelligence and prepare appropriate guidelines. Responsibility for control means that any digital result (whether it is an analysis of data or an analysis of a situation or an interpretation) must be reliable, i.e., it must be testable, rationally reproducible, and ethically attainable. Three different interlocutors—the producer, the distributor, and the consumer—have different but critical responsibilities in controlling liability. When using AI tools, the scientific research group (producer party) working on any subject unconditionally bears the responsibility for each and every sentence of each and every piece of data obtained through these digital machines, and it should be declared that any negative consequences that may arise otherwise are accepted in advance. 
The acceptance of these digital machines as a kind of co-author in scientific products (translation text, statistical analysis, research title determination, or any text that will bring the research result to the academic literature) obtained with AI tools that cannot legally bear responsibility is similar to the acceptance of the computer, operating system, or code groups that enable any digital operation as the author. It is also a fact that this topic will come up for discussion again in the future when the issue of the individualization of AI (in terms of legal responsibility and rights) begins to be discussed. Scientific journals and publishing houses consisting of competent referees at the point of control of the academic products produced are the gatekeepers in protecting the naivety of the literature. There are many examples of how these indomitable guardians can be easily circumvented due to bad intentions and a failure to internalize ethical principles. In this respect, it can be predicted that the use of AI tools will help publishers in their work and that the quality and quantity of this help will gradually increase [6]. On the other hand, another major problem of the near future is that it will become increasingly easy to circumvent the gatekeepers with the malicious intent and misdirection of the people who take responsibility for AIs, and the content of the broadcasts may become corrupt. At the last point, the responsibilities of us, the readers who will consume the product, are also increasing. While reading articles that are declared to be written with the help of AI, we should question and check each sentence we read in more detail and increase our positive or negative feedback. To sum up, the use of AI tools as a technique in research should be explained in detail, trainings where the effective and ethical use of the tools are taught and licensed should be given to researchers urgently, and people who do not have an AI Usage License should not take part in scientific articles in the near future. It might be safe to say that the planning of a special education accompanied by leading scientists from every society is behind us and that the frauds of today could cripple the science of the future. Yours sincerely
... I have read your editorials on the use of artificial intelligence in academic articles with great attention and enthusiasm [1,2]. In addition, in the comments made to your articles, I reviewed the ethical problems that may arise from the use of artificial intelligence in scientific articles and the contributions that the article will provide in the writing process [3-6]. ...
Article
Full-text available
Dear Editors, I have read your editorials on the use of artificial intelligence in academic articles with great attention and enthusiasm [1,2]. In addition, in the comments made to your articles, I reviewed the ethical problems that may arise from the use of artificial intelligence in scientific articles and the contributions that the article will provide in the writing process [3-6]. Although technological developments and advances in artificial intelligence have gained great momentum in recent years, I believe they should be accepted as an accumulation of all humanity. As a matter of fact, in very old sources, there is information that the machines known as robots and automatons at that time were used for entertainment purposes in the centuries before Christ. Furthermore, sophisticated machines, water clocks, and programmable humanoid automatons invented by İsmâil bin er-Rezzâz el-Cezerî in the 12th century, which have an important position in our scientific history, have played a significant role in the development of today's robot technology and mechanical sciences. Artificial intelligence applications are progressively being employed in agriculture, industry, military activities, health, art, and numerous other disciplines. Today, when we type "artificial intelligence" into the Google Scholar, we get 5,410,000 results, demonstrating how these developments have affected the academic world. As indicated in previous comments, I believe that applications such as ChatGPT in academic writings can be used for grammar corrections and abstract editing. Furthermore, these apps might be employed in the introduction section, where broad information about the topic under investigation is provided in the articles. However, since these applications do not only use academic databases during the literature review, the final version of the article should be evaluated by the relevant author. The primary ethical issue with these practices is that they are unable to accept responsibility in proportion to their authority. As a result, regardless of their contribution to the design of the paper, I think that these apps should not be deemed co-authors. However, it should be noted that these applications were used in the article. In conclusion, I believe that in the not-too-distant future, artificial intelligence applications will make significant contributions to the writing of the article, particularly in academic studies involving quantitative data. We should use these technologies as a tool to contribute more to academic advancement. Kind regards
... in your journal has attracted the attention of many researchers [1,2]. I believe that including such current discussions in your journal will guide my future work plans on similar topics. ...
Article
Full-text available
Dear Editors, Recently, the discussion of an artificial intelligence (AI) - fueled platform in several articles in your journal has attracted the attention of many researchers [1, 2]. I believe that including such current discussions in your journal will guide my future work plans on similar topics. I wanted to present my views on academic cooperation and co-authorship with ChatGPT (Chat Generative Pre-Trained Transformer) to your journal. Innovations brought by technology undoubtedly arouse curiosity in almost every branch of science. Researchers are among the professional groups that follow new technological developments most closely because the basic nature of research consists of concepts such as curiosity, innovation, and information sharing. Technology-based materials may be needed for anatomy education to be permanent and to be used pragmatically during clinical practices. Especially in recent years, tools such as augmented reality, virtual reality and 3D printing, which offer 3D images of anatomical structures, as well as social media platforms have started to be used in anatomy education [3]. Similarly, anatomy is a window of opportunity for the first trials of many innovative researches. Indeed, it did not take long for meet with AI-based chatbot platforms such as ChatGPT and Artificial Intelligence Support System (AISS) [4-8]. AISS was reported by several researchers about a year before ChatGPT. AISS is a chatbot equipped with only anatomy knowledge based on a machine learning platform and neural network module [8]. According to the developers of the AISS, students feel comfortable making mistakes with this chatbot, and therefore students' interaction with anatomy is at a high level. Recent studies with ChatGPT are also contributing to the critical role of these AI-based chatbots in anatomy education. Some studies questioned the current capabilities and potential of AI in anatomy education and anatomy research through interviews [5, 7]. In another study, students and ChatGPT were quizzed on anatomy and their knowledge was compared [6]. The results obtained from the studies are that ChatGPT is more successful than the students and has the potential to increase student participation. However, this AI software model will increase the likelihood of making errors in basic knowledge in anatomy as we move to complex topics. Sometimes the same anatomical knowledge will be presented differently depending on how widely the internet-based data is scanned [4]. This situation is likely to be overcome in the future with the learning potential of AI. In this context, I think that the use of AI can help physicians and physiotherapists by increasing the dynamic connections between anatomy knowledge and clinical practices. Furthermore, advances in educational technologies cannot provide equal opportunities to students in every country and university. ChatGPT partially eliminates this limitation. At this point, educators who want to increase student participation can design an anatomy education supported by ChatGPT and create research opportunities for students. It is stated that AI chatbots can be more useful in anatomy education and can provide students with access to educational resources regardless of location or time [5]. Apart from chatbots, the use of AI in anatomy can be seen in anatomy teaching approaches where student-centered and active learning is supported. Artificial Neural Networks or Convolutional Neural Networks are modelled similar to neural networks in the human brain. 
Bayesian U-Net is used to diagnose pathological anatomical deviations based on supervised deep learning by learning the normal anatomical structure and utilizing various biomarkers [9]. AI-based tools other than ChatGPT can also be used to display, classify or scale differences in anatomical structures. Thus, it may have pragmatic benefits for clinicians in the management of disease processes. In some studies indicate that the interpretation of anatomical regions in ultrasound, magnetic resonance and computed tomography images integrated with AI is facilitated [10]. Similarly, in specialties (such as dermatology) that require visual-oriented clinical skills in the processes required for diagnosis and treatment, AI's functions in recognition on images, computer-aided diagnosis and decision-making algorithms can be useful. I think that the use of ChatGPT in research in these fields can produce innovative and practical solutions if they provide information from an accurate and reliable database. In addition, its contributions to the research cause its collaborative position in the research to be questioned. In my opinion, the explanations under the heading "Promoting collaborative partnerships" in the third answer of this editorial, which includes an interview with ChatGPT, are satisfactory [2]. This supports traditional norms of authorship. Besides, concerns about co-authorship are already strictly protected by international organizations. The Committee on Publication Ethics (COPE) clearly rejects the contribution of AI tools such as ChatGPT or Large Language Models in co-authorship and explains several reasons for this in the COPE position statement. Responsibility for the study should be shared among the authors. However, it is unclear to what extent an AI can fulfil this criterion, which is one of the most basic requirements of authorship. What is known today about anatomy has been obtained by sharing the knowledge of many famous anatomists who lived in ancient history. ChatGPT is already collecting this information and making it available to the researcher. Can we talk about a real contribution at this point? Partly yes. AI can document this information quickly, but it can only make a general contribution when formulating a research question. For example, I asked it for an example of a research question that I use to examine the role of the pelvis in gait function. I received a response like “What is the effect of the anatomical and biomechanical properties of the pelvis on a person's balance, stride length, stride speed and gait efficiency during walking?". It is seen that the answers consist of general concepts. However, a researcher who has worked on the subject can broaden your horizons more during an in-depth conversation over a coffee. AI's contribution will not require its to be a co-author. Currently, ChatGPT or other AI tools are not yet capable of performing a literature search suitable for academic writing. However, if ChatGPT is developed in this field, it may be suitable for use by researchers. If ChatGPT has been used in research, I think it is necessary and sufficient to indicate in one sentence in the acknowledgments or method section how and in what way it contributed to the article. The data processing, collection and synthesis potential of ChatGPT is used for different purposes in every field [9]. For example, good agricultural practices or research on existing jurisprudence in law. 
No matter how it is used in areas whose subject is qualified professions, there is a fact that does not change. It alone is not an educator; it does not have the conscientious conviction of a judge and it does not have the skill of a doctor in caring for the sick. It should only be used as a complementary tool in the fields where it is used. It should be used by all health educators and researchers, including the field of anatomy, with awareness of its risks. In conclusion, the expectations of this new AI technology in anatomy are on students. The 3D model feature and its potential contribution to case-based learning practice during clinical applications can be further developed in the future. On the other hand, it is clear that ChatGPT cannot be a co-author of a publication. If ChatGPT is a co-author of a publication, who and how will prepare the response letters to the referee comments on this issue? While contributing to this editorial discussion, I thought that the reviewer assigned to review an academic publication could prepare a reviewer comment with the help of ChatGPT. I hope this will never happen. Otherwise, we may soon encounter a journal publisher consisting of AI authors and reviewers. Yours sincerely
Article
Full-text available
The application of artificial intelligence (AI) in education has gained great attention recently. Integration of AI tools in anatomy teaching is currently engaging researchers and academics worldwide. Several AI chatbots have been generated, the most popular being ChatGPT (OpenAI: San Francisco, California, USA). Since its first public release in November 2022, several research papers have pointed to its potential role in anatomy education. However, it is not yet known whether it will prove superior to other available AI tools in this role. This article sheds some light on the current status of research concerning AI applications in anatomy education and compares the performances of three well‐known chatbots (ChatGPT, Gemini, and Claude) in answering anatomy questions. A total of 23 questions were used as prompts for each chatbot. These questions comprised 10 knowledge‐based, 10 analysis‐based USMLE Step 1‐type, and three radiographs. ChatGPT was the most accurate of the three, scoring 100% accuracy. However, in terms of comprehensiveness, Claude was the best; it gave very organized anatomical responses. Gemini performed less well than the other two, with a scored accuracy of 60% and less scientific explanations. On the basis of these findings, this study recommends the incorporation of Claude and ChatGPT in anatomy education, but not Gemini, at least in its current state.
Article
Full-text available
To the Editor: Recently, the use of artificial intelligence (AI) tools and discussions on this subject have been frequently encountered in the academic field and many areas.1 In this direction, questions began to arise that many of us could not possibly anticipate. In the literature, we can find significant discussions on preparing articles through AI tools, incorporating AI tools into articles as co-authors2 and conducting the reviewing process of academic studies through AI tools.3 On the other hand, after our extensive literature search, it was determined that the editorial roles of academic journals through AI tools had not been discussed yet. (Note: please see the Acknowledgment).4 It is seen that the most critical discussion topic in the literature regarding the support of academic studies with AI tools covers the stages from the preparation of academic studies to their submission to journals. Previous studies have examined this situation from many perspectives, such as ethics,5 scientific accuracy, and responsibility.6 According to the current conditions, our general opinion is that AI tools can help prepare academic studies. Still, in terms of content, the authors should check the entire text and confirm the accuracy of these expressions. Bahşi and Balat7 stated that AI tools are a tool such as EndNote or SPSS, which we use in the preparation of academic studies, according to current conditions; therefore, it would be more appropriate to write the necessary explanations in the material and method section rather than adding AI tools to academic studies as a co-author. However, Solmaz8 looked at this situation from a different perspective. In the acknowledgments section of the article by Solmaz,8 with ChatGPT’s suggestion, it has been stated that it thanked those who enabled the emergence of this application instead of ChatGPT. On the other hand, until now, we think that the answers provided by AI tools in evaluating the articles we examined during the refereeing and editorial processes are unsatisfactory. However, as the functionality of AI tools increases, it will be possible for us to receive support from them in these processes. Therefore, Garcia concerns are critical and must be discussed.3 In the maturation of academic studies and contributions to the literature, many people, such as reviewers, who we can describe as unsung heroes, editors, and members of publishers, make significant contributions to these processes. So, would it be appropriate to conduct an academic study’s review and editorial evaluation processes through AI applications? Garcia3 has articulated in detail the critical concerns in using AI applications for peer-reviewing academic studies and suggested that measures should be taken as soon as possible. On the other hand, it is known that many journals have difficulties in finding qualified referees from time to time. While we agree with all of Garcia concerns,3 using AI applications in the initial evaluation of academic studies submitted to journals would be nice. Can we use AI applications to evaluate the articles before sending the referees? In other words, just like in Turnitin or iThenticate, which we use to assess plagiarism, can we question the value of articles regarding content with AI applications? We are unsure if this review qualifies as a peer and an editorial review. However, it may be an excellent tool to measure the content value of studies. On the other hand, we think the editors should make the final decision about the process. 
However, in this case, a different topic of discussion will arise. How will the process proceed when the authors benefit from AI applications regarding refereeing or editing before submitting their articles to the journals? As we emphasized at the beginning of this paper, it seems that with the development of AI applications, we will continue to encounter questions that most of us cannot even imagine. ACKNOWLEDGMENT The phrases “Guest Editor” and “Artificial Intelligence” in the title of an article by Forghani inspired us to come up with this discussion topic.
Article
Full-text available
To the Editor: Recently, we have encountered artificial intelligence (AI), an evolution, perhaps even a revolution, in every aspect of our lives. We can see that this rapid progress of AI has started to be used in medicine.1 Moreover, we also see that academic studies are prepared with the support of AI.2 Over and above, it is seen that ChatGPT, one of the AI applications, takes place as an author in some articles.3 On the other hand, there is much discussion in the literature about preparing articles with the support of AI and adding AI applications such as ChatGPT to academic studies as co-author.4,5 Although preparing the scientific article, we think getting support from AI applications is inevitable, and will become widespread in the future. Just as applications such as EndNote, SPSS, and Microsoft Word and databases such as PubMed, Google Scholar, Scopus, and Web of Science are used while preparing an article; we think that AI applications that offer many features of all these applications and databases may be used. Perhaps, this rapid progress in AI may cause us to encounter many situations that we cannot foresee in a short time. The most critical debate on this topic is whether AI applications can be co-authored in academic studies. The widespread thought in this discussion is that AI applications should not be co-authored in articles.6 Indeed, various ethical concerns have also been reported on this issue.7 Despite all this, some support the idea that one should not be prejudiced in science and be open to technological developments.8 We think that the discussions on this subject will continue to increase rapidly. On the other hand, although there are a few articles with ChatGPT as a co-author,3 we have not yet come across a study with another AI, Google Bard, as a co-author. If we accept that AI applications can be co-authored in academic articles, when the same theme is supported by two AI applications, ChatGPT and Google BARD, will we add both as co-authors? If we define the role of the author in AI applications, would two competing AIs want to co-author the same article? As a result, it is inevitable and even necessary to carry out academic studies with the support of AI applications. However, our current thinking is that it would be more appropriate not to include AI applications as co-authors in articles. On the other hand, we think it is not right to make a definitive judgment since the developments in this area have progressed beyond our forecast.
Article
Full-text available
In the summer of 2021, Singh and Garg collaborated with an artificial intelligence (AI) agent (GPT-3) to research human factors in decision-making in the context of NDE 4.0. They created an interface script to engage with AI. The outcome was published in the Journal of Nondestructive Evaluation in August 2021. The authors shared their experience in a previous NDE Outlook column. They articulated their collaboration as an eye-opening and rewarding experience for human partners. The article provided convincing evidence of a powerful human-machine coworking at linguistic and cognitive levels—one that is closer than we think, and more powerful than we conceive. On 30 November 2022, OpenAI released ChatGPT, a spin-off of GPT-3 that is geared toward answering questions via back-and-forth dialogue. This conversational format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. No need for an interface script. It is similar to a model called Sparrow, which DeepMind revealed in September. Both these models were trained using feedback from human users around the world. ChatGPT has gone viral, beating all previous records of user engagement and social media domination. It gained a million registered users in the first week, almost 15× faster than the record holder Instagram.
Article
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
Rashidifard N, Wilson CA, Caffrey EA, ChatGPT (2023) What every health physicist should know about ChatGPT. Health Physics 125:63-63