Central Asian Journal of Medical Hypotheses and Ethics
2024; Vol 5(4)
© 2024 by the authors. This work is licensed under
Creative Commons Attribution 4.0 International License
https://creativecommons.org/licenses/by/4.0/
eISSN: 2708-9800
https://doi.org/10.47316/cajmhe.2024.5.4.02
ARTIFICIAL INTELLIGENCE IN WRITING AND RESEARCH:
ETHICAL IMPLICATIONS AND BEST PRACTICES
Received: September 23, 2024
Accepted: December 11, 2024
Abdel Rahman Feras AlSamhori1 https://orcid.org/0000-0002-2715-4320
Fatima Alnaimat2* https://orcid.org/0000-0002-5574-2939
1School of Medicine, University of Jordan, Amman 11941, Jordan
2Department of Internal Medicine, Division of Rheumatology, School of Medicine, The University of
Jordan, Amman 11941, Jordan
*Corresponding author:
Fatima Alnaimat, Department of Internal Medicine, Division of Rheumatology, School of Medicine, The University of Jordan,
Amman 11941, Jordan;
E-mail: f.naimat@ju.edu.jo
Abstract
Artificial Intelligence (AI) is a field that utilizes computer technology to imitate, improve, and expand human intelligence.
The concept of AI was originally proposed in the mid-twentieth century, and it has evolved into a technology that serves
different purposes, ranging from simple automation to complex decision-making processes. AI encompasses Artificial
Narrow Intelligence, General Intelligence, and Super Intelligence. AI is transforming data analysis, language checks, and
literature reviews in research. Across many fields of AI application, ethical considerations, including plagiarism, bias, privacy, responsibility, and transparency, require precise norms and human oversight. By promoting understanding of and adherence to ethical principles, the research community can harness the advantages of AI while upholding academic accountability and integrity. Advancing human knowledge and creativity requires teamwork from all stakeholders, and the ethical use of AI in research is essential to that effort.
Keywords: Artificial intelligence, Ethics, Medical writing, Privacy, Ethics in publishing
How to cite: AlSamhori ARF, Alnaimat F. Artificial intelligence in writing and research: ethical implications and best practices. Cent Asian J Med Hypotheses Ethics 2024;5(4):259-268. https://doi.org/10.47316/cajmhe.2024.5.4.02
INTRODUCTION
Artificial Intelligence (AI) is a field that utilizes computer
technology to imitate, improve, and expand human
intelligence [1]. The concept of AI was articulated in the mid-twentieth century by Alan Turing, often referred to as the “father of AI.” In 1950, Turing proposed the “Turing test” as a way to judge whether a machine’s behavior could be distinguished from that of a human [2,3]. Since its establishment as a discipline in the 1950s, AI has
evolved into a technology that serves different purposes,
ranging from simple automation to complex decision-
making processes [4]. Because of its developing
abilities, AI is quickly becoming a vital tool in various
industries, including healthcare, finance, entertainment,
and transportation [5].
AI has become a common element of everyday living, cutting across all sectors and no longer confined to research labs [6]. AI is everywhere, from recommendation algorithms on streaming services to voice assistants on smartphones. Its immense capacity to analyze data and predict patterns has made it useful in diverse ways, from enhancing productivity and efficiency to supporting creativity [5,7]. In medicine, for example, AI has made strides by enabling individualized treatment plans, improving
diagnostics, or even forecasting possible disease
outbreaks [3,5]. AI-driven algorithms are used in finance
to identify fraudulent activity, evaluate credit risk, and
improve trading tactics [8].
This paper explores the ethical issues and implications of using AI in writing and research, focusing on the need for clearly defined guidelines and responsible practices to weigh its benefits against possible threats.
Defining AI
AI can be defined as a machine (mainly a computer)
capable of replicating human capabilities like learning,
reasoning, and self-correction [9,10]. AI tools vary from
simple programs for specific tasks to complex systems
with human-like thinking and creativity [11]. Three
primary categories of AI exist [12]:
1. Artificial Narrow Intelligence: A task-focused
system that excels beyond human capacity but
cannot solve problems outside its skill sets.
2. Artificial General Intelligence: A system that operates independently across various fields and can perform any cognitive task a human can.
3. Artificial Super Intelligence: Technology that would surpass human performance in every domain, outmatching the human mind and quickly handling even challenging problems.
Most current AI applications have a narrow focus and are designed to excel at a specific set of tasks [13].
Professional and academic contexts increasingly embrace AI-based grammar checkers such as Grammarly and Hemingway Editor [14]. These AI-powered
tools evaluate content, identify grammar errors, and
suggest ways to enhance clarity, conciseness, and
tone [15]. Contemporary grammar-checking programs are significantly more advanced than older tools, which focused mainly on basic corrections, and now help ensure clear and error-free written communication [16].
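As a rough illustration of the kind of surface-level checks such tools automate, the following minimal Python sketch flags overly long sentences, likely passive constructions, and weak intensifiers. The rules and thresholds are arbitrary assumptions chosen for demonstration; commercial tools combine statistical language models with far richer rule sets.

import re

# Toy illustration of the surface-level checks that grammar and style tools
# automate; rules and thresholds are arbitrary and far simpler than the
# statistical models used by commercial products.
def check_style(text):
    issues = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        words = sentence.split()
        if len(words) > 30:  # very long sentences hurt clarity
            issues.append(f"Sentence {i}: consider splitting ({len(words)} words).")
        if re.search(r"\b(is|are|was|were|been|being)\s+\w+ed\b", sentence):
            issues.append(f"Sentence {i}: possible passive construction.")
        if re.search(r"\b(very|really|quite)\b", sentence, re.IGNORECASE):
            issues.append(f"Sentence {i}: weak intensifier; prefer a precise word.")
    return issues

draft = "The experiment was conducted by the team. The results are really interesting."
for issue in check_style(draft):
    print(issue)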
AI is advancing rapidly with large language models such as Gemini from Google and GPT-4 from OpenAI [17,18]. Unlike grammar checkers, these models can generate text that resembles human writing, conditioned on the context they are given [18]. They have multiple capabilities, including translation between languages through multilingual chatbots and the writing of content such as essays or articles [19]. Nonetheless, some members of the AI community have raised concerns about the potential misuse of this technology to generate misinformation [20,21].
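To illustrate how such a model is typically used in practice, the sketch below sends a rewriting request through OpenAI's Python client. The model identifier, the prompt, and the availability of an API key are assumptions made for illustration only, and any output would still require human review and disclosure, as discussed later in this paper.

from openai import OpenAI

# Minimal sketch of prompting a large language model to improve a sentence.
# Assumes the openai Python client is installed and the OPENAI_API_KEY
# environment variable is set; the model name is an assumption and changes
# over time. Any output must be reviewed by the authors and disclosed.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; substitute an available model
    messages=[
        {"role": "system", "content": "You are a careful scientific editor."},
        {"role": "user", "content": "Rewrite for clarity and concision: "
                                    "'The results that were obtained by us were significant.'"},
    ],
)
print(response.choices[0].message.content)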
AI is also reshaping how data are handled in academic research [22]. For example, AI-driven tools
analyze massive amounts of genomic data,
revolutionizing our understanding of complex diseases
[23]. In the social sciences, AI draws on huge datasets, such as those from social networks and surveys, to uncover relationships and patterns that were previously elusive [24]. In addition, medical surveys can be greatly improved through AI systems such as ChatGPT, which enhance questionnaire design, process data quickly, improve analytical techniques, and modernize reporting, thereby yielding better insights and improved healthcare outcomes [25,26]. Similarly, Fasola [27] describes applications such as Iris.ai and Scite that help academics scan and summarize large volumes of scholarly literature, expediting literature reviews and making them more comprehensive.
As the technology becomes more prevalent, the question of AI ethics grows progressively more significant. Ethical concerns encompass potential misuse, privacy, liability, and bias, all of which are complex issues [28]. Each area should receive considerable attention to ensure that AI is used properly and ethically.
I. Bias and Fairness
AI bias is an important ethical concern [29]. AI systems are often trained on large datasets that may contain societal prejudices related to gender, race, social class, and other attributes [30]. If such biases are present in the data, the AI may reproduce or even amplify them [29].
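A minimal sketch of how such bias can propagate, using fabricated numbers: a model fitted to historically biased labels reproduces the disparity, which a simple selection-rate comparison can expose.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of bias propagation: the labels encode a historical
# prejudice (group B needed a higher score to be "selected"), and a model
# fitted to those labels reproduces the disparity. All numbers are fabricated.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B
score = rng.normal(size=n)                   # a legitimate predictor
y = ((score - 0.8 * group) > 0).astype(int)  # biased historical outcomes

X = np.column_stack([score, group])          # the model can "see" group membership
pred = LogisticRegression().fit(X, y).predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"Selection rate, group A: {rate_a:.2f}")
print(f"Selection rate, group B: {rate_b:.2f}")
print(f"Demographic parity gap:  {abs(rate_a - rate_b):.2f}")  # a crude fairness check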
II. Privacy and Surveillance
Many AI applications collect information, including data from social media and smart devices, without asking for users’ consent [31]. This information can be used to build detailed profiles of individuals for tracking or targeted advertising [32]. Ensuring privacy requires strong data protection laws such as the General Data Protection Regulation, clear data management practices, and technologies that preserve confidentiality [33].
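As one small, illustrative confidentiality-preserving step, direct identifiers can be pseudonymized with a keyed hash before analysis. The sketch below uses fabricated records and a hypothetical key; on its own it is not sufficient for GDPR compliance, since re-identification risk, key management, and lawful basis must still be addressed.

import hashlib
import hmac

# Minimal pseudonymization sketch: replace a direct identifier with a keyed
# hash before analysis. The key must be stored separately from the data.
SECRET_KEY = b"store-this-key-separately"   # hypothetical key, for illustration only

def pseudonymize(identifier):
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

records = [
    {"patient_id": "JD-1984-113", "age": 54, "outcome": "improved"},  # fabricated data
    {"patient_id": "MK-1972-078", "age": 61, "outcome": "stable"},
]
for record in records:
    record["patient_id"] = pseudonymize(record["patient_id"])
print(records)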
III. Accountability and Transparency
Another ethical issue is ensuring the accountability and transparency of AI [34]. Many AI systems, especially those relying on deep learning, work like “black boxes,” meaning it is unclear how they arrive at their decisions [35]. This absence of clarity can create problems, for instance in medical diagnosis, where it is necessary to understand the reasoning behind decisions [36].
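One partial remedy is post-hoc explanation. The following sketch, on synthetic data, uses permutation importance to estimate how much each input feature contributes to an otherwise opaque model's predictions; it illustrates the idea rather than offering a full solution to the black-box problem.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Post-hoc explanation of an opaque model on synthetic data: permutation
# importance estimates how strongly each feature drives the predictions.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance = {result.importances_mean[i]:.3f}")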
Enhancement vs. Diminishment
AI enhances productivity by automating repetitive tasks, supporting literature reviews, and producing article drafts [37,38]. Consequently, AI allows scholars and writers to focus more on analysis, interpretation, and original thought [39]. Manuscripts can now be made almost free of errors with the help of AI tools like Grammarly [40]. However, the same advancements might also erode essential human skills: people may lose critical thinking, problem-solving, and creativity when they rely too heavily on AI-generated information [41,42]. Moreover, automation tends to devalue meaningful work that humans previously did, which could lead to a loss of knowledge and lower satisfaction within research and artistic fields [43].
Academic integrity: concerns about originality and
authenticity
The use of artificial intelligence presents academic writing with new challenges of verification and originality [44]. The difficulty of distinguishing machine-written text from human writing raises fears of plagiarism and of fabricated content, as well as questions about who the actual creator is, even after an article has been published [45]. Attributing authorship to work one did not produce threatens academic honesty [44,46].
To ensure the correct usage of AI technology in
academia, explicit regulations and moral guidelines are
needed [25,47]. Publishers and educational institutions
should develop rules specifying allowable AI applications
in research papers and writing pieces, stressing the need
for creativity preservation and proper citation [48].
Transparency: importance of disclosing AI usage
For trust and transparency, it is important to declare the
use of AI technologies when writing or doing research
[49]. This helps maintain academic integrity and
transparency, allowing people to know what role AI
played in creating a paper [50]. Such disclosure is
important in collaborative research because different
contributors may have various degrees of engagement
with AI-generated information [51]. Furthermore, just as citing sources or acknowledging collaborators, disclosing AI usage must become a habit [52].
[52]. Transparency increases confidence in the research
process through a commitment to ethics and
accountability [49].
Balancing Benefits and Risks
One must consider the advantages and disadvantages
of applying AI to research and writing [53]. It can change
research and communication processes by making them
more efficient and accessible [54]. Nevertheless, it may affect the originality of human work, hinder the development of key skills, and undermine academic ethics [55,56]. To strike this balance, best practices are needed that harness the benefits of AI while minimizing its potential downsides. These include encouraging the acquisition of skills that complement AI technologies rather than relying on them alone, setting out specific guidelines for their use by educators, and promoting transparency [57,58]. This allows us to maintain the principles that underpin academic and creative endeavors while using AI to enhance our work [59].
Acceptable Uses of AI in Research
AI is transforming research by offering instruments that
improve several facets of the study process [54,60]. AI
has major benefits for anything from data analysis to
paper writing [61]. The appropriate applications of AI in
research, the associated hazards, and the ethical
standards that must be adhered to for responsible AI
integration are all covered in depth below.
1. Situations where AI usage is widely accepted
AI technologies are now widely utilized in research, especially in tasks that support decision-making, improve precision, and streamline routine work [60,62]:
Language and Grammar Checking: AI-powered applications like Grammarly have been shown to improve manuscripts written by multiple authors [63]. They point out sentence errors, offer style suggestions, and improve coherence and clarity in scientific write-ups [15]. This technology also allows researchers to focus more on the relevance and content of their work [15].
Data Analysis and Interpretation: AI algorithms are increasingly used to analyze massive datasets, spot trends, and produce insights that may be hard for people to discover [64]. Machine learning models, for example, may be applied to complex biological data to find novel associations or make highly accurate predictions [65].
Literature Review Automation: AI literature review tools can sift through numerous publications, pinpoint important research findings, and summarize them [66]. This helps keep researchers informed of the latest trends and saves them considerable time [66].
Plagiarism Detection: A study’s originality can be verified with AI-based plagiarism-checking tools such as Turnitin [67]. These systems analyze a paper almost instantly, comparing it with vast databases of previously published articles to detect problems relating to originality and citation [44], as sketched after this list.
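The sketch below illustrates the core idea behind such similarity screening: texts are represented as TF-IDF vectors and pairs with high cosine similarity are flagged for review. The example texts and the threshold are invented for illustration, and production tools rely on far larger databases and more sophisticated matching.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy similarity screen: vectorize texts with TF-IDF and flag pairs whose
# cosine similarity exceeds an arbitrary threshold.
submission = "Artificial intelligence is transforming data analysis in research."
database = [
    "AI is transforming how data analysis is performed in modern research.",
    "The mitochondria is the powerhouse of the cell.",
]

vectors = TfidfVectorizer().fit_transform([submission] + database)
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()

for text, score in zip(database, scores):
    flag = "REVIEW" if score > 0.4 else "ok"  # arbitrary threshold
    print(f"{score:.2f}  {flag}  {text}")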
2. Potential pitfalls: risks of over-reliance and ethical
boundaries
Despite its many advantages, using AI in research still carries risks [44,68]:
Over-Reliance on AI: One risk is that
researchers may overly rely on AI technologies,
leading to diminished critical and creative
thinking abilities [69]. For instance, if researchers rely too heavily on AI for data interpretation, they might accept AI-generated results as correct without fully understanding how they were produced or questioning their validity [50,70].
Ethical Concerns: Several ethical concerns
emerge when using AI in research. To illustrate,
it is considered unethical to use AI-generated
materials to deceive reviewers or academia
through automatically written articles or
fraudulent information [71]. Additionally,
implementing AI models in sensitive fields, such
as predictive modeling within medical care, can
result in biased outcomes if these models are
not developed and tested appropriately [72].
3. Guidelines for ethical AI use
The following suggestions should be adhered to in order to avoid the potential dangers of using AI in research [73]:
Transparency and Disclosure: Researchers
must be straightforward and clear about what
kinds of AI they are applying and how
extensively in their research [52]. This includes
specifications on the exact AI tools used, how
they have been used, and how they have
influenced the study outcomes [52]. Transparency ensures that AI has a defined role in the work, making it possible to assess its effect appropriately [74].
Human Oversight: AI should assist human judgment rather than substitute for it [75]. Researchers should retain final control over the research process, using AI to support their analysis and conclusions rather than as a replacement for their expertise [75]. This ensures that human judgment and critical reasoning always underpin the study [54].
Ethical AI Development and Use: AI technologies should be created and applied in ways that uphold moral values, including accountability, transparency, and justice [30,76]. This entails ensuring that AI models are free of prejudice, that their decision-making procedures are comprehensible and interpretable, and that their application upholds the rights and dignity of everyone concerned [77].
4. Establishing clear policies and guidelines for AI usage in various domains
Institutions and research groups must set specific rules and regulations for the responsible use of AI in research [78]. Important areas that these policies should cover include:
Data Privacy and Security: Policies should
ensure that AI uses comply with data privacy
and security regulations like the General Data
Protection Regulation in Europe [79]. This
implies safeguarding personal data, ensuring its
use is strictly for legitimate research, and
preventing misuse or unauthorized access [80].
Intellectual Property Rights: Policies should define how AI-generated data and materials will be treated [81]. This includes identifying who owns research outcomes generated with AI and ensuring that its application does not infringe on anyone else’s intellectual property rights [82].
Ethical Review Processes: Research involving AI should go through ethical review procedures as stringent as those applied to studies of human participants [83,84]. The pros and cons of using AI must be weighed, the ethical implications of decisions made with AI must be considered, and its use must fit the broader ethical aims of the research [85,86].
The following points summarize guidelines for researchers using generative AI and AI-assisted technologies [87]: First, declare any AI use in your writing. Second, while AI can be useful for improving language and readability, it must not replace human judgment in critical tasks such as conducting research and drawing conclusions. Third, remember that AI output may contain inaccuracies and biases and therefore requires thorough review. Fourth, because these roles can only be undertaken by human beings, authorship or co-authorship should not be attributed to an AI. Ultimately, you are responsible for the integrity, originality, and correctness of published texts.
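As a concrete illustration of the first point, a disclosure statement might take a form along the following lines (an example formulation, not a prescribed template): “During the preparation of this work, the authors used [name of tool, version] to [state the purpose]. The authors then reviewed and edited the content as needed and take full responsibility for the content of the publication.”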
5. Promoting awareness and education about ethical AI practices
Awareness of and education about ethical AI practices are essential to the responsible long-term use of AI in research [88]:
Training Programs: Organizations must offer training programs on the ethical use of AI for professionals, students, and researchers [56]. Such programs should cover data privacy, intellectual property rights, AI ethics, and ethical use in scientific inquiry [89].
Workshops and Seminars: Many workshops and
seminars provide an avenue for discussions on
current developments in AI ethics and best
practices [90].
Guidance Documents and Resources: Providing
researchers access to ethical frameworks,
advisory papers and other useful resources
could help them navigate the complex ethical
issues in AI [91].
By fostering a research culture that approaches AI usage with accountability and integrity, the research community will be able to benefit from the advantages of AI while upholding its standards.
CONCLUSION
As we enter an era increasingly shaped by AI, it must be used ethically in writing and research. Its transformative capabilities in these areas come with weighty responsibilities: academic honesty and moral standards must not be sacrificed if human creativity and productivity are to be boosted by AI.
For the future, we need to develop clear guidelines and
standards for using AI in academic research and
professional efforts. To achieve this, there should be clarity about how AI systems are used, responsible utilization of machine learning that enhances rather than replaces human expertise, and organizations that train stakeholders in the relevant technological competencies. The success of these ideas will depend largely on collaboration among the scientists and engineers who build intelligent systems, university educators, ethicists, and the public servants responsible for policy.
We must follow ethical guidelines and continue the conversation about the uses of AI in writing and research; this will enable us to make the best use of its benefits without compromising our intellectual pursuits.
Submission statement
This work has not been submitted for publication
elsewhere, and all the authors listed have approved the
enclosed manuscript.
AVAILABILITY OF DATA AND MATERIALS
Not applicable
AUTHORS’ CONTRIBUTIONS
Conceptualization, investigation, and supervision:
Fatima Alnaimat. Writing - original draft: Abdel Rahman
Feras AlSamhori. Writing - review & editing: Fatima
Alnaimat, Abdel Rahman Feras AlSamhori.
CONFLICTS OF INTEREST
None
FUNDING
None
ACKNOWLEDGMENT
Grammarly (https://app.grammarly.com/) was utilized to enhance the grammar and clarity of this manuscript.
DISCLAIMER
This review is an original work, and no part of it has been
copied, published, or submitted elsewhere in whole or in
part in any language.
References
1. Liu PR, Lu L, Zhang JY, Huo TT, Liu SX, Ye ZW. Application of artificial intelligence in medicine:
an overview. Curr Med Sci 2021;41(6):1105-1115.
2. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc 2020;92(4):807-812.
3. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol MITAT Off
J Soc Minim Invasive Ther 2019;28(2):73-81.
4. Ayorinde IT, Idyorough PN. Exploring the frontiers of artificial intelligence: a comprehensive analysis. Innov Sci
Technol 2024;3(4):35-49.
5. Koppisetti VSK. Deep learning: advancements and applications in artificial intelligence. ESP Int J Adv Comput
Technol ESP-IJACT 2024;2(2):106-113.
6. Howard J. Artificial intelligence: implications for the future of work. Am J Ind Med 2019;62(11):917-926.
7. McLean G, Osei-Frimpong K. Hey Alexa … examine the variables influencing the use of artificial intelligent in-
home voice assistants. Comput Hum Behav 2019;99:28-37.
8. Carter E, Anderson J. Optimizing financial services with AI: enhancing risk management and strategic decision
making. 2024 [cited 2024 Aug 13]
9. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms
mean and how will they impact health care? J Arthroplasty 2018;33(8):2358-2361.
10. Mondal B. Artificial intelligence: state of the art. In: Balas VE, Kumar R, Srivastava R, editors. Recent Trends and
Advances in Artificial Intelligence and Internet of Things. Springer International Publishing, 2020 [cited 2024 Aug
15].
11. Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text:
examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport 2023;40(2):615-622.
12. Kaplan A, Haenlein M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations,
and implications of artificial intelligence. Bus Horiz 2019;62(1):15-25.
13. Chang C, Shi W, Wang Y, Zhang Z, Huang X, Jiao Y. The path from task-specific to general purpose artificial
intelligence for medical diagnostics: a bibliometric analysis. Comput Biol Med 2024;172:108258.
14. Giglio AD, Costa MUP da. The use of artificial intelligence to improve the scientific writing of non-native English speakers. Rev Assoc Med Bras 2023;69(9):e20230560.
15. Mendoza Zaragoza NE, Téllez Tula Á, Herrera Corona L. Artificial Intelligence in thesis writing: exploring the role
of advanced grammar checkers (Grammarly). Estud Perspect Rev Científica Académica 2024;4(2):649-683.
16. Infante Vera AE, Landivar Mesías EP, Paccha Soto MDLÁ. Intelligent English grammar: AI strategies to master
the rules [Internet]. 1°. CID-Centro de Investigación y Desarrollo; 2024 [cited 2024 Aug 15].
17. Farhat F, Chaudhry BM, Nadeem M, Sohail SS, Madsen DØ. Evaluating large language models for the national
premedical exam in India: comparative analysis of GPT-3.5, GPT-4, and Bard. JMIR Med Educ 2024;10:e51523.
18. Yenduri G, Ramalingam M, Selvi GC, Supriya Y, Srivastava G, Maddikunta PKR, et al. GPT (Generative Pre-
Trained Transformer)— a comprehensive review on enabling technologies, potential applications, emerging
challenges, and future directions. IEEE Access. 2024;12:54608-54649.
19. Liu Y, Han T, Ma S, Zhang J, Yang Y, Tian J, et al. Summary of ChatGPT-Related research and perspective
towards the future of large language models. Meta-Radiol 2023;1(2):100017.
20. Hadi MU, Tashi QA, Qureshi R, Shah A, Muneer A, Irfan M, et al. A survey on large language models: applications,
challenges, limitations, and practical usage. 2023 [cited 2024 Aug 15].
21. Ray PP. ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations
and future scope. Internet Things Cyber-Phys Syst 2023;3:121-154.
22. Rahmani AM, Azhir E, Ali S, Mohammadi M, Ahmed OH, Ghafour MY, et al. Artificial intelligence approaches and
mechanisms for big data analytics: a systematic study. PeerJ Comput Sci 2021;7:e488.
23. Dias R, Torkamani A. Artificial intelligence in clinical and genomic diagnostics. Genome Med 2019;11(1):70.
24. Miller T. Explanation in artificial intelligence: insights from the social sciences. Artif Intell 2019;267:1-38.
25. Alnaimat F, Al-Halaseh S, AlSamhori ARF. Evolution of research reporting standards: adapting to the influence of
artificial intelligence, statistics software, and writing tools. J Korean Med Sci 2024;39:e231.
26. AlSamhori ARF, AlSamhori JF, AlSamhori AF. ChatGPT role in a medical survey. High Yield Med Rev, 2023;1(2).
27. Fasola OS. Harnessing artificial intelligence-powered search engines for the literature review process. DigitalCommons@University of Nebraska - Lincoln; 2023.
28. Africa Institute for Regulatory Affairs LBG, Mensah GB. AI ethics. Afr J Regul Aff 2024 [cited 2024 Aug 15].
29. Tatineni S. Ethical considerations in AI and data science: bias, fairness, and accountability. Int J Inf Technol
Manag Inf Syst 2019;10:11-20.
30. O’Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, et al. Legal, regulatory, and ethical frameworks
for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Robot
Comput Assist Surg MRCAS 2019;15(1):e1968.
31. Laitinen A, Sahlgren O. AI systems and respect for human autonomy. Front Artif Intell 2021;4:705164.
32. Manheim K, Kaplan L. Artificial Intelligence: risks to privacy and democracy. Yale J Law Technol 2019;21:106.
33. Timan T, Mann Z. Data protection in the era of artificial intelligence: trends, existing solutions and
recommendations for privacy-preserving technologies. In: Curry E, Metzger A, Zillner S, Pazzaglia JC, García
Robles A, editors. The Elements of Big Data Value. Cham: Springer International Publishing; 2021 [cited 2024
Aug 15]
34. Cheong BC. Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic
decision-making. Front Hum Dyn 2024;6.
35. Qamar T, Bawany NZ. Understanding the black-box: towards interpretable and reliable deep learning models.
PeerJ Comput Sci 2023;9:e1629.
36. Von Eschenbach WJ. Transparency and the black box problem: why we do not trust AI. Philos Technol
2021;34(4):1607-1622.
37. Bolanos F, Salatino A, Osborne F, Motta E. Artificial Intelligence for literature reviews: opportunities and
challenges. arXiv, 2024 [cited 2024 Aug 16].
38. Lee PY, Salim H, Abdullah A, Teo CH. Use of ChatGPT in medical research and scientific writing. Malays Fam
Physician Off J Acad Fam Physicians Malays 2023;18:58.
39. Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text:
examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport 2023;40(2):615-622.
40. Chemaya N, Martin D. Perceptions and detection of AI use in manuscript preparation for academic journals. PloS
One 2024;19(7):e0304807.
41. Cardon P, Fleischmann C, Aritz J, Logemann M, Heidewald J. The challenges and opportunities of ai-assisted
writing: developing ai literacy for the AI age. Bus Prof Commun Q 2023;86(3):257-295.
42. Yusuf A, Bello S, Pervin N, Tukur AK. Implementing a proposed framework for enhancing critical thinking skills in
synthesizing AI-generated texts. Think Ski Creat 2024;53:101619.
43. Bankins S, Formosa P. The ethical implications of Artificial Intelligence (AI) for meaningful work. J Bus Ethics
2023;185(4):725-740.
44. Carobene A, Padoan A, Cabitza F, Banfi G, Plebani M. Rising adoption of artificial intelligence in scientific
publishing: evaluating the role, risks, and ethical implications in paper drafting and review process. Clin Chem
Lab Med CCLM 2024;62(5):835-843.
45. Longoni C, Tully S, Shariff A. The AI-human unethicality gap: plagiarizing ai-generated content is seen as more
permissible. OSF, 2023 [cited 2024 Aug 16].
46. Kumar R, Eaton SE, Mindzak M, Morrison R. Academic integrity and artificial intelligence: an overview. In: Eaton SE, editor. Second Handbook of Academic Integrity. Springer Nature Switzerland, 2024 [cited 2024 Aug 16].
47. Sounderajah V, Ashrafian H, Golub RM, Shetty S, De Fauw J, Hooft L, et al. Developing a reporting guideline for
artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open
2021;11(6):e047709.
48. Perkins M, Roe J. Academic publisher guidelines on AI usage: a ChatGPT supported thematic analysis.
F1000Research 2023;12:1398.
49. van Genderen ME, van de Sande D, Hooft L, Reis AA, Cornet AD, Oosterhoff JHF, et al. Charting a new course
in healthcare: early-stage AI algorithm registration to enhance trust and transparency. NPJ Digit Med
2024;7(1):119.
50. Perkins M. Academic Integrity considerations of AI large language models in the post-pandemic era: ChatGPT
and beyond. J Univ Teach Learn Pract 2023;20(2).
51. Ballardini RM, He K, Roos T. AI-generated content: authorship and inventorship in the age of artificial intelligence.
In: Pihlajarinne T, Vesala J, Honkkila O, editors. Online Distribution of Content in the EU. Edward Elgar Publishing,
2019 [cited 2024 Aug 16].
52. Kostygina G, Kim Y, Seeskin Z, LeClere F, Emery S. Disclosure standards for social media and generative
artificial intelligence research: toward transparency and replicability. Soc Media Soc
2023;9(4):20563051231216947.
53. Bahammam AS, Trabelsi K, Pandi-Perumal SR, Jahrami H. Adapting to the impact of Artificial Intelligence in
scientific writing: balancing benefits and drawbacks while developing policies and regulations. J Nat Sci Med 2023
Sep;6(3):152.
54. Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T, et al. Artificial Intelligence (AI): multidisciplinary
perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int J Inf Manag
2021;57:101994.
55. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People—an ethical framework for
a good ai society: opportunities, risks, principles, and recommendations. Minds Mach 2018;28(4):689–707.
56. Nguyen A, Ngo HN, Hong Y, Dang B, Nguyen BPT. Ethical principles for artificial intelligence in education. Educ
Inf Technol 2023;28(4):4221-4241.
57. Kargl M, Plass M, Müller H. A literature review on ethics for ai in biomedical research and biobanking. Yearb Med
Inform 2022;31(1):152-160.
58. Shneiderman B. Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy
human-centered AI systems. ACM Trans Interact Intell Syst 2020;10(4):26:1-26:31.
59. Allioui H, Mourdi Y. Unleashing the potential of AI: investigating cutting-edge technologies that are transforming
businesses. Int J Comput Eng Data Sci IJCEDS 2023;3(2):1–12.
60. Adams D, Chuah KM. Artificial intelligence-based tools in research writing: current trends and future potentials.
In: Artificial Intelligence in Higher Education. CRC Press, 2022.
61. Khalifa M, Albadawy M. Using artificial intelligence in academic writing and research: an essential productivity
tool. Comput Methods Programs Biomed Update. 2024;5:100145.
62. Akinrinmade AO, Adebile TM, Ezuma-Ebong C, Bolaji K, Ajufo A, Adigun AO, et al. Artificial Intelligence in
healthcare: perception and reality. Cureus 2023;15(9):e45594.
63. Raad B, Anjum F, Ghafar Z. Exploring the profound impact of artificial intelligence applications (QuillBot, Grammarly and ChatGPT) on English academic writing: a systematic review. 2023;V1:599-622.
64. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with
artificial intelligence. BMC Med 2019;17(1):195.
65. Stanfill MH, Marc DT. Health information management: implications of artificial intelligence on healthcare data
and information management. Yearb Med Inform 2019;28(1):56-64.
66. Baviskar D, Ahirrao S, Potdar V, Kotecha K. Efficient automated processing of the unstructured documents using
artificial intelligence: a systematic literature review and future directions. IEEE Access 2021;9:72894-72936.
67. Nwohiri A, Joda O, Ajayi O. AI-powered plagiarism detection: leveraging forensic linguistics and natural language
processing. FUDMA J Sci 2021;5(3):207-218.
68. Mhlanga D. Generative AI for emerging researchers: the promises, ethics, and risks. Rochester, NY, 2024 [cited
2024 Aug 17].
69. Shaji George A, Baskar T, Balaji Srikaanth P. The erosion of cognitive skills in the technological age: how reliance
on technology impacts critical thinking, problem-solving, and creativity. 2024 [cited 2024 Aug 17].
70. Cotton DRE, Cotton PA, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT.
Innov Educ Teach In 2024;61(2):228-239.
71. Chen Z, Chen C, Yang G, He X, Chi X, Zeng Z, et al. Research integrity in the era of artificial intelligence:
challenges and responses. Medicine (Baltimore) 2024;103(27):e38811.
72. de Hond AAH, Leeuwenberg AM, Hooft L, Kant IMJ, Nijman SWJ, van Os HJA, et al. Guidelines and quality
criteria for artificial intelligence-based prediction models in healthcare: a scoping review. Npj Digit Med
2022;5(1):1-13.
73. Miao J, Thongprayoon C, Suppadungsuk S, Garcia Valencia OA, Qureshi F, Cheungpasitporn W. Ethical
dilemmas in using ai for academic writing and an example framework for peer review in nephrology academia: a
narrative review. Clin Pract 2023;14(1):89-105.
74. Walmsley J. Artificial intelligence and the value of transparency. AI Soc 2021;36(2):585-595.
75. Mondal H, Mondal S. Artificial intelligence-generated content needs a human oversight. Indian J Dermatol
2024;69(3):284.
76. Abràmoff MD, Tobey D, Char DS. Lessons learned about autonomous ai: finding a safe, efficacious, and ethical
path through the development process. Am J Ophthalmol 2020;214:134-142.
77. Leslie D. Understanding artificial intelligence ethics and safety. 2019 [cited 2024 Aug 17].
78. Spivakovsky OV, Omelchuk SA, Kobets VV, Valko NV, Malchykova DS. Institutional policies on artificial
intelligence in university learning, teaching and research. Inf Technol Learn Tools 2023;97(5):181-202.
79. Sartor G, Lagioia F. The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. BEL,
2020 [cited 2024 Aug 17].
80. Forcier MB, Gallois H, Mullan S, Joly Y. Integrating artificial intelligence into health care through data access: can
the GDPR act as a beacon for policymakers? J Law Biosci 2019;6(1):317-335.
81. Adolfsson S. AI as a creator: how do AI-generated creations challenge EU intellectual property law and how
should the EU react? 2021 [cited 2024 Aug 17].
82. Kirakosyan A. Intellectual property ownership of AI-generated content. Digit Law J 2023;4:40.
83. Elendu C, Amaechi DC, Elendu TC, Jingwa KA, Okoye OK, John Okah M, et al. Ethical implications of AI and
robotics in healthcare: a review. Medicine (Baltimore) 2023 Dec;102(50):e36671.
84. Gasparyan AY, Ayvazyan L, Blackmore H, Kitas GD. Writing a narrative biomedical review: considerations for
authors, peer reviewers, and editors. Rheumatol Int 2011;31(11):1409-1417.
85. Osasona F, Amoo OO, Atadoga A, Abrahams TO, Farayola OA, Ayinla BS. Reviewing the ethical implications of
AI in decision-making processes. Int J Manag Entrep Res 2024;6(2):322-335.
86. Silva JAT da, Tsigaris P. Acknowledgments through the prism of the ICMJE and ChatGPT. Cent Asian J Med
Hypotheses Ethics 2024;5(2):117-126.
87. Lubowitz JH. Guidelines for the use of generative artificial intelligence tools for biomedical journal authors and
reviewers. Arthrosc J Arthrosc Relat Surg 2024;40(3):651-652.
88. Roche C, Wall PJ, Lewis D. Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI Ethics
2022;1-21.
89. Stahl BC, Wright D. Ethics and privacy in ai and big data: implementing responsible research and innovation.
IEEE Secur Priv 2018;16(3):26-33.
90. Floridi L. The ethics of artificial intelligence: principles, challenges, and opportunities. Oxford University Press
2023:272.
91. Crossnohere NL, Elsaid M, Paskett J, Bose-Brill S, Bridges JFP. Guidelines for Artificial Intelligence in medicine:
literature review and content analysis of frameworks. J Med Internet Res 2022;24(8):e36823.