Emerging plagiarism in peer-review evaluation reports: a tip of the iceberg?
Mikołaj Piniewski1*, Ivan Jarić2,3, Demetris Koutsoyiannis4, Zbigniew W. Kundzewicz5
1 Department of Hydrology, Meteorology and Water Management, Warsaw University of Life
Sciences, Warsaw, Poland, email: mikolaj_piniewski@sggw.edu.pl
2 Université Paris-Saclay, CNRS, AgroParisTech, Ecologie Systématique Evolution, Gif-sur-Yvette,
France, email: ivan.jaric@universite-paris-saclay.fr
3 Biology Centre of the Czech Academy of Sciences, Institute of Hydrobiology, České Budějovice,
Czech Republic
4 National Technical University of Athens, Athens, Greece, email: dk@itia.ntua.gr
5 Faculty of Environmental Engineering and Mechanical Engineering, Poznań University of Life
Sciences, Poznań, Poland, email: kundzewicz@yahoo.com
*Corresponding author
Abstract
The phenomenon of plagiarism in peer-review evaluation reports has remained surprisingly unrecognized,
despite a notable rise in such cases in recent years. This study reports multiple cases of peer-review
plagiarism recently detected in 50 different scientific articles published in 19 journals. Their in-depth
analysis reveals that such reviews tend to be nonsensical, vague and unrelated to the actual
manuscript. The analysis is followed by a discussion of the roots of such plagiarism, its consequences
and measures that could counteract its further spread. In addition, we demonstrate how the increased
availability of and access to AI technologies through the recent emergence of chatbots may be misused to
write or conceal plagiarized peer reviews. Plagiarizing reviews is severe misconduct that requires
urgent attention and action from all affected parties.
Keywords: plagiarism, peer-review, publication ethics, scientific misconduct, chatbot
Introduction
A primary aim of "classic" peer review, which has been around for many decades, is to assist the
journal editor in deciding whether a submitted manuscript is suitable for publication in a
journal, and to offer suggestions on how manuscripts potentially suitable for publication
could be improved (Fiala and Diamandis, 2017). Beck (2003) reduced the task to a binary scheme:
filtering out junk science and providing useful feedback that helps the authors of the remaining
works improve them. A review should be fair, provided in reasonable time, framed in the spirit of
constructive criticism, and its recommendations should be supported by arguments. Kundzewicz &
Koutsoyiannis (2005) identified a range of potential problems in the peer-review system, including
subjectivity, bias, abuse, non-detection of errors, as well as fraud and misconduct.
While much has been written about plagiarism in research and scientific publishing (Butler,
2010; Helgesson and Eriksson, 2015; Maurer et al., 2006), virtually no attention has been paid to the fact
that the peer-review process can also be subject to plagiarism. We substantiate the latter in this paper
by reporting an original case of detection of plagiarism in peer-review evaluation reports (hereafter
referred to as "peer-review plagiarism"). Since the reported case reflects a systemic problem, we also
interpret the motivations of plagiarizers and discuss possible solutions to prevent the problem,
as well as the emerging issue of using artificial intelligence (AI) chatbots to write or conceal
plagiarized reviews.
The real case of reviewers stealing ideas, i.e. rejecting papers and then publishing their own papers based
on the stolen ideas, is beyond the scope of this paper. The sample of such cases of serious fraud
known to us is very small, so we do not refer to it here.
Detected cases of peer-review plagiarism
The issue of plagiarism detection in scientific publications is very old: the journal Science published a
discussion about a potential plagiarism case in the late 19th century (Halsted, 1896). But what about
the opposite problem, i.e. cases of plagiarism in peer-review reports detected by the authors of the
reviewed papers? Indeed, in the peer-review reports for two manuscripts that the lead author of this article
recently published (Eini et al., 2022; Venegas-Cordero et al., 2023), the lack of substance, vagueness and odd
jargon (with syntax or orthographic errors) raised suspicions about the originality of some (not all)
reviews.
Both manuscripts were submitted to a reputable journal, which would be expected to ensure competent,
high-quality reviews. To address our suspicions, we assessed the peer-review reports with an online
plagiarism check tool (https://www.duplichecker.com/), which returned a similarity index ranging
between 44% and 89% in three out of four peer-reviews for the first manuscript, and between 44%
and 100% in two out of three peer-reviews for the second manuscript. These figures likely
underestimate the true level of plagiarism, since we were able to identify additional identical
phrases using the Google search engine. If such high similarity indices were found in
incoming journal manuscripts, they would likely result in desk rejections, as exceeding a rule-of-
thumb threshold of 10-15% similarity usually triggers visual inspection by editors (Lykkesfeldt,
2016). Of course, the editor always has the final say, as some text overlaps may be well justified.
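For illustration, a crude similarity index of this kind can be approximated by the share of a review's five-word phrases that also occur in a candidate source text, as in the minimal Python sketch below; it does not reproduce the algorithm of the tool we used or of any commercial system, and the example strings are toy fragments invented for demonstration.

import re

def word_ngrams(text, n=5):
    """Lower-case the text, keep only words, and return the set of n-word phrases."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(review, source, n=5):
    """Fraction of the review's n-word phrases that also occur in the source (0..1)."""
    review_grams = word_ngrams(review, n)
    if not review_grams:
        return 0.0
    return len(review_grams & word_ngrams(source, n)) / len(review_grams)

# Toy example: compare a suspicious review fragment against one candidate source.
review_text = ("The research gap and the goals of the research are not explained in detail "
               "which leads to the reader missing the significance of the research.")
source_text = "The research gap and the goals of the research are not explained in detail."
score = similarity_index(review_text, source_text)
print(f"similarity index: {score:.0%}")
if score > 0.15:  # rule-of-thumb 10-15% threshold used for screening manuscripts
    print("flag for manual inspection by the editor")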
To gain better insight into the newly noted phenomenon of peer-review plagiarism, we carried
out two types of analyses. First, we studied the entire length of one of the five peer-review reports that
our initial analysis flagged as plagiarized (the one with an estimated similarity index of 59%) by
breaking it into smaller pieces (one to three sentences each) and searching for the full quotes with the Google
search engine (other search engines were also tried, but gave inferior results). Second, we
performed an in-depth analysis of a single quote, consisting of Google searches for different
variants of the quote, extraction of the search data and further analysis. In both analyses, the
Google search engine by default omitted some entries very similar to the first few displayed. It was
possible, though, to repeat the search with the omitted results included, which led to a much higher
number of relevant hits.
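A minimal sketch of this chunking step is shown below (a naive sentence splitter that groups up to three sentences into one quoted query). The actual searches were carried out manually in the Google web interface, so the helper names here are purely illustrative.

import re

def split_sentences(text):
    """Naive sentence splitter on ., ! and ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def make_quote_queries(review, max_sentences=3):
    """Group consecutive sentences (up to max_sentences at a time) into quoted phrases
    that can be pasted into a web search engine for exact-phrase matching."""
    sentences = split_sentences(review)
    queries = []
    for i in range(0, len(sentences), max_sentences):
        chunk = " ".join(sentences[i:i + max_sentences])
        queries.append(f'"{chunk}"')  # quotation marks force an exact-phrase search
    return queries

# Example usage on a short review fragment.
for query in make_quote_queries("The research gap is not explained. The contribution is weak. Please revise."):
    print(query)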
The first analysis showed that exact quotes from the selected review report could be found in 22
different sources (Supplementary Table 1), all of which were existing open review reports
available online and published in 2021-2023. The lack of identified cases before 2021 suggests that the
phenomenon could be new, although it could also simply have been harder to detect earlier, since the
growth of open-review practices is relatively recent. The likelihood of plagiarism is undeniable, as the
length of the investigated quotes ranged between 21 and 44 words, while a common rule of thumb holds that
even a string of five consecutive identical words in two source items represents likely
plagiarism. It is also clear that the same phrases had already been recycled several times. Interestingly, the
numbers of anonymous and eponymous (signed) reviews were similar, and in a few cases we found that
two different reviews with exactly matching phrases were signed by the same individual.
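To make this rule of thumb concrete, the short sketch below lists the five-word strings shared by two invented review fragments; it is a toy illustration, not the procedure used to compile Supplementary Table 1.

import re

def shared_five_word_strings(text_a, text_b):
    """Return every run of five consecutive words that appears in both texts."""
    def runs(text):
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + 5]) for i in range(len(words) - 4)}
    return runs(text_a) & runs(text_b)

review_a = "The research gap and the goals of the research are not explained in detail."
review_b = "Unfortunately, the goals of the research are not explained in detail by the authors."
for phrase in sorted(shared_five_word_strings(review_a, review_b)):
    print(phrase)  # each shared phrase is, by the rule of thumb, a likely sign of copying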
One would generally assume that the contents and language of a peer-review report should be very
much discipline-specific. However, that was not the case for the plagiarized review: the same
phrases were used for manuscripts from diverse disciplines. Thus, the most important feature of the
plagiarized quotes was their vagueness: such one-size-fits-all comments do not refer to
any specific line of the reviewed manuscript, e.g. (Supplementary Table 1):
More explanation is needed for where there is a research gap and what the goals of the
research are. The research gap and the goals of the research are not explained in detail
which leads to the reader missing the significance of the research.
We also found that plagiarized reviews often seemed to have multiple sources, representing a
"collage" of texts from different existing reviews rather than a simple copy of a single review. While
exactly the same phrases could be found in multiple reviews, it is usually difficult to assess whether
a case involves plagiarism or self-plagiarism, due to the prevailing anonymity of reviewers.
In the second analysis we picked the following quote from one of the reviews:
The major defect of this study is the debate or Argument is not clear stated in the introduction
session. Hence, the contribution is weak in this manuscript. I would suggest the author
to enhance your theoretical discussion and arrives your debate or argument.
We picked this quote because it is long (three sentences, 43 words) and contains multiple language errors,
and the presence of exactly the same errors is strong evidence of plagiarism. For example, one notable
error is "Argument" with an upper-case A in the middle of a sentence.
Our analysis revealed that different variants of this quote occur in 50 different review reports
available on the internet (Supplementary Table 2). This number is very likely an underestimate of the
true extent of plagiarism of this quote, first because the majority of peer-review reports are not
available online and thus cannot be detected, and second because various modifications of the original
text may exist that our search could not capture. In half of all cases the
identified reviews contained exactly the same quote, whereas the remaining half contained very similar
variants (e.g. one word modified, or two of the three phrases kept).
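The notion of a "very similar variant" can be illustrated with a simple character-level similarity ratio, as in the sketch below; this is only an illustration of the idea, not the procedure we actually used (which relied on manual searches for reworded variants).

from difflib import SequenceMatcher

REFERENCE = ("The major defect of this study is the debate or Argument is not clear "
             "stated in the introduction session.")

def variant_similarity(candidate, reference=REFERENCE):
    """Character-level similarity ratio between a candidate quote and the reference (0..1)."""
    return SequenceMatcher(None, reference.lower(), candidate.lower()).ratio()

# An exact copy scores 1.0; a variant with one word changed still scores very high.
exact = REFERENCE
variant = REFERENCE.replace("major defect", "main defect")
print(round(variant_similarity(exact), 2))    # 1.0
print(round(variant_similarity(variant), 2))  # close to 1.0, flagged as a near-duplicate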
We further analysed the 50 identified search results, extracting basic metadata from the 50 reviews
(Supplementary Table 3). The suspicious reviews were found in 19 different journals (14 reviews in
PLOS ONE, 11 in Sustainability) belonging to seven different publishers (MDPI 29, PLOS 14,
Elsevier 3). In 45 cases (all papers published by MDPI and PLOS) the identified websites were those
of journals sharing open reviews as part of their service. It is important to note that the adoption of
the practice of publishing peer-review reports is a positive development (Ross-Hellauer, 2017),
leading to more openness and transparency, without which the detection of peer-review plagiarism
reported here would not have been possible. This option was pioneered by the journals Atmospheric Chemistry
and Physics (Copernicus Publications) and European Cells & Materials, as well as 36 journals
published by BioMed Central, in the early 2000s (Wolfram et al., 2020), and is currently offered by an
increasing number of publishers (e.g. EMBO, MDPI, Nature Research, PLOS, Royal Society) and
journals (e.g. eLife, PeerJ).
The majority (37) of the review reports identified as plagiarized were anonymous, whereas the eponymous
ones were predominantly (11 out of 13) signed by different individuals. Perhaps most
importantly, the problem has apparently been gaining strength in recent years: analysing the
submission dates of the manuscripts related to the suspicious reviews revealed that only two such
manuscripts were submitted in 2019, compared to 22 in 2022.
In five cases (three of which were published by Elsevier) the Google search returned links to pdf
files of manuscripts together with the authors' responses to review comments (typically files generated by
manuscript tracking systems). These files were most likely uploaded by the authors, as the corresponding
journals do not have open review policies. This finding is also very important, as it shows that the
review plagiarism problem may be even worse in the overwhelming majority of journals that do not
publish review reports. The confidence that review reports are not publicly available could
additionally encourage some reviewers to plagiarize. On the other hand, we suppose that the severity
of the review plagiarism problem is also connected to journal peer-review standards. The fact that
MDPI was at the top of our list resonates with a recent study that highlighted its prioritizing of self-
interest and its forsaking of best editorial and publication practices (Oviedo-García, 2021). The
second position of the megajournal PLOS ONE echoes recently revealed cases of anomalous
activity by a small number of extremely active editors in this journal (Petersen, 2019).
Why do reviewers review and why would they want to plagiarize?
The two cases described above are just a reflection of a systemic problem and very likely the tip of the
iceberg. Since the publish-or-perish system is self-imposed on the scientific community, it takes a huge
number of reviewers to examine the ever-growing number of manuscripts (Epstein et al., 2017)
submitted to an ever-growing number of scientific periodicals. From a reviewer's viewpoint, writing a
review on a voluntary basis can be a time-consuming and unwelcome necessity. Certainly, such effort
can yield benefits, such as learning early on about potentially important and interesting work,
prestige and public recognition, as well as training a skill that can ultimately improve
one's own research and writing (Koshy et al., 2018). However, many potential
reviewers probably feel that the benefit-cost ratio is low, that is, that writing a review is a kind of
sacrifice.
So why do some individuals not simply decline the invitation to review rather than engage in
preparing a plagiarized review? Perhaps saying no can sometimes be difficult, for instance when a
review request comes from the Editor-in-Chief to a member of the Editorial Board who is obliged to
prepare a certain (high) number of reviews on a regular basis. Some journals try to attract reviewers
by offering article processing charge (APC) discount vouchers or other perks. This may be an
important factor for some researchers in some countries, as shown by a survey conducted in Central
and Eastern European countries, in which MDPI has increased its market share tremendously in recent
years (Csomós & Farkas, 2023). Yet another possibility is a wish to boost one's CV by reporting the
number of registered reviews in the Publons web platform (now incorporated by Clarivate into Web of
Science under the tab "My peer review records"), which tracks review and editorial contributions for
academic journals (Teixeira da Silva & Nazarovets, 2022). In some countries (e.g. Poland) the review
record is part of periodic (e.g. annual or four-year) reporting, and being frequently invited to
review is interpreted as a proxy for professional recognition. Another potential scenario, possibly
practiced by some predatory publishers and journals, is peer-review plagiarism generated not by
reviewers but by editors, with the aim of hiding the actual absence of a rigorous peer-review process.
A motivation for undertaking peer review could also have unethical grounds: e.g., to "engineer" the
rejection of a competitor's or opponent's work (hidden behind the status of anonymity), to steal novel
ideas, or to force one's own references on authors to boost citations.
Considering the above, it should perhaps not be so surprising that some reviewers shamelessly
take the easy way out by resorting to "copy and paste", the main benefit being saved time. Another
possible motivation for a plagiarized review could simply be feeling insecure about using one's own words
to write reviews (often due to low English proficiency). Searching and copying from open reviews
may be a strategy to avoid writing one's own phrases in English. Indeed, scientific writing can be a
real challenge for non-native English writers, particularly for those whose native language uses a
non-Latin writing system (e.g. Cyrillic, Arabic or various Asian scripts) (Amano et al., 2023; Roig, 2015).
Regardless of motivation, plagiarizing reviews is serious misconduct that, as our analysis suggests,
could have grown fast in recent years and thus requires urgent attention and action from all affected
parties (editors, publishers and authors). One possible exception, where self-plagiarizing might be
tolerable, is the use of cliché phrases, possibly compiled by an organized individual for personal use
in reviews of problematic papers. It may well be that in some manuscript submission portals,
reviewers can select from a pool of common critical cliché statements compiled by the journal editors.
This is similar to editors' cliché letters, in which particular templates are used for different categories of
submitted manuscripts, for instance to announce desk rejections. Finally, using similar expressions in
evaluation reports by the same reviewer might be a habitual practice.
Consequences of peer-review plagiarism
Besides being a form of scientific misconduct in itself, a key problem with plagiarized reviews is
that they tend to be nonsensical, vague and unrelated to the actual
manuscript. Consequently, they negatively affect the quality of the peer-review process and of the
reviewed manuscripts, and thus also erode the quality of published science. They can also decrease
public trust in the peer-review process. The presence and prevalence of peer-review
plagiarism should be urgently assessed by publishers, editors, and other involved parties, to
understand the frequency of such cases. We call for urgent measures to monitor, control
and prevent such cases.
How to prevent peer-review plagiarism?
As noted earlier, our discovery would not have been possible without easy access to review reports
published openly alongside the accepted articles. Efforts to provide full public access to
peer-review documents are generally well received by the scientific community. For example, a
survey of authors who published their work in Nature Communications showed that 98% would
continue with open review in the future (Anonymous, 2020). Although the number of
journals adopting open peer review is growing fast and some journals have moved
from optional to obligatory models (Anonymous, 2022), wider adoption of this model is still lacking
(Wolfram et al., 2021). The advantage of this solution as a measure against plagiarized reviews
would be to allow cross-comparisons between different peer-review reports; publishing a review
alongside a paper is also useful for the readership. It would, however, have some negative side effects,
as greater availability of open-access review reports would also mean greater availability of texts to
plagiarize. Thus, plagiarism in journals with hidden reviews (which in theory should be impossible
to discover from outside, although our search revealed five such cases) might even be intensified by
such a partial measure.
Therefore, we argue that the most meaningful solution to prevent review plagiarism is routine
screening of all submitted reviews with plagiarism detection software. It is, in fact,
astonishing that this has not, to our knowledge, been done yet. Many journals already use such
software to screen manuscripts routinely, so why not use it also for review reports? One explanation
may be editors' concern that they might lose reviewers who would feel "oppressed";
indeed, finding reviewers is an editor's headache, and a frequently occurring problem is the
trade-off between quality and quantity (Epstein et al., 2017). The adoption of plagiarism software by
journals was a major breakthrough in the late 2000s: publishers were delighted to have an instrument to
police submissions (Butler, 2010). When some publishers first tested CrossCheck, they discovered
high levels of plagiarism of which they had previously been unaware. For example, in one of the
science journals of Taylor & Francis, 13 of 56 new incoming articles were rejected because of
plagiarism during the CrossCheck testing phase (Butler, 2010). Perhaps history could repeat itself,
this time with review reports. Publishers and journals already have the necessary tools; they just need
to start applying them, keeping in mind that using software is a necessary, but not a sufficient,
condition for correctly detecting plagiarism, since taking reported similarity indices at face value may be
risky (Weber-Wulff, 2019).
Certainly, plagiarism detection software should not be used blindly, as there are pitfalls to
avoid. There are common aspects of reviewed publications (e.g. format, language
norms, article structure) for which reviewers may provide similar suggestions, simply
out of their writing habits. It is the role of editors to carefully consider whether such "repetitions" constitute
potential cases of plagiarism, so as to avoid unnecessary harm to reviewers. In addition, if
reviewers learn that a certain journal checks review reports for plagiarism, they may
resort to reversing sentence order, replacing identical words or other similar techniques, potentially
resulting in awkward or unclear statements in their reports and thus unintentionally affecting the
authors' understanding of the reviewers' suggestions.
Another solution worth considering would be to establish a shared database among journals, to which all
received peer-review reports would be submitted, compared with past reviews, and kept for future
comparisons. A similar solution was advocated by Jarić (2016) for submitted manuscripts, but it could
easily be extended to submitted reviews. The reports would not have to be made available for individual
checking by editors; they could serve only as a pooled dataset for software cross-checks, while
being kept anonymous and confidential. Good examples of sharing peer-review data among journals,
such as the PEERE initiative, already exist (Squazzoni et al., 2017).
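One hypothetical way such a pooled database could preserve confidentiality is to store only hashed n-gram fingerprints of each review, never the readable text, and to compare incoming reviews against the pool, as sketched below. The class and method names are invented for illustration and do not describe PEERE or any existing system.

import hashlib
import re

def fingerprints(text, n=5):
    """Hash each n-word phrase so the pooled database never stores readable review text."""
    words = re.findall(r"[a-z']+", text.lower())
    shingles = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return {hashlib.sha256(s.encode()).hexdigest() for s in shingles}

class SharedReviewPool:
    """Hypothetical cross-journal pool: stores only fingerprints, keyed by an opaque review id."""

    def __init__(self):
        self._pool = {}

    def add_review(self, review_id, text):
        self._pool[review_id] = fingerprints(text)

    def check(self, text, threshold=0.15):
        """Return ids of past reviews whose fingerprint overlap with the new text exceeds the threshold."""
        new = fingerprints(text)
        hits = []
        for review_id, stored in self._pool.items():
            if new and len(new & stored) / len(new) > threshold:
                hits.append(review_id)
        return hits

# Usage: journals submit fingerprints of every received review; editors query suspicious ones.
pool = SharedReviewPool()
pool.add_review("journal-A/2022/r1",
                "The research gap and the goals of the research are not explained in detail.")
print(pool.check("The goals of the research are not explained in detail, and the gap is unclear."))

In practice the shingle length, hashing scheme and reporting threshold would have to be agreed among the participating journals.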
We also advocate, more generally, for better control by journal editors of who gets invited to review.
Blacklisting of plagiarizers is often done only locally, by the journal involved, which leaves them free
to continue the misconduct in other journals, and it is also often imposed only as a
temporary measure. Sharing information about misconduct cases and blacklisted authors among
journals would therefore be an important measure, both directly and as a deterrent. Editors of the Journal of
Zhejiang University Science A/B/C compared such a measure to a points system in which car drivers
caught breaking the law may eventually lose their driving licence (Anonymous, 2012). In a peer-review
simulation study, blacklisting misbehaving reviewers, regardless of the reason for doing so, improved
the quality of publications (D'Andrea & O'Dwyer, 2017).
We believe that mutual trust between editors and reviewers is fundamental for maintaining high peer-
review standards (Resnik & Elmore, 2016). This trust can function more easily and stably in non-
profit journals owned or operated by academic societies or professional associations (Mercer, 2022).
The great majority of the review plagiarism cases detected in our study were associated either with the big
commercial publishing house MDPI or with the megajournal PLOS ONE, both of which have been linked to
problematic peer-review standards (Oviedo-García, 2021; Petersen, 2019). However, society-owned
or society-operated journals are not immune; they can also be affected, as three examples from
Supplementary Table 3 show: Atherosclerosis (owned by the European
Atherosclerosis Society), Ain Shams Engineering Journal (owned by Ain Shams University) and Open
Research Europe (an 'open science' journal launched in 2021 by the European Commission).
AI chatbots enter the game
Manually introducing changes to strings of characters, in order to reduce the similarity of
material to a source, may already be a common practice of manuscript authors, triggered for instance by a
high similarity index reported by journal editors. The aim of such an action is to make plagiarism more
difficult for anti-plagiarism software to detect. This activity can be called plagiarism laundering
or 'white-collar plagiarism'. However, a new category of review plagiarism problems, which is about to
emerge and may be very difficult to remedy, is related to the outputs of AI chatbots. While
the use of online paraphrasing tools in academic writing is not new (Rogerson & McCarthy,
2017), the recent surge of AI chatbots such as ChatGPT or Bard makes a real difference.
Supplementary Table 4 demonstrates that such systems can generate multiple alternative
wordings of any sentence, e.g. slightly different versions of the first sentence of this paragraph. One
can then select the variant with the lowest similarity index. This observation is of vast
importance for plagiarism detection in reviews (as well as in papers).
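To illustrate why this matters for detection, the toy example below (with invented paraphrases of the first sentence of this paragraph) shows how a reworded variant of a copied sentence can fall far below the 10-15% screening threshold under a naive word-overlap check, even though the content is unchanged.

import re

def ngram_overlap(a, b, n=5):
    """Fraction of a's word n-grams that also appear in b."""
    def grams(text):
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    grams_a = grams(a)
    return len(grams_a & grams(b)) / len(grams_a) if grams_a else 0.0

source = ("Manually introducing changes to strings of characters, in order to reduce the "
          "similarity of material to a source, may already be a common practice.")
# Hypothetical chatbot paraphrases of the same sentence (invented for illustration).
paraphrases = [
    "Manually editing strings of characters to lower the similarity of a text to its source may already be common practice.",
    "It may already be common practice to change characters by hand so that a text resembles its source less.",
]
for paraphrase in paraphrases:
    print(f"{ngram_overlap(paraphrase, source):.0%}")  # both fall well below a 10-15% screening threshold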
Checco et al. (2021) wrote in their article on AI-assisted peer review that they "do not envisage any
relevant contribution from AI on the processes requiring significant domain expertise and intellectual
effort, at least for the foreseeable future". Yet modern chatbots can write an editorial (Aghemo et al.,
2023) or generate a review of a scientific paper; an example of a bot-generated review is given
in Supplementary Material 5. If a reviewer submits a bot-generated review as his or her own, it can be
considered plagiarized from ChatGPT (Thorp, 2023). Indeed, the current consensus is that AI-generated
content (AIGC) cannot be a responsible party, and attention from journal editorial boards is therefore
needed to address this emerging issue, especially given the claims being made about the
efficiency and reliability of AI-generated peer reviews (Srivastava, 2023).
Supplementary Tables 4 and 5 raise a burning question: can editors detect that a review is
bot-generated? We cannot give an unconditionally positive answer to this question (Anonymous,
2023). Gao et al. (2023) report how well human reviewers scored whether a carefully inspected document
(in their case a paper abstract) was bot-generated. In their study, the
probabilities of experts making errors of the first kind (text was generated but was recognized as
original) and of the second kind (text was original but was recognized as generated) were
7/41 (about 17%) and 16/59 (about 27%), respectively. Supplementary Tables 4 and 5 show that a plagiarism check
alone may not be a sufficient remedy (neither for papers nor for reviews) against bot-assisted
fraud. This holds already for raw chatbot output; one can hypothesize that the situation may become
even more difficult in the case of a hybrid system in which raw chatbot output is post-processed ("de-botted")
by an expert, or with further development of chatbot systems.
Most of the above problems would be reduced, if not eliminated, in a fully open and transparent
system with complete attribution of scientific works, both papers and reviews, to their producers. It
can be speculated that publishing reviews alongside papers, or rendering reviews eponymous,
could reduce review plagiarism: knowing that the review will be publicly available, with a
disclosed (rather than secret) reviewer identity, may make plagiarizing less likely. It would also
substantially improve scientific ethics, a necessary condition of scientific endeavour.
Nevertheless, researchers have mixed attitudes towards this practice and often worry about the
potential negative consequences of disclosing their identities as reviewers, which limits wider
adoption of open reviews (He et al., 2023; Ross-Hellauer et al., 2017). Post-publication
peer review could further reduce the risk of review plagiarism, provided the reviewers do not
hide behind the curtain of anonymity, which is often the case (Knoepfler, 2015).
We are convinced that peer-review plagiarism, whether "classic" or boosted by chatbots, and in both
cases largely unnoticed, will continue to grow in the coming years. How rapidly depends on us, the
scientific community (scientists, editors and publishers). Now is the time for the community to make an
effort to investigate this issue thoroughly, to estimate the actual prevalence of the phenomenon, and
to find and implement adequate solutions. Here we have provided a number of suggestions that we hope
will stimulate discussion.
Acknowledgements
The authors thank an anonymous reviewer for providing helpful comments and suggestions that
improved the manuscript. This version of the article has been accepted for publication after peer
review and is subject to Springer Nature's AM terms of use, but it is not the Version of Record and does
not reflect post-acceptance improvements or any corrections. The Version of Record is available
online at: http://dx.doi.org/10.1007/s11192-024-04960-1
Data availability statement
All data supporting this article are included as Supplementary Materials.
References
1. Aghemo, A., Forner, A., Valenti, L. (2023) Should Artificial Intelligence-based language
models be allowed in developing scientific manuscripts? A debate between ChatGPT and the
editors of Liver International. Liver Int, 43: 956-957. DOI: 10.1111/liv.15580.
2. Amano, T., Ramírez-Castañeda, V., Berdejo-Espinola, V., Borokini, I., Chowdhury, S.,
Golivets, M., ... & Veríssimo, D. (2023). The manifold costs of being a non-native English
speaker in science. PLoS Biology, 21(7), e3002184. DOI: 10.1371/journal.pbio.3002184.
3. Anonymous (2012) How to stop plagiarism. Nature 481, 21–23. DOI: 10.1038/481021a
4. Anonymous (2020). Nature will publish peer review reports as a trial. Nature 578, 8, DOI:
10.1038/d41586-020-00309-9
5. Anonymous (2022). Transparent peer review for all. Nat Commun 13, 6173. DOI:
10.1038/s41467-022-33056-8
6. Anonymous (2023) Tools such as ChatGPT threaten transparent science; here are our ground
rules for their use. Nature, 613, 612; DOI: 10.1038/d41586-023-00191-1.
7. Beck, E., Jr (2003) Anonymous reviews: self-serving, counterproductive, and unacceptable.
Eos, Trans. Am. Geophys. Union 84(26), 249. DOI: 10.1029/2003EO260005.
8. Butler, D. (2010) Journals step up plagiarism policing. Nature 466, 167. DOI:
10.1038/466167a
9. Checco, A., Bracciale, L., Loreti, P., Pinfield, S., Bianchi, G. (2021) AI-assisted peer review.
Humanit Soc Sci Commun 8, 25. DOI: 10.1057/s41599-020-00703-8.
10. Csomós, G., Farkas, J.Z. (2023) Understanding the increasing market share of the academic
publisher “Multidisciplinary Digital Publishing Institute” in the publication output of Central
and Eastern European countries: a case study of Hungary. Scientometrics 128, 803–824. DOI:
10.1007/s11192-022-04586-1
11. D’Andrea, R., O’Dwyer, J.P. (2017) Can editors save peer review from peer reviewers? PLoS
ONE 12(10): e0186111. DOI: 10.1371/journal.pone.0186111.
12. Eini, M.R., Rahmati, A., Salmani, H., Brocca, L., Piniewski, M. (2022) Detecting
characteristics of extreme precipitation events using regional and satellite-based precipitation
gridded datasets over a region in Central Europe. Science of The Total Environment 852,
158497. DOI: 10.1016/j.scitotenv.2022.158497.
13. Epstein, D.; Wiseman, V.; Salaria, N.; Mounier-Jack, S. (2017) The need for speed: the peer-
review process and what are we doing about it? Health Policy and Planning, 32(10), 1345–1346.
DOI: 10.1093/heapol/czx129
14. Fiala, C., Diamandis, E.P. (2017) The emerging landscape of scientific publishing. Clinical
Biochemistry 50 (12), 651-655. DOI: 10.1016/j.clinbiochem.2017.04.009
15. Gao, C.A., Howard, F.M., Markov, N.S., Dyer, E.C., Ramesh, S., Luo, Y., Pearson, A.T.
(2023) Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors
and blinded human reviewers. npj Digital Medicine 6:75; DOI: 10.1038/s41746-023-00819-6.
16. Halsted, G.B. (1896) Compliment or plagiarism. Science, 4 (102), 877-878.
17. He, Y., Tian, K., Xu, X. (2023) A validation study on the factors affecting the practice modes
of open peer review. Scientometrics 128, 587–607.
18. Helgesson, G., Eriksson, S. (2015) Plagiarism in research. Med Health Care and Philos 18,
91–101. DOI: 10.1007/s11019-014-9583-8.
19. Hosseini, M., Rasmussen, L.M. & Resnik, D.B. (2023): Using AI to write scholarly
publications, Accountability in Research, DOI: 10.1080/08989621.2023.2168535
20. Jarić, I. (2016) High time for a common plagiarism detection system. Scientometrics 106,
457–459. DOI: 10.1007/s11192-015-1756-6
21. Knoepfler P. (2015) Reviewing post-publication peer review. Trends Genet. 31(5):221-3
DOI: 10.1016/j.tig.2015.03.006.
22. Koshy, K., Fowler, A.J., Gundogan, B., Agha, R.A. (2018) Peer review in scholarly
publishing part A: why do it? International Journal of Surgery Oncology 3(2), e56. DOI:
10.1097/IJ9.0000000000000056
23. Kundzewicz, Z.W. & Koutsoyiannis, D. (2005) Editorial – The peer-review system: prospects
and challenges, Hydrol. Sci. J., 50(4), 577-590.
24. Lykkesfeldt, J. (2016), Strategies for Using Plagiarism Software in the Screening of Incoming
Journal Manuscripts: Recommendations Based on a Recent Literature Survey. Basic Clin
Pharmacol Toxicol, 119: 161-164. DOI: 10.1111/bcpt.12568
25. Maurer, H., Kappe, F., Zaka, B. (2006) Plagiarism – a survey. Journal of Universal Computer
Science 12(8). DOI: 10.3217/jucs-012-08-1050.
26. Oviedo-García, M.A. (2021) Journal citation reports and the definition of a predatory journal:
The case of the Multidisciplinary Digital Publishing Institute (MDPI), Research Evaluation,
30(3), 405–419. DOI: 10.1093/reseval/rvab020
27. Petersen, A.M. (2019) Megajournal mismanagement: Manuscript decision bias and
anomalous editor activity at PLOS ONE. Journal of Informetrics 13, 4, 100974. DOI:
10.1016/j.joi.2019.100974
28. Resnik, D.B., Elmore, S.A. (2016) Ensuring the Quality, Fairness, and Integrity of Journal
Peer Review: A Possible Role of Editors. Sci Eng Ethics 22, 169–188. DOI: 10.1007/s11948-
015-9625-5
29. Rogerson, A.M., McCarthy, G. (2017) Using Internet Based Paraphrasing Tools: Original
Work, Patchwriting or Facilitated Plagiarism? Int. J. Educ. Integr. 13, 115. DOI:
10.1007/s40979-016-0013-y
30. Ross-Hellauer, T. (2017) What is open peer review? A systematic review. F1000Res. 6:588.
DOI: 10.12688/f1000research.11369.2.
31. Ross-Hellauer, T., Deppe, A., Schmidt, B. (2017) Survey on open peer review: Attitudes and
experience amongst editors, authors and reviewers. PLoS ONE 12(12): e0189311.
https://doi.org/10.1371/journal.pone.0189311
32. Squazzoni, F., Grimaldo, F. & Marušić, A. (2017) Publishing: Journals could share peer-
review data. Nature 546, 352. DOI: 10.1038/546352a.
33. Srivastava, M. (2023). A day in the life of ChatGPT as an academic reviewer: Investigating
the potential of large language model for scientific literature review. DOI:
10.31219/osf.io/wydct.
34. Teixeira da Silva, J.A., Nazarovets, S. (2022) The Role of Publons in the Context of Open
Peer Review. Pub Res Q 38, 760–781. DOI: 10.1007/s12109-022-09914-0.
35. Thorp, H.H. (2023) ChatGPT is fun, but not an author. Science, 379 (6630), DOI:
10.1126/science.adg7879
36. Venegas-Cordero, N.; Cherrat, C.; Kundzewicz, Z.W.; Singh, J.; Piniewski, M. (2023)
Model-based assessment of flood generation mechanisms over Poland: The roles of
precipitation, snowmelt, and soil moisture excess. Science of The Total Environment, 891,
164626, DOI: 10.1016/j.scitotenv.2023.164626
37. Weber-Wulff, D. (2019) Plagiarism detectors are a crutch, and a problem. Nature 567, 435. DOI:
10.1038/d41586-019-00893-5.
38. Wolfram, D.; Wang, P.; Abuzahra, F. (2021), An exploration of referees’ comments
published in open peer review journals: The characteristics of review language and the
association between review scrutiny and citations. Research Evaluation, 30(3), 314–322. DOI:
10.1093/reseval/rvab005
39. Wolfram, D., Wang, P., Hembree, A., Park, H. (2020) Open peer review: promoting
transparency in open science. Scientometrics 125, 1033–1051. DOI: 10.1007/s11192-020-
03488-4.