Emerging plagiarism in peer-review evaluation reports: a tip of the iceberg?
Mikołaj Piniewski1*, Ivan Jarić2,3, Demetris Koutsoyiannis4, Zbigniew W. Kundzewicz5
1 Department of Hydrology, Meteorology and Water Management, Warsaw University of Life
Sciences, Warsaw, Poland, email: mikolaj_piniewski@sggw.edu.pl
2 Université Paris-Saclay, CNRS, AgroParisTech, Ecologie Systématique Evolution, Gif-sur-Yvette,
France, email: ivan.jaric@universite-paris-saclay.fr
3 Biology Centre of the Czech Academy of Sciences, Institute of Hydrobiology, České Budějovice,
Czech Republic
4 National Technical University of Athens, Athens, Greece, email: dk@itia.ntua.gr
5 Faculty of Environmental Engineering and Mechanical Engineering, Poznań University of Life
Sciences, Poznań, Poland, email: kundzewicz@yahoo.com
*Corresponding author
Abstract
The phenomenon of plagiarism in peer-review evaluation reports has remained surprisingly unrecognized, despite a notable rise in such cases in recent years. This study reports multiple cases of peer-review plagiarism recently detected in 50 different scientific articles published in 19 journals. An in-depth analysis of these cases reveals that plagiarized reviews tend to be nonsensical, vague and unrelated to the actual manuscript. The analysis is followed by a discussion of the roots of such plagiarism, its consequences and the measures that could counteract its further spread. In addition, we demonstrate how the increased availability of AI technologies through the recent emergence of chatbots may be misused to write or conceal plagiarized peer reviews. Plagiarizing reviews is a severe form of misconduct that requires urgent attention and action from all affected parties.
Keywords: plagiarism, peer-review, publication ethics, scientific misconduct, chatbot
Introduction
A primary aim of "classic" peer review, which has been around for many decades, is to assist the journal editor in deciding whether a submitted manuscript is suitable for publication in the journal, and to help improve potentially publishable manuscripts through suggestions (Fiala and Diamandis, 2017). Beck (2003) reduced the issue to a binary scheme: filtering out junk science, and providing useful feedback that helps the authors of the remaining works improve them. A review should be fair, provided in reasonable time and framed in the spirit of constructive criticism, and its recommendations should be supported by arguments. Kundzewicz & Koutsoyiannis (2005) identified a range of potential problems in the peer-review system, including subjectivity, bias, abuse, non-detection of errors, as well as fraud and misconduct.
While much has been written about the issue of plagiarism in research and scientific publishing (Butler, 2010; Helgesson and Eriksson, 2015; Maurer et al., 2006), virtually no attention has been paid to the fact that the peer-review process can also be subject to plagiarism. The latter is substantiated in this paper by reporting an original case of detecting plagiarism in peer-review evaluation reports (hereafter referred to as "peer-review plagiarism"). Since the reported case reflects a systemic problem, we also try to interpret the motivations of plagiarizers and discuss possible solutions to prevent the problem, as well as the emerging issue of using artificial intelligence (AI) chatbots to write or conceal plagiarized reviews.
The related case of reviewers stealing ideas, i.e. rejecting papers and then publishing their own papers based on the stolen ideas, is beyond the scope of this paper. The sample of such cases of serious fraud known to the authors is very small, so we do not discuss them here.
Detected cases of peer-review plagiarism
The issue of plagiarism detection in scientific publications is very old: the journal Science published a discussion of a potential plagiarism case in the late 19th century (Halsted, 1896). But what about the opposite problem, i.e. cases of plagiarism in peer-review reports, detected by the authors under review? Indeed, in the peer-review reports for two manuscripts that the lead author of this article recently published (Eini et al., 2022; Venegas-Cordero et al., 2023), a lack of substance, vagueness and odd jargon (with syntactic or orthographic errors) raised suspicion about the originality of some (though not all) of the reviews.
Both manuscripts were submitted to a reputable journal, which would be expected to ensure competent, high-quality reviews. To address our suspicions, we assessed the peer-review reports using an online plagiarism check tool (https://www.duplichecker.com/), which returned a similarity index ranging between 44% and 89% in three out of four peer reviews for the first manuscript, and between 44% and 100% in two out of three peer reviews for the second manuscript. These figures are likely an underestimate of the true level of plagiarism, since we were able to identify additional identical phrases using the Google search engine. If such high similarity indices were found in incoming journal manuscripts, they would likely result in desk rejection, as exceeding a rule-of-thumb threshold of 10-15% similarity usually triggers visual inspection by editors (Lykkesfeldt, 2016). Of course, the editor always has the final say, as some text overlaps may be well justified.
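To make the notion of a similarity index concrete, the following minimal Python sketch (our own illustration; the algorithms of commercial tools such as Duplichecker or CrossCheck are proprietary and certainly more sophisticated) scores a review as the share of its five-word shingles that also occur in a candidate source text, echoing the five-consecutive-words rule of thumb discussed below:

```python
import re

def shingles(text: str, n: int = 5) -> set:
    """Lower-case word n-grams ("shingles") of a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(review: str, source: str, n: int = 5) -> float:
    """Share (in %) of the review's n-word shingles that also occur in the source."""
    rev = shingles(review, n)
    if not rev:
        return 0.0
    return 100.0 * len(rev & shingles(source, n)) / len(rev)

# Two overlapping sentences in the style of the vague comments discussed below.
review = ("The research gap and the goals of the research are not explained "
          "in detail which leads to the reader missing the significance.")
source = ("More explanation is needed for where there is a research gap and what "
          "the goals of the research are. The research gap and the goals of the "
          "research are not explained in detail.")
print(f"similarity: {similarity_index(review, source):.0f}%")  # prints: similarity: 56%
```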
In order to gain better insight into the newly noted phenomenon of peer-review plagiarism, we carried out two types of analyses. First, we studied the entire length of one of the five peer-review reports that our initial analysis flagged as plagiarism (the one with an estimated similarity index of 59%), breaking it into smaller pieces (one to three sentences each) and searching for the full quotes with the Google search engine (other search engines were also tried but returned inferior results). Second, we performed an in-depth analysis of a single quote, consisting of Google searches for different variants of the quote, extraction of the search data and further analysis. For both types of analyses, the Google search engine by default omitted some entries very similar to the first few displayed; it was possible, though, to repeat the search including the omitted results, which led to a much higher number of relevant hits.
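We performed the quote searches manually; a script along the following lines could automate the chunk-and-search step, assuming access to the Google Custom Search JSON API (the API key, search-engine ID and input file name below are placeholders, not values used in this study):

```python
import re
import requests

API_KEY = "YOUR_API_KEY"        # placeholder credential
ENGINE_ID = "YOUR_ENGINE_ID"    # placeholder search-engine ID
ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def sentence_chunks(report: str, size: int = 2) -> list:
    """Split a review report into chunks of `size` consecutive sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", report.strip())
    return [" ".join(sentences[i:i + size]) for i in range(0, len(sentences), size)]

def search_exact(phrase: str) -> list:
    """Return URLs of pages containing the phrase verbatim (quoted query)."""
    params = {"key": API_KEY, "cx": ENGINE_ID, "q": f'"{phrase}"', "num": 10}
    data = requests.get(ENDPOINT, params=params, timeout=30).json()
    return [item["link"] for item in data.get("items", [])]

report = open("review_report.txt").read()   # hypothetical input file
for chunk in sentence_chunks(report):
    for url in search_exact(chunk):
        print(f"match: {url}\n  quote: {chunk[:60]}...")
```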
The first analysis showed that exact quotes from the selected review report could be found in 22 different sources (Supplementary Table 1), all of which were existing open review reports available online and published in 2021-2023. The lack of identified cases before 2021 suggests that the phenomenon could be new, although it may also simply have been harder to detect earlier, since the growth of open-review practices is relatively recent. The likelihood of plagiarism is undeniable: the length of the investigated quotes ranged between 21 and 44 words, while a common rule of thumb is that even a string of five consecutive words identical in two source items indicates likely plagiarism. It is also clear that the same phrases had already been recycled several times. Interestingly, anonymous and eponymous reviews were represented in similar numbers, and in a few cases we found two different reviews with exactly matching phrases signed by the same individual.
One would generally assume that the content and language of a peer-review report should be very much discipline-specific. However, that was not the case for the plagiarized review: the same phrases were used for manuscripts from diverse disciplines. Thus, the most important feature of the plagiarized quotes was their vagueness: such one-size-fits-all comments do not refer to any specific part of the reviewed manuscript, e.g. (Supplementary Table 1):
More explanation is needed for where there is a research gap and what the goals of the
research are. The research gap and the goals of the research are not explained in detail
which leads to the reader missing the significance of the research.
We also found that plagiarized reviews often seemed to have multiple sources, representing a "collage" of texts from different existing reviews rather than a simple copy of a single review. While exactly the same phrases could be found in multiple reviews, it is usually difficult to assess whether this constitutes plagiarism or self-plagiarism, due to the prevailing anonymity of reviewers.
In the second analysis we picked the following quote from one of the reviews:
The major defect of this study is the debate or Argument is not clear stated in the introduction
session. Hence, the contribution is weak in this manuscript. I would suggest the author
to enhance your theoretical discussion and arrives your debate or argument.
We picked this quote because it is long (three sentences, 43 words) and contains multiple language errors, and the presence of exactly the same errors in different reviews is strong proof of plagiarism. One notable example is "Argument" with an upper-case A in the middle of a sentence.
Our analysis revealed that different variants of this quote appeared in 50 different review reports available on the internet (Supplementary Table 2). This number is very likely an underestimate of the true extent of plagiarism of this quote, first because the majority of peer-review reports are not available online and thus cannot be checked, and second because various modifications of the original text could exist that our search could not capture. In half of all cases the identified reviews contained exactly the same quote, whereas the remaining half contained very similar variants (e.g. one word modified, or two of the three phrases kept).
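We sorted the hits into exact matches and near-variants by hand; as an illustration, a simple way to do this programmatically (the 0.9 threshold is an arbitrary choice for this sketch) is difflib's sequence-matching ratio:

```python
from difflib import SequenceMatcher

# The investigated quote, reproduced verbatim, including its language errors.
CANONICAL = ("The major defect of this study is the debate or Argument is not "
             "clear stated in the introduction session. Hence, the contribution "
             "is weak in this manuscript. I would suggest the author to enhance "
             "your theoretical discussion and arrives your debate or argument.")

def classify(candidate: str, threshold: float = 0.9) -> str:
    """Label a found review passage as an exact match, a variant, or unrelated."""
    if candidate == CANONICAL:
        return "exact match"
    ratio = SequenceMatcher(None, CANONICAL.lower(), candidate.lower()).ratio()
    return "variant" if ratio >= threshold else "unrelated"

variant = CANONICAL.replace("major defect", "main defect")  # one word modified
print(classify(CANONICAL))  # exact match
print(classify(variant))    # variant
```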
We further analysed the 50 identified search results, extracting basic metadata from the 50 reviews (Supplementary Table 3). The suspicious reviews appeared in 19 different journals (including 14 reviews in PLOS ONE and 11 in Sustainability) belonging to seven different publishers (29 from MDPI, 14 from PLOS and 3 from Elsevier). In 45 cases (all papers published by MDPI and PLOS) the identified websites were those of journals sharing open reviews as part of their service. It is important to note that the adoption of the practice of publishing peer-review reports is a positive development (Ross-Hellauer, 2017), leading to more openness and transparency, without which the detection of the peer-review plagiarism reported here would not have been possible. This option was pioneered in the early 2000s by the journals Atmospheric Chemistry and Physics (Copernicus Publications) and European Cells & Materials, as well as by 36 journals published by BioMed Central (Wolfram et al., 2020), and is currently offered by an increasing number of publishers (e.g. EMBO, MDPI, Nature Research, PLOS, Royal Society) and journals (e.g. eLife, PeerJ).
The majority (37) of the review reports identified as plagiarized were anonymous, whereas the eponymous ones were predominantly (11 out of 13) signed by different individuals. Perhaps most importantly, the problem has apparently been gaining strength in recent years: an analysis of the submission dates of the manuscripts associated with the suspicious reviews revealed that only two such manuscripts were submitted in 2019, compared to 22 in 2022.
In five cases (of which three were published by Elsevier) the Google search returned links to PDF files of manuscripts together with the authors' responses to review comments (typically files generated by manuscript tracking systems). These files were most likely uploaded by the authors, as the corresponding journals do not have open-review policies. This finding is also very important, as it shows that the review plagiarism problem may be even worse in the overwhelming majority of journals that do not publish review reports; the confidence that review reports will not become publicly available may additionally encourage some reviewers to plagiarize. On the other hand, we suppose that the severity of the review plagiarism problem is also connected to journal peer-review standards. The fact that MDPI was at the top of our list resonates with a recent study that highlighted its prioritization of self-interest and forsaking of best editorial and publication practices (Oviedo-García, 2021). The second position of the megajournal PLOS ONE echoes recently revealed cases of anomalous activity by a small number of extremely active editors at this journal (Petersen, 2019).
Why do reviewers review and why would they want to plagiarize?
The two cases described above are just a reflection of a systemic problem and very likely the tip of the iceberg. Since the publish-or-perish system is self-imposed by the scientific community, it takes a huge number of reviewers to examine the ever-growing number of manuscripts (Epstein et al., 2017) submitted to an ever-growing number of scientific periodicals. From a reviewer's viewpoint, writing a review on a voluntary basis can be a time-consuming and unwelcome necessity. Certainly, such effort can yield benefits, such as getting to know potentially important and interesting work early on, prestige and public recognition, as well as training that can ultimately improve one's own research and writing (Koshy et al., 2018). However, many potential reviewers probably feel that the benefit-cost ratio is low, that is, that writing a review is a kind of sacrifice.
So why do some individuals not simply decline the invitation to review, rather than engage in preparing a plagiarized review? Perhaps saying no can sometimes be uneasy, for instance if a review request comes from the Editor-in-Chief to a member of the Editorial Board who is obliged to prepare a certain (high) number of reviews on a regular basis. Some journals try to attract reviewers by offering article processing charge (APC) discount vouchers or other perks; this may be an important factor for some researchers in some countries, as shown by a survey conducted in Central and Eastern European countries, in which MDPI has increased its market share tremendously in recent years (Csomós & Farkas, 2023). Yet another possibility is the wish to boost one's CV by reporting the number of registered reviews on the Publons web platform (now incorporated by Clarivate into Web of Science under the tab "My peer review records"), which tracks review and editorial contributions to academic journals (Teixeira da Silva & Nazarovets, 2022). In some countries (e.g. Poland), the review record is part of periodic (e.g. annual or four-year) reporting, and being frequently invited to review is interpreted as a proxy of professional recognition. Another potential scenario, possibly practiced by some predatory publishers and journals, is peer-review plagiarism generated not by reviewers but by editors, with the aim of hiding the actual absence of a rigorous peer-review process. A motivation for undertaking peer review could also have unethical grounds: e.g., to "engineer" the rejection of a competitor's or opponent's work (hidden behind the status of anonymity), to steal novel ideas, or to force one's own references to boost citations.
Considering the above, it should perhaps not be so surprising that some reviewers shamelessly take the easy way out by resorting to "copy and paste", the main benefit being time saved. Another possible motivation for a plagiarized review could simply be insecurity about using one's own words to write reviews (often due to low English proficiency); searching and copying from open reviews may be a strategy to avoid composing one's own phrases in English. Indeed, scientific writing can be a real challenge for non-native English speakers, particularly for those using a non-Latin (e.g. Russian, Arabic or Asian) writing system (Amano et al., 2023; Roig, 2015).
Regardless of the motivation, plagiarizing reviews is a serious form of misconduct that, as our analysis shows, may have grown fast in recent years and thus requires urgent attention and action from all affected parties (editors, publishers and authors). One possible exception, where self-plagiarism might be tolerable, is the use of cliché phrases, possibly compiled by an organized individual for personal use in reviews of problematic papers. It may well be that in some manuscript submission portals reviewers can select from a pool of common critical cliché statements compiled by the journal editors. This resembles cliché editors' letters, in which particular templates are used for different categories of submitted manuscripts, for instance to announce desk rejections. Finally, using similar expressions in evaluation reports by the same reviewer might simply be a habitual practice.
Consequences of peer-review plagiarism
Besides being a form of scientific misconduct in itself, a key problem with plagiarized reviews is that they tend to be nonsensical, vague and unrelated to the actual manuscript. Consequently, they negatively affect the quality of the peer-review process and of the reviewed manuscripts, and thus erode the quality of published science. They can also decrease public trust in the peer-review process. The presence and prevalence of peer-review plagiarism should be urgently assessed by publishers, editors and other involved parties in order to understand the frequency of such cases, and we call for urgent measures to monitor, control and prevent them.
How to prevent peer-review plagiarism?
As noted earlier, our discovery would not have been possible without easy access to review reports published openly alongside the accepted articles. Efforts to provide full public access to peer-review documents are generally well received by the scientific community; for example, a survey of authors who published their work in Nature Communications showed that 98% would continue such open-review practice in the future (Anonymous, 2020). Although the number of journals adopting open peer review is growing fast, and some journals have transitioned from optional to obligatory models (Anonymous, 2022), wider adoption of this model is still lacking (Wolfram et al., 2021). The advantage of this solution as a measure against plagiarized reviews is that it allows cross-comparisons between different peer-review reports; publishing a review alongside a paper is also useful for the readership. It would, however, have a negative side effect: greater availability of open-access review reports would also mean greater availability of texts to plagiarize. Thus, plagiarism in journals with hidden reviews (which in theory should be impossible to discover from the outside, although our search revealed five such cases) might even be intensified by such a partial measure.
Therefore, we argue that the most meaningful solution for preventing review plagiarism is the routine screening of all submitted reviews with plagiarism detection software. As a matter of fact, it is astonishing that, to our knowledge, this has not yet been done. Many journals already use such software routinely for screening manuscripts, so why not use it for review reports as well? One explanation may be editors' concern that they might lose some reviewers who would feel "oppressed"; indeed, finding reviewers is an editor's headache, and a frequently occurring problem is the trade-off between quality and quantity (Epstein et al., 2017). The adoption of plagiarism software by journals was a major breakthrough in the late 2000s: publishers were delighted to have an instrument to police submissions (Butler, 2010). When some publishers first tested CrossCheck, they discovered high levels of plagiarism of which they had previously been unaware; for example, in one of the science journals of Taylor & Francis, 13 of 56 new incoming articles were rejected for plagiarism during the CrossCheck testing phase (Butler, 2010). Perhaps history could repeat itself, this time with review reports. Publishers and journals already have the necessary tools; they just need to start applying them, keeping in mind that using software is a necessary but not a sufficient condition for correctly detecting plagiarism, since taking reported similarity indices at face value may be risky (Weber-Wulff, 2019).
Certainly, plagiarism detection software should not be used blindly, as there are pitfalls to be avoided. There are common aspects of reviewed publications (e.g. format, language norms, article structure) for which reviewers may provide similar suggestions, simply due to their writing habits. It is the role of editors to carefully consider whether such "repetitions" constitute potential cases of plagiarism, so as to avoid unnecessary harm to reviewers. In addition, if reviewers find out that a certain journal checks review reports for plagiarism, they may resort to reversing sentence order, replacing individual words, or other similar techniques, potentially resulting in awkward or unclear statements in their reports and thus unintentionally affecting the authors' understanding of the reviewers' suggestions.
Another solution worth considering would be to establish a shared database among journals, in which all submitted peer-review reports would be compared with past reviews and then kept for future comparisons. A similar solution was advocated by Jarić (2016) for submitted manuscripts, but it could easily be extended to submitted reviews. The reports would not have to be made available for individual inspection by editors; they could exist only as a pooled dataset for software to cross-check, while being kept anonymous and confidential (see the sketch below). Good examples of sharing peer-review data among journals, such as the PEERE initiative, already exist (Squazzoni et al., 2017).
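No such pooled review database exists yet. As one conceivable design (our own illustration, not an existing system), combining the routine screening advocated above with the confidentiality requirement, member journals could deposit only cryptographic hashes of word shingles, so that new reviews can be cross-checked without the review texts themselves ever being shared; the flagging threshold below is arbitrary, chosen to match the 10-15% rule of thumb mentioned earlier:

```python
import hashlib
import re

def shingle_hashes(text: str, n: int = 5) -> set:
    """SHA-256 hashes of a text's lower-case n-word shingles."""
    words = re.findall(r"[a-z']+", text.lower())
    grams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return {hashlib.sha256(g.encode()).hexdigest() for g in grams}

class PooledReviewDB:
    """Cross-journal store holding only shingle hashes, never review texts."""
    def __init__(self):
        self.hashes = set()             # union of all deposited shingle hashes

    def check_and_deposit(self, review: str, flag_at: float = 0.15) -> bool:
        """Flag a review whose overlap with past reviews exceeds `flag_at`."""
        h = shingle_hashes(review)
        overlap = len(h & self.hashes) / len(h) if h else 0.0
        self.hashes |= h                # keep for future comparisons
        return overlap >= flag_at

db = PooledReviewDB()
db.check_and_deposit("The research gap and the goals of the research "
                     "are not explained in detail.")
print(db.check_and_deposit("The research gap and the goals of the research "
                           "are not explained in detail which leads to "
                           "confusion."))  # True: flagged as overlapping
```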
We also generally advocate better control by journal editors over who gets invited to review. Blacklisting of plagiarizers is often done only locally, by the journal involved, which leaves them free to continue the misconduct in other journals, and it is also often imposed only as a temporary measure. Sharing information about misconduct cases and blacklisted authors among journals would therefore be an important measure, both directly and as a deterrent. The editors of the Journal of Zhejiang University Science A/B/C compared such a measure to a point system in which car drivers caught breaking the law may eventually lose their driving licence (Anonymous, 2012). Blacklisting misbehaving reviewers, regardless of the reason for doing so, improved the quality of publications in a peer-review simulation study (D'Andrea & O'Dwyer, 2017).
We believe that mutual trust between editors and reviewers is fundamental for maintaining high peer-review standards (Resnik & Elmore, 2016). This trust can function more easily and stably in non-profit journals owned or operated by academic societies or professional associations (Mercer, 2022). The great majority of the review plagiarism cases detected in our study were associated either with a big commercial publishing house, MDPI, or with the megajournal PLOS ONE, both of which have been associated with problematic peer-review standards (Oviedo-García, 2021; Petersen, 2019). However, society-owned or society-operated journals are not immune; they can also be affected by the detected problem, as shown by three examples from Supplementary Table 3: Atherosclerosis (owned by the European Atherosclerosis Society), Ain Shams Engineering Journal (owned by Ain Shams University) and Open Research Europe (an 'open science' journal launched in 2021 by the European Commission).
AI chatbots enter the game
Manually introducing changes to strings of characters in order to reduce the similarity of material to its source may already have been a common practice among manuscript authors, e.g. triggered by a high similarity index reported by journal editors. The aim of such an action is to make plagiarism more difficult for anti-plagiarism software to detect; this activity can be called 'plagiarism laundering' or 'white-collar plagiarism'. However, a new category of review plagiarism problems, which is about to emerge and may be very difficult to remedy, is related to the output of AI chatbots. While the use of online paraphrasing tools in academic writing is not new (Rogerson & McCarthy, 2017), the recent outburst of AI chatbots such as ChatGPT or Bard makes a true difference. Supplementary Table 4 demonstrates that such systems can generate multiple alternative wordings of any sentence (e.g. a slightly different version of the first sentence of this paragraph), from which one can select the version with the lowest similarity index. This observation is of vast importance for plagiarism detection in reviews, as well as in papers.
Checco et al. (2021) wrote in their article on AI-assisted peer review that they "do not envisage any relevant contribution from AI on the processes requiring significant domain expertise and intellectual effort, at least for the foreseeable future". Yet modern chatbots can write an editorial (Aghemo et al., 2023) or generate a review of a scientific paper; an example of a bot-generated review is given in Supplementary Material 5. If a reviewer submits a bot-generated review as his or her own, it can be considered plagiarized from ChatGPT (Thorp, 2023). Indeed, the current consensus is that AI-generated content (AIGC) cannot be regarded as a responsible party, and journal editorial boards therefore need to address this emerging issue, especially given the claims being made about the efficiency and reliability of AI-generated peer reviews (Srivastava, 2023).
Supplementary Tables 4 and 5 raise a burning question: can editors detect that a review is bot-generated? We cannot give an unconditionally positive answer to this question (Anonymous, 2023). Gao et al. (2023) report the results of human reviewers scoring whether a carefully inspected document (in their case, a paper abstract) was bot-generated. In their study, the probabilities of experts making errors of the first kind (the text was generated but was recognized as original) and of the second kind (the text was original but was recognized as generated) were 7/41 (about 17%) and 16/59 (about 27%), respectively. Supplementary Tables 4 and 5 show that a plagiarism check alone may not be a sufficient remedy (for either papers or reviews) against bot-assisted fraud. This holds for raw chatbot output; one can hypothesize that the situation may become even more difficult in the case of a hybrid system, in which raw chatbot output is post-processed ("de-botted") by an expert, or with the further development of chatbot systems.
Most of the above problems would be reduced, if not eliminated, in a fully open and transparent system with complete attribution of scientific works, both papers and reviews, to their producers. It can be speculated that publishing reviews alongside papers, or making reviews eponymous, could reduce review plagiarism, as knowing that a review will be publicly available, with the reviewer's identity disclosed rather than secret, may make plagiarizing less likely. It would also substantially improve scientific ethics, a necessary condition of the scientific endeavour. Nevertheless, researchers have mixed attitudes towards this practice and often worry about the potential negative consequences of disclosing their identities as reviewers, which limits the wider adoption of open reviews (He et al., 2023; Ross-Hellauer et al., 2017). In addition, post-publication peer review could further reduce the risk of review plagiarism, provided the reviewers do not hide behind the curtain of anonymity, which is often the case (Knoepfler, 2015).
We are convinced that peer-review plagiarism, whether "classic" or boosted by chatbots, and in both cases largely unnoticed, will continue to grow in the coming years. How rapidly it grows depends on us, the scientific community (scientists, editors and publishers). Now is the time for the community to make an effort to investigate this issue thoroughly, to estimate the actual prevalence of the phenomenon, and to find and implement adequate solutions. Here we have provided a number of suggestions that will hopefully stimulate discussion.
Acknowledgements
The authors thank an anonymous reviewer for providing helpful comments and suggestions that improved the manuscript. This version of the article has been accepted for publication after peer review and is subject to Springer Nature's AM terms of use, but it is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s11192-024-04960-1
Data availability statement
All data supporting this article are included as Supplementary Materials.
References
1. Aghemo, A., Forner, A., Valenti, L. (2023) Should Artificial Intelligence-based language
models be allowed in developing scientific manuscripts? A debate between ChatGPT and the
editors of Liver International. Liver Int, 43: 956-957. DOI: 10.1111/liv.15580.
2. Amano, T., Ramírez-Castañeda, V., Berdejo-Espinola, V., Borokini, I., Chowdhury, S.,
Golivets, M., ... & Veríssimo, D. (2023). The manifold costs of being a non-native English
speaker in science. PLoS Biology, 21(7), e3002184. DOI: 10.1371/journal.pbio.3002184.
3. Anonymous (2012) How to stop plagiarism. Nature 481, 21–23. DOI: 10.1038/481021a
4. Anonymous (2020). Nature will publish peer review reports as a trial. Nature 578, 8, DOI:
10.1038/d41586-020-00309-9
5. Anonymous (2022). Transparent peer review for all. Nat Commun 13, 6173. DOI:
10.1038/s41467-022-33056-8
6. Anonymous (2023) Tools such as ChatGPT threaten transparent science; here are our ground
rules for their use. Nature, 613, 612; DOI: 10.1038/d41586-023-00191-1.
7. Beck, E., Jr (2003) Anonymous reviews: self-serving, counterproductive, and unacceptable.
Eos, Trans. Am. Geophys. Union 84(26), 249. DOI: 10.1029/2003EO260005.
8. Butler, D. (2010) Journals step up plagiarism policing. Nature 466, 167 DOI:
10.1038/466167a
9. Checco, A., Bracciale, L., Loreti, P., Pinfield, S., Bianchi, G. (2021) AI-assisted peer review.
Humanit Soc Sci Commun 8, 25. DOI: 10.1057/s41599-020-00703-8.
10. Csomós, G., Farkas, J.Z. (2023) Understanding the increasing market share of the academic
publisher “Multidisciplinary Digital Publishing Institute” in the publication output of Central
and Eastern European countries: a case study of Hungary. Scientometrics 128, 803–824. DOI:
10.1007/s11192-022-04586-1
11. D’Andrea, R., O’Dwyer, J.P. (2017) Can editors save peer review from peer reviewers? PLoS
ONE 12(10): e0186111. DOI: 10.1371/journal.pone.0186111.
12. Eini, M.R., Rahmati, A., Salmani, H., Brocca, L., Piniewski, M. (2022) Detecting
characteristics of extreme precipitation events using regional and satellite-based precipitation
gridded datasets over a region in Central Europe. Science of The Total Environment 852,
158497. DOI: 10.1016/j.scitotenv.2022.158497.
13. Epstein, D.; Wiseman, V.; Salaria, N.; Mounier-Jack, S. (2017) The need for speed: the peer-
review process and what are we doing about it?, Health Policy and Planning, 32(10), 1345–
1346, DOI: 10.1093/heapol/czx129
14. Fiala, C., Diamandis, E.P. (2017) The emerging landscape of scientific publishing. Clinical
Biochemistry 50 (12), 651-655. DOI: 10.1016/j.clinbiochem.2017.04.009
15. Gao, C.A., Howard, F.M., Markov, N.S., Dyer, E.C., Ramesh, S., Luo, Y., Pearson, A.T. (2023) Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digital Medicine 6:75. DOI: 10.1038/s41746-023-00819-6.
16. Halsted, G.B. (1896) Compliment or plagiarism. Science, 4 (102), 877-878.
17. He, Y., Tian, K., Xu, X. (2023) A validation study on the factors affecting the practice modes
of open peer review. Scientometrics 128, 587–607.
18. Helgesson, G., Eriksson, S. (2015) Plagiarism in research. Med Health Care and Philos 18,
91–101. DOI: 10.1007/s11019-014-9583-8.
19. Hosseini, M., Rasmussen, L.M. & Resnik, D.B. (2023): Using AI to write scholarly
publications, Accountability in Research, DOI: 10.1080/08989621.2023.2168535
20. Jarić, I. (2016) High time for a common plagiarism detection system. Scientometrics 106,
457–459, DOI: 10.1007/s11192-015-1756-6
21. Knoepfler P. (2015) Reviewing post-publication peer review. Trends Genet. 31(5):221-3
DOI: 10.1016/j.tig.2015.03.006.
22. Koshy, K., Fowler, A.J., Gundogan, B., Agha, R.A. (2018) Peer review in scholarly
publishing part A: why do it?. International Journal of Surgery Oncology 3(2):p e56, DOI:
10.1097/IJ9.0000000000000056
23. Kundzewicz, Z.W. & Koutsoyiannis, D. (2005) Editorial—The peer-review system: prospects
and challenges, Hydrol. Sci. J., 50(4), 577-590.
24. Lykkesfeldt, J. (2016), Strategies for Using Plagiarism Software in the Screening of Incoming
Journal Manuscripts: Recommendations Based on a Recent Literature Survey. Basic Clin
Pharmacol Toxicol, 119: 161-164. DOI: 10.1111/bcpt.12568
25. Maurer, H., Kappe, F., Zaka, B. (2006) Plagiarism – a survey. Journal of Universal Computer
Science 12(8). DOI: 10.3217/jucs-012-08-1050.
26. Oviedo-García, M.A. (2021) Journal citation reports and the definition of a predatory journal:
The case of the Multidisciplinary Digital Publishing Institute (MDPI), Research Evaluation,
30(3), 405–419, DOI:10.1093/reseval/rvab020
27. Petersen, A.M. (2019) Megajournal mismanagement: Manuscript decision bias and
anomalous editor activity at PLOS ONE. Journal of Informetrics 13, 4, 100974. DOI:
10.1016/j.joi.2019.100974
28. Resnik, D.B., Elmore, S.A. (2016) Ensuring the Quality, Fairness, and Integrity of Journal
Peer Review: A Possible Role of Editors. Sci Eng Ethics 22, 169–188. DOI: 10.1007/s11948-
015-9625-5
29. Rogerson, A.M., McCarthy, G. (2017) Using Internet Based Paraphrasing Tools: Original
Work, Patchwriting or Facilitated Plagiarism? Int. J. Educ. Integr. 13, 1–15. DOI:
10.1007/s40979-016-0013-y
30. Ross-Hellauer, T. (2017) What is open peer review? A systematic review. F1000Res. 6:588.
DOI: 10.12688/f1000research.11369.2.
31. Ross-Hellauer, T., Deppe, A., Schmidt, B. (2017) Survey on open peer review: Attitudes and
experience amongst editors, authors and reviewers. PLoS ONE 12(12): e0189311.
https://doi.org/10.1371/journal.pone.0189311
32. Squazzoni, F., Grimaldo, F. & Marušić, A. (2017) Publishing: Journals could share peer-
review data. Nature 546, 352. DOI: 10.1038/546352a.
33. Srivastava, M. (2023). A day in the life of ChatGPT as an academic reviewer: Investigating
the potential of large language model for scientific literature review. DOI:
10.31219/osf.io/wydct.
34. Teixeira da Silva, J.A., Nazarovets, S. (2022) The Role of Publons in the Context of Open
Peer Review. Pub Res Q 38, 760–781. DOI: 10.1007/s12109-022-09914-0.
35. Thorp, H.H. (2023) ChatGPT is fun, but not an author. Science, 379 (6630), DOI:
10.1126/science.adg7879
36. Venegas-Cordero, N.; Cherrat, C.; Kundzewicz, Z.W.; Singh, J.; Piniewski, M. (2023)
Model-based assessment of flood generation mechanisms over Poland: The roles of
precipitation, snowmelt, and soil moisture excess. Science of The Total Environment, 891,
164626, DOI: 10.1016/j.scitotenv.2023.164626
37. Weber-Wulff, D. (2019) Plagiarism detectors are a crutch, and a problem. Nature 567, 435. DOI: 10.1038/d41586-019-00893-5.
38. Wolfram, D.; Wang, P.; Abuzahra, F. (2021), An exploration of referees’ comments
published in open peer review journals: The characteristics of review language and the
association between review scrutiny and citations, Research Evaluation, 30(3), 314–322, DOI:
10.1093/reseval/rvab005
39. Wolfram, D., Wang, P., Hembree, A., Park, H. (2020) Open peer review: promoting
transparency in open science. Scientometrics 125, 1033–1051. DOI: 10.1007/s11192-020-
03488-4.