Bias and Groupthink in Science’s Peer-Review System



Studies have shown that various types of biases can impact scientific peer review. These biases may contribute to a type of groupthink that can make it difficult to obtain funding or publish innovative or controversial research. The desire to achieve consensus and uniformity within a research group or scientific discipline can make it difficult for individuals to contradict the status quo. This chapter reviews the scientific literature regarding biases in the peer-review system, reflects on the potential impact of bias, and discusses approaches to minimize or control bias in peer review.
David B. Resnik, JD, PhD, NIEHS/NIH
Elise M. Smith, PhD, NIEHS/NIH
Peer review is a key part of the scientific enterprise. Most journals use peer review to
evaluate articles submitted for publication; funding agencies use peer review to assess research
proposals; and scholarly conference committees use peer review to evaluate abstracts and
conference proceedings. Peer review serves mainly as a gate-keeping mechanism to ensure that
published or funded research meets appropriate disciplinary or interdisciplinary norms including
standards of rigor, reproducibility, validity, objectivity, novelty and integrity.
The peer-review process involves subject-matter experts, and this lends a certain prestige
and credibility to scientific works that are positively reviewed. Peer review also serves to
improve the quality of articles and research proposals and educate junior scholars about
methodological standards (Resnik and Elmore 2016). However, peer review is not perfect (Smith
2006). Various social scientists have opened the black box of scientific inquiry, including peer review, to reveal social norms held by reviewers that can create bias (Latour and Woolgar 1979). Specific studies have identified biases regarding academic rank, gender, race, nationality, and institutional affiliation that can undermine the impartiality and integrity of the peer review system (Lee et al. 2012, Resnik and Elmore 2016).
Bias may also be the result of groupthink – which exists when the psychological drive for
consensus is so strong that any divergence from that consensus is ignored or rejected. This is not
to say that a degree of consensus or agreement is not beneficial. Understandably, the public
expects scientific findings which are endorsed by a large group of specialists to be sound and
credible. Also, in undertaking research, there should be some agreement among scientists
respecting working assumptions, certain hypotheses, and methodology, notwithstanding some
uncertainty or divergence of opinion. For example, health care providers must agree on standards
of practice to apply the results of biomedical research. If there were no ‘common ground’ or consensus, medical practice would be chaotic, unpredictable, and possibly unsafe. However, groupthink becomes problematic when members develop a complacent trust in one another and individual critical reflection in scientific decision-making is no longer accepted or promoted.
This chapter will describe scientific peer review, discuss some of the evidence for bias,
examine the impact of biases related to groupthink, and consider some options for reforming the
system to reduce the impact of bias.
Science’s Peer Review System
In the seventeenth century, the editors of Philosophical Transactions of the Royal Society
of London instituted the first known instance of peer review to address the concerns of some of
the members of the Royal Society that their journal was publishing highly speculative, rambling
articles and works of fiction. However, peer review did not become standard practice in
scientific publishing until the nineteenth century (Shamoo and Resnik 2015). In the twentieth
century, government agencies began using peer review to determine the allocation of funds for
scientific projects. Peer review is now also used to make decisions concerning tenure and
promotion in academic institutions and to award scientific prizes (Shamoo and Resnik 2015).
Journal editors usually conduct an initial review of a manuscript submitted for
publication to determine whether it fits the aims and scope of the journal and meets some minimum
standards of quality. If it is deemed suitable for peer review, the editors invite researchers with
the relevant subject-matter expertise to review the paper. Some journals allow authors to recommend
potential reviewers or request that certain individuals not serve as reviewers. Most journals seek
input from two reviewers, but some may ask for more, especially when reviewers disagree
(Resnik and Elmore 2016). Reviewer reports usually include some comments intended for the
authors or editors, an evaluation of the manuscript based on journal criteria (e.g. originality,
significance, statistical soundness, strength of the argument, quality of writing, etc.), and an
overall recommendation (e.g. accept the manuscript as is, accept with revisions (minor or major),
revise and resubmit, or reject). Editors usually follow the reviewers’ recommendations, given their expertise, but may depart from them when they disagree with the reviewers (Shamoo and Resnik 2015). Editors are responsible for conducting their own assessment of the
manuscript and determining whether the reviewer reports are fair and competent (Resnik and
Elmore 2016).
The most common form of peer review in scientific publishing is the single-blind
approach wherein reviewers are informed of the authors’ identities and affiliations but not vice
versa. Blinding reduces the unwanted effects created by power dynamics. For example, a junior scholar might hesitate to reject a paper with significant methodological flaws if she were aware that a prestigious author knew her identity. The reviewer might worry about being ostracized from her field of research because of her difference of opinion with such an esteemed, high-standing member. Generally speaking, regardless of hierarchy or power
dynamics, the purpose of single-blinding is to encourage reviewers to make candid comments
without fear of reprisal from disgruntled authors (Shamoo and Resnik 2015). Informing the
reviewers of the authors’ identities allows them to disclose any conflicts of interest and take relevant institutional factors into account during review. For example, if the reviewers know that
the manuscript comes from a non-English speaking country, they may decide to take this into
account when evaluating the written presentation (Shamoo and Resnik 2015).
An increasing number of journals have switched to a double-blind system in which
neither party is told the other’s identity or affiliation. The purpose of double-blinding is to
reduce bias related to gender, institutional affiliation, or nationality (see discussion below).
However, studies have shown that nearly half the time reviewers can correctly identify the first
author on the manuscript (Justice et al. 1998, Baggs et al. 2008). Identification of authors may
be more likely to occur in highly specialized fields where most researchers know each other’s
work through writing style, subject of research and research cited (Resnik and Elmore 2016).
Laboratories usually are aware of their competition since research is often shared at conferences
and through professional collaboration.
Some journals have adopted an open system in which authors and reviewers are told each other’s identities and affiliations (Resnik and Elmore 2016). The purpose of open review is to deter unethical behavior from reviewers, such as breach of confidentiality or theft of ideas (Resnik and Elmore 2016). Open review may also allow both authors and reviewers to be named on the paper, which adds a degree of accountability as well as recognition for the peer review. Authors who find a reviewer’s work particularly valuable may choose to name that reviewer in the acknowledgements. One drawback with this approach is that
scientists may prefer not to participate in an open review system because they fear reprisal (Ho et
al. 2013).
Although anonymity may promote candid review, one might argue that secrecy in science
is counterproductive and may ultimately reduce the quality of peer-review. There has been a
recent trend in which researchers promote full transparency and expediency by using post-publication open peer review (Hunter 2012). In journals such as F1000Research, authors send a manuscript to the journal, which conducts a very basic in-house review. The article is then put online without delay. Peer reviewers then post their reviews online, often upon request. This public ‘review’ period and the ensuing debate allow researchers to cite manuscripts that have not yet completed peer review.
Researchers have argued that publishing work that will be later revised and republished
could reduce quality control and may increase the incidence of unsound science or of misconduct (Teixeira da Silva and Dobránszki 2015). It may be argued that although traditional
peer-review has flaws, it is better than publishing research without oversight. However, the goal
of post-publication peer review is often to allow more continuous peer-review, not less (Lauer,
Krumholz and Topol 2015). And although manuscripts may change, these journals openly indicate when a paper has not yet been peer reviewed.
Interestingly, the culture and integration of post-publication peer review are similar to pre-publication archiving, which originated in physics but has been adopted by many disciplines in the biomedical sciences. In pre-publication archiving, researchers put a copy of their paper in an online repository, where it is often cited with a specific digital object identifier (DOI) during or before any formal peer-review process in another journal. A considerable upside to this model is that the source of an idea is identified before any peer-review process. This helps prevent individuals from stealing ideas during review and safeguards the provenance of ideas. Some have even mentioned that open models
are part of the “Open Science future” (Pulverer 2016).
Numerous studies have examined how these different approaches impact the quality,
consistency, and effectiveness of peer review, but thus far the evidence has been inconclusive
(Armstrong 1997, Lee et al. 2012). For example, two small studies conducted by McNutt et al.
(1990) and Fisher et al. (1994) found that blinding reviewers improves the quality of review, but
larger studies conducted by van Rooyen et al. (1998), Justice et al. (1998), and Godlee et al. (1998) found that blinding does not have this effect. While a study by Walsh et al. (2000) concluded that
openness improves the quality and courtesy of reviewer reports, other studies found no evidence
that openness improves peer review (van Rooyen et al. 1998, 1999, 2010; Justice et al. 1998,
Godlee et al. 1998). More research is therefore needed to determine the extent to which different approaches to peer review impact quality, consistency, and effectiveness.
Peer review by funding agencies varies considerably, depending on the method used.
Some agencies convene in-person panels, while others may handle the review process remotely
via secure websites or email, or by some combination of in-person and remote (Shamoo and
Resnik 2015). The National Institutes of Health (NIH), for example, uses study sections
composed of experts to review research grants. The study section usually meets in person to review grant proposals, and materials are distributed in advance. A proposal will normally be
assigned a primary and a secondary reviewer. These reviewers will present the proposal to the
group and provide an assessment. The entire group will also evaluate the proposal and score it
based on specific review criteria, including scientific significance, methodology, qualifications of
the principal investigator, institutional resources, adequacy of the budget, preliminary research,
and potential impact of the study (Shamoo and Resnik 2015). The NIH requires reviewers to
declare conflicts of interest (COIs) and prohibits individuals from reviewing proposals submitted
by colleagues from their institution, recent collaborators, or former students or advisors.
The final funding decision is made by NIH leadership, based on recommendations from the
study section.
Bias in Peer Review
Although the peer review system is designed to provide an ‘impartial’ quality assessment,
evidence indicates that various biases can impact decisions related to publication and funding
(Lee et al. 2012, Shamoo and Resnik 2015). One well-documented type of bias is the tendency
for journals to publish positive or confirmatory results rather than negative ones. Initial studies
of this phenomenon conducted by Mahoney (1977), Easterbrook et al. (1991), and Stern and
Simes (1997) found that clinical trials reporting positive results were more likely to be published
than those reporting negative results, and subsequent research confirmed these findings (Lee et
al. 2012). A systematic review and meta-analysis of studies of publication bias conducted by
Dwan et al. (2013) found that clinical trials reporting positive results are more likely to be
published than those reporting negative results. While there is substantial evidence of a bias in
favor of publishing positive results, it is unclear whether this bias is due to decisions made by
reviewers, editors, or authors. It may be the case that most of the bias results from authors’
decisions not to publish negative results rather than reviewer or editor preferences for positive
results (Olson et al. 2002).
Numerous studies have shown that gender bias impacts the funding of grant proposals
(Wenneras and Wold 1997, Shen 2013). Bornmann et al. (2007) conducted a meta-analysis of 21
studies of grant peer review conducted between 1979 and 2004 and found that men were 7%
more likely than women to receive funding, although there was considerable variability in the
impact of gender. The authors note that a variety of causal factors may contribute to this
discrepancy, including fewer women on peer review panels or in leadership positions. Waisbren
et al. (2008) found that differences in grant funding success between male and female applicants
disappeared when they controlled for academic rank, suggesting gender biases in grant peer
review may be a function of differences in career paths between men and women. Two studies
of gender bias in NIH grant review by Kaatz and coauthors suggest that an awareness of an
applicant’s gender may function as a subconscious (implicit) influence on decision-making.
These studies found that reviewers consistently gave female applicants lower scores than male
applicants, even when they used similar words and phrases to describe their proposals (Kaatz et
al. 2015, 2016).
Gender bias is more difficult to study in journal peer review than in grant review because
most journals do not disclose the names of reviewers. However, Helmer et al. (2017) obtained
gender data from Frontiers journals, which include the names of the reviewers and associate
editors alongside the article accepted for publication. They analyzed data from 126,000 authors,
43,000 reviewers, and 9,000 editors for 41,000 articles published in 142 journals from the natural
and social sciences, medicine, engineering, and the humanities and found that women are
underrepresented in the peer review process and that there is a strong same-gender preference
(e.g. men editors give higher ranking to men authors; women give higher ranking to women).
Grod et al. (2008) also found that acceptance rates for papers with female first authors increased
significantly after Trends in Ecology and Evolution adopted a double-blind review format.
However, other studies have shown little to no bias regarding gender. For instance, a study in
biosciences (989 responses) found that the gender of the first author had no significant effect on reviewers’ recommendations or the acceptance rate (Borsuk et al. 2009). Other studies have
found no significant difference in acceptance rates between male and female first-authored
papers (Tregenza 2002, Lane and Linden 2009).
Gender disparities may impact peer review. Despite efforts to encourage women to pursue careers in science, important gender disparities remain in science on a global scale and in most developed countries, including the US and Canada (Larivière et al.).
different factors impact gender discrepancies in science, including culture, education, workplace
environment, childbearing and rearing responsibilities, labor distribution within teams and career
decisions (Ceci and Williams 2011). Although reviewers’ implicit biases in peer review can play a role in the underrepresentation of women in science, they are but one factor among many.
Like gender, race and ethnicity also appear to influence the peer review process. Ginther
and colleagues published several studies of racial and ethnic bias in the grant peer review process
in the US. Their first study found that black applicants were 10% less likely than white
applicants to receive funding for R01 grants when other relevant factors, such as education,
training, previous awards, and publication record were controlled for (Ginther et al. 2011).
Another study found that biases against black applicants for NIH R01 grants decreased when one
included medical school affiliation: blacks from medical schools were only 7.8% less likely to
receive funding than whites (Ginther et al. 2012). A third found that white women were no less
likely than white men to receive funding, but that Asian and black women were less likely to
receive funding, when controlling for relevant factors (Ginther et al. 2016). However, Jang et al.
(2013) conducted a bibliometric analysis comparing research productivity of black and white
applicants and found the NIH peer review process is not biased against black applicants. Racial
and ethnic differences in funding disappear when one controls for research productivity (Jang et
al. 2013).
There is also evidence of bias in peer review related to nationality and institutional
affiliation (Lee et al. 2012). Ross et al. (2006) studied abstracts accepted at the American Heart
Association’s Scientific Sessions before and after it instituted double-blind review and found that
blinding the reviewers to the authors’ names reduced biases related to nationality and
institutional affiliation. More specifically, Ross et al. (2006) showed that when the affiliations of researchers were made public, papers from US institutions were accepted 7.4% more often than during blinded review, while papers from non-US institutions were accepted 0.9% less often than during blinded review. A study of abstract acceptance by Timmer et al. (2000) also found
evidence of bias related to nationality, and a study by Ernst and Kienbacher (1991) found that
reviewers were more likely to accept articles submitted by authors who have the same nationality
as that of the journal. Murray et al. (2016) found that funding success and the award amount
were significantly lower for smaller institutions submitting grant applications to Canada’s
Natural Sciences and Engineering Research Council Discovery Grant program. However,
Garfunkle et al. (1994) found that institutional ranking in terms of NIH-funding in the US did not
impact reviewers’ recommendations or the acceptance rate for major papers submitted to a
biomedical journal, although it did impact recommendations and acceptance rates for brief reports.
Groupthink and Bias in Science
We will now turn our attention to bias related to groupthink, which we will define as a
situation in which the psychological drive for group consensus is so strong that dissent is hidden,
rejected or dissuaded.1 The social psychologist Irving Janis (1972) coined the term ‘groupthink’
to describe decision-making processes that have led to foreign-policy fiascos, such as the failed US invasion of Cuba at the Bay of Pigs in April 1961. After Fidel Castro led a revolution to
overthrow the Cuban government in 1959, the US began looking for ways to undermine or
change his regime. American intelligence officials and military leaders wrongly assumed that
the 1,400 Cuban exiles who took part in the invasion would be able to instigate a successful
venture to oust Castro, but they were vastly outnumbered by the Cuban army and surrendered
within 24 hours (Janis 1982). Janis observed that groupthink led to this ill-fated military venture by causing decision-makers to neither examine evidence critically nor consider alternative courses of action. Janis’s work built upon earlier studies of cohesiveness and conformity in group decision-making (Janis 1972, 1982).
In scientific research, groupthink may lead researchers to reject innovative or
controversial ideas, hypotheses or methodologies that challenge the status quo. Philosophers,
historians, and sociologists have observed that scientists often resist new ideas, despite their
reputation for open-mindedness (Barber 1961, Kuhn 1962). The great quantum physicist Max Planck has been quoted as saying: “A new scientific truth does not triumph by
convincing its opponents and making them see the light, but rather because its opponents
eventually die, and a new generation grows up that is familiar with it (Planck 1962:33-34).”
In his seminal work on the history of science, The Structure of Scientific Revolutions,
Kuhn described the role of conformity and closed-mindedness in scientific advancement. According to Kuhn (1962), science progresses through different stages. In the first stage, known
as normal science, scientists conduct their research within a paradigm that defines the field. A
paradigm is a way of doing science that includes basic assumptions, beliefs, principles, theories,
methods, and epistemic values that establish how one solves problems within the normal science
tradition; normal science involves consensus within a scientific community. For example,
Newtonian physics was a normal science tradition that established ways of solving problems
related to motion and electromagnetic radiation (Kuhn 1962). During the normal science stage,
scientists attempt to apply the paradigm to problems they can solve and they resist certain
theories, methods, and ideas that challenge the paradigm. At this stage, scientists tend not to think outside the theoretical limits of the paradigm, which limits novel ideas. However, as problems emerge
that cannot be solved within the paradigm, scientists start to consider new ideas, theories, and
methods that form the basis of a new and emerging paradigm. A scientific revolution occurs
when the new paradigm replaces the old. For example, during the early twentieth century,
Newtonian physics succumbed to quantum mechanics and relativity theory (Kuhn 1962). A
paradigm-shift is not a purely rational process driven by logical argumentation and empirical evidence; rather, it involves a change in perception, or a willingness to see the world in a different way (Kuhn 1962). After the revolution, a new paradigm takes hold and the process repeats itself.
1 Our definition is loosely inspired by Janis’s definition (1972) but has been modified so as to apply to the context of science and peer review.
Some philosophers have argued that a certain amount of closed-mindedness, known as
epistemological conservatism, is justified in scientific research. The rationale for this
epistemological stance is that change in a network of beliefs should be based on substantial
empirical evidence. Since changes in beliefs can consume a considerable amount of time and
effort and our cognitive resources are limited, we should not change our beliefs, especially ones
that play a central role in our worldview, without compelling evidence (Quine 1961, Sklar 1975,
Lycan 1988, Resnik 1994). For example, because Einstein’s general theory of relativity
contradicted the fundamental principle of Newtonian physics that space and time are immutable,
it took extraordinary proof—i.e. the observation of the sun’s gravity bending light from a star
during a solar eclipse in 1919—to confirm the theory (Buchen 2009). While it seems clear that a
certain amount of conservatism makes sense in research, scientists should be careful to avoid dogmatism: although a degree of skepticism toward hypotheses and theories that challenge the status quo is warranted, scientists should remain open to new ideas (Resnik 1994).
Groupthink and Bias in Peer-Review
The groupthink found in scientific norms may permeate the process of peer review, which may result in the rejection of innovative or controversial manuscripts and research proposals. It is plausible to hypothesize that a lack of social diversity could contribute to
groupthink in peer review. As noted earlier, Helmer et al. (2017) found that women are
underrepresented in the population of peer reviewers for Frontiers journals. Since racial and
ethnic minorities are underrepresented in science (Nelson 2007, Committee on Science,
Engineering, and Public Policy 2010), it is likely that they are also underrepresented in the
reviewer population. It is possible that there is also a lack of diversity with respect to nationality
and institutional affiliation in the reviewer population, although we know of no published
research on this topic.
If we suppose that there is a lack of social diversity in the population of peer reviewers, it
is conceivable that this type of bias could impact the review process and that increasing reviewer
diversity would decrease groupthink (Longino 1990). However, it is important to recognize that
this argument assumes that diversity with respect to social factors translates into increased
willingness to accept ideas that challenge the status quo, and that lack of such diversity has the
opposite effect, neither of which might be the case. It is often assumed that social diversity leads
to diversity with respect to opinions, beliefs, and epistemological norms (i.e. intellectual
diversity), but it has also been argued that this might not occur (Card 2005). For example, a
socially diverse group of researchers could still fall prey to groupthink because they lack intellectual diversity and favor the status quo, or want to preserve their chances for peer
recognition. Also, a socially homogenous group of researchers might not fall prey to groupthink
because they are intellectually diverse and are open to new and controversial ideas.
Clearly, more research is needed on the relationship between social diversity and peer
review. Some argue, based on standpoint theory, that marginalized groups may have a different
or perhaps an even better view of social phenomena, given their position at the margins of
society (Harding 2004). Although some aspects of standpoint theory remain contentious because it methodologically and systematically questions positions of power (Wylie and Sismondo 2015), it may be an effective tool for questioning the status quo.
There are different ways that groupthink could occur in the peer review process. Because
of the difficulty in finding peer-reviewers, editors may resort to the same network of individuals
to review manuscripts. Groupthink may set in as editors rely on a limited network that reduces
diversity of reviews. The hyper-specialization of certain fields may narrow the choice of
qualified reviewers significantly and thus reduce their diversity. Moreover, with time,
editors may become overly trusting towards certain peers, especially those with similar scientific
stances. Editors may come to blindly trust individuals of high academic standing, which may undermine the proper evaluation of their reviews.
Another type of groupthink involves the occurrence of dogmatism in the peer review
process itself; i.e., a predisposition to reject innovative or controversial theories, hypotheses, or
methods. Shamoo and Resnik (2015) have observed that a certain amount of dogmatism may be
unavoidable in peer review because reviewers are chosen for their expertise, and experts are
usually established researchers with theoretical and methodological commitments (or biases) that
can compromise their open-mindedness. An anthropological perspective on peer review has shown that even when trying to promote fairness, peer reviewers usually think that research similar to their own (in terms of methods, topics, and results) is of a higher standard, making criteria for excellence somewhat subjective (Lamont 2009). If excellence is fitted to suit the present scientific elite, the status quo will most likely be maintained and novelty discouraged; in effect, past truths would be left unchallenged and perpetuated. Multidisciplinary panels on review boards have created a need for researchers to justify their criteria for excellence, and this often does help to reduce groupthink and bias. However, this is not the case in journal peer review, where there is no open multidisciplinary debate. To reduce or counteract closed-mindedness,
editors could select reviewers who are not established researchers, but this strategy could
potentially undermine the quality of peer review.
While there is anecdotal evidence (i.e. complaints from scientists) that intellectual
dogmatism impacts peer review (Chubin and Hackett 1990), it is difficult to obtain systematic
data that supports this hypothesis (Lee et al. 2012). A study by Resnik et al. (2008) found that
50.5% of 283 scientists responding to a survey conducted at a government biomedical research
institution had experienced bias in the peer review process. However, this study did not define
bias and gathered data on scientists’ perceptions of bias, not on bias itself.
An interesting study conducted by Resch et al. (2000) randomly assigned 398 reviewers
to receive papers on conventional or non-conventional treatments for obesity. The papers were
virtually the same with respect to research methodology and design; the main difference related
to the type of intervention. Of the 398 invited, 141 reviewers responded to the review request; 67% of the reviewers who received papers on the effectiveness of conventional treatment recommended publication, as opposed to 57% of those who received papers on the effectiveness of non-conventional treatment. This difference was statistically significant, suggesting that reviewers
are biased in favor of conventional therapies (Resch et al. 2000). While the study by Resch et al.
presents some useful data related to bias in peer review, it is limited to certain types of bias in
clinical research and may not generalize to other fields. Also, it does not address some of the
deeper issues that underlie the bias, such as dogmatic allegiance to various theories, methods,
ideas, and so on.
Research conducted by Campanario (2009) spans various fields of science and provides evidence of dogmatism. Campanario collected data on the peer review process for 16
papers from the fields of medicine, biochemistry, chemistry, and physics; while these papers did
eventually earn Nobel Prizes for the authors, they were severely panned during the peer review
process or were rejected. He obtained evidence concerning the review of these papers from the
authors’ autobiographies, personal accounts, Nobel lectures, and other written reports. For
example, Arne Tiselius won the Nobel Prize in Chemistry in 1948 for his work on
electrophoresis and adsorption, but the editors at Biochemical Journal, where he initially sent his
key paper, rejected it because it focused too much on physical science (Campanario 2009). David
Lee, Douglas Osheroff, and Robert Richardson received the Nobel Prize in Physics in 1996 for
their discovery of superfluidity in helium-3, but Physical Review Letters initially rejected their
work because the reviewers did not believe that the physical system they described was possible.
They succeeded in overturning this decision by convincing the editors that their discovery was
sound (Campanario 2009). Murray Gell-Mann received the Nobel Prize in Physics in 1969 for his
work on the phenomenon of "strangeness" in particle physics, but the editors of Physical Review
objected to his use of the word "strangeness," and he had to change his terminology to "new
unstable particles" (Campanario 2009: 553). Thomas Cech won the Nobel Prize in Chemistry in
1989 for discovering that some ribonucleic acid (RNA) molecules can act as enzymes, but the
reviewers for his paper submitted to Nature strongly objected to his decision to characterize the
properties he observed as "enzyme-like" or as a type of "catalysis" (Campanario 2009: 553).
Most biochemists at that time believed that RNA could not act as an enzyme.
In the discussion section at the end of his paper, Campanario offers dogmatism as a
possible explanation for the encounters that these Nobel Prize winners had with scientific peer
review: "A possible explanation for peer resistance to scientific discovery lies in the fact that
new theories or discoveries often clash with orthodox viewpoints held by the referees"
(Campanario 2009: 558). He also suggests that the difficulties some Nobel Prize winners have
had with peer review may be due to delayed recognition: some discoveries are so far ahead
of their time that it takes other researchers years, perhaps even decades, to appreciate them (Stent
1972, Garfield 1989, Campanario 2009). Of course, delayed recognition may simply be another
form of dogmatism insofar as scientists fail to recognize research because it contradicts the status
quo. In his conclusion, Campanario also observes: "Peer review has been shown to be plagued
with many imperfections…there is a real risk that evidence contrary to the established views can
be suppressed or disregarded" (Campanario 2009: 559).
While Campanario's research provides compelling evidence of dogmatism in scientific
peer review, the sample for his study is highly selective, and the experiences these Nobel Prize
winners had with peer review may not reflect other researchers' experiences. Nobel Prize
winners are usually chosen for their highly innovative and influential contributions to science,
and the dogmatism encountered by some of them may not be as prevalent throughout science.
However, it seems reasonable to assume that scientists who have not won Nobel Prizes may also
encounter strong resistance when they submit innovative research to journals.
Although peer review is essential to the evaluation of scientific research, it is susceptible
to various biases, some of which may result from or contribute to groupthink. To counteract
groupthink in peer review, scientists should take steps to enhance the diversity of reviewers with
regard to gender, race, nationality, institutional affiliation, and other social factors that could
impact reviewer judgments. Intellectual diversity should also be promoted (e.g., by including
individuals with different methods and areas of expertise), as should the funding and publication
of innovative or controversial research that challenges the status quo. Editors and funding agency
leaders should also stress open-mindedness in the review of research and seek to publish and
fund innovative and controversial research that meets appropriate standards of rigor,
reproducibility, objectivity, and integrity. To overcome confirmatory biases, editors should be
open to publishing research that reports negative results if it meets appropriate scientific
standards.
Journal editors and funding agency leaders should collect data on peer review so that
they can better understand how to control and/or mitigate biases that may impact the process.
Journal editors and funding agency leaders should conduct their own, independent assessment of
reviewer reports so that they can determine whether these reports are biased. Journals should
also consider experimenting with procedures, such as double-blind review, which may minimize
the impact of biases. Reviewers, editors, and funding agency leaders should try to address their
own biases so that manuscripts and research applications can receive a fair hearing. Additional
meta-research on the factors related to groupthink in science will help researchers, editors, and
funding agency leaders understand how to promote neutrality and integrity in peer review.
This research was supported, in part, by the National Institute of Environmental Health Sciences
(NIEHS), National Institutes of Health (NIH) and the Fonds de Recherche du Québec en Santé
(FRQS). This paper does not represent the views of the NIEHS, NIH, the FRQS or any
governmental organization.
Barber B. 1961. Resistance by scientists to scientific discovery. Science 134(3479):596-602.
Buchen L. 2009. May 29, 1919: a major eclipse, relatively speaking. Wired, May 29, 2009.
Accessed April 17, 2017.
Armstrong JS. 1997. Peer review for journals: evidence on quality control, fairness, and
innovation. Science and Engineering Ethics 3(1):63-84.
Baggs JG, Broome ME, Dougherty MC, Freda MC, Kearney MH. 2008. Blinding in peer review:
the preferences of reviewers for nursing journals. Journal of Advances in Nursing 64(2):131-138.
Bornmann L, Mutz R, Daniel HD. 2007. Gender differences in grant peer review: a
meta-analysis. Journal of Informetrics 1(3):226-238.
Borsuk RM, Aarssen LW, Budden AE, Koricheva J, Leimu R, Tregenza T, Lortie CJ. 2009. To
name or not to name: the effect of changing author gender on peer review. Bioscience
Campanario, JM. 2009. Rejecting and resisting Nobel class discoveries: accounts by Nobel
Laureates. Scientometrics 81(2):549-565.
Card RF. 2005. Making sense of the diversity-based legal argument for affirmative action.
Public Affairs Quarterly 19(1):11-24.
Ceci SJ, Williams WM. 2011. Understanding current causes of women's underrepresentation in
science. Proceedings of the National Academy of Sciences of the United States of America
Chubin D, Hackett E. 1990. Peerless Science: Peer Review and U.S. Science Policy. Albany,
NY: State University of New York Press.
Committee on Science, Engineering, and Public Policy. 2010. Expanding Underrepresented
Minority Participation: America's Science and Technology Talent at the Crossroads. National
Academies Press: Washington, DC.
Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. 2013. Systematic
review of the empirical evidence of study publication bias and outcome reporting bias - an
updated review. PLoS One 8(7):e66844.
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. 1991. Publication bias in clinical research.
Lancet 337(8746):867-872.
Ernst E, Kienbacher T. 1991. Chauvinism. Nature 352(6336):560.
Fisher M, Friedman SB, Strauss B. 1994. The effects of blinding on acceptance of research
papers by peer review. Journal of the American Medical Association 272(2):143-146.
Garfield E. 1989. Delayed recognition in scientific discovery: citation frequency analyses aids
the search for case histories. Current Contents 23: 3-9
Garfunkel JM, Ulshen MH, Hamrick HJ, Lawson EE. 1994. Effect of institutional prestige on
reviewers' recommendations and editorial decisions. Journal of the American Medical
Association 272(2):137-138.
Ginther DK, Haak LL, Schaffer WT, Kington R. 2012. Are race, ethnicity, and medical school
affiliation associated with NIH R01 type 1 award probability for physician investigators?
Academic Medicine 87(11):1516-1524.
Ginther DK, Kahn S, Schaffer WT. 2016. Gender, Race/Ethnicity, and National Institutes of
Health R01 Research Awards: Is There Evidence of a Double Bind for Women of Color?
Academic Medicine 91(8):1098-1107.
Ginther DK, Schaffer WT, Schnell J, Masimore B, Liu F, Haak LL, Kington R. 2011. Race,
ethnicity, and NIH research awards. Science 333(6045):1015-1019.
Godlee F, Gale CR, Martyn CN. 1998. Effect on the quality of peer review of blinding reviewers
and asking them to sign their reports: a randomized controlled trial. Journal of the American
Medical Association 280(3):237-240.
Grod ON, Budden AE, Tregenza T, Koricheva J, Leimu R, Aarssen LW, Lortie CJ. 2008.
Systematic variation in reviewer practice according to country and gender in the field of ecology
and evolution. PLoS One 3(9):e3202.
Harding S. 2004. A socially relevant philosophy of science? Resources from standpoint theory's
controversiality. Hypatia 19(1):25-47.
Helmer M, Schottdorf M, Neef A, Battaglia D. 2017. Gender bias in scholarly peer review.
eLife 6:e21718.
Ho RC, Mak KK, Tao R, Lu Y, Day JR, Pan F. 2013. Views on the peer review system of
biomedical journals: an online survey of academics from high-ranking universities. BMC
Medical Research Methodology 13:74.
Jang J, Vannier MW, Wang F, Deng Y, Ou F, Bennett J, Liu Y, Wang G. 2013. A bibliometric
analysis of academic publication and NIH funding. Journal of Informetrics 7(2):318-324.
Janis IL. 1972. Victims of Groupthink: A Psychological Study of Foreign Policy Decisions and
Fiascoes. Boston, MA: Houghton Mifflin.
Janis IL. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascos. Boston,
MA: Cengage Learning.
Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. 1998. Does masking author identity
improve peer review quality? A randomized controlled trial. PEER Investigators. Journal of the
American Medical Association 280(3):240-242.
Kaatz A, Lee YG, Potvien A, Magua W, Filut A, Bhattacharya A, Leatherberry R, Zhu X, Carnes
M. 2016. Analysis of National Institutes of Health R01 application critiques, impact, and criteria
scores: does the sex of the principal investigator make a difference? Academic Medicine
Hunter J. 2012. Post-publication peer review: opening up scientific conversation. Frontiers in
Computational Neuroscience 6 (August 30).
Kaatz A, Magua W, Zimmerman DR, Carnes M. 2015. A quantitative linguistic analysis of
National Institutes of Health R01 application critiques from investigators at one institution.
Academic Medicine 90(1):69–75.
Kuhn TS. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lane JA, Linden DJ. 2009. Is there gender bias in the peer review process at Journal of
Neurophysiology? Journal of Neurophysiology 101(5):2195–2196.
Lamont M. 2009. How Professors Think: Inside the Curious World of Academic Judgment.
Cambridge, MA: Harvard University Press.
Latour B, Woolgar S. 1979. Laboratory Life: The Social Construction of Scientific Facts.
London, UK: Sage.
Larivière V, Ni C, Gingras Y, Cronin B, Sugimoto C. 2013. Global gender disparities in science.
Nature 504 (7479):211–213.
Lauer MS, Krumholz HM, Topol EJ. 2015. Time for a prepublication culture in clinical
research? Lancet 386(1012):2447–2449.
Lee CJ, Sugimoto CR, Zhang G, Cronin B. 2012. Bias in peer review. Journal of the American
Society for Information Science and Technology 64(1):2–17.
Longino H. 1990. Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Lycan WG. 1988. Judgement and Justification. Cambridge, UK: Cambridge University Press.
Mahoney MJ. 1977. Publication preferences: An experimental study of confirmatory bias in the
peer review system. Cognitive Therapy and Research 1(2):161–175.
McNutt RA, Evans AT, Fletcher RH, Fletcher SW. 1990. The effects of blinding on the quality of
peer review. A randomized trial. Journal of the American Medical Association 263(10):1371-
Murray DL, Morris D, Lavoie C, Leavitt PR, MacIsaac H, Masson ME, Villard MA. 2016. Bias
in research grant evaluation has dire consequences for small universities. PLoS One
Nelson DJ. 2007. A National Analysis of Minorities in Science and Engineering Faculties at
Research Universities. Norman, OK: University of Oklahoma.
Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, Zhu Q, Reiling J, Pace B.
2002. Publication bias in editorial decision making. Journal of the American Medical Association
Planck M. 1962. Quoted in Kuhn TS. 1962. The Structure of Scientific Revolutions. Chicago,
IL: University of Chicago Press, pp. 33-34.
Quine WV. 1961. From a Logical Point of View. New York, NY: Harper and Row.
Resch KI, Ernst E, Garrow J. 2000. A randomized controlled study of reviewer bias against an
unconventional therapy. Journal of the Royal Society of Medicine 93(4):164-167.
Resnik DB. 1994. Methodological conservatism and social epistemology. International Studies in
the Philosophy of Science 8(3):247-264.
Resnik DB, Elmore SA. 2016. Ensuring the quality, fairness, and integrity of journal peer
review: a possible role for editors. Science and Engineering Ethics 22(1):169-188.
Resnik DB, Gutierrez-Ford C, Peddada S. 2008. Perceptions of ethical problems with scientific
journal peer review: an exploratory study. Science and Engineering Ethics 14(3):305-310.
Ross JS, Gross CP, Desai MM, Hong Y, Grant AO, Daniels SR, Hachinski VC, Gibbons RJ,
Gardner TJ, Krumholz HM. 2006. Effect of blinded peer review on abstract acceptance. Journal
of the American Medical Association 295(14):1675-1680.
Shamoo AE, Resnik DB. 2015. Responsible Conduct of Research, 3rd ed. New York, NY: Oxford
University Press.
Shen H. 2013. Mind the gender gap. Nature 495(7439):22-24.
Sklar L. 1975. Methodological conservatism. Philosophical Review 84(3):374-400.
Smith R. 2006. Peer review: A flawed process at the heart of science and journals. Journal of the
Royal Society of Medicine 99(4):178–182.
Stent GS. 1972. Prematurity and uniqueness in scientific discovery. Scientific American 227(6):
Stern JM, Simes RJ. 1997. Publication bias: evidence of delayed publication in a cohort study of
clinical research projects. British Medical Journal 315(7109):640-645.
Teixeira da Silva JA, Dobránszki J. 2015. Problems with traditional science publishing and
finding a wider niche for post-publication peer review. Accountability in Research 22(1):22-40.
Timmer A, Hilsden RJ, Sutherland LR. 2001. Determinants of abstract acceptance for the
Digestive Diseases Week--a cross sectional study. BMC Medical Research Methodology 1:13.
Tregenza T. 2002. Gender bias in the refereeing process? Trends in Ecology and Evolution
van Rooyen S, Delamothe T, Evans SJ. 2010. Effect on peer review of telling reviewers that their
signed reviews might be posted on the web: randomised controlled trial. British Medical Journal
van Rooyen S, Godlee F, Evans S, Black N, Smith R. 1999. Effect of open peer review on
quality of reviews and on reviewers' recommendations: a randomised trial. British Medical
Journal 318(7175):23-27.
van Rooyen S, Godlee F, Evans S, Smith R, Black N. (1998). Effect of blinding and unmasking
on the quality of peer review: a randomized trial. Journal of the American Medical Association
Waisbren SE, Bowles H, Hasan T, Zou KH, Emans SJ, Goldberg C, Gould S, Levine D,
Lieberman E, Loeken M, Longtine J, Nadelson C, Patenaude AF, Quinn D, Randolph AG, Solet
JM, Ullrich N, Walensky R, Weitzman P, Christou H. 2008. Gender differences in research grant
applications and funding outcomes for medical school faculty. Journal of Women’s Health
(Larchmont) 17(2):207-214.
Pulverer B. 2016. Preparing for preprints. EMBO Journal 35 (24): 2617–19.
Walsh E, Rooney M, Appleby L, Wilkinson G. 2000. Open peer review: a randomised controlled
trial. British Journal of Psychiatry 176:47-51.
Wenneras C, Wold A. 1997. Nepotism and sexism in peer-review. Nature 387(6631):341-343.
... author-level characteristics such as race/ethnicity, gender, nationality, and institutional affiliation (Fox & Paine, 2019;Helmer et al., 2017;Link, 1998;Peters & Ceci, 1982). 6 Other commentators have pointed out that peer review is often ineffective at detecting or generally unable (or unwilling) to investigate cases of potential fraud (Relman, 1983;Rennie, 1989;Smith, 1997;Stroebe et al., 2012), that it does not ensure the credibility, reproducibility, and replicability of findings (Vazire, 2020), and that it is generally unaccountable for abuse and manipulation that can occur during the review process (Davis, 1979;Ingelfinger, 1975;Lloyd, 1985;O'Grady, 2020;Ross-Hellauer, 2017a;Smith, 2006;Tennant, 2018)among numerous other issues (Horton, 2000;Resnik, 2011;Resnik & Smith, 2020). ...
Full-text available
Peer review serves an essential role in the cultivation, validation, and dissemination of social work knowledge and scholarship. Nevertheless, the current peer review system has many limitations. It is charged as being unreliable, biased, ineffective, and unaccountable, among numerous other issues. That said, peer review is still commonly viewed as the best possible system of knowledge governance, given the relevant alternatives. In this research note, I scrutinize this assumption. Although peer review can sometimes be effective, it is not therefore a rigorous or even dependable system. Indeed, the practice of peer review in social work is overwhelmingly closed and opaque, and assurances of its rigor are speculative at best. Given that social work research informs policies and practices that have real world consequences for clients and communities, it is imperative that our research – and its appraisal – be held to the strictest of standards. This includes our system(s) of peer review. After highlighting common criticisms of traditional peer review, I articulate a research agenda on “open peer review” which can reform how peer review is performed, provide feedback to editors and reviewers, and help make the process more rigorous, transparent, and evidence-based. Implications for social work education are explored and discussed.
... Editors also have the responsibility of not manipulating peer review and editorial processing to favor themselves or their journal because this would introduce bias-and thus error-into the bibliometric profile of the editor, journal, and any papers that cite it (Kojaku et al. 2021;Lutmar and Reingewertz 2021). Unconventional ideas might also face unfair editorial screening, rejection or exclusion due to status quo editorial groupthink biases (Resnik and Smith 2020). ...
Academic publishing is undergoing a highly transformative process, and many established rules and value systems that are in place, such as traditional peer review (TPR) and preprints, are facing unprecedented challenges, including as a result of post-publication peer review. The integrity and validity of the academic literature continue to rely naively on blind trust, while TPR and preprints continue to fail to effectively screen out errors, fraud, and misconduct. Imperfect TPR invariably results in imperfect papers that have passed through varying levels of rigor of screening and validation. If errors or misconduct were not detected during TPR's editorial screening, but are detected at the post-publication stage, an opportunity is created to correct the academic record. Currently, the most common forms of correcting the academic literature are errata, corrigenda, expressions of concern, and retractions or withdrawals. Some additional measures to correct the literature have emerged, including manuscript versioning, amendments, partial retractions and retract and replace. Preprints can also be corrected if their version is updated. This paper discusses the risks, benefits and limitations of these forms of correcting the academic literature.
... However, the wider literature on reviewing has documented evidence of serious biases in reviews, including, at times, language use that speaks of biases against particular groups and that goes beyond reactions to the individual characteristics of a manuscript (see, for example, Lee et al., 2013;Resnik & Smith, 2020;Silbiger & Stubler, 2019). 6 Beyond the documented evidence of bias and the occasional presence of unprofessional, even shameful comments (Silbiger & Stubler, 2019), we should be concerned that language choices from reviewers might be perceived as reducing the merits of a manuscript to assumptions about the social group to which the author belongs. ...
... Scientific journals use peer review to evaluate manuscripts submitted by investigators (Asplund & Welle, 2018;Gropp et al., 2017;Stoff & Cargill, 2016;Wadman, 2012). However, studies indicate a lack of racial, gender, and sexual diversity among peerreview personnel (Medina & Luna, 2020;Resnik & Smith, 2020;Wadman, 2012). Lacking diverse perspectives in peer review contributes to gaps in knowledge about disproportionately affected communities (e.g., BLSMM; Asplund & Welle, 2018). ...
Full-text available
Black and Latino sexual minority men (BLSMM) scholars are well positioned to draw on their unique perspectives and expertise to address the health status and life opportunities (HSLO) of BLSMM. Increasingly, research related to the positionality of scholars of color suggests that the scholar’s stance in relation to the community being researched has important implications for the research. Despite growing recognition of the importance of scholar positionality, limited attention has been paid to the relationship between scholar-of-color positionality and improving HSLO trajectories of BLSMM. Furthermore, extant literature fails to specify the mechanisms by which scholar-of-color positionality can improve HSLO among BLSMM. This article seeks to fill this gap in research by arguing that an inadequate consideration of scholar positionality in health and life opportunity research has important implications for the HSLO of BLSMM. A multilevel, mediational model addressing factors at the micro-level (i.e., intrapersonal resources)—BLSMM scholars’ personal commitments to BLSMM communities, cultural knowledge and expertise, and shared life experiences; meso-level (i.e., scholar and affected community interactions)—historical membership, mutual interdependency and trust, and community and organizational gatekeeping; and macro-level (i.e., national policies and priorities regarding BLSMM)—national priorities regarding the health and social welfare of BLSMM, allocation of BLSMM research and program funding, societal sentiment, and national investment in the workforce development of BLSMM scholars and clinicians are detailed. In conclusion, we identify recommendations and strategies for advancing scientific, programmatic, and policy efforts, aimed at improving HSLO among communities of BLSMM.
... 52 The most common example of a paradigm shift is the replacement of Newtonian physics with Einstein's theory of relativity, which was resisted for decades. 53 Although Kuhn's complete theory may not have been formally adopted, the current system is known to promote groupthink in scientific teams; like-minded people end up in teams from similar disciplines that share or adhere to the same manner of thinking. 54 The concept of GroupThink, developed by Irvin Janis, was found to promote inordinately high group cohesion that could undo or hinder rational decision making. ...
Full-text available
Retractions of coronavirus disease 2019 (COVID‐19) papers in high impact journals, such as The Lancet and the New England Journal of Medicine, have been panned as major scientific fraud in public media. The initial reaction to this news was to seek out scapegoats and blame individual authors, peer‐reviewers, editors, and journals for wrong doing. This paper suggests that scapegoating a few individuals for faulty science is a myopic approach to the more profound problem with peer‐review. Peer‐review in its current limited form cannot be expected to adequately address the scope and complexity of large interdisciplinary science research collaboration, which is central in translational research. In addition, empirical studies on the effectiveness of traditional peer‐review reveal its very real potential for bias and groupthink; as such, expectations regarding the capacity and effectiveness of the current peer review process are unrealistic. This paper proposes a new vision of peer‐review in translational science that, on the one hand, would allow for early release of a manuscript to ensure expediency, whereas also creating a forum or a collective of various experts to actively comment, scrutinize, and even build on the research under review. The aim would be to not only generate open discussion and oversight respecting the quality and limitations of the research, but also to assess the extent and the means for that knowledge to translate into social benefit.
Purpose: The American Speech-Language-Hearing Association (ASHA) has committed to advancing diversity, equity, and inclusion (DEI) by retaining and advancing Black, Indigenous, and people of color (BIPOC) individuals in the discipline of communication sciences and disorders (CSD), amid critical shortages of faculty to train the next generation of practitioners and researchers. Publishing research is central to the recruitment, retention, and advancement of faculty. However, inequity in peer review may systematically target BIPOC scholars, adding yet another barrier to their success as faculty. This viewpoint article addresses the challenge of inequity in peer review and provides some practical strategies for developing equitable peer-review practices. First, we describe the demographics of ASHA constituents, including those holding research doctorates, who would typically be involved in peer review. Next, we explore the peer-review process, describing how inequity in peer review may adversely impact BIPOC authors or research with BIPOC communities. Finally, we offer real-world examples of and a framework for equitable peer review. Conclusions: Inequity at the individual and systemic levels in peer review can harm BIPOC CSD authors. Such inequity has effects not limited to peer review itself and exerts long-term adverse effects on the recruitment, retention, and advancement of BIPOC faculty in CSD. To uphold ASHA's commitment to DEI and to move the discipline of CSD forward, it is imperative to build equity into the editorial structure for publishing, the composition of editorial boards, and journals content. While we focus on inequity in CSD, these issues are relevant to other disciplines.
During the coronavirus disease 2019 pandemic, a complex mix of political pressure, social urgency, public panic, and scientific curiosity has significantly impacted the context of research and development. The goal of this study is to understand if and how researchers are shifting their practices and adjusting norms and beliefs regarding research ethics and integrity. We have conducted 31 interviews with Health Science Researchers at the University of Texas Medical Branch which were then analyzed using integrated deductive and inductive coding. We categorized participant views into four main areas: 1) limitations to the research design, 2) publication, 3) duplication of studies, and 4) research pipeline. Although certain researchers were in keeping to the status quo, more were willing to modify norms to address social need and urgency. Notably, they were more likely to opt for systemic change rather than modifications within their own research practices.
Delving into the review reports, this paper is aimed at analyzing reviewers` attitudes toward different sections of the manuscripts they review. The research focuses on the consistency of reviewers` evaluation through analysis of their assessment of separate parts of a paper, if it corresponds with the recommendations they made to the editors and whether a paper needs revision or should be accepted/rejected. It is assumed that the assessment of separate parts of a paper should be consistent with the final decision regarding the acceptance or rejection of a manuscript. Based on the analysis presented in this paper it can be concluded that the assessments of separate parts of articles in the evaluation sheets do not fully reflect the final recommendations of the reviewers. The results showed that the most correlated and therefore the most significant sections for the reviewers are the main text and the conclusions. The conditional probability analysis showed that the decision of reviewers, when number of points in the evaluation sheet is taken into consideration, is slightly unpredictable. No significant differences in the reviewers` recommendations based on gender or country of origin of the reviewers were found.
Full-text available
For improving the performance and effectiveness of peer review, a novel review system is proposed, based on analysis of peer review process for academic journals under a parallel model built via Monte Carlo method. The model can simulate the review, application and acceptance activities of the review systems, in a distributed manner. According to simulation experiments on two distinct review systems respectively, significant advantages manifest for the novel one.
The point of using the designation ‘pseudo-science’ or the rhetoric of ‘fake news’ against established scientists, currently deployed on social media to flag certain posts, despite whatever good research reasons such scientists have for holding such alternative views, is to rule their views unworthy of consideration especially for the layperson or public reader who is either encouraged to ignore such posts or discouraged from accessing or even blocked in accord with fact-checker dictum. Arguably, some of this follows from our need to believe in ‘science’ in place of religion and to see science as a checkable repository, the locus of truth unchanging. But science, if it is science, progresses and this can only mean that science undergoes change and even revolutionary transformation. Now more than ever, we need research quite as opposed to science by fiat.
Full-text available
Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing.
Full-text available
Purpose: To analyze the relationship between gender, race/ethnicity, and the probability of being awarded an R01 grant from the National Institutes of Health (NIH). Method: The authors used data from the NIH Information for Management, Planning, Analysis, and Coordination grants management database for the years 2000-2006 to examine gender differences and race/ethnicity-specific gender differences in the probability of receiving an R01 Type 1 award. The authors used descriptive statistics and probit models to determine the relationship between gender, race/ethnicity, degree, investigator experience, and R01 award probability, controlling for a large set of observable characteristics. Results: White women PhDs and MDs were as likely as white men to receive an R01 award. Compared with white women, Asian and black women PhDs and black women MDs were significantly less likely to receive funding. Women submitted fewer grant applications, and blacks and women who were new investigators were more likely to submit only one application between 2000 and 2006. Conclusions: Differences by race/ethnicity explain the NIH funding gap for women of color, as white women have a slight advantage over men in receiving Type 1 awards. Findings of a lower submission rate for women and an increased likelihood that they will submit only one proposal are consistent with research showing that women avoid competition. Policies designed to address the racial and ethnic diversity of the biomedical workforce have the potential to improve funding outcomes for women of color.
Purpose: Prior text analysis of R01 critiques suggested that female applicants may be disadvantaged in National Institutes of Health (NIH) peer review, particularly for renewals. NIH altered its review format in 2009. The authors examined R01 critiques and scoring in the new format for differences due to principal investigator (PI) sex. Method: The authors analyzed 739 critiques-268 from 88 unfunded and 471 from 153 funded applications for grants awarded to 125 PIs (76 males, 49 females) at the University of Wisconsin-Madison between 2010 and 2014. The authors used seven word categories for text analysis: ability, achievement, agentic, negative evaluation, positive evaluation, research, and standout adjectives. The authors used regression models to compare priority and criteria scores, and results from text analysis for differences due to PI sex and whether the application was for a new (Type 1) or renewal (Type 2) R01. Results: Approach scores predicted priority scores for all PIs' applications (P < .001), but scores and critiques differed significantly for male and female PIs' Type 2 applications. Reviewers assigned significantly worse priority, approach, and significance scores to female than male PIs' Type 2 applications, despite using standout adjectives (e.g., "outstanding," "excellent") and making references to ability in more critiques (P < .05 for all comparisons). Conclusions: The authors' analyses suggest that subtle gender bias may continue to operate in the post-2009 NIH review format in ways that could lead reviewers to implicitly hold male and female applicants to different standards of evaluation, particularly for R01 renewals.
Federal funding for basic scientific research is the cornerstone of societal progress, economy, health and well-being. There is a direct relationship between financial investment in science and a nation’s scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rate and funding level can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada’s Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011–2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade if skews are left uncorrected. We show that a recently adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions. Likewise, small institutions can improve their research success by more strongly supporting productive researchers and developing competitive graduate programming opportunities.
Preprints reduce delays in sharing research results and increase the amount and diversity of data available to the scientific community. Support of this communication mechanism through appropriate policies by journals, funders and institutions will encourage community engagement. Widespread adoption would benefit both individual scientists and research, and it might improve publishing in scientific journals. Preprints are one step towards an Open Science future.
In 1969, Franz Ingelfinger wrote in The New England Journal of Medicine about the journal’s “definition of a ‘sole contribution’” [1]. The journal’s masthead had stated a clear condition for any manuscript’s consideration: “Articles are accepted for consideration with the understanding that they are contributed for publication solely in this journal” [1]. In other words, a given paper could be published exclusively in The New England Journal of Medicine, and nowhere else. This policy, known as the “Ingelfinger rule”, has had a major role in determining the ethos of publication in clinical research, and was revisited at least three times [2–4]. The editors offered justifications: assuring the novelty and newsworthiness of published papers, and ensuring that published papers met high quality standards by virtue of going through the process of peer review.