Bias and Groupthink in Science’s Peer Review System
David B. Resnik, JD, PhD, NIEHS/NIH
Elise M. Smith, PhD, NIEHS/NIH
Studies have shown that various types of biases can impact scientific peer review. These biases
may contribute to a type of groupthink that can make it difficult to obtain funding or publish
innovative or controversial research. The desire to achieve consensus and uniformity within a
research group or scientific discipline can make it difficult for individuals to contradict the status
quo. This chapter will review the scientific literature regarding biases in the peer review system,
reflect on the potential impact of bias, and discuss approaches to minimize or control bias in peer review.
Peer review is a key part of the scientific enterprise. Most journals use peer review to
evaluate articles submitted for publication; funding agencies use peer review to assess research
proposals; and scholarly conference committees use peer review to evaluate abstracts and
conference proceedings. Peer review serves mainly as a gate-keeping mechanism to ensure that
published or funded research meets appropriate disciplinary or interdisciplinary norms, including
standards of rigor, reproducibility, validity, objectivity, novelty and integrity.
The peer-review process involves subject-matter experts, and this lends a certain prestige
and credibility to scientific works that are positively reviewed. Peer review also serves to
improve the quality of articles and research proposals and educate junior scholars about
methodological standards (Resnik and Elmore 2016). However, peer review is not perfect (Smith
2006). Various social scientists have opened the black box of scientific inquiry – including peer
review – to reveal social norms held by reviewers that can create bias (Latour and
Woolgar 1979). Specific studies have identified biases regarding academic rank, gender, race,
nationality, and institutional affiliation that can undermine the impartiality and integrity of the peer
review system (Lee et al. 2012, Resnik and Elmore 2016).
Bias may also be the result of groupthink – which exists when the psychological drive for
consensus is so strong that any divergence from that consensus is ignored or rejected. This is not
to say that a degree of consensus or agreement is not beneficial. Understandably, the public
expects scientific findings which are endorsed by a large group of specialists to be sound and
credible. Also, in undertaking research, there should be some agreement among scientists
respecting working assumptions, certain hypotheses, and methodology, notwithstanding some
uncertainty or divergence of opinion. For example, health care providers must agree on standards
of practice to apply the results of biomedical research. If there were no ‘common ground’ or
consensus, medical practice would be chaotic, unpredictable and possibly unsafe. However,
groupthink may be problematic if complacent trust develops among group members and individual
critical reflection in scientific decision-making is no longer accepted or promoted.
This chapter will describe scientific peer review, discuss some of the evidence for bias,
examine the impact of biases related to groupthink, and consider some options for reforming the
system to reduce the impact of bias.
Science’s Peer Review System
In the seventeenth century, the editors of Philosophical Transactions of the Royal Society
of London instituted the first known instance of peer review to address the concerns of some of
the members of the Royal Society that their journal was publishing highly speculative, rambling
articles and works of fiction. However, peer review did not become standard practice in
scientific publishing until the nineteenth century (Shamoo and Resnik 2015). In the twentieth
century, government agencies began using peer review to determine the allocation of funds for
scientific projects. Peer review is now also used to make decisions concerning tenure and
promotion in academic institutions and to award scientific prizes (Shamoo and Resnik 2015).
Journal editors usually conduct an initial review of a manuscript submitted for
publication to determine whether it fits the aims and scope of the journal and meets some minimum
standards of quality. If it is deemed suitable for peer review, the editors invite researchers with
relevant subject-matter expertise to review the paper. Some journals allow authors to recommend
potential reviewers or request that certain individuals not serve as reviewers. Most journals seek
input from two reviewers, but some may ask for more, especially when reviewers disagree
(Resnik and Elmore 2016). Reviewer reports usually include some comments intended for the
authors or editors, an evaluation of the manuscript based on journal criteria (e.g. originality,
significance, statistical soundness, strength of the argument, quality of writing, etc.), and an
overall recommendation (e.g. accept the manuscript as is, accept with revisions (minor or major),
revise and resubmit, or reject). Editors usually follow the reviewers’ recommendations, given
their expertise, but they may depart from those recommendations when they disagree with the reviewers
(Shamoo and Resnik 2015). Editors are responsible for conducting their own assessment of the
manuscript and determining whether the reviewer reports are fair and competent (Resnik and Elmore 2016).
The most common form of peer review in scientific publishing is the single-blind
approach wherein reviewers are informed of the authors’ identities and affiliations but not vice
versa. Blinding reduces the unwanted effects created by power dynamics. For example, a junior
scholar may hesitate to reject a paper with significant methodological flaws if she were
aware that a prestigious author knew her identity. The reviewer might worry about being
ostracized from her field of research because of her difference of opinion with an esteemed
member in high standing in that group. Generally speaking, regardless of hierarchy or power
dynamics, the purpose of single-blinding is to encourage reviewers to make candid comments
without fear of reprisal from disgruntled authors (Shamoo and Resnik 2015). Informing the
reviewers of the authors’ identities allows them to disclose any conflicts of interest and take
relevant institutional factors into account during review. For example, if the reviewers know that
the manuscript comes from a non-English speaking country, they may decide to take this into
account when evaluating the written presentation (Shamoo and Resnik 2015).
An increasing number of journals have switched to a double-blind system in which
neither party is told the other’s identity or affiliation. The purpose of double-blinding is to
reduce bias related to gender, institutional affiliation, or nationality (see discussion below).
However, studies have shown that nearly half the time reviewers can correctly identify the first
author on the manuscript (Justice et al. 1998, Baggs et al. 2008). Identification of authors may
be more likely to occur in highly specialized fields where most researchers know each other’s
work through writing style, subject of research and research cited (Resnik and Elmore 2016).
Laboratories usually are aware of their competition since research is often shared at conferences
and through professional collaboration.
Some journals have gone to an open system in which authors and reviewers are told each
other’s identities and affiliations (Resnik and Elmore 2016). The purpose of open review is to
deter unethical behavior from reviewers, such as breach of confidentiality or theft of ideas
(Resnik and Elmore 2016). Open review may also allow both authors and reviewers to be
named on the paper, which adds a degree of accountability as well as recognition for the peer
review. Researchers who find a reviewer’s work particularly important may choose to name
that reviewer in the acknowledgements. One drawback with this approach is that
scientists may prefer not to participate in an open review system because they fear reprisal (Ho et al.).
Although anonymity may promote candid review, one might argue that secrecy in science
is counterproductive and may ultimately reduce the quality of peer-review. There has been a
recent trend where researchers promote full transparency and expediency by using post-
publication open peer-review (Hunter 2012). In journals such as F1000 Research, authors send
a publication to the journal, which conducts a very basic in-house review. The article is then
put online without delay. Peer-reviewers then post their reviews online, and articles may receive
additional reviews on request. This public ‘review’ period and ensuing debate allows researchers to cite
manuscripts that have not yet completed peer-review.
Researchers have argued that publishing work that will be later revised and republished
could reduce quality control and may augment the incidence of unsound science or of
misconduct (Teixeira da Silva and Dobránszki 2015). It may be argued that although traditional
peer-review has flaws, it is better than publishing research without oversight. However, the goal
of post-publication peer review is often to allow more continuous peer-review, not less (Lauer,
Krumholz and Topol 2015). And although manuscripts may change, journals openly indicate
when a paper has not yet been peer-reviewed.
Interestingly, the culture and integration of post-publication peer-review is similar to pre-
publication archiving, which originated in physics but has been adopted by many disciplines in the
biomedical sciences. In pre-publication archiving, researchers put a copy of their paper in an
online repository, such as arXiv.org or bioRxiv.org, which is often cited with a specific digital
object identifier (DOI) during or before any formal peer-review process in another journal. A
considerable upside to this model is that the source of an idea is identified before any
peer-review process, which helps to deter individuals from stealing ideas during review and
safeguards the provenance of ideas. Some have even suggested that open models
are part of the “Open Science future” (Pulverer 2016).
Numerous studies have examined how these different approaches impact the quality,
consistency, and effectiveness of peer review, but thus far the evidence has been inconclusive
(Armstrong 1997, Lee et al. 2012). For example, two small studies conducted by McNutt et al.
(1990) and Fisher et al. (1994) found that blinding reviewers improves the quality of review, but
larger studies conducted by van Rooyen et al. (1998), Justice et al. (1998), Godlee et al. (1998)
found that blinding does not have this effect. While a study by Walsh et al. (2000) concluded that
openness improves the quality and courtesy of reviewer reports, other studies found no evidence
that openness improves peer review (van Rooyen et al. 1998, 1999, 2010; Justice et al. 1998,
Godlee et al. 1998). More research is therefore needed to demonstrate more conclusively the
extent to which different approaches to peer review impact quality, consistency, and effectiveness.
Peer review by funding agencies varies considerably, depending on the method used.
Some agencies convene in-person panels, while others may handle the review process remotely
via secure websites or email, or by some combination of in-person and remote (Shamoo and
Resnik 2015). The National Institutes of Health (NIH), for example, uses study sections
composed of experts to review research grants. The study section usually meets in person to
review grant proposals, with materials distributed in advance. A proposal will normally be
assigned a primary and a secondary reviewer. These reviewers will present the proposal to the
group and provide an assessment. The entire group will also evaluate the proposal and score it
based on specific review criteria, including scientific significance, methodology, qualifications of
the principal investigator, institutional resources, adequacy of the budget, preliminary research,
and potential impact of the study (Shamoo and Resnik 2015). The NIH requires reviewers to
declare conflicts of interest (COIs) and prohibits individuals from reviewing proposals submitted
by colleagues from their institution, recent collaborators, or former students or advisors.
The final funding decision is made by NIH leadership, based on recommendations from the study sections.
Bias in Peer Review
Although the peer review system is designed to provide an ‘impartial’ quality assessment,
evidence indicates that various biases can impact decisions related to publication and funding
(Lee et al. 2012, Shamoo and Resnik 2015). One well-documented type of bias is the tendency
for journals to publish positive or confirmatory results rather than negative ones. Initial studies
of this phenomenon conducted by Mahoney (1977), Easterbrook et al. (1991), and Stern and
Simes (1997) found that clinical trials reporting positive results were more likely to be published
than those reporting negative results, and subsequent research confirmed these findings (Lee et
al. 2012). A systematic review and meta-analysis of studies of publication bias conducted by
Dwan et al. (2013) found that clinical trials reporting positive results are more likely to be
published than those reporting negative results. While there is substantial evidence of a bias in
favor of publishing positive results, it is unclear whether this bias is due to decisions made by
reviewers, editors, or authors. It may be the case that most of the bias results from authors’
decisions not to publish negative results rather than reviewer or editor preferences for positive
results (Olson et al. 2002).
Numerous studies have shown that gender bias impacts the funding of grant proposals
(Wenneras and Wold 1997, Shen 2013). Bornmann et al. (2007) conducted a meta-analysis of 21
studies of grant peer review conducted between 1979 and 2004 and found that men were 7%
more likely than women to receive funding, although there was considerable variability in the
impact of gender. The authors note that a variety of causal factors may contribute to this
discrepancy, including fewer women on peer review panels or in leadership positions. Waisbren
et al. (2008) found that differences in grant funding success between male and female applicants
disappeared when they controlled for academic rank, suggesting gender biases in grant peer
review may be a function of differences in career paths between men and women. Two studies
of gender bias in NIH grant review by Kaatz and coauthors suggest that an awareness of an
applicant’s gender may function as a subconscious (implicit) influence on decision-making.
These studies found that reviewers consistently gave female applicants lower scores than male
applicants, even when they used similar words and phrases to describe their proposals (Kaatz et
al. 2015, 2016).
Gender bias is more difficult to study in journal peer review than in grant review because
most journals do not disclose the names of reviewers. However, Helmer et al. (2017) obtained
gender data from Frontiers journals, which include the names of the reviewers and associate
editors alongside the article accepted for publication. They analyzed data from 126,000 authors,
43,000 reviewers, and 9000 editors for 41,000 articles published in 142 journals from the natural
and social sciences, medicine, engineering, and the humanities and found that women are
underrepresented in the peer review process and that there is a strong same-gender preference
(e.g. male editors give higher rankings to male authors; female editors give higher rankings to female authors).
Grod et al. (2008) also found that acceptance rates for papers with female first authors increased
significantly after Trends in Ecology and Evolution adopted a double-blind review format.
However, other studies have shown little to no bias regarding gender. For instance, a study in
biosciences (989 responses) found that the gender of the first author had no significant effect on
the reviewers’ recommendations or the acceptance rate (Borsuk et al. 2009). Other studies have
found no significant difference in acceptance rates between male and female first-authored
papers (Tregenza 2002, Lane and Linden 2009).
Gender disparities may impact peer review. Despite efforts to encourage women to
pursue careers in science, there remain important gender disparities in science on a global scale
and in most developed countries, including the US and Canada (Larivière et al.). Overall, many
different factors impact gender discrepancies in science, including culture, education, workplace
environment, childbearing and rearing responsibilities, labor distribution within teams and career
decisions (Ceci and Williams 2011). Although implicit bias in peer review can play
a role in the underrepresentation of women in science, it is but one among many confounding factors.
Like gender, race and ethnicity also appear to influence the peer review process. Ginther
and colleagues published several studies of racial and ethnic bias in the grant peer review process
in the US. Their first study found that black applicants were 10% less likely than white
applicants to receive funding for R01 grants when other relevant factors, such as education,
training, previous awards, and publication record were controlled for (Ginther et al. 2011).
Another study found that biases against black applicants for NIH R01 grants decreased when one
included medical school affiliation: blacks from medical schools were only 7.8% less likely to
receive funding than whites (Ginther et al. 2012). A third found that white women were no less
likely than white men to receive funding, but that Asian and black women were less likely to
receive funding, when controlling for relevant factors (Ginther et al. 2016). However, Jang et al.
(2013) conducted a bibliometric analysis comparing research productivity of black and white
applicants and found that the NIH peer review process is not biased against black applicants: racial
and ethnic differences in funding disappear when one controls for research productivity (Jang et al. 2013).
There is also evidence of bias in peer review related to nationality and institutional
affiliation (Lee et al. 2012). Ross et al. (2006) studied abstracts accepted at the American Heart
Association’s Scientific Sessions before and after it instituted double-blind review and found that
blinding the reviewers to the authors’ names reduced biases related to nationality and
institutional affiliation. More specifically, Ross et al. (2006) showed that when the affiliation of
researchers was made public, papers from US institutions were accepted 7.4% more often than
during blinded review, while papers from non-US institutions were accepted 0.9% less often than
during blinded review. A study of abstract acceptance by Timmer et al. (2000) also found
evidence of bias related to nationality, and a study by Ernst and Kienbacher (1991) found that
reviewers were more likely to accept articles submitted by authors who have the same nationality
as that of the journal. Murray et al. (2016) found that funding success and the award amount
were significantly lower for smaller institutions submitting grant applications to Canada’s
Natural Sciences and Engineering Research Council Discovery Grant program. However,
Garfunkle et al. (1994) found that institutional ranking in terms of NIH-funding in the US did not
impact reviewers’ recommendations or the acceptance rate for major papers submitted to a
biomedical journal, although it did impact recommendations and acceptance rates for brief reports.
Groupthink and Bias in Science
We will now turn our attention to bias related to groupthink, which we will define as a
situation in which the psychological drive for group consensus is so strong that dissent is hidden,
rejected or dissuaded.1 The social psychologist Irving Janis (1972) coined the term ‘groupthink’
to describe decision-making processes that have led to foreign-policy fiascos, such as the
failed US invasion of Cuba at the Bay of Pigs in April 1961. After Fidel Castro led a revolution to
overthrow the Cuban government in 1959, the US began looking for ways to undermine or
change his regime. American intelligence officials and military leaders wrongly assumed that
the 1,400 Cuban exiles who took part in the invasion would be able to instigate a successful
venture to oust Castro, but they were vastly outnumbered by the Cuban army and surrendered
within 24 hours (Janis 1982). Janis observed that groupthink led to this ill-fated military venture
by causing decision-makers not to examine evidence critically or to consider alternative courses of
action. Janis’ work built upon earlier studies of cohesiveness and conformity in group decision-making
(Janis 1972, 1982).
In scientific research, groupthink may lead researchers to reject innovative or
controversial ideas, hypotheses or methodologies that challenge the status quo. Philosophers,
historians, and sociologists have observed that scientists often resist new ideas, despite their
reputation for open-mindedness (Barber 1961, Kuhn 1962). The great quantum physicist
Max Planck has been quoted as saying: “A new scientific truth does not triumph by
convincing its opponents and making them see the light, but rather because its opponents
eventually die, and a new generation grows up that is familiar with it” (Planck 1962: 33-34).
In his seminal work on the history of science, The Structure of Scientific Revolutions,
Kuhn described the role of conformity and close-mindedness in scientific advancement.
According to Kuhn (1962), science progresses through different stages. In the first stage, known
as normal science, scientists conduct their research within a paradigm that defines the field. A
paradigm is a way of doing science that includes basic assumptions, beliefs, principles, theories,
methods, and epistemic values that establish how one solves problems within the normal science
tradition; normal science involves consensus within a scientific community. For example,
Newtonian physics was a normal science tradition that established ways of solving problems
related to motion and gravitation (Kuhn 1962). During the normal science stage,
scientists attempt to apply the paradigm to problems they can solve, and they resist
theories, methods, and ideas that challenge the paradigm. At this stage, scientists tend not to think
outside the theoretical limits of the paradigm, which limits novel ideas. However, as problems emerge
that cannot be solved within the paradigm, scientists start to consider new ideas, theories, and
methods that form the basis of a new and emerging paradigm. A scientific revolution occurs
when the new paradigm replaces the old. For example, during the early twentieth century,
Newtonian physics succumbed to quantum mechanics and relativity theory (Kuhn 1962). A
paradigm-shift is not a purely rational process driven by logical argumentation and empirical
evidence; rather, it involves a change in perception, or a willingness to see the world in a
different way (Kuhn 1962). After the revolution, a new paradigm takes hold and the process begins anew.
1 Our definition is loosely inspired by Janis’s (1972) definition but has been modified so as to apply to the context
of science and peer review.
Some philosophers have argued that a certain amount of closed-mindedness, known as
epistemological conservatism, is justified in scientific research. The rationale for this
epistemological stance is that change in a network of beliefs should be based on substantial
empirical evidence. Since changes in beliefs can consume a considerable amount of time and
effort and our cognitive resources are limited, we should not change our beliefs, especially ones
that play a central role in our worldview, without compelling evidence (Quine 1961, Sklar 1975,
Lycan 1988, Resnik 1994). For example, because Einstein’s general theory of relativity
contradicted the fundamental principle of Newtonian physics that space and time are immutable,
it took extraordinary proof, i.e. the observation of the sun’s gravity bending light from a star
during a solar eclipse in 1919, to confirm the theory (Buchen 2009). While it seems clear that a
certain amount of conservatism makes sense in research, scientists should be careful to avoid
dogmatism: although they should exercise a degree of skepticism toward hypotheses and theories
that challenge the status quo, they should remain open to new ideas (Resnik 1994).
Groupthink and Bias in Peer-Review
Issues surrounding groupthink in scientific norms may permeate the process
of peer review, resulting in the rejection of innovative or controversial manuscripts and
research proposals. It is plausible to hypothesize that lack of social diversity could contribute to
groupthink in peer review. As noted earlier, Helmer et al. (2017) found that women are
underrepresented in the population of peer reviewers for Frontiers journals. Since racial and
ethnic minorities are underrepresented in science (Nelson 2007, Committee on Science,
Engineering, and Public Policy 2010), it is likely that they are also underrepresented in the
reviewer population. It is possible that there is also a lack of diversity with respect to nationality
and institutional affiliation in the reviewer population, although we know of no published
research on this topic.
If we suppose that there is a lack of social diversity in the population of peer reviewers, it
is conceivable that this type of bias could impact the review process and that increasing reviewer
diversity would decrease groupthink (Longino 1990). However, it is important to recognize that
this argument assumes that diversity with respect to social factors translates into increased
willingness to accept ideas that challenge the status quo, and that lack of such diversity has the
opposite effect, neither of which might be the case. It is often assumed that social diversity leads
to diversity with respect to opinions, beliefs, and epistemological norms (i.e. intellectual
diversity), but it has also been argued that this might not occur (Card 2005). For example, a
socially diverse group of researchers could still fall prey to groupthink because they lack
intellectual diversity and favor the status quo, or want to preserve their chances for peer
recognition. Also, a socially homogenous group of researchers might not fall prey to groupthink
because they are intellectually diverse and are open to new and controversial ideas.
Clearly, more research is needed on the relationship between social diversity and peer
review. Some argue, based on standpoint theory, that marginalized groups may have a different
or perhaps an even better view of social phenomena, given their position at the margins of
society (Harding 2004). Although some aspects of standpoint theory remain contentious, as it
methodically and systematically questions positions of power (Wylie and Sismondo
2015), it may be an effective tool for questioning the status quo.
There are different ways that groupthink could occur in the peer review process. Because
of the difficulty in finding peer-reviewers, editors may resort to the same network of individuals
to review manuscripts. Groupthink may set in as editors rely on a limited network that reduces
diversity of reviews. The hyper-specialization of certain fields may narrow the choice of
qualified reviewers significantly and thus reduce their diversity. Moreover, with time,
editors may become overly trusting of certain peers, especially those with similar scientific
stances. Editors may come to trust individuals of high academic standing blindly, which may
undermine proper evaluation of their reviews.
Another type of groupthink involves the occurrence of dogmatism in the peer review
process itself; i.e., a predisposition to reject innovative or controversial theories, hypotheses, or
methods. Shamoo and Resnik (2015) have observed that a certain amount of dogmatism may be
unavoidable in peer review because reviewers are chosen for their expertise, and experts are
usually established researchers with theoretical and methodological commitments (or biases) that
can compromise their open-mindedness. An anthropological perspective on peer review has
shown that even when trying to promote fairness, peer reviewers usually think that research that
is similar to their own (in terms of methods, topics, results) is of a higher standard – making
criteria for excellence somewhat subjective (Lamont 2009). If excellence is fitted to suit the
present scientific elite, the status quo will most likely be maintained and novelty discouraged; in
effect, past truths would be left unchallenged and perpetuated. Multidisciplinary panels on review
boards have created a need for reviewers to justify their criteria for excellence, and this often
does help to reduce groupthink and bias. However, this is not the case in journal peer review,
where there is no open multidisciplinary debate. To reduce or counteract closed-mindedness,
editors could select reviewers who are not established researchers, but this strategy could
potentially undermine the quality of peer review.
While there is anecdotal evidence (i.e. complaints from scientists) that intellectual
dogmatism impacts peer review (Chubin and Hackett 1990), it is difficult to obtain systematic
data that supports this hypothesis (Lee et al. 2012). A study by Resnik et al. (2008) found that
50.5% of 283 scientists responding to a survey conducted at a government biomedical research
institution had experienced bias in the peer review process. However, this study did not define
bias and gathered data on scientists’ perceptions of bias, not on bias itself.
An interesting study conducted by Resch et al. (2000) randomly assigned 398 reviewers
to receive papers on conventional or non-conventional treatments for obesity. The papers were
virtually the same with respect to research methodology and design; the main difference related
to the type of intervention. 141 reviewers responded to the review request. 67% of the
reviewers who received papers on the effectiveness of conventional treatment recommended
publication, as opposed to 57% of those who received papers on the effectiveness of non-conventional
treatment. This difference was statistically significant, suggesting that reviewers
are biased in favor of conventional therapies (Resch et al. 2000). While the study by Resch et al.
presents some useful data related to bias in peer review, it is limited to certain types of bias in
clinical research and may not generalize to other fields. Also, it does not address some of the
deeper issues that underlie the bias, such as dogmatic allegiance to various theories, methods,
ideas, and so on.
Research conducted by Campanario (2009) spans various fields of science and provides
evidence for dogmatism in science. Campanario collected data on the peer review process for 16
papers from the fields of medicine, biochemistry, chemistry, and physics; while these papers did
eventually earn Nobel Prizes for the authors, they were severely panned during the peer review
process or were rejected. He obtained evidence concerning the review of these papers from the
authors’ autobiographies, personal accounts, Nobel lectures, and other written reports. For
example, Arne Tiselius won the Nobel Prize in Chemistry in 1948 for his work on
electrophoresis and adsorption, but the editors at Biochemical Journal, where he initially sent his
key paper, rejected it because it focused too much on physical science (Campanario 2009). David
Lee, Douglas Osheroff, and Robert Richardson received the Nobel Prize in Physics in 1996 for
their discovery of superfluid helium, but Physical Review Letters initially rejected their work
because the reviewers did not believe that the physical system they described was possible. They
succeeded in overturning this decision by convincing the editors that their discovery was sound
(Campanario 2009). Murray Gell-Mann received the Nobel Prize in Physics in 1969 for his work
on the phenomenon of “strangeness” in particle physics, but the editors of Physical Review
objected to his use of the word “strangeness,” and he had to change his terminology to “new unstable
particles” (Campanario 2009: 553). Thomas Cech won the Nobel Prize in Chemistry in 1989 for
discovering that some ribonucleic acid (RNA) molecules can act as enzymes, but the reviewers
for his paper submitted to Nature strongly objected to his decision to characterize the properties
he observed as “enzyme-like” or as a type of “catalysis” (Campanario 2009: 553). Most
biochemists at that time believed that RNA could not act as an enzyme.
In the discussion section at the end of his paper, Campanario offers dogmatism as a
possible explanation for the encounters that these Nobel Prize winners had with scientific peer
review: “A possible explanation for peer resistance to scientific discovery lies in the fact that
new theories or discoveries often clash with orthodox viewpoints held by the referees”
(Campanario 2009: 558). He also suggests that the difficulties some Nobel Prize winners have
had with peer review may be due to delayed recognition: some discoveries are so far ahead
of their time that it takes other researchers years, perhaps even decades, to appreciate them (Stent
1972, Garfield 1989, Campanario 2009). Of course, delayed recognition may simply be another
form of dogmatism insofar as scientists fail to recognize research because it contradicts the status
quo. In his conclusion, Campanario also observes: “Peer review has been shown to be plagued
with many imperfections…there is a real risk that evidence contrary to the established views can
be suppressed or disregarded” (Campanario 2009: 559).
While Campanario’s research provides compelling evidence of dogmatism in scientific
peer review, the sample for his study is highly selective, and the experiences these Nobel Prize
winners had with peer review may not reflect other researchers’ experiences. Nobel Prize
winners are usually chosen for their highly innovative and influential contributions to science,
and the dogmatism encountered by some Nobel Prize winners may not be as prevalent
throughout science. However, it does seem reasonable to assume that non-Nobel Prize winning
scientists may also encounter strong resistance to innovative research they submit to journals.
Although peer review is essential to the evaluation of scientific research, it is susceptible
to various biases, some of which may result from or contribute to groupthink. To counteract
groupthink in peer review, scientists should take steps to enhance the diversity of reviewers with
regards to gender, race, nationality, institutional affiliation, and other social factors that could
impact reviewer judgments. Intellectual diversity (e.g., including individuals with different
methods and expertise) should also be promoted, as should the funding and publication of
innovative or controversial research that challenges the status quo. Editors and funding agency
leaders should also stress open-mindedness in the review of research and seek to publish and
fund innovative and controversial research that meets appropriate standards of rigor,
reproducibility, objectivity, and integrity. To overcome confirmatory biases, editors should be
open to publishing research that reports negative results if it meets appropriate scientific standards.
Journal editors and funding agency leaders should collect data on peer review, so that
they can better understand how to control and/or mitigate biases that may impact the process.
Journal editors and funding agency leaders should conduct their own independent assessment of
reviewer reports so that they can determine whether these reports are biased. Journals should
also consider experimenting with procedures, such as double-blind review, which may minimize
the impact of biases. Reviewers, editors, and funding agency leaders should try to address their
own biases so that manuscripts and research applications can receive a fair hearing. Additional
meta-research on the factors related to groupthink in science will help researchers, editors, and
funding agency leaders understand how to promote neutrality and integrity in peer review.
This research was supported, in part, by the National Institute of Environmental Health Sciences
(NIEHS), National Institutes of Health (NIH) and the Fonds de Recherche du Québec en Santé
(FRQS). This paper does not represent the views of the NIEHS, NIH, the FRQS, or any other organization.
Barber B. 1961. Resistance by scientists to scientific discovery. Science 134(3479):596-602.
Buchen L. 2009. May 29, 1919: a major eclipse, relatively speaking. Wired, May 29, 2009.
Available at: https://www.wired.com/2009/05/dayintech_0529/. Accessed: April 17, 2017.
Armstrong JS. 1997. Peer review for journals: evidence on quality control, fairness, and
innovation. Science and Engineering Ethics 3(1):63-84.
Baggs JG, Broome ME, Dougherty MC, Freda MC, Kearney MH. 2008. Blinding in peer review:
the preferences of reviewers for nursing journals. Journal of Advances in Nursing 64(2):131-138.
Bornmann L, Mutz R, Daniel HD. 2007. Gender differences in grant peer review: a
meta-analysis. Journal of Informetrics 1(3):226-238.
Borsuk RM, Aarssen LW, Budden AE, Koricheva J, Leimu R, Tregenza T, Lortie CJ. 2009. To
name or not to name: the effect of changing author gender on peer review. Bioscience
Campanario, JM. 2009. Rejecting and resisting Nobel class discoveries: accounts by Nobel
Laureates. Scientometrics 81(2):549-565.
Card RF. 2005. Making sense of the diversity-based legal argument for affirmative action.
Public Affairs Quarterly 19(1):11-24.
Ceci SJ, Williams WM. 2011. Understanding current causes of women's underrepresentation in
science. Proceedings of the National Academy of Sciences of the United States of America
Chubin D, Hackett E. 1990. Peerless Science: Peer Review and U.S. Science Policy. Albany,
NY: State University of New York Press.
Committee on Science, Engineering, and Public Policy. 2010. Expanding Underrepresented
Minority Participation: America's Science and Technology Talent at the Crossroads. National
Academies Press: Washington, DC.
Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group. 2013. Systematic
review of the empirical evidence of study publication bias and outcome reporting bias - an
updated review. PLoS One 8(7):e66844.
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. 1991. Publication bias in clinical research.
Ernst E, Kienbacher T. 1991. Chauvinism. Nature 352(6336):560.
Fisher M, Friedman SB, Strauss B. 1994. The effects of blinding on acceptance of research
papers by peer review. Journal of the American Medical Association 272(2):143-146.
Garfield E. 1989. Delayed recognition in scientific discovery: citation frequency analysis aids
the search for case histories. Current Contents 23:3-9.
Garfunkel JM, Ulshen MH, Hamrick HJ, Lawson EE. 1994. Effect of institutional prestige on
reviewers' recommendations and editorial decisions. Journal of the American Medical Association.
Ginther DK, Haak LL, Schaffer WT, Kington R. 2012. Are race, ethnicity, and medical school
affiliation associated with NIH R01 type 1 award probability for physician investigators?
Academic Medicine 87(11):1516-1524.
Ginther DK, Kahn S, Schaffer WT. 2016. Gender, Race/Ethnicity, and National Institutes of
Health R01 Research Awards: Is There Evidence of a Double Bind for Women of Color?
Academic Medicine 91(8):1098-1107.
Ginther DK, Schaffer WT, Schnell J, Masimore B, Liu F, Haak LL, Kington R. 2011. Race,
ethnicity, and NIH research awards. Science 333(6045):1015-1019.
Godlee F, Gale CR, Martyn CN. 1998. Effect on the quality of peer review of blinding reviewers
and asking them to sign their reports: a randomized controlled trial. Journal of the American
Medical Association 280(3):237-240.
Grod ON, Budden AE, Tregenza T, Koricheva J, Leimu R, Aarssen LW, Lortie CJ. 2008.
Systematic variation in reviewer practice according to country and gender in the field of ecology
and evolution. PLoS One 3(9):e3202.
Harding S. 2004. A socially relevant philosophy of science? Resources from standpoint theory’s
controversiality. Hypatia 19(1):25–47.
Helmer M, Schottdorf M, Neef A, Battaglia D. 2017. Gender bias in scholarly peer review.
Rodgers P, ed. eLife 6:e21718.
Ho RC, Mak KK, Tao R, Lu Y, Day JR, Pan F. 2013. Views on the peer review system of
biomedical journals: an online survey of academics from high-ranking universities. BMC
Medical Research Methodology 13:74.
Jang J, Vannier MW, Wang F, Deng Y, Ou F, Bennett J, Liu Y, Wang G. 2013. A bibliometric
analysis of academic publication and NIH funding. Journal of Informetrics 7(2):318-324.
Janis IL. 1972. Victims of Groupthink: A Psychological Study of Foreign Policy Decisions and
Fiascoes. Boston, MA: Houghton Mifflin Company.
Janis IL. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes, 2nd ed. Boston,
MA: Cengage Learning.
Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. 1998. Does masking author identity
improve peer review quality? A randomized controlled trial. PEER Investigators. Journal of the
American Medical Association 280(3):240-242.
Kaatz A, Lee YG, Potvien A, Magua W, Filut A, Bhattacharya A, Leatherberry R, Zhu X, Carnes
M. 2016. Analysis of National Institutes of Health R01 application critiques, impact, and criteria
scores: does the sex of the principal investigator make a difference? Academic Medicine
Hunter J. 2012. Post-publication peer review: opening up scientific conversation. Frontiers in
Computational Neuroscience 6 (August 30).
Kaatz A, Magua W, Zimmerman DR, Carnes M. 2015. A quantitative linguistic analysis of
National Institutes of Health R01 application critiques from investigators at one institution.
Academic Medicine 90(1):69–75.
Kuhn TS. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lane JA, Linden DJ. 2009. Is there gender bias in the peer review process at Journal of
Neurophysiology? Journal of Neurophysiology 101(5):2195–2196.
Lamont M. 2009. How Professors Think: Inside the Curious World of Academic Judgment.
Cambridge, MA: Harvard University Press.
Latour B, Woolgar S. 1979. Laboratory Life: The Social Construction of Scientific Facts.
London, UK: Sage.
Larivière V, Ni C, Gingras Y, Cronin B, Sugimoto C. 2013. Global gender disparities in science.
Nature 504 (7479):211–213.
Lauer MS, Krumholz HM, Topol EJ. 2015. Time for a prepublication culture in clinical
research? Lancet 386(1012):2447–2449.
Lee CJ, Sugimoto CR, Zhang G, Cronin B. 2012. Bias in peer review. Journal of the American
Society for Information Science and Technology 64(1):2–17.
Longino H. 1990. Science as Social Knowledge. Princeton, NJ: Princeton University Press.
Lycan WG. 1988. Judgement and Justification. Cambridge, UK: Cambridge University Press.
Mahoney MJ. 1977. Publication preferences: An experimental study of confirmatory bias in the
peer review system. Cognitive Therapy and Research 1(2):161–175.
McNutt RA, Evans AT, Fletcher RH, Fletcher SW. 1990. The effects of blinding on the quality of
peer review. A randomized trial. Journal of the American Medical Association 263(10):1371-
Murray DL, Morris D, Lavoie C, Leavitt PR, MacIsaac H, Masson ME, Villard MA. 2016. Bias
in research grant evaluation has dire consequences for small universities. PLoS One
Nelson DJ. 2007. A National Analysis of Minorities in Science and Engineering Faculties at
Research Universities. Norman, OK: University of Oklahoma.
Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, Zhu Q, Reiling J, Pace B.
2002. Publication bias in editorial decision making. Journal of the American Medical Association
Planck M. 1962. Quoted in Kuhn TS. 1962. The Structure of Scientific Revolutions. Chicago,
IL: University of Chicago Press, pp. 33-34.
Quine WV. 1961. From a Logical Point of View. New York, NY: Harper and Row.
Resch KI, Ernst E, Garrow J. 2000. A randomized controlled study of reviewer bias against an
unconventional therapy. Journal of the Royal Society of Medicine 93(4):164-167.
Resnik DB. 1994. Methodological conservatism and social epistemology. International Studies in
the Philosophy of Science 8(3):247-264.
Resnik DB, Elmore SA. 2016. Ensuring the quality, fairness, and integrity of journal peer
review: a possible role for editors. Science and Engineering Ethics 22(1):169-188.
Resnik DB, Gutierrez-Ford C, Peddada S. 2008. Perceptions of ethical problems with scientific
journal peer review: an exploratory study. Science and Engineering Ethics 14(3):305-310.
Ross JS, Gross CP, Desai MM, Hong Y, Grant AO, Daniels SR, Hachinski VC, Gibbons RJ,
Gardner TJ, Krumholz HM. 2006. Effect of blinded peer review on abstract acceptance. Journal
of the American Medical Association 295(14):1675-1680.
Shamoo AE, Resnik DB. 2015. Responsible Conduct of Research, 3rd ed. New York, NY: Oxford University Press.
Shen H. 2013. Mind the gender gap. Nature 495(7439):22-24.
Sklar L. 1975. Methodological conservatism. Philosophical Review 84(3):374-400.
Smith R. 2006. Peer review: A flawed process at the heart of science and journals. Journal of the
Royal Society of Medicine 99(4):178–182.
Stent GS. 1972. Prematurity and uniqueness in scientific discovery. Scientific American 227(6):
Stern JM, Simes RJ. 1997. Publication bias: evidence of delayed publication in a cohort study of
clinical research projects. British Medical Journal 315(7109):640-645.
Teixeira da Silva JA, Dobránszki J. 2015. Problems with traditional science publishing and
finding a wider niche for post-publication peer review. Accountability in Research 22(1):22-40.
Timmer A, Hilsden RJ, Sutherland LR. 2001. Determinants of abstract acceptance for the
Digestive Diseases Week--a cross sectional study. BMC Medical Research Methodology 1:13.
Tregenza T. 2002. Gender bias in the refereeing process? Trends in Ecology and Evolution
van Rooyen S, Delamothe T, Evans SJ. 2010. Effect on peer review of telling reviewers that their
signed reviews might be posted on the web: randomised controlled trial. British Medical Journal
van Rooyen S, Godlee F, Evans S, Black N, Smith R. 1999. Effect of open peer review on
quality of reviews and on reviewers' recommendations: a randomised trial. British Medical Journal.
van Rooyen S, Godlee F, Evans S, Smith R, Black N. 1998. Effect of blinding and unmasking
on the quality of peer review: a randomized trial. Journal of the American Medical Association
Waisbren SE, Bowles H, Hasan T, Zou KH, Emans SJ, Goldberg C, Gould S, Levine D,
Lieberman E, Loeken M, Longtine J, Nadelson C, Patenaude AF, Quinn D, Randolph AG, Solet
JM, Ullrich N, Walensky R, Weitzman P, Christou H. 2008. Gender differences in research grant
applications and funding outcomes for medical school faculty. Journal of Women’s Health
Pulverer B. 2016. Preparing for preprints. EMBO Journal 35 (24): 2617–19.
Walsh E, Rooney M, Appleby L, Wilkinson G. 2000. Open peer review: a randomised controlled
trial. British Journal of Psychiatry 176:47-51.
Wenneras C, Wold A. 1997. Nepotism and sexism in peer-review. Nature 387(6631):341-343.