The unbearable asymmetry of bullshit
In this essay, I discuss the problem of plausible-sounding bullshit in science, and I describe one particularly insidious method for producing it. Because it takes so much more energy to refute bullshit than it does to create it, and because bullshit can be so damaging to the integrity of empirical research as well as to the policies that are based upon such research, I suggest that addressing this issue should be a high priority for publication ethics.
Earp BD. The unbearable asymmetry of bullshit. HealthWatch Newsletter 2016;101:4-5
SCIENCE and medicine have done a lot for the world. Diseases have been eradicated, rockets have been sent to the moon, and convincing, causal explanations have been given for a whole range of formerly inscrutable phenomena. Notwithstanding recent concerns about sloppy research, small sample sizes, and challenges in replicating major findings [1-3] (concerns I share and which I have written about at length [4-10]), I still believe that the scientific method is the best available tool for getting at empirical truth [11]. Or to put it a slightly different way (if I may paraphrase Winston Churchill’s famous remark about democracy): it is perhaps the worst tool, except for all the rest.
In other words, science is flawed. And scientists are people too.

While it is true that most scientists (at least the ones I know and work with) are hell-bent on getting things right, they are not therefore immune from human foibles. If they want to keep their jobs, at least, they must contend with a perverse ‘publish or perish’ incentive structure that tends to reward flashy findings and high-volume ‘productivity’ over painstaking, reliable research [12]. On top of that, they have reputations to defend, egos to protect, and grants to pursue. They get tired. They get overwhelmed. They don’t always check their references, or even read what they cite [13]. They have cognitive and emotional limitations, not to mention biases, like everyone else [14-16].
At the same time, as the psychologist Gary Marcus has recently put it [17], “it is facile to dismiss science itself. The most careful scientists, and the best science journalists, realize that all science is provisional. There will always be things that we haven’t figured out yet, and even some that we get wrong.” But science is not just about conclusions, he argues, which are occasionally (or even frequently) incorrect. Instead, “It’s about a methodology for investigation, which includes, at its core, a relentless drive towards questioning that which came before.” You can both “love science,” he concludes, “and question it.”
I agree with Marcus. In fact, I agree with him so much that I would like to go a step further: if you love science, you had better question it, and question it well, so it can live up to its potential. And it is with that in mind that I bring up the subject of bullshit.
There is a veritable truckload of bullshit in science (though, to be fair, there is a lot of non-bullshit in science, too!). When I say bullshit, I mean arguments, data, publications, or even the official policies of scientific organizations that give every impression of being perfectly reasonable, of being well-supported by the highest quality of evidence, and so forth, but which don’t hold up when you scrutinize the details. Bullshit has the veneer of truth-like plausibility. It looks good. It sounds right. But when you get right down to it, it stinks.
THERE ARE many ways to produce scientific bullshit [18]. One way is to assert that something has been ‘proven’, ‘shown’, or ‘found’, and then cite, in support of this assertion, a study that has actually been heavily critiqued (fairly and in good faith, let us say, although that is not always the case, as we soon shall see) without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations [19].
Another way is to refer to evidence as being of ‘high quality’ simply because it comes from an in-principle relatively strong study design, like a randomized controlled trial, without checking the specific materials that were used in the study to confirm that they were fit for purpose [20]. There is also the problem of taking data that were generated in one environment and applying them to a completely different environment (without showing, or in some cases even attempting to show, that the two environments are analogous in the right way) [21]. There are other examples I have explored in other contexts [18], and many of them are fairly well-known.
But there is one example I have only recently come across, and of which I have not yet seen any serious discussion. I am referring to a certain sustained, long-term publication strategy, apparently deliberately carried out (although motivations can be hard to pin down), that results in a stupefying, and in my view dangerous, paper-pile of scientific bullshit. It can be hard to detect, at first, with an untrained eye (you have to know your specific area of research extremely well to begin to see it), but once you do catch on, it becomes impossible to un-see.
I don’t know what to call this insidious tactic (although I will describe it in just a moment). But I can identify its end result, which I suspect researchers of every stripe will be able to recognize from their own sub-disciplines: it is the hyper-partisan and polarized [22,23], but by all outward appearances dispassionate and objective, ‘systematic review’ of a controversial subject.
To explain how this tactic works, I am going to make up a hypothetical researcher who engages in it, and walk you through his ‘process’, step by step. Let’s call this hypothetical researcher Lord Voldemort. While everything I am about to say is based on actual events, and on the real-life behavior of actual researchers, I will not be citing any specific cases (to avoid the drama). Moreover, we should be very careful not to confuse Lord Voldemort with any particular individual. He is an amalgam of researchers who do this; he is fictional.
In this story, Lord Voldemort is a prolific proponent of a certain controversial medical procedure, call it X, which many have argued is both risky and unethical. It is unclear whether Lord Voldemort has a financial stake in X, or some other potential conflict of interest. But in any event he is free to press his own opinion. The problem is that Lord Voldemort doesn’t play fair. In fact, he is so intent on defending this hypothetical intervention that he will stop at nothing to flood the literature with arguments and data that appear to weigh decisively in its favor.
As the first step in his long-term strategy, he scans various scholarly databases. If he sees any report of an empirical study that does not put X in an unmitigatedly positive light, he dashes off a letter to the editor attacking the report on whatever imaginable grounds. Sometimes he makes a fair point; after all, most studies do have limitations (see above). But often what he raises is a quibble, couched in the language of an exposé.
These letters are not typically peer-reviewed (which is not to say that peer review is an especially effective quality control mechanism) [24,25]; instead, in most cases, they get a cursory once-over by an editor who is not a specialist in the area. Since journals tend to print the letters they receive unless they are clearly incoherent or in some way obviously out of line (and since Lord Voldemort has mastered the art of using ‘objective’-sounding scientific rhetoric [26] to mask objectively weak arguments and data), they end up becoming a part of the published record with every appearance of being legitimate critiques.

The subterfuge does not end there.
The next step is for our anti-hero to write a ‘systematic review’ at the end of the year (or, really, whenever he gets around to it). In it, He Who Shall Not Be Named predictably rejects all of the studies that do not support his position as being ‘fatally flawed’, or as having been ‘refuted by experts’ (namely, by himself and his close collaborators, typically citing their own contestable critiques), while at the same time he fails to find any flaws whatsoever in studies that make his pet procedure seem on balance beneficial.
The result of this artful exercise is a heavily skewed benefit-to-risk ratio in favor of X, which can now be cited by unsuspecting third parties. Unless you know what Lord Voldemort is up to, that is, you won’t notice that the math has been rigged.
SO WHY doesn’t somebody put a stop to all this? As a matter of fact, many have tried. More than once, the Lord Voldemorts of the world have been called out for their underhanded tactics, typically in the ‘author reply’ pieces rebutting their initial attacks. But rarely are these ripostes (constrained as they are by conventionally minuscule word limits, and buried as they are in some corner of the Internet) noticed, much less cited in the wider literature. Certainly, they are far less visible than the ‘systematic reviews’ churned out by Lord Voldemort and his ilk, which constitute a sort of ‘Gish Gallop’ that can be hard to defeat.
The term ‘Gish Gallop’ is a useful one to know. It was coined by the science educator Eugenie Scott in the 1990s to describe the debating strategy of one Duane Gish [27]. Gish was an American biochemist turned Young Earth creationist, who often invited mainstream evolutionary scientists to spar with him in public venues. In its original context, it meant to “spew forth torrents of error that the evolutionist hasn’t a prayer of refuting in the format of a debate.” It also referred to Gish’s apparent tendency to simply ignore objections raised by his opponents.
A similar phenomenon can play out in debates in medicine. In the case of Lord Voldemort, the trick is to unleash so many fallacies, misrepresentations of evidence, and other misleading or erroneous statements, at such a pace, and with such little regard for the norms of careful scholarship and charitable academic discourse, that your opponents, who do, perhaps, feel bound by such norms, and who have better things to do with their time than to write rebuttals to each of your papers, face a dilemma. Either they can ignore you, or they can put their own research priorities on hold to try to combat the worst of your offenses.

It’s a lose-lose situation. Ignore you, and you win by default. Engage you, and you win like the pig in the proverb who enjoys hanging out in the mud.
As the programmer Alberto Brandolini is reputed to have said [28]: “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.” This is the unbearable asymmetry of bullshit I mentioned in my title, and it poses a serious problem for research integrity. Developing a strategy for overcoming it, I suggest, should be a top priority for publication ethics.
Brian D Earp
Visiting Scholar, The Hastings Center Bioethics Research Institute (Garrison, NY), and Research Associate, University of Oxford
A modified version of this essay was published in the online magazine Quillette on February 15, 2016. Please note that the article as it appears here is the ‘original’ (i.e., the final and definitive version), and should therefore be referred to in case of any discrepancies.

The author thanks Morgan Firestein and Diane O’Leary for feedback on an earlier draft of this manuscript.
Published by HealthWatch
1. Ioannidis JP. Why most published research findings are false. PLoS Medicine 2005;2(8):e124
2. Button KS et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 2013;14(5):365-376
3. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 2015;349(6251):aac4716
4. Earp BD, Trafimow D. Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology 2015;6(621):1-11
5. Earp BD et al. Out, damned spot: can the “Macbeth Effect” be replicated? Basic and Applied Social Psychology 2014;36(1):91-98
6. Earp BD. Psychology is not in crisis? Depends on what you mean by “crisis.” Huffington Post, 2 Sept 2015
7. Earp BD, Everett JAC. How to fix psychology’s replication crisis. Chronicle of Higher Education, 25 Oct 2015
8. Earp BD. Open review of the draft paper, “Replication initiatives will not salvage the trustworthiness of psychology” by James C Coyne. BMC Psychology 2016 [in press]
9. Everett JAC, Earp BD. A tragedy of the (academic) commons: interpreting the replication crisis in psychology as a social dilemma for early-career researchers. Frontiers in Psychology 2015;6(1152):1-4
10. Trafimow D, Earp BD. Badly specified theories are not responsible for the replication crisis in psychology. Theory & Psychology 2016 [in press]
11. Earp BD. Can science tell us what’s objectively true? The New
12. Nosek BA et al. Scientific utopia II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science 2012;7(6):615-631
13. Rekdal OB. Academic urban legends. Social Studies of Science 2014;44(4):638-654
14. Peterson D. The baby factory: difficult research objects, disciplinary standards, and the production of statistical significance. Socius 2016 [in press]
15. Duarte JL et al. Political diversity will improve social psychological science. Behavioral and Brain Sciences 2015 [in press]
16. Ball P. The trouble with scientists. Nautilus, 14 May 2015
17. Marcus G. Science and its skeptics. The New Yorker, 6 Nov 2013
18. Earp BD. Mental shortcuts [unabridged version]. The Hastings Center Report 2016 [in press]
19. Ioannidis JP. Limitations are not properly acknowledged in the scientific literature. Journal of Clinical Epidemiology 2007;60(4):324-329
20. Earp BD. Sex and circumcision. American Journal of Bioethics 2015;15(2):43-45
21. Bundick S. Promoting infant male circumcision to reduce transmission of HIV: a flawed policy for the US. Health and Human Rights Journal Blog, 31 Aug 2009
22. Ploug T, Holm S. Conflict of interest disclosure and the polarisation of scientific communities. Journal of Medical Ethics 2015;41(4):356-358
23. Earp BD. Addressing polarisation in science. Journal of Medical Ethics 2015
24. Smith R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 2006;99(4):178-182
25. Smith R. Classical peer review: an empty gun. Breast Cancer Research 2010;12(Suppl 4):S13
26. Roland MC. Publish and perish: hedging and fraud in scientific discourse. EMBO Reports 2007;8(5):424-428
27. Scott E. Debates and the globetrotters. The Talk Origins Archive, 1994
28. Brandolini A. The bullshit asymmetry principle. Lecture delivered at XP2014 in Rome and at ALE2014 in Krakow, 2014