Those Who Have the Gold Make the Evidence: How
the Pharmaceutical Industry Biases the Outcomes
of Clinical Trials of Medications
Received: 26 September 2010/Accepted: 3 February 2011/Published online: 15 February 2011
© Springer Science+Business Media B.V. 2011
Abstract Pharmaceutical companies fund the bulk of clinical research that is
carried out on medications. Poor outcomes from these studies can have negative
effects on sales of medicines. Previous research has shown that company-funded
research is much more likely to yield positive outcomes than research with any
other sponsorship. The aim of this article is to investigate the possible ways in
which bias can be introduced into research outcomes by drawing on concrete
examples from the published literature. Poorer methodology in industry-funded
research is not likely to account for the biases seen. Biases are introduced through a
variety of measures including the choice of comparator agents, multiple publication
of positive trials and non-publication of negative trials, reinterpreting data submitted
to regulatory agencies, discordance between results and conclusions,
conflict-of-interest leading to more positive conclusions, ghostwriting and the use of ‘‘seeding’’
trials. Thus far, efforts to contain bias have largely focused on more stringent rules
regarding conflict-of-interest (COI) and clinical trial registries. There is no evidence
that any measures that have been taken so far have stopped the biasing of clinical
research and it’s not clear that they have even slowed down the process. Economic
theory predicts that firms will try to bias the evidence base wherever the benefits of
doing so exceed the costs. The examples given here confirm what theory predicts. What will
be needed to curb and ultimately stop the bias that we have seen is a paradigm
change in the way that we treat the relationship between pharmaceutical companies
and the conduct and reporting of clinical trials.

Keywords Bias · Clinical trials · Conflict-of-interest · Ghostwriting · …

J. Lexchin (✉)
School of Health Policy and Management, York University, 4700 Keele St., Toronto,
ON M3J 1P3, Canada
University Health Network, Toronto, ON, Canada
Department of Family and Community Medicine, University of Toronto, Toronto, ON, Canada

Sci Eng Ethics (2012) 18:247–261
For the past couple of decades the pharmaceutical industry has operated on a
blockbuster model, relying on drugs that generate $1 billion or more in worldwide
sales to provide the rate of return that shareholders have come to demand. Clinical
trials, trials that test drugs in humans, form the basis for the evidence used in the
practice of medicine (Wyatt 1991) and trials that fail to demonstrate effectiveness or
that raise significant safety concerns can dramatically affect the sale of products.
Witness what happened following the July 2002 publication of the results of the
Women’s Health Initiative trial that found that the estrogen/progestin combination
caused an increased risk of cardiovascular disease and breast cancer in postmen-
opausal women (Writing Group for the Women’s Health Initiative Investigators
2004). By June 2003 prescriptions for Prempro®, the most widely sold estrogen/
progestin combination, had declined by 66% in the United States (US) (Hersh et al.
2004) and sales of estrogen replacement therapy were off by a third in Ontario
(Austin et al. 2003).
In the US pharmaceutical and biotechnology firms contributed almost half of the
$94 billion spent on biomedical research in 2004 with the bulk of industry spending
going towards clinical research, that is research aimed at testing medications in
humans (Moses et al. 2005). (While the pharmaceutical industry pays for the great
majority of the clinical studies on drugs, 84% of the funding for the basic research
that produces these drugs comes from the public sector and only 12% from
companies (Light 2006)). In Norway most of the clinical trials approved by five
regional medical research ethics committees were conducted by industry (Hole et al.
2001). Over the period 1994–2003 the vast majority of the most cited randomized
controlled trials (RCT) received funding from industry, and the proportion increased
significantly over time. Eighteen of the 32 most cited trials in the medical literature
that were published after 1999 were funded by industry alone (Patsopoulos et al.).
Given that pharmaceutical companies fund most clinical research and how
critical the results of trials are to industry, there is a strong financial pressure to
ensure that results from the research are favourable to the product being studied. In
2003 Lexchin and colleagues found that industry funded research was 4 times more
likely to produce positive outcomes compared to research with any other source of
sponsorship (Lexchin et al. 2003). Their results applied across a wide range of
disease states, drugs and drug classes and were consistent over a period spanning at
least two decades. Since that publication there have been multiple others looking at
the same issue using different methodologies and examining different classes of
drugs. A recent qualitative systematic review examined the evidence subsequent to
the Lexchin article and found 17 additional articles that supported this conclusion
with only 2 dissenting (Sismondo 2008b).
The knowledge that pharmaceutical companies try to ensure that research
generates the outcomes that are commercially favourable to them is becoming more
widespread both among the medical profession and the general public. Beyond
recognizing that this phenomenon exists, it is important to understand specifically
the various ways that bias can be introduced. This knowledge can help ensure that
the medical literature is properly interpreted by alerting people to biases in what
they read and also by pointing the way to how the system of generating clinical
knowledge can be reformed. Other authors have summarized the literature on this
topic but those articles are dated (Bero and Rennie 1996) or draw examples from
trials on just one particular class of drugs (Safer 2002). Sismondo’s article (2008a)
is more recent but his focus is on general mechanisms that are used. The additional
value that this article brings is that it uses published concrete examples of
commonly used techniques. Providing concrete examples has the effect of taking
the discussion out of the theoretical and grounding it in reality, thereby hopefully
increasing the understanding that there are real consequences to what the
pharmaceutical industry is doing. In addition, the many examples cited here help
to counter the argument that instances of industry malfeasance are relatively
innocuous and infrequent (Stossel 2005).
After first looking at one commonly cited explanation for the positive outcome of
industry studies, I turn to an examination of issues related to the design of trials, the
difference in how positive and negative trials are published, how data is
reinterpreted between the time it is submitted to regulatory agencies and when it
is published, the discordance between results and conclusions, how conflict-of-
interest on the part of investigators affects conclusions, the effect of ghostwriting on
how trials are presented and the use of seeding trials.
The second main contribution that this article makes to the topic of industry-
induced bias is the discussion in the penultimate section of how to combat this bias.
Here I critically examine the most commonly suggested remedies and then go on to
consider more radical proposals.
Better Power and Use of Preliminary Data Do Not Explain Why Industry
Trials Turn Out Better
One major defense offered as to why industry funded trials are more likely to be
positive is that drug companies have the resources to mount trials with large
numbers of patients that are powered to find statistically significant differences.
Another, as articulated by Fries and Krishnan (2004), is that extensive use of
preliminary data allows industry to design studies with a high likelihood of being
positive. Of course, large numbers and preliminary data may mask the other biases
(to be discussed below) that are the real predictors of a positive trial. Moreover, the
former argument raises the question of whether a statistical difference translates into
a clinical difference while the latter ignores the issue of whether laboratory and
animal data is good enough to predict the performance of new drugs in humans.
In this regard, the success rate for new drugs entering clinical trials is only 1 in 5
(DiMasi et al. 2010).
Leaving aside these questions, these explanations are still inadequate to explain
the superiority associated with industry sponsorship in situations where head-to-
head trials of two different medications have different outcomes depending on
which company is sponsoring the trial. In an examination of trials of second
generation drugs used in the treatment of diseases such as schizophrenia (usually
referred to as ‘‘atypical antipsychotics’’), Heres et al. (2006) looked at different
trials that examined the effectiveness of the same two drugs. They found that
different trials led to contradictory overall conclusions, depending on who
sponsored the study. For example, there were 9 studies comparing olanzapine and
risperidone. Five of these were sponsored by Eli Lilly, makers of olanzapine, and all
favoured that drug, whereas 3 of the 4 sponsored by Janssen, makers of risperidone,
favoured that medicine. Similarly, RCTs of head-to-head comparisons of statins
were more likely to report results and conclusions favouring the sponsor’s product
compared to the comparator drug (Bero et al. 2007). While the funding and
outcomes of meta-analyses and pharmaco-economic (cost:benefit) studies are
outside the scope of this article, both also show the same relationship with funding
as clinical trials do (Bell et al. 2006; Franco et al. 2005; Hartmann et al. 2003;
Miners et al. 2005) and explanations related to trial size and power are not
applicable in either case.
Explanations for Bias
There are likely to be multiple reasons for the superior outcomes of industry-funded
research, but lower quality methodology (that is, poorer design) as it is usually
measured has not been a consistent finding. While some studies do report
that industry funding is associated with poorer methodology scores (Jørgensen et al.
2008; Montgomery et al. 2004) others have reported that there is no difference in
methodologic quality between industry and non-industry funded research or even
that industry sponsorship is associated with higher quality (Heres et al. 2006;
Lexchin et al. 2003; Perlis et al. 2005a).
One reason why industry funded trials may appear to be of high methodologic
quality is the way that quality is typically determined, e.g., calculating the Jadad
score that incorporates items such as method of randomization, blinding and
accounting for dropouts and withdrawals. While trials may score well on the Jadad
(or other) scale, exclusive use of the items in these scales may not be sensitive
enough to pick up other more subtle forms of bias such as the ones explored below.
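The Jadad scale mentioned above can be sketched as a simple scoring function. This is an illustrative implementation of the published 0–5 instrument, not code from the article; the function and argument names are my own:

```python
# Illustrative sketch of the Jadad scale (0-5 points) used to rate the
# reported quality of randomized trials.

def jadad_score(randomized, random_method_appropriate,
                double_blind, blinding_method_appropriate,
                withdrawals_described):
    """Return a Jadad quality score (0-5) for a reported trial."""
    score = 0
    if randomized:
        score += 1                        # trial described as randomized
        if random_method_appropriate is True:
            score += 1                    # e.g., computer-generated sequence
        elif random_method_appropriate is False:
            score -= 1                    # inappropriate method (e.g., alternation)
    if double_blind:
        score += 1                        # trial described as double-blind
        if blinding_method_appropriate is True:
            score += 1                    # e.g., identical-appearing placebo
        elif blinding_method_appropriate is False:
            score -= 1                    # inappropriate blinding method
    if withdrawals_described:
        score += 1                        # dropouts and withdrawals accounted for
    return max(score, 0)

print(jadad_score(True, True, True, True, True))  # 5
```

As the sketch makes plain, the scale only rewards reporting of randomization, blinding and withdrawals; a trial can score the maximum 5/5 while still using a biased comparator dose or being selectively published, which is the point made in the text.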
Inappropriate Choice of Doses, Dosing Intervals and Comparators
In head-to-head trials, companies can use low doses of a comparator agent to make
their drug seem more effective or use high doses of the comparator to make their
drug appear to have fewer side effects. Using unequal doses violates the scientific
principle of equipoise, the principle that it is only ethical to enroll patients in clinical
trials when there is substantial uncertainty as to which of the trial treatments would
most likely benefit them (Djulbegovic et al. 2003).
Safer (2002) notes that companies comparing their new atypical antipsychotic
medications to the older drug haloperidol have frequently used a fixed high dose of
haloperidol to virtually ensure that their product will have fewer extrapyramidal side
effects (side effects involving involuntary body movements). In head to head trials,
dose ranges of clozapine and olanzapine, both atypical antipsychotics, often are too
strictly limited resulting in low mean daily doses and the conclusion that they are
not as efficacious as products made by the sponsoring company (Heres et al. 2006).
Commercially sponsored studies comparing two antidepressant drugs often schedule
an unusually rapid and substantial dose increase in the one not manufactured by the
sponsoring company (Safer 2002), resulting in that product appearing to have more side effects.
In 13 comparative trials of the antifungal fluconazole versus amphotericin B in
cancer patients with low white blood cell counts who were therefore susceptible to
fungal infections, nearly 80% of patients were given the poorly absorbed oral
formulation of amphotericin B instead of the intravenous form. Three antifungal
trials in the same type of patients grouped amphotericin B with nystatin thereby
creating a bias in favour of fluconazole as nystatin is ineffective in this group of
patients (Johansen and Gøtzsche 1999). Not only does the conduct of these trials
lead to misleading information but they are probably unethical in so far as they have
the potential to expose patients to harm or to prolong suffering because of a lack of
benefit from inappropriate doses.
Publication Bias
Evidence of significant biasing of the published literature is widespread and
systematic. The fact that drug companies keep unfavourable results from being
published and publish favourable ones more prominently and often in multiple
publications has been increasingly coming to light. The net result of this practice is
to introduce a bias into the assessment of the effectiveness of a product.
Melander and colleagues compared published versions of trials for 5 selective
serotonin reuptake inhibitors (SSRI) antidepressants with the versions of these
studies submitted to the Swedish regulatory authority in order to get marketing
approval (Melander et al. 2003). They demonstrated that studies showing positive
effects from these drugs were published as stand alone publications more often than
studies with non-significant results; many publications used a form of statistical
analysis more likely to yield favourable results (per protocol analyses versus
intention to treat analyses); and 21 studies, out of 42, contributed to at least two
publications each, and three studies contributed to five publications each. The latter
point echoes what Gøtzsche (1989) and Huston and Moher (1996) found for
publications about nonsteroidal anti-inflammatories (NSAIDs) and risperidone,
respectively—favourable trials are frequently published more than once. Inclusion
of duplicate data in a meta-analysis of ondansetron led to a 23% overestimate of its
antiemetic efficacy (Tramèr et al. 1997). Finally, Spielmans et al. (2010) found that
6 clinical trials on duloxetine had their data utilized as part of 20 or more separately
published pooled analyses. The vast majority of the analyses had at least one author
employed by the manufacturer.
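The per-protocol versus intention-to-treat point can be made concrete with a toy calculation (hypothetical numbers, not data from the Melander study). Dropouts often leave a trial because the drug is not working or is causing harm, so silently excluding them inflates the apparent response rate:

```python
# Hypothetical trial arm: 100 patients randomized, 80 complete the
# protocol, 50 of the completers respond to treatment.
randomized = 100
completers = 80
responders_among_completers = 50

# Intention-to-treat (ITT): analyze everyone as randomized;
# the 20 dropouts count as non-responders.
itt_response = responders_among_completers / randomized    # 0.50

# Per-protocol (PP): exclude the 20 patients who left the trial.
pp_response = responders_among_completers / completers     # 0.625

print(f"ITT response rate: {itt_response:.1%}")            # 50.0%
print(f"Per-protocol response rate: {pp_response:.1%}")    # 62.5%
```

The same underlying data yield a noticeably more favourable result under the per-protocol analysis, which is why its selective use in publications biases the literature.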
Although the SSRI class of antidepressants was never approved for the treatment
of depression in children or adolescents, drugs in this class were frequently
prescribed off-label to these groups of patients. A meta-analysis of the published
literature indicated that there was a favourable benefit:harm profile for some SSRIs.
However, the equation changed when the unpublished studies were added into the
meta-analysis. When all of the studies, published and unpublished, were combined
the conclusion was that, except for Prozac® (fluoxetine), the risks could outweigh
the benefits (Whittington et al. 2004).
Reinterpreting Data Submitted to Regulatory Agencies
Out of 74 studies registered with the Food and Drug Administration (FDA) dealing
with 12 antidepressants ‘‘a total of 37 studies viewed by the FDA as having positive
results were published; 1 study viewed as positive was not published.’’ An
additional 11 studies that produced either negative or questionable results were
published in a way that conveyed a positive outcome. According to the published
literature, it appeared that 94% of the trials conducted were positive. By contrast,
the FDA analysis showed that 51% were positive (Turner et al. 2008). Just as
disturbing as this selective publication was the fact that for all 12 drugs the effect
size in the published trials was greater than the effect size reported to the FDA by a
mean of 32%, meaning that the drugs appeared much more effective to clinicians
reading the medical literature than they likely were. Wyeth attempted to dismiss its
failure to publish two negative Effexor® (venlafaxine) studies by claiming that they
were ‘failed studies’ instead of studies showing that the drug didn’t work (Ninan
et al. 2008).
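The gap between the published and regulatory pictures in the Turner study can be reproduced from the counts above. (The count of 3 trials published accurately as negative is not stated in this article; it is inferred here from the reported 94% figure and Turner et al.'s totals.)

```python
# Counts from Turner et al. (2008): 74 FDA-registered antidepressant trials.
fda_positive = 38                  # 37 published positive + 1 unpublished positive
total_registered = 74

published_as_positive = 37 + 11    # positive trials + negative trials conveyed as positive
published_as_negative = 3          # inferred: 48 apparent positives / 0.94 ≈ 51 published

apparent_rate = published_as_positive / (published_as_positive + published_as_negative)
fda_rate = fda_positive / total_registered

print(f"Apparent success rate in journals: {apparent_rate:.0%}")  # 94%
print(f"Success rate in FDA records: {fda_rate:.0%}")             # 51%
```

A clinician reading only the journals would thus see antidepressant trials succeeding 94% of the time when the regulatory record shows roughly half that.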
What Turner and colleagues found about antidepressant studies is also what
occurs more generally. Another study looked at 164 efficacy trials submitted to the
FDA in the 2001–2002 period in support of 33 new drug applications (NDA). Many
trials were not published 5 years after FDA approval of the drug and those that were
published were much more likely to have positive results. In the 164 trials there
were 43 primary outcomes that did not favour the drug and 20 of these 43 were
excluded in the publications. The statistical significance of 5 of the remaining 23
outcomes was changed in the published literature and 4 of the 5 changes favoured
the drug in question. In total there were 99 conclusions that were present in both the
NDA and publications. Nine of these were changed from the former (NDA) to the
latter (publications) and all favoured the companies’ products (Rising et al. 2008).
Discordance Between Results and Conclusions
Although the results that are reported in clinical trials may be accurate, authors may
distort the meaning of the results and present conclusions that are more favorable
than are warranted from the data. In head-to-head trials of NSAIDs, all paid for by
pharmaceutical companies, 86% (19/22) concluded that the drug made by the
sponsor of the trial was less toxic than the comparator. However, this conclusion
was only justified by the data presented in 12 of the trials (Rochon et al. 1994).
The results of industry-supported meta-analyses also appear to be ‘‘spun’’ to yield
favourable conclusions. Yank et al. analyzed meta-analyses of antihypertensives
looking at clinical outcomes in adults. Although financial ties with industry were not
associated with favourable results the same was not true of the conclusions that
these meta-analyses reached. Even when controlling for other characteristics of the
meta-analyses, the only factor associated with positive conclusions was if there was
a relationship to industry (Yank et al. 2007). Finally, findings about safety
information are also subject to misinterpretation. If studies of inhaled corticoste-
roids found a statistically significant increase in adverse effects associated with the
study drug, the authors of industry-funded trials were still more likely to conclude
that the medication was ‘‘safe’’ than were authors of trials without industry funding
(Nieto et al. 2007).
Conflict-of-Interest and Conclusions
Conflict-of-interest (COI) was not explicitly looked at as a factor in distorting results
in the studies discussed so far, but other research has shown that when authors have
COI they are more likely to make a favourable recommendation about a product. As
Sismondo (2008a) points out, COI probably does not
operate on a conscious level but rather the act of accepting funding from a
pharmaceutical company creates a gift relationship between the investigator and the
sponsor wherein the person receiving the ‘‘gift’’ feels an obligation to repay the
giver: gifts create in ‘‘the recipient a sense of indebtedness. The obligation to directly reciprocate, whether
or not the recipient is conscious of it, tends to influence behavior’’ (Katz et al. 2003).
In this light, researchers need not have any material interest in the outcome of their
research but subconsciously they create conditions that yield the results most
favourable to the company providing the resources to undertake the study.
Kjaergard and Als-Nielsen (2002) looked at all original clinical RCTs published
in the BMJ between 1997 and mid-June 2001. In those publications where authors
declared a financial COI, the conclusions that they reached were significantly more
likely to be positive towards the experimental intervention than if COI was not
present. The association between financial COI and authors’ conclusions was not
explained by methodological quality, statistical power, type of experimental
intervention, type of control intervention or medical specialty. These results from
articles in the BMJ were replicated using material from the New England Journal of
Medicine and JAMA. Once more, there was a strong association between those
studies whose authors had COI and positive findings and that association persisted
after controlling for sample size, study design, and country of primary authors
(Friedman and Richter 2004).
What applies to RCTs in general also applies to individual clinical areas.
Compared to studies where authors did not report a COI, RCTs in dermatology
where there was a COI were significantly more likely to report a positive outcome
(Perlis et al. 2005a). Once statistical adjustment was made for three factors: industry
funding, the Jadad score and the number of participants in the trial, the relationship
between COI and positive outcomes was no longer significant. In pharmaceutical
company funded RCTs comparing psychiatric drugs to placebo the chance that the
study would report a positive outcome was 8.4 times greater if one of the authors
had a COI. In the absence of industry funding there was no association between
author COI and positive outcomes (Perlis et al. 2005b). Finally, amongst
randomized oncology trials that looked at overall survival, those with COI were
more likely to have positive findings (Jagsi et al. 2009).
Ghostwriting
Ghostwriters are men and women specifically recruited to take data from clinical
trials and write an article with a ‘‘spin’’ favourable to the drug. The company
making the drug, or someone working on its behalf, then recruits a well-known
academic or doctor to sign the write-up and masquerade as the author. When the
article eventually appears in print there is no acknowledgement of the role played by
the ghostwriter in its production.
There are multiple anecdotal reports of ghostwriting (Dunbar and Tallman 2009;
Fugh-Berman 2005) and the evidence points to this practice being widespread and
systematic (Ross et al. 2008).
In the course of legal proceedings, a document outlining the involvement of the
medical information company Current Medical Directions (CMD) in the preparation
of 85 studies about Zoloft® (sertraline), an SSRI made by Pfizer, was made available
to Healy and Cattell (Healy and Cattell 2003). There were a number of manuscripts
that the document suggested originated within communication agencies, with the
first draft of articles already written and the authors’ names listed as ‘to be
determined’. Using a search of the medical literature Healy and Cattell looked for
evidence of publication of these papers. Out of 55 publications that they found, only
13 did not appear to have had any involvement with CMD. When articles with CMD
involvement were compared to ones on Zoloft that were independently produced,
the former were cited more often and published in more prestigious medical
journals. Since these articles were almost uniformly favourable to Zoloft there is a
high likelihood that the overall literature regarding this drug was biased towards
presenting a more positive view of the drug than was warranted.
Court documents that became public revealed how Merck used ghostwriting to
ensure that articles about Vioxx® (rofecoxib) would present a positive picture about
the safety and effectiveness of the drug. ‘‘Merck employees work[ed] either
independently or in collaboration with medical publishing companies to prepare
manuscripts and subsequently [recruit] external, academically affiliated investiga-
tors to be authors. Recruited authors were frequently placed in the first and second
positions of the authorship list. For the publication of scientific review papers,
documents were found describing Merck marketing employees developing plans for
manuscripts, contracting with medical publishing companies to ghostwrite manu-
scripts, and recruiting external, academically affiliated investigators to be authors.
Recruited authors were commonly the sole author on the manuscript and offered
honoraria for their participation. Among 96 relevant published articles … 92% (22
of 24) of clinical trial articles published a disclosure of Merck’s financial support,
but only 50% (36 of 72) of review articles published either a disclosure of Merck
sponsorship or a disclosure of whether the author had received any financial
compensation from the company’’ (Ross et al. 2008).
Ghostwriting is not only used to ensure that clinical trials report an outcome
favourable to the sponsoring pharmaceutical company but is also utilized to sow
doubt about unfavourable research. The Heart and Estrogen/progestin Replacement
Study (HERS) trial found that hormone therapy offered no benefit in preventing
cardiovascular events in women with cardiovascular disease (Hulley et al. 1998).
After the publication of this trial Wyeth commissioned ghost written articles
questioning the results and maintaining that hormone therapy had a protective effect.
Seeding Trials
Finally, the increasing use of postmarketing studies, studies done after a drug is
already on the market (Dembner 2002), offers another avenue for introducing bias
into the research process. Doctors who participate in clinical trials involving
medicines are known to increase their use of trial drugs (Andersen et al. 2006).
Companies take advantage of this knowledge to sponsor these trials, referred to as
‘‘seeding’’ trials, which have the sole purpose of getting doctors to start to use a
product with the aim of establishing the drug as a regular part of the doctor’s
prescribing. One executive at a contract research organization, a company hired by a
pharmaceutical firm to conduct clinical trials, is quoted as saying that ‘‘We have
been approached by several pharmaceutical manufacturers to conduct ‘seed’ studies.
These studies are usually intended to increase the use of the manufacturer’s product
and sometimes lack scientific integrity …. The intent is to influence physician and
patient behavior’’ (Psaty and Rennie 2006).
The most widely publicized seeding trial has been the ADVANTAGE study
undertaken by Merck to promote the use of Vioxx®. Based on an analysis of Merck
internal and external correspondence, reports, and presentations Hill and colleagues
showed that ‘‘the trial was designed by Merck’s marketing division to fulfill a
marketing objective; Merck’s marketing division handled both the scientific and the
marketing data, including collection, analysis, and dissemination; and Merck hid the
marketing nature of the trial from participants, physician investigators, and
institutional review board members’’ (Hill et al. 2008).
Protecting the Public Interest
The Reformist Package
Thus far the task of separating pharmaceutical companies from control of the data
originating out of clinical trials has focused on more stringent rules regarding COI,
the use of clinical registries and giving more responsibility to the researchers who
actually carry out the trials. Declaration of COI in medical journals has been
progressively tightened (Drazen et al. 2010) but as recently as 2004 even in journals
with detailed COI policies 8% of articles had relevant author COI that was not
reported to readers (Goozner 2004). Furthermore, author policies are variable,
depending on the type of manuscript submitted and information collected is often
not published (Cooper et al. 2006). Surveys of academic health science institutions
in Canada and the US have shown considerable variation and laxity with regards to
regulations regarding COI for individual investigators (Lexchin et al. 2008; van
McCrary et al. 2000) and at the institutional level (Rochon et al. 2010; Campbell
et al. 2007). While there may have been improvements since these surveys were
undertaken, the latest update of the annual COI survey of US medical schools
undertaken by the American Medical Student Association gave only 9 of 149
institutions an A with 35 receiving a grade of F (AMSA PharmFree Scorecard 2009,
2009). One reflection of the overall weakness of COI policies is that only 13 of the
top 50 US academic institutions expressly prohibit ghostwriting while 26 had no
published policies on the subject (Lacasse and Leo 2010).
The initial idea behind clinical trial registries was to create a public database that
provides basic information about clinical trial protocols in an effort to identify the
existence of trials that pharmaceutical companies (and others) choose not to submit
for publication (or that are rejected for publication). The first large scale registry
came out of the FDA Modernization Act, section 113, that led to the establishment
of ClinicalTrials.gov in 2000 by the National Library of Medicine on behalf of the
National Institutes of Health (NIH) (Zarin et al. 2005). In September 2005, the
International Committee of Medical Journal Editors, with membership from many of
the world’s leading journals, announced that registration of trials was a prerequisite
for publication. This announcement led to a significant increase in registration
although entries of industry sponsored trials varied markedly in their degree of
specificity and for some fields, e.g., primary outcome, many left the field blank (Zarin
et al. 2005). Trials funded solely by industry were also significantly less likely to
identify the individual responsible for scientific leadership and to provide a contact
email address than were trials with non-industry or partial industry sponsorship
(Sekeres et al. 2008). Crucially, a study of oncology drugs revealed that registration
before publication did not appear to reduce a bias towards results and conclusions
favouring new drugs in the clinical trials literature (Rasmussen et al. 2009).
Further FDA legislation now requires that, effective September 2008, the results
of all clinical trials of drugs except phase I drug trials (preliminary safety studies on
new products) must be reported on ClinicalTrials.gov within a year of the
completion of the trial. Starting in September 2009 this reporting requirement was
extended to adverse effects observed in the trials. These requirements apply to all
drug trials that were ongoing on or after September 27, 2007 if the products under
study have already been approved by the FDA. The results for drugs not yet
approved by the FDA do not have to be posted until the drug receives approval.
These reporting requirements are backed up by substantial fines for those who fail to
comply. However, the results of trials of drugs approved before September 2007 do
not need to be posted and these constitute the vast majority of all drugs in use.
Furthermore, there is no requirement to post results for drugs that were never
approved (Wood 2009).
Bero and Rennie (1996) were early advocates for a clinical registry to reduce bias
in publications and in addition proposed measures to reduce bias in study design and
in the conduct of drug studies. To achieve the former they suggested that
pharmaceutical companies ‘‘should support investigator-initiated research that
focuses on questions that are shaped by broad scientific interests rather than narrow
commercial’’ ones. They also advocated reform of the drug approval process to
require data comparing new drugs with available alternatives. In an effort to reduce
bias in study design, they called for researchers to think carefully about possible
design biases that would favour the sponsor’s product. They felt that bias could be
reduced in the way that studies are conducted if the industry left the planning and
monitoring of the research design entirely in the hands of the researchers.
In a commentary on the issue of biomedical COI, Schafer (2004) looked at these
proposals, which he termed the ‘‘reformist package’’. While he accepted that the
measures encompassed by this term would, if rigorously implemented, ‘‘almost
certainly improve the quality and scientific integrity of published biomedical
research’’ he was not at all optimistic about this package being realized in any
meaningful way. In particular, he felt that the recommendations from Bero and
Rennie required an unshakable optimism in the willingness of drug companies to act
in opposition to the best interests of their shareholders.
The Sequestration Thesis
As an alternative to the reformist package Schafer proposed what he called the
‘‘sequestration thesis’’ or the separation of researchers from the process of
commercialization that would include the complete isolation of industry from
clinical trial data (Schafer 2004). There are what I term ‘‘weak’’ and ‘‘strong’’
variations to this thesis. The weak model is exemplified by the proposal from
Finkelstein and Temin (2008). Although they are primarily concerned with drug
prices, they suggest the creation of an independent, public, nonprofit Drug
Development Corporation (DDC) that would act as an intermediary to acquire new
drugs that emerge from private sector R&D and then transfer the rights to sell the
drugs to a different set of firms. In addition to its role in helping to reduce drug
prices, the organization would be mandated to submit ‘‘the results of all basic
scientific studies and clinical trials … for publication in peer-reviewed journals as
soon as patent and other intellectual property considerations permit.’’ The DDC
would also make all negative trials public. Although this function of the DDC would
certainly be helpful in increasing the availability of information it would still leave
the research design and conduct of trials in the hands of the pharmaceutical industry.
The stronger version of this model would see an institution such as the NIH
organize and manage clinical trials and the data that comes out of them with funding
coming from taxes collected from the pharmaceutical industry and/or general tax
revenue (Lewis et al. 2007; Angell 2004). ‘‘Drug companies would no longer
directly compensate scientists for evaluating their own products; instead, scientists
would work for the testing agency’’ (Lewis et al. 2007). In both cases, the authors
argue that the companies should continue to fund a significant portion of the
research agenda ‘‘in order to discourage the wholesale testing of marginal drugs
with little therapeutic value, or candidate medicines with little chance of clinical
adoption’’ (Lewis et al. 2007). While companies would continue to develop and
market their products they would be separated from the process of generating and
interpreting the clinical data about them.
Baker goes even further in arguing for a system whereby all clinical trials would
be publicly financed with the cost of the trials in the US being covered through
lower drug prices under the Medicare drug program and other public health care
programs (Baker 2008).
In an unpublished paper, the British economist Alan Maynard notes, ‘‘Economic
theory predicts that firms will invest in corruption of the evidence base wherever its
benefits exceed its costs. If detection is costly for regulators, corruption of the
evidence base can be expected to be extensive. Investment in biasing the evidence
base, both clinical and economic, in pharmaceuticals is likely to be detailed and
comprehensive, covering all aspects of the appraisal process. Such investment is
making detection difficult and expensive.’’ This article has shown that what Maynard
predicts has in fact occurred: the industry’s investment in shaping the evidence has led
to bias in the content of clinical research at every stage in its production.
Defenders of the pharmaceutical industry have tried to minimize its role in biasing
clinical research by pointing out that the pursuit of profits is not the only motivation
for trying to influence the outcome and use of clinical research and that individuals,
government and medical journals are equally guilty (Hirsch 2009). Hirsch is correct
that bias can come from many sources, but no individual or organization has the resources
and the ability to influence the entire process the way that the pharmaceutical
industry can. In this respect, the industry is in a class of its own.
We can reasonably ask that pharmaceutical companies not break the law in their
pursuit of profits but anything beyond that is not realistic. There is no evidence that
any measures that have been taken so far have stopped the biasing of clinical
research, and it is not clear that they have even slowed down the process. What will
be needed to curb and ultimately stop the bias is a paradigm change in the
relationship between pharmaceutical companies and the conduct and reporting of
clinical research.

References
AMSA PharmFree Scorecard 2009. (2009). Executive summary updated. http://amsascorecard.org/
executive-summary. Accessed 26 Sept 2010.
Andersen, M., Kragstrup, J., & Søndergaard, J. (2006). How conducting a clinical trial affects
physicians’ guideline adherence and drug preferences. JAMA, 295, 2759–2764.
Angell, M. (2004). The truth about the drug companies: How they deceive us and what to do about it.
New York: Random House.
Austin, P. C., Mamdani, M. M., Tu, K., & Jaakkimainen, L. (2003). Prescriptions for estrogen
replacement therapy in Ontario before and after publication of the Women’s Health Initiative Study.
JAMA, 289, 3241–3242.
Baker, D. (2008). The benefits and savings from publicly funded clinical trials of prescription drugs.
International Journal of Health Services, 38, 731–750.
Bell, C. M., Urbach, D. R., Ray, J. G., Bayoumi, A., Rosen, A. B., Greenberg, D., et al. (2006). Bias in
published cost effectiveness studies: Systematic review. BMJ, 332, 699–703.
Bero, L., Oostvogel, F., Bacchetti, P., & Lee, K. (2007). Factors associated with findings of published
trials of drug–drug comparisons: Why some statins appear more efficacious than others. PLoS
Medicine, 4, e184.
Bero, L. A., & Rennie, D. (1996). Influences on the quality of published drug studies. International
Journal of Technology Assessment in Health Care, 12, 209–237.
Campbell, E. G., Weissman, J. S., Ehringhaus, S., Rao, S. R., Moy, B., Feibelmann, S., et al. (2007).
Institutional academic–industry relationships. JAMA, 298, 1779–1786.
Cooper, R. J., Gupta, M., Wilkes, M. S., & Hoffman, J. R. (2006). Conflict of interest disclosure policies
and practices in peer-reviewed biomedical journals. Journal of General Internal Medicine, 21,
Dembner, A. (2002). Report raps drug firms’ post-approval studies. Boston Globe.
DiMasi, J. A., Feldman, L., Seckler, A., & Wilson, A. (2010). Trends in risks associated with new drug
development: Success rates for investigational drugs. Clinical Pharmacology and Therapeutics, 87,
Djulbegovic, B., Cantor, A., & Clarke, M. (2003). The importance of preservation of the ethical principle
of equipoise in the design of clinical trials: Relative impact of the methodological quality domains
on the treatment effect in randomized controlled trials. Accountability in Research, 10, 301–315.
Drazen, J. M., de Leeuw, P. W., Laine, C., Mulrow, C., DeAngelis, C. D., Frizelle, F. A., et al. (2010).
Toward more uniform conflict disclosures—the updated ICMJE conflict of interest reporting forms.
New England Journal of Medicine, 363, 188–189.
Dunbar, C. E., & Tallman, M. S. (2009). ‘Ghostbusting’ at blood. Blood, 113, 502–503.
Finkelstein, S., & Temin, P. (2008). Reasonable Rx: Solving the drug price crisis. Upper Saddle River:
Franco, O. H., Peeters, A., Looman, C. W. N., & Bonneux, L. (2005). Cost effectiveness of statins in
coronary heart disease. Journal of Epidemiology and Community Health, 59, 927–933.
Friedman, L. S., & Richter, E. D. (2004). Relationship between conflicts of interest and research results.
Journal of General Internal Medicine, 19, 51–56.
Fries, J. F., & Krishnan, E. (2004). Equipoise, design bias, and randomized controlled trials: The elusive
ethics of new drug development. Arthritis Research and Therapy, 6, R250–R255.
Fugh-Berman, A. (2005). The corporate coauthor. Journal of General Internal Medicine, 20, 546–548.
Fugh-Berman, A. (2010). The haunting of medical journals: How ghostwriting sold ‘‘HRT’’. PLoS
Medicine, 7(9), e1000335.
Goozner, M. (2004). Unrevealed: Non-disclosure of conflicts of interest in four medical and scientific
journals. Washington, DC: Center for Science in the Public Interest.
Gøtzsche, P. C. (1989). Multiple publication of reports of drug trials. European Journal of Clinical
Pharmacology, 36, 429–432.
Hartmann, M., Knoth, H., Schulz, D., & Knoth, S. (2003). Industry-sponsored economic studies in
oncology vs. studies sponsored by nonprofit organisations. British Journal of Cancer, 89,
Healy, D., & Cattell, D. (2003). Interface between authorship, industry and science in the domain of
therapeutics. British Journal of Psychiatry, 183, 22–27.
Heres, S., Davis, J., Maino, K., Jetzinger, E., Kissling, W., & Leucht, S. (2006). Why olanzapine beats
risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine: An exploratory analysis
of head-to-head comparison studies of second-generation antipsychotics. American Journal of
Psychiatry, 163, 185–194.
Hersh, A. L., Stefanick, M. L., & Stafford, R. S. (2004). National use of postmenopausal hormone
therapy: Annual trends and response to recent evidence. JAMA, 291, 47–53.
Hill, K. P., Ross, J. S., Egilman, D. S., & Krumholz, H. M. (2008). The ADVANTAGE seeding trial: A
review of internal documents. Annals of Internal Medicine, 149, 251–258.
Hirsch, L. J. (2009). Conflicts of interest, authorship, and disclosures in industry-related scientific
publications: The tort bar and editorial oversight of medical journals. Mayo Clinic Proceedings, 84,
Hole, O. P., Winther, F. Ø., & Straume, B. (2001). Clinical research: The influence of the pharmaceutical
industry. European Journal of Clinical Pharmacology, 56, 851–853.
Hulley, S., Grady, D., Bush, T., Furberg, C., Herrington, D., Riggs, B., et al. (1998). Randomized trial of
estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal
women. Heart and Estrogen/progestin Replacement Study (HERS) Research Group. JAMA, 280,
Huston, P., & Moher, D. (1996). Redundancy, disaggregation, and the integrity of medical research.
Lancet, 347, 1024–1026.
Jagsi, R., Sheets, N., Jankovic, A., Motomura, A. R., Amarnath, S., & Ubel, P. A. (2009). Frequency,
nature, effects, and correlates of conflicts of interest in published clinical cancer research. Cancer,
Johansen, H. K., & Gøtzsche, P. C. (1999). Problems in the design and reporting of trials of antifungal
agents encountered during meta-analyses. JAMA, 282, 1752–1759.
Jørgensen, A. W., Maric, K. L., Tendal, B., Faurschou, A., & Gøtzsche, P. C. (2008). Industry-supported
meta-analyses compared with meta-analyses with non-profit or no support: Differences in
methodological quality and conclusions. BMC Medical Research Methodology, 8, 60.
Katz, D., Caplan, A. L., & Merz, J. F. (2003). All gifts large and small: Toward an understanding of the
ethics of pharmaceutical industry gift-giving. American Journal of Bioethics, 3, 39–46.
Kjaergard, L. L., & Als-Nielsen, B. (2002). Association between competing interests and authors’
conclusions: Epidemiological study of randomised clinical trials published in the BMJ. BMJ, 325, 249.
Lacasse, J. R., & Leo, J. (2010). Ghostwriting at elite academic medical centers in the United States.
PLoS Medicine, 7, e1000230.
Lewis, T., Reichman, J., & So, A. (2007). The case for public funding and public oversight of clinical
trials. Economists’ Voice, 4(1), 1–4.
Lexchin, J., Bero, L., Djulbegovic, B., & Clark, O. (2003). Pharmaceutical industry sponsored research:
Evidence for a systematic bias. BMJ, 326, 1167–1170.
Lexchin, J., Sekeres, M., Gold, J., Ferris, L. E., Kalkar, S. R., Wu, W., et al. (2008). National evaluation
of policies on individual financial conflicts of interest in Canadian academic health science centers.
Journal of General Internal Medicine, 23, 1896–1903.
Light, D. W. (2006). Basic research funds to discover important new drugs: Who contributes how much?
In M. A. Burke & A. de Francisco (Eds.), Monitoring financial flows for health research 2005:
Behind the global numbers (pp. 29–43). Geneva: Global Fund for Health Research.
Melander, H., Ahlqvist-Rastad, J., Meijer, G., & Beermann, B. (2003). Evidence b(i)ased medicine—
selective reporting from studies sponsored by pharmaceutical industry: Review of studies in new
drug applications. BMJ, 326, 1171–1173.
Miners, A. H., Garau, M., Fidan, D., & Fischer, A. J. (2005). Comparing estimates of cost effectiveness
submitted to the National Institute for Clinical Excellence (NICE) by different organisations:
Retrospective study. BMJ, 330, 65–69.
Montgomery, J. H., Byerly, M., Carmody, T., Li, B., Miller, D. R., Varghese, F., et al. (2004). An analysis
of the effect of funding source in randomized clinical trials of second generation antipsychotics for
the treatment of schizophrenia. Controlled Clinical Trials, 25, 598–612.
Moses, H. I., Dorsey, E. R., Matheson, D. H. M., & Thier, S. O. (2005). Financial anatomy of biomedical
research. JAMA, 294, 1333–1342.
Nieto, A., Mazon, A., Pamies, R., Linana, J. J., Lanuza, A., Jiménez, F. O., et al. (2007). Adverse effects
of inhaled corticosteroids in funded and nonfunded studies. Archives of Internal Medicine, 167,
Ninan, P. T., Poole, R. M., & Stiles, G. L. (2008). Selective publication of antidepressant trials. New
England Journal of Medicine, 358, 2180.
Patsopoulos, N. A., Ioannidis, J. P. A., & Analatos, A. A. (2006). Origin and funding of the most
frequently cited papers in medicine: Database analysis. BMJ, 332, 1061–1064.
Perlis, C. S., Harwood, M., & Perlis, R. H. (2005a). Extent and impact of industry sponsorship conflicts of
interest in dermatology research. Journal of the American Academy of Dermatology, 52, 967–971.
Perlis, R. H., Perlis, C. S., Wu, Y., Hwang, C., Joseph, M., & Nierenberg, A. A. (2005b). Industry
sponsorship and financial conflict of interest in the reporting of clinical trials in psychiatry.
American Journal of Psychiatry, 162, 1957–1960.
Psaty, B. M., & Rennie, D. (2006). Clinical trial investigators and their prescribing patterns: Another
dimension to the relationship between physician investigators and the pharmaceutical industry.
JAMA, 295, 2787–2789.
Rasmussen, M., Lee, K., & Bero, L. (2009). Association of trial registration with the results and
conclusions of published trials of new oncology drugs. Trials, 10, 116.
Rising, K., Bacchetti, P., & Bero, L. (2008). Reporting bias in drug trials submitted to the Food and Drug
Administration: Review of publication and presentation. PLoS Medicine, 5, e217.
Rochon, P. A., Gurwitz, J. H., Simms, R. W., Fortin, P. R., Felson, D. T., Minaker, K. L., et al. (1994). A
study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of
arthritis. Archives of Internal Medicine, 154, 157–163.
Rochon, P. A., Sekeres, M., Lexchin, J., Moher, D., Wu, W., Kalkar, S. R., et al. (2010). Institutional
financial conflicts of interest policies at Canadian academic health science centres: A national
survey. Open Medicine, 4, E134–E138.
Ross, J. S., Hill, K. P., Egilman, D. S., & Krumholz, H. M. (2008). Guest authorship and ghostwriting in
publications related to rofecoxib: A case study of industry documents from rofecoxib litigation.
JAMA, 299, 1800–1812.
Safer, D. J. (2002). Design and reporting modifications in industry-sponsored comparative psychophar-
macology trials. Journal of Nervous and Mental Disease, 190, 583–592.
Schafer, A. (2004). Biomedical conflicts of interest: A defence of the sequestration thesis—learning from
the cases of Nancy Olivieri and David Healy. Journal of Medical Ethics, 30, 8–24.
Sekeres, M., Gold, J. L., Chan, A.-W., Lexchin, J., Moher, D., Van Laethem, M. L. P., et al. (2008). Poor
reporting of scientific leadership information in clinical trial registers. PLoS One, 3, e1610.
Sismondo, S. (2008a). How pharmaceutical industry funding affects trial outcomes: Causal structures and
responses. Social Science and Medicine, 66, 1909–1914.
Sismondo, S. (2008b). Pharmaceutical company funding and its consequences: A qualitative systematic
review. Contemporary Clinical Trials, 29, 109–113.
Spielmans, G. I., Biehn, T. L., & Sawrey, D. L. (2010). A case study of salami slicing: Pooled analyses of
duloxetine for depression. Psychotherapy and Psychosomatics, 79, 97–106.
Stossel, T. P. (2005). Regulating academic–industrial research relationships—solving problems or stifling
progress? New England Journal of Medicine, 353, 1060–1065.
Tramèr, M. R., Reynolds, D. J. M., Moore, R. A., & McQuay, H. J. (1997). Impact of covert duplicate
publication on meta-analysis: A case study. BMJ, 315, 635–640.
Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication
of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358, 252–260.
van McCrary, S., Anderson, C. B., Jakovljevic, J., Khan, T., McCullough, L. B., Wray, N. P., et al.
(2000). A national survey of policies on disclosure of conflicts of interest in biomedical research.
New England Journal of Medicine, 343, 1621–1626.
Whittington, C. J., Kendall, T., Fonagy, P., Cottrell, D., Cotgrove, A., & Boddington, E. (2004). Selective
serotonin reuptake inhibitors in childhood depression: Systematic review of published versus
unpublished data. Lancet, 363, 1341–1345.
Wood, A. J. J. (2009). Progress and deficiencies in the registration of clinical trials. New England Journal
of Medicine, 360, 824–830.
Writing Group for the Women’s Health Initiative Investigators. (2002). Risks and benefits of estrogen
plus progestin in healthy postmenopausal women: Principal results from the Women’s Health
Initiative randomized controlled trial. JAMA, 288, 321–333.
Wyatt, J. (1991). Use and sources of medical knowledge. Lancet, 338, 1368–1373.
Yank, V., Rennie, D., & Bero, L. A. (2007). Financial ties and concordance between results and
conclusions in meta-analyses: Retrospective cohort study. BMJ, 335, 1202–1205.
Zarin, D. A., Tse, T., & Ide, N. C. (2005). Trial registration at ClinicalTrials.gov between May and
October 2005. New England Journal of Medicine, 353, 2779–2787.