The consequences of retractions for co-authors: scientific fraud and
error in biomedicine.
Philippe Mongeon * and Vincent Larivière**
* philippe.mongeon@umontreal.ca; ** vincent.lariviere@umontreal.ca
École de bibliothéconomie et des sciences de l'information, Université de Montréal, C.P. 6128, Succ. Centre-Ville, Montréal, QC, H3C 3J7 (Canada)
** Observatoire des sciences et des technologies (OST), Centre interuniversitaire de recherche sur la science et la technologie (CIRST), Université du Québec à Montréal, C.P. 8888, Succ. Centre-Ville, Montréal, QC, H3C 3P8 (Canada)
Abstract
In the last decade, major cases of scientific fraud (e.g. Hendrik Schön, Diederik Stapel, Eric
Poehlman and Yoshitaka Fujii) have shocked the scientific community. Such frauds account
for more than half of the publications retracted from the scientific literature, which have
increased tremendously in the past few years. In the biomedical field, fraud can have
consequences not only for the research community, but also for the public. It is a serious
deviance from the norms of science, and it most likely ends the career of researchers who get
caught doing it. However, researchers rarely work alone, and some of the consequences are
presumably shared by their co-authors, although no empirical evidence of this has been
provided so far. To evaluate the nature and extent of these shared consequences, we measured
the productivity, impact and collaboration of authors who retracted papers between 1996 and
2006. We divided authors into groups according to their rank in the retracted papers' author lists and the cause of retraction (fraud or error), and compared the results for each group to
those of a randomly selected control group. We found that retractions do have consequences
for the careers of co-authors, mostly in terms of scientific output, and that these consequences are greater in cases of fraud than of error. Furthermore, first authors are generally affected more strongly by
retractions than the other co-authors of the retracted publications.
Introduction
The number of retractions has skyrocketed in the last few years (Cokol, Ozbay, & Rodriguez-
Esteban, 2008; Steen, 2011), mostly in the biomedical field (Grieneisen & Zhang, 2012), going from 20 retractions a year during the 1990s to more than 500 in both 2012 and 2013.
According to Fang, Steen and Casadevall (2012), scientific fraud (data fabrication, data
falsification and plagiarism) accounts for more than half of those retractions. Previous
research has mostly focused on the rise of retractions (Cokol et al., 2008; Steen, 2011), its causes (Fang et al., 2012; Steen, Casadevall, & Fang, 2013), and the ongoing citation of retracted
papers (Furman, Jensen, & Murray, 2012; A. Neale, Northrup, Dailey, Marks, & Abrams,
2007; A. V. Neale, Dailey, & Abrams, 2010; Pfeifer & Snodgrass, 1990). Others have
investigated and discussed the prevalence of scientific fraud (Fanelli, 2009; Sovacool, 2008;
Steen, 2011), ways to prevent, detect and act upon it (Steneck, 2006), and its potential
consequences for science in general and for the public (Steen, 2012). A few studies have
looked at the consequences of fraud within disciplines (e.g. Azoulay, Furman, Krieger, &
Murray, 2012) and within research teams (e.g. Jin, Jones, Lu, & Uzzi, 2013).
A researcher found guilty of fraud will most likely see his scientific career decline, or even
come to an end. However, researchers rarely work alone, as science is becoming more and
more collaborative (Wuchty, Jones, & Uzzi, 2007), a long-lasting trend observed in
almost all disciplines. Authorship confers symbolic capital as well as responsibility (Biagioli,
1999), but defining who did what and who is responsible for specific parts of the work is
made more complex by this collaborative context (Biagioli, 1998; Cronin, 2001).
Furthermore, the coexistence of these two trends (the increase in retractions and in collaboration) may result in an exponential increase in the number of researchers with a retraction on their record. This highlights the importance of investigating how the consequences of
scientific fraud are shared by co-authors. Indeed, it is assumed that other authors of the
fraudulent article also suffer collateral effects of the retraction (Bonetta, 2006), but no
research has yet provided empirical data giving a complete account of these shared
consequences.
Retractions can occur for different reasons, the most common being fraud or error. While
fraud is a serious deviation from the core values and purpose of science, there is general agreement that honest mistakes are normal in the course of science, and that they
“must be seen not as sources of embarrassment or failure, but rather as opportunities for
learning and improvement” (Nath, Marcus, & Druss, 2006). Therefore, we would expect
retractions for fraud to have more impact on researchers’ careers than retractions for error.
Also, the specific contribution of each author to a given paper is reflected in the order in which authors are listed. In the biomedical field, this distribution is typically U-shaped (Pontille, 2004), meaning that the first and last authors are supposedly those who contributed the most to the work, and thus receive more credit for it. Last authors are also typically tenured senior researchers who manage research laboratories, which puts them in a less precarious position than first authors, who are typically PhD students, postdocs or junior researchers. This is reflected in the results of a study by Jin, Jones, Lu and Uzzi (2013), who showed that fraud has less impact on the future citations of eminent co-authors. We would thus expect the effect of a retraction to vary according to the researcher's rank in the author list of the retracted paper.
In this study, we measured the pre- and post-retraction productivity, scientific impact and
collaboration of all the co-authors of papers retracted in PubMed between 1996 and 2006, in
order to answer the following questions: Do retractions have an impact on co-authors in terms of productivity, scientific impact, and collaboration? If so, how does this impact vary according to the cause of the retraction (fraud vs. error) and to the author's rank in the retracted paper's author list?
Methods
Retractions sample
We used PubMed to gather all publications that were retracted between 1996 and 2006, which
were then found in the Web of Science for further analysis, keeping only those published in
biomedical and clinical medicine journals (n = 443). Using data from Azoulay et al. (2012), we identified the articles retracted for fraud (n = 179) or error (n = 114), co-authored by a total of 1,098 researchers.
We then created a control group by randomly selecting, for each of the 443 articles retracted
between 1996 and 2006, a non-retracted article with the same number of authors, published in
the same issue of the same journal. This provided us with a list of 1,862 distinct authors.
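As an illustration only, the matching rule could be implemented along the following lines in Python/pandas; the DataFrame layout and the `journal`, `volume`, `issue`, `n_authors` and `retracted` columns are hypothetical, and this is a sketch rather than the authors' actual procedure.

```python
import numpy as np
import pandas as pd

def build_control_group(retracted: pd.DataFrame, wos: pd.DataFrame,
                        seed: int = 1) -> pd.DataFrame:
    """Match each retracted article to one randomly drawn non-retracted
    article from the same issue of the same journal and with the same
    number of authors (column names are illustrative)."""
    rng = np.random.RandomState(seed)
    candidates = wos[~wos["retracted"]]
    controls = []
    for _, art in retracted.iterrows():
        pool = candidates[
            (candidates["journal"] == art["journal"])
            & (candidates["volume"] == art["volume"])
            & (candidates["issue"] == art["issue"])
            & (candidates["n_authors"] == art["n_authors"])
        ]
        if not pool.empty:
            # draw one control article at random from the matching pool
            controls.append(pool.sample(n=1, random_state=rng))
    return pd.concat(controls, ignore_index=True)
```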
Co-authors sample
Using data from Azoulay et al. (2012) or the retraction notices themselves, we found 79 authors
who were identified as responsible for 159 of the 179 fraud cases. The 66 distinct authors of
the remaining 20 fraud cases were excluded from the sample in order to ensure that no
fraudulent researchers remained. We also excluded from our sample 3 authors who were
identified as responsible for 5 cases of error.
Finally, we divided the authors into three groups (first, middle, and last authors) according to their rank in the author lists of the retracted papers. Table 1 shows the distribution of authors
within each group.
Table 1. Sample of authors.

                   Fraud    Error   Control    Total
First authors         45      108       411      564
Middle authors       346      366     1,046    1,758
Last authors          77      102       405      584
Total                468      576     1,862    2,906
For all remaining authors, we searched the WoS for all articles, reviews and notes published
in the five years preceding and following the retraction. For each paper found, the publication
year was normalized by time to retraction (T). For authors with multiple retractions in different years, we gathered papers from 5 years before the first retraction to 5 years after the last one; in those cases, T = 0 for all years between the first and the last retraction, inclusively.
After author name disambiguation, we obtained a total of 15,333 distinct articles for the fraud
and error groups, and 55,036 distinct articles for the control group.
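The normalization convention can be made concrete with a small sketch (not the authors' code); the first and last retraction years are assumptions passed in by the caller.

```python
def time_to_retraction(pub_year: int, first_retraction: int, last_retraction: int) -> int:
    """Map a publication year to time-to-retraction T.

    Years between the first and last retraction (inclusive) get T = 0;
    earlier years get negative values and later years positive ones."""
    if pub_year < first_retraction:
        return pub_year - first_retraction   # e.g. -5 ... -1
    if pub_year > last_retraction:
        return pub_year - last_retraction    # e.g. +1 ... +5
    return 0

# A paper from 1999 by an author whose only retraction occurred in 2001: T = -2
assert time_to_retraction(1999, 2001, 2001) == -2
# With retractions in 2000 and 2002, a 2001 paper falls in the T = 0 window
assert time_to_retraction(2001, 2000, 2002) == 0
```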
Indicators
To measure the effect of retraction on the output of researchers, we used individual relative productivity (IRP), calculated for each year by dividing the number of publications in that year by the total number of publications over the ten-year period. We used the average relative citations (ARC) to measure scientific impact. Two other indicators were used to assess scientific impact: the number of highly cited papers (top 5% of the discipline) and the number of papers published in top journals (top 5% of the discipline). Finally, collaboration
was assessed using the average number of authors, institutions and countries on the
researchers’ publications, all normalized by discipline.
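As an illustration, the first two indicators could be computed from a paper-level table roughly as follows; the `author_id`, `T` and `rel_citations` column names are assumptions, and the discipline normalization of citations is taken as precomputed.

```python
import pandas as pd

def individual_relative_productivity(papers: pd.DataFrame) -> pd.Series:
    """IRP: for each author and each year T, the share of that author's
    publications over the ten-year window that appeared in year T."""
    counts = papers.groupby(["author_id", "T"]).size()
    totals = papers.groupby("author_id").size()
    return counts.div(totals, level="author_id")

def average_relative_citations(papers: pd.DataFrame) -> pd.Series:
    """ARC: mean of field-normalized citation scores per author
    (`rel_citations` is assumed to be citations divided by the average
    citations of papers in the same discipline and year)."""
    return papers.groupby("author_id")["rel_citations"].mean()
```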
Results
Scientific output
Figure 1 shows that retractions cause an important decrease in scientific output for all co-authors, regardless of the reason for the retraction. For first and last authors, fraud seems to have more impact than error, which is not the case for middle authors. First authors who retracted a paper for fraud seem to suffer a much larger decline in scientific output than middle and last authors who retracted papers for the same reason. Furthermore, for all groups except last authors with a retraction for error, the differences in median output between the pre- and post-retraction periods were found, using a Mann-Whitney U test, to be significantly different from the differences observed for the control groups (P < 0.05).
Figure 1. Median individual relative publications from five years prior to five years after the retraction.
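As an illustration, the significance test reported above could be run as follows; this sketch assumes one pre/post difference per author and uses SciPy's Mann-Whitney U implementation, which may differ in details (such as tie handling) from the software the authors used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_to_control(treated_diff: np.ndarray, control_diff: np.ndarray) -> float:
    """Return the p-value of a two-sided Mann-Whitney U test comparing
    per-author pre/post-retraction differences in output between a
    treated group (fraud or error) and its control group."""
    _, p_value = mannwhitneyu(treated_diff, control_diff, alternative="two-sided")
    return p_value

# Toy example with made-up differences in median individual relative productivity
p = compare_to_control(np.array([-0.40, -0.35, -0.50, -0.20]),
                       np.array([-0.05, 0.00, -0.10, 0.05]))
```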
Scientific impact
Table 2 shows the variation observed between the pre- and post-retraction periods for the three indicators of scientific impact. Since authors must have published in both periods for their impact to be compared, those who had no publications in either the pre- or the post-retraction period were excluded from this part of the analysis. The number of authors in the resulting sub-sample is indicated in Table 2. Also, since many authors do not publish top papers, the 3rd quartile (rather than the median) is used for that indicator.
Table 2. Difference between pre- and post-retraction average relative citations, proportion of
top papers and publications in top journals.
Notes: P-values shown are the results of a Mann-Whitney U test comparing the fraud and error groups with the control groups. * P < 0.1; ** P < 0.05; *** P < 0.01.
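A minimal sketch of the summary statistics underlying Table 2 is given below; the per-author arrays are hypothetical, and the only point illustrated is the switch from the median to the 3rd quartile for the top-paper indicator.

```python
import numpy as np

def pre_post_shift(pre: np.ndarray, post: np.ndarray,
                   use_third_quartile: bool = False) -> float:
    """Change in an impact indicator between the pre- and post-retraction periods.

    The median is used by default; the 3rd quartile (75th percentile) is used
    for the top-paper indicator, since many authors publish no top papers."""
    q = 75 if use_third_quartile else 50
    return float(np.percentile(post, q) - np.percentile(pre, q))
```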
We see, in Table 2, that for first and last authors, the differences observed between the fraud or error groups and the control groups are not statistically significant. This may be due to the small size of these sub-samples. However, for the larger sub-sample of middle authors who retracted
for fraud, the decreases observed for all three measures of impact are significantly larger than the decreases observed for the control groups.
Interestingly, for first, middle and last authors, retractions for error seem to have a positive impact on average relative citations and the proportion of top papers, in comparison with the control group. However, this is only statistically significant in the case of middle authors. This result may be linked to the findings of Lu, Jin, Uzzi, & Jones (2013), who showed that self-reported retractions (most likely errors) led to an increase in citations to the authors' previous work. Our results suggest that this might also be the case for the authors' subsequent work. Furthermore, the proportion of publications in top journals does not follow a similar trend, which indicates that this increase in citations and top papers is not simply an effect of having more papers published in top journals.
Due to the small size of the first- and last-author sub-samples, it might be interesting to look at aggregated results for all authors. While these results are obviously influenced by the weight of the middle authors, we can say that, in general, retractions for fraud have a significant negative impact on citations, top papers and publications in top journals, and that retractions for error have a significant positive impact on citations.
Collaboration
In the third part of our analysis, we looked at the impact of retraction on co-authors' collaboration, again using the sub-sample of authors with at least one publication in both the pre- and post-retraction periods (see Table 2 above). Figure 2 shows that retraction does not seem to have any significant impact on the inter-institutional collaboration level of co-authors. Similar results were obtained when looking at the number of authors and the number of countries per paper (not shown). Thus, we conclude that retractions do not appear to have any general effect on the collaboration practices of co-authors.
Figure 2. Average number of institutions per paper from five years prior to five years after the retraction (panels: first, middle and last authors; series: fraud, error and control).
Discussion
The results presented here show that co-authors do share the consequences of fraud. However, it is mostly the output of researchers that is affected, while the decline in the different measures of scientific impact appears to be smaller and the effect on collaboration null. We expected that error would have little or no impact on co-authors' careers. However, our results show that errors do have important consequences (though not as important as cases of fraud) for collaborators in terms of publications. These results might be partly explained by the fact that retractions generally occur in cases of major errors that invalidate the findings as a whole, while minor errors most likely lead to corrections. Also, our results seem to confirm that the extent of the impact of a retraction is related to the position of the author in the article's author list. One unexpected finding was the positive impact that retraction for error seemed to have on the citations received by the authors' subsequent work. More research will be necessary to confirm and fully understand this phenomenon.
The effect of having participated in a case of scientific fraud goes well beyond a decrease in papers or a loss of scientific impact. Some consequences are psychological (e.g. scientists losing trust in science, colleagues and institutions), while others take the form of wasted research efforts and funds. The case of Hendrik Schön, in physics, provides a good example of this waste of effort: he forged 'ground-breaking' results that many other researchers around the globe were eager to reproduce and build upon, leading to much wasted funding and time, and the discovery of the fraud led a few discouraged scientists (mostly PhD students and postdocs) to abandon the idea of pursuing a career in research (Reich, 2009). Moreover, the many cases of fraud discovered almost every day are most likely only the tip of the iceberg: in the United States, allegations of fraud received by the Office of Research Integrity (ORI) have increased to the point where only a small proportion can actually be investigated (Nature News, 2013). It is thus likely that the number of cases will keep rising and that more and more collaborators will see their careers compromised.
References
Azoulay, P., Furman, J. L., Krieger, J. L., & Murray, F. E. (2012). Retractions. NBER
Working Paper Series, 18449.
Biagioli, M. (1998). The instability of authorship: credit and responsibility in contemporary
biomedicine. FASEB Journal: Official Publication of the Federation of American Societies
for Experimental Biology, 12(1), 3–16.
Biagioli, M. (1999). Aporias of scientific authorship: credit and responsibility in
contemporary biomedicine. In The science studies reader (pp. 12–30).
Bonetta, L. (2006). The Aftermath of Scientific Fraud. Cell, 124(5), 873–875.
Cokol, M., Ozbay, F., & Rodriguez-Esteban, R. (2008). Retraction rates are on the rise.
EMBO Reports, 9(1), 2.
Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural
shift in scholarly communication practices? Journal of the American Society for Information
Science and Technology, 52(7), 558–569.
Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic
Review and Meta-Analysis of Survey Data. PLoS ONE, 4(5),
e5738.
Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of
retracted scientific publications. Proceedings of the National Academy of Sciences, 109(42),
17028–17033.
Furman, J. L., Jensen, K., & Murray, F. (2012). Governing knowledge in the scientific
community: Exploring the role of retractions in biomedicine. Research Policy, 41(2), 276–290.
Grieneisen, M. L., & Zhang, M. (2012). A Comprehensive Survey of Retracted Articles from
the Scholarly Literature. PLoS ONE, 7(10), e44118.
Jin, G. Z., Jones, B., Lu, S. F., & Uzzi, B. (2013). The Reverse Matthew Effect: Catastrophe
and Consequence in Scientific Teams. NBER Working Paper Series. Retrieved from
http://www.nber.org/papers/w19489
Nath, S. B., Marcus, S. C., & Druss, B. G. (2006). Retractions in the research literature:
misconduct or mistakes? The Medical Journal of Australia, 185(3), 152–154.
Nature News (2013). Seven days: 26 April–2 May 2013. http://www.nature.com/news/seven-days-26-april-2-may-2013-1.12899#/trend
Neale, A., Northrup, J., Dailey, R. K., Marks, E., & Abrams, J. (2007). Correction and use of
biomedical literature affected by scientific misconduct. Science and Engineering Ethics,
13(1), 5–24.
Neale, A. V., Dailey, R. K., & Abrams, J. (2010). Analysis of citations to biomedical articles
affected by scientific misconduct. Science and Engineering Ethics, 16(2), 251–61.
Pfeifer, M. P., & Snodgrass, G. (1990). The continued use of retracted, invalid scientific
literature. JAMA, The Journal of the American Medical Association, 263(10), 1420.
Pontille, D. (2004). La signature scientifique : une sociologie pragmatique de l’attribution.
Paris: CNRS.
Reich, E. S. (2009). Plastic fantastic: How the biggest fraud in physics shook the scientific
world. New York: Palgrave Macmillan.
Sovacool, B. K. (2008). Exploring Scientific Misconduct: Isolated Individuals, Impure
Institutions, or an Inevitable Idiom of Modern Science? Journal of Bioethical Inquiry, 5(4),
271–282.
Steen, R. G. (2011). Retractions in the scientific literature: is the incidence of research fraud
increasing? Journal of Medical Ethics, 37(4), 249–53.
Steen, R. G. (2012). Retractions in the medical literature: how can patients be protected from
risk? Journal of Medical Ethics, 38(4), 228–32.
Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why Has the Number of Scientific
Retractions Increased? PLoS ONE, 8(7), e68397.
Steneck, N. (2006). Fostering integrity in research: Definitions, current knowledge, and future
directions. Science and Engineering Ethics, 12(1), 53–74.
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The Increasing Dominance of Teams in
Production of Knowledge. Science, 316(5827), 1036–1039.