Article

Publikationsbias in Abhängigkeit von der Art der Finanzierung bei klinischen Studien

Authors:
  • Arzneimittelkommission der Deutschen Ärzteschaft

Abstract

Publication bias describes the distortion of the data available in scientific journals that results from the fact that studies with significant and positive results are more likely to be published than studies with negative or non-significant results. In studies funded by pharmaceutical companies, publication bias has a considerable impact. It has been shown that more than half of the studies conducted as part of the drug approval process remain unpublished. In addition, multiple publication of the same results, the selective use of data, and the withholding of data on adverse drug reactions have been demonstrated. It is unclear, however, whether the probability of publication of studies funded by pharmaceutical companies differs from that of studies without such funding. The data also vary as to the correlation between the type of funding of clinical studies and the length of time to publication. For the benefit of patients, everyone involved in clinical studies ought to take responsibility and facilitate access to all data.


... Here, cooperation between the EMA and collaborative study groups, together with an obligation, tied to marketing authorisation, to conduct subsequent studies under routine-care conditions, suggests itself. To counteract publication and sponsorship bias, comprehensive legal obligations requiring the publicly accessible registration of study protocols and results must also be introduced as quickly as possible [26]. The statutory obligation to disclose the results of clinical trials laid down in the revised version of the German Medicines Act (Arzneimittelgesetz, § 42b), which also applies to drugs that have already been approved [27], is to be welcomed, but should extend to study protocols as well. ...
Article
Various investigations have identified deficits in clinical studies conducted for the market authorisation of haematological and oncological drugs. Based on data from the European Public Assessment Reports (EPAR) of the European Medicines Agency (EMA), an analysis of the quality of these studies, which serve as the basis for the marketing authorisation of currently approved drugs, shows improvement. For example, endpoints recommended by the EMA are frequently used. However, deficits of marketing authorisation studies are still noticeable, e.g., results based on unplanned interim analyses or post hoc subgroup analyses. In addition to the improved quality of studies prior to marketing authorisation, independent clinical studies need to be conducted after marketing authorisation has been obtained; a good example are therapy optimisation studies (TOS) in acute lymphatic leukaemia (ALL). A goal of TOS is the examination of multimodal therapy concepts in the real-world context of routine clinical practice. They can supply valuable data on drug safety and long-term observation. In order to conduct post-marketing authorisation studies, funding is required and the bureaucratic hurdles associated with the 12th amendment to the Pharmaceutical Act will have to be reduced. The results of these studies are needed to handle limited health resources efficiently and to inform and treat patients adequately.
Chapter
The concepts and practices of pharmaceutical marketing are undergoing a process of change. Incentives with which sales representatives originally approached physicians in an attempt to influence their prescribing behaviour (Chap. 2) are being realigned and partly modified (Chaps. 6 to 9). Material incentives, however, were never the only marketing strategy. Rather, as set out in the following, there have always also been marketing methods and instruments that target the content of medical knowledge as the basis of medical practice.
Article
Summary: The results of clinical drug trials are an essential basis for the pharmacological treatment of patients. At present, such trials are mostly sponsored by pharmaceutical companies. Many investigations have shown that pharmaceutical companies influence the design, conduct, and publication of the trials they sponsor in their own favour. More public funding must therefore be made available for clinical research, so that drug trials can be conducted independently of pharmaceutical companies and optimal patient treatment can be ensured.
Article
For confirmatory trials that are used by the European Medicines Agency (EMA) for approval or extension of approval for anticancer drugs, detailed guidelines have been described regarding, for example, the aims, patient selection, design and endpoints of the study. Several studies in recent years have shown insufficiencies in pivotal trials, for example where the requirements of the EMA have not been adhered to consistently and where studies were prematurely stopped for benefit after only interim analyses. Licensing studies are not sufficient for an evaluation of the added drug value in everyday clinical practice, because, for example, active comparators are often not adequately selected and the safety of a drug cannot be conclusively assessed by approval studies. Cancer drugs are often approved as drugs for rare diseases (orphan drugs) on the basis of small and often not blinded and non-randomised studies. Frequently, an added value cannot be demonstrated for orphan drugs. For the evaluation of the added value of newly approved drugs in oncology, an improvement in the state of data is necessary as this is the basis for the development of guidelines and evidence-based treatment decisions. This requires improved clinical trials prior to approval, and strict adherence to EMA guidelines. Following approval it is necessary to answer outstanding patient-relevant questions and to conduct independent clinical studies. This requires the provision of public funding and the removal of bureaucratic hurdles. Only then can a more efficient use of limited health resources be realised and the quality of care for cancer patients be improved.
Article
Off-label use (OLU) is defined as the prescription of pharmaceutical drugs for an indication or other application that has not been approved by the responsible authorisation body. Off-label uses are not only common practice but are often the standard of care, reflecting the need for patient-oriented and patient-specific therapy. The current thesis discusses off-label use from the perspective of health services research. The question is of particular interest because civil and criminal requirements on the one hand and health insurance law on the other have a direct impact on the doctor as a health care provider and the patient as a beneficiary. The first part of this thesis (Chapters 1-6) briefly outlines the legal aspects; the second part (Chapters 7-9) analyses the possible conditions that enable off-label use in the drug therapy of chronic diseases. The focus is primarily on possible structural and administrative gaps as causes of OLU. The present work integrates different areas of health services research. The thesis considers the strengths and limitations of different databases and points to the need for more transparency with regard to off-label use, discussing the gap between the approved usage and patient-specific care. Finally, it outlines the implications for better health care provision.
Article
The market authorisation or extension of indication for all oncology drugs in Europe is now based on Regulation (EC) No. 726/2004, a centralised procedure of the European Medicines Agency (EMA). Studies in recent years have highlighted deficiencies in pivotal studies. For example, the requirements of the EMA are not always consistently followed, and studies are stopped prematurely after an interim analysis that, at that time point, shows improved efficacy compared with the comparator arm. Our current analysis of the European Assessment Reports (reporting period: 01/01/2009 to 08/13/2012) on 29 drugs for 39 oncology indications shows that the quality of the trials for market authorisation has improved in several respects. Primary endpoints recommended by the EMA and the Food and Drug Administration (FDA), such as overall survival and progression-free survival, are used, and only one study was conducted as a phase II trial with no comparator arm. In contrast, the approval of oncology drugs for the treatment of rare diseases (orphan drugs) is based on small studies which are often carried out without blinding, are not randomised and investigate surrogate endpoints. To answer patient-relevant questions following market authorisation, it is necessary to conduct independent clinical studies. Increased public funding needs to be provided and bureaucratic hurdles have to be reduced. Only then will a more efficient use of limited health care resources be possible and the quality of care for cancer patients be improved. Copyright © 2013 S. Karger AG, Basel.
Article
Full-text available
Background: The tendency for authors to submit, and of journals to accept, manuscripts for publication based on the direction or strength of the study findings has been termed publication bias. Objectives: To assess the extent to which publication of a cohort of clinical trials is influenced by the statistical significance, perceived importance, or direction of their results. Search strategy: We searched the Cochrane Methodology Register (The Cochrane Library [Online] Issue 2, 2007), MEDLINE (1950 to March Week 2 2007), EMBASE (1980 to Week 11 2007) and Ovid MEDLINE In-Process & Other Non-Indexed Citations (March 21 2007). We also searched the Science Citation Index (April 2007), checked reference lists of relevant articles and contacted researchers to identify additional studies. Selection criteria: Studies containing analyses of the association between publication and the statistical significance or direction of the results (trial findings), for a cohort of registered clinical trials. Data collection and analysis: Two authors independently extracted data. We classified findings as either positive (defined as results classified by the investigators as statistically significant (P < 0.05), or perceived as striking or important, or showing a positive direction of effect) or negative (findings that were not statistically significant (P ≥ 0.05), or perceived as unimportant, or showing a negative or null direction in effect). We extracted information on other potential risk factors for failure to publish, when these data were available. Main results: Five studies were included. Trials with positive findings were more likely to be published than trials with negative or null findings (odds ratio 3.90; 95% confidence interval 2.68 to 5.68). This corresponds to a risk ratio of 1.78 (95% CI 1.58 to 1.95), assuming that 41% of negative trials are published (the median among the included studies, range = 11% to 85%). In absolute terms, this means that if 41% of negative trials are published, we would expect that 73% of positive trials would be published. Two studies assessed time to publication and showed that trials with positive findings tended to be published after four to five years compared to those with negative findings, which were published after six to eight years. Three studies found no statistically significant association between sample size and publication. One study found no significant association between either funding mechanism, investigator rank, or sex and publication. Authors' conclusions: Trials with positive findings are published more often, and more quickly, than trials with negative findings.
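For orientation, the risk-ratio figures quoted above follow from the reported odds ratio by the standard conversion, using the review's assumed baseline publication probability of 41% for negative trials; this is a back-of-the-envelope check, not an additional result from the review:

\mathrm{RR} = \frac{\mathrm{OR}}{1 - p_0 + p_0 \cdot \mathrm{OR}} = \frac{3.90}{1 - 0.41 + 0.41 \times 3.90} \approx 1.78, \qquad 0.41 \times 1.78 \approx 0.73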
Article
Full-text available
Background: Various investigations in recent years have shown that clinical drug trials funded by pharmaceutical companies more often yield a result favourable to the company's drug than studies conducted independently of the companies. In addition, various forms of influence exerted on drug trials by pharmaceutical companies have been identified. This overview of current systematic investigations on the topic presents the current state of the data. Methods: References retrieved by a systematic search of the PubMed database (1 November 2002 to 16 December 2009) were assessed and selected independently by two reviewers and supplemented with publications from the reference lists. Results: 57 publications were included in the analysis (parts 1 and 2 of the publication). Published drug trials that are funded by pharmaceutical companies, or whose authors have a financial conflict of interest, more often yield a result favourable to the pharmaceutical company than studies funded from other sources. The results are also interpreted in favour of the sponsor more often than in independently funded studies. There were indications that pharmaceutical companies influence study protocols in their own favour. The methodological quality of studies funded by pharmaceutical companies is not inferior to that of studies with other sources of funding. Conclusions: In the assessment of a drug, data from published studies funded by pharmaceutical companies often lead to a distorted picture. This is not explained by the methodological quality of the drug trials.
Article
Full-text available
Objective: To investigate factors associated with the publication of research findings, in particular, the association between "significant" results and publication. Design: Follow-up study. Setting: Studies approved in 1980 or prior to 1980 by the two institutional review boards that serve The Johns Hopkins Health Institutions, one that serves the School of Medicine and Hospital and the other that serves the School of Hygiene and Public Health. Population: A total of 737 studies were followed up. Results: Of the studies for which analyses had been reported as having been performed at the time of interview, 81% from the School of Medicine and Hospital and 66% from the School of Hygiene and Public Health had been published. Publication was not associated with sample size, presence of a comparison group, or type of study (eg, observational study vs clinical trial). External funding and multiple data collection sites were positively associated with publication. There was evidence of publication bias in that for both institutional review boards there was an association between results reported to be significant and publication (adjusted odds ratio, 2.54; 95% confidence interval, 1.63 to 3.94). Contrary to popular opinion, publication bias originates primarily with investigators, not journal editors: only six of the 124 studies not published were reported to have been rejected for publication. Conclusion: There is a statistically significant association between significant results and publication. (JAMA. 1992;267:374-378)
Article
Full-text available
There is good evidence of selective outcome reporting in published reports of randomized trials. We examined reporting practices for trials of gabapentin funded by Pfizer and Warner-Lambert's subsidiary, Parke-Davis (hereafter referred to as Pfizer and Parke-Davis) for off-label indications (prophylaxis against migraine and treatment of bipolar disorders, neuropathic pain, and nociceptive pain), comparing internal company documents with published reports. We identified 20 clinical trials for which internal documents were available from Pfizer and Parke-Davis; of these trials, 12 were reported in publications. For 8 of the 12 reported trials, the primary outcome defined in the published report differed from that described in the protocol. Sources of disagreement included the introduction of a new primary outcome (in the case of 6 trials), failure to distinguish between primary and secondary outcomes (2 trials), relegation of primary outcomes to secondary outcomes (2 trials), and failure to report one or more protocol-defined primary outcomes (5 trials). Trials that presented findings that were not significant (P ≥ 0.05) for the protocol-defined primary outcome in the internal documents either were not reported in full or were reported with a changed primary outcome. The primary outcome was changed in the case of 5 of 8 published trials for which statistically significant differences favoring gabapentin were reported. Of the 21 primary outcomes described in the protocols of the published trials, 6 were not reported at all and 4 were reported as secondary outcomes. Of 28 primary outcomes described in the published reports, 12 were newly introduced. We identified selective outcome reporting for trials of off-label use of gabapentin. This practice threatens the validity of evidence for the effectiveness of off-label interventions.
Article
Full-text available
Background: ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication. Methods and findings: We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006). Conclusions: Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
Article
Full-text available
As of 2005, the International Committee of Medical Journal Editors required investigators to register their trials prior to participant enrollment as a precondition for publishing the trial's findings in member journals. To assess the proportion of registered trials with results recently published in journals with high impact factors; to compare the primary outcomes specified in trial registries with those reported in the published articles; and to determine whether primary outcome reporting bias favored significant outcomes. MEDLINE via PubMed was searched for reports of randomized controlled trials (RCTs) in 3 medical areas (cardiology, rheumatology, and gastroenterology) indexed in 2008 in the 10 general medical journals and specialty journals with the highest impact factors. For each included article, we obtained the trial registration information using a standardized data extraction form. Of the 323 included trials, 147 (45.5%) were adequately registered (ie, registered before the end of the trial, with the primary outcome clearly specified). Trial registration was lacking for 89 published reports (27.6%), 45 trials (13.9%) were registered after the completion of the study, 39 (12%) were registered with no or an unclear description of the primary outcome, and 3 (0.9%) were registered after the completion of the study and had an unclear description of the primary outcome. Among articles with trials adequately registered, 31% (46 of 147) showed some evidence of discrepancies between the outcomes registered and the outcomes published. The influence of these discrepancies could be assessed in only half of them and in these statistically significant results were favored in 82.6% (19 of 23). Comparison of the primary outcomes of RCTs registered with their subsequent publication indicated that selective outcome reporting is prevalent.
Article
Full-text available
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications. This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33-17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02-11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%-100%, p = 0.0039). Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Article
Full-text available
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search. We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as "pivotal trials" for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78-5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17-1.52); and pivotal status (OR 5.31, 95% CI 3.30-8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24-7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15-1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials. Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Article
Full-text available
The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention. We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies have found that statistically significant outcomes had a higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.
Article
Full-text available
To investigate the association between trial characteristics, findings, and publication. The major factor hypothesized to be associated with publication was "significant" results, which included both statistically significant results and results assessed by the investigators to be qualitatively significant, when statistical testing was not done. Other factors hypothesized to have a possible association with publication were funding institute, funding mechanism (grant versus contract versus intramural), multicenter status, use of comparison groups, large sample size, type of control (parallel versus nonparallel), use of randomization and masking, type of analysis (by treatment received versus by treatment assigned), and investigator sex and rank. Follow-up, by 1988 interview with the principal investigator or surrogate, of all clinical trials funded by the National Institutes of Health (NIH) in 1979, to learn of trial results and publication status. Two hundred ninety-three NIH trials, funded in 1979. Publication of clinical trial results. Of the 198 clinical trials completed by 1988, 93% had been published. Trials with "significant" results were more likely to be published than those showing "nonsignificant" results (adjusted odds ratio [OR] = 12.30; 95% confidence interval [CI], 2.54 to 60.00). No other factor was positively associated with publication. Most unpublished trials remained so because investigators thought the results were "not interesting" or they "did not have enough time" (42.8%). Metaanalysis using data from this and 3 similar studies provided a combined unadjusted OR of 2.88 (95% CI, 2.13 to 3.89) for the association between significant results and publication. Even when the overall publication rate is high, such as for trials funded by the NIH, publication bias remains a significant problem. Given the importance of trials and their utility in evaluating medical treatments, especially within the context of metaanalysis, it is clear that we need more reliable systems for maintaining information about initiated studies. Trial registers represent such a system but must receive increased financial support to succeed.
Article
Full-text available
To determine the extent to which publication is influenced by study outcome. A cohort of studies submitted to a hospital ethics committee over 10 years were examined retrospectively by reviewing the protocols and by questionnaire. The primary method of analysis was Cox's proportional hazards model. University hospital, Sydney, Australia. 748 eligible studies submitted to Royal Prince Alfred Hospital Ethics Committee between 1979 and 1988. Time to publication. Response to the questionnaire was received for 520 (70%) of the eligible studies. Of the 218 studies analysed with tests of significance, those with positive results (P < 0.05) were much more likely to be published than those with negative results (P ≥ 0.10) (hazard ratio 2.32 (95% confidence interval 1.47 to 3.66), P = 0.0003), with a significantly shorter time to publication (median 4.8 v 8.0 years). This finding was even stronger for the group of 130 clinical trials (hazard ratio 3.13 (1.76 to 5.58), P = 0.0001), with median times to publication of 4.7 and 8.0 years respectively. These results were not materially changed after adjusting for other significant predictors of publication. Studies with indefinite conclusions (0.05 ≤ P < 0.10) tended to have an even lower publication rate and longer time to publication than studies with negative results (hazard ratio 0.39 (0.13 to 1.12), P = 0.08). For the 103 studies in which outcome was rated qualitatively, there was no clear cut evidence of publication bias, although the number of studies in this group was not large. This study confirms the evidence of publication bias found in other studies and identifies delay in publication as an additional important factor. The study results support the need for prospective registration of trials to avoid publication bias and also support restricting the selection of trials to those started before a common date in undertaking systematic reviews.
Article
Full-text available
Within-study selective reporting is widely believed to exist, although to date there have been no empirical studies to assess the extent of the problem in clinical research. The present study aimed to examine this process. We undertook a pilot study, involving a single local research ethics committee (LREC), in which we compared the outcomes, analysis and sample size proposed in the original approved study protocol with the results presented in the subsequent study report. We received 41 (73%) replies from lead researchers of 56 projects, which were a complete cohort of clinical research applications approved in a particular time period by the LREC. Fifteen of these projects, which were completed and published at the time of our study, were further investigated. Only six (40%) stated which outcome variables were of primary interest and four (67%) of these showed consistency in the reports. Eight (53%) of the 15 studies mentioned an analysis plan. However, seven (88%) of these eight studies did not follow their prescribed analysis plan: the analysis of outcome variables or associations between certain variables were found to be missing from the report. Our pilot study has shown that within-study selective reporting may be examined qualitatively by comparing the study report with the study protocol. Our results suggest that it might well be substantial; however, the bias can only be broadly identified as protocols are not sufficiently precise.
Article
Full-text available
Despite increasing awareness about the potential impact of financial conflicts of interest on biomedical research, no comprehensive synthesis of the body of evidence relating to financial conflicts of interest has been performed. To review original, quantitative studies on the extent, impact, and management of financial conflicts of interest in biomedical research. Studies were identified by searching MEDLINE (January 1980-October 2002), the Web of Science citation database, references of articles, letters, commentaries, editorials, and books and by contacting experts. All English-language studies containing original, quantitative data on financial relationships among industry, scientific investigators, and academic institutions were included. A total of 1664 citations were screened, 144 potentially eligible full articles were retrieved, and 37 studies met our inclusion criteria. One investigator (J.E.B.) extracted data from each of the 37 studies. The main outcomes were the prevalence of specific types of industry relationships, the relation between industry sponsorship and study outcome or investigator behavior, and the process for disclosure, review, and management of financial conflicts of interest. Approximately one fourth of investigators have industry affiliations, and roughly two thirds of academic institutions hold equity in start-ups that sponsor research performed at the same institutions. Eight articles, which together evaluated 1140 original studies, assessed the relation between industry sponsorship and outcome in original research. Aggregating the results of these articles showed a statistically significant association between industry sponsorship and pro-industry conclusions (pooled Mantel-Haenszel odds ratio, 3.60; 95% confidence interval, 2.63-4.91). Industry sponsorship was also associated with restrictions on publication and data sharing. The approach to managing financial conflicts varied substantially across academic institutions and peer-reviewed journals. Financial relationships among industry, scientific investigators, and academic institutions are widespread. Conflicts of interest arising from these ties can influence biomedical research in important ways.
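For reference, the pooled Mantel-Haenszel odds ratio cited above is the standard stratified estimator; the 2x2 cell labels below are generic illustrations and are not taken from the cited review, which reports only the pooled value:

\mathrm{OR}_{MH} = \frac{\sum_i a_i d_i / n_i}{\sum_i b_i c_i / n_i}

where, for each contributing article i, a_i and b_i would be the numbers of industry-sponsored studies with and without pro-industry conclusions, c_i and d_i the corresponding numbers among studies with other sponsorship, and n_i the total number of studies in stratum i.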
Article
Full-text available
To investigate whether funding of drug studies by the pharmaceutical industry is associated with outcomes that are favourable to the funder and whether the methods of trials funded by pharmaceutical companies differ from the methods in trials with other sources of support. Medline (January 1966 to December 2002) and Embase (January 1980 to December 2002) searches were supplemented with material identified in the references and in the authors' personal files. Data were independently abstracted by three of the authors and disagreements were resolved by consensus. 30 studies were included. Research funded by drug companies was less likely to be published than research funded by other sources. Studies sponsored by pharmaceutical companies were more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98 to 5.51; 18 comparisons). None of the 13 studies that analysed methods reported that studies funded by industry were of poorer quality. Systematic bias favours products which are made by the company funding the research. Explanations include the selection of an inappropriate comparator to the product being investigated and publication bias.
Article
Full-text available
Selective reporting of outcomes within published studies based on the nature or direction of their results has been widely suspected, but direct evidence of such bias is currently limited to case reports. To study empirically the extent and nature of outcome reporting bias in a cohort of randomized trials. Cohort study using protocols and published reports of randomized trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg, Denmark, in 1994-1995. The number and characteristics of reported and unreported trial outcomes were recorded from protocols, journal articles, and a survey of trialists. An outcome was considered incompletely reported if insufficient data were presented in the published articles for meta-analysis. Odds ratios relating the completeness of outcome reporting to statistical significance were calculated for each trial and then pooled to provide an overall estimate of bias. Protocols and published articles were also compared to identify discrepancies in primary outcomes. Completeness of reporting of efficacy and harm outcomes and of statistically significant vs nonsignificant outcomes; consistency between primary outcomes defined in the most recent protocols and those defined in published articles. One hundred two trials with 122 published journal articles and 3736 outcomes were identified. Overall, 50% of efficacy and 65% of harm outcomes per trial were incompletely reported. Statistically significant outcomes had a higher odds of being fully reported compared with nonsignificant outcomes for both efficacy (pooled odds ratio, 2.4; 95% confidence interval [CI], 1.4-4.0) and harm (pooled odds ratio, 4.7; 95% CI, 1.8-12.0) data. In comparing published articles with protocols, 62% of trials had at least 1 primary outcome that was changed, introduced, or omitted. Eighty-six percent of survey responders (42/49) denied the existence of unreported outcomes despite clear evidence to the contrary. The reporting of trial outcomes is not only frequently incomplete but also biased and inconsistent with protocols. Published articles, as well as reviews that incorporate them, may therefore be unreliable and overestimate the benefits of an intervention. To ensure transparency, planned trials should be registered and protocols should be made publicly available prior to trial completion.
Article
Full-text available
To examine the extent and nature of outcome reporting bias in a broad cohort of published randomised trials. Retrospective review of publications and follow up survey of authors. Cohort All journal articles of randomised trials indexed in PubMed whose primary publication appeared in December 2000. Prevalence of incompletely reported outcomes per trial; reasons for not reporting outcomes; association between completeness of reporting and statistical significance. 519 trials with 553 publications and 10,557 outcomes were identified. Survey responders (response rate 69%) provided information on unreported outcomes but were often unreliable--for 32% of those who denied the existence of such outcomes there was evidence to the contrary in their publications. On average, over 20% of the outcomes measured in a parallel group trial were incompletely reported. Within a trial, such outcomes had a higher odds of being statistically non-significant compared with fully reported outcomes (odds ratio 2.0 (95% confidence interval 1.6 to 2.7) for efficacy outcomes; 1.9 (1.1 to 3.5) for harm outcomes). The most commonly reported reasons for omitting efficacy outcomes included space constraints, lack of clinical importance, and lack of statistical significance. Incomplete reporting of outcomes within published articles of randomised trials is common and is associated with statistical non-significance. The medical literature therefore represents a selective and biased subset of study outcomes, and trial protocols should be made publicly available.
Article
Full-text available
To identify characteristics of submitted manuscripts that are associated with acceptance for publication by major biomedical journals. A prospective cohort study of manuscripts reporting original research submitted to three major biomedical journals (BMJ and the Lancet [UK] and Annals of Internal Medicine [USA]) between January and April 2003 and between November 2003 and February 2004. Case reports on single patients were excluded. Publication outcome, methodological quality, predictors of publication. Of 1107 manuscripts enrolled in the study, 68 (6%) were accepted, 777 (70%) were rejected outright, and 262 (24%) were rejected after peer review. Higher methodological quality scores were associated with an increased chance of acceptance (odds ratio [OR], 1.39 per 0.1 point increase in quality score; 95% CI, 1.16-1.67; P < 0.001), after controlling for study design and journal. In a multivariate logistic regression model, manuscripts were more likely to be published if they reported a randomised controlled trial (RCT) (OR, 2.40; 95% CI, 1.21-4.80); used descriptive or qualitative analytical methods (OR, 2.85; 95% CI, 1.51-5.37); disclosed any funding source (OR, 1.90; 95% CI, 1.01-3.60); or had a corresponding author living in the same country as that of the publishing journal (OR, 1.99; 95% CI, 1.14-3.46). There was a non-significant trend towards manuscripts with larger sample size (≥ 73) being published (OR, 2.01; 95% CI, 0.94-4.32). After adjustment for other study characteristics, having statistically significant results did not improve the chance of a study being published (OR, 0.83; 95% CI, 0.34-1.96). Submitted manuscripts are more likely to be published if they have high methodological quality, RCT study design, descriptive or qualitative analytical methods and disclosure of any funding source, and if the corresponding author lives in the same country as that of the publishing journal. Larger sample size may also increase the chance of acceptance for publication.
Article
The ways to ascertain whether drug products are effective, safe and non-addictive are discussed. The law does not oblige companies to disclose the findings of their research on licensed medicines. Biased under-reporting of clinical trials has long been recognized as a major problem. To ensure the effectiveness of drug products, companies that have not studied the effects of those products in patients should be given only provisional licences, and the evidence from successive clinical trials must be accumulated and reviewed systematically.
Article
Altruism and trust lie at the heart of research on human subjects. Altruistic individuals volunteer for research because they trust that their participation will contribute to improved health for others and that researchers will minimize risks to participants. In return for the altruism and trust that make clinical research possible, the research enterprise has an obligation to conduct research ethically and to report it honestly. Honest reporting begins with revealing the existence of all clinical studies, even those that reflect unfavorably on a research sponsor's product.
Article
Numerous studies have addressed the subject of the extent and consequences of financial ties between pharmaceutical companies, academic institutions and clinical researchers. Based on these reports, guidelines for the management of conflicts of interest have been developed which give priority to the patients’ welfare on all levels of interaction between doctors and industrial sponsors. The declaration of potential conflicts of interest and strict adherence to guidelines are mandatory to retain the physicians’ credibility, reputation, and professional autonomy. Enhanced transparency in the conduct and analysis of clinical trials should be regarded as a valuable tool to uncover and avoid undue influence on clinical trials and derived prescription recommendations.
Article
Over the past 2 decades, the pharmaceutical industry has gained unprecedented control over the evaluation of its own products. Drug companies now finance most clinical research on prescription drugs, and there is mounting evidence that they often skew the research they sponsor to make their drugs look better and safer. Two recent articles underscore the problem: one showed that many publications concerning Merck's rofecoxib that were attributed primarily or solely to academic investigators were actually written by Merck employees or medical publishing companies hired by Merck [1]; the other showed that the company manipulated the data analysis in 2 clinical trials to minimize the increased mortality associated with rofecoxib [2]. Bias in the way industry-sponsored research is conducted and reported is not unusual and by no means limited to Merck [3].
Article
In a retrospective survey, 487 research projects approved by the Central Oxford Research Ethics Committee between 1984 and 1987 were studied for evidence of publication bias. As of May, 1990, 285 of the studies had been analysed by the investigators, and 52% of these had been published. Studies with statistically significant results were more likely to be published than those finding no difference between the study groups (adjusted odds ratio [OR] 2.32; 95% confidence interval [CI] 1.25-4.28). Studies with significant results were also more likely to lead to a greater number of publications and presentations and to be published in journals with a high citation impact factor. An increased likelihood of publication was also associated with a high rating by the investigator of the importance of the study results, and with increasing sample size. The tendency towards publication bias was greater with observational and laboratory-based experimental studies (OR = 3.79; 95% CI = 1.47-9.76) than with randomised clinical trials (OR = 0.84; 95% CI = 0.34-2.09). We have confirmed the presence of publication bias in a cohort of clinical research studies. These findings suggest that conclusions based only on a review of published data should be interpreted cautiously, especially for observational studies. Improved strategies are needed to identify the results of unpublished as well as published studies.
Article
Reports of clinical trials included in applications submitted by drug companies to licensing authorities in Finland and Sweden in four different years were studied. Many reports were submitted, but most of the trials were uncontrolled and of poor quality. Many of the reports were unpublished, and thus, as the submissions are secret, were not available to doctors. These unpublished reports were in most respects as valuable as the published reports. Most of the reports included some information about adverse effects; the information was often deficient, but skilled analysis might increase its value. This study provides support for those who want to see public disclosure of the reports of trials submitted in licensing applications.
Article
Medical evidence may be biased over time if completion and publication of randomized efficacy trials are delayed when results are not statistically significant. To evaluate whether the time to completion and the time to publication of randomized phase 2 and phase 3 trials are affected by the statistical significance of results and to describe the natural history of such trials. Prospective cohort of randomized efficacy trials conducted by 2 trialist groups from 1986 to 1996. Multicenter trial groups in human immunodeficiency virus infection sponsored by the National Institutes of Health. A total of 109 efficacy trials (total enrollment, 43708 patients). Time from start of enrollment to completion of follow-up and time from completion of follow-up to peer-reviewed publication assessed with survival analysis. The median time from start of enrollment to publication was 5.5 years and was substantially longer for negative trials than for results favoring an experimental arm (6.5 vs 4.3 years, respectively; P<.001; hazard ratio for time to publication for positive vs negative trials, 3.7; 95% confidence interval [CI], 1.8-7.7). This difference was mostly attributable to differences in the time from completion to publication (median, 3.0 vs 1.7 years for negative vs positive trials; P<.001). On average, trials with significant results favoring any arm completed follow-up slightly earlier than trials with nonsignificant results (median, 2.3 vs 2.5 years; P=.045), but long-protracted trials often had low event rates and failed to reach statistical significance, while trials that were terminated early had significant results. Positive trials were submitted for publication significantly more rapidly after completion than were negative trials (median, 1.0 vs 1.6 years; P=.001) and were published more rapidly after submission (median, 0.8 vs 1.1 years; P=.04). Among randomized efficacy trials, there is a time lag in the publication of negative findings that occurs mostly after the completion of the trial follow-up.
Article
The primary aim of the present study was to identify possible occurrence of selective reporting of the results of clinical trials to the Finnish National Agency for Medicines. Selective reporting may lead to poorly informed action or inaction by regulatory authorities. In 1987, 274 clinical drug trials were notified to the Finnish National Agency for Medicines. By December 1993, final reports had been received from 68 of these trials, and statements that the trial had been suspended from 24 trials. The sponsors of the non-reported trials were requested to report the outcome. The outcomes, if any, of all reported and non-reported trials were classified as positive, inconclusive or negative. The numbers of trials with positive, inconclusive or negative outcomes were 111, 33 and 44, respectively; the outcomes of 86 trials could not be assessed. Final reports were received from 42/111 (38%) trials with positive, 6/33 (18%) with inconclusive and 9/44 (20%) with negative outcomes. Substantial evidence of selective reporting was detected, since trials with a positive outcome resulted more often in submission of a final report to the regulatory authority than those with inconclusive or negative outcomes.
Article
To investigate the relative impact on publication bias caused by multiple publication, selective publication, and selective reporting in studies sponsored by pharmaceutical companies. 42 placebo controlled studies of five selective serotonin reuptake inhibitors submitted to the Swedish drug regulatory authority as a basis for marketing approval for treating major depression were compared with the studies actually published (between 1983 and 1999). Multiple publication: 21 studies contributed to at least two publications each, and three studies contributed to five publications. Selective publication: studies showing significant effects of drug were published as stand alone publications more often than studies with non-significant results. Selective reporting: many publications ignored the results of intention to treat analyses and reported the more favourable per protocol analyses only. The degree of multiple publication, selective publication, and selective reporting differed between products. Thus, any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.
Article
Large clinical trials are the criterion standard for making treatment decisions, and nonpublication of the results of such trials can lead to bias in the literature and contribute to inappropriate medical decisions. To determine the rate of full publication of large randomized trials presented at annual meetings of the American Society of Clinical Oncology (ASCO), quantify bias against publishing nonsignificant results, and identify factors associated with time to publication. Survey of 510 abstracts from large (sample size ≥ 200), phase 3, randomized controlled trials presented at ASCO meetings between 1989 and 1998. Trial results were classified as significant (P ≤ .05 for the primary outcome measure) or nonsignificant (P > .05 or not reported), and the type of presentation and sponsorship were identified. Subsequent full publication was identified using a search of MEDLINE and EMBASE, completed November 1, 2001; the search was updated in November 2002, using the Cochrane Register of Controlled Trials. Authors were contacted if the searches did not find evidence of publication. Publication rate at 5 years; time from presentation to full publication. Of 510 randomized trials, 26% were not published in full within 5 years after presentation at the meeting. Eighty-one percent of the studies with significant results had been published by this time compared with 68% of the studies with nonsignificant results (P < .001). Studies with oral or plenary presentation were published sooner than those not presented (P = .002), and studies with pharmaceutical sponsorship were published sooner than studies with cooperative group sponsorship or studies for which sponsorship was not specified (P = .02). These factors remained significant in a multivariable model. The most frequent reason cited by authors for not publishing was lack of time, funds, or other resources. A substantial number of large phase 3 trials presented at an international oncology meeting remain unpublished 5 years after presentation. Bias against publishing nonsignificant results is a problem even for large randomized trials. Nonpublication breaks the contract that investigators make with trial participants, funding agencies, and ethics boards.
Article
Questions concerning the safety of selective serotonin reuptake inhibitors (SSRIs) in the treatment of depression in children led us to compare and contrast published and unpublished data on the risks and benefits of these drugs. We did a meta-analysis of data from randomised controlled trials that evaluated an SSRI versus placebo in participants aged 5-18 years and that were published in a peer-reviewed journal or were unpublished and included in a review by the Committee on Safety of Medicines. The following outcomes were included: remission, response to treatment, depressive symptom scores, serious adverse events, suicide-related behaviours, and discontinuation of treatment because of adverse events. Data for two published trials suggest that fluoxetine has a favourable risk-benefit profile, and unpublished data lend support to this finding. Published results from one trial of paroxetine and two trials of sertraline suggest equivocal or weak positive risk-benefit profiles. However, in both cases, addition of unpublished data indicates that risks outweigh benefits. Data from unpublished trials of citalopram and venlafaxine show unfavourable risk-benefit profiles. Published data suggest a favourable risk-benefit profile for some SSRIs; however, addition of unpublished data indicates that risks could outweigh benefits of these drugs (except fluoxetine) to treat depression in children and young people. Clinical guideline development and clinical decisions about treatment are largely dependent on an evidence base published in peer-reviewed journals. Non-publication of trials, for whatever reason, or the omission of important data from published trials, can lead to erroneous recommendations for treatment. Greater openness and transparency with respect to all intervention studies is needed.
Article
Gustavo Batista Menezes, Webster Glayser Pimenta Reis, Júlia Maria Moreira Santos, Igor Dimitri Gama Duarte, Janetti Nogueira Francischi. (2006) Inhibition of Prostaglandin F2α by Selective Cyclooxygenase 2 Inhibitors Accounts for Reduced Rat Leukocyte Migration. Inflammation 29, 163-169
Article
Altruistic motives and trust are central to scientific investigations involving people. These prompt volunteers to participate in clinical trials. However, publication bias and other causes of the failure to report trial results may lead to an overly positive view of medical interventions in the published evidence available. Registration of randomised controlled trials right from the start is therefore warranted. The International Committee of Medical Journal Editors has issued a statement to the effect that the 11 journals represented in the Committee will not consider publication of the results of trials that have not been registered in a publicly accessible register such as www.clinicaltrials.gov. Patients who voluntarily participate in clinical trials need to know that their contribution to better human healthcare is available for decision making in clinical practice.
Article
The cyclo-oxygenase 2 inhibitor rofecoxib was recently withdrawn because of cardiovascular adverse effects. An increased risk of myocardial infarction had been observed in 2000 in the Vioxx Gastrointestinal Outcomes Research study (VIGOR), but was attributed to a cardioprotective effect of naproxen rather than a cardiotoxic effect of rofecoxib. We used standard and cumulative random-effects meta-analyses of randomised controlled trials and observational studies to establish whether robust evidence on the adverse effects of rofecoxib was available before September, 2004. We searched bibliographic databases and relevant files of the US Food and Drug Administration. We included all randomised controlled trials in patients with chronic musculoskeletal disorders that compared rofecoxib with other non-steroidal anti-inflammatory drugs (NSAIDs) or placebo, and cohort and case-control studies of cardiovascular risk and naproxen. Myocardial infarction was the primary endpoint. We identified 18 randomised controlled trials and 11 observational studies. By the end of 2000 (52 myocardial infarctions, 20,742 patients) the relative risk from randomised controlled trials was 2.30 (95% CI 1.22-4.33, p=0.010), and 1 year later (64 events, 21,432 patients) it was 2.24 (1.24-4.02, p=0.007). There was little evidence that the relative risk differed depending on the control group (placebo, non-naproxen NSAID, or naproxen; p=0.41) or trial duration (p=0.82). In observational studies, the cardioprotective effect of naproxen was small (combined estimate 0.86 [95% CI 0.75-0.99]) and could not have explained the findings of the VIGOR trial. Our findings indicate that rofecoxib should have been withdrawn several years earlier. The reasons why the manufacturer and drug licensing authorities did not continuously monitor and summarise the accumulating evidence need to be clarified.
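The cumulative meta-analysis mentioned above re-pools the accumulating trials in chronological order so that the point at which the evidence became convincing can be read off. Below is a minimal sketch of that technique on the log relative-risk scale; for simplicity it uses inverse-variance fixed-effect pooling rather than the random-effects model of the published analysis, and every figure in it is an invented placeholder rather than data from the rofecoxib trials.

    # Minimal sketch of a cumulative meta-analysis on the log relative-risk scale.
    # Fixed-effect inverse-variance pooling is used for brevity; the published
    # analysis used random-effects models. All values are placeholders.
    from math import exp, log, sqrt

    # (year, relative risk, standard error of log RR) -- illustrative values only
    trials = [
        (1999, 1.8, 0.60),
        (2000, 2.6, 0.45),
        (2001, 2.1, 0.50),
    ]

    trials.sort(key=lambda t: t[0])   # pool in chronological order
    sum_w = 0.0
    sum_w_log_rr = 0.0
    for year, rr, se in trials:
        w = 1.0 / se ** 2             # inverse-variance weight
        sum_w += w
        sum_w_log_rr += w * log(rr)
        pooled_log_rr = sum_w_log_rr / sum_w
        half_width = 1.96 / sqrt(sum_w)
        print(f"up to {year}: pooled RR {exp(pooled_log_rr):.2f} "
              f"(95% CI {exp(pooled_log_rr - half_width):.2f}-{exp(pooled_log_rr + half_width):.2f})")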
Article
Background: Anticonvulsant drugs have been used in the management of pain since the 1960s. The clinical impression is that they are useful for chronic neuropathic pain, especially when the pain is lancinating or burning. Objectives: To evaluate the analgesic effectiveness and adverse effects of gabapentin for pain management in clinical practice. Search strategy: Randomised trials of gabapentin in acute, chronic or cancer pain were identified by MEDLINE (1966-Nov 2004), EMBASE (1994-Nov 2004), SIGLE (1980-Jan 2004) and the Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library Issue 4, 2004). Additional reports were identified from the reference lists of the retrieved papers and by contacting investigators. Date of most recent search: January 2004. Selection criteria: Randomised trials reporting the analgesic effects of gabapentin in patients, with subjective pain assessment as either the primary or a secondary outcome. Data collection and analysis: Data were extracted by two independent reviewers, and trials were quality scored. Numbers-needed-to-treat (NNTs) were calculated, where possible, from dichotomous data for effectiveness, adverse effects and drug-related study withdrawal. Main results: Fourteen reports describing 15 studies of gabapentin were considered eligible (1468 participants). One was a study of acute pain. The remainder included the following conditions: post-herpetic neuralgia (two studies), diabetic neuropathy (seven studies), cancer-related neuropathic pain (one study), phantom limb pain (one study), Guillain-Barré syndrome (one study), spinal cord injury pain (one study) and various neuropathic pains (one study). The study in acute post-operative pain (70 participants) showed no benefit for gabapentin compared to placebo for pain at rest. In chronic pain, the NNT for improvement in all trials with evaluable data was 4.3 (95% CI 3.5 to 5.7). Forty-two percent of participants improved on gabapentin compared to 19% on placebo. The number needed to harm (NNH) for adverse events leading to withdrawal from a trial was not significant. Fourteen percent of participants withdrew from active arms compared to 10% in placebo arms. The NNH for minor harm was 3.7 (95% CI 2.4 to 5.4). The NNT for effective pain relief in diabetic neuropathy was 2.9 (95% CI 2.2 to 4.3) and for post-herpetic neuralgia 3.9 (95% CI 3 to 5.7). Authors' conclusions: There is evidence to show that gabapentin is effective in neuropathic pain. There is limited evidence to show that gabapentin is ineffective in acute pain.
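The numbers-needed-to-treat quoted above follow directly from the dichotomous outcome data: the NNT is the reciprocal of the absolute risk difference between the arms, so 42% improvement on gabapentin versus 19% on placebo gives 1/(0.42 - 0.19), i.e. roughly 4.3. A minimal sketch of that calculation, assuming hypothetical arm sizes and a simple Wald interval for the risk difference (the interval method actually used by the review is not stated here):

    # Minimal sketch: deriving a number-needed-to-treat (NNT) from dichotomous
    # trial data. Arm sizes and the Wald-interval method are illustrative
    # assumptions, not taken from the Cochrane review summarised above.
    from math import sqrt

    def nnt_from_counts(events_active, n_active, events_control, n_control, z=1.96):
        """Return the NNT point estimate and an approximate 95% CI."""
        p_a = events_active / n_active       # event rate, active arm
        p_c = events_control / n_control     # event rate, control arm
        arr = p_a - p_c                      # absolute risk reduction
        se = sqrt(p_a * (1 - p_a) / n_active + p_c * (1 - p_c) / n_control)
        lo, hi = arr - z * se, arr + z * se  # CI for the risk difference
        return 1 / arr, (1 / hi, 1 / lo)     # NNT is the reciprocal of the ARR

    # 42% improved on gabapentin vs 19% on placebo; hypothetical arms of 500 each.
    nnt, ci = nnt_from_counts(210, 500, 95, 500)
    print(f"NNT = {nnt:.1f}, approx. 95% CI {ci[0]:.1f} to {ci[1]:.1f}")  # about 4.3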
Article
Publication bias has previously been identified as a threat to the validity of a meta-analysis. Recently, new evidence has documented an additional threat to validity: the selective reporting of trial outcomes within published studies. Many diseases have several possible outcome measures, and some articles might report only a selection of those outcomes, perhaps those with statistically significant results. In this article, we review this problem while addressing the questions: what is within-study selective reporting? how common is it? why is it done? how can it mislead? how can it be detected? and, finally, what is the solution? We recommend that both publication bias and selective reporting should be routinely investigated in systematic reviews.
Article
Prospectively planned collection and analysis of adverse event (AE) data are essential parts of well-conducted clinical trials. The AE data in a trial sponsor's database should be comparable with what is stipulated in the protocol and with the AE data published. We examined whether the published AE data differ from those in the sponsor's database and from the data collection requirements stated in study protocols. We searched the National Cancer Institute (NCI) Clinical Data Update System (CDUS) for studies that used the Common Toxicity Criteria version 2.0 and for which a final study publication was available. We extracted from the protocols information pertaining to AE collection and reporting methods and compared it with the methods cited in the article. We also compared the AE data in the trial publication with the AE data submitted by the investigators to CDUS. We identified 22 studies meeting the criteria for this review. There was considerable inconsistency between the AE collection and reporting methods cited in the protocols and those cited in the final publications. AE data in the article and in CDUS were not identical. Twenty-seven percent of high-grade AEs in the articles could not be matched to agent-attributable AEs in CDUS, and 28% of high-grade AEs in CDUS could not be matched to AEs in the corresponding article. In 14 of 22 articles, the number of high-grade AEs in CDUS differed from the number in the article by 20% or more. Inconsistency in the collection and reporting of AEs is associated with discrepancies between the NCI database and trial publication AE data.
Article
To ascertain the extent of publication bias in the reporting of acute stroke clinical trials. We identified controlled acute ischemic stroke clinical trials reported in English over a 45-year period from 1955 to 1999 through systematic search of MEDLINE, the Cochrane Controlled Stroke Trials Register, and additional databases. We analyzed trial methodology, quality, outcome, study sponsorship, and timing of publication to identify various forms of publication bias, including nonpublication bias, abbreviated publication bias, and time-lag bias. One hundred seventy-eight acute ischemic stroke trials, enrolling 73,949 subjects, evaluated 75 agents or nonpharmacologic interventions. A greater proportion of harmful outcomes in unpublished studies (n = 4) compared with published trials (0.75 vs 0.06, p < 0.0001) and underreporting of smaller, nonbeneficial studies in acute stroke suggest nonpublication bias. Although a definite time-lag bias was not evident, nonbeneficial studies were slower to proceed from enrollment completion to publication (2.3 vs 2.0 years, p = 0.207), with an even longer delay for nonbeneficial corporate pharmaceutical sponsored trials (2.8 vs 2.1 years, p = 0.086), despite superior trial report quality scores for corporate-sponsored studies when compared with nonprofit/governmental studies (mean 69.2 ± 3.9 vs 53.4 ± 9.2 [95% CI], p < 0.005). Publication bias is evident in the acute stroke research literature, supporting the need for prospective trial registration.
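Comparisons such as the harmful-outcome proportions above (0.75 in only four unpublished trials versus 0.06 in the published trials) involve very small groups, for which an exact test of two proportions is the natural tool. A minimal sketch follows; the counts are reconstructed for illustration and are not taken verbatim from the study.

    # Minimal sketch: comparing the proportion of harmful outcomes in
    # unpublished vs published trials with Fisher's exact test.
    # Counts are illustrative reconstructions (3/4 vs roughly 10/174).
    from scipy.stats import fisher_exact

    table = [[3, 1],       # unpublished trials: harmful vs not harmful
             [10, 164]]    # published trials:   harmful vs not harmful
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio {odds_ratio:.1f}, p = {p_value:.4g}")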
Article
On September 30, the New York Times reported that the FDA had issued a warning that the antifibrinolytic drug aprotinin could cause renal failure, congestive heart failure, stroke, and death. Dr. Jerry Avorn writes that many aspects of the aprotinin saga are familiar to observers of the drug-evaluation process.
Article
The aim of this methodology review was to assess whether the time taken to publish the results of clinical trials is influenced by the statistical significance of their results (time-lag bias). If clinical trials with positive findings are stopped earlier than planned and published more quickly than those trials with null or negative findings, then new interventions might be mistakenly assumed to be effective. Two studies with a total of 196 trials met the inclusion criteria for this review. In both studies just over half of the trials had been published in full. Trials with positive results (i.e. with statistically significant results in favour of the experimental arm of the trial) tended to be published within approximately 4 to 5 years. Trials with null or negative results (i.e. not statistically significant or statistically significant in favour of the control arm) were published after about 6 to 8 years. One of the studies suggested that this difference could, in part, be attributed to the length of time taken to publish the results of a trial once follow-up has been completed. This study showed that trials with null or negative findings took, on average, just over a year longer to be published than those with positive results. Our review shows that trials with positive results are published sooner than those with null or negative results. This has important implications for the timing of the initiation and updating of a systematic review, especially if there is an association between the inclusion of a trial in a review and its publication status. It is of particular concern when one considers reviews containing only a small number of studies.
Article
The systematic review carried out by Taylor et al suggested that selective serotonin reuptake inhibitors (SSRIs) begin to have observable beneficial effects in depression during the first week of treatment. This result was obtained from the analysis of randomized controlled trials of SSRIs vs placebo that reported outcomes for at least 2 time points in the first 4 weeks of treatment. From 107 trial reports, Taylor et al identified 50 randomized controlled trials meeting the inclusion criteria, of which 28 were incorporated in the primary analysis. This pattern of trial selection was a consequence of the fact that not all studies took repeated outcome measures and not all studies that did presented the results for all time points.
Article
Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials--and of the outcomes within those trials--can lead to unrealistic estimates of drug effectiveness and alter the apparent risk-benefit ratio. We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set. Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published was associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that published effect sizes exceeded those based on the full FDA data by 11% to 69% for individual drugs and by 32% overall. We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.
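The mechanism described above, pooling only the published trials and thereby inflating the apparent effect, can be illustrated with a toy calculation. The effect sizes, sample sizes and the simple sample-size-weighted pooling below are illustrative assumptions, not the FDA data or the meta-analytic method of the study.

    # Toy illustration of selective publication: the pooled effect size from
    # published trials only exceeds the estimate from all registered trials.
    # All numbers are invented; pooling is a simple sample-size-weighted mean.
    trials = [
        # (effect size, total n, published?)
        (0.45, 300, True),
        (0.38, 250, True),
        (0.05, 280, False),   # negative/questionable trial left unpublished
        (0.10, 220, False),
    ]

    def pooled(subset):
        return sum(g * n for g, n, _ in subset) / sum(n for _, n, _ in subset)

    published_only = [t for t in trials if t[2]]
    print(f"published-only estimate: {pooled(published_only):.2f}")  # ~0.42
    print(f"all-trials estimate:     {pooled(trials):.2f}")          # ~0.25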