ABSTRACT: Objectives:
Randomized clinical trials that enroll patients in critical or emergency care (acute care) settings are challenging because of narrow time windows for recruitment and because many patients are unable to provide informed consent. To assess the extent to which recruitment challenges lead to randomized clinical trial discontinuation, we compared the discontinuation of acute care and nonacute care randomized clinical trials.
Design: Retrospective cohort of 894 randomized clinical trials approved by six institutional review boards in Switzerland, Germany, and Canada between 2000 and 2003.
Setting: Randomized clinical trials involving patients in an acute or nonacute care setting.
Subjects and interventions:
We recorded trial characteristics, self-reported trial discontinuation, and self-reported reasons for discontinuation from protocols, corresponding publications, institutional review board files, and a survey of investigators.
Measurements and main results:
Of 894 randomized clinical trials, 64 (7%) were acute care randomized clinical trials (29 critical care and 35 emergency care). Compared with the 830 nonacute care randomized clinical trials, acute care randomized clinical trials were more frequently discontinued (28 of 64, 44% vs 221 of 830, 27%; p = 0.004). Slow recruitment was the most frequent reason for discontinuation, both in acute care (13 of 64, 20%) and in nonacute care randomized clinical trials (7 of 64, 11%). Logistic regression analyses suggested the acute care setting as an independent risk factor for randomized clinical trial discontinuation specifically as a result of slow recruitment (odds ratio, 4.00; 95% CI, 1.72-9.31) after adjusting for other established risk factors, including nonindustry sponsorship and small sample size.
Conclusions: Acute care randomized clinical trials are more vulnerable to premature discontinuation than nonacute care randomized clinical trials and have an approximately four-fold higher risk of discontinuation due to slow recruitment. These results highlight the need for strategies to reliably prevent and resolve slow patient recruitment in randomized clinical trials conducted in the critical and emergency care setting.
Critical care medicine 10/2015; DOI:10.1097/CCM.0000000000001369 · 6.31 Impact Factor
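The headline comparison in the abstract above (28 of 64 acute care vs 221 of 830 nonacute care trials discontinued) can be checked with a crude, unadjusted odds ratio. The sketch below is only an illustration of that arithmetic; it is not the adjusted odds ratio of 4.00 reported in the abstract, which refers specifically to slow-recruitment discontinuation and comes from a multivariable logistic regression.

```python
import math

def crude_odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table with a Wald 95% CI.

    a/b: events and non-events in group 1; c/d: same in group 2.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Counts reported in the abstract:
# acute care: 28 discontinued of 64 -> 36 not discontinued
# nonacute care: 221 discontinued of 830 -> 609 not discontinued
or_, lo, hi = crude_odds_ratio(28, 36, 221, 609)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> crude OR = 2.14 (95% CI 1.28-3.60)
```

The crude estimate is smaller than the reported adjusted estimate, which is expected: the adjusted analysis conditions on sponsorship and sample size and targets a narrower outcome (discontinuation due to slow recruitment).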
ABSTRACT: Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion sections of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanation and elaboration information to enhance the use of the checklist. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board (http://www.record-statement.org), will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journal editors, and peer reviewers can encourage transparency of research reporting.
PLoS Medicine 10/2015; 12(10):e1001885. DOI:10.1371/journal.pmed.1001885 · 14.43 Impact Factor
ABSTRACT: Objectives:
To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications.
Study design and setting:
We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada.
Results: Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for reasons such as poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued due to early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%).
Conclusion: Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Journal of Clinical Epidemiology 06/2015; DOI:10.1016/j.jclinepi.2015.05.023 · 3.42 Impact Factor
ABSTRACT: Objective: Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research questions, are increasingly used for observational studies, comparative effectiveness research, health services research, and clinical trials. The rapid evolution and availability of routinely collected data for research have brought to light specific issues not addressed by existing reporting guidelines. The aim of the present project was to determine the priorities of stakeholders in order to guide the development of the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement.
Methods: Two modified electronic Delphi surveys were sent to stakeholders. The first determined themes deemed important to include in the RECORD statement and was analyzed using qualitative methods. The second determined quantitative prioritization of the themes based on categorization of manuscript headings. The surveys were followed by a meeting of the RECORD working committee and re-engagement with stakeholders via an online commentary period.
Results: The qualitative survey (76 responses of 123 surveys sent) generated 10 overarching themes and 13 themes derived from existing STROBE categories. Highest-rated overall items for inclusion were: Disease/exposure identification algorithms; Characteristics of the population included in databases; and Characteristics of the data. In the quantitative survey (71 responses of 135 sent), the importance assigned to each of the compiled themes varied depending on the manuscript section to which they were assigned. Following the working committee meeting, online ranking by stakeholders provided feedback and resulted in revision of the final checklist.
Conclusions: The RECORD statement incorporated the suggestions provided by a large, diverse group of stakeholders to create a reporting checklist specific to observational research using routinely collected health data. Our findings point to unique aspects of studies conducted with routinely collected health data and the perceived need for better reporting of methodological issues.
PLoS ONE 05/2015; 10(5):e0125620. DOI:10.1371/journal.pone.0125620 · 3.23 Impact Factor
ABSTRACT: The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries.
Four bibliographic databases were searched for methodological research projects (MRPs) without limitations on publication year, language, or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PI) and a random-effects meta-analysis. Pooled odds ratios (OR) were used to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI between 22% and 72%, and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%-52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90%, and the weighted pooled proportion would be 54.2% (95% CI 42.0%-65.9%, I2 = 98.9%). REC-approved studies with statistically significant results (compared with those without statistically significant results) were more likely to be published (pooled OR 2.8; 95% CI 2.2-3.5). Phase III trials were also more likely to be published than phase II trials (pooled OR 2.0; 95% CI 1.6-2.5). The probability of publication within two years after study completion ranged from 7% to 30%.
A substantial part of the studies approved by RECs or included in trial registries remains unpublished. Due to the large heterogeneity a prediction of the publication probability for a future study is very uncertain. Non-publication of research is not a random process, e.g., it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.
PLoS ONE 12/2014; 9(12):e114023. DOI:10.1371/journal.pone.0114023 · 3.23 Impact Factor
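The random-effects pooling described in the abstract above can be sketched in a few lines. The following is a simplified illustration of DerSimonian-Laird pooling of proportions on the logit scale, using made-up cohort counts rather than the review's data; the actual analysis also derived prediction intervals, which are omitted here.

```python
import math

def pool_proportions_dl(events, totals):
    """Random-effects (DerSimonian-Laird) pooled proportion, logit scale.

    events/totals: per-cohort counts of published studies and cohort sizes.
    Returns the pooled proportion back-transformed from the logit scale.
    """
    logits, variances = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1 - p)))
        variances.append(1 / e + 1 / (n - e))  # variance of logit(p)

    # Fixed-effect weights and heterogeneity statistic Q
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))

    # DerSimonian-Laird between-study variance tau^2
    df = len(logits) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights and pooled estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled_logit = sum(wi * yi for wi, yi in zip(w_star, logits)) / sum(w_star)
    return math.exp(pooled_logit) / (1 + math.exp(pooled_logit))

# Hypothetical published/total counts for four REC-approved cohorts:
pooled = pool_proportions_dl([30, 55, 120, 40], [100, 90, 300, 60])
```

With heterogeneous cohorts, tau^2 is large and the random-effects weights become nearly equal, which is why a single pooled proportion (and any prediction for a future study) carries wide uncertainty, as the abstract notes.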
ABSTRACT: Background
Practicing physicians make numerous medical decisions every day. These are based predominantly on what they learned during training and on personal experience, but nowadays they should also take into account patient preferences and the scientific evidence for the benefit of an intervention. With the goal of the best possible patient care, these three aspects, weighted equally, form the basis of the concept of evidence-based medicine (EBM). Without an understanding of the underlying methodology, however, the risk of misjudging the evidence is high and can result in treatment errors.
This article explains the concept of systematic error (bias) and its significance. To this end, causes and effects of bias, as well as methods for minimizing it, are presented. These contents are intended to convey a deeper understanding that allows a better appraisal of studies and their results in practice, as well as the implementation of their recommendations.
The Cochrane Collaboration's risk-of-bias (RoB) tool is an instrument for assessing the potential for bias in controlled trials. Its strengths include simple application, short processing time, high transparency of the assessment, and an easily understandable graphical presentation of the results. This publication releases the German version of the RoB tool. It is intended to facilitate the use of the instrument beyond expert circles and, through appraisal of the validity of study results, to contribute to decision making on medical questions.
ABSTRACT: Clinical trials serve to evaluate the efficacy and safety of medical interventions or to investigate other unresolved questions, for example diagnostic ones. Not every clinical trial, however, is optimally planned, conducted, and analyzed, and trials are thus exposed to a risk of systematic errors that can distort the study results (bias). To reduce these risks, studies should be conducted according to defined quality criteria. Instruments such as the Cochrane risk-of-bias tool were developed to assess the methodological quality of studies.
Zeitschrift für Evidenz Fortbildung und Qualität im Gesundheitswesen 10/2014; 108(8-9). DOI:10.1016/j.zefq.2014.09.022
ABSTRACT: Although there are more than twenty thousand biomedical journals in the world, research into the work of editors and the publication process in biomedical and health care journals is rare. In December 2012, the Esteve Foundation, a non-profit scientific institution that fosters progress in pharmacotherapy by means of scientific communication and discussion, organized a discussion group of 7 editors and/or experts in peer review and biomedical publishing. They presented findings of past editorial research, discussed the lack of competitive funding schemes and of specialized journals for the dissemination of editorial research, and reported on the great diversity of misconduct and conflict-of-interest policies, as well as on adherence to reporting guidelines. Furthermore, they reported on the reluctance of editors to investigate allegations of misconduct or to increase the level of data sharing in health research. They concluded that if editors are to remain gatekeepers of scientific knowledge, they should reaffirm their focus on the integrity of the scientific record and the completeness of the data they publish. Additionally, more research should be undertaken to understand why many journals are not adhering to editorial standards, and what obstacles editors face when engaging in editorial research.
ABSTRACT: The discontinuation of randomized clinical trials (RCTs) raises ethical concerns and often wastes scarce research resources. The epidemiology of discontinued RCTs, however, remains unclear.
To determine the prevalence, characteristics, and publication history of discontinued RCTs and to investigate factors associated with RCT discontinuation due to poor recruitment and with nonpublication.
Retrospective cohort of RCTs based on archived protocols approved by 6 research ethics committees in Switzerland, Germany, and Canada between 2000 and 2003. We recorded trial characteristics and planned recruitment from included protocols. Last follow-up of RCTs was April 27, 2013.
Completion status, reported reasons for discontinuation, and publication status of RCTs as determined by correspondence with the research ethics committees, literature searches, and investigator surveys.
After a median follow-up of 11.6 years (range, 8.8-12.6 years), 253 of 1017 included RCTs were discontinued (24.9% [95% CI, 22.3%-27.6%]). Only 96 of 253 discontinuations (37.9% [95% CI, 32.0%-44.3%]) were reported to ethics committees. The most frequent reason for discontinuation was poor recruitment (101/1017; 9.9% [95% CI, 8.2%-12.0%]). In multivariable analysis, industry sponsorship vs investigator sponsorship (8.4% vs 26.5%; odds ratio [OR], 0.25 [95% CI, 0.15-0.43]; P < .001) and a larger planned sample size in increments of 100 (-0.7%; OR, 0.96 [95% CI, 0.92-1.00]; P = .04) were associated with lower rates of discontinuation due to poor recruitment. Discontinued trials were more likely to remain unpublished than completed trials (55.1% vs 33.6%; OR, 3.19 [95% CI, 2.29-4.43]; P < .001).
In this sample of trials based on RCT protocols from 6 research ethics committees, discontinuation was common, with poor recruitment being the most frequently reported reason. Greater efforts are needed to ensure the reporting of trial discontinuation to research ethics committees and the publication of results of discontinued trials.
JAMA: The Journal of the American Medical Association 03/2014; 311(10):1045-51. DOI:10.1001/jama.2014.1361 · 35.29 Impact Factor
ABSTRACT: Many clinical studies are ultimately not fully published in peer-reviewed journals. Underreporting of clinical research is wasteful and can result in biased estimates of treatment effect or harm, leading to recommendations that are inappropriate or even dangerous.
We assembled a cohort of clinical studies approved between 2000 and 2002 by the Research Ethics Committee of the University of Freiburg, Germany. Published full articles were searched for in electronic databases, and investigators were contacted. Data on study characteristics were extracted from protocols and corresponding publications. We characterized the cohort, quantified its publication outcome, and compared protocols and publications for selected aspects.
Of 917 approved studies, 807 were started and 110 were not (either locally or at all). Of the started studies, 576 (71%) were completed according to protocol, 128 (16%) were discontinued, and 42 (5%) are still ongoing; for 61 (8%) there was no information about their course. We identified 782 full publications corresponding to 419 of the 807 initiated studies; the publication proportion was 52% (95% CI: 48%-55%). Study design was not significantly associated with subsequent publication. Multicentre status, international collaboration, large sample size, and commercial or non-commercial funding were positively associated with subsequent publication. Commercial funding was mentioned in 203 (48%) protocols and in 205 (49%) of the publications; in most published studies (339; 81%) this information corresponded between protocol and publication. Most studies were published in English (367; 88%); some in German (25; 6%) or both languages (27; 6%). The local investigators were listed as (co-)authors in the publications corresponding to 259 (62%) studies.
Half of the clinical research conducted at a large German university medical centre remains unpublished; future research is built on an incomplete database. Research resources are likely wasted as neither health care professionals nor patients nor policy makers can use the results when making decisions.
PLoS ONE 02/2014; 9(2):e87184. DOI:10.1371/journal.pone.0087184 · 3.23 Impact Factor
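The headline publication proportion in the abstract above follows directly from the reported counts (419 published of 807 initiated studies). A minimal sketch using a normal-approximation (Wald) confidence interval, one common choice for a proportion this far from 0 or 1, reproduces the reported estimate and interval:

```python
import math

def proportion_ci(k, n, z=1.96):
    """Proportion k/n with a normal-approximation (Wald) 95% CI."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

# 419 published of 807 initiated studies, as reported in the abstract
p, lo, hi = proportion_ci(419, 807)
print(f"{p:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # -> 52% (95% CI 48%-55%)
```

The original analysis may have used a different interval method (e.g. Wilson or exact), but at this sample size the alternatives agree to the reported precision.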