Developing a guideline for clinical trial protocol content: Delphi consensus survey

Trials (Impact Factor: 1.73). 09/2012; 13(1). DOI: 10.1186/1745-6215-13-176


Recent evidence has highlighted deficiencies in clinical trial protocols, with implications for the many groups that develop and use them. Existing guidelines for randomized clinical trial (RCT) protocol content vary substantially, and most do not describe a systematic methodology for their development. This study, one of three prespecified steps in the systematic development of a guideline for trial protocol content, conducted a three-round Delphi consensus survey to develop and refine minimum content for RCT protocols.
Panellists were identified using a multistep iterative approach; all met prespecified minimum criteria and represented key stakeholders who develop or use clinical trial protocols. They were asked to rate concepts for their importance in a minimum set of items for RCT protocols. The main outcome measures were the degree of importance (on a scale of 1 to 10, with higher scores indicating greater importance) and the level of consensus for each item. Results were presented as medians, interquartile ranges, counts and percentages.
Ninety-six expert panellists participated in the Delphi consensus survey, including trial investigators, methodologists, research ethics board members, funders, industry representatives, regulators and journal editors. Response rates ranged from 88% to 93% per round. Overall, panellists rated 63 of 88 concepts as being of high importance (median rating of 8 or greater, of which 50 had a 25th percentile rating of 8 or greater), 13 as being of moderate importance (median of 6 or 7) and 12 as being of low importance (median of 5 or less) for minimum trial protocol content; these rating thresholds are illustrated in the sketch following the abstract. General and item-specific comments and subgroup results provided valuable insight for further discussions.
This Delphi process achieved consensus from a large panel of experts representing diverse stakeholder groups on essential content for RCT protocols; it also highlighted areas of divergence. These results, complemented by other empirical research and consensus meetings, are helping to guide the development of a guideline for protocol content.
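
To make the rating thresholds reported above concrete, the short sketch below classifies one concept's panellist ratings by their median and flags strong agreement using the 25th percentile. It is a minimal illustration under stated assumptions, not the authors' analysis code: the threshold of a median of 8 or greater for "high" importance is inferred from the other two categories, and the concept names and ratings are invented for the example.

    from statistics import median, quantiles

    def classify_concept(ratings):
        """Classify one concept's importance from its panellist ratings (1 to 10).

        Assumed thresholds: median >= 8 is high, median 6 or 7 is moderate,
        median <= 5 is low; a 25th percentile >= 8 marks strong agreement
        that the concept is highly important.
        """
        med = median(ratings)
        q1 = quantiles(ratings, n=4, method="inclusive")[0]  # 25th percentile
        if med >= 8:
            level = "high"
        elif med >= 6:
            level = "moderate"
        else:
            level = "low"
        return level, level == "high" and q1 >= 8

    # Hypothetical ratings from seven panellists for two invented protocol items.
    example_ratings = {
        "eligibility criteria": [9, 8, 10, 8, 9, 7, 8],
        "plans for protocol amendments": [6, 7, 5, 6, 8, 6, 7],
    }
    for concept, ratings in example_ratings.items():
        level, strong = classify_concept(ratings)
        note = " (25th percentile of 8 or greater)" if strong else ""
        print(f"{concept}: {level} importance{note}")

With the invented data, the first item is classified as high importance with a 25th percentile of 8, and the second as moderate importance.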

    • "Not only are Levels of Evidence statements themselves based on expert consensus , but so are other tools like the Cochrane Handbook for Systematic Reviews of Interventions (Higgins and Green, 2011), the Consolidated Standards of Reporting Trials (CONSORT) Statement on standards for reporting trials (Begg et al., 1996), and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement on systematic reviews and meta-analyses (Moher et al., 2009). Indeed, the Delphi method has been used to develop criteria for quality assessment of randomized clinical trials (Verhagen et al., 1998), guidelines for clinical trial protocol content (Tetzlaff et al., 2012), standards for reporting interventions used in trials (Hoffmann et al., 2014) and methods used in systematic reviews (Pincus et al., 2011). When used in this way, expert consensus methods are a type of foundational methodology upon which all other methodologies rest. "
    ABSTRACT: The article gives an introductory overview of the use of the Delphi expert consensus method in mental health research. It explains the rationale for using the method, examines the range of uses to which it has been put in mental health research, and describes the stages of carrying out a Delphi study using examples from the literature. To ascertain the range of uses, a systematic search was carried out in PubMed. The article also examines the implications of 'wisdom of crowds' research for how to conduct Delphi studies. The Delphi method is a systematic way of determining expert consensus that is useful for answering questions that are not amenable to experimental and epidemiological methods. The validity of the approach is supported by 'wisdom of crowds' research showing that groups can make good judgements under certain conditions. In mental health research, the Delphi method has been used for making estimations where there is incomplete evidence (e.g. What is the global prevalence of dementia?), making predictions (e.g. What types of interactions with a person who is suicidal will reduce their chance of suicide?), determining collective values (e.g. What areas of research should be given greatest priority?) and defining foundational concepts (e.g. How should we define 'relapse'?). A range of experts have been used in Delphi research, including clinicians, researchers, consumers and caregivers. The Delphi method has a wide range of potential uses in mental health research. © The Royal Australian and New Zealand College of Psychiatrists 2015.
    Australian and New Zealand Journal of Psychiatry 08/2015; DOI:10.1177/0004867415600891 · 3.41 Impact Factor
  • The Lancet 01/2013; 381(9861). DOI:10.1016/S0140-6736(12)62160-6 · 45.22 Impact Factor
  • ABSTRACT: The protocol of a clinical trial serves as the foundation for study planning, conduct, reporting, and appraisal. However, trial protocols and existing protocol guidelines vary greatly in content and quality. This article describes the systematic development and scope of SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) 2013, a guideline for the minimum content of a clinical trial protocol. The 33-item SPIRIT checklist applies to protocols for all clinical trials and focuses on content rather than format. The checklist recommends a full description of what is planned; it does not prescribe how to design or conduct a trial. By providing guidance for key content, the SPIRIT recommendations aim to facilitate the drafting of high-quality protocols. Adherence to SPIRIT would also enhance the transparency and completeness of trial protocols for the benefit of investigators, trial participants, patients, sponsors, funders, research ethics committees or institutional review boards, peer reviewers, journals, trial registries, policymakers, regulators, and other key stakeholders.
    Annals of Internal Medicine 01/2013; 158(3). DOI:10.7326/0003-4819-158-3-201302050-00583 · 17.81 Impact Factor
