
Testing a tool for the classification of study designs in systematic reviews of interventions and exposures showed moderate reliability and low accuracy.

Department of Pediatrics, Alberta Research Center for Health Evidence and the University of Alberta Evidence-based Practice Center, University of Alberta, 11402 University Avenue, Edmonton, Alberta, Canada.
Journal of Clinical Epidemiology (Impact Factor: 5.48). 04/2011; 64(8):861-71. DOI: 10.1016/j.jclinepi.2011.01.010
Source: PubMed

ABSTRACT: To develop and test a study design classification tool.
We contacted relevant organizations and individuals to identify tools used to classify study designs and ranked these using predefined criteria. The highest-ranked tool was a design algorithm developed, but no longer advocated, by the Cochrane Non-Randomized Studies Methods Group; this was modified to include additional study designs and decision points. We developed a reference classification for 30 studies; 6 testers applied the tool to these studies. Interrater reliability (Fleiss' κ) and accuracy against the reference classification were assessed. The tool was then further revised and retested.
Initial reliability was fair both among the testers (κ=0.26) and among the reference standard raters (κ=0.33). Testing after revisions showed improved reliability (κ=0.45, moderate agreement) with improved, but still low, accuracy. The most common disagreements were over whether the study design was experimental (5 of 15 studies) and whether there was a comparison of any kind (4 of 15 studies). Agreement was higher among testers who had completed graduate-level training than among those who had not.
The moderate reliability and low accuracy may reflect a lack of clarity and comprehensiveness in the tool, inadequate reporting of the studies, and variability in tester characteristics. The results may not generalize to all published studies, as the test studies were selected precisely because their design classification had posed challenges for previous reviewers. Application of such a tool should be accompanied by training, pilot testing, and context-specific decision rules.
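As context for the reliability figures above, the sketch below shows how Fleiss' κ is typically computed for a fully crossed design like this one, in which every tester classifies every study. This is a generic Python illustration under that assumption; the rating matrix and category counts are hypothetical, not data from the study.

```python
# Minimal sketch of Fleiss' kappa, the interrater reliability statistic
# reported above. Assumes a fully crossed design: every rater classifies
# every study, with no missing ratings. All numbers are hypothetical.

from typing import Sequence

def fleiss_kappa(counts: Sequence[Sequence[int]]) -> float:
    """counts[i][j] = number of raters assigning study i to category j."""
    N = len(counts)        # number of studies rated
    n = sum(counts[0])     # raters per study (constant by assumption)
    k = len(counts[0])     # number of design categories

    # Observed agreement: mean proportion of agreeing rater pairs per study.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Expected (chance) agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Toy example: 5 studies, 6 testers, 3 candidate design labels.
ratings = [
    [6, 0, 0],   # unanimous
    [3, 3, 0],   # split between two designs
    [1, 4, 1],
    [0, 6, 0],   # unanimous
    [2, 2, 2],   # maximal disagreement
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")  # ~0.31, 'fair' on the Landis-Koch scale
```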

  • ABSTRACT: Objectives: To evaluate (1) how often observational studies are included in comparative effectiveness reviews (CERs); (2) the rationale for including observational studies; (3) how data from observational studies are appraised, analyzed, and graded; and (4) the impact of observational studies on strength of evidence (SOE) and conclusions. Study Design and Setting: Descriptive study of 23 CERs published through the Effective Health Care Program of the U.S. Agency for Healthcare Research and Quality. Results: Authors searched for observational studies in 20 CERs, of which 18 included a median of 11 (interquartile range, 2–31) studies. Sixteen CERs incorporated the observational studies in their SOE assessments. Seventy-eight comparisons from 12 CERs included evidence from both trials and observational studies; observational studies had an impact on SOE and conclusions for 19 (24%) comparisons. There was diversity across the CERs regarding decisions to include observational studies; study designs considered; and approaches used to appraise, synthesize, and grade SOE. Conclusion: Reporting and methods guidance are needed to ensure clarity and consistency in how observational studies are incorporated in CERs. It was not always clear that observational studies added value in light of the additional resources needed to search for, select, appraise, and analyze such studies.
    Journal of Clinical Epidemiology (Impact Factor: 5.48). 09/2014.
  • ABSTRACT: BACKGROUND: Knowledge translation (KT) aims to close the research-practice gap in order to realize and maximize the benefits of research within the practice setting. Previous studies have investigated KT strategies in nursing and medicine; however, the present study is the first systematic review of the effectiveness of a variety of KT interventions in five allied health disciplines: dietetics, occupational therapy, pharmacy, physiotherapy, and speech-language pathology. METHODS: A health research librarian developed and implemented search strategies in eight electronic databases (MEDLINE, CINAHL, ERIC, PASCAL, EMBASE, IPA, Scopus, CENTRAL) using language (English) and date restrictions (1985 to March 2010). Other relevant sources were manually searched. Two reviewers independently screened the titles and abstracts, reviewed full-text articles, performed data extraction, and performed quality assessment. Within each profession, evidence tables were created, grouping and analyzing data by research design, KT strategy, targeted behaviour, and primary outcome. The published descriptions of the KT interventions were compared to the Workgroup for Intervention Development and Evaluation Research (WIDER) Recommendations to Improve the Reporting of the Content of Behaviour Change Interventions. RESULTS: A total of 2,638 articles were located and the titles and abstracts were screened. Of those, 1,172 full-text articles were reviewed and subsequently 32 studies were included in the systematic review. A variety of single (n = 15) and multiple (n = 17) KT interventions were identified, with educational meetings being the predominant KT strategy (n = 11). The majority of primary outcomes were identified as professional/process outcomes (n = 25); however, patient outcomes (n = 4), economic outcomes (n = 2), and multiple primary outcomes (n = 1) were also represented. Generally, the studies were of low methodological quality. Outcome reporting bias was common and precluded clear determination of intervention effectiveness. In the majority of studies, the interventions demonstrated mixed effects on primary outcomes, and only four studies demonstrated statistically significant, positive effects on primary outcomes. None of the studies satisfied the four WIDER Recommendations. CONCLUSIONS: Across five allied health professions, equivocal results, low methodological quality, and outcome reporting bias limited our ability to recommend one KT strategy over another. Further research employing the WIDER Recommendations is needed to inform the development and implementation of effective KT interventions in allied health.
    Implementation Science (Impact Factor: 3.47). 07/2012; 7(70).
  • ABSTRACT: Nonrandomized studies (NRSs) are considered to provide less reliable evidence for intervention effects. Nevertheless, NRSs are included in Cochrane reviews, despite being discouraged, and there has been no evaluation of when and how these designs are used. We therefore conducted an overview of current practice. We included all Cochrane reviews that considered NRSs, performing study selection and data extraction in duplicate. Of the 202 included reviews, 114 (56%) did not cite a reason for including NRSs. The reasons given fell into two major categories: NRSs were included either because randomized controlled trials (RCTs) were wanted but not feasible, lacking, or insufficient alone (N = 81, 92%), or because RCTs were not needed (N = 7, 8%). A range of designs were included, with controlled before-after studies the most common. Most interventions were nonpharmaceutical and the settings nonmedical. For risk of bias assessment, the Cochrane Effective Practice and Organisation of Care Group's checklists were used by most reviewers (38%), whereas others used a variety of checklists and self-constructed tools. Most Cochrane reviews do not justify including NRSs. When they do, most reasons are not in line with Cochrane recommendations. Risk of bias assessment varies across reviews and needs improvement.
    Journal of Clinical Epidemiology (Impact Factor: 5.48). 04/2014.
