
Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale.

Northern Clinical School, Faculty of Medicine, University of Sydney, Australia.
Neuropsychological Rehabilitation (Impact Factor: 2.07). 09/2008; 18(4):385-401. DOI: 10.1080/09602010802009201
Source: PubMed

ABSTRACT: Rating scales that assess the methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating the methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items that reduce the main sources of bias in single-case methodology as stipulated by authorities in the field; these items were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range κ = 0.48 to 1.00). These results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88; 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of the methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability for both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting, and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
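The reliability figures above rest on two standard statistics: the intraclass correlation coefficient (ICC) for agreement on total scores, and kappa for agreement on individual items. As a minimal sketch only, assuming simulated ratings and the ICC(2,1) model of Shrout and Fleiss (the abstract does not specify which ICC variant the authors used, and the function and variable names below are ours, not theirs), both statistics can be computed as follows:

import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_2_1(ratings):
    # ICC(2,1), Shrout & Fleiss: two-way random effects, absolute
    # agreement, single rater. `ratings` is (n_subjects, n_raters).
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
quality = rng.integers(2, 11, size=20)          # 20 simulated reports
scores = np.column_stack([                      # two raters' total scores
    np.clip(quality + rng.integers(-1, 2, size=20), 0, 10)
    for _ in range(2)])
print(f"total-score ICC = {icc_2_1(scores):.2f}")

item_r1 = rng.integers(0, 2, size=20)           # one scale item, scored 0/1
item_r2 = np.where(rng.random(20) < 0.85, item_r1, 1 - item_r1)
print(f"item kappa = {cohen_kappa_score(item_r1, item_r2):.2f}")

On real data one would also report confidence intervals, as the authors do; dedicated packages such as pingouin provide ICC variants together with their confidence intervals.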

  • ABSTRACT: Background: In Howard, Best, and Nickels (2015, "Optimising the design of intervention studies: Critiques and ways forward", Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and to optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which agreed that we were optimising designs, and which raised further points for debate.
    Aphasiology 12/2015; 29(5):619-643. DOI: 10.1080/02687038.2014.1000613 · 1.73 Impact Factor
  • ABSTRACT: Background: There is a growing body of research evaluating interventions for neuropsychological impairments using single-case experimental designs, with a diversity of designs and analyses employed.
    Aphasiology 12/2014; 29(5):526-562. DOI: 10.1080/02687038.2014.985884 · 1.73 Impact Factor
  • ABSTRACT: This study examined how specific guidelines and heuristics have been used to identify methodological rigor associated with single-case research designs, based on quality indicators developed by Horner et al. Specifically, this article describes how literature reviews have applied Horner et al.'s quality indicators and evidence-based criteria. Ten literature reviews were examined to ascertain how literature review teams (a) used the criteria recommended by Horner et al. as meeting the 5-3-20 evidence-based practice (EBP) thresholds (five studies conducted across three different research teams that include a minimum of 20 participants) to assess single-case methodological rigor; and (b) applied the 5-3-20 thresholds to determine whether the independent variables reviewed qualified as potential effective practices. The 10 literature reviews included 120 single-case designs. This study found that 33% of the reviewed single-case designs met Horner et al.'s quality indicator criteria. Three of the literature reviews concluded that examined practices met criteria to qualify as an EBP. Recommendations related to quality indicator criteria and EBP established by the literature review teams, as well as directions for future research, are discussed.
    Psychology in the Schools 10/2014; 52(2). DOI: 10.1002/pits.21801 · 0.72 Impact Factor
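The 5-3-20 thresholds described in the last abstract above amount to a simple aggregate check over a body of studies. A minimal sketch of that check, using a hypothetical Study record and function name of our own rather than code from the review:

from typing import NamedTuple

class Study(NamedTuple):
    research_team: str
    n_participants: int

def meets_5_3_20(studies):
    # Horner et al.'s EBP thresholds as summarised above: at least 5
    # studies, by at least 3 distinct research teams, with at least 20
    # participants in total across the studies.
    return (len(studies) >= 5
            and len({s.research_team for s in studies}) >= 3
            and sum(s.n_participants for s in studies) >= 20)

studies = [Study("Team A", 4), Study("Team A", 5), Study("Team B", 3),
           Study("Team C", 6), Study("Team C", 4)]
print(meets_5_3_20(studies))  # True: 5 studies, 3 teams, 22 participants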