
Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale.

Northern Clinical School, Faculty of Medicine, University of Sydney, Australia.
Neuropsychological Rehabilitation (Impact Factor: 2.07). 09/2008; 18(4):385-401. DOI: 10.1080/09602010802009201
Source: PubMed

ABSTRACT Rating scales that assess methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items to reduce the main sources of bias in single-case methodology as stipulated by authorities in the field, which were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range κ = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
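The abstract summarises agreement using intraclass correlation coefficients (ICC) for total scores and Cohen's kappa for individual items. As a rough illustration of how such figures are obtained, the Python sketch below computes an ICC and a kappa from invented ratings by two hypothetical raters; the data, and the choice of the ICC(2,1) form, are assumptions made for illustration and are not taken from the study.

```python
# Minimal sketch (invented data, not from the SCED reliability study) of the
# two agreement statistics reported in the abstract: an intraclass correlation
# coefficient for total scores and Cohen's kappa for a single dichotomous item.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical SCED total scores (0-10) from two raters for 20 sampled reports.
rater_a = np.array([7, 5, 9, 6, 8, 4, 7, 6, 9, 5, 8, 7, 6, 9, 5, 7, 8, 6, 4, 7])
rater_b = np.array([7, 6, 9, 6, 7, 4, 7, 5, 9, 5, 8, 8, 6, 9, 6, 7, 8, 6, 5, 7])

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    computed from the usual ANOVA mean squares for an n x k rating matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print("ICC(2,1) for total scores:",
      round(icc_2_1(np.column_stack([rater_a, rater_b])), 2))

# Agreement on one hypothetical yes/no scale item is summarised with Cohen's kappa.
item_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0]
item_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0]
print("kappa for one item:", round(cohen_kappa_score(item_a, item_b), 2))
```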

    • "These build on proposals by Kratchowill et al (2010, 2013) but are not used widely in practice and their validity has yet to be tested. Interestingly, while Tate et al. (2008) required statistical analysis, Tate et al. (2013) do not (ROBiNT item 13: Data Analysis), stating, " Controversy remains about whether the appropriate method of analysis in single-case reports is visual or statistical. Nonetheless, 2 points are awarded if systematic visual analysis is used according to steps specified by Kratochwill et al. (2010; 2013), or visual analysis is aided by quasi-statistical techniques, or statistical methods are used where a rationale is provided for their suitability. "
    ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward, Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluation of the effects of intervention. These were based on our experience with intervention research and methodology, and a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather had the aim of stimulating debate and optimising designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which agreed that we were optimising designs, and which raised further points for debate.
    Aphasiology 12/2015; 29(5):619-643. DOI:10.1080/02687038.2014.1000613 · 1.73 Impact Factor
    • "However, restricting the use of this name to this design is misleading. There are single case experimental therapy studies that do not have the features below, but which could also be sensibly termed 'Single Case Experimental Designs' 2 and the term has been used to include many design types(Tate et al., 2008, 2013; Smith, 2012). We too use the term more broadly. "
    ABSTRACT: Background: There is a growing body of research that evaluates interventions for neuropsychological impairments using single-case experimental designs, with a diversity of designs and analyses employed. Aims: This paper has two goals: first, to increase awareness and understanding of the limitations of therapy study designs and statistical techniques and, second, to suggest some designs and statistical techniques likely to produce intervention studies that can inform both theories of therapy and service provision. Main Contribution & Conclusions: We recommend a single-case experimental design that incorporates the following features. First, there should be random allocation of stimuli to treated and control conditions with matching for baseline performance, using relatively large stimulus sets to increase confidence in the data (a sketch of this allocation step is given below). Second, prior to intervention, baseline testing should occur on at least two occasions. Simulations show that termination of the baseline phase should not be contingent on "stability." For intervention, a predetermined number of sessions is required (rather than a performance-determined duration). Finally, treatment effects must be significantly better than expected by chance to be confident that the results reflect change greater than random variation. Appropriate statistical analysis is important: by-item statistical analysis methods are strongly recommended and a methodology is presented using WEighted STatistics (WEST).
    Aphasiology 12/2014; 29(5):526-562. DOI:10.1080/02687038.2014.985884 · 1.73 Impact Factor
    • "For several decades, single-case research methods have been employed to determine the effects of planned interventions across a wide array of disciplines (e.g., psychology, special education, school psychology, physical therapy). For example, Tate et al. (2008) reported that 39% of studies archived in the Psychological Database of Brain Impairment Treatment Efficacy employed a single-case experimental method, which was the most frequently used method. Additionally, Beeson and Robey (2006) found that over the past five decades, 41% of studies examining the effects of interventions in aphasiology utilized single-case research methodology. "
    ABSTRACT: This study examined how specific guidelines and heuristics have been used to identify methodological rigor associated with single-case research designs based on quality indicators developed by Horner et al. Specifically, this article describes how literature reviews have applied Horner et al.'s quality indicators and evidence-based criteria. Ten literature reviews were examined to ascertain how literature review teams (a) used the criteria recommended by Horner et al. as meeting the 5-3-20 evidence-based practice (EBP) thresholds (five studies conducted across three different research teams that include a minimum of 20 participants) to assess single-case methodological rigor; and (b) applied the 5-3-20 thresholds to determine whether the independent variables reviewed qualified as potentially effective practices. The 10 literature reviews included 120 single-case designs. This study found that 33% of the reviewed single-case designs met Horner et al.'s quality indicator criteria. Three of the literature reviews concluded that examined practices met criteria to qualify as an EBP. Recommendations related to quality indicator criteria and EBP established by the literature review teams, as well as directions for future research, are discussed.
    Psychology in the Schools 10/2014; 52(2). DOI:10.1002/pits.21801 · 0.72 Impact Factor
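As noted in the Howard, Best, and Nickels abstract above, one of the recommended design features is random allocation of stimuli to treated and control conditions matched for baseline performance. The sketch below shows one simple way to do this (rank items by baseline score, pair adjacent items, randomly split each pair); the item names, baseline scores, and pairing strategy are illustrative assumptions and do not reproduce the procedure, or the WEST analysis, described in that paper.

```python
# Minimal sketch (assumptions, not code from Howard, Best, & Nickels) of
# allocating stimuli to treated and control conditions matched at baseline:
# rank items by baseline accuracy, pair adjacent items, and randomly assign
# one member of each pair to each condition.
import random

random.seed(1)
# Hypothetical picture-naming items with baseline accuracy over two probes (0-2).
items = {f"item{i:02d}": random.choice([0, 1, 2]) for i in range(1, 41)}

def allocate_matched(items, seed=0):
    """Return (treated, control) lists of item names matched on baseline score."""
    rng = random.Random(seed)
    ranked = sorted(items, key=lambda name: items[name])
    treated, control = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]  # two items with adjacent baseline scores
        rng.shuffle(pair)                  # random assignment within the pair
        treated.append(pair[0])
        control.append(pair[1])
    return treated, control

treated, control = allocate_matched(items)
mean = lambda names: sum(items[n] for n in names) / len(names)
print(f"treated: n={len(treated)}, mean baseline={mean(treated):.2f}")
print(f"control: n={len(control)}, mean baseline={mean(control):.2f}")
```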