Rating the methodological quality of single-subject designs and n-of-1 trials: Introducing the Single-Case Experimental Design (SCED) Scale

Northern Clinical School, Faculty of Medicine, University of Sydney, Australia.
Neuropsychological Rehabilitation (Impact Factor: 1.96). 09/2008; 18(4):385-401. DOI: 10.1080/09602010802009201
Source: PubMed


Rating scales that assess the methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but none exists to assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating the methodological quality of clinical trials developed the scale and participated in the reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items that target the main sources of bias in single-case methodology, as stipulated by authorities in the field; these items were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20 of the 312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range k = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88; 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of the methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability for both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting, and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
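The reliability indices reported above (ICC for total scores, kappa for item-level agreement) are standard computations. As a minimal sketch of how such figures are obtained, the Python below implements a two-way random-effects ICC(2,1) and Cohen's kappa; the scores and variable names are invented for illustration and are not the authors' data.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1) (Shrout & Fleiss).

    ratings: (n_reports, k_raters) array of total scale scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # mean per report
    col_means = ratings.mean(axis=0)  # mean per rater
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def cohen_kappa(a, b, categories=(0, 1)):
    """Cohen's kappa for a single dichotomous item scored by two raters."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = (a == b).mean()
    p_exp = sum((a == c).mean() * (b == c).mean() for c in categories)
    return 1.0 if p_exp == 1.0 else (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical total scores: 6 reports rated by 2 raters (illustration only).
scores = [[8, 9], [5, 5], [10, 10], [7, 6], [4, 5], [9, 9]]
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
print(f"kappa    = {cohen_kappa([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]):.2f}")
```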

Cited in:
    • "These build on proposals by Kratchowill et al (2010, 2013) but are not used widely in practice and their validity has yet to be tested. Interestingly, while Tate et al. (2008) required statistical analysis, Tate et al. (2013) do not (ROBiNT item 13: Data Analysis), stating, " Controversy remains about whether the appropriate method of analysis in single-case reports is visual or statistical. Nonetheless, 2 points are awarded if systematic visual analysis is used according to steps specified by Kratochwill et al. (2010; 2013), or visual analysis is aided by quasi-statistical techniques, or statistical methods are used where a rationale is provided for their suitability. "
    ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward. Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and to optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which agreed that we were optimising designs, and which raised further points for debate.
    Aphasiology 12/2015; 29(5):619-643. DOI:10.1080/02687038.2014.1000613 · 1.53 Impact Factor
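One common quasi-statistical aid to visual analysis, of the kind the excerpt above refers to, is Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009). A minimal sketch, with invented AB-phase data:

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs: the share of (baseline, treatment) pairs in
    which the treatment point exceeds the baseline point (ties count 0.5)."""
    pairs = list(product(baseline, treatment))
    wins = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return wins / len(pairs)

# Hypothetical AB data (illustration only); higher scores = better performance.
baseline = [2, 3, 2, 4, 3]
treatment = [5, 6, 4, 7, 6, 8]
print(f"NAP = {nap(baseline, treatment):.2f}")
```

A NAP of 0.5 indicates chance-level overlap between phases; values near 1.0 indicate that intervention-phase points almost always exceed baseline points.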
    • "A replicated single case design was employed with five participants . This design is recommended when new treatments are developed and evaluated [5] [6]. Single case designs provide an intensive study of the individual, which includes systematic observation, manipulation of variables, repeated measurement before and during the intervention, and mainly visual data analysis . "
    ABSTRACT: Dealing with chronic pain is difficult and affects physiological as well as psychological well-being. Patients with chronic pain often report concurrent emotional problems such as low mood and depressive symptoms. Considering this, treatments need to involve strategies for improving mood and promoting well-being in this group of patients. With the rise of the positive psychology movement, relatively simple intervention strategies to increase positive feelings, cognitions, and behaviours have become available. So far, the evidence for positive psychology techniques comes mainly from studies with healthy participants and from studies with patients expressing emotional problems such as depression or anxiety as their main complaint. This study describes an initial attempt to explore the potential effects of a positive psychology intervention in a small sample of patients suffering from chronic pain.
    Scandinavian Journal of Pain 04/2015; 7:71-79. DOI:10.1016/j.sjpain.2015.01.005
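The repeated measurement and visual analysis described in this excerpt typically rest on a simple phase plot. The sketch below draws one for an AB design; the weekly ratings and labels are assumptions for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical weekly well-being ratings (illustration only):
# 4 baseline observations followed by 8 intervention observations.
baseline = [3, 2, 3, 2]
intervention = [4, 4, 5, 6, 5, 7, 6, 7]
weeks_a = range(1, len(baseline) + 1)
weeks_b = range(len(baseline) + 1, len(baseline) + len(intervention) + 1)

plt.plot(list(weeks_a), baseline, "o-", label="Baseline (A)")
plt.plot(list(weeks_b), intervention, "s-", label="Intervention (B)")
plt.axvline(len(baseline) + 0.5, linestyle="--", color="grey")  # phase change
plt.xlabel("Week")
plt.ylabel("Well-being rating")
plt.title("AB phase plot for visual analysis (hypothetical data)")
plt.legend()
plt.show()
```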
    • "However, restricting the use of this name to this design is misleading. There are single case experimental therapy studies that do not have the features below, but which could also be sensibly termed 'Single Case Experimental Designs' 2 and the term has been used to include many design types(Tate et al., 2008, 2013; Smith, 2012). We too use the term more broadly. "
    ABSTRACT: Background: There is a growing body of research that evaluates interventions for neuropsychological impairments using single-case experimental designs, with a diversity of designs and analyses employed. Aims: This paper has two goals: first, to increase awareness and understanding of the limitations of therapy study designs and statistical techniques and, second, to suggest some designs and statistical techniques likely to produce intervention studies that can inform both theories of therapy and service provision. Main Contribution & Conclusions: We recommend a single-case experimental design that incorporates the following features. First, there should be random allocation of stimuli to treated and control conditions with matching for baseline performance, using relatively large stimulus sets to increase confidence in the data. Second, prior to intervention, baseline testing should occur on at least two occasions. Simulations show that termination of the baseline phase should not be contingent on "stability." For intervention, a predetermined number of sessions is required (rather than a performance-determined duration). Finally, treatment effects must be significantly better than expected by chance to be confident that the results reflect change greater than random variation. Appropriate statistical analysis is important: by-item statistical analysis methods are strongly recommended and a methodology is presented using WEighted STatistics (WEST). (A generic sketch of this by-item logic follows this entry.)
    Aphasiology 12/2014; 29(5):526-562. DOI:10.1080/02687038.2014.985884 · 1.53 Impact Factor
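WEST itself is specified in Howard, Best, and Nickels' paper; the sketch below is not their procedure but a generic illustration of the by-item logic it builds on: each item receives one weighted (here, linear-trend) score across assessment points, and a one-sample t-test across items asks whether the average within-item trend exceeds zero. All data and weights are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical accuracy (0/1) for 20 treated items at 4 assessment points
# (2 baseline tests, 2 post-tests); illustration only, not WEST itself.
rng = np.random.default_rng(0)
scores = rng.binomial(1, [0.3, 0.3, 0.6, 0.7], size=(20, 4)).astype(float)

# Linear-trend contrast across the four time points: each item receives a
# single weighted score, so items (not sessions) are the unit of analysis.
weights = np.array([-1.5, -0.5, 0.5, 1.5])
item_trends = scores @ weights

# One-sample t-test across items: is the mean within-item trend above zero?
t, p = stats.ttest_1samp(item_trends, 0.0)
print(f"mean trend = {item_trends.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```

Because items are the unit of analysis, the test generalises over items rather than over repeated sessions, which is the point of by-item methods.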