Article

Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale.

Northern Clinical School, Faculty of Medicine, University of Sydney, Australia.
Neuropsychological Rehabilitation (Impact Factor: 2.07). 09/2008; 18(4):385-401. DOI: 10.1080/09602010802009201
Source: PubMed

ABSTRACT Rating scales that assess the methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but none exists to assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating the methodological quality of clinical trials developed the scale and participated in the reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items that address the main sources of bias in single-case methodology as stipulated by authorities in the field; these items were then empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20 of the 312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range κ = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88; 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of the methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability for both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting, and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
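The abstract reports agreement as intraclass correlation coefficients (ICCs) for total scores and kappa for individual items, but it does not state which ICC model or software the authors used. As an illustration only, here is a minimal sketch in Python of a one-way random-effects ICC(1,1) computed from one-way ANOVA mean squares; the rater scores are hypothetical, not data from the study:

    import numpy as np

    def icc_oneway(ratings):
        # One-way random-effects ICC(1,1) for an n_targets x n_raters
        # matrix, computed from one-way ANOVA mean squares.
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand_mean = ratings.mean()
        target_means = ratings.mean(axis=1)
        ms_between = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)
        ms_within = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical SCED total scores (0-10) from two raters across five reports.
    scores = [[8, 7], [5, 5], [9, 9], [3, 4], [6, 6]]
    print(f"ICC(1,1) = {icc_oneway(scores):.2f}")

A two-way model that treats raters as random (e.g., ICC(2,1)) may match the study's paired-rater consensus design more closely; statistics libraries such as pingouin expose these variants directly.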

Cited by:
  • ABSTRACT: Background: There is a growing body of research that evaluates interventions for neuropsychological impairments using single-case experimental designs, with a diversity of designs and analyses employed.
    Aphasiology 12/2014; 29(5):526-562. DOI:10.1080/02687038.2014.985884 · 1.73 Impact Factor
  • ABSTRACT: A systematic review of published intervention studies of acquired apraxia of speech (AOS), conducted by an appointed committee of the Academy of Neurological Communication Disorders and Sciences, updating the previous committee's review from 2006. A systematic search of 11 databases identified 215 articles, 26 of which met the inclusion criteria of (1) stating an intention to measure the effects of treatment on AOS and (2) reporting data representing treatment effects for at least one individual stated to have AOS. All studies involved within-participant experimental designs, with sample sizes of 1 to 44 (median = 1). Confidence in diagnosis was rated high to reasonable in 18/26 studies. Most studies (24/26) reported on articulatory-kinematic approaches; two applied rhythm/rate control methods. Six studies had sufficient experimental control for a Class III rating (American Academy of Neurology Clinical Practice Guidelines Process Manual, 2011), with 15 others satisfying all Class III criteria except the use of independent or objective outcome measurement. The most important global clinical conclusion from this review is that the weight of evidence supports a strong effect for both articulatory-kinematic and rate/rhythm approaches to AOS treatment. The quantity of work, experimental rigor, and reporting of diagnostic criteria continue to improve and strengthen confidence in the corpus of research.
    American Journal of Speech-Language Pathology 03/2015; DOI:10.1044/2015_AJSLP-14-0118 · 1.64 Impact Factor
  • ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward, Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and to optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which felt we were optimising designs, and which raised further points for debate.
    Aphasiology 12/2015; 29(5):619-643. DOI:10.1080/02687038.2014.1000613 · 1.73 Impact Factor
