Visual analysis in single case experimental design studies: Brief review and guidelines.

a Department of Special Education , The University of Georgia , Athens , GA , USA.
Neuropsychological Rehabilitation (Impact Factor: 2.01). 07/2013; DOI: 10.1080/09602011.2013.815636
Source: PubMed

ABSTRACT: Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant during a study, with trend, level, and stability of data assessed within and between conditions. Reliable interpretations of the effects of an intervention depend on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED and a step-by-step guide for conducting a visual analysis of graphed data, and to highlight considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and fidelity of implementation of study procedures.
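The trend, level, and stability assessments described above can be sketched as simple per-phase descriptive statistics. This is a minimal illustration, not the paper's prescribed procedure: the function name and the "within 25% of the phase median" stability criterion are assumptions chosen for the example.

```python
def phase_summary(data):
    """Summarise level, trend, and stability of one phase's data,
    as typically inspected during visual analysis of SCED graphs.

    Hypothetical helper; the 25%-of-median stability envelope is an
    illustrative convention, not taken from the article."""
    n = len(data)
    level = sum(data) / n                      # level: phase mean
    xs = range(n)
    x_mean = (n - 1) / 2
    # trend: ordinary least-squares slope of the data on session number
    num = sum((x - x_mean) * (y - level) for x, y in zip(xs, data))
    den = sum((x - x_mean) ** 2 for x in xs)
    trend = num / den if den else 0.0
    # stability: share of points falling within 25% of the phase median
    med = sorted(data)[n // 2]
    envelope = 0.25 * abs(med) if med else 1.0
    stable = sum(abs(y - med) <= envelope for y in data) / n
    return {"level": level, "trend": trend, "stability": stable}
```

Comparing these summaries across adjacent baseline and intervention phases is one way to structure the within- and between-condition judgments the paper describes.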

  • Journal of Applied Behavior Analysis 02/1968; 1(1):91-97. · 1.19 Impact Factor
  •
    ABSTRACT: Controversy exists regarding appropriate methods for summarizing treatment outcomes for single-subject designs. Nonregression- and regression-based methods have been proposed to summarize the efficacy of single-subject interventions, with proponents of both methods arguing for the superiority of their respective approaches. To compare findings for different single-subject effect sizes, 117 articles that targeted the reduction of problematic behaviors in 181 individuals diagnosed with autism were examined. Four effect sizes were calculated for each article: mean baseline reduction (MBLR), percentage of nonoverlapping data (PND), percentage of zero data (PZD), and one regression-based d statistic. Although each effect size indicated that behavioral treatment was effective, moderating variables were detected by the PZD effect size only. Pearson product-moment correlations indicated that the effect sizes differed in their statistical relationships to one another. In the present review, the regression-based d effect size did not improve the understanding of single-subject treatment outcomes when compared to nonregression effect sizes.
    Behavior Modification 04/2004; 28(2):234-46. · 1.70 Impact Factor
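Two of the nonregression effect sizes named in that abstract, PND and PZD, can be sketched for behavior-reduction targets. The definitions used here follow common usage (PND as the share of treatment points below the lowest baseline point; PZD as the share of points at zero from the first zero onward) and are assumptions for illustration, not taken verbatim from the article.

```python
def pnd(baseline, treatment):
    """Percentage of nonoverlapping data for a reduction target:
    share of treatment points falling below the lowest baseline
    point. Illustrative implementation under a common definition."""
    floor = min(baseline)
    return 100.0 * sum(y < floor for y in treatment) / len(treatment)

def pzd(treatment):
    """Percentage of zero data: from the first zero data point
    onward, the share of treatment points that equal zero.
    Returns 0.0 if the behavior never reaches zero."""
    try:
        first = treatment.index(0)
    except ValueError:
        return 0.0
    tail = treatment[first:]
    return 100.0 * tail.count(0) / len(tail)
```

PZD's stricter focus on complete suppression of the behavior is one reason it can detect moderators that overlap-based indices miss.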
  •
    ABSTRACT: Most investigators using single-case experimental designs conduct interobserver agreement (IOA) checks to enhance the credibility of the collected data, and they report the results of those assessments using percentage-of-agreement estimates. An alternative is to graph both observers' records of the measured behavior on the primary study graphs. Such graphing leads to greater transparency and is advocated for five reasons: (a) to make explicit how IOA assessments were distributed across the study, (b) to ensure agreement estimates are reported at the level of the measured behavior of interest rather than a broader observational code, (c) to detect observer drift, (d) to detect the effect of observer expectations, and (e) to put the IOA data in a more suitable context for assessing the internal validity of the study by eliminating the need for an arbitrary agreement criterion.
    Remedial and Special Education 01/2010; 31(4). · 0.68 Impact Factor
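The percentage-of-agreement estimate that the abstract above contrasts with graphing both observers' records can be sketched as an interval-by-interval comparison. This is a minimal sketch of one common IOA formula; the function name and interval-record representation are assumptions.

```python
def interval_ioa(obs1, obs2):
    """Interval-by-interval interobserver agreement: percent of
    observation intervals in which the two observers' records agree.
    Illustrative sketch of the standard percentage-of-agreement
    estimate; records are equal-length lists of interval codes."""
    if len(obs1) != len(obs2):
        raise ValueError("records must cover the same intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)
```

A single summary percentage like this is exactly what hides the distribution of agreement across the study, which is why the article argues for plotting both observers' data series on the primary graphs instead.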
