Visual analysis in single case experimental design studies: Brief review and guidelines

Department of Special Education, The University of Georgia, Athens, GA, USA.
Neuropsychological Rehabilitation (Impact Factor: 1.96). 07/2013; 24(3-4). DOI: 10.1080/09602011.2013.815636
Source: PubMed


Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant throughout a study, and the trend, level, and stability of the data are assessed within and between conditions. Reliable interpretation of intervention effects depends on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED, a step-by-step guide for conducting a visual analysis of graphed data, and a discussion of considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and assessing fidelity of implementation of study procedures.
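The within- and between-condition indices named above (level, trend, stability) can be computed alongside the graphs. Below is a minimal Python sketch for a single phase of session-ordered measurements; the helper name, the least-squares trend, and the "80% of points within 25% of the median" stability envelope are illustrative conventions, not the paper's prescribed method:

```python
from statistics import mean, median

def phase_summary(ys, envelope=0.25, criterion=0.80):
    """Summarise one phase: level (median), trend (OLS slope), stability.

    Hypothetical helper; assumes ys are session-ordered and the median
    is nonzero (the stability envelope is relative to the median).
    """
    xs = range(len(ys))
    level = median(ys)
    # Trend: ordinary least-squares slope of value on session number.
    x_bar, y_bar = mean(xs), mean(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    # Stability: share of points inside +/- (envelope * median) of the median.
    within = sum(1 for y in ys if abs(y - level) <= envelope * level)
    return {"level": level, "slope": slope,
            "stable": within / len(ys) >= criterion}
```

A flat, tightly clustered phase comes out stable with a near-zero slope, while a highly variable phase does not, which mirrors the visual judgement the paper systematises.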

Available from: David L. Gast
    • "While some commentators agree that visual analysis is 'flawed' (Kearns, 2015, this issue), other commentators (e.g., Martin & Kalinyak-Fliszar, 2015, this issue) note that there are techniques that may assist in improving its reliability. Moreover, since we wrote the target article there have been articles that promote a very disciplined approach to visual analysis, including a variety of quasi-statistical evaluation methods (e.g., Brossart, Vannest, Davis, and Patience, 2014; Lane & Gast, 2014, as part of a special issue of Neuropsychological Rehabilitation (volume 24, issue 3-4, 2014) devoted to single case experimental design for rehabilitation). These build on proposals by Kratochwill et al. (2010, 2013) but are not used widely in practice and their validity has yet to be tested."
    ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward, Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These were based on our experience with intervention research and methodology, and a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which agreed that we were optimising designs, and which raised further points for debate.
    Aphasiology 12/2015; 29(5):619-643. DOI:10.1080/02687038.2014.1000613 · 1.53 Impact Factor
    • "Conducting a visual analysis is the first step when evaluating SCRDs in research practices (see Ray, 2015). Several authors have demonstrated that visual analysis typically uses level, variability, trend, overlap, intercept gap, and consistency of data across phases as six indices of change for making judgments about SCRD data (Franklin, Gorman, Beasley, & Allison, 1996; Kratochwill et al., 2013; Lane & Gast, 2013; Vannest, Davis, & Parker, 2013). These six evaluation points are addressed separately and in combination for decision making. "
    ABSTRACT: Single-case research designs have primarily relied on visual analysis for determining treatment effects. However, current foci on evidence-based treatment have given rise to the development of new methods. This article presents descriptions, calculations, strengths and weaknesses, and interpretative guidelines for 5 effect size indices: the percent of nonoverlapping data, the percent of data exceeding the median, improvement rate difference, nonoverlap of all pairs, and Tau-U. © 2015 by the American Counseling Association. All rights reserved.
    10/2015; 93(4). DOI:10.1002/jcad.12038
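Two of the nonoverlap indices listed in this abstract have simple closed forms. A hedged Python sketch, assuming the intervention is expected to increase the behavior (reverse the comparisons for a deceleration target); the function names are illustrative, not from the cited article:

```python
def pnd(baseline, treatment):
    """Percent of nonoverlapping data: percentage of treatment points
    that exceed the highest baseline point."""
    ceiling = max(baseline)
    return 100.0 * sum(t > ceiling for t in treatment) / len(treatment)

def nap(baseline, treatment):
    """Nonoverlap of all pairs: share of all (baseline, treatment) pairs
    in which the treatment point is higher; ties count half."""
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0
                for b in baseline for t in treatment)
    return score / (len(baseline) * len(treatment))
```

NAP ranges from 0 to 1, with 0.5 indicating chance-level separation, whereas PND is reported as a percentage; neither accounts for baseline trend, which is one reason the literature pairs such indices with visual analysis rather than replacing it.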
    • "In the context of this broad assessment of the behavior needed to establish the functional relation between an intervention and the individual's response, the analysis of the data gathered also plays an important role, although such analysis is in no way sufficient as evidence for a functional relation. In the context of SCED, visual analysis has been considered a means for identifying functional relations, although any statement regarding their existence should be grounded in replication across participants, behaviors, or settings (Lane & Gast, 2014). In that sense, we consider the term "design analysis" useful, as it has been explained by Brossart, Vannest, Davis, and Patience (2014)."
    ABSTRACT: In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of what stimuli and reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications include nine data patterns in terms of the presence and type of effect and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in detecting a functional relation only when it is actually present, an option based on projecting split-middle trend and considering data variability as in exploratory data analysis proves to be the best performer for most data patterns. We suggest that the information on whether a functional relation has been demonstrated should be included in meta-analyses. It is also possible to use the inverse of the data-variability measure employed in assessing the functional relation as a weight. We offer easy-to-use code for open-source software that implements some of the quantifications.
    Behavior Modification 08/2014; 38(6):878-913. DOI:10.1177/0145445514545679 · 1.70 Impact Factor
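The projected-baseline comparison this abstract describes can be illustrated with a split-middle trend line. A sketch under stated assumptions: the middle session is dropped when a phase has an odd length (one of several split-middle conventions), the target behavior is expected to accelerate, and the function names are hypothetical:

```python
from statistics import median

def split_middle_line(ys):
    """Split-middle trend: a line through the (median session, median value)
    point of each half of the phase. Sessions are numbered 0..n-1."""
    n, half = len(ys), len(ys) // 2
    x1, y1 = median(range(half)), median(ys[:half])
    x2, y2 = median(range(n - half, n)), median(ys[n - half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1  # slope, intercept

def points_above_projection(baseline, treatment):
    """Count treatment points above the baseline trend projected forward."""
    slope, intercept = split_middle_line(baseline)
    n0 = len(baseline)
    return sum(t > slope * (n0 + i) + intercept
               for i, t in enumerate(treatment))
```

Under a no-effect null, treatment points should fall above and below the projected line about equally often, so a binomial test on this count is one quasi-statistical check of the kind the article evaluates.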