Article

Visual analysis in single case experimental design studies: Brief review and guidelines

Department of Special Education, The University of Georgia, Athens, GA, USA.
Neuropsychological Rehabilitation (Impact Factor: 1.96). 07/2013; 24(3-4). DOI: 10.1080/09602011.2013.815636
Source: PubMed

ABSTRACT

Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant during a study, and the trend, level, and stability of the data are assessed within and between conditions. Reliable interpretation of the effects of an intervention depends on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED and a step-by-step guide for conducting a visual analysis of graphed data, and to highlight considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and of monitoring fidelity of implementation of study procedures.
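The within-phase quantities named above (level, trend, stability) can be illustrated with a short computational sketch. This is a minimal example, not the authors' procedure; the function name, the ±25% stability band, and the 80% criterion are assumptions chosen for illustration (published guidelines vary on these values).

```python
# A minimal sketch, not the authors' procedure: quantities typically
# inspected during visual analysis of a single SCED phase.
from statistics import median

def phase_summary(ys, band=0.25, criterion=0.80):
    """Level (median), crude trend direction, and level stability."""
    level = median(ys)
    # Trend direction: compare medians of the first and second halves.
    half = len(ys) // 2
    first, second = median(ys[:half]), median(ys[-half:])
    if second > first:
        trend = "accelerating"
    elif second < first:
        trend = "decelerating"
    else:
        trend = "flat"
    # Level stability: share of points within +/-25% of the phase median
    # (an illustrative convention; guidelines differ on the exact band).
    lo, hi = level * (1 - band), level * (1 + band)
    pct_within = sum(lo <= y <= hi for y in ys) / len(ys)
    return {"level": level, "trend": trend,
            "pct_within": round(pct_within, 2),
            "stable": pct_within >= criterion}

print(phase_summary([4, 5, 4, 6, 5, 4]))
# {'level': 4.5, 'trend': 'accelerating', 'pct_within': 0.83, 'stable': True}
```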

  • Source
    • "R ) which allows applying 242 the stability envelope to the trend line : ( a ) estimating split - middle trend ( Miller , 1985 ) , ( b ) 243 projecting it into the next phase , and ( c ) constructing an envelope around it . The envelope can be 244 constructed on the basis of the baseline median 2 , so that the lower limit is located 25% of the 245 median below the estimated split - middle trend and the upper limit at the same distance above it 246 ( Lane & Gast , 2014 ) . In case 80% of the data are within those limits , this would indicate trend 247 stability , that is , it would suggest that no change in slope has been produced with the introduction 248 of the intervention . "
    ABSTRACT: Two-phase single-case designs, including baseline evaluation followed by an intervention, represent the most clinically straightforward option for combining professional practice and research. However, unless they are part of a multiple-baseline schedule, such designs do not allow researchers to demonstrate a causal relation between the intervention and the behavior. Although the statistical options reviewed here cannot overcome this methodological limitation, we aim to make practitioners and applied researchers aware of the appropriate options available for extracting maximum information from the data. In the current paper, we suggest that the evaluation of behavioral change should include visual and quantitative analyses, complementing the substantive criteria regarding the practical importance of the behavioral change. Specifically, we emphasize the need to use structured criteria for visual analysis, such as the ones summarized in the What Works Clearinghouse Standards, especially if such criteria are complemented by visual aids, as illustrated here. For quantitative analysis, we focus on the Nonoverlap of All Pairs and the Slope and Level Change procedure, as they offer straightforward information and have shown reasonable performance. An illustration is provided of the use of these three pieces of information: visual, quantitative, and substantive. To make the use of visual and quantitative analysis feasible, open source software is referred to and demonstrated. In order to provide practitioners and applied researchers with a more complete guide, several analytical alternatives are commented on, pointing out the situations (aims, data patterns) for which they are potentially useful.
    Full-text · Article · Jan 2016 · Frontiers in Psychology
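The stability-envelope procedure quoted above is algorithmic enough to sketch in code: (a) fit a split-middle trend to the baseline, (b) project it into the intervention phase, and (c) test whether at least 80% of intervention points fall within ±25% of the baseline median around that line. The sketch below is a minimal illustration using those stated values, not the implementation from the R plug-in or from Lane and Gast (2014); session numbering and function names are assumptions.

```python
# A minimal sketch of the quoted stability-envelope check; the 25% band
# and 80% criterion come from the excerpt above, everything else is an
# illustrative assumption.
from statistics import median

def split_middle(xs, ys):
    """Slope and intercept of the split-middle trend line: a line through
    the median point of each half of the phase (the middle observation is
    dropped for odd-length phases in this sketch)."""
    half = len(xs) // 2
    x1, y1 = median(xs[:half]), median(ys[:half])
    x2, y2 = median(xs[-half:]), median(ys[-half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def trend_is_stable(base_x, base_y, intv_x, intv_y,
                    band=0.25, criterion=0.80):
    """Project the baseline trend into the next phase and test whether
    enough intervention points stay inside the envelope."""
    slope, intercept = split_middle(base_x, base_y)
    half_width = band * median(base_y)   # envelope half-width
    within = sum(abs(y - (slope * x + intercept)) <= half_width
                 for x, y in zip(intv_x, intv_y))
    return within / len(intv_y) >= criterion

# Sessions 1-6 are baseline, 7-12 are intervention.
print(trend_is_stable([1, 2, 3, 4, 5, 6], [4, 5, 4, 6, 5, 6],
                      [7, 8, 9, 10, 11, 12], [6, 7, 6, 7, 8, 7]))
# False: the intervention points leave the projected envelope,
# suggesting a change in slope after the intervention was introduced.
```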
  • Source
    • "While some commentators agree that visual analysis is 'flawed' (Kearns, 2015, this issue), other commentators (e.g., Martin & Kalinyak-Fliszar, 2015, this issue) note that there are techniques that may assist in improving its reliability. Moreover, since we wrote the target article there have been articles that promote a very disciplined approach to visual analysis including a variety of quasi-statistical evaluation methods (e.g., Brossart, Vanest, Davis and Patience, 2014; Lane & Gast, 2014,as part of a special issue of Neuropsychological Rehabilitation (volume 24, issue 3-4, 2014) devoted to single case experimental design for rehabilitation). These build on proposals by Kratchowill et al (2010, 2013) but are not used widely in practice and their validity has yet to be tested. "
    ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward, Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which felt we were optimising designs, and which raised further points for debate.
    Full-text · Article · Dec 2015 · Aphasiology
    • "Conducting a visual analysis is the first step when evaluating SCRDs in research practices (see Ray, 2015). Several authors have demonstrated that visual analysis typically uses level, variability, trend, overlap, intercept gap, and consistency of data across phases as six indices of change for making judgments about SCRD data (Franklin, Gorman, Beasley, & Allison, 1996; Kratochwill et al., 2013; Lane & Gast, 2013; Vannest, Davis, & Parker, 2013). These six evaluation points are addressed separately and in combination for decision making. "
    ABSTRACT: Single-case research designs have primarily relied on visual analysis for determining treatment effects. However, current foci on evidence-based treatment have given rise to the development of new methods. This article presents descriptions, calculations, strengths and weaknesses, and interpretative guidelines for 5 effect size indices: the percent of nonoverlapping data, the percent of data exceeding the median, improvement rate difference, nonoverlap of all pairs, and Tau-U. © 2015 by the American Counseling Association. All rights reserved.
    No preview · Article · Oct 2015
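Of the nonoverlap indices described in this abstract, the Nonoverlap of All Pairs (NAP) has the most compact definition and is easy to sketch: the proportion of all (baseline, intervention) pairs in which the intervention point shows improvement, with ties counted as half. The snippet below is a minimal illustration assuming higher scores indicate improvement; the function name is an assumption.

```python
# A minimal sketch of Nonoverlap of All Pairs (NAP), assuming higher
# scores mean improvement; ties count as half a win.
def nap(baseline, intervention):
    pairs = [(a, b) for a in baseline for b in intervention]
    wins = sum(b > a for a, b in pairs)
    ties = sum(b == a for a, b in pairs)
    return (wins + 0.5 * ties) / len(pairs)

print(nap([2, 3, 2, 4], [5, 6, 4, 7]))  # (15 + 0.5) / 16 ~= 0.97
```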