Article

Visual analysis in single case experimental design studies: Brief review and guidelines

Department of Special Education, The University of Georgia, Athens, GA, USA.
Neuropsychological Rehabilitation (Impact Factor: 2.07). 07/2013; 24(3-4). DOI: 10.1080/09602011.2013.815636
Source: PubMed

ABSTRACT: Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant during a study, with the trend, level, and stability of the data assessed within and between conditions. Reliable interpretation of the effects of an intervention depends on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED and a step-by-step guide to conducting a visual analysis of graphed data, and to highlight considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and assessing the fidelity of implementation of study procedures.
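To make these three features concrete, the following minimal Python sketch (illustrative only, not the authors' procedure; the function names, the split-middle trend estimate, and the 25% stability band are assumptions, the band reflecting one common convention) summarises level, trend, and stability for a single phase:

from statistics import median

def split_middle_slope(scores):
    """Trend via a split-middle estimate: slope between the medians
    of the first and second halves of the phase."""
    half = len(scores) // 2
    x1 = (half - 1) / 2                    # median session index, first half
    x2 = len(scores) - 1 - (half - 1) / 2  # median session index, second half
    return (median(scores[-half:]) - median(scores[:half])) / (x2 - x1)

def describe_phase(scores, band=0.25):
    """Level (median), trend (split-middle slope), and stability
    (proportion of points within +/- band * level of the level line)."""
    level = median(scores)
    lo, hi = level * (1 - band), level * (1 + band)
    stability = sum(lo <= s <= hi for s in scores) / len(scores)
    return {"level": level, "trend": split_middle_slope(scores), "stability": stability}

baseline = [2, 3, 2, 4, 3]          # hypothetical A-phase data
intervention = [6, 7, 9, 8, 10, 9]  # hypothetical B-phase data
print(describe_phase(baseline))
print(describe_phase(intervention))

Summarising each phase this way and comparing adjacent conditions (change in level, continuation of trend, overlap of data ranges) mirrors the within- and between-condition inspection described above.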

Related publications:

  • ABSTRACT: Background: In Howard, Best, and Nickels (2015, Optimising the design of intervention studies: Critiques and ways forward, Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluating the effects of intervention. These ideas were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field); rather, they aimed to stimulate debate and to optimise future designs. The paper achieved the first aim: it received a set of varied commentaries, not all of which agreed that our proposals optimised designs, and which raised further points for debate.
    Aphasiology (Impact Factor: 1.73). 12/2015; 29(5):619-643. DOI: 10.1080/02687038.2014.1000613
  • ABSTRACT: A-B-A-B designs are among the most frequently used single-subject research designs. These designs allow researchers to determine the effectiveness of a given intervention through continuous and repeated measurement of a specific behavior across alternated and rigorously controlled baseline (A) and intervention or treatment (B) conditions. Using a simple design in which the subject serves as his or her own control, the researcher can compare information within and across conditions, examining whether the treatment causes changes in the subject's behavior (a toy phase-comparison sketch follows this list). This paper aims to describe
  • ABSTRACT: In the context of the evidence-based practices movement, the emphasis on computing effect sizes and combining them via meta-analysis does not preclude the demonstration of functional relations. For the latter aim, we propose augmenting visual analysis to add consistency to decisions about the existence of a functional relation, without losing sight of the need for a methodological evaluation of which stimuli and which reinforcement or punishment are used to control the behavior. Four options for quantification are reviewed, illustrated, and tested with simulated data. These quantifications include comparing the projected baseline with the actual treatment measurements, on the basis of either parametric or nonparametric statistics. The simulated data used to test the quantifications cover nine data patterns, varying the presence and type of effect, and comprise ABAB and multiple-baseline designs. Although none of the techniques is completely flawless in detecting a functional relation only when one is present, an option based on projecting the split-middle trend and allowing for data variability, as in exploratory data analysis, proves to be the best performer for most data patterns (a rough sketch of this option follows this list). We suggest that information on whether a functional relation has been demonstrated should be included in meta-analyses; the inverse of the data-variability measure used in the quantification can also serve as a meta-analytic weight. We offer easy-to-use code for open-source software implementing some of the quantifications.
    Behavior Modification (Impact Factor: 1.70). 08/2014; 38(6):878-913. DOI: 10.1177/0145445514545679
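
The within- and across-condition comparison described in the A-B-A-B abstract above can be illustrated with a toy sketch; the data and the use of phase medians as the level estimate are assumptions for demonstration, not material from the paper:

from statistics import median

phases = {  # hypothetical session data for each condition
    "A1": [5, 4, 5, 6, 5],
    "B1": [8, 9, 9, 10, 9],
    "A2": [6, 5, 5, 4, 5],
    "B2": [9, 10, 11, 10, 10],
}

levels = {name: median(data) for name, data in phases.items()}
print("Phase levels:", levels)

# Level change at each phase transition; an effect that appears at A1->B1,
# reverses at B1->A2, and replicates at A2->B2 supports a functional relation.
names = list(phases)
for prev, nxt in zip(names, names[1:]):
    print(f"{prev} -> {nxt}: level change = {levels[nxt] - levels[prev]:+}")

The best-performing quantification in the last abstract, projecting a split-middle baseline trend and judging treatment points against a variability band, might look roughly like the following; the 1.5 * IQR band and all data are illustrative assumptions, not the authors' exact rule or their published code:

from statistics import median

def split_middle_line(scores):
    """Fit a split-middle trend line (slope, intercept) to one phase."""
    half = len(scores) // 2
    x1 = (half - 1) / 2
    x2 = len(scores) - 1 - (half - 1) / 2
    y1, y2 = median(scores[:half]), median(scores[-half:])
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def iqr(scores):
    """Rough interquartile range: median of the top half minus
    median of the bottom half."""
    s = sorted(scores)
    half = len(s) // 2
    return median(s[-half:]) - median(s[:half])

baseline = [3, 4, 3, 5, 4, 4]    # hypothetical A-phase data
treatment = [6, 7, 8, 8, 9, 10]  # hypothetical B-phase data

slope, intercept = split_middle_line(baseline)
band = 1.5 * iqr(baseline)  # assumed EDA-style variability band
outside = sum(
    abs(y - (slope * (len(baseline) + i) + intercept)) > band
    for i, y in enumerate(treatment)
)
print(f"{outside}/{len(treatment)} treatment points fall outside the projected band")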
