What counts as evidence for working memory training? Problems with correlated gains and dichotomization

University of Maryland, College Park, MD, USA.
Psychonomic Bulletin & Review (Impact Factor: 2.99). 12/2013; 21(3). DOI: 10.3758/s13423-013-0560-7
Source: PubMed


The question of whether computerized cognitive training leads to generalized improvements of intellectual abilities has been a popular, yet contentious, topic within both the psychological and neurocognitive literatures. Evidence for the effective transfer of cognitive training to nontrained measures of cognitive abilities is mixed, with some studies showing apparent successful transfer, while others have failed to obtain this effect. At the same time, several authors have made claims about both successful and unsuccessful transfer effects on the basis of a form of responder analysis, an analysis technique that shows that those who gain the most on training show the greatest gains on transfer tasks. Through a series of Monte Carlo experiments and mathematical analyses, we demonstrate that the apparent transfer effects observed through responder analysis are illusory and are independent of the effectiveness of cognitive training. We argue that responder analysis can be used neither to support nor to refute hypotheses related to whether cognitive training is a useful intervention to obtain generalized cognitive benefits. We end by discussing several proposed alternative analysis techniques that incorporate training gain scores and argue that none of these methods are appropriate for testing hypotheses regarding the effectiveness of cognitive training.
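The mechanism behind the illusory transfer effect can be sketched with a small Monte Carlo simulation (a toy model under assumed parameters, not the authors' actual simulation code): if observed scores on the trained and transfer tasks share occasion-specific error at each test session, then gain scores on the two tasks correlate, and a median split on training gain ("responders" vs. "non-responders") shows apparently larger transfer gains for responders, even though training here has no effect whatsoever.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True ability is fixed: training has NO effect in this simulation.
ability = rng.normal(0, 1, n)
# Occasion-specific "state" noise shared by both tasks within a session.
state_pre = rng.normal(0, 1, n)
state_post = rng.normal(0, 1, n)

def observe(state):
    # Observed score = stable ability + session state + task-specific error.
    return ability + state + rng.normal(0, 1, n)

train_pre, train_post = observe(state_pre), observe(state_post)
transfer_pre, transfer_post = observe(state_pre), observe(state_post)

train_gain = train_post - train_pre
transfer_gain = transfer_post - transfer_pre

# "Responder analysis": median-split participants on their training gain.
responders = train_gain > np.median(train_gain)
diff = transfer_gain[responders].mean() - transfer_gain[~responders].mean()
r = np.corrcoef(train_gain, transfer_gain)[0, 1]
print(f"responder vs. non-responder transfer-gain difference: {diff:.2f}")
print(f"gain-gain correlation: {r:.2f}")
```

Under this model the gain scores correlate (here r ≈ 0.5, since the shared state difference contributes half of each gain's variance), so "responders" show a substantial apparent transfer advantage that is entirely an artifact of correlated measurement error.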

Available from: Jeffrey Stephen Chrabaszcz, Apr 29, 2014
  • Source
    ABSTRACT: Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers' statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.
    Psychonomic Bulletin & Review 03/2014; 21(2). DOI:10.3758/s13423-014-0607-4 · 2.99 Impact Factor
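The optional-stopping problem described in the abstract above can be illustrated with a short simulation (a sketch under simple assumptions: a z-test approximation to repeated one-sample tests, and an arbitrary cap on sample size; the guarantee of eventual significance holds only with unbounded sampling). Under a true null, testing after every new observation and stopping at the first nominally significant result inflates the false-positive rate far above the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(1)

def peeks_until_significant(max_n=300, start=10):
    """Draw from a true null, test after EVERY new observation,
    and stop at the first nominally significant result."""
    x = rng.normal(0, 1, max_n)
    for n in range(start, max_n + 1):
        sample = x[:n]
        # z-test approximation to a one-sample t-test of mean = 0
        z = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
        if abs(z) > 1.96:
            return True   # "significant" despite a true null
    return False

runs = 1000
hits = sum(peeks_until_significant() for _ in range(runs))
false_positive_rate = hits / runs
print(f"false-positive rate with optional stopping: {false_positive_rate:.0%}")
```

Because adjacent tests are highly correlated, the inflation grows slowly with the number of peeks, but even this modest cap yields a rate several times the nominal level.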
  • Source

    Frontiers in Systems Neuroscience 03/2014; 8:34. DOI:10.3389/fnsys.2014.00034
  • Source
    ABSTRACT: Training interventions for older adults are designed to remediate performance on trained tasks and to generalize, or transfer, to untrained tasks. Evidence for transfer is typically based on the trained group showing greater improvement than controls on untrained tasks, or on a correlation between gains on training and transfer tasks. However, this ignores correlational relationships between trained and untrained tasks that exist before training. By accounting for crossed (trained and untrained), lagged (pre-training and post-training), and cross-lagged relationships between trained and untrained scores in structural equation models, the training-transfer gain relationship can be estimated independently. Transfer is confirmed only if the gain correlation is significant for trained, but not control, participants. Modeling data from the Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study (Smith et al., 2009), transfer from speeded auditory discrimination and syllable span to list memory, text memory, and working memory was demonstrated in 487 adults aged 65-93. Adjusting for effects of age, sex, and education on pretest scores and on change did not alter this result. The overlap of the training measures with the transfer measures was also investigated, to evaluate the hypothesis that gains in a nonverbal speeded auditory discrimination task would be associated with gains on fewer transfer tasks than gains in a verbal working memory task. Gains in speeded processing were associated with gains on one list memory measure, whereas syllable span gains were associated with improvement in difficult list recall, story recall, and working memory factor scores. These findings confirm that greater overlap with task demands was associated with gains on more of the assessed tasks, suggesting that transfer effects are related to task overlap in multimodal training.
    Frontiers in Human Neuroscience 08/2014; 8:617. DOI:10.3389/fnhum.2014.00617 · 3.63 Impact Factor
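The caution above about pre-existing correlations can be illustrated with a toy simulation (an assumption-laden sketch, not the IMPACT data or the structural equation models the abstract describes): when trained and untrained tasks share occasion-specific error at pretest and posttest, gain scores correlate just as strongly in a control group as in a group whose treatment genuinely improves the trained task, so a gain-gain correlation within the trained group alone cannot establish transfer.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

def gain_gain_correlation(training_effect):
    ability = rng.normal(0, 1, n)
    s_pre = rng.normal(0, 1, n)    # occasion-specific state at pretest
    s_post = rng.normal(0, 1, n)   # occasion-specific state at posttest
    trained_pre = ability + s_pre + rng.normal(0, 1, n)
    trained_post = ability + s_post + rng.normal(0, 1, n) + training_effect
    untrained_pre = ability + s_pre + rng.normal(0, 1, n)
    untrained_post = ability + s_post + rng.normal(0, 1, n)  # never any transfer
    gains_trained = trained_post - trained_pre
    gains_untrained = untrained_post - untrained_pre
    return np.corrcoef(gains_trained, gains_untrained)[0, 1]

r_control = gain_gain_correlation(0.0)   # no treatment at all
r_training = gain_gain_correlation(1.0)  # treatment improves the trained task only
print(f"control:  gain-gain r = {r_control:.2f}")
print(f"training: gain-gain r = {r_training:.2f}")
```

Both correlations come out near 0.5 because a constant treatment effect shifts the trained gains without changing their covariance with the untrained gains; only a design that models the pre-existing crossed and lagged relationships, as the abstract proposes, can separate genuine transfer from this baseline correlation.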