The use of control groups in artificial grammar learning.

Department of Psychology, University of Bern, Bern, Switzerland.
The Quarterly Journal of Experimental Psychology A (Impact Factor: 2.45). 02/2003; 56(1):97-115. DOI: 10.1080/02724980244000297
Source: PubMed

ABSTRACT Experimenters assume that participants in an experimental group have learned an artificial grammar if they classify test items with significantly higher accuracy than does a control group without training. The validity of such a comparison, however, depends on an additivity assumption: learning is superimposed on the action of non-specific variables (for example, repetitions of letters), which are assumed to modulate the performance of the experimental group and the control group to the same extent. In two experiments we were able to show that this additivity assumption does not hold. Grammaticality classifications in control groups without training (Experiments 1 and 2) depended on non-specific features. There were no such biases in the experimental groups. Control groups with training on randomized strings (Experiment 2) showed fewer biases than did control groups without training. Furthermore, we reanalysed published research and demonstrated that earlier experiments using control groups without training had produced similar biases in control-group performance, bolstering the finding that using control groups without training is methodologically unsound.

  • Source
    ABSTRACT: Humans have remarkable statistical learning abilities for verbal speech-like materials and for nonverbal music-like materials. Statistical learning has been shown with artificial languages (AL) that consist of the concatenation of nonsense word-like units into a continuous stream. These ALs contain no cues to unit boundaries other than the transitional probabilities between events, which are high within a unit and low between units. Most AL studies have used units of regular lengths. In the present study, the ALs were based on the same statistical structures but differed in unit length regularity (i.e., whether they were made out of units of regular vs. irregular lengths) and in materials (i.e., syllables vs. musical timbres), to allow us to investigate the influence of unit length regularity on domain-general statistical learning. In addition to better performance for verbal than for nonverbal materials, the findings revealed an effect of unit length regularity, with better performance for languages with regular- (vs. irregular-) length units. This unit length regularity effect suggests the influence of dynamic attentional processes (as proposed by the dynamic attending theory; Large & Jones (Psychological Review 106: 119-159, 1999)) on domain-general statistical learning.
    Psychonomic Bulletin & Review, 08/2012 (Impact Factor: 2.99)
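
    The statistical structure this abstract describes can be made concrete with a short sketch: transitional probabilities computed over a continuous stream are high within a unit and low at unit boundaries. The unit inventory and stream length below are hypothetical, chosen only to illustrate the within/between contrast, not taken from the study's materials.

    ```python
    # Sketch (illustrative, not the study's materials): transitional
    # probabilities P(next | current) are high within a "word" of the
    # artificial language and low across word boundaries.
    import random
    from collections import Counter

    def transitional_probabilities(stream):
        """P(next | current) for each adjacent pair in a continuous stream."""
        pair_counts = Counter(zip(stream, stream[1:]))
        first_counts = Counter(stream[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    random.seed(0)
    # Toy artificial language: two regular-length (tri-syllabic) units
    # concatenated at random into one continuous stream.
    units = [("tu", "pi", "ro"), ("go", "la", "bu")]
    stream = [syll for _ in range(200) for syll in random.choice(units)]

    tps = transitional_probabilities(stream)
    within = tps[("tu", "pi")]   # within-unit transition: deterministic, TP = 1.0
    across = tps[("ro", "go")]   # boundary transition: split across unit choices
    ```

    Only the transitional probabilities distinguish the two cases; there are no pauses or other boundary cues in the stream, which is the point of the segmentation paradigm.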
  • Source
    ABSTRACT: Complexity has been shown to affect performance on artificial grammar learning (AGL) tasks (categorization of test items as grammatical/ungrammatical according to the implicitly trained grammar rules). However, previously published AGL experiments did not utilize consistent measures to investigate the comprehensive effect of grammar complexity on task performance. The present study focused on computerizing Bollt and Jones's (2000) technique of calculating topological entropy (TE), a quantitative measure of AGL charts' complexity, with the aim of examining associations between grammar systems' TE and learners' AGL task performance. We surveyed the literature and identified 56 previous AGL experiments based on 10 different grammars that met the sampling criteria. Using the automated matrix-lift-action method, we assigned a TE value for each of these 10 previously used AGL systems and examined its correlation with learners' task performance. The meta-regression analysis showed a significant correlation, demonstrating that the complexity effect transcended the different settings and conditions in which the categorization task was performed. The results reinforced the importance of using this new automated tool to uniformly measure grammar systems' complexity when experimenting with and evaluating the findings of AGL studies.
    Frontiers in Psychology, 01/2014; 5:1084 (Impact Factor: 2.80)
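
    The complexity measure this abstract relies on has a compact definition: the topological entropy of a finite-state grammar is the natural logarithm of the largest eigenvalue of its 0/1 transition (adjacency) matrix, which is the quantity Bollt and Jones's technique computes. A minimal sketch, using power iteration on a hypothetical two-state matrix rather than any of the 10 surveyed grammars:

    ```python
    # Topological entropy TE = ln(lambda_max) of a grammar's 0/1 transition
    # matrix. The matrix below is an illustrative example, not one of the
    # grammars analysed in the study.
    import math

    def spectral_radius(matrix, iterations=200):
        """Largest eigenvalue of a non-negative matrix, via power iteration."""
        n = len(matrix)
        v = [1.0] * n
        lam = 1.0
        for _ in range(iterations):
            w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = max(w)                 # dominant-eigenvalue estimate
            v = [x / lam for x in w]     # renormalize so max(v) == 1
        return lam

    def topological_entropy(adjacency):
        return math.log(spectral_radius(adjacency))

    # Full shift on two symbols: every transition allowed, so TE = ln 2.
    full_shift = [[1, 1], [1, 1]]
    print(round(topological_entropy(full_shift), 4))  # 0.6931
    ```

    Grammars that permit more transitions have a larger dominant eigenvalue and hence higher TE, which is why the measure tracks how hard the grammaticality categorization is.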
  • Source
    ABSTRACT: Humans rapidly learn complex structures in many domains. Some findings of above-chance performance by untrained control groups in artificial grammar learning studies raise the question of the extent to which learning can occur in an untrained, unsupervised testing situation containing partially correct and incorrect structures. Computational modelling simulations explore whether an unsupervised online learning effect is theoretically plausible in artificial grammar learning. Symbolic n-gram models and simple recurrent network models were evaluated using a large free parameter space and a novel evaluation framework, which models the human experimental situation through alternating evaluation (in terms of forced binary grammaticality judgments) and subsequent learning of the same stimulus. Results indicate a strong online learning effect for n-gram models and a weaker effect for simple recurrent network models. Model performance improves slightly once the window of accessible past responses for the grammaticality decision process is limited. Results suggest that online learning is possible when ungrammatical structures share grammatical chunks to a large extent. Associative chunk strength for grammatical and ungrammatical sequences is found to predict both chance and above-chance performance for human and computational data.
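
    The alternating judge-then-learn loop described above can be sketched with a symbolic bigram model. Everything here (class name, counts-based chunk-strength score, the threshold) is an illustrative assumption, not the paper's implementation; the point is only that each test string updates the model after being judged, so exposure during testing shifts later judgments.

    ```python
    # Hedged sketch of unsupervised online learning with a bigram (n = 2)
    # model: judge a string's grammaticality from associative chunk
    # strength, then learn from that same string. Names and the threshold
    # are illustrative, not the paper's.
    from collections import Counter

    class BigramJudge:
        def __init__(self):
            self.counts = Counter()   # bigram -> exposure count

        def chunk_strength(self, string):
            """Mean familiarity of the string's bigrams (associative chunk strength)."""
            bigrams = [string[i:i + 2] for i in range(len(string) - 1)]
            return sum(self.counts[b] for b in bigrams) / len(bigrams)

        def judge_then_learn(self, string, threshold=1.0):
            judgment = self.chunk_strength(string) >= threshold  # forced binary choice
            for i in range(len(string) - 1):                     # online update: learn
                self.counts[string[i:i + 2]] += 1                # from the same stimulus
            return judgment

    model = BigramJudge()
    for s in ["XVXS", "XVS", "VXVS"]:        # hypothetical grammatical strings
        model.judge_then_learn(s)

    # An unseen string sharing chunks (XV, VX, VS) is endorsed as grammatical.
    print(model.judge_then_learn("XVXVS"))   # True
    ```

    An ungrammatical string built from the same chunks would likewise accumulate strength over repeated test trials, which is the mechanism the abstract proposes for above-chance control-group performance.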

Full-text (2 sources), available from Jun 5, 2014.