The lexical bias effect is modulated by context, but the standard monitoring account doesn't fly: Related beply to Baars, Motley, and MacKay (1975)

University of Antwerp, Belgium
Journal of Memory and Language (Impact Factor: 4.24). 01/2005; 52(1):58-70. DOI: 10.1016/j.jml.2004.07.006
The lexical bias effect is the tendency for phonological speech errors to result in words more often than in nonwords. This effect has been accounted for by postulating feedback from sublexical to lexical representations, but also by assuming that the self-monitor covertly repairs more nonword errors than word errors. The only evidence that appears to exclusively support a monitoring account is Baars, Motley, and MacKay’s (1975) demonstration that the lexical bias is modulated by context: There was lexical bias in a mixed context of words and nonwords, but not in a pure nonword context. However, there are methodological problems with that experiment and theoretical problems with its interpretation. Additionally, a recent study failed to replicate contextual modulation (Humphreys, 2002). We therefore conducted two production experiments that solved the methodological problems. Both experiments showed there is indeed contextual modulation of the lexical bias effect. A control perception experiment excluded the possibility that the comprehension component of the task contributed to the results. In contrast to Baars et al., the production experiments suggested that lexical errors are suppressed in a nonword context. This supports a new account by which there is both feedback and self-monitoring, but in which the self-monitor sets its criteria adaptively as a function of context.

Available from: Martin Corley
    • "And even when speech is only produced internally, production errors are still reported (Oppenheim and Dell, 2008). Such pre-articulatory monitoring might affect patterns of speech errors, as shown in studies where participants produce fewer word slips when this slip would result in a taboo utterance or a nonsense word (Baars et al., 1975; Motley et al., 1982; Hartsuiker et al., 2005; Nooteboom and Quené, 2008; Dhooge and Hartsuiker, 2012). In sum, there appears to be an external monitoring channel that monitors speech after articulation, and an internal monitoring channel that monitors speech before articulation. "
    ABSTRACT: The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception however lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
    Full-text · Article · Dec 2013 · Frontiers in Human Neuroscience
    • "However, in the pseudoword context, mostly pseudowords have to be named. Then, either a non-lexicality criterion (Hartsuiker et al., 2005) […] In total, the experiment lasted about 45 minutes. "
    ABSTRACT: Current views of lexical selection in language production differ in whether they assume lexical selection by competition or not. To account for recent data with the picture-word interference (PWI) task, both views need to be supplemented with assumptions about the control processes that block distractor naming. In this paper, we propose that such control is achieved by the verbal self-monitor. If monitoring is involved in the PWI task, performance in this task should be affected by variables that influence monitoring such as lexicality, lexicality of context, and time pressure. Indeed, pictures were named more quickly when the distractor was a pseudoword than a word (Experiment 1), which reversed in a context of pseudoword items (Experiment 3). Additionally, under time pressure, participants frequently named the distractor instead of the picture, suggesting that the monitor failed to exclude the distractor response. Such errors occurred more often with word than pseudoword distractors (Experiment 2); however, the effect flipped around in a pseudoword context (Experiment 4). Our findings argue for the role of the monitoring system in lexical selection. Implications for competitive and non-competitive models are discussed.
    Full-text · Article · Jan 2012 · Journal of Memory and Language
    • "The method used was basically the same as the one applied by Baars et al. [1]: Subjects were to read silently Dutch equivalents of word pairs like DOVE BALL, DEER BACK, DARK BONE, BARN DOOR, presented one word pair at a time, until a prompt told them to speak aloud the last word pair seen. However, there was no white noise applied to the ears of the subjects as in [1] and [7]. The reason white noise was not applied is that this would very likely make self-repairs of completed speech errors in overt speech rather scarce. "

    Full-text · Article ·