The lexical bias effect is modulated by context, but the standard monitoring account doesn’t fly: Related beply to Baars et al. (1975)

University of Antwerp, Belgium
Journal of Memory and Language (Impact Factor: 4.24). 01/2005; 52(1):58-70. DOI: 10.1016/j.jml.2004.07.006

ABSTRACT The lexical bias effect is the tendency for phonological speech errors to result in words more often than in nonwords. This effect has been accounted for by postulating feedback from sublexical to lexical representations, but also by assuming that the self-monitor covertly repairs more nonword errors than word errors. The only evidence that appears to exclusively support a monitoring account is Baars, Motley, and MacKay’s (1975) demonstration that the lexical bias is modulated by context: There was lexical bias in a mixed context of words and nonwords, but not in a pure nonword context. However, there are methodological problems with that experiment and theoretical problems with its interpretation. Additionally, a recent study failed to replicate contextual modulation (Humphreys, 2002). We therefore conducted two production experiments that solved the methodological problems. Both experiments showed there is indeed contextual modulation of the lexical bias effect. A control perception experiment excluded the possibility that the comprehension component of the task contributed to the results. In contrast to Baars et al., the production experiments suggested that lexical errors are suppressed in a nonword context. This supports a new account by which there is both feedback and self-monitoring, but in which the self-monitor sets its criteria adaptively as a function of context.

Available from: Martin Corley, Sep 26, 2015
    • "And even when speech is only produced internally, production errors are still reported (Oppenheim and Dell, 2008). Such pre-articulatory monitoring might affect patterns of speech errors, as shown in studies where participants produce fewer word slips when this slip would result in a taboo utterance or a nonsense word (Baars et al., 1975; Motley et al., 1982; Hartsuiker et al., 2005; Nooteboom and Quené, 2008; Dhooge and Hartsuiker, 2012). In sum, there appears to be an external monitoring channel that monitors speech after articulation, and an internal monitoring channel that monitors speech before articulation. "
    ABSTRACT: The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants has so far been lacking. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception, however, lasted longer and were much larger in magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
    Frontiers in Human Neuroscience 12/2013; 7:818. DOI:10.3389/fnhum.2013.00818 · 2.99 Impact Factor
    • "The method used was basically the same as the one applied by Baars et al. [1]: Subjects were to read silently Dutch equivalents of word pairs like DOVE BALL, DEER BACK, DARK BONE, BARN DOOR, presented one word pair at the time, until a prompt told them to speak aloud the last word pair seen. However, there was no white noise applied to the ears of the subjects as in [1] and [7]. The reason white noise was not applied is that this would very likely make self-repairs of completed speech errors in overt speech rather scarce. "
    ABSTRACT: This chapter proposes some improvements on a method for eliciting speech errors, the so-called SLIP technique, including the use of multi-level logistic regression for data analysis. This is demonstrated in an experimental test of a new theory of self-monitoring as the main cause of lexical bias in phonological speech errors.
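    The multilevel regression itself is beyond the scope of a listing like this, but the quantity it models, lexical bias as the log odds of a word versus nonword error outcome per context, can be sketched minimally. The counts below are invented for illustration and are not data from the chapter:

    ```python
    import math

    def bias_log_odds(word_errors, nonword_errors):
        """Log odds that an elicited phonological error forms a real word."""
        return math.log(word_errors / nonword_errors)

    # Invented error counts, illustration only
    mixed_context = bias_log_odds(40, 20)    # positive: lexical bias present
    nonword_context = bias_log_odds(15, 30)  # negative: bias suppressed

    # The difference is the contextual-modulation effect (a log odds ratio);
    # in a multilevel logistic regression it corresponds to the context coefficient.
    print(round(mixed_context - nonword_context, 2))  # -> 1.39
    ```

    A full multilevel analysis would add random intercepts for participants and items on top of this fixed context effect, which is what motivates the chapter's use of multi-level logistic regression over raw counts.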