Article

The lexical bias effect is modulated by context, but the standard monitoring account does not fly: Related beply to Baars, Motley, and MacKay (1975)

University of Antwerp, Belgium
Journal of Memory and Language, January 2005. DOI: 10.1016/j.jml.2004.07.006

ABSTRACT: The lexical bias effect is the tendency for phonological speech errors to result in words more often than in nonwords. This effect has been accounted for by postulating feedback from sublexical to lexical representations, but also by assuming that the self-monitor covertly repairs more nonword errors than word errors. The only evidence that appears to exclusively support a monitoring account is Baars, Motley, and MacKay’s (1975) demonstration that the lexical bias is modulated by context: There was lexical bias in a mixed context of words and nonwords, but not in a pure nonword context. However, there are methodological problems with that experiment and theoretical problems with its interpretation. Additionally, a recent study failed to replicate contextual modulation (Humphreys, 2002). We therefore conducted two production experiments that solved the methodological problems. Both experiments showed there is indeed contextual modulation of the lexical bias effect. A control perception experiment excluded the possibility that the comprehension component of the task contributed to the results. In contrast to Baars et al., the production experiments suggested that lexical errors are suppressed in a nonword context. This supports a new account by which there is both feedback and self-monitoring, but in which the self-monitor sets its criteria adaptively as a function of context.
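
The proposed account combines two mechanisms: feedback makes word outcomes of slips more likely than nonword outcomes, and a self-monitor covertly repairs some errors before they are spoken, with criteria that shift with context. As a minimal Monte Carlo sketch of how these two ingredients could produce the reported pattern, the Python fragment below simulates both contexts; the bias and veto probabilities are invented for the illustration and are not parameters estimated in the article.

    # Toy sketch of the article's proposal: sublexical-to-lexical feedback
    # biases slips toward words, and an adaptive self-monitor vetoes
    # (covertly repairs) errors using context-dependent criteria.
    # All probabilities below are illustrative assumptions.
    import random

    FEEDBACK_WORD_BIAS = 0.65  # assumed P(slip is a word) due to feedback

    # Assumed veto probabilities by error lexicality and context: in a pure
    # nonword context, word outcomes stand out and are vetoed more often.
    VETO = {
        "mixed":   {"word": 0.30, "nonword": 0.30},
        "nonword": {"word": 0.60, "nonword": 0.30},
    }

    def observed_word_error_rate(context, n_slips=100_000):
        """Proportion of overt (unrepaired) errors that are words."""
        words = nonwords = 0
        for _ in range(n_slips):
            kind = "word" if random.random() < FEEDBACK_WORD_BIAS else "nonword"
            if random.random() < VETO[context][kind]:
                continue  # covertly repaired; never surfaces as an overt error
            if kind == "word":
                words += 1
            else:
                nonwords += 1
        return words / (words + nonwords)

    for context in ("mixed", "nonword"):
        rate = observed_word_error_rate(context)
        print(f"{context:>8} context: {rate:.2f} of overt errors are words")

With these assumed numbers, roughly 65% of overt errors are words in the mixed context but close to 50% in the pure nonword context: the lexical bias produced by feedback survives in the mixed context and is cancelled in the nonword context by the monitor's stricter criterion for word outcomes, mirroring the suppression pattern the experiments suggest.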

Related publications
    ABSTRACT: In a seminal 1977 article, Rumelhart argued that perception required the simultaneous use of multiple sources of information, allowing perceivers to optimally interpret sensory information at many levels of representation in real time as information arrives. Building on Rumelhart's arguments, we present the Interactive Activation hypothesis—the idea that the mechanism used in perception and comprehension to achieve these feats exploits an interactive activation process implemented through the bidirectional propagation of activation among simple processing units. We then examine the interactive activation model of letter and word perception and the TRACE model of speech perception, as early attempts to explore this hypothesis, and review the experimental evidence relevant to their assumptions and predictions. We consider how well these models address the computational challenge posed by the problem of perception, and we consider how consistent they are with evidence from behavioral experiments. We examine empirical and theoretical controversies surrounding the idea of interactive processing, including a controversy that swirls around the relationship between interactive computation and optimal Bayesian inference. Some of the implementation details of early versions of interactive activation models caused deviation from optimality and from aspects of human performance data. More recent versions of these models, however, overcome these deficiencies. Among these is a model called the multinomial interactive activation model, which explicitly links interactive activation and Bayesian computations. We also review evidence from neurophysiological and neuroimaging studies supporting the view that interactive processing is a characteristic of the perceptual processing machinery in the brain. In sum, we argue that a computational analysis, as well as behavioral and neuroscience evidence, all support the Interactive Activation hypothesis. The evidence suggests that contemporary versions of models based on the idea of interactive activation continue to provide a basis for efforts to achieve a fuller understanding of the process of perception.
    Cognitive Science: A Multidisciplinary Journal, August 2014. DOI: 10.1111/cogs.12146. (A toy sketch of the bidirectional activation dynamics described in this summary appears after these publication summaries.)

    ABSTRACT: The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye movements to phonologically related printed words with a similar time course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants has so far been lacking. The current printed-word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors, with a similar time course in both production and perception. Phonological effects in perception, however, lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in the predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly for suppression of activation in self-perception.
    Frontiers in Human Neuroscience, December 2013; 7:818. DOI: 10.3389/fnhum.2013.00818

    ABSTRACT: This paper investigates self-monitoring for speech errors by means of consonant identification in speech fragments excised from speech errors and their correct controls, as obtained in earlier experiments eliciting spoonerisms. Upon elicitation, segmental speech errors had either gone undetected, or had been detected early, or had been detected late and repaired by the speakers. Results show that misidentifications are rare but more frequent for speech errors than for control fragments. Early-detected errors yield fewer misidentifications than late-detected errors. Reaction times for correct identifications reveal effects of varying perceptual ambiguity: early-detected errors result in reaction times that are even faster than those for correct controls, while late-detected errors have the longest reaction times. We speculate that in early-detected errors speech is initiated before conflict with the correct target arises, and that in both early- and late-detected errors conflict between competing segments has led to detection.
    Journal of Memory and Language, October 2013; 69(3). DOI: 10.1016/j.jml.2013.04.006
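
The first summary above describes perception as interactive activation: bottom-up and top-down activation flowing between letter-level and word-level units. As a toy illustration only (a hand-rolled sketch with an invented two-word lexicon and invented numbers, not the interactive activation model of letter perception, TRACE, or the multinomial model themselves), the Python fragment below lets word beliefs and an ambiguous middle letter update each other over a few iterations.

    # Toy interactive-activation-style loop: letter beliefs support words
    # bottom-up, and word beliefs reshape letter beliefs top-down, so the
    # lexicon gradually disambiguates a degraded letter. The lexicon and
    # all numbers are assumptions for the demo.
    import math

    WORDS = ["cat", "cot"]  # toy lexicon (assumption)
    # Bottom-up evidence per position; the middle letter is ambiguous
    # between 'a' and 'o', as if visually degraded.
    evidence = [{"c": 1.0}, {"a": 0.5, "o": 0.4}, {"t": 1.0}]

    beliefs = [dict(pos) for pos in evidence]  # start at the raw input
    for step in range(6):
        # Bottom-up: a word's support is the product of its letters' beliefs.
        support = {w: math.prod(beliefs[i].get(w[i], 1e-6) for i in range(3))
                   for w in WORDS}
        total = sum(support.values())
        word_belief = {w: s / total for w, s in support.items()}

        # Top-down: reweight each letter's evidence by how strongly the
        # currently active words predict it, renormalized per position.
        for i in range(3):
            for letter in beliefs[i]:
                predicted = sum(p for w, p in word_belief.items() if w[i] == letter)
                beliefs[i][letter] = evidence[i][letter] * predicted
            z = sum(beliefs[i].values())
            beliefs[i] = {l: v / z for l, v in beliefs[i].items()}

        print(f"step {step}:",
              ", ".join(f"{w}={p:.3f}" for w, p in word_belief.items()))

Across iterations the word level increasingly favors "cat", and feedback pushes the ambiguous middle letter toward "a". The naive reuse of the same evidence on every pass also hints at why early interactive models could deviate from optimal Bayesian inference, a point the summary raises; the sketch is meant only to convey the bidirectional dynamics, not their correct probabilistic treatment.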
