Recalibration of phonetic categories by lipread speech: measuring aftereffects after a 24-hour delay.

Tilburg University, Dept. of Psychology, Tilburg, The Netherlands.
Language and Speech (Impact Factor: 1.12). 02/2009; 52(Pt 2-3):341-50. DOI: 10.1177/0023830909103178
Source: PubMed

ABSTRACT: Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance with lipread information indicating what the phoneme should be (recalibration). Here, we tested the stability of lipread-induced recalibration over time. Listeners were exposed to an ambiguous sound halfway between /t/ and /p/ that was dubbed onto a face articulating either /t/ or /p/. When tested immediately, listeners exposed to lipread /t/ were more likely to categorize the ambiguous sound as /t/ than listeners exposed to lipread /p/. This aftereffect dissipated quickly with prolonged testing and did not reappear after a 24-hour delay. Recalibration of phonetic categories is thus a fragile phenomenon.

  • ABSTRACT: We investigated the effects of adaptation to mouth shapes associated with different spoken sounds (sustained /m/ or /u/) on visual perception of lip speech. Participants were significantly more likely to label ambiguous faces on an /m/-to-/u/ continuum as saying /u/ following adaptation to /m/ mouth shapes than they were in a preadaptation test. By contrast, participants were significantly less likely to label the ambiguous faces as saying /u/ following adaptation to /u/ mouth shapes than they were in a preadaptation test. The magnitude of these aftereffects was equivalent when the same individual was shown in the adaptation and test phases of the experiment and when different individuals were presented in the adaptation and test phases. These findings present novel evidence that adaptation to natural variations in facial appearance influences face perception, and they extend previous research on face aftereffects to visual perception of lip speech.
    Psychonomic Bulletin & Review (Impact Factor: 2.99). 08/2010; 17(4):522–528. DOI: 10.3758/PBR.17.4.522
  • ABSTRACT: We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the metrics developed to quantify multisensory integration (most derived from single-unit analyses) have been applied to these different response measures without much consideration of the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
    Brain Topography (Impact Factor: 2.52). 04/2014; 27(6). DOI: 10.1007/s10548-014-0365-7
  • ABSTRACT: Speech perception is shaped by listeners' prior experience with speakers. Listeners retune their phonetic category boundaries after encountering ambiguous sounds in order to deal with variations between speakers. Repeated exposure to an unambiguous sound, on the other hand, leads to a decrease in sensitivity to the features of that particular sound. This study investigated whether these changes in the listeners' perceptual systems can generalise to the perception of speech from a novel speaker. Specifically, the experiments looked at whether visual information about the identity of the speaker could prevent generalisation from occurring. In Experiment 1, listeners retuned auditory category boundaries using audiovisual speech input. This shift in the category boundaries affected perception of speech from both the exposure speaker and a novel speaker. In Experiment 2, listeners were repeatedly exposed to unambiguous speech either auditorily or audiovisually, leading to a decrease in sensitivity to the features of the exposure sound. Here, too, the changes affected the perception of both the exposure speaker and the novel speaker. Together, these results indicate that changes in the perceptual system can affect the perception of speech from a novel speaker and that visual speaker identity information did not prevent this generalisation.
    Journal of Phonetics (Impact Factor: 1.41). 03/2014; 43:38–46. DOI: 10.1016/j.wocn.2014.01.003

Full-text (2 sources) available from May 28, 2014.