Recalibration of phonetic categories by lipread speech versus lexical information.
ABSTRACT: Listeners hearing an ambiguous phoneme flexibly adjust their phonetic categories in accordance with information indicating what the phoneme should be (i.e., recalibration). Here the authors compared recalibration induced by lipread versus lexical information. Listeners were exposed to an ambiguous phoneme halfway between /t/ and /p/ that was either dubbed onto a face articulating /t/ or /p/ or embedded in a Dutch word ending in /t/ (e.g., groot [big]) or /p/ (e.g., knoop [button]). In a posttest, participants then categorized auditory tokens as /t/ or /p/. Lipread and lexical aftereffects were comparable in size (Experiment 1), dissipated about equally fast (Experiment 2), were enhanced by exposure to a contrast phoneme (Experiment 3), and were not affected by a 3-min silence interval (Experiment 4). Exposing participants to one rather than both phoneme categories did not make the phenomenon more robust (Experiment 5). Despite the difference in the nature of the information (bottom-up vs. top-down), lipread and lexical information thus appear to serve a similar role in phonetic adjustments.
ABSTRACT: Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English, or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
PLoS ONE 01/2009; 4(11):e7785.
ABSTRACT: It is well known that visual information derived from mouth movements (i.e., lipreading) can have profound effects on auditory speech identification (e.g., the McGurk effect). Here we examined the reverse phenomenon, namely whether auditory speech affects lipreading. We report that speech sounds dubbed onto lipread speech affect immediate identification of lipread tokens. This effect likely reflects genuine cross-modal integration of sensory signals rather than a simple response bias, because we also observed adaptive shifts in visual identification of the ambiguous lipread tokens after exposure to incongruent audiovisual adapter stimuli. Presumably, listeners had learned to label the lipread stimulus in accordance with the sound, demonstrating that the interaction between hearing and lipreading is genuinely bidirectional.
Neuroscience Letters 03/2010; 471(2):100-3.