
Seeing and hearing others and oneself talk

Laboratory of Computational Engineering, Helsinki University of Technology, PO Box 9203, FIN-02015 HUT, Finland.
Cognitive Brain Research (Impact Factor: 3.77). 06/2005; 23(2-3):429-35. DOI: 10.1016/j.cogbrainres.2004.11.006
Source: PubMed

ABSTRACT

We studied the modification of auditory perception under three different conditions in twenty subjects. Observing another person's discordant articulatory gestures impaired identification of acoustic speech stimuli and modified the auditory percept, causing a strong McGurk effect. A similar effect was found when subjects watched their own silent articulation in a mirror while acoustic stimuli were simultaneously presented to their ears. Interestingly, a smaller but significant effect was obtained even when subjects merely articulated the syllables silently, without visual feedback. Conversely, observing another person's or one's own concordant articulation, as well as silently articulating a concordant syllable, improved identification of the acoustic stimuli. We suggest that the modifications of the auditory percept caused by seeing speech and by silently articulating it both arise from altered activity in the auditory cortex. Our findings support the idea of a close relationship between speech perception and production.
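
As a rough illustration of the kind of identification analysis the abstract implies, the toy Python sketch below tallies the proportion of correctly identified acoustic syllables per presentation condition. The condition labels, trial records, and function name are hypothetical and not taken from the study.

```python
from collections import defaultdict

# Hypothetical trial records: (condition, acoustic syllable, reported syllable).
# The conditions loosely mirror the study's manipulations; the data are invented.
trials = [
    ("other_discordant", "pa", "ka"),
    ("other_discordant", "pa", "pa"),
    ("self_mirror_discordant", "pa", "ka"),
    ("silent_articulation_discordant", "pa", "pa"),
    ("other_concordant", "pa", "pa"),
]

def identification_accuracy(trials):
    """Proportion of trials per condition where the report matches the acoustic syllable."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
    for condition, acoustic, report in trials:
        counts[condition][1] += 1
        if report == acoustic:
            counts[condition][0] += 1
    return {cond: correct / total for cond, (correct, total) in counts.items()}

if __name__ == "__main__":
    for condition, accuracy in identification_accuracy(trials).items():
        print(f"{condition}: {accuracy:.2f}")
```

A McGurk-type fusion rate could be tallied the same way by counting reports that match neither the acoustic nor the (seen or articulated) visual syllable.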

  • "For example, listeners presented with the auditory signal for "ba" concurrently with the visual signal for "ga" typically report a blended percept, the well-known "McGurk effect." A recent study by Sams et al. (2005) demonstrated that the McGurk effect occurs even if the source of the visual input is the listener's own face."
    ABSTRACT: Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.
    Article · Nov 2015 · Frontiers in Human Neuroscience
  • "Just as visual influences on auditory speech processing have long been reported (e.g., Sumby and Pollack, 1954; see Navarra et al., 2012 for review), recent reports have also shown similar effects from articulatory information. For example, subjects' own silent articulations (Sams et al., 2005; Sato et al., 2013; Scott et al., 2013) influence auditory perception in similar ways as seeing visual speech (although see Mochida et al., 2013). Moreover, receiving haptic or tactile input related to another person's articulatory movements can also influence auditory speech processing (Fowler and Dekle, 1991; Gick et al., 2008; Gick and Derrick, 2009; Ito et al., 2009; Treille et al., 2014)."
    ABSTRACT: Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
    Article · Aug 2014 · Frontiers in Psychology
  • "These designs have typically involved delayed or covert speech production. As evidence exists showing similarities in neural activity in overt and covert production tasks (Tian and Poeppel, 2010, 2012), including the generation of internal models (Sams et al., 2005; Tian and Poeppel, 2010), covert production often provides a viable substitute for overt production tasks. However, in terms of SMI, the two tasks are different and may not share all the same neurophysiology (Ganushchak et al., 2011), especially in some pathological conditions with compromised sensorimotor control such as stuttering (Max et al., 2003; Loucks and De Nil, 2006; Watkins et al., 2008; Hickok et al., 2011; Cai et al., 2014; Connally et al., 2014)."
    ABSTRACT: Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllables pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
    Article · Jul 2014 · Frontiers in Psychology
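
The pronunciation-training study above (Nov 2015, Frontiers in Human Neuroscience) evaluates articulatory learning partly through tongue-tip spatial positioning measured with electromagnetic articulography. A minimal sketch of one plausible kinematic accuracy measure, the mean Euclidean distance of tongue-tip samples from a reference target, is given below; the coordinate frame, target position, function name, and toy data are assumptions rather than the authors' actual metric.

```python
import numpy as np

def tongue_tip_error_mm(positions, target):
    """Mean Euclidean distance (mm) of tongue-tip samples from a target position.

    positions : (n_samples, 3) array of EMA tongue-tip coordinates in mm
    target    : (3,) reference position, e.g. derived from a model production
    """
    positions = np.asarray(positions, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.linalg.norm(positions - target, axis=1).mean())

if __name__ == "__main__":
    # Toy data: samples scattered around an assumed target, tighter after training.
    rng = np.random.default_rng(1)
    target = np.array([0.0, 55.0, 10.0])  # hypothetical (x, y, z) in mm
    pre_training = target + rng.normal(0.0, 4.0, size=(50, 3))
    post_training = target + rng.normal(0.0, 1.5, size=(50, 3))
    print("pre :", tongue_tip_error_mm(pre_training, target))
    print("post:", tongue_tip_error_mm(post_training, target))
```

With learning over visual-feedback trials, the post-training distance would be expected to shrink relative to the pre-training distance, which is the pattern the toy data are set up to show.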
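
The EEG study above (Jul 2014, Frontiers in Psychology) indexes motor and somatosensory activity as beta- and alpha-band event-related spectral perturbations within μ components. The sketch below shows one common way a baseline-normalized band-power time course can be computed for a single epoch; it is a simplified illustration rather than the authors' pipeline, and the sampling rate, band limits, window settings, and toy data are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp_db(epoch, fs, band, baseline=(0.0, 0.5), nperseg=256, noverlap=192):
    """Event-related spectral perturbation for one band, in dB relative to baseline.

    epoch    : 1-D array, one trial of a mu-component time course
    fs       : sampling rate in Hz
    band     : (low, high) frequency band in Hz, e.g. (8, 13) for alpha
    baseline : (start, end) in seconds, measured from epoch onset
    """
    f, t, power = spectrogram(epoch, fs=fs, nperseg=nperseg, noverlap=noverlap)
    band_power = power[(f >= band[0]) & (f <= band[1])].mean(axis=0)
    base = band_power[(t >= baseline[0]) & (t <= baseline[1])].mean()
    return t, 10.0 * np.log10(band_power / base)  # < 0 dB ~ ERD, > 0 dB ~ ERS

if __name__ == "__main__":
    fs = 500.0                                    # assumed sampling rate
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(int(4 * fs))      # 4 s of toy "EEG"
    t, alpha_ersp = ersp_db(epoch, fs, band=(8, 13))
    t, beta_ersp = ersp_db(epoch, fs, band=(15, 25))
    print(alpha_ersp[:5], beta_ersp[:5])
```

Sustained negative dB values would correspond to event-related desynchronization (ERD) and positive values to synchronization (ERS); the study additionally used independent component analysis to isolate μ components and FDR correction across time, which this sketch omits.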