Article

Lip positions in American English vowels

    ABSTRACT: Sagittal-plane movements of small markers attached to the upper and lower lips were analyzed for ten speakers of American English, and seven speakers of Japanese. Each speaker produced simple utterances containing vowels and labial consonants. The data were analyzed to better understand: (1) patterns of pellet motions associated with labial consonant production; (2) pellet positions at discrete, acoustically-defined moments during selected speech sounds; and (3) the relationship between midline separation between the lip surfaces and inter-lip-pellet distance. Results from the study provide qualitative information about the dynamics of labial gestures for consonants involving lip closure. The data also indicate that the English and Japanese speakers positioned and moved their lips in generally similar ways during the test sounds analyzed. Finally, results suggest that plausible estimates of mid-line inter-lip separation can be derived from the trajectories of two pellets, one on each lip, as long as the possibility of lip-body deformation is taken into account.
    Journal of Phonetics, January 1997; 25(4):405–419.
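The abstract's final point, that mid-line inter-lip separation can be estimated from two pellet trajectories once lip-body deformation is accounted for, can be sketched as a simple geometric calculation. This is only an illustrative reading of the idea, not the authors' method: the function name, the use of a constant per-lip offset as the deformation correction, and the example coordinates are all assumptions.

```python
import math

def interlip_separation(upper_xy, lower_xy, upper_offset=0.0, lower_offset=0.0):
    """Euclidean distance between the upper- and lower-lip pellets in the
    sagittal plane, minus assumed offsets from each pellet to the nearest
    lip surface. The constant offsets stand in for a lip-body deformation
    correction (hypothetical simplification, not the study's procedure)."""
    dx = upper_xy[0] - lower_xy[0]
    dy = upper_xy[1] - lower_xy[1]
    d = math.hypot(dx, dy)
    # Clamp at zero: during labial closure the lip surfaces are in contact
    # even though the pellets remain some distance apart.
    return max(0.0, d - upper_offset - lower_offset)

# Illustrative pellet coordinates in mm (not data from the study).
print(interlip_separation((0.0, 12.0), (0.5, 2.0), upper_offset=2.0, lower_offset=2.0))
```

The clamp is the one place where lip-body deformation matters most: with the lips pressed together, raw inter-pellet distance stays positive while true surface separation is zero.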
    ABSTRACT: We present work on a new anatomically-based 3D parametric lip model for synchronized speech, which also supports the lip motion needed for facial animation. The lip model is represented by a B-spline surface and high-level parameters which define the articulation of the surface. The model parameterization is muscle-based to allow for specification of a wide range of lip motion. The B-spline surface specifies not only the external portion of the lips, but the internal surface as well. This complete geometric representation replaces the possibly incomplete lip geometry of any facial model. We render the model using a procedural texturing paradigm to give color, lighting and surface texture for increased realism. We use our lip model in a text-to-audio-visual-speech system to achieve speech-synchronized facial animation.
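The B-spline surface underlying the lip model can be illustrated with a generic tensor-product evaluator built on the Cox-de Boor recursion. This is textbook B-spline machinery, not the authors' implementation; the muscle-based parameters that drive the control points are not reproduced, and all names here are assumptions.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of order k (degree k - 1) at parameter t, over the given knot vector."""
    if k == 1:
        # Order-1 basis: indicator of the half-open knot span [knots[i], knots[i+1]).
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + k - 1] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    denom = knots[i + k] - knots[i + 1]
    if denom > 0:
        right = (knots[i + k] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def surface_point(ctrl, u, v, ku, kv, knots_u, knots_v):
    """Evaluate a tensor-product B-spline surface at (u, v): each 3D control
    point ctrl[i][j] is weighted by the product of the u- and v-basis values."""
    x = y = z = 0.0
    for i in range(len(ctrl)):
        bu = bspline_basis(i, ku, u, knots_u)
        if bu == 0.0:
            continue  # basis has local support; skip rows that contribute nothing
        for j in range(len(ctrl[0])):
            w = bu * bspline_basis(j, kv, v, knots_v)
            x += w * ctrl[i][j][0]
            y += w * ctrl[i][j][1]
            z += w * ctrl[i][j][2]
    return (x, y, z)

# Minimal usage: a 2x2 grid of control points with order-2 (linear) bases
# reduces to bilinear interpolation across the patch.
ctrl = [[(0, 0, 0), (0, 1, 0)],
        [(1, 0, 0), (1, 1, 0)]]
print(surface_point(ctrl, 0.5, 0.5, 2, 2, [0, 0, 1, 1], [0, 0, 1, 1]))
```

In a model like the one described, articulation would move the control points (here fixed) while this evaluation step stays the same, which is what makes a single surface able to cover both the external and internal lip geometry.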
    ABSTRACT: We present work on a new anatomically-based 3D parametric lip model for synchronized speech, which also supports lip motion needed for facial animation. The lip model is represented by a B-spline surface and high- level parameters which dene the articulation of the surface. The model parameterization is muscle-based to allow for specication of a wide range of lip motion. The B-spline surface species not only the external portion of the lips, but the internal surface as well. This complete geometric representation replaces, the possibly incomplete, lip geometry of any facial model. We render the model using a procedural texturing paradigm to give color, lighting and surface texture for increased realism. We use our lip model in a text-to-audio-visual-speech system to achieve speech synchro- nized facial animation.