Table 7 shows the grammatical components and POSs
defined in this study.
The authors would like to thank the National Science Council
of the Republic of China, Taiwan, for financially supporting
this research under Contract No. NSC 94-2614-E-006-073.
38 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 29, NO. 1, JANUARY 2007
TABLE 7
Grammatical Components and POSs Defined in This Study