Joint Optimization of Word Alignment and Epenthesis Generation for Chinese to Taiwanese Sign Synthesis

Home Network Technology Center, Industrial Technology Research Institute, N200, ITRI Bldg. R1, No. 31, Gongye 2nd Rd., Annan District, Tainan City 709, Taiwan, ROC.
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.78). 02/2007; 29(1):28-39. DOI: 10.1109/TPAMI.2007.15
Source: PubMed


This work proposes a novel approach to translating Chinese into Taiwanese Sign Language (TSL) and synthesizing sign videos. An aligned bilingual corpus of Chinese and TSL annotated with linguistic and signing information is also presented for sign language translation. A two-pass alignment at the syntax and phrase levels is developed to obtain the optimal alignment between Chinese sentences and Taiwanese sign sequences. For sign video synthesis, a scoring function is presented to construct motion-transition-balanced sign videos with rich combinations of intersign transitions. Finally, the maximum a posteriori (MAP) algorithm is employed for sign video synthesis based on the joint optimization of two-pass word alignment and intersign epenthesis generation. Several experiments were conducted in an educational environment to evaluate comprehension of the synthesized sign expressions. The proposed approach outperforms IBM Model 2 in sign language translation. Moreover, deaf students rated the sign videos generated by the proposed method as satisfactory.
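As a rough illustration of the joint objective, the MAP criterion can be read as choosing, among candidate sign sequences, the one that maximizes a combination of a word-alignment score and an intersign-transition (epenthesis) score. The Python sketch below is a toy, brute-force rendering of that reading; the probability tables, the log-linear weight `lambda_`, and all function names are illustrative assumptions, not the authors' model.

```python
import math
from itertools import product

# Toy alignment scores P(sign | Chinese word); all values are invented.
ALIGN = {
    ("我", "ME"): 0.8, ("我", "I"): 0.2,
    ("去", "GO"): 0.9, ("去", "WALK"): 0.1,
}

# Toy intersign transition scores standing in for epenthesis smoothness.
TRANS = {
    ("ME", "GO"): 0.7, ("ME", "WALK"): 0.3,
    ("I", "GO"): 0.5, ("I", "WALK"): 0.5,
}

def map_decode(words, lambda_=0.5):
    """Brute-force MAP decoding: maximize
    lambda * log P(alignment) + (1 - lambda) * log P(transitions)."""
    candidates = [[s for (w, s) in ALIGN if w == word] for word in words]
    best_seq, best_score = None, float("-inf")
    for seq in product(*candidates):
        align = sum(math.log(ALIGN[(w, s)]) for w, s in zip(words, seq))
        trans = sum(math.log(TRANS.get(pair, 1e-6))
                    for pair in zip(seq, seq[1:]))
        score = lambda_ * align + (1 - lambda_) * trans
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq

print(map_decode(["我", "去"]))  # -> ('ME', 'GO')
```

A real decoder would search incrementally (e.g., with dynamic programming over sign lattices) rather than enumerating every candidate sequence.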

    • "• (Chiu et al., 2007) present a system for the language pair Chinese and Taiwanese sign language. The optimizing methodologies are shown to outperform IBM model 2. "
    ABSTRACT: In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system that is not directly intelligible to untrained users. We combine an SL recognition framework with a state-of-the-art phrase-based machine translation (MT) system, using corpora of both American Sign Language and Irish Sign Language data. In a set of experiments, we present the overall results and illustrate the importance of including a vision-based knowledge source in the development of a complete SL translation system.
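As a loose sketch of the gloss-to-text stage such a pipeline requires, the following toy decoder does greedy longest-match lookup over a hypothetical phrase table. The table entries and function names are invented for illustration; an actual phrase-based MT system would use a learned phrase table together with a language model.

```python
# Hypothetical toy phrase table: sign gloss phrases -> English phrases.
PHRASE_TABLE = {
    ("IX-1p", "WANT"): "I want",
    ("GO", "STORE"): "to go to the store",
    ("WANT",): "want",
}

def translate_glosses(glosses, max_len=3):
    """Greedy longest-match decoding: repeatedly consume the longest
    gloss prefix found in the phrase table."""
    out, i = [], 0
    while i < len(glosses):
        for n in range(min(max_len, len(glosses) - i), 0, -1):
            key = tuple(glosses[i:i + n])
            if key in PHRASE_TABLE:
                out.append(PHRASE_TABLE[key])
                i += n
                break
        else:  # unknown gloss: pass it through untranslated
            out.append(glosses[i].lower())
            i += 1
    return " ".join(out)

print(translate_glosses(["IX-1p", "WANT", "GO", "STORE"]))
# -> "I want to go to the store"
```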
    • "Chiu et al. discuss a novel approach to translating from written Chinese to Taiwanese Sign Language, producing videos of signs using a bilingual corpus and sign data [Chiu et al. 2007] "
    ABSTRACT: In this article, we present a multichannel animation system for producing utterances signed in French Sign Language (LSF) by a virtual character. The main challenges of such a system are simultaneously capturing data for the entire body, including the movements of the torso, hands, and face, and developing a data-driven animation engine that takes into account the expressive characteristics of signed languages. Our approach consists of decomposing motion along different channels, representing the body parts that correspond to the linguistic components of signed languages. We show the ability of this animation system to create novel utterances in LSF, and present an evaluation by target users that highlights the importance of the respective body parts in the production of signs. We validate our framework by testing the believability and intelligibility of our virtual signer.
    ACM Transactions on Interactive Intelligent Systems 10/2011; 1(1). DOI: 10.1145/2030365.2030371
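To make the channel decomposition concrete, here is a minimal, hypothetical data-structure sketch: each sign carries an independent motion track per body-part channel, and a scheduler lays the tracks on a shared clock so that manual and nonmanual components stay coordinated. The channel names, classes, and the 30 fps assumption are illustrative, not the authors' engine.

```python
from dataclasses import dataclass, field

CHANNELS = ("torso", "right_hand", "left_hand", "face")

@dataclass
class ChannelTrack:
    channel: str
    frames: list          # captured motion frames for this body part
    start: float = 0.0    # offset (seconds) relative to the sign's onset

@dataclass
class Sign:
    gloss: str
    tracks: dict = field(default_factory=dict)  # channel -> ChannelTrack

def schedule(signs, gap=0.2):
    """Lay signs on a shared timeline; every channel of a sign shares the
    sign's onset, so nonmanual (face) and manual tracks stay coordinated."""
    timeline, t = [], 0.0
    for sign in signs:
        for ch, track in sign.tracks.items():
            timeline.append((t + track.start, ch, sign.gloss))
        duration = max(len(tr.frames) for tr in sign.tracks.values()) / 30.0
        t += duration + gap  # hypothetical 30 fps capture rate
    return sorted(timeline)

hello = Sign("HELLO", {
    "right_hand": ChannelTrack("right_hand", frames=[0] * 45),
    "face": ChannelTrack("face", frames=[0] * 45, start=0.1),
})
print(schedule([hello]))  # [(0.0, 'right_hand', 'HELLO'), (0.1, 'face', 'HELLO')]
```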
    • "The common problem of the rule-based paradigm is that the coverage and applicability of the manually constructed lexicon and rules are fairly limited, so that most funded projects have focused on a restricted domain such as weather forecast and eGovernment services for practical results. Recently, corpus-based researches on sign language translation appeared [5] [6] [7] [8], but they are still at an early stage. "
    ABSTRACT: In this paper, we propose a method to convert a written sentence in a spoken language into a suitable representation in sign language within the framework of Combinatory Categorial Grammar (CCG). The representation reflects the multichannel nature of sign language performance, including manual and non-manual linguistic signals across multiple channels and information about their coordination. We show that most of the information needed to address linguistic phenomena in sign language, such as word order, spatial references, classifier constructions, and verb inflection, can be encoded in the CCG sign lexicon. During the CCG derivation process, a semantic representation for sign language expressions is created so that the resulting output can be directly interpreted as a sequence of signs, each containing manual and non-manual components and representing their coordination and spatial relationships. The derivation process with the constructed lexicon is illustrated with several examples from Korean Sign Language. We discuss the implications of our proposal and future directions.
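As a minimal, hypothetical sketch of the idea, the snippet below pairs each lexicon entry with a CCG category and a sign payload carrying manual and non-manual channels, and combines entries by backward application. The categories, Korean glosses, and payload fields are invented for illustration and are not the paper's lexicon.

```python
# Categories are strings; "S\\NP" is a function seeking an NP to its left.
# Each entry: (category, payload); payloads carry manual/nonmanual channels.
LEXICON = {
    "나": ("NP", {"manual": "IX-1p", "nonmanual": None}),
    "간다": ("S\\NP", {"manual": "GO", "nonmanual": "head-nod"}),
}

def backward_apply(arg, fn):
    """Backward application X (Y\\X) => Y, keeping sign payloads in order."""
    arg_cat, arg_sem = arg
    fn_cat, fn_sem = fn
    result_cat, _, expected = fn_cat.partition("\\")
    if expected != arg_cat:
        raise ValueError("category mismatch")
    return (result_cat, [arg_sem, fn_sem])

subj, verb = LEXICON["나"], LEXICON["간다"]
cat, signs = backward_apply(subj, verb)
print(cat)    # S
print(signs)  # ordered sign payloads, ready for synthesis
```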