The SignSpeak project: bridging the gap between signers and speakers

RWTH, Aachen, Germany; CRIC, Barcelona, Spain; RUN, Nijmegen, The Netherlands; ULg, Liege, Belgium; TID, Granada, Spain; EUD, Brussels, Belgium
Journal of Speech, Language, and Hearing Research (J Speech Lang Hear Res), 01/2010
Source: DBLP

ABSTRACT The SignSpeak project is a first step toward bringing sign language recognition and translation to the scientific level already reached in related fields such as automatic speech recognition and statistical machine translation of spoken languages. Deaf communities revolve around sign languages, which are their natural means of communication. Although deaf, hard-of-hearing and hearing signers can communicate without problems amongst themselves, the deaf community faces serious challenges in integrating into educational, social and work environments. The overall goal of SignSpeak is to develop new vision-based technology for recognizing continuous sign language and translating it to text. New knowledge about the structure of sign language, gained from the perspective of machine recognition of continuous signing, will enable a subsequent breakthrough in the development of this vision-based recognition and translation technology. Existing and new publicly available corpora will be used to evaluate research progress throughout the project.

  • ABSTRACT: Sign languages represent an interesting niche for statistical machine translation, one typically hampered by the scarcity of suitable data; most papers in this area apply only a few well-known techniques that are not adapted to small corpora. In this article, we analyze existing data collections and assess their quality and usability for statistical machine translation. We also report findings on the proper preprocessing of a sign language corpus: introducing sentence-end markers, splitting compound words and handling parallel communication channels. We then focus on optimization procedures tailored to scarce resources, such as scaling-factor optimization, alignment optimization and system combination. All methods are evaluated on two of the largest sign language corpora available.
    Machine Translation 01/2012;
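The preprocessing steps named in the abstract above (sentence-end markers, compound splitting) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: the "</s>" marker token and the "+" compound separator are assumptions, since the article does not specify the corpus notation here.

```python
# Hypothetical sketch of two preprocessing steps for a sign language
# gloss corpus: splitting compound glosses and appending a sentence-end
# marker. Token conventions ("</s>", "+") are assumed for illustration.

SENT_END = "</s>"

def preprocess_gloss_sentence(glosses):
    """Split '+'-joined compound glosses and append a sentence-end marker."""
    tokens = []
    for gloss in glosses:
        # A compound gloss like "RAIN+STRONG" becomes two tokens.
        tokens.extend(gloss.split("+"))
    tokens.append(SENT_END)
    return tokens

print(preprocess_gloss_sentence(["TOMORROW", "RAIN+STRONG", "NORTH"]))
# -> ['TOMORROW', 'RAIN', 'STRONG', 'NORTH', '</s>']
```

Explicit sentence-end markers of this kind give the translation model a consistent boundary symbol, which matters when sentence segmentation in the gloss annotation is otherwise unmarked.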
  • ABSTRACT: In this paper we use a replication function (RF) to improve tracking with dynamic programming. The RF maps gray-level values from [0, 255] to [0, 1]; in the resulting images, skin regions are more distinct and visible. The RF improves dynamic programming (DP) tracking where the hand overlaps the face. Results show an 11% reduction in Tracking Error Rate and a 7% reduction in Average Tracked Distance.
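The gray-level mapping described in the abstract above can be sketched as follows. The abstract only states that the function maps [0, 255] into [0, 1]; the logistic shape, the assumed skin-tone center (140) and the steepness value used here are illustrative assumptions, not the paper's actual replication function.

```python
import numpy as np

# Illustrative sketch of a gray-level "replication function" mapping
# [0, 255] into [0, 1]. The logistic form and its parameters are
# assumptions; the paper does not specify the function's shape.

SKIN_CENTER = 140.0   # assumed mean gray level of skin regions
STEEPNESS = 0.05      # assumed slope of the logistic transition

def replication_function(gray):
    """Map gray levels in [0, 255] to [0, 1], emphasizing skin-like intensities."""
    gray = np.asarray(gray, dtype=np.float64)
    return 1.0 / (1.0 + np.exp(-STEEPNESS * (gray - SKIN_CENTER)))

frame = np.array([[0, 128, 255]], dtype=np.uint8)
mapped = replication_function(frame)
print(mapped.min() >= 0.0 and mapped.max() <= 1.0)  # -> True
```

Compressing the dynamic range this way makes skin pixels stand out against darker background intensities, which is the property the dynamic-programming tracker exploits when hand and face overlap.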
  • ABSTRACT: This article explores the application of data-driven machine translation (MT) to sign languages (SLs). An SL MT system can facilitate communication between Deaf and hearing people by translating information into the native and preferred language of the individual. In this paper we address data-driven SL MT predominantly for Irish SL (ISL) but also for German SL (DGS/Deutsche Gebärdensprache). We take two different purpose-built corpora to feed our MaTrEx MT system and, in a set of experiments translating both to and from the SLs, investigate the effects of SL data on statistical MT (SMT). Exploiting the bidirectionality of the MaTrEx system, we demonstrate how additional modules, such as recognition and SL animation, could build a full SL MT model for spoken and SL communication, and we report promising evaluation scores. A secondary focus of the article is on the two main issues affecting SL MT, transcription and evaluation; we discuss both of these common problems before concluding.
    Machine Translation 03/2013; 27(1).
