A Computational Approach to Deciphering Unknown Scripts

Source: CiteSeer


We propose and evaluate computational techniques for deciphering unknown scripts. We focus on the case in which an unfamiliar script encodes a known language. The decipherment of a brief document or inscription is driven by data about the spoken language. We consider which scripts are easy or hard to decipher, how much data is required, and whether the techniques are robust against language change over time.
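The abstract frames decipherment as recovering a known language from an unfamiliar script. One standard way to operationalize this view (and the one the EM-based follow-up work excerpted below builds on) is to treat the script as a noisy channel over the known language: fix a character bigram model of the language, and learn the unknown script-to-language emission table with EM (forward-backward). The sketch below is illustrative only — the function names and the toy substitution-cipher setup are assumptions of this sketch, not the paper's code.

```python
import math
import random
from collections import defaultdict

def train_bigram_lm(text, alphabet, smoothing=0.1):
    """Character bigram model P(next | prev) with add-k smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1.0
    lm = {}
    for a in alphabet:
        total = sum(counts[a].values()) + smoothing * len(alphabet)
        lm[a] = {b: (counts[a][b] + smoothing) / total for b in alphabet}
    return lm

def em_decipher(cipher, plain_alphabet, lm, iterations=20, seed=0):
    """EM for an HMM whose transitions are the fixed bigram LM and whose
    emissions P(f | e) -- the unknown substitution table -- are re-estimated."""
    cipher_alphabet = sorted(set(cipher))
    rng = random.Random(seed)
    # Random near-uniform initialisation of the channel model P(f | e).
    emit = {e: {f: rng.uniform(0.9, 1.1) for f in cipher_alphabet}
            for e in plain_alphabet}
    for e in plain_alphabet:
        z = sum(emit[e].values())
        for f in cipher_alphabet:
            emit[e][f] /= z
    init = {e: 1.0 / len(plain_alphabet) for e in plain_alphabet}
    log_likelihoods = []
    T = len(cipher)
    for _ in range(iterations):
        # E-step: scaled forward-backward over the hidden plaintext sequence.
        fwd = [{} for _ in range(T)]
        scale = []
        for e in plain_alphabet:
            fwd[0][e] = init[e] * emit[e][cipher[0]]
        s = sum(fwd[0].values())
        scale.append(s)
        for e in plain_alphabet:
            fwd[0][e] /= s
        for t in range(1, T):
            for e in plain_alphabet:
                fwd[t][e] = emit[e][cipher[t]] * sum(
                    fwd[t - 1][p] * lm[p][e] for p in plain_alphabet)
            s = sum(fwd[t].values())
            scale.append(s)
            for e in plain_alphabet:
                fwd[t][e] /= s
        bwd = [{} for _ in range(T)]
        for e in plain_alphabet:
            bwd[T - 1][e] = 1.0
        for t in range(T - 2, -1, -1):
            for e in plain_alphabet:
                bwd[t][e] = sum(
                    lm[e][n] * emit[n][cipher[t + 1]] * bwd[t + 1][n]
                    for n in plain_alphabet) / scale[t + 1]
        log_likelihoods.append(sum(math.log(s) for s in scale))
        # M-step: re-estimate P(f | e) from expected counts gamma_t(e).
        counts = {e: {f: 1e-12 for f in cipher_alphabet}
                  for e in plain_alphabet}
        for t in range(T):
            for e in plain_alphabet:
                counts[e][cipher[t]] += fwd[t][e] * bwd[t][e]
        for e in plain_alphabet:
            z = sum(counts[e].values())
            for f in cipher_alphabet:
                emit[e][f] = counts[e][f] / z
    return emit, log_likelihoods
```

With a long enough ciphertext and a distinctive language model the learned P(f | e) table concentrates on the true substitution; with short texts it may not, which is exactly the paper's question of how much data a decipherment needs.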

  • Source
    • ") algorithm has been widely applied for solving the decipherment problem (Knight and Yamada, 1999; Koehn and Knight, 2000). In the E-step, for each source bigram f 1 f 2 , we estimate the expected counts of the latent variables e 1 and e 2 over all the target words in V E . "
    ABSTRACT: Orthographic similarities across languages provide a strong signal for probabilistic decipherment, especially for closely related language pairs. The existing decipherment models, however, are not well-suited for exploiting these orthographic similarities. We propose a log-linear model with latent variables that incorporates orthographic similarity features. Maximum likelihood training is computationally expensive for the proposed log-linear model. To address this challenge, we perform approximate inference via MCMC sampling and contrastive divergence. Our results show that the proposed log-linear model with contrastive divergence scales to large vocabularies and outperforms the existing generative decipherment models by exploiting the orthographic features.
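The E-step quoted above — expected counts of the latent words e₁ and e₂ for each source bigram f₁f₂, taken over the target vocabulary — can be sketched directly. This is an illustrative reconstruction, not the cited papers' code: the names `lm_uni`, `lm_bi`, and `channel` are assumptions standing in for a unigram model P(e), a bigram model P(e₂ | e₁), and a channel table P(f | e), all with strictly positive entries.

```python
from collections import defaultdict

def bigram_expected_counts(cipher_bigrams, vocab, lm_uni, lm_bi, channel):
    """One E-step: for each source bigram (f1, f2), distribute its mass over
    all candidate target pairs (e1, e2) in proportion to the model score
    P(e1) * P(e2 | e1) * P(f1 | e1) * P(f2 | e2), and accumulate the
    expected count of each (target word, source word) emission."""
    counts = defaultdict(float)  # expected count of emitting f from e
    for f1, f2 in cipher_bigrams:
        post = {}
        for e1 in vocab:
            for e2 in vocab:
                post[(e1, e2)] = (lm_uni[e1] * lm_bi[e1][e2]
                                  * channel[e1][f1] * channel[e2][f2])
        z = sum(post.values())  # assumes all model entries are > 0
        for (e1, e2), p in post.items():
            counts[(e1, f1)] += p / z
            counts[(e2, f2)] += p / z
    return counts
```

Each source bigram contributes exactly two units of expected count (one per position), so the M-step can renormalize `counts` row-wise into a new channel table.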
  • Source
    • "In this section, we will review previous approaches and highlight similarities and differences to our work. Several steps have been made in this area, such as (Knight and Yamada, 1999), (Ravi and Knight, 2008), or (Snyder et al., 2010), to name just a few. The main difference of our work is, that it allows for much larger vocabulary sizes and more data to be used than previous work while at the same time not being dependent on seed lexica and/or any other knowledge of the lan- guages. "
    ABSTRACT: In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.
    Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1; 07/2012
    • "Several attempts to automatic deciphering of lost languages have been made. Knight and Yamada (Knight and Yamada 1999) developed a computational approach for unknown scripts decipherment. Their approach is based on a study of phonetic and written scripts in verbal languages. "

    CAA 2012; 01/2011
