
Salient phonetic features of Indian languages in speech technology

Sadhana (Impact Factor: 0.39), 36(5). DOI: 10.1007/s12046-011-0039-z

ABSTRACT The speech signal is the basic material of study and analysis in speech technology as well as in phonetics. To form meaningful chunks of language, the speech signal must have dynamically varying spectral characteristics, sometimes varying within a stretch of a few milliseconds. Phonetics groups these temporally varying spectral chunks into abstract classes roughly called allophones. Grouping these allophones into higher-level classes called phonemes takes us closer to their function in a language. Phonemes and the letters in the scripts of literate languages (languages which use writing) correspond to varying degrees. Because such a relationship exists, a major part of speech technology deals with correlating script letters with chunks of time-varying spectral stretches in that language. Indian languages are said to have a more direct correlation between their sounds and letters. This closeness gives a false impression that the text-to-sound rule sets of these languages are also similar. A given letter with parallels across various languages may diverge to different degrees in its phonetic realization in each of them. We illustrate such differences and point out the problem areas where speech scientists need to pay greater attention in building their systems, especially multilingual systems for Indian languages.
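As an illustration of the divergence the abstract describes, the sketch below contrasts how the same inherent vowel letter, shared across Brahmi-derived scripts, can be realized in three languages. This is a minimal Python sketch, not taken from the paper; the phone symbols and the rule are deliberately simplified, and the function name is hypothetical.

    # Illustrative only: the same inherent vowel letter of Brahmi-derived
    # scripts is realized differently across languages, so a shared
    # letter-to-sound rule table cannot simply be reused across them.
    INHERENT_VOWEL = {
        "hindi":    "@",   # schwa /ə/
        "sanskrit": "@",   # schwa /ə/, retained word-finally
        "bengali":  "O",   # rounded /ɔ/
    }

    def realize_inherent_vowel(language: str, word_final: bool) -> str:
        # Hypothetical rule: Hindi deletes the inherent vowel word-finally
        # ("schwa deletion"); Sanskrit and Bengali retain it.
        if language == "hindi" and word_final:
            return ""
        return INHERENT_VOWEL[language]

    # The same spelling <k-a-m-a-l> 'lotus' then surfaces roughly as
    # /kəməl/ in Hindi, /kəmələ/ in Sanskrit and /kɔmɔl/ in Bengali.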

  • ABSTRACT: This paper discusses the implementation of a phoneme-based Manipuri Keyword Spotting System (MKWSS). Manipuri is a scheduled Indian language of Tibeto-Burman origin. Around 5 hours of read speech were collected from 4 male and 6 female speakers to develop the MKWSS database, transcribed with the symbols of the International Phonetic Alphabet (IPA, revised in 2005). A five-state left-to-right Hidden Markov Model (HMM) with a 32-mixture continuous-density diagonal-covariance Gaussian Mixture Model (GMM) per state is used to model each phonetic unit; the system was built with the HMM Toolkit (HTK), version 3.4 (a sketch of this topology follows this list). The system can recognize 29 phonemes and one non-speech event (silence), and detects keywords formed from these phonemes. Continuous speech data were collected from 5 male and 8 female speakers to evaluate the system, whose performance depends on its ability to detect the keywords. The phoneme-based MKWSS achieves an overall performance of 65.24%.
    Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2013, IIT Jodhpur; 12/2013
  • ABSTRACT: This paper explores pitch-synchronous and glottal closure (GC) based spectral features for analysing the language-specific information present in speech. Instants of significant excitation (ISE) are used to determine pitch cycles (for pitch-synchronous analysis) and GC regions; the ISE correspond to instants of glottal closure (epochs) in voiced speech, and to random excitations such as the onset of a burst in non-voiced speech. The Indian language speech database IITKGP-MLILSC is used to analyse the language-specific information in the proposed features, and Gaussian mixture models capture that information. The proposed features are evaluated through language recognition studies (a pitch-synchronous analysis sketch follows this list). The results indicate that language recognition performance is better with pitch-synchronous and GC-based spectral features than with conventional spectral features derived through block processing, and the GC-based features are more robust against degradation due to background noise. The performance of the proposed features is also analysed on the standard Oregon Graduate Institute Multi-Language Telephone-based Speech (OGI-MLTS) database.
    International Journal of Speech Technology; 12/2013
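For the HMM-GMM topology described in the first item above, the following is a minimal sketch. The original system was built with HTK 3.4; hmmlearn is substituted here purely for illustration, the 39-dimensional MFCC front end is an assumption, and the training frames are simulated placeholders.

    import numpy as np
    from hmmlearn.hmm import GMMHMM

    N_STATES, N_MIX, N_FEATS = 5, 32, 39   # 39-dim MFCCs are assumed

    # 5-state HMM, 32-mixture diagonal-covariance GMM per state; only the
    # means, covariances and mixture weights are initialized by fit(), so
    # the manually set topology below is kept.
    model = GMMHMM(n_components=N_STATES, n_mix=N_MIX,
                   covariance_type="diag", init_params="mcw")

    # Left-to-right topology: each state may only repeat or advance.
    model.startprob_ = np.r_[1.0, np.zeros(N_STATES - 1)]
    transmat = np.zeros((N_STATES, N_STATES))
    for i in range(N_STATES - 1):
        transmat[i, i] = transmat[i, i + 1] = 0.5
    transmat[-1, -1] = 1.0
    model.transmat_ = transmat

    # Placeholder frames standing in for the MFCCs of one phonetic unit.
    X = np.random.randn(2000, N_FEATS)
    model.fit(X)   # Baum-Welch re-estimation; zero transitions stay zero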
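The pitch-synchronous analysis in the second item can likewise be sketched. The fragment below assumes epoch (ISE) locations are already available; the paper's exact ISE extraction method is not reproduced, and the function name and parameters are illustrative.

    import numpy as np

    def pitch_synchronous_spectra(signal, epochs, n_fft=512):
        # One log-magnitude spectrum per pitch cycle, where `epochs` holds
        # sample indices of the instants of significant excitation.
        feats = []
        for start, end in zip(epochs[:-1], epochs[1:]):
            if end - start < 2:                        # skip degenerate cycles
                continue
            cycle = signal[start:end]                  # one pitch period
            win = cycle * np.hamming(len(cycle))       # taper the cycle
            spec = np.abs(np.fft.rfft(win, n_fft))     # magnitude spectrum
            feats.append(np.log(spec + 1e-10))         # log compression
        return np.array(feats)

    # Gaussian mixture models, as in the paper, could then model these
    # per-language feature sets for language recognition.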