Preprint

Abstract

Sign language is regarded as the first language of hearing-impaired people. It is the most natural and direct way for deaf people with speech and hearing impairments to communicate with the general population, so an interpreter is needed whenever a hearing person wants to communicate with a deaf person. In Bangladesh, about 2.4 million people use sign language, yet work on Bangladeshi Sign Language (BdSL) remains extremely scarce. In this paper, we present a BdSL recognition model constructed from 50 sets of hand-sign images. Bangla sign alphabets are identified by resolving their shapes and extracting the structural features that characterize each sign. The proposed model uses a multi-layered Convolutional Neural Network (CNN); CNNs are able to automate the process of feature construction. The model achieved 92% accuracy on our dataset.
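The core operation the abstract relies on — a convolution extracting local shape features from a hand-sign image, followed by a nonlinearity and pooling — can be sketched minimally in NumPy. The image patch and the edge-detecting kernel below are illustrative assumptions for a single channel; in the actual model the kernel weights are learned during training, not hand-set:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# Hypothetical 6x6 binary patch (a vertical stroke, stand-in for a hand contour)
img = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
], dtype=float)
# Vertical-edge kernel: responds where intensity rises left-to-right
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

fmap = max_pool(relu(conv2d(img, edge_kernel)))
print(fmap)  # 2x2 feature map highlighting the stroke's right edge
```

A real multi-layered CNN stacks many such learned kernels per layer and ends in a fully connected softmax over the sign classes (50 here); this sketch only shows why convolution picks up the local edge structure that distinguishes hand shapes.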

