Language Translation for Impaired People using NLP Semantics
Prateek M J, Sai Charan, Sanjan S Shetty, Vinayak H, A B Rajendra
Vidyavardhaka College of Engineering, Mysuru, India
Abstract Sign language is a visual language used by both speech-impaired and hearing-impaired individuals as their first language. People with hearing and speech disabilities use sign language as a direct means of communication with one another and with others, but unfortunately not everyone can understand sign language, and this results in a lack of communication and in isolation. As far as both speech-impaired and hearing-impaired individuals are concerned, access to sign language is essential for their social, emotional, and linguistic development. Our project aims to bridge the gap between speech-impaired and hearing-impaired people and hearing people with the advent of new technologies. We are building an application that converts the user's voice to sign language with the help of natural language processing semantics.
Keywords: Impaired people, sign language, vocal communication, semantics
1 Introduction
Sign language is a kind of language that uses hand movements, facial expressions, and body language to communicate. It is used predominantly by the deaf and by people who can hear but cannot speak. It is also used by some hearing people, typically family members and relatives of the deaf, and by interpreters who enable the deaf and the wider community to communicate with one another [1].
Sign language is a form of communication used by people with impaired hearing and speech. People use sign language gestures as a means of non-verbal communication to express their thoughts and feelings [2].
However, non-signers find sign language difficult to understand, so trained sign language interpreters are needed during medical and legal appointments and educational and training sessions. Over recent years, there has been growing demand for interpreting services [3]. Various techniques, such as remote human interpreting over fast Internet connections, have been introduced. These provide an easy-to-use sign language interpreting service, but they have significant limitations [4].
Sign language is divided into two categories: visual sign language and tactile sign language. a) Visual sign language is used by hearing- and speech-impaired people. b) Tactile sign language is used by hearing- and sight-impaired people. We are working on the visual sign language used by hearing- and speech-impaired people. Sign language varies from country to country, depending on culture: India uses ISL (Indian Sign Language), America uses ASL (American Sign Language), and China uses CSL (Chinese Sign Language). Sign language is a method of communication for hearing- and speech-impaired people composed of various gestures formed by hand shapes, body orientation, and facial expressions; each gesture has a meaning assigned to it. Alphabets in sign language are composed of different hand shapes, and words are composed of hand shapes with orientation; complete visual sign language also includes facial expressions. Visual sign language is an effective means of communication for hearing- and speech-impaired people. Even so, the hearing-impaired must overcome communication obstacles in a mostly hearing-capable society.
This research work concentrates on visual sign language interaction. Natural language processing is the skill of understanding human language; it is a part of linguistics and artificial intelligence. NLP (natural language processing) provides the steps for developing a system that can process the text (words) of a human language. POS (part-of-speech) tagging, first introduced in the 1960s, is an important method for language processing: for many NLP applications it is the simplest and most stable step, and it is the initial step in machine translation, information retrieval, and other tasks. A second important method in NLP is parsing, which analyzes the grammatical structure of text, much as a compiler parses source code.
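As a concrete illustration, the following is a minimal POS-tagging sketch with NLTK, the toolkit this project uses; the example sentence and the resource names are our own, not taken from the paper.

```python
# Minimal POS-tagging sketch with NLTK (resource names per classic NLTK releases).
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The weather is pleasant today")  # split into words
print(nltk.pos_tag(tokens))                                   # tag each word
# [('The', 'DT'), ('weather', 'NN'), ('is', 'VBZ'), ('pleasant', 'JJ'), ('today', 'NN')]
```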
When we think of sign recognition, we should consider the major challenges and motivations in sign recognition mentioned below. The main purpose of this project is to build an application that accepts audio/voice as input and converts it to the corresponding sign language for both speech-impaired and hearing-impaired people. The interface works in two phases: first, converting audio to text using a speech-to-text API (Python modules or the Google API); and second, applying the semantics of natural language processing (NLTK specifically) to produce the output.
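A minimal sketch of phase one is shown below. The paper names "Python modules or the Google API" without fixing a library, so we assume the widely used third-party SpeechRecognition package, which wraps Google's free Web Speech API; the error message mirrors the "couldn't hear properly" behaviour described in the test cases later.

```python
# Phase one sketch: audio to text (assumes the SpeechRecognition package;
# sr.Microphone additionally requires PyAudio to be installed).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:                 # live voice input
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)   # send audio to Google's API
    print("You said:", text)
except sr.UnknownValueError:
    print("Couldn't hear properly")             # matches the app's error message
```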
Existing system: A good deal of study has been done on the topic of Indian Sign Language, to help create systems that improve the lives of people with speech and hearing impairment. Multiple projects and systems have been, and are being, developed to recognize Indian Sign Language, making use of different kinds of technology [5]. They apply different machine learning techniques: for example, single- and double-handed ISL signs have been classified with machine learning algorithms such as k-nearest neighbour, and ANN-based and convolutional neural network-based ISL recognition techniques have also been used. Very few systems have been developed that convert audio to Indian Sign Language, one example being a system that converts audio to ISL gloss using WordNet. Other systems convert text in other languages to ISL, but few convert voice/speech/audio to Indian Sign Language [7].
Proposed system: The application converts the audio signal to text using a speech-to-text API (Python modules or the Google API) and then uses the semantics of natural language processing to break the text down into smaller, more tractable pieces. Datasets of predefined sign language clips are used, from which the application displays the converted audio as sign language.
It is dictionary-based machine translation. The motivation of this project is to reduce the communication barrier faced by impaired people. Deaf people will be able to communicate just like anyone else with the help of this project: they can understand others' messages once the audio input is converted to video format. The output format is very easy to understand, and no complex devices are needed to solve a simple problem. We are designing software that converts audio spoken by hearing people into the sign language used by vocally and hearing-impaired people. The generated sign language is displayed as a video in the software.
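The dictionary-based step can be sketched as follows. The folder layout and file names are hypothetical, since the paper does not describe its dataset format, and the letter-by-letter fallback is a common convention in such systems rather than a documented feature of this one.

```python
# Sketch of dictionary-based translation: look up each recognized word in a
# folder of predefined sign clips (hypothetical layout, not the authors' dataset).
import os

SIGN_DIR = "signs"                      # hypothetical folder of word-level clips

def words_to_clips(words):
    clips = []
    for word in words:
        path = os.path.join(SIGN_DIR, word.lower() + ".mp4")
        if os.path.exists(path):
            clips.append(path)          # whole-word sign exists in the dictionary
        else:
            # fall back to letter-by-letter clips (fingerspelling)
            clips.extend(os.path.join(SIGN_DIR, c + ".mp4")
                         for c in word.lower() if c.isalpha())
    return clips

print(words_to_clips(["hello", "world"]))
```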
Objective
The main purpose is to translate audio to sign language. Sign language is the natural way of
communication for challenged people with speaking-hearing disabilities. People use sign
language gestures as a means of non-verbal communication to express their thoughts and
emotions. But non-signers find it extremely difficult to understand; hence, trained sign language interpreters are needed during medical and legal appointments and educational and training sessions.
Our project aims to bridge the gap between speech-impaired and hearing-impaired people and hearing people with the advent of new technologies. We have developed an application that converts the user's voice to sign language with the help of natural language processing semantics.
It also converts live audio to sign language, which makes communication between these groups easier. In this project, we translate a complete speech audio file to its corresponding sign language. The main and most important outcome is eliminating the dependency on a human interpreter: everything can be done by the system itself. We can therefore provide an easier alternative for the speech- and hearing-impaired community to communicate with the rest of the world. When deployed, there is no need to educate users about how to use the application. It creates a user-friendly environment by providing text output for audio input, and it takes comparatively little time to translate audio to Indian Sign Language.
The proposed language translator for impaired people uses the Google application programming interface, Python, and natural language processing. The application converts the audio signal to text using a speech-to-text API and then applies the logic of natural language processing to break the text into more sensible pieces.
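One way to break the text into more sensible pieces is to drop stopwords before dictionary lookup (compare Figure 7, which lists useless words to remove). Below is a sketch using NLTK's standard English stopword list; the paper does not specify which word list it actually uses.

```python
# Sketch of the text-reduction step with NLTK's English stopword list.
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
nltk.download("punkt", quiet=True)

useless = set(stopwords.words("english"))
text = "the weather is pleasant today"
kept = [w for w in nltk.word_tokenize(text) if w not in useless]
print(kept)   # ['weather', 'pleasant', 'today'] - only content words remain
```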
2 Literature Survey
Paper 1: Motionlets matching with adaptive kernels for 3D Indian Sign Language recognition. In this paper, an application for identifying Indian Sign Language signs from 3D motion-captured data is created. The authors build a two-phase algorithm that handles multiple attributes of 3D sign language motion data for machine translation.
Paper 2: A wearable system for recognizing American Sign Language in real time using IMU and surface EMG sensors. This paper proposes a wearable real-time ASL recognition system. The signs performed by speech-impaired and hearing-impaired people are detected from hand gestures and converted into speech using two important modalities: an inertial measurement unit (IMU) and surface electromyography (sEMG). The authors concentrate on recognizing American Sign Language and state that this was the first study of an ASL recognition system combining IMU sensors and sEMG signals, which complement each other. Feature selection is performed to select the best subset from a large number of well-established features, and four popular classification algorithms are investigated for the system design. The authors use a sign language recognition (SLR) tool to bridge the communication gap between speech-impaired and hearing-impaired people and others. The system was evaluated on 80 commonly used ASL signs from daily conversation, and an average accuracy of 96.16% was achieved with 40 selected features.
Paper 3: Avatar-based sign language interpretation for weather forecasts and other TV programs. The authors propose a system that translates the closed captions of weather forecast programs into KSL and presents them with 3D avatar animation. The paper generates 3D sign language animation by translating the closed captions in DTV, so that speech-impaired and hearing-impaired people can watch the weather forecast with a sign language interpreter. To identify the frequency of each word, they analyzed the last three years of weather forecast scripts from several available sources. They also built a sign language synonym dictionary using KorLex to improve translation performance; KorLex was also used for the word sense disambiguation process. They focused on capturing the motions of a professional sign language interpreter and building a motion database; the motions were applied to a 3D avatar with motion blending.
Paper 4: Glove-based continuous Arabic Sign Language recognition in user-dependent mode. In this paper, the authors propose an application for continuous Arabic Sign Language recognition (ArSL) based on data acquired from two DG5-VHand data gloves. Because the sensor readings cannot be examined visually for manual labelling, the solution is to place a camera that records the signing; once the signing is completed, the video recordings can be synchronized with the sensor readings to detect the boundaries of the words. Raw feature vectors are preprocessed by resampling, and the sensor readings are normalized. Window-based statistical features are then used to expand the raw data. This is a key step in the whole process because it captures the context of each feature vector: the statistical measures are calculated from preceding and following raw feature vectors.
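For illustration only, the following NumPy sketch shows the kind of window-based statistical feature expansion Paper 4 describes; the window size, the choice of mean and standard deviation, and the array shapes are our assumptions, not the authors' exact design.

```python
# Sketch of window-based statistical features over raw sensor frames.
import numpy as np

def window_features(frames, half_window=5):
    """frames: (T, D) array of raw sensor readings, one row per time step."""
    feats = []
    for t in range(len(frames)):
        lo = max(0, t - half_window)
        hi = min(len(frames), t + half_window + 1)
        window = frames[lo:hi]
        # mean and standard deviation over the window, per sensor channel
        feats.append(np.concatenate([window.mean(axis=0), window.std(axis=0)]))
    return np.asarray(feats)            # (T, 2 * D) expanded feature matrix

raw = np.random.rand(100, 10)           # e.g., 100 frames of 10 glove channels
print(window_features(raw).shape)       # (100, 20)
```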
Paper 5: Intelligent mobile assistant for the hearing impaired to interact with society. Language is a way of using words or signs to share feelings and ideas with each other. In society, there is a communication problem between hearing-impaired people and hearing people: most hearing people have no knowledge of sign languages and no desire to learn one, so hearing-impaired people tend to be isolated. Considering all existing solutions, there is an absence of a Sinhala application with Sinhala sign language.
Paper 6: In the current fast-moving world, human-computer interaction (HCI) is one of the main contributors to a country's progress. Since conventional input devices limit the naturalness and speed of human-computer interaction, sign language recognition systems have gained a lot of importance. Different sign languages can be used to express intentions and intonations or to control devices such as home robots. The main focus of this work is to create a vision-based system, a convolutional neural network (CNN) model, to identify six different sign language gestures from captured images. The two CNN models developed use different optimizers: stochastic gradient descent (SGD) and Adam.
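As an illustration of the setup Paper 6 describes (not the authors' code), the Keras sketch below builds a small CNN for six-way sign classification and compiles it once with SGD and once with Adam; the network layout and input image size are assumptions.

```python
# Sketch: a small CNN compiled with two optimizers, SGD and Adam.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6            # six gestures, per the paper
INPUT_SHAPE = (64, 64, 1)  # assumed image size; the paper does not specify

def build_model(optimizer):
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

sgd_model = build_model(tf.keras.optimizers.SGD(learning_rate=0.01))
adam_model = build_model(tf.keras.optimizers.Adam())
```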
This chapter records the survey of several journals and articles, summarized in Table 1.
Table 1 Literature Survey

1. Paper: Motionlets matching with adaptive kernels for 3D Indian Sign Language recognition.
Authors: Kishore P.V.V., Kumar D.A., Sastry A.C.S., Kumar E.K.
Algorithm: A two-phase algorithm for machine translation that handles multiple attributes of 3D sign language motion data.
Application: An application that recognizes Indian Sign Language signs from 3D motion-captured data.
Limitation: The model translates sign language but does not convert text to sign language.

2. Paper: A wearable system for recognizing American Sign Language using IMU and surface EMG sensors.
Authors: Jian Wu, Lu Sun, Roozbeh Jafari.
Algorithm: The best subset of features is selected from a large pool, and four popular classification algorithms are investigated for the system design.
Application: Hand gestures performed by speech-impaired and hearing-impaired people are detected and converted into speech.
Limitation: Using hand-held sensors does not achieve the same level of precision as talking.

3. Paper: Avatar-based sign language interpretation for weather forecast and other TV programs.
Authors: Oh J., Kim B., Kim M., Kang S., Kwon H., Kim I., Song Y.
Algorithm: The previous three years of weather forecast scripts from a variety of sources were analyzed to determine the frequency of each word.
Application: Lets speech-impaired and hearing-impaired people watch the weather forecast with a sign language interpreter.
Limitation: The system works only with a weather forecasting system.

4. Paper: Glove-based continuous Arabic Sign Language recognition in user-dependent mode.
Authors: Tubaiz N., Shanableh T., Assaleh K.
Algorithm: Modified k-nearest neighbours classifier.
Application: Continuous Arabic Sign Language recognition (ArSL).
Limitation: Sensor readings cannot be visually checked for manual labelling.

5. Paper: Intelligent mobile assistant for hearing impairers to interact with the society in Sinhala language.
Authors: Yasintha Perera, Nelunika Jayalath, Shenali Tissera, Oshani Bandara, Samantha Thelijjagoda.
Algorithm: Instant messaging, mobile application, voice recognition, natural language processing, Graphics Interchange Format.
Application: Allows hearing-impaired individuals to communicate over long distances; the app closes the divide between hearing-impaired and hearing people.
Limitation: The file format is not compatible.

6. Paper: Sign Language Recognition System Using Deep Neural Network, Advanced Computing & Communication Systems (ICACCS).
Authors: Surejya Suresh, Haridas T.P. Mithun, M.H. Supriya.
Algorithm: A CNN architecture; two models were studied, the first using the SGD optimizer and the second using Adam.
Application: A convolutional neural network was used to create a basic version of the sign recognition system, which was successfully tested.
Limitation: It is not user friendly.
3 Modeling and Implementation
This chapter deals with the system architecture shown in Figure 1, the data flow diagram in Figure 2, the sequence diagram in Figure 3, and the use case diagram in Figure 4. Refer to the sketches.
Figure 1 System Architecture
Figure 2 Audio to processed text Dataflow
4 Testing, Results and Discussion
Testing is a principal part of any project development cycle. A project is incomplete without effective testing and execution. The test cases for our application are shown in Table 2, Table 3, and Table 4.
Table 2 Live Voice
Test case #: 1
Test case name: Live Voice
Description: The user gives voice input through a microphone. The input is converted to text and the sign language video is displayed on the screen; otherwise the application reports that it couldn't hear properly.
Expected output: Sign language video is displayed.
Actual output: Sign language video is displayed.
Remarks: Pass
Table 3 Recorded Voice
Test case #: 2
Test case name: Recorded Voice
Description: The user selects a pre-recorded audio file. The file is converted to text and the sign language video is displayed on the screen; otherwise the application reports that it couldn't hear properly.

Table 4 Recorded Voice (results)
Expected output: Sign language video is displayed.
Actual output: Sign language video is displayed.
Remarks: Pass
Results
The results are shown in the following figures: Figure 4 (taking input), Figure 5 (output video 1), Figure 6 (output video 2), Figure 7 (listing useless words to remove them), and Figure 9 (the application interface).
5 Conclusion and Future Scope
We have been successful in implementing the sign language interpreter. After successful user testing, it has been found that the new system overcomes most of the limitations of the existing systems, especially for ISL. As ISL work is recent and very little advancement has been made in this subject, numerous new recordings for various words can be added to the dictionary to extend its scope and help users communicate better in this language.
The current system operates on a basic set of words; to extend the system, many new words can be included in the dictionary in the future, and specialized terms from different fields can be incorporated too. This project can also be packaged as a mobile application, so that users can install it on their mobile phones or laptops and access it easily.
References
1. P.V.V. Kishore, D. Anil Kumar, "Motionlets Matching with Adaptive Kernels for 3D Indian Sign Language Recognition", IEEE Sensors Journal, 2018. DOI: 10.1109/JSEN.2018.2810449.
2. Ankita Harkude, Sarika Namade, Shefali Patil, Anita Morey, "Audio to Sign Language Translation for Deaf People", International Journal of Engineering and Innovative Technology (IJEIT), Volume 9, Issue 10, April 2020. ISSN: 2277-3754.
3. Seonggyu Jeon, Byungsun Kim, Minho Kim, "Avatar-based Sign Language Interpretation for Weather Forecast and Other TV Programs", SMPTE Motion Imaging Journal, Volume 126, Issue 1, Jan.-Feb. 2017.
4. Noor Tubaiz, Tamer Shanableh, Khaled Assaleh, "Glove-based Continuous Arabic Sign Language Recognition in User-Dependent Mode", IEEE Transactions on Human-Machine Systems, Volume 45, Issue 4, Aug. 2015.
5. Amitkumar Shinde, Ramesh M. Kagalkar, "Sign Language to Text and Vice Versa Recognition Using Computer Vision in Marathi", International Journal of Computer Applications 118(13):1-7, May 2015.
6. Neha Poddar, Vrushali Somavanshi, "Study of Sign Language Translation Using Gesture Recognition", IJARCCE, February 2015. DOI: 10.17148/IJARCCE.2015.4258.
7. V. Padmanabhan, M. Sornalatha, "Hand Gesture Recognition and Voice Conversion System Using Sign Language Transcription System", International Journal of Scientific & Engineering Research, Volume 5, Issue 5, May 2014, p. 427. ISSN 2229-5518.
8. Farahanaaz Shaikh, Shreya Darunde, Nikita Wahie, Swapnil Mali, "Sign Language Translation System for Railway Station Announcements", IEEE Bombay Section Signature Conference (IBSSC), 2019.
9. V. Aiswarya, N. Naren Raju, Singh S. Johanan Joy, T. Nagarajan, P. Vijayalakshmi, "Hidden Markov Model-Based Sign Language to Speech Conversion System in Tamil", Fourth International Conference on Biosignals, Images and Instrumentation (ICBSII), pp. 206-212, 2018.
10. Surejya Suresh, Haridas T.P. Mithun, M.H. Supriya, "Sign Language Recognition System Using Deep Neural Network", 5th International Conference on Advanced Computing & Communication Systems (ICACCS), pp. 614-618, 2019.
11. Rabeet Fatmi, Sherif Rashad, Ryan Integlia, "Comparing ANN, SVM, and HMM Based Machine Learning Methods for American Sign Language Recognition Using Wearable Motion Sensors", IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0290-0297, 2019.
12. Ilya Makarov, Nikolay Veldyaykin, Maxim Chertkov, Aleksei Pokoev, "Russian Sign Language Dactyl Recognition", 42nd International Conference on Telecommunications and Signal Processing (TSP), 2019.
13. K. Bantupalli, Y. Xie, "American Sign Language Recognition Using Deep Learning and Computer Vision", 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018.
14. A. Mittal, P. Kumar, P.P. Roy, R. Balasubramanian, B.B. Chaudhuri, "A Modified LSTM Model for Continuous Sign Language Recognition Using Leap Motion", IEEE Sensors Journal, 2019.
15. N. Krishnaraj, M. Kavitha, T. Jayasankar, K.V. Kumar, "A Glove Based Approach to Recognize Indian Sign Languages", International Journal of Recent Technology and Engineering (IJRTE), 7, 1419-1425, 2019.
16. B.D. Patel, H.B. Patel, M.A. Khanvilkar, N.R. Patel, T. Akilan, "ES2ISL: An Advancement in Speech to Sign Language Translation Using 3D Avatar Animator", in Proceedings of the 2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), London, ON, Canada, 30 August-2 September 2020, pp. 1-5.
17. S. Stoll, N.C. Camgöz, S. Hadfield, R. Bowden, "Text2Sign: Towards Sign Language Production Using Neural Machine Translation and Generative Adversarial Networks", International Journal of Computer Vision, 128, 891-908, 2020.
18. Y. Zhang, S. Vogel, A. Waibel, "Interpreting BLEU/NIST Scores: How Much Improvement Do We Need to Have a Better System?", in Proceedings of the Fourth International Conference on Language Resources and Evaluation, Lisbon, Portugal, 26-28 May 2004, pp. 1650-1654.
19. N. Mehta, S. Pai, S. Singh, "Automated 3D Sign Language Caption Generation for Video", Universal Access in the Information Society, 19, 725-738, 2020.
20. Pratik Likhar, Neel Kamal Bhagat, G.N. Rathna, "Deep Learning Methods for Indian Sign Language Recognition", 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin), pp. 1-6, 2020.
21. Neel Kamal Bhagat, Y. Vishnusai, G.N. Rathna, "Indian Sign Language Gesture Recognition Using Image Processing and Deep Learning", 2019 Digital Image Computing: Techniques and Applications (DICTA), pp. 1-8, 2019.
22. C.J. Sruthi, A. Lijiya, "Signet: A Deep Learning Based Indian Sign Language Recognition System", 2019 International Conference on Communication and Signal Processing (ICCSP), pp. 0596-0600, 2019.