Improving multimodal web accessibility for deaf people:
sign language interpreter module
Matjaž Debevc & Primož Kosec & Andreas Holzinger
Published online: 15 April 2010
© Springer Science+Business Media, LLC 2010
Abstract The World Wide Web is becoming increasingly necessary for everybody regardless
of age, gender, culture, health and individual disabilities. Unfortunately, there are evidently still
problems for some deaf and hard of hearing people trying to use certain web pages. These
people require the translation of existing written information into their first language, which can
be one of many sign languages. In previous technological solutions, the video window
dominates the screen, interfering with the presentation and thereby distracting the general
public, who have no need of a bilingual web site. One solution to this problem is the
development of transparent sign language videos which appear on the screen on request.
Therefore, we have designed and developed a system to enable the embedding of selective
interactive elements into the original text in appropriate locations, which act as triggers for the
video translation into sign language. When the short video clip terminates, the video window is
automatically closed and the original web page is shown. In this way, the system significantly
simplifies the expansion and availability ofadditional accessibility functions to web developers,
as it preserves the original web page with the addition of a web layer of sign language video.
Quantitative and qualitative evaluation has demonstrated that information presented through a
transparent sign language video increases the users' interest in the content of the material by
interpreting terms, phrases or sentences, and therefore facilitates the understanding of the
material and increases its usefulness for deaf people.
Keywords Human-computer interaction · Usability · Accessibility · Deaf and hard of
hearing · Sign language · Video · Transparent video
Multimed Tools Appl (2011) 54:181–199
DOI 10.1007/s11042-010-0529-8
M. Debevc (*):P. Kosec
Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova ulica 17,
SI-2000 Maribor, Slovenia
e-mail: matjaz.debevc@uni-mb.si
A. Holzinger
Institute of Medical Informatics, Research Unit HCI4MED, Medical University Graz,
Auenbruggerplatz 2/V, A-8036 Graz, Austria
A. Holzinger
Institute for Information Systems and Computer Media, Graz University of Technology,
Inffeldgasse 16c, 8010 Graz, Austria
1 Introduction
Information and Communications Technology (ICT), with its applications and systems,
such as the World Wide Web (the Web), has contributed significantly to the potential for
improving the status of people with disabilities in the social and socio-occupational area.
Every day, millions of people use it as an effective tool for communication and
information gathering. For example, one can witness the increased introduction of
sophisticated multimedia computer presentations together with options for audio and
video communication.
Despite web applications being so interesting, there is an increasing risk that people
with disabilities could be forced into a subordinate position. Surveys, such as the United
Nations Global Audit of Web Accessibility in 2006, showed that, in 20
countries around the world, only 3 of the 100 entry web pages evaluated reached the base level
of accessibility [39]. In this study, it was found that some web sites could easily be
upgraded with at least the basic interactive elements essential for meeting the require-
ments and needs of people with disabilities. The entry web page of each web site was
evaluated on the basis of the globally recognized recommendations titled Web Content
Accessibility Guidelines version 1.0 (WCAG 1.0), published by the Web Accessibility
Initiative (WAI) [3,43].
In addition to WCAG guidelines, guidelines for the accessibility of material on the Web
were also released by other international standards organizations such as the International
Telecommunication Union (ITU) and the International Organization for Standardization
(ISO). Unfortunately, these guidelines are still too general and often inadequate and
inappropriate for the specific needs of people with disabilities, such as deaf people who use
sign language as their first language. For this group, the guidelines mainly offer solutions in
written form, such as the conversion of speech or sound into written text, e.g. subtitles. For
the vast majority of deaf people, who communicate in sign language, text written in a
language they consider to be their second language is difficult to comprehend. The
inadequacy of existing guidelines was also reflected in research on usability evaluation by
Ivory and Hearst [22], who conclude that automation of usability evaluation does not
capture important qualitative and subjective information and propose analytical and
simulation modeling before web site development.
On the other hand, the development and increasing speed of data transmission over the
Internet opens additional possibilities for the transfer of more complex applications, such as
high quality video, audio, animations and simulations transmission, which could be one
basis for improving the accessibility of materials for people with disabilities [40]. However,
despite the development of broadband connections, the material on the Web still relies too
heavily on text documents and static images rather than videos, which is, unfortunately,
inappropriate for the majority of deaf and hard-of-hearing people who use sign language as
their first language. Past research has shown that deaf signer users, who use sign language
as a first and desired language, are often helpless and become confused when searching for
information on web sites [6,10].
To help web designers develop sites with greater accessibility, we have provided the
following solutions for deaf users utilizing video technology.
– The presentation of transparent sign language videos on existing web pages to describe
part or all of the available material.
– The control of the video (size and speed adjustment, pause and stop).
The objective was to enhance the accessibility of web sites for the deaf so that there
would be no need for a double online system (in the native language and in sign language),
but to use existing materials with an accessibility update to sign language video.
This study demonstrates, by both quantitative and qualitative evaluations, that
information presented through a transparent sign language video increases the users'
interest in the content of materials by interpreting terms, phrases or sentences, and therefore
facilitates the understanding of the material and increases its usefulness for deaf people.
This enables deaf people to be even better prepared to read other related texts [5].
Moreover, we show that a transparent web video enables web designers to use the existing
design in place and add appropriate interactive elements that trigger sign language videos
on the web sites. This hypothesis is confirmed by the results of the experiment.
In Section 2, we describe the problem of hearing loss and the motivation for the use of sign
language video. In Section 3, we describe related work in this field. In Section 4, we
analyze the video requirements for deaf and hard-of-hearing users. The main features of our
system and implementation process are presented in Section 5. In Section 6, we present the
design and realization of a prototype tested during evaluation. Section 7 reveals the results
and findings from the experiment. In Section 8, we conclude with final thoughts and a
proposal for further research.
2 Hearing loss and motivations for sign language video
The loss or deterioration of hearing occurs when problems arise with the perception of such
sound elements as frequency, pitch, timbre and loudness of the surroundings. Hearing loss
is generally classified in terms of different categories of dB (decibels) loss, such as mild
hearing loss (between 25 and 40 dB), moderate hearing loss (between 40 and 70 dB),
severe hearing loss (between 70 and 95 dB) and profound hearing loss (from 95 dB
onwards) [25]. Types of hearing loss may be conductive, sensorineural, mixed or central.
Conductive hearing loss occurs when sound is not conducted efficiently through the outer and
middle ear to the inner ear, which may occur due to chronic middle ear infections or diseases.
Sensorineural hearing loss results from damage to hair cells in the cochlea in the inner ear.
This happens due to the long exposure to loud noise, diseases (e.g. meningitis) or the use of
certain drugs such as the antibiotics streptomycin and gentamicin.
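The dB categories above can be summarized as a small lookup. The sketch below is illustrative only; the function name and the exact boundary handling are our own choices, not part of the original study:

```javascript
// Classify hearing loss by decibel (dB) threshold, following the
// categories cited above: mild 25-40 dB, moderate 40-70 dB,
// severe 70-95 dB, profound from 95 dB onwards.
function hearingLossCategory(dbLoss) {
  if (dbLoss < 25) return "none";     // below the mild threshold
  if (dbLoss < 40) return "mild";
  if (dbLoss < 70) return "moderate";
  if (dbLoss < 95) return "severe";
  return "profound";
}
```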
Some deaf people, but not all, use full natural language to communicate among
themselves, which is known as sign language. Sign language for deaf people is based on
hand movements, face, eyes and lips mimicry, and body movement. It uses a visual-sign
system with defined positions, locations, orientations and movements of hands and fingers,
as well as facial expressions.
Sign language also has its own linguistic structure, independent of the vocal language in
the same geographical area. Word order (which is different from written language) and
grammatical structure are a product of the separate development of a physical language
within the deaf community.
This method of communication has a strong impact on the culture and language of the
deaf community and the individuals within that community. In the case of hearing loss
(deafness), we are therefore talking about two cultures and two languages.
Since deaf people frequently use sign language, with its lack of sound and constant
visual communication, they face difficulties accessing written text in web content and web
applications, placing them at a disadvantage.
There are several arguments and motivations for providing sign language video on the
Web.
– Demographics data.
– Literacy and access to information.
– Reading ability.
– Navigation ability.
– Multilanguage requirements.
2.1 Demographics data
Studies by international organizations have shown that there are around 650 million people
with some form of disability, representing almost 10% of the total population in the world.
Around 28 million people living in the United States have a certain degree of hearing loss
[13,17,26]. In the European Union, the tendency towards an aging population is apparent
and is expected to be associated with an increase in the number of people suffering from
hearing loss.
According to research by the Institute of Hearing Research (IHR) in the UK and
according to the survey of Shiel (2006) [31], approximately 71 million people worldwide
had a degree of hearing loss in 2005.
The World Health Organization (WHO) estimates that, due to the aging of the population,
some 90 million people will be living with a degree of hearing loss greater than 25 dB by
2015. About 4 million people in the European Union are reported as profoundly deaf.
With regard to children, the IHR estimates that there are 174,000 children in Europe as a
whole with severe hearing loss and another 600,000 with mild hearing loss. These statistics
make the hearing-disabled population one of the largest minorities facing the challenge of
communication that is mainly audio-based [2].
2.2 Literacy and access to information
Based on data collected by the World Federation of the Deaf (WFD), around 80% of deaf
people worldwide have an insufficient education and/or literacy problems, lower verbal
skills and mostly chaotic living conditions [45]. Other research shows that deaf people are
often faced with the problem of acquiring new words and notions [25,27]. Because of the
many grammar differences between their mother tongue and sign language, the deaf person
might be fluent in sign language while experiencing problems reading their mother tongue.
According to Holt, the majority of deaf 18-year-olds in the United States have poorer
reading capabilities of English in comparison to 10-year-old students who are not deaf [18].
Some studies that have examined the reading ability of deaf 16-year-olds have shown
that about 50% of the children are illiterate. Of these, 22% displayed a level of knowledge
equivalent to that of a 10-year-old child who is not deaf and only 2.5% of participants
actually possessed the reading skills expected for their age [14]. Also, other studies in the
United States by Farwell have shown that deaf people face difficulties when reading written
text [11]. The average literacy rate of a high school graduate deaf student was similar to that
of a non-deaf student in the third or fourth grade.
Access to information is also important in cases of emergency. Past disasters
around the world have shown that, at the time of an accident, people with disabilities
did not receive the same assistance and appropriate information as other people did.
The United Nations Convention [38] calls upon States to develop measures for emergency
services (article 9 (1) (b)). Video messages for deaf people have rapidly become
one of the more popular methods of sending them information but, unfortunately, most
countries' emergency services do not allow for video communications with deaf people.
The reason lies in the communication protocols, which are not compatible with each
other.
2.3 Reading ability
It is surprising and disappointing that many deaf and hard-of-hearing people, particularly
those for whom sign language is the first language, have reading difficulties [13].
The problem arises because writing was developed to record spoken language and
therefore favours those with the ability to speak. Spoken language contains phonemes
which can be related to the written word by mind modelling. Due to the lack of audible
components, and the consequent difficulties in understanding written words, this cannot
be done by deaf people. However, this is not applicable to all, because some
completely deaf people become excellent readers, while even people with lower hearing
loss may have difficulties with reading. Depending on their education and the social
community, some deaf people use lip-reading for easier communication and assistance
in learning to read.
According to Hanson [13], the language experience for each deaf or hard of hearing
individual is not implicit because it is about different personal knowledge and skills, such as
sign language, clear speaking, lip-reading and textual reading. This knowledge has
implications for web designers who have to use sign language videos to meet the needs of
deaf and hard-of-hearing users.
2.4 Navigation ability
Another motivation for the integration of sign language videos into web sites is that sign
language improves the taxonomic organization of the mental lexicons of deaf people. The
use of sign language on the Web would, in this case, reduce the requirements for the rich
knowledge of the words and notions of another language. A knowledge of words and
notions is of the utmost importance for navigation in and between web sites and for the use
of hyperlinks, such as in online shopping web sites where a lot of different terms and sub-
terms in vertical and horizontal categories appear. Unfortunately, deaf people have
problems understanding the meaning of words and notions, especially when it is necessary
to understand certain notions in order to correctly classify and thus understand either a word
or another notion [25].
2.5 Multilanguage requirements
One of the important requirements is multilanguage support, especially in Europe. For
example, tourist information, governmental and emergency service web designers need to
construct web sites in English, German, Italian, French and even in Hungarian. This is
particularly true for small countries such as Slovenia, which are surrounded by several
countries with rich language backgrounds. In some countries sign language is also
recognized as an official national language, and therefore there is a strong need to include
sign language translations into web sites.
3 Related work
Several solutions exist which are designed for deaf and hard-of-hearing people with
a knowledge of sign language, all of which share the integration of sign language videos
into the web pages, which requires additional space for the video. This method drastically
reduces the area that can be used for the usual positioning of web material such as text,
pictures and other multimedia elements.
In projects, such as SMILE [24], ShowSounds [41], Signing Web [12], ATBG [36],
SignOn [16], History of the Deaf [6] and Signing Savvy for American Sign Language
(ASL) [32] (that originates from the Sign Language Browser project from Stewart [37]), it
is obvious that web sites must be carefully planned to position the video in exact locations
on each web page.
The demand for the constant presence of video on the web site is, unfortunately,
totally inadequate for classical informative web sites which cover huge amounts of
daily information. The question that immediately arises here is this: How can we solve
the problem of including a sign language video clip without breaking the existing design
of the web page, while at the same time, with the minimum coding, enabling the option to
play the video clip?
The basic idea for our project of transparent sign language video derives from the
method of creating a transparent background for Adobe Flash™ video combined with DHTML
(Dynamic HyperText Markup Language) layers. On the one hand, the transparent
background of the video gives the impression that the sign language interpreter is cut
out, as the background of the video is not seen due to the transparency; on the other hand,
DHTML allows the video to appear as an additional layer, which allows the structure of the
web page to be preserved.
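Although the original module was built with Adobe Flash and DHTML, the layering idea can be sketched independently of those technologies. The helper below computes the style properties of an absolutely positioned, transparent layer stacked above the page; the function name, property values and z-index are our own illustrative assumptions, not the authors' code:

```javascript
// Compute the CSS properties for a transparent video layer placed
// near a trigger element, without altering the page structure.
// (x, y) is the trigger's page position; width/height is the clip size.
function overlayStyle(x, y, width, height) {
  return {
    position: "absolute",      // stack the layer over the page flow
    left: `${x}px`,
    top: `${y}px`,
    width: `${width}px`,
    height: `${height}px`,
    background: "transparent", // only the cut-out interpreter is visible
    zIndex: 1000               // above the normal page content
  };
}
// In a browser, these properties would be assigned to a <div> wrapping
// the video, e.g. Object.assign(layerElement.style, overlayStyle(...)).
```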
The previous work of other projects suggests that a similar idea has not yet been
implemented for deaf people who use sign language as their primary language. Because of
its simplicity and ease of use (adjusting a web site requires minimal changes) the solution
could be integrated into the WCAG Guidelines for the needs of deaf and hard-of-hearing
people. Currently, the Web Content Accessibility Guidelines state that using "clear and
simple language also benefits people whose first language differs from your own,
particularly those people who communicate primarily with sign language" [44], and do
not yet determine that transparent video should be used, which would allow for the rapid
and easy integration of sign language videos.
In addition, other studies have shown that natural videos are more welcome and accepted
by the end users than signing avatars and synthetic gestures [29]. Due to this fact, a higher
value has been set on the video quality of the sign language interpreter.
Other studies have aimed at the use of technology to translate spoken words into sign
language, such as the surveys of Vogler [42] and Bangham (project ViSiCAST) [1], the
animated ASL generator by Huenerfauth [21], and eSign [9]. Although these translation methods
may be appropriate for certain areas, the aim of our project was not translation or
interpretation.
Instead of focusing on the translation between written/spoken words and sign language,
we wanted to find an appropriate solution to simplify the design and implementation of sign
language video clips as transparent videos over the existing web pages in a way that is
friendly to users and authors. Our idea was to provide deaf people with a new option for
easy and rapid access to information, tailored to their needs without discriminating against
other users. In addition to this idea of a transparent video, we wanted to find a link between
the existing elements of the page and the video translations. The missing link in this
communication is the transfer of the relevant sign language video, which may even be
animated, from the web server (for example, the Sign Savvy portal) to existing web sites.
4 Requirement analysis: videos used by hearing impaired people
It is a fact that deaf and hard-of-hearing users whose first language is sign language require
translations of the written text on web sites which are written in their second language. One
of the limitations of providing such sign language videos is the high costs of producing,
processing, saving and exchanging videos suitable for written parts, whether for single
words or notions. Other obstacles are high demands and the requirements of making videos
usable for all deaf and hard-of-hearing users, including the elderly [20].
The aspects listed below were determined based on the results of other research
projects and the methodology of needs analysis as supported by the European
Commission [34]. Those methods include personal interviews and brainstorming with
end users, such as the directors of educational institutions, specialists for teaching people
with special needs, and teachers who are also themselves people with special needs.
Questionnaires were also used, and discussion panels for
brainstorming and feedback were organized with the help of three workshops for teachers from
educational institutions for people with special needs. These teachers also use ICT in
the deaf educational process.
4.1 Web design requirements
Based on the Web Content Accessibility Guidelines (WCAG 2.0), published by the Web
Accessibility Initiative (WAI), web application design for deaf and hard-of-hearing users
should contain a clear presentation of information and data (Guideline 1.4 Distinguishable:
"Make it easier for users to see and hear content including separating foreground from
background" [44]). In this segment one can find instructions and guidelines for colour,
audio, contrast, resizing and images of text. However, there are no instructions or
guidelines provided for sign language videos aside from those in Guideline 1.2 Time-based Media:
"Provide alternatives for time-based media", which is designed to provide access to time-based
and synchronized media on the Web.
Requirements for signing and subtitling are also defined in the EBU report on Access
Services by the European Broadcasting Union (EBU) standard [8], which precisely defines
recommendations for people with disabilities in the broadcasting industry. This report
presents the recommendations for the broadcasting industry and it would be reasonable to
transfer these recommendations to the Web.
At the moment, the most popular approach that web developers use for designing web
sites for deaf and hard-of-hearing users is the integration of Adobe Flash Player into a
specific section of the page where the sign language video is shown. The advantage of this
approach is the cross-browser compatibility, and the lack of security issues but the obstacles
include the fact that informative web sites frequently contain text, images and photos
expanding throughout the whole page, leaving no space for the video. One of the possible
solutions could be the use of a popup window, although this may interrupt users visual
contact/focus with the content beneath. Also, as a matter of security, some browsers prevent
popup windows by default, and not all of our target users are sufficiently knowledgeable to
tailor their settings accordingly. Other ways of presenting the sign language videos include
launching a local application player such as Windows Media Player™, RealPlayer™ or
QuickTime™. In this way, the designers confront difficulties since each player requires
different implementation approaches.
Web portals aimed at deaf people are mostly designed to contain a sign language video
constantly in a certain fragment of the web page, such as in Signing Savvy for ASL [32],
online courses, like ECDL material developed within the DISNET project [7] and
e-Learning material ATBG [36]. The video-sharing portal YouTube [46] is an example of
an environment where video clips are created and represent the primary piece of
information.
A needs analysis showed that the end users' main requirement was sign language videos,
which should appear on demand by clicking on the appropriate icon or on a multimedia
element on the web site.
Consequently, our idea was to enable the preparation and embedding of a sign language
video with a link from any element of the web site, whether that be a word, a sentence, a
paragraph or whole block of text, a picture, an animation or even another video clip.
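This element-to-clip association can be sketched as a simple lookup table keyed by trigger element. The identifiers and file names below are hypothetical, chosen for illustration rather than taken from the system described:

```javascript
// Map trigger elements (words, images, paragraphs) to sign language
// video clips. Keys and URLs here are illustrative placeholders.
const clipMap = new Map([
  ["term-accessibility", "videos/accessibility.flv"],
  ["intro-paragraph", "videos/intro.flv"]
]);

// Resolve which clip a clicked trigger should play; returns null
// when the element has no translation attached.
function clipForTrigger(triggerId) {
  return clipMap.get(triggerId) ?? null;
}
// In a browser, a click handler on each trigger element would call
// clipForTrigger(element.id) and load the clip into the video layer.
```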
4.2 Accessibility requirements
The basic aspects of accessibility set out in the needs analysis for sign language video have
been divided into seven functionalities.
– Video control.
– Video image resizing.
– Adding subtitles.
– Slowing down the video.
– Shifting the video across the web page.
– Rapid display of the video.
– Adding sound for deaf and hard-of-hearing people.
The first, most simple aspect of accessibility is the ability to pause and stop the video
at any time whenever the clip is longer than 5 s (WCAG 2.0 Guideline 2.2.2) [44].
The deaf person can stop it playing to gain more control over the use of the sign language
video display.
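A minimal check for this rule might look as follows; the 5-second threshold comes from WCAG 2.0 Guideline 2.2.2, while the function name is our own:

```javascript
// WCAG 2.0 Guideline 2.2.2: moving or auto-playing content longer
// than 5 seconds must be pausable/stoppable by the user.
function pauseStopRequired(durationSeconds) {
  return durationSeconds > 5;
}
```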
The second aspect is the possibility of increasing the size of videos, so that deaf people
can see the facial expressions and gestures of hands better. However, the increase should
not mean lowering the quality of the video clip, which commonly occurs where the video is
compressed down to a small size, such as 144×176. In this case, it is necessary to compress
the video to a format large enough to allow a fairly clear picture, even in the case of a
gradual size increase.
The third is the inclusion of subtitles, which deaf and hard-of-hearing people use to
assist their comprehension of the fluent signing of the sign language interpreter. Providing
equivalent alternatives to auditory and visual content is one of the WCAG 2.0 guidelines.
One of the most important aspects of accessibility for deaf people viewing sign language
online is the option to slow down the video clip so they can more easily follow individual
gestures. This requirement is also reflected in other tools and projects such as Studio
SignSmith, which allows the content developer to manually specify the occurrences of
pauses, and content-scripting tools from the eSign project, which give similar control over
speed, timing and pauses [23]. This enables the adaptation of the video clip to different
users with different literacy abilities in sign language. The latest video players, such as
Windows Media Player™, MPlayer™ and Adobe Flash™, already provide functionality
for control over speed, timing and pauses.
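This speed control can be sketched as a clamped playback rate. The bounds used here are illustrative choices for keeping the signing intelligible, not values prescribed by the paper:

```javascript
// Clamp a requested playback rate to a range that keeps the signing
// followable; 0.25x-1.0x are our own illustrative bounds.
function clampPlaybackRate(requested, min = 0.25, max = 1.0) {
  return Math.min(max, Math.max(min, requested));
}
// In a browser with an HTML5 player this would drive, e.g.:
// videoElement.playbackRate = clampPlaybackRate(0.5);
```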
The fifth aspect of accessibility is the possibility of manually moving the video clip
around the web site. When the sign language video is displayed over an existing display, it
makes sense to add the functionality of moving the clip to another part of the screen when
desired by the user. This allows the user simultaneously to overview the translated text and
view the sign language video.
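The repositioning requirement can be sketched as a clamping function that keeps the dragged layer fully inside the viewport, so the interpreter cannot be lost off-screen. The function name and viewport model are our own assumptions:

```javascript
// Keep a dragged video layer fully inside the viewport.
// (x, y) is the requested top-left corner of the layer.
function clampPosition(x, y, layerW, layerH, viewW, viewH) {
  return {
    x: Math.min(Math.max(x, 0), viewW - layerW),
    y: Math.min(Math.max(y, 0), viewH - layerH)
  };
}
// A browser drag handler would apply the clamped result to the
// layer's left/top style properties on each mouse-move event.
```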
The sixth aspect requires the rapid display of video on the web site. Extended waiting
time for a video to load may lead to the confusion of the deaf person, since there is no
proper feedback on what is going on. Deaf people are especially intolerant of the time lag
while playing and require a quick response from the system. It makes sense, therefore, to
produce a video that contains markers, so that the video is only loaded once in a set of
e-learning materials, and is then switched on by markers when the user moves across web
sites. This principle was also used in the ECDL (European Computer Driving Licenses)
courses for deaf people [7].
The last aspect is the use of sound. Although the sign language video is aimed at deaf
people who do not hear sound, the video can also be used for hard-of-hearing people who
wear hearing aids and still know sign language. Research by Debevc [6] has shown that it is
appropriate to add sound or a spoken translation in the video clip, together with subtitles,
which can be used by hard-of-hearing persons with a hearing aid, who also know sign
language. The combination of audio, video and subtitles allows a user to choose which
object will receive higher attention.
4.3 Video quality requirements
The criteria for the quality of the video were decided by drawing on research into measuring
the quality of video communication for the deaf, and are based on work on standardizing video
presentations [13]. Hellström [15] proposes a CIF resolution (352×288) for the video and a
3:4 aspect ratio to frame the upper body and signing space.
Ensuring good quality of service is also important. Enabling quality videos requires
evaluation of the message media (noise, delays and jitter) and of the clearness and
comprehension (intelligibility) of the message. One of the crucial criteria for the
quality of a sign language video is the minimal frame rate, which has to be higher
than 15 frames per second (fps), as otherwise there will be a significant impact on the
transmission and comprehension of sign language [29]. The compression ratio must be
optimized such that it allows for good visual detection of hand movements and facial
expressions. For the deaf and hard-of-hearing people, it is important that details in
motion can be reproduced so that fingers, eyes and mouth are distinguishable even for
signs consisting of both hands and arms moving with all the fingers displayed. Blurry
fingers in motion (e.g. Hellström) are acceptable, though clearly visible fingers are
preferred.
And finally, acceptable delivery time of the video is an essential point when making the
usage comfortable for the deaf and hard-of-hearing. According to Hellström, the picture
delay should be less than 1.2 s, which is considered an acceptable delivery time.
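The thresholds discussed in this subsection (at least 15 fps, picture delay under 1.2 s, CIF resolution) can be gathered into one validation sketch. The function name, field names and the decision to report a list of issues are our own illustrative choices:

```javascript
// Check a candidate encoding against the quality thresholds cited
// above: >= 15 fps, picture delay below 1.2 s, and at least CIF
// (352x288) resolution for the signing space.
function signVideoQualityIssues({ fps, delaySeconds, width, height }) {
  const issues = [];
  if (fps < 15) issues.push("frame rate below 15 fps");
  if (delaySeconds >= 1.2) issues.push("picture delay 1.2 s or more");
  if (width < 352 || height < 288) issues.push("resolution below CIF (352x288)");
  return issues; // an empty array means all requirements are met
}
```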
A study from the Cavender project MobileASL [4] regarding the use of sign language on
mobile phones shows that the most appropriate encoding is at least 15 fps, but for deaf and
hard-of-hearing people even encoding at 10 fps can be used, with the difference that, in this
case, the facial region must be encoded in better quality than the other regions of the video
image. Using eye tracking research [27], it was found to be normal to perceive the facial
region at a high visual resolution and the movements of the hands and arms at a lower
resolution due to parafoveal vision. Accordingly, it appears that sign language can also be
Multimed Tools Appl (2011) 54:181199 189
used for mobile phones together with web applications if the encoder developed within the
project MobileASL is used.
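The region-of-interest idea behind MobileASL can be illustrated by a toy bit-budget split: give the facial region a larger share of the bitrate than its share of the frame area. This sketch and the function name are our own assumptions, not the MobileASL encoder:

```javascript
// Illustrative sketch of region-weighted bitrate allocation, in the
// spirit of MobileASL's region-of-interest encoding [4]: the facial
// region is weighted more heavily than its share of the frame area.
function allocateBitrate(totalKbps, faceAreaFraction, faceBoost) {
  var faceWeight = faceAreaFraction * faceBoost; // boosted face weight
  var restWeight = 1 - faceAreaFraction;         // rest of the frame
  var sum = faceWeight + restWeight;
  return {
    faceKbps: totalKbps * faceWeight / sum,
    restKbps: totalKbps * restWeight / sum
  };
}
```

With a face occupying 20% of the frame and a boost factor of 3, the face region ends up with a higher bit density (bits per unit area) than the rest, while the total budget is preserved.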
5 Transparent video for deaf
5.1 Sign language interpreter module
Our proposed solution for information retrieval is the Sign Language Interpreter Module
(SLI Module) [33], which uses a multimodal approach for combining media elements, such
as video, audio, subtitles and media navigation controls, into a new layer. Snoek and
Worring [35] have extended the definition of multimodality from Nigay and Coutaz [28], and
define it as the capacity of an author of a video document to express a predefined semantic
idea, by combining a layout with a specific content, using at least two information channels.
Similarly, the SLI Module considers three channels/modalities within a video document:
Visual modality: sign language interpreter.
Auditory modality: speech.
Textual modality: subtitles.
In our case, these modalities are manifested as transparent videos displayed over the
existing web pages, instead of the usual statically positioned videos. With this method,
the structure of the web site remains unaltered, while providing a straightforward addition
for deaf and hard-of-hearing end-users (Fig. 1).
The web site therefore combines the video of a sign language interpreter, sound and
subtitles over existing, static web pages, as a transparent video triggered at the request
of the user. After the termination of a short video clip with control options (pause,
backward, forward, stop), the original web page is displayed. The user only needs to select
the appropriate icon or other content-aware element, which represents the hyperlink to the
sign language video, and view the translation. The system is thus simple for authors of web
sites: it is only necessary to add a hyperlink element that plays the interpreter video at
the appropriate location and connects to the server where the sign language video for the
given word, text or other multimedia presentation is located.
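The author-side step described above can be sketched as follows. The markup, the icon file name, the URL scheme of the video server and the `sliPlay()` function are illustrative assumptions of ours, not the actual SLI Module API:

```javascript
// Sketch of the markup an author would add at the appropriate location:
// a small image button that, when clicked, asks a hypothetical sliPlay()
// function to fetch the sign language clip for a given term from the
// central video server.
function sliIconHtml(term, videoServer) {
  var src = videoServer + '/videos/' + encodeURIComponent(term) + '.swf';
  return '<img src="sli-icon.png" width="15" height="15" ' +
         'alt="Play sign language video for ' + term + '" ' +
         'onclick="sliPlay(\'' + src + '\', this)">';
}
```

An author would call such a helper once per annotated term, or simply paste the equivalent static HTML next to the word to be translated.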
During the development stage, the language barriers of deaf and hard-of-hearing users
and their bilingualism, characteristics of the major part of this population, have been taken
into account. This offers deaf and hard-of-hearing users the translation of a certain word,
text, image, photo, animation, video or any other element on the web site. In a nutshell, the
SLI Module system offers:
Enabling of transparent videos over an existing web page without altering its structure.
Synchronization of video, audio and subtitles.
On-demand activation by user.
Presentation of videos anywhere on the web page.
Control over usersaccess.
In order to simplify the use of video clips, a central video server that stores sign
language video translations is used; these are then used in web applications. In addition to
the definitions, even longer video clips can be loaded to the server with detailed
explanations of the individual words, concepts or texts, or summarized text for those deaf
people who need a simplified explanation. With the proper icon indicating access to the
sign language video, users are pointed to the locations where they can start an
interpreter video. A video of the interpreter is then played in a transparent frame beside
this icon, as an HTML DIV element over the existing web page. Users can hide the video,
which is then automatically ended, by clicking the x-sign, a widely known symbol for
cancelling an operation.
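The overlay mechanism just described can be sketched in a few lines of DOM code. This is a simplified illustration under our own assumptions (class names, the 20-pixel offset, and the omitted Flash embedding are not from the original implementation):

```javascript
// Sketch: play the interpreter video in a transparent DIV placed over
// the page beside the activation icon, with an "x" control that hides
// the DIV and thereby ends playback. The Flash embedding itself is
// omitted; a real implementation would insert the SWF player here.
function showSliOverlay(doc, icon, videoUrl) {
  var overlay = doc.createElement('div');
  overlay.className = 'sli-overlay';
  overlay.style.position = 'absolute';          // positioned beside the icon
  overlay.style.left = (icon.offsetLeft + 20) + 'px';
  overlay.style.top = icon.offsetTop + 'px';

  var close = doc.createElement('span');
  close.className = 'sli-close';
  close.textContent = 'x';                      // widely known cancel symbol
  close.onclick = function () {
    overlay.parentNode.removeChild(overlay);    // hide overlay, end video
  };

  overlay.appendChild(close);
  // sliEmbedVideo(overlay, videoUrl) would go here (SWF embedding omitted)
  doc.body.appendChild(overlay);
  return overlay;
}
```

Because the overlay is an absolutely positioned DIV, the underlying page layout is untouched, which is exactly the property that keeps the web site structure unaltered.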
5.2 Creating transparent sign language video
Fig. 1 Modalities of a sign language interpreter module
The quality of transparent video required for deaf and hard-of-hearing users can be
achieved only on the basis of high-quality, clean, basic DV video, since these users
require clean video in order to focus on details such as the finger movements of the sign
language
and the lip movements for lip-reading. Video is recorded with a person standing in front of
a green background, also known as a chroma key background. It is clear that the person
should not be wearing anything of the same colour as the background. This provides the
necessary contrast between the standing person and the background, and enables the
removal of the background with video modeling software. The result appears as a video
with a so-called transparent (non-opaque) background.
The video recorded on the computer must be uncompressed and in its original size. For
the multimedia container format in our project, we have used uncompressed Audio Video
Interleave (AVI), with a resolution of 750×567, 25 frames per second, with 48 kHz audio
and 32-bit sampling. The video was then imported into the software, where the Colour Key
effect was used to remove the green background and soften the edges between the object
and the background. This procedure and the use of the Shockwave Flash (SWF) format have
resulted in a high-quality video, presenting the interpreter's movements clearly enough
for users to see the facial mimicry and to focus on the fingers without a blurry image.
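The core of the chroma-key step can be illustrated with a per-pixel sketch over RGBA data: pixels close enough to the key colour (green) are made fully transparent. The actual production used a Colour Key effect in video editing software; the function and threshold rule below are illustrative assumptions:

```javascript
// Simplified sketch of chroma keying on a flat RGBA pixel array
// (4 bytes per pixel, as in canvas ImageData). A pixel counts as
// "green enough" when green dominates both red and blue by the
// threshold; its alpha is then zeroed, making the background transparent.
function chromaKey(rgba, threshold) {
  var out = rgba.slice(); // do not mutate the input
  for (var i = 0; i < out.length; i += 4) {
    var r = out[i], g = out[i + 1], b = out[i + 2];
    if (g - r > threshold && g - b > threshold) {
      out[i + 3] = 0; // zero alpha -> transparent background
    }
  }
  return out;
}
```

A production tool additionally softens the resulting edges (partial alpha around the silhouette), which this sketch omits.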
Figure 2 presents an example of the conversion of the image into a transparent form. The
first image shows the original image with a green background; the second shows a
transparent image with the background removed (displayed here as black). The third image
shows the softening of the edges of the body, so that the person in the picture blends
better into the background of the web page.
The original video was first exported into a transparent Flash video with a resolution of
240×320 (QVGA format), a frame rate of 25 fps and a bit rate of 700 kilobits per second.
Such a Flash video is suitable for conversion to the Shockwave Flash (SWF) format, which
was designed to deliver vector graphics and animation over the Internet.
With this procedure, we achieved sufficient video detail of the sign language
interpreter's movements, so that users can see facial expressions and fingers without
blurriness.
5.3 Integration process
As already mentioned, one of the key problems in the construction of web sites for people
with disabilities is the implementation of accessibility features.
Two of the ways in which the implementation process is possible include the designing
of a new web site, similar to the existing non-accessible web site, with added accessibility
features, or an upgrade of the existing web site.
a) original image, b) image with removed background, c) edge sharpening
Fig. 2 Edge sharpening procedure
Designing a parallel web site in another language is often time-consuming; an upgrade is
therefore more suitable. Integration is the crucial step; consequently, the integration of
new code into the web pages must be as simple as possible.
In planning the integration of the SLI Module into existing web sites, we considered ease
of implementation. By integrating HTML and JavaScript code, we were able to satisfy the
following conditions.
Cross-browser compatibility (Microsoft Internet Explorer and Mozilla Firefox).
Viewing several transparent video layers on the existing web page.
The inclusion of the modalities is illustrated in Fig. 3. Video production is done
separately, in the background of the entire process, and its output is the transparent
Flash videos. Video linkage and activation events are integrated into the existing web
page with HTML and are performed with the help of JavaScript code blocks.
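Cross-browser compatibility between Internet Explorer and Firefox was, at the time, mostly a matter of event handling: IE used `attachEvent`, Firefox the W3C `addEventListener`. A sketch of the kind of helper such JavaScript code blocks rely on (the helper name is our own):

```javascript
// Cross-browser event binding for the browsers named above: W3C
// addEventListener (Firefox), attachEvent (Internet Explorer), and a
// last-resort on<type> property fallback.
function addHandler(element, type, handler) {
  if (element.addEventListener) {          // W3C browsers (Firefox)
    element.addEventListener(type, handler, false);
  } else if (element.attachEvent) {        // Internet Explorer
    element.attachEvent('on' + type, handler);
  } else {                                 // very old browsers
    element['on' + type] = handler;
  }
}
```

Binding the activation icons through one such helper keeps the integration code identical across both target browsers.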
All implementation of the video layer is carried out when the user clicks on the icon to
view the video. An example of the activation icon is shown in Fig. 4. From the end-user's
perspective, the only additional element is the activation icon, an image button. The
dimensions of the icon in our system are 15×15 pixels, so it can be inserted into the text
between the lines of a paragraph without altering the text. This feature is a notable
distinction in comparison to current solutions on the market, which use larger, statically
integrated sign language videos (at least 240×320 pixels).
6 Evaluation
6.1 Goal of testing
Fig. 3 Inclusion of the modalities into existing web pages
During the test of our system, we obtained feedback from end users (deaf and
hard-of-hearing) on user-experience satisfaction [19]. Since this is an incremental
prototype, we did not want to test the effectiveness of the browser application, such as
the capture of
quantitative data (for example, the number of errors made). We used a questionnaire to
verify the general impression of the usefulness of the system. In order to gain maximal
insight, such indirect usability tests as questionnaires must be supplemented with direct
usability tests, e.g. thinking aloud or observation [30]. Since standard thinking aloud was
not applicable, we applied the Gestural Talk Aloud Method (see Section 6.3).
6.2 Experimental setting
N=18 deaf people between the ages of 17 and 51, whose first language is sign language,
participated in the evaluation of the usefulness of the transparent video model.
Participants tested a prototype web site containing four video icons that launch sign
language videos. The sign language videos used the common x-button for closing / ending
video playback.
6.3 Methodology
The data in both experiments was collected through the Gestural Talk Aloud Protocol [30]
with simultaneous translation between speech and sign language, a short questionnaire
consisting of four closed questions, and a group discussion.
6.4 Procedure
The entire procedure lasted approximately 1 h and was conducted in the order presented in
Table 1. Before the users began to work, they received a brief overview presentation of the
entire evaluation. First, we asked them to start the web browser (Microsoft Internet
Explorer or Mozilla Firefox) and connect to the URL of the system. Then we asked the users
to read the text and activate the interactive video icon as soon as they noticed it in the
text; in our case, the icon was an image of a TV. To reduce the influence of prior Internet
skills, participants were first shown a sketch of the visual appearance of the icons. They
could watch the video, according to their needs, within a 20-minute time frame.
At the end of the test, the participants completed a short questionnaire containing four
yes/no questions. This was followed by a group debate on the advantages and disadvantages
of the system and their general impression of the sign language video. The group debate
lasted about 30 min.
7 Results and discussion
Fig. 4 Image button for activating the transparent sign language video
Based on the results from the questionnaire, all users in the evaluation study showed a
positive attitude towards the video; the first question was answered with yes by all users
(Table 2). Most of the users (88%) would prefer to have transparent video on all pages,
while 12% of participants did not show an interest in having video available. The video
resolution in our experiment was 450×550 pixels, which was sufficient for 67% of the
users, while 33% of the participants found it oversized. This confirmed the necessity of a
user-controlled video-resizing function. The appearance of the interactive video icon, in
the shape of a TV symbol, was well accepted by 83% of the users.
These results and the experiences gathered through the evaluation show that the
transparent sign language video idea is appropriate for the users and satisfies the
defined accessibility requirements.
The evaluation findings revealed that deaf and hard-of-hearing users appreciate
transparent sign language video and would want to use it in the future. The results also
confirm that transparent video would be an appropriate added value to existing web sites.
We have to stress that the goal of our study was primarily to obtain valuable information
about our prototypes from the aspect of satisfaction, not to test the common task-oriented
parameters of browser testing, such as effectiveness and efficiency, or to statistically
compare our prototype with other approaches (e.g., videos in a separate window, or videos
statically integrated in the layout); this will be done in future work.
8 Conclusion
According to previous research, there is an urgent need to improve the accessibility of
web sites, especially for deaf and hard-of-hearing persons, who are unfortunately
fundamentally deprived of translations into sign language, their first language. Although
standards such as WCAG 2.0 address the needs of deaf and hard-of-hearing people, these
standards are more or less adapted to the presentation of information in written form, and
are not
Table 1 Evaluation procedure
Activity Time
1. Introduction of the evaluation study 5 min
2. Tasks: 20 min
2.1 Open a browser
2.2 Go to URL of the system
2.3 Read the text and find the interactive icon
2.4 Activate the interactive icon and watch the video
3. Fill out the questionnaire 5 min
4. Participate in a group discussion 30 min
Table 2 Questionnaire results
Questions Yes No
1. Did you like the video? 18 (100%) 0 (0%)
2. Would you prefer video to be present on all the Web pages? 16 (88%) 2 (12%)
3. Was the video size too large? 6 (33%) 12 (67%)
4. Did you like the interactive icon used? 15 (83%) 3 (17%)
sufficient for the vast majority of the deaf and hard-of-hearing for whom it is necessary to
have information presented in the form of a sign language video.
In developing the SLI Module for the presentation of sign language videos with the help of
web layers, the basic requirements of deaf and hard-of-hearing users were taken into
account: appropriate video size; quality of service; and the accessibility and linguistic
characteristics of the deaf and hard-of-hearing, including bilingualism, a characteristic
of a substantial part of this population. The SLI Module offers the possibility of
prioritizing sign language and emphasizing the importance of acquiring knowledge and
delivering information in that language. This offers deaf and hard-of-hearing people whose
first language is sign language the translation of specific words, texts, images, photos,
animations or video clips found on the web.
The SLI Module is primarily intended for deaf and hard-of-hearing people, so it adjusts to
their needs on a contextual and technical basis. The novelty of this system is evident from the
fact that the screen of the web site combines video, audio, subtitles and navigation options over
the existing web page, which usually contains a lot of text and static positioning (for example,
e-material, governmental web pages), as a transparent video at the request of the user.
Results of the evaluation show that deaf users appreciated the SLI Module and expressed a
strong desire to use this form of sign language presentation on other web sites. In the
long run, the SLI Module offers easy and fast integration of sign language video into
existing web pages without much effort. For the system to function, the JavaScript
implementation code for retrieving and presenting the sign language video must be
included, and the interactive icons positioned at the appropriate locations on the web
page.
Using the sign language video in materials also increases daily exposure to sign language,
thereby enabling deaf and hard-of-hearing people to use the material in working and social
environments as well as in education, as they can more easily repeat the difficult parts
and learn to be independent. Providing more materials in sign language, their mother
tongue, is also expected to increase deaf people's literacy in their other, written
language. It would then be easier for them to integrate into mainstream society while
maintaining their own identity, improving their self-esteem and developing their culture
and language.
Acknowledgement The project is partially supported by the European Commission within the framework
of the Lifelong Learning Programme, project DEAFVOC 2. It is also partially supported by the Slovenian
Research Agency within the Science to Youth programme, which provides financial support to young
researchers. Special thanks go to Petra Rezar from the Ljubljana School for the Deaf for her comments
and suggestions while using the first prototype of the application, to the Association of the Deaf and
Hard-of-Hearing People of Podravje for their help in the evaluation of the application, as well as to
Milan Rotovnik from the University of Maribor for his work on transparency.
References
1. Bangham J, Cox SJ, Lincoln M, Marshall I, Tutt M, Wells M (2000) Signing for the deaf using virtual humans. In: Proceedings of the IEE Seminar on Speech and Language Processing for Disabled and Elderly People (Ref. No. 2000/025), pp 4/1–4/5
2. Beskow J, Engwall O, Granström B, Nordqvist P, Wik P (2008) Visualization of speech and audio for hearing impaired persons. Technol Disabil 20(2):97–107
3. Brophy P, Craven J (2007) Web accessibility. Libr Trends 55(4):950–972
4. Cavender A, Rahul V, Barney DK, Ladner RE, Riskin EA (2007) MobileASL: intelligibility of sign language video over mobile phones. Disabil Rehabil Assist Technol 3(1):93–105
5. Davis CD (1999) Visual enhancements: improving deaf students' transition skills using multimedia technology. Career Dev Except Individ 22(2):267–281
6. Debevc M, Peljhan Z (2004) The role of video technology in on-line lectures for the deaf. Disabil Rehabil 26(17):1048–1059
7. Debevc M, Stjepanovič Z, Povalej P, Verlič M, Kokol P (2007) Accessible and adaptive e-learning materials: considerations for design and development. In: Universal access in human-computer interaction. Applications and Services, Lecture Notes in Computer Science (LNCS 4556). Springer, Berlin, Heidelberg, pp 549–558
8. EBU Technical-Information I44-2004, EBU report on Access Services (includes recommendations). http://www.ebu.ch/CMSimages/en/tec_text_i44-2004_tcm6-14894.pdf. Accessed 15 Sep 2009
9. Elliott R, Glauert J, Kennaway J, Marshall I, Safr E (2004) Linguistic modelling and language processing technologies for Avatar-based sign language presentation. Univ Access Inf Soc 6(4):375–391
10. Fajardo I, Cañas JJ, Salmerón L, Abascal J (2006) Improving deaf users' accessibility in hypertext information retrieval: are graphical interfaces useful for them? Behav Inf Technol 25(6):455–467
11. Farwell RM (1976) Speechreading: a research review. Am Ann Deaf 121(1):18–22
12. Fels DI, Richards J, Hardman JL, Daniel G (2006) Sign language web pages. Am Ann Deaf 151(4):423–433
13. Hanson VL (2009) Computing technologies for deaf and hard of hearing users. In: Sears A, Jacko JA (eds) Human-computer interaction handbook: fundamentals, evolving technologies and emerging applications, 2nd edn. Erlbaum, Mahwah, pp 885–893
14. Hermans D, Knoors H, Ormel E, Verhoeven L (2008) The relationship between the reading and signing skills of deaf children in bilingual education programs. J Deaf Stud Deaf Educ 13(4):518–530
15. Hellström G (1997) Quality measurement on video communication for sign language. In: Proceedings of the 16th International Symposium on Human Factors in Telecommunications, Oslo, Norway
16. Hilzensauer M (2006) Information technology for deaf people. In: Kacprzyk J (ed) Studies in computational intelligence. Springer, Berlin/Heidelberg, pp 183–206
17. Holley MC (2005) Keynote review: the auditory system, hearing loss and potential targets for drug development. Drug Discov Today 10(19):1269–1282
18. Holt J (1995) Efficiency of screening procedures for assigning levels of the Stanford Achievement Test (Eighth Edition) to students who are deaf or hard of hearing. Am Ann Deaf 140(1):23–27
19. Holzinger A (2005) Usability engineering for software developers. Commun ACM 48(1):71–74
20. Holzinger A, Searle G, Nischelwitzer A (2007) On some aspects of improving mobile applications for the elderly. In: Constantine S (ed) Coping with diversity in universal access, Lecture Notes in Computer Science (LNCS 4554). Springer, Berlin, pp 923–932
21. Huenerfauth M (2008) Generating American Sign Language animation: overcoming misconceptions and technical challenges. Univ Access Inf Soc 6(4):419–434
22. Ivory M, Hearst M (2001) State of the art in automating usability evaluation of user interfaces. ACM Comput Surv 33(4):147
23. Kennaway J, Glauert J, Zwitserlood I (2007) Providing signed content on the Internet by synthesized animation. ACM Trans Comput-Hum Interact 14(3)
24. Kronreif G, Dotter F, Bergmeister E, Krammer K, Hilzensauer M, Okorn I, Skant A, Orter R, Rezzonico S, Barreto B (2000) SMILE: demonstration of a cognitively oriented solution to the improvement of written language competence of deaf people. In: Proceedings of the 7th International Conference on Computers Helping People with Special Needs (ICCHP). Karlsruhe, Germany
25. Marschark M, Green V, Hindmarsh G, Walker S (2000) Understanding theory of mind in children who are deaf. J Child Psychol Psychiatry Allied Discipl 41(8):1067–1073
26. Martini A, Mazzoli M (1999) Achievements of the European Working Group on genetics of hearing impairment. Int J Pediatr Otorhinolaryngol 49(1):155–158
27. Muir L, Richardson I (2005) Perception of sign language and its application to visual communications for deaf people. J Deaf Stud Deaf Educ 10(4):390–401
28. Nigay L, Coutaz J (1993) A design space for multimodal systems: concurrent processing and data fusion. In: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, pp 172–178
29. Olivrin GJL (2007) Is video on the web for sign languages. In: Proceedings of the W3C Video on the Web Workshop. San Jose, California and Brussels, Belgium
30. Roberts VL, Fels DI (2006) Methods for inclusion: employing think aloud protocols in software usability studies with individuals who are deaf. Int J Hum Comput Stud 64(6):489–501
31. Shiel B (2006) Evaluation of the social and economic costs of hearing impairment. In: Report for HEAR-IT. http://www.hear-it.org/multimedia/Hear_It_Report_October_ 2006.pdf. Accessed 15 Sep 2009
32. Signing Savvy (2009) http://www.signingsavvy.com/. Accessed 15 Sep 2009
33. Sign Language Interpreter Module Project (2010). http://www.slimodule.com/. Accessed 19 Feb 2010
34. Smith C, Mayes T (1995) Telematics applications for education and training: usability guide. Version 2, DGXIII 3/c, Commission of the European Communities. Brussels, Belgium
35. Snoek CGM, Worring M (2004) Multimodal video indexing: a review of the state-of-the-art. Multimed Tools Appl 25(1):5–35
36. Straetz K, Kaibel A, Raithel V, Specht M, Grote K, Kramer F (2004) An e-learning environment for deaf adults. In: Proceedings of the 8th ERCIM Workshop User Interfaces for All. Vienna, Austria
37. Stewart D, Schein J, Cartwright B (1998) Sign language interpreting: exploring its art and science. Allyn & Bacon, Needham Heights
38. United Nations (2006) Convention on the rights of persons with disabilities. New York: UN. http://www.un.org/disabilities/default.asp?navid=12&pid=150. Accessed 15 Sep 2009
39. United Nations (2006) Global Audit of Web Accessibility. New York: UN. http://www.nomensa.com/resources/research/united-nationsglobal-audit-of-accessibility.html. Accessed 15 Sep 2009
40. Uskov V, Uskov A (2006) Web-based education: 2006–2010 perspectives. Int J Adv Technol Learn 3(3):113
41. Vanderheiden GC (1992) Full visual annotation of auditorially presented information for users who are deaf: ShowSounds. In: Proceedings of the RESNA International Conference. Toronto
42. Vogler C, Metaxas D (2001) A framework for recognizing the simultaneous aspects of American sign language. Comput Vis Image Underst 81(3):358–384
43. Web Content Accessibility Guidelines (WCAG 1.0) W3C Recommendation 5 May 1999. Chisholm W, Vanderheiden G, Jacobs I (eds). http://www.w3.org/TR/WCAG10/. Accessed 15 Sep 2009
44. Web Content Accessibility Guidelines (WCAG 2.0) W3C Recommendation 11 December 2008. Caldwell B, Cooper M, Reid LG, Vanderheiden G (eds). http://www.w3.org/TR/WCAG20/. Accessed 15 Sep 2009
45. World Federation of the Deaf (WFD) (2003) Position Paper regarding the United Nations Convention on the Rights of People with Disabilities. Ad Hoc Committee on a Comprehensive and Integral International Convention on the Protection and Promotion of the Rights and Dignity of Persons with Disabilities, 24 June 2003. http://www.un.org/esa/socdev/enable/rights/contrib-wfd.htm. Accessed 15 Sep 2009
46. YouTube (2010). http://www.youtube.com/. Accessed 19 Feb 2010
Matjaž Debevc received his PhD in technical sciences from the University of Maribor, Slovenia, in 1995.
Currently, he is an associate professor of computer science at the Faculty of Electrical Engineering and
Computer Science at the same university. From 1999 until 2003, he was the head of the Centre for Distance
Education Development at the University of Maribor, and he often serves as a consultant on educational
technologies to other institutions. His professional interests include human-computer interaction, user
interface design, adaptive user interfaces, Internet applications, cable TV, distance education and
technologies for the disabled. Dr. Matjaž Debevc has received a UNESCO award for his work in
human-computer interaction, a best conference paper award and several awards for his work with young
researchers. He has (co-)organized, chaired, or served as a programme committee member for several
international events, and serves as a reviewer for several scientific journals.
Primož Kosec received his B.Sc. in Computer Science at the University of Maribor, Faculty of Electrical
Engineering and Computer Science, in 2006. Afterwards, he joined the Institute of Automation at the same
university and started his research doctorate in telecommunications. He has been involved in several
national and international projects. One of his biggest achievements, in 2008, was the international
recognition of the VeLAP system, which was ranked among the top three innovative, non-commercial
educational systems in the world. He is a teaching assistant in Computer Science. His professional
interests include object-oriented programming languages, such as Java, C++ and C#; however, in the last
few years he has been prioritizing usability engineering. His current interests focus on developing
applications for persons with special needs (deaf/hard of hearing, blind/weak sighted) and usability
evaluations.
Andreas Holzinger received his PhD in Cognitive Science from Graz University in 1997 and his second
doctorate in Applied Information Processing in 2003 from the Faculty of Computer Science of Graz
University of Technology. Currently, he is head of the research unit HCI4MED at the Institute of Medical
Informatics at the Medical University of Graz and Associate Professor at the Institute for Information
Systems and Computer Media of Graz University of Technology. His professional interests include human
centred design, e-Education and using new technologies to support people with special needs. He has served
as visiting professor in Berlin, Innsbruck, Vienna and London. Dr. Andreas Holzinger was the Austrian
delegate to the Lisbon conference 2000 and co-authored the white paper: e-Europe 2010: Towards a
knowledge society for all.
... The system also allows for repeated learning, especially for difficult topics. According to Debevc (2011), the design of inclusive programs is made for atypically and typically developing children. These programs make use of assistive technology together instructional technology, which supports the development of important skills in the students, argue that podcasting is used in the creation of a classroom environment, or third space pedagogy using multiple tools of mediation to help students in literacy development. ...
... Zirzow (2015) argues that the use of VR is highly motivating, facilitates self-pacing and repetition, provides the opportunity to feel or see, and allows for control over one's environment. Most deaf people make use of full natural language for communication among themselves known as the sign language (Debevc, 2011). ...
... Online education for the deaf, for instance, may be encouraged. Debevc, (2011) consolidation of audio and visual content in a single device, such as a tablet or an iPad, can get rid of the act of juggling experienced by most deaf students in a class. ...
Article
The study examines the use of technology to assist the Deaf students to learn English as a foreign language in Saudi Arabia. The growth in technology has changed the delivery mode in education, including that of the disabled. The study evaluates the level of technology application in teaching the deaf students and determines the possibility of improving it. The importance of the study is to provide ways in which the deaf students can be motivated to learn English effectively. The study uses qualitative research strategy. In addition, interviews have been used as data collection instruments. The study concludes that the use of technology to teach English to the deaf students in Saudi Arabia improves its effectiveness. The study encourages the application of these technologies in teaching English for the deaf students in Saudi Arabia.
... Hearing impaired people use ISL as the primary form of communication in their daily lives. Sign Language has its own syntax and grammar structure to perform a gesture corresponding to spoken word [23]. Figure 1 displays the hierarchy of ISL types based on various parameters. ...
... As per the World Federation of Deaf (WFD) survey in 2009, there are 13 countries that do not have any provision of sign language interpreters [11,29]. The World Wide Web resources are considered to be the important tools and their demand is increasing with time [23]. But there is a lack of web resources related to learning and translation in sign language [9,14,24,70,78,116]. ...
Article
Full-text available
Sign language (SL) is the best suited communication medium for hearing impaired people. Even with the advancement of technology, there is a communication gap between the hearing impaired and hearing people. The aim of this research work is to bridge this gap by developing an automatic system that translates the speech to Indian Sign Language using Avatar (SISLA). The whole system works in three phases: (i) The first phase includes the speech recognition (SR) of isolated words for English, Hindi and Punjabi in speaker independent environment (ii) The second phase translates the source language into Indian Sign Language (ISL) (iii) HamNoSys based 3D avatar represents the ISL gestures. The four major implementation modules for SISLA include: requirement analysis, data collection, technical development and evaluation. The multi-lingual feature makes the system more efficient. The training and testing speech sample files for English (12,660, 4218), Hindi (12,610, 4211) and Punjabi (12,600, 4193) have been used to train and test the SR models. Empirical results of automatic machine translation show that the proposed trained models have achieved the minimum accuracy of 91%, 89% and 89% for English, Punjabi and Hindi respectively. Sign language experts have also been used to evaluate the sign error rate through feedback. Future directions to enhance the proposed system using non-manual SL features along with the sentence level translation has been suggested. Usability testing based on survey results confirm that the proposed SISLA system is suitable for education as well as communication purpose for hearing impaired people.
... Therefore, detailed preparation of the signed content is necessary [13]. Furthermore, various guidelines on how TV content can be made accessible for people with hearing impairments have been compiled, e.g. by [9] or [19]. They address features such as screen layout, color combination, the positioning, shape and size of the signer, the preparation of sign language content, and the integration and combination of audio, subtitles and sign language. ...
... In general, expectations of signed TV are in line with recommendations for signed TV content known from other experiences, e.g. [9,13,19], but the users' stated expectations are also more detailed than the guidelines. In the studies reported in this paper, it appears that expectations regarding the arrangement in particular are influenced by practices users are familiar with from consuming national TV. ...
Article
Full-text available
When developing a novel technological system, it is important to meet users' needs and expectations to ensure that the product will be accepted and used. To achieve that, a broadly based qualitative user-requirements analysis is essential. Following a user-centered design approach, a qualitative analysis was conducted to identify users' needs and expectations for a future TV system. A prototype of the system has been developed with a focus on accessibility for deaf consumers, providing sign language via a virtual interpreter inserted into the traditional TV picture. The requirements analysis was based on three different methods: user interviews, an online questionnaire and a workshop. The statements collected in the different studies address similar topics. The detailed expectations of young and elder signers are mostly complementary, but partly even contrary. The results highlight the importance of basing a user-requirements analysis on a broad database when developing technological systems. Asking only individual representatives of the user group, or applying only individual methods, would give very limited insight into users' expectations and needs toward a technical system. To gain a detailed and representative picture of the whole community, a multimethod approach should be followed, allowing for feedback from several subgroups of the targeted population.
... The first language of deaf people in Brazil is Brazilian Sign Language (Libras), and many have Portuguese as a second language. According to data collected by the World Federation of the Deaf, 80% of deaf people worldwide have low levels of schooling (FEDERAÇÃO MUNDIAL DOS SURDOS, 2003; DEBEVC; KOSEC; HOLZINGER, 2011). Although we are seeing significant changes in the landscape of deaf education, many deaf people are still not very familiar with the Portuguese language (PEREIRA, 2014). ...
Book
Full-text available
We dream of and envision museums and science centers that are participatory, accessible and open to all people. The heterogeneity of audiences who can [and should] circulate through exhibitions and take part in their activities reflects human diversity. Turning this ideal into a possible scenario is often challenging on several fronts: institutional, political, financial, social and human resources, to name just a few. For this reason, we believe that partnerships and collective learning can reveal even more opportunities for social inclusion. Aiming to share accounts of experience and practice, challenges, barriers and research, we invited professionals who work on or study accessibility in museums and science centers to take part in this initiative, carried out by the Fundação Centro de Ciências e Educação Superior a Distância do Estado do Rio de Janeiro (Fundação Cecierj) and the Grupo Museus e Centros de Ciências Acessíveis (Grupo MCCAC), and also part of a project supported by FAPERJ's Jovem Cientista do Nosso Estado program. For the purposes of this book, we understand museums and science centers as a broad network of institutions engaged in science communication across different fields of knowledge, including interactive science and technology centers, natural history museums, planetariums, aquariums, environmental parks, etc. We have also included some art museums and cultural spaces that stand out for the work they do. We paid careful attention to the terminology used to address issues concerning people with disabilities; however, in direct quotations from other authors or documents, we preferred to keep the original wording, as it reflects the thinking of its time. All images in the book have audio description, and the HTML version of the texts is available on the Grupo MCCAC website (grupomccac.org), where they can be interpreted into Libras by HandTalk avatars.
Finally, we do not intend to exhaust the possibilities for reflection, but rather to foster exchanges and studies on accessibility in museums and science communication institutions. Because these are multiple experiences, concerning diverse audiences and of great richness, which often complement one another, we chose not to divide the book into sections. We hope you enjoy the 32 chapters of this book, written by more than 60 authors concerned with an ever more inclusive society. Jessica Norberto Rocha (Editor)
... Sign language recognition (SLR) aims to translate a sign video into a word (isolated SLR) or a sentence (continuous SLR) and promote communication between hearing and deaf people. For example, a web sign-language recognition system can enhance the accessibility of websites for deaf users and reduce the difficulty deaf people face in getting information from the Internet [7]. SLR depends on both the spatial and temporal content of signing frame sequences. ...
Article
Full-text available
Heavy 3D CNNs for spatiotemporal modeling have achieved impressive performance in sign language recognition (SLR). However, in terms of memory and computation cost, heavy 3D CNNs are expensive and unsuitable for real-time applications. In this paper, we seek efficient spatiotemporal modeling, with respect to both model size and speed, for isolated and continuous SLR. Specifically, we first build several efficient 3D CNNs, including 3D-MobileNets, 3D-ShuffleNets, and X3Ds. We then further boost performance by designing a random knowledge distillation (RKD) strategy that jointly considers the temperature of the distillation process, the ratio of true labels to soft labels, and multi-teacher networks to transfer knowledge from larger teacher models for isolated SLR. Finally, we apply these lightweight models as spatiotemporal feature extractors in an attention-based sequence-to-sequence framework for the more challenging continuous SLR. In our experiments, the best 16-frame MobileNetv2-1.0-S obtains 95.12% test accuracy on the isolated CSL-500 dataset, and the efficient sequence-to-sequence framework obtains a 2.2 Word Error Rate (WER) on the CSL-continuous dataset. The experimental results achieve competitive performance across all datasets while being tens to hundreds of times faster than state-of-the-art methods.
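The soft-label/hard-label blend at the heart of such distillation strategies can be illustrated with a small NumPy sketch. The temperature `T` and ratio `alpha` mirror the knobs the abstract mentions; the function below is a generic knowledge-distillation loss, not the paper's exact RKD formulation.

```python
# Generic temperature-scaled distillation loss (illustrative; not the
# paper's exact RKD objective).
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=4.0, alpha=0.7):
    """Blend of a temperature-T soft-label KL term and a hard-label
    cross-entropy term, weighted by alpha (the soft/hard ratio)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student), rescaled by T^2 as is conventional
    soft = float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * T * T
    # standard cross-entropy against the true label (T = 1)
    hard = float(-np.log(softmax(student_logits)[true_label]))
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes and only the hard cross-entropy remains, which is a quick sanity check on any implementation.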
... The system was tested with five similar symbolic gestures, and the results were compared for both binary and depth images, with higher accuracy found for depth images. Unlike many conventional works, Debevc et al. [3] emphasized working with sign language on the web, providing translation of signs or gestures into specific words, texts, images, photos, animations or video clips by employing a Gestural Talk Aloud Protocol. ...
Article
Full-text available
Deaf and hearing-impaired persons communicate by means of signs and gestures. Over time, this form of communication has evolved into natural languages with their own grammars and lexicons. Automatic hand gesture recognition is an important task in the development of human-computer interaction systems for the deaf community. In this paper, we report the development of a novel feature descriptor named Multi-Radii Circular Signature (MRCS) and an associated automatic hand gesture recognition pipeline. The descriptor has several desirable properties, such as translation, scale and rotation invariance, a variable number of extracted features, and symbol reconstruction. Multiple sets of experiments with various feature combinations and multiple classifiers have been carried out on three publicly available benchmark datasets, viz. the NTU 10-gesture dataset, the HKU EEE DSP dataset and the Senz3D dataset. Consistently high performance across multiple datasets and feature combinations demonstrates the robustness and generality of the descriptor. Its code and usage guidelines are released at https://github.com/iilabau/MRCS for greater interest.
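A toy centroid-based circular signature conveys the idea of radius-wise shape description behind such descriptors. This sketch is inspired by, but much simpler than, the actual MRCS descriptor; the radii fractions and pixel tolerance below are arbitrary choices.

```python
# Toy multi-radii circular signature (inspired by, not identical to, MRCS).
import numpy as np

def circular_signature(mask, radii_fracs=(0.25, 0.5, 0.75), tol=1.5):
    """For each radius (a fraction of the shape's extent), measure the
    density of foreground pixels on that circle around the centroid.
    Centroid-relative distances make it translation invariant; circular
    sampling makes it rotation invariant; normalizing radii by the
    maximum distance makes it approximately scale invariant."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask")
    cy, cx = ys.mean(), xs.mean()
    d = np.hypot(ys - cy, xs - cx)       # distance of each pixel to centroid
    rmax = d.max() if d.max() > 0 else 1.0
    sig = []
    for f in radii_fracs:
        r = f * rmax
        on_circle = np.abs(d - r) <= tol     # pixels lying near the circle
        ring_len = max(2 * np.pi * r, 1.0)   # normalize by circumference
        sig.append(on_circle.sum() / ring_len)
    return sig
```

Because the signature is computed relative to the centroid, translating the same shape inside the image leaves the signature unchanged, which is the property the test below exercises.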
... Problems with university electronic systems and the poor orientation of Deaf students on educational websites affected the availability of higher education [11]. The identified barriers point to the need to improve the technological base of universities and for all persons with hearing impairment to acquire technological skills [12]. ...
Article
Full-text available
The article deals with the problem of organizing distance learning for Deaf students in a pandemic situation. Experts evaluate this problem differently, with many speaking of a complete failure. However, the account of how two Siberian universities with relatively large numbers of students with hearing impairments organized a remote format allows us to speak of more or less successful practices. The article presents empirical studies from Trans-Baikal State University and Novosibirsk State Technical University, on the basis of which an analysis is carried out comparing the opinions of students, teachers and Russian Sign Language interpreters. The results show that the distance education system better met the needs of Deaf students where a sign language interpreter worked, simultaneously providing monitoring and support. Organizational problems were also discovered during the study. Solving the identified problems will allow a more successful model to be built, making it possible to prevent risks and find effective remote technologies for teaching the Deaf.
Article
Full-text available
In this article, we start from the hypothesis that human interactions demand multimodal elements/aspects, for example verbal and non-verbal languages, modalities and semioses. The objectives of this research are: (i) to describe the categories of multimodality and of multimodal translation/interpretation; and (ii) to synthesize the categories of multimodal translation and interpretation in sign languages. The theoretical-analytical framework draws on Multimodality Studies and Multimodal Translation in oral and signed languages. The research methodology is qualitative and bibliographic, analyzing data from publications on multimodality and sign language translation/interpretation. The 13 sign language studies point to theoretical alignments: (i) within Interpreting Studies, with connections between Interpreting, Multimodality, Semiotics, Translanguaging and Interactional Sociolinguistics; (ii) within Translation Studies, with connections between Translation, Multimodality, Translanguaging, Technologies and Media, Literature and Education. The data indicate that the categories of multimodality are linked to semiotics, for example the concepts of code and mode, and that multimodal translation encompasses sensory, cultural, situational and interactional elements when dealing with texts under translation. Through the literature review we were able to synthesize the categories of multimodal sign language translation/interpretation into eight levels (linguistic, prosodic, paralinguistic, pragmatic, translational, literary, cinematographic and technological), containing multimodal and semiotic elements.
Article
Full-text available
Speech and sounds are important sources of information in our everyday lives for communication with our environment, be it interacting with fellow humans or directing our attention to technical devices with sound signals. For hearing impaired persons this acoustic information must be supplemented or even replaced by cues using other senses. We believe that the most natural modality to use is the visual, since speech is fundamentally audiovisual and these two modalities are complementary. We are hence exploring how different visualization methods for speech and audio signals may support hearing impaired persons. The goal in this line of research is to allow the growing number of hearing impaired persons, children as well as the middle-aged and elderly, equal participation in communication. A number of visualization techniques are proposed and exemplified with applications for hearing impaired persons.
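One classic building block for the speech visualizations discussed above is the magnitude spectrogram, which turns audio into a time-frequency image. The NumPy sketch below is a generic short-time Fourier transform; the frame length and hop size are illustrative defaults, not values from the paper.

```python
# Minimal magnitude spectrogram via a Hann-windowed short-time FFT
# (generic sketch; frame_len and hop are illustrative defaults).
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Slide a Hann window over the signal and take the magnitude of the
    real FFT of each frame. Returns an array of shape
    (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)
```

A quick sanity check: a pure tone at 1 kHz sampled at 8 kHz should peak in FFT bin `1000 / 8000 * 256 = 32`.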
Article
Full-text available
Written information is often of limited accessibility to deaf people who use sign language. The eSign project was undertaken as a response to the need for technologies enabling efficient production and distribution over the Internet of sign language content. By using an avatar-independent scripting notation for signing gestures and a client-side web browser plug-in to translate this notation into motion data for an avatar, we achieve highly efficient delivery of signing, while avoiding the inflexibility of video or motion capture. Tests with members of the deaf community have indicated that the method can provide an acceptable quality of signing.
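The core idea above, an avatar-independent script translated client-side into motion data, can be mimicked in a few lines. The `GLOSS[:speed]` notation and the command format below are invented for illustration and are not the actual eSign scripting notation.

```python
# Hypothetical miniature of the avatar-independent scripting idea: a plain
# text script is parsed into (gloss, speed) commands that any installed
# avatar could render. The notation is invented for illustration.

def parse_sign_script(script):
    """Each whitespace-separated token is GLOSS or GLOSS:speed;
    omitted speed defaults to 1.0."""
    commands = []
    for token in script.split():
        gloss, _, speed = token.partition(":")
        commands.append((gloss, float(speed) if speed else 1.0))
    return commands

print(parse_sign_script("HELLO WORLD:0.5"))  # -> [('HELLO', 1.0), ('WORLD', 0.5)]
```

Keeping the script avatar-neutral, as the project does with its real notation, is what lets the same signed content be delivered efficiently to different client-side renderers.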
Article
Full-text available
The objective of this paper is to present a learning management system (LMS) which offers German Sign Language videos corresponding to every text in the learning environment. The system is designed notably for deaf adults who want to maintain and improve their mathematical and reading/writing skills. The described LMS offers deaf students a new paradigm of learning: for the first time, they are enabled to learn in a self-directed way in their own language, sign language. 1 INTRODUCTION The presented system is developed in the context of the AILB project (Aachener Internet-Lernsoftware zur Berufsqualifizierung von Gehörlosen), supported by the Federal Ministry of Health and Social Security (2003-2005). The aim of the AILB project is to develop a bilingual web-based learning system for deaf adults who want to maintain and improve their mathematical and reading/writing skills. In this project, the special needs of deaf learners are taken into consideration, e.g. bilingual information (text and sign language), a high level of visualization, interactive and explorative learning, and the possibility of learning in peer groups via video conferencing. AILB is a joint project of Aachen University (content development), the Fraunhofer Institute for Applied Information Technology FIT (software development), and bureau42 GmbH (specification and consulting).
Article
Full-text available
Efficient and effective handling of video documents depends on the availability of indexes. Manual indexing is unfeasible for large video collections. In this paper we survey several methods that aim to automate this time- and resource-consuming process. Good reviews on single-modality-based video indexing have appeared in the literature. Effective indexing, however, requires a multimodal approach in which either the most appropriate modality is selected or the different modalities are used in a collaborative fashion. Therefore, instead of separately treating the different information sources involved and their specific algorithms, we focus on the similarities and differences between the modalities. To that end we put forward a unifying multimodal framework, which views a video document from the perspective of its author. This framework forms the guiding principle for identifying index types for which automatic methods exist in the literature. It furthermore forms the basis for categorizing these different methods.
Article
The very nature of deafness demands a visual orientation for the student to access information. Over the past 20 years, educational multimedia technology has developed at a rapid pace, from Beta and VHS videotape to CD-ROM and DVD. Recent developments in technology provide considerable potential in delivering high-quality visual access to information and multimedia instruction for deaf sign language users that was never before feasible. This article identifies a variety of considerations in using multimedia products to instruct the deaf learner and describes the pros and cons of a using a variety of media in the context of several multimedia projects.
Article
This position paper treats the feasibility of making Sign Language part of the Web fabric. The position I adopt is that Sign Languages need video as a requirement for communication and informational purposes. There is no single accepted writing system or graphical notation for Sign Languages, and synthetic gestures and facial expressions are still too impersonal and computationally hard to fake. We review the challenges and the requirements of Deaf users and see that these problems are addressable and will allow Deaf people to become first-class citizens on the Internet. If Video on the Web "is for Sign Languages", then this paper suggests an action plan to make it a reality.
Article
Research on theory of mind began in the context of determining whether chimpanzees are aware that individuals experience cognitive and emotional states. More recently, this research has involved various groups of children and various tasks, including the false belief task. Based almost exclusively on that paradigm, investigators have concluded that although “normal” hearing children develop theory of mind by age 5, children who are autistic or deaf do not do so until much later, perhaps not until their teenage years. The present study explored theory of mind by examining stories told by children who are deaf and hearing (age 9–15 years) for statements ascribing behaviour-relevant states of mind to themselves and others. Both groups produced such attributions, although there were reliable differences between them. Results are discussed in terms of the cognitive abilities assumed to underlie false belief and narrative paradigms and the implications of attributing theory of mind solely on the basis of performance on the false belief task.
Conference Paper
Accessibility and adaptivity are important for the future of e-Learning applications. How to create e-Learning applications for everybody, including people with special needs, remains an open question. The problem with developing e-Learning applications for everybody is that learners' abilities and weaknesses are usually neglected as important factors during development. Most current applications offer a lot of unclear information, unsuitable content and non-adapted mechanisms. This paper suggests basic guidelines for successfully designing and structuring accessible and adaptive e-Learning applications that consider the requests and needs of people with special needs. It provides an example of the design and realization of an e-Learning application for obtaining the ECDL certificate, which includes easy adaptivity and basic accessibility features. Experimental results of usability testing and pedagogical effectiveness have shown that material designed following these guidelines is appropriate, and that extra attention must be paid to the learnability factor in the future.
Chapter
In our modern information and communication society, daily life would be unthinkable without technology. Information and Communications Technology (ICT) is also very useful for people with special needs. As the acoustic channel is barred to the deaf, all acoustic data have to be presented in visual form, ideally in sign language. This chapter presents a general overview of the use of ICT for deaf people using current as well as future technologies. The focus is on communication, e-learning and barrier-free access to information. Various projects of the Center for Sign Language serve as practical examples, and illustrate the Austrian situation.