Chapter

Developing and Evaluating a Novel Gamified Virtual Learning Environment for ASL


Abstract

The use of sign language is a highly effective way of communicating with individuals who experience hearing loss. Despite extensive research, many learners find traditional methods of learning sign language, such as web-based question-answer methods, to be unengaging. This has led to the development of new techniques, such as the use of virtual reality (VR) and gamification, which have shown promising results. In this paper, we describe a gamified immersive American Sign Language (ASL) learning environment that uses the latest VR technology to gradually guide learners from numeric to alphabetic ASL. Our hypothesis is that such an environment would be more engaging than traditional web-based methods. An initial user study showed that our system scored highly in some aspects, especially the hedonic factor of novelty. However, there is room for improvement, particularly in the pragmatic factor of dependability. Overall, our findings suggest that the use of VR and gamification can significantly improve engagement in ASL learning.

Keywords: Human Computer Interaction, ASL Learning, VR


Chapter
Full-text available
Sign languages enable effective communication between deaf and hearing people. Despite years of extensive pedagogical research, learning sign language online comes with a number of difficulties that might be frustrating for some students. Indeed, most of the existing approaches rely heavily on learning resources uploaded to websites, assuming that users will frequently consult them; however, this approach may feel tedious and uninspiring. To address this issue, several researchers have started looking into learning sign language in a game-based environment. However, the majority of the existing work still relies on website-based designs, with only very few proposed systems providing an immersive virtual environment, and there are no user studies comparing website-based and immersive virtual environments. In this paper, we present an immersive environment for learning numbers 0–9 in American Sign Language (ASL). Our hypothesis is that an immersive virtual environment can provide users with a better learning experience and that users will show a higher level of engagement compared to website-based learning. We conducted a questionnaire-based user survey, and our initial findings suggest that users prefer to learn in an immersive virtual environment.

Keywords: Human Computer Interaction (HCI), Interaction Design, Empirical Studies in Interaction Design, ASL Learning
Chapter
Full-text available
Sign language can enable effective communication between hearing and deaf-mute people. Despite years of extensive pedagogical research, learning sign language remains a formidable task, with the majority of current systems relying extensively on online learning resources and presuming that users will regularly access them; yet this approach can feel monotonous and repetitious. Recently, gamification has been proposed as a solution to this problem; however, the research focus has been on game design rather than on user experience design. In this work, we present a system for user-defined interaction for learning static American Sign Language (ASL), supporting gesture recognition for user experience design and enabling users to actively learn through involvement with user-defined gestures, rather than just passively absorbing knowledge. Early findings from a questionnaire-based survey show that users are more motivated to learn static ASL through user-defined interactions.

Keywords: Human Computer Interaction, Sign Language, User Study
Article
Full-text available
One in every six people in the UK suffers from hearing loss, either as a condition they were born with or one they developed during their life. Nine hundred thousand people in the UK are severely or profoundly deaf. Based on a 2013 study by Action on Hearing Loss UK, only 17 percent of this population can use British Sign Language (BSL). That leaves a massive proportion of people with a hearing impediment who do not use sign language struggling in social interaction and suffering emotional distress. It also leaves an even larger proportion of hearing people who cannot communicate with members of the deaf community. This paper presents a Serious Game (SG) that aims to close the communication gap between hearing people and people with hearing impairment by providing a tool that facilitates BSL learning, targeting the adult population. The paper presents the theoretical framework supporting adult learning, on the basis of which an SG using Virtual Reality (VR) technology has been developed. It explains the experimental framework of the study and presents the creation of the research instruments, comprising an SG that integrates video and conventional video-based educational material. It reports and analyses the study results, which demonstrate the advantage of the SG in effectively supporting users learning a set of BSL signs, and presents qualitative outcomes that inform the further development of the game to serve learning needs. The paper closes with conclusions, directions for further development of this educational resource, and future studies.
Article
Full-text available
As an important component of universal sign language and the basis of other sign language learning, finger sign language is of great significance. This paper proposed a novel fingerspelling identification method for Chinese Sign Language via AlexNet-based transfer learning and the Adam optimizer, testing four different transfer-learning configurations. In the experiment, the Adam algorithm was compared with stochastic gradient descent with momentum (SGDM) and root mean square propagation (RMSProp), and the use of data augmentation (DA) was compared against no DA in pursuit of higher performance. Finally, the best accuracy of 91.48% and an average accuracy of 89.48 ± 1.16% were yielded by configuration M1 (replacing the last FCL8) with the Adam algorithm and 181x DA, which indicates that our method can identify Chinese finger sign language effectively and stably. The proposed method is also superior to five other state-of-the-art approaches.
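The abstract above compares the Adam optimizer against SGDM and RMSProp. For readers unfamiliar with what that comparison is about, the following is a minimal plain-Python sketch of the standard Adam update rule (a generic illustration on a toy quadratic, not the paper's implementation; all names are our own):

```python
import math

def adam_step(theta, grads, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over a list of parameters and their gradients."""
    new_theta, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(theta, grads, m, v):
        mi = beta1 * mi + (1 - beta1) * g      # first-moment (mean) estimate
        vi = beta2 * vi + (1 - beta2) * g * g  # second-moment estimate
        m_hat = mi / (1 - beta1 ** t)          # bias correction for early steps
        v_hat = vi / (1 - beta2 ** t)
        new_theta.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v

# Minimise the toy objective f(x) = (x - 3)^2 starting from x = 0.
theta, m, v = [0.0], [0.0], [0.0]
for t in range(1, 3001):
    grads = [2.0 * (theta[0] - 3.0)]
    theta, m, v = adam_step(theta, grads, m, v, t)
```

The per-parameter scaling by the second-moment estimate is what distinguishes Adam from plain SGDM and is the usual reason it is tried first in transfer-learning setups like the one described.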
Conference Paper
Full-text available
Prior work on Sign Language Translation has shown that having a mid-level sign gloss representation (effectively recognizing the individual signs) improves the translation performance drastically. In fact, the current state-of-the-art in translation requires gloss-level tokenization in order to work. We introduce a novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation while being trainable in an end-to-end manner. This is achieved by using a Connectionist Temporal Classification (CTC) loss to bind the recognition and translation problems into a single unified architecture. This joint approach does not require any ground-truth timing information, simultaneously solving two co-dependent sequence-to-sequence learning problems and leading to significant performance gains. We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset. We report state-of-the-art sign language recognition and translation results achieved by our Sign Language Transformers. Our translation networks outperform both sign video to spoken language and gloss to spoken language translation models, in some cases more than doubling the performance (9.58 vs. 21.80 BLEU-4 score). We also share new baseline translation results using transformer networks for several other text-to-text sign language translation tasks.
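The CTC loss mentioned above aligns unsegmented frame sequences with shorter label sequences. Its usual inference-time companion is greedy best-path decoding, which collapses consecutive repeats and drops the blank symbol. A minimal, library-free sketch of that decoding step (illustrative only, not the authors' implementation):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse a per-frame best-path label sequence CTC-style:
    merge consecutive repeated labels, then drop blanks."""
    out, prev = [], None
    for label in frame_ids:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Per-frame argmax labels with 0 as blank: the blank between the two 1s
# keeps them as distinct output symbols, while the repeated 2s merge.
print(ctc_greedy_decode([1, 1, 0, 1, 2, 2]))  # [1, 1, 2]
```

The blank symbol is what lets CTC represent repeated output labels without ground-truth timing, which is the property the paper exploits to train recognition and translation jointly.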
Article
Full-text available
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable modular construction of high dimensional distributions and transformations not possible with previous libraries (e.g., pixelCNNs, autoregressive flows, and reversible residual networks). They are the workhorse behind deep probabilistic programming systems like Edward and empower fast black-box inference in probabilistic models built on deep-network components. TensorFlow Distributions has proven an important part of the TensorFlow toolkit within Google and in the broader deep learning community.
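The bijector abstraction described above can be illustrated without TensorFlow. The toy class below is our own sketch (not the library's API): it shows the three operations a bijector supplies and how they combine into the log-density of a transformed distribution via the change-of-variables formula:

```python
import math

class AffineBijector:
    """Toy analogue of a Distributions-style bijector: an invertible map
    with a tractable log|det Jacobian| for density transformations."""
    def __init__(self, scale, shift):
        assert scale != 0.0
        self.scale, self.shift = scale, shift

    def forward(self, x):
        return self.scale * x + self.shift

    def inverse(self, y):
        return (y - self.shift) / self.scale

    def inverse_log_det_jacobian(self, y):
        # d inverse/dy = 1/scale, so log|det| = -log|scale|
        return -math.log(abs(self.scale))

def transformed_log_prob(base_log_prob, bijector, y):
    """log p_Y(y) = log p_X(inverse(y)) + log|d inverse/dy|."""
    return base_log_prob(bijector.inverse(y)) + bijector.inverse_log_det_jacobian(y)

# Push a standard normal through y = 2x + 1, yielding N(1, 4).
std_normal_lp = lambda x: -0.5 * (x * x + math.log(2 * math.pi))
bij = AffineBijector(scale=2.0, shift=1.0)
lp = transformed_log_prob(std_normal_lp, bij, 3.0)
```

Composing such objects is what enables the high-dimensional constructions (autoregressive flows, reversible residual networks) the abstract mentions: each layer only needs to track its own inverse and log-determinant.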
Article
Full-text available
Questionnaires are a cheap and highly efficient tool for obtaining a quantitative measure of a product's user experience (UX). However, it is not always easy to decide whether a questionnaire result really shows that a product satisfies this quality aspect, so a benchmark is useful: it allows comparing the results for one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied in the quality assurance process of concrete projects.
Conference Paper
Full-text available
A good user experience is central for the success of interactive products. To improve products concerning these quality aspects it is thus also important to be able to measure user experience in an efficient and reliable way. But measuring user experience is not an end in itself. Several different questions can be the reason behind the wish to measure the user experience of a product quantitatively. We discuss several typical questions associated with the measurement of user experience and we show how these questions can be answered with a questionnaire with relatively low effort. In this paper the user experience questionnaire UEQ is used, but the general approach may be transferred to other questionnaires as well.
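The two UEQ papers above underpin the "hedonic novelty" and "pragmatic dependability" scores reported in the main abstract. For readers unfamiliar with UEQ scoring, the sketch below shows the standard convention of mapping 7-point item answers onto a -3..+3 range and averaging them into a scale score (the item values here are hypothetical, and reverse-coded items are ignored for brevity):

```python
def ueq_scale_score(item_answers):
    """Map 7-point UEQ answers (1..7) to -3..+3 and average them
    into a single scale score, per the standard UEQ convention."""
    transformed = [a - 4 for a in item_answers]
    return sum(transformed) / len(transformed)

# Hypothetical per-item means (on the original 1..7 scale) for the
# four items of one UEQ scale, e.g. 'novelty':
novelty_item_means = [5.8, 6.0, 5.2, 5.6]
score = ueq_scale_score(novelty_item_means)
print(round(score, 2))  # 1.65
```

Scores above roughly +0.8 are conventionally read as positive evaluations, which is the kind of threshold the UEQ benchmark formalizes by comparison against a large product dataset.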
Chapter
This paper proposes a model based on a convolutional neural network (CNN) for hand gesture recognition and classification. The dataset comprises 26 different hand gestures, which map to the English alphabet letters A–Z. The standard Hand Gesture Recognition dataset, available on the Kaggle website, has been used in this paper. It contains 27,455 images (28 × 28 pixels) of hand gestures made by different people. A deep learning technique based on a CNN is used, which automatically learns and extracts features for classifying each gesture. The paper makes a comparative study with four recent works. The proposed model reports 99% test accuracy.
Article
In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas. In line with recent advances in the field of deep learning, there are far-reaching implications and applications that neural networks can have for sign language interpretation. In this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in American Sign Language.
Article
Bilingualism in Development is an examination of the language and cognitive development of bilingual children focusing primarily on the preschool years. It begins by defining the territory for what is included in bilingualism and how language proficiency can be conceptualized. Using these constraints, the discussion proceeds to review the research relevant to various aspects of children's development and assesses the role that bilingualism has in each. The areas covered include language acquisition, metalinguistic ability, literacy skill, and problem-solving ability. In each case, the performance of bilingual children is compared to that of similar monolinguals, and differences are interpreted in terms of a theoretical framework for cognitive development and processing. The studies show that bilingualism significantly accelerates children's ability to selectively attend to relevant information and inhibit attention to misleading information or competing responses. This conclusion is used as the basis for examining a set of related issues regarding the education and social circumstances of bilingual children.
Recognition of sign language using deep neural network
  • P. Pallavi
  • D. Sarvamangala