Conference Paper

A New Robotic Platform as Sign Language Tutor


Abstract

This paper presents preliminary results from a socially interactive, humanoid-robot-assisted Sign Language (SL) tutoring system for children with communication impairments, built around imitation-based interaction games. In this study, the five-fingered robot platform Robovie R3 expresses a set of chosen words in SL using hand movements, body and face gestures. The study consists of an introductory phase, where participants passively observe the robot; an imitation-based learning phase, where participants are motivated to imitate the signs demonstrated by the robot; and a game phase, where the signs taught in the previous phases are tested within an interaction game. The aim of this interactive game phase is to reinforce, in a motivating and engaging way, the signs taught visually, semantically and kinematically in the previous phases, as well as to test the learning performance of the participants. The game also aims to improve children's imitation and turn-taking skills and to teach the words semantically. The current paper presents results from the preliminary study with adults (without hearing disabilities). We use the humanoid robot as an assistive educational medium in this game to evaluate the participants' ability to learn sign language from a robot, and to compare different robot platforms within this setup.


... In [33], 15 words are taught in one scenario, and this decreased the performance. In [34], using 10 words with both the Nao and R3 robots resulted in better performance. In [28], 5 words were taught in the game, and the results are quite successful. ...
... The feedback is given using flashcards, which enables the robot to continue its predefined order of actions. In [33] and [34] the order is defined and initiated by the flashcards. The tactile sensors of the robot are used to start the game and, in some experiments, enable the robot to dance, to motivate and entertain the participants between the tests. ...
... One of the reasons for this is the fact that the Nao robot has only 3 dependent fingers, while most of the words from, e.g., TSL are performed using 5 mostly independent fingers (e.g., one pointing at a part of the face while the other 4 are curled). Our current experiments are duplicated on a humanoid platform with 5 fingers and more DOF on the arms (a modified R3) to overcome these limitations [34]. ...
Chapter
This paper investigates the role of interaction and communication kinesics in human-robot interaction. It is based on a project on Sign Language (SL) tutoring through interaction games with humanoid robots. The aim of the study is to design a computational framework that motivates children with communication problems (i.e., ASD and hearing impairments) to understand and imitate the signs implemented by the robot, using basic upper-torso gestures and sound in a turn-taking manner. This framework consists of modular computational components that endow the robot with the capability of perceiving the actions of the children, carrying out a game or storytelling task, and tutoring the children in any desired mode, i.e., supervised or semi-supervised. Visual (colored cards), vocal (storytelling, music), touch (tactile sensors on the robot), and motion (recognition and implementation of gestures, including signs) based cues are proposed for multimodal communication between the robot, child and therapist/parent. We present an empirical and exploratory study investigating the effect of basic non-verbal gestures, consisting of hand movements and body and face gestures, expressed by a humanoid robot; having comprehended the word, the child gives relevant feedback to the robot in SL or visually, according to the context of the game.
Conference Paper
Full-text available
The aim of this study is to present usable educational tools for students with special needs. Hearing-impaired students cannot use the text- and speech-based technological educational material that is becoming a crucial tool for modern education. Like videos of signers, a signing avatar for TİD (Turkish Sign Language) would allow communicating information in the form of visual gestures, thus becoming usable when the use of text is unfeasible. However, compared to videos, animated avatars offer several advantages, such as easy reproducibility of gesture sequences, control of the point of view, adjustment of the speed of the sign, and smaller storage and bandwidth requirements than videos. For this study, a signing avatar was developed to represent a portion of the social sciences course of the Turkish primary education curriculum. To analyze the success and effectiveness of the interface, the performance of text- and avatar-based interaction in a human-computer interaction scheme was compared using elementary school students. The results demonstrate that avatar-based tutoring was more effective in assessing the child's knowledge of certain sign language words. Since the aim was to make the signing avatar as comprehensible as possible, the results demonstrate that this goal has been met.
Thesis
Full-text available
The aim of this thesis is to present usable educational tools for students with special needs. Hearing-impaired students cannot use the text- and speech-based technological educational material that is becoming a crucial tool for modern education. While they are not a direct substitute for human teachers, technological support materials in education have been shown to motivate children and aid instructors. To allow the use of such methods, three applications aimed at educating deaf children, using robot, web, and avatar-based technologies, were designed. One such technological medium that can be used to connect with such children is the robot. Robots are often used in children's education by making use of their inherently personified nature compared to other non-human teaching mediums. Children prefer to interact by touching during a training session. In this study, we developed a method where two humanoid robots, namely R3 and NAO, perform sign language gestures to communicate with children by creating human-like hand, arm and body gestures. The most powerful aspect of the robot is its embodiment, which causes children to interact with the robot socially rather than treating it as a simple, book-like education tool. One other educational tool we have developed is a signing avatar. Like videos of signers, a signing avatar for TİD would allow communicating information in the form of visual gestures, thus becoming usable when the use of text is infeasible. However, compared to videos, animated avatars offer several advantages, such as easy reproducibility of gesture sequences, control of the point of view, adjustment of the speed of the sign, and smaller storage and bandwidth requirements than videos. For this thesis, a signing avatar was developed to represent a portion of the social sciences course of the Turkish primary education curriculum.
Finally, a web- and Android-based experimental framework was developed to perform customized human-machine interaction experiments. Sign language and drum-based interaction game schemes were developed to perform an in-depth analysis of different education-based modalities with users from different demographics. By performing usability studies and evaluations in a gamification scheme using turn-taking games, the effectiveness of the three described methods was analyzed. The observations and reactions of the children demonstrate that, of the three technologies implemented, robot technologies may be the most effective method for assisting children. While a prototype interaction scheme has been constructed and used in controlled education settings, at their current technology level, price range and ease of operability, robots such as Nao do not present a practical option for presenting the entire primary school corpus. For that reason, a signing avatar and recreated content from the Social Sciences corpus have been implemented. While the signing avatar is an imitation of videos created by signing users, it possesses the advantage of easily synthesizing novel sequences of content from existing isolated sign videos. To analyze the success and effectiveness of the developed methods, the performance of text, signer-video and avatar-based interaction in a human-computer interaction scheme was compared using elementary school students. The results demonstrate that, of the three methods, video-based methods were the most comprehensible, closely followed by avatar-based methods among children who did not know how to read. Since the aim was to make the signing avatar as comprehensible as possible, the results demonstrate that this goal has been met.
Conference Paper
Full-text available
This work is part of an ongoing project on sign language tutoring with imitation-based turn-taking and interaction games (iSign) with humanoid robots and children with communication impairments. The paper focuses on the extension of the game, mainly for children with autism. Autism Spectrum Disorder (ASD) involves communication impairments, limited social interaction, and limited imagination. Many such children show interest in robots and find them engaging. Robots can facilitate social interaction between the child and the teacher. In this work, a Nao H25 humanoid robot assisted the human teacher in teaching some signs and basic upper-torso actions, which were observed and imitated by the participants. A Kinect camera-based system was used to recognize the signs and other actions, and the robot gave visual and auditory feedback to the participants based on their performance.
Conference Paper
Full-text available
This work presents preliminary observations from a study of children (N=19, age 5-12) interacting in multiple sessions with a humanoid robot in a scenario involving game activities. The main purpose of the study was to see how their perception of the robot, their engagement, and their enjoyment of the robot as a companion evolve across multiple interactions, separated by one to two weeks. However, an interesting phenomenon was observed during the experiment: most of the children soon adapted to the behaviors of the robot in terms of speech timing, speed and tone, verbal input formulation, nodding, gestures, etc. We describe the experimental setup and the system, our observations, and preliminary analysis results, which open interesting questions for further research.
Conference Paper
Full-text available
Abstract—The work presented in this paper was carried out within an ongoing project which aims to assist in Sign Language tutoring for children with communication problems by means of interaction games with a humanoid robot. In this project, the robot is able to express words in Turkish Sign Language (TSL) and American Sign Language (ASL), and having comprehended the word, the child is motivated to give relevant feedback to the robot with colored flashcards, signs and sound. Within the multi-modal turn-taking games, the child both ...
Article
Full-text available
This article provides a comprehensive introduction to the design of the minimally expressive robot KASPAR, which is particularly suitable for human-robot interaction studies. A low-cost design with off-the-shelf components has been used in a novel design inspired by a multi-disciplinary viewpoint, including comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies that have used KASPAR extensively are presented. Firstly, we present its application in robot-assisted play and therapy for children with autism. Secondly, we illustrate its use in human-robot interaction studies investigating the role of interaction kinesics and gestures. Lastly, we describe a study in the field of developmental robotics on computational architectures based on interaction histories for robot ontogeny. The three areas differ in how the robot is operated and in its role in social interaction scenarios. Each is introduced briefly, and examples of the results are presented. Reflections on the specific design features of KASPAR that were important in these studies, and lessons learnt concerning the design of humanoid robots for social interaction, are discussed. An assessment of the robot in terms of the utility of the design for human-robot interaction experiments concludes the paper.
Article
Full-text available
In our curricula, freshmen use an autonomous robotic platform to become acquainted with fundamental concepts in Electrical and Computer Engineering. Using this platform, teams of students interested in the challenge are invited to apply knowledge acquired during their first year of studies by participating in a toy robot design contest. Initiated in 1999, the challenge is to design a mobile robot to help autistic children. The goal of this paper is to describe the contest, its organization, its pedagogic principles and its impact, in order to show how open design projects can create meaningful and exciting learning experiences for students in Electrical and Computer Engineering.
Article
Full-text available
In conversation, two people inevitably know different amounts about the topic of discussion, yet to make their references understood, they need to draw on knowledge and beliefs that they share. An expert and a novice talking with each other, therefore, must assess each other's expertise and accommodate to their differences. They do this in part, it is proposed, by assessing, supplying, and acquiring expertise as they collaborate in completing their references. In a study of this accommodation, pairs of people who were or were not familiar with New York City were asked to work together to arrange pictures of New York City landmarks by talking about them. They were able to assess each other's level of expertise almost immediately and to adjust their choice of proper names, descriptions, and perspectives accordingly. In doing so, experts supplied, and novices acquired, specialized knowledge that made referring more efficient.
Conference Paper
Full-text available
There is an ongoing study which aims to assist in teaching Sign Language to hearing-impaired children by means of non-verbal communication and imitation-based interaction games between a humanoid robot and the child. In this study, the robot is able to express a word in Sign Language from a set of chosen words using hand movements, body and face gestures, and having comprehended the word, the child gives relevant feedback to the robot. This study proposes an interactive game between a NAO H25 humanoid robot and preschool children based on Sign Language. Currently the demo is in Turkish Sign Language (TSL), but it will be extended to ASL, too. Since the children do not know how to read and write, and are not familiar with sign language, we prepared a short story including special words, where the robot realized each specially selected word in sign language as well as pronouncing the word verbally. After realizing each special word in sign language, the robot waited for a response from the children, who were asked to show colour flashcards with an illustration of the word. If the flashcard and the word match, the robot pronounces the word verbally and continues to tell the story. At the end of the story, the robot realizes the words one by one in sign language in a random order and asks the children to put the sticker of the relevant flashcard on their play cards, which include the story with illustrations of the flashcards. We also carried the game over to internet and tablet PC environments. The aim is to evaluate the children's sign language learning ability from a robot in different embodiments, and to make the system available to children regardless of the cost of the robot, transportation and know-how issues.
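The flashcard-driven storytelling loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the story tokens, card colours and helper names (`tell_story`, `show_card`) are invented for the example.

```python
# Sketch of the storytelling game: the robot narrates, signs each
# "special" word, and waits for the child to show the matching card.
def tell_story(story, flashcard_of, show_card):
    """story: list of word tokens; flashcard_of maps special words to
    card colours; show_card(word) returns the card the child holds up.
    Returns the number of special words that were signed."""
    signed = 0
    for token in story:
        if token in flashcard_of:                      # special word: sign it
            print(f"robot signs + says: {token}")
            signed += 1
            # Repeat until the shown card matches (a real system
            # would bound the retries and offer extra hints).
            while show_card(token) != flashcard_of[token]:
                print("robot: let's try that sign again")
        else:
            print(token, end=" ")                      # plain narration
    print()
    return signed

cards = {"rabbit": "red", "carrot": "orange"}
# A cooperative child stand-in that always shows the right card.
print(tell_story(["once", "a", "rabbit", "ate", "a", "carrot"],
                 cards, lambda w: cards[w]))           # signs 2 special words
```

The `show_card` callable is where a real system would plug in card detection from the robot's camera.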
Conference Paper
Full-text available
Robots can be very helpful therapeutic tools, especially for children with special needs. In the present paper we describe the application of two robotic platforms with different design parameters in interaction with children with autism and other cognitive impairments. IROMEC is a mobile robotic platform designed for children with different levels of disabilities to encourage them to be engaged in social interactions. KASPAR is a humanoid child-size robot designed for social interaction. KASPAR has been used extensively in studies with children with autism. The aim of this study is to examine how KASPAR and IROMEC can support social interaction and facilitate the cognitive and social development of children with special needs via play activities. Natural engagement in social play behaviour is often a problem in the development of children with disabilities. Due to the nature of their disabilities they are often excluded from such activities. As part of a long-term study we carried out different play scenarios based on imitation, turn taking and the cause and effect game according to the main educational and therapeutic objectives considered important for child development. In this paper we focus on the turn taking and the imitation game scenarios. A preliminary analysis of the data showed encouraging results. The level of the improvement of the children depended on the level and nature of their disabilities.
Article
Full-text available
We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people that involve SL and CS video synthesis.
Conference Paper
Full-text available
This work is part of an ongoing project on sign language tutoring with imitation-based turn-taking and interaction games (iSign) with humanoid robots and children with communication impairments. The paper focuses on the extension of the game, mainly for children with autism. Autism Spectrum Disorder (ASD) involves communication impairments, limited social interaction, and limited imagination. Researchers are interested in using robots in treating children with ASD. Many such children show interest in robots and find them engaging. Robots can facilitate interaction between the child and the teacher. Every child with autism has different needs; robot behavior needs to change to accommodate individual children's needs and as each child makes progress. In this multimodal game, a humanoid robot implements some upper-torso gestures, including signs from American and Turkish Sign Languages, and the children are encouraged to imitate the gestures of the robot. The gesture is then recognized by the robot by means of an RGBD camera (Kinect), and the robot motivates the child both verbally and with approving gestures. The gestures are selected by the child and the caretaker showing colored pictures to the robot. The game involves three stages requiring different levels of caretaker interruption, and games will be designed individually for each child using different gestures and signs.
Article
Full-text available
Takayuki Kanda is a computer scientist with interests in intelligent robots and human-robot interaction; he is a researcher in the Intelligent Robotics and Communication Laboratories at ATR (Advanced Telecommunications Research Institute), Kyoto, Japan. Takayuki Hirano is a computer scientist with an interest in human-robot interaction; he is an intern researcher in the Intelligent Robotics and Communication Laboratories at ATR, Kyoto, Japan. Daniel Eaton is a computer scientist with an interest in human-robot interaction; he is an intern researcher in the Intelligent Robotics and Communication Laboratories at ATR, Kyoto, Japan. Hiroshi Ishiguro is a computer scientist with interests in computer vision and intelligent robots; he is Professor of Adaptive Machine Systems in the School of Engineering at Osaka University, Osaka, Japan, and a visiting group leader in the Intelligent Robotics and Communication Laboratories at ATR, Kyoto, Japan. ABSTRACT Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. This idea is studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking "Robovie" robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition.
Conference Paper
Full-text available
Turn-taking is fundamental to the way humans engage in information exchange, but robots currently lack the turn-taking skills required for natural communication. In order to bring effective turn-taking to robots, we must first understand the underlying processes in the context of what is possible to implement. We describe a data collection experiment with an interaction format inspired by “Simon says,” a turn-taking imitation game that engages the channels of gaze, speech, and motion. We analyze data from 23 human subjects interacting with a humanoid social robot and propose the principle of minimum necessary information (MNI) as a factor in determining the timing of the human response. We also describe the other observed phenomena of channel exclusion, efficiency, and adaptation. We discuss the implications of these principles and propose some ways to incorporate our findings into a computational model of turn-taking.
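The demonstrate/respond/feedback cycle of such "Simon says"-style games can be sketched as a minimal loop. This is an illustrative skeleton under the assumption of a perfect recognizer stand-in; the function names, lexicon and feedback phrases are hypothetical, not taken from the paper.

```python
import random

def play_round(word, recognize):
    """One turn: robot demonstrates a word, the human imitates,
    the robot gives verbal feedback. Returns True on a correct match."""
    print(f"robot signs: {word}")              # demonstration phase
    guess = recognize(word)                    # human's imitation, as recognized
    correct = guess == word
    print("robot: well done!" if correct
          else f"robot: that was {guess}, let's try {word} again")
    return correct

def play_game(lexicon, recognize, rounds=3):
    """Play several rounds and return the fraction answered correctly."""
    score = sum(play_round(random.choice(lexicon), recognize)
                for _ in range(rounds))
    return score / rounds

# A perfect "recognizer" stand-in; a real one would use camera input.
print(play_game(["hello", "thanks", "please"], lambda w: w))   # → 1.0
```

A real implementation would also track response timing per channel, which is where a principle like MNI would enter the model.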
Conference Paper
Full-text available
Many deaf people have significant reading problems. Written content, e.g. on internet pages, is therefore not fully accessible for them. Embodied agents have the potential to communicate in the native language of this cultural group: sign language. However, state-of-the-art systems have limited comprehensibility and standard evaluation methods are missing. In this paper, we present methods and discuss challenges for the creation and evaluation of a signing avatar. We extended the existing EMBR character animation system with prerequisite functionality, created a gloss-based animation tool and developed a cyclic content creation workflow with the help of two deaf sign language experts. For evaluation, we introduce delta testing, a novel way of assessing comprehensibility by comparing avatars with human signers. While our system reached state-of-the-art comprehensibility in a short development time we argue that future research needs to focus on nonmanual aspects and prosody to reach the comprehensibility levels of human signers.
Conference Paper
Full-text available
This paper presents the results of our research on automatic recognition of the Mexican Sign Language (MSL) alphabet as a control element for a service robot. The technique of active contours was used for image segmentation in order to recognize the signs. Once segmented, we obtained the signature of the corresponding sign and trained a neural network for its recognition. Every symbol of the MSL was assigned to a task that the robotic system had to perform; we defined eight different tasks. The system was validated using a simulation environment and a real system. For the real case, we used a mobile platform (Powerbot) equipped with a manipulator with 6 degrees of freedom (PowerCube). For simulation of the mobile platforms, RoboWorks was used as the simulation environment. On both the simulated and real platforms, tests were performed with images different from those learned by the system, obtaining in both cases a recognition rate of 95.8%.
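As a rough illustration of the contour-signature idea, the sketch below computes a centroid-distance signature of a hand contour and matches it with a nearest-neighbour rule standing in for the paper's neural network. The contours and class names are invented for the example.

```python
import math

def signature(contour, samples=8):
    """Centroid-to-boundary distance, scale-normalised and resampled
    to a fixed-length feature vector."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    m = max(d)
    d = [v / m for v in d]                     # scale invariance
    step = len(d) / samples
    return [d[int(i * step)] for i in range(samples)]

def classify(contour, templates):
    """Nearest-neighbour match of a contour's signature to stored templates
    (a trained classifier would replace this in the described system)."""
    sig = signature(contour)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(sig, templates[name]))
    return min(templates, key=dist)

# Two made-up "hand shapes": a circle and a flattened ellipse.
circle = [(math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
          for i in range(32)]
ellipse = [(2 * math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
           for i in range(32)]
templates = {"A": signature(circle), "B": signature(ellipse)}

print(classify(circle, templates))             # matches template "A"
```

In the real pipeline the contour would come from active-contour segmentation of a camera image rather than synthetic points.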
Conference Paper
Full-text available
The Wizard of Oz (WOz) technique is an experimental evaluation mechanism. It allows the observation of a user operating an apparently fully functioning system whose missing services are supplemented by a hidden wizard. From our analysis of existing WOz systems, we observe that this technique has primarily been used to study natural language interfaces. With recent advances in interactive media, multimodal user interfaces are becoming popular but our current understanding on how to design such systems is still primitive. In the absence of generalizable theories and models, the WOz technique is an appropriate approach to the identification of sound design solutions. We show how the WOz technique can be extended to the analysis of multimodal interfaces and we formulate a set of requirements for a generic multimodal WOz platform. The Neimo system is presented as an illustration of our early experience in the development of such platforms.
Article
Full-text available
Around 96 percent of children with hearing loss are born to parents with intact hearing, who may initially know little about deafness or sign language. Therefore, such parents will need information and support in making decisions about the medical, linguistic, and educational management of their child. Some of these decisions are time-sensitive and irreversible and come at a moment of emotional turmoil and vulnerability (when some parents grieve the loss of a normally hearing child). Clinical research indicates that a deaf child's poor communication skills can be made worse by an increased level of parental depression. Given this, reliable and up-to-date support for parents' decisions is critical to the overall well-being of their child. In raising and educating a child, parents are often offered an exclusive choice between an oral environment (including assistive technology, speech reading, and voicing) and a signing environment. A heated controversy has surrounded this choice since at least the late 19th century, beginning with the International Congress on the Education of the Deaf in Milan, held in 1880. While families seek advice from many sources, including, increasingly, the internet, the primary care physician (PCP) is the professional medical figure the family interacts with repeatedly. The present article aims to help family advisors, particularly the PCP and other medical advisors in this regard. We argue that deaf children need to be exposed regularly and frequently to good language models in both visual and auditory modalities from the time hearing loss is detected, and that this should continue throughout their education to ensure proper cognitive, psychological, and educational development. Since there is, unfortunately, a dearth of empirical studies on many of the issues families must confront, professional opinions, backed by what studies do exist, are the only option.
We here give our strongly held professional opinions and stress the need for improved research studies in these areas.
Article
Full-text available
Affirmative legislative action in many countries now requires that public spaces and services be made accessible to disabled people. Although this is often interpreted as access for people with mobility impairments, such legislation also covers those who are hearing or vision impaired. In these cases, it is often the provision of advanced technological devices and aids which enables people with sensory impairments to enjoy the theatre, cinema or a public meeting to the full. Assistive Technology for the Hearing-impaired, Deaf and Deafblind shows the student of rehabilitation technology how this growing technical provision can be used to support those with varying reductions in auditory ability, and the deafblind, in modern society. Features: instruction in the physiology of the ear together with methods of measurement of hearing levels and loss; the principles of electrical engineering used in assistive technology for the hearing impaired; description and demonstration of electrical engineering used in hearing aids and other communications enhancement technologies; explanation of many devices designed for everyday living in terms of generic electrical engineering; sections of practical projects and investigations which will give the reader ideas for student work and for self-teaching. The contributors are internationally recognised experts from the fields of audiology, electrical engineering, signal processing, telephony and assistive technology. Their combined expertise makes Assistive Technology for the Hearing-impaired, Deaf and Deafblind an excellent text for advanced students in assistive and rehabilitation technology and for professional engineers and medics working in assistive technology who wish to maintain an up-to-date knowledge of current engineering advances.
Article
Full-text available
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.
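The word-scoring step of such a system can be sketched with the forward algorithm: each word gets its own HMM, and an observation sequence is assigned to the word whose model gives it the highest likelihood. The toy models below are invented for illustration and stand in for HMMs trained on real hand-tracking features.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | model) for a discrete-observation HMM.
    pi: initial state probs, A: transition matrix,
    B: emission probs indexed as B[state][symbol]."""
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Recurse over the rest of the sequence.
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(n)) * B[s][o]
                 for s in range(n)]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

# Two toy word models over 3 observation symbols (hypothetical features).
models = {
    "hello": ([0.9, 0.1],
              [[0.7, 0.3], [0.2, 0.8]],
              [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
    "thanks": ([0.5, 0.5],
               [[0.5, 0.5], [0.5, 0.5]],
               [[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]]),
}

def classify(obs):
    # Pick the word whose HMM assigns the sequence the highest likelihood.
    return max(models, key=lambda w: forward_log_likelihood(obs, *models[w]))

print(classify([0, 0, 2, 2]))   # → hello
```

A production recognizer would work in log space throughout and handle continuous feature vectors, but the per-word scoring structure is the same.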
Article
Full-text available
Instrumented gloves -- gloves equipped with sensors for detecting finger bend, hand position and orientation -- were conceived to allow a more natural interface to computers. However, the extension of their use for recognising sign language, and in this case Auslan (Australian Sign Language), is possible. Several researchers have already explored these possibilities and have successfully achieved finger-spelling recognition with high levels of accuracy, but progress in the recognition of sign language as a whole has been limited.
Article
This paper investigates the role of interaction and communication kinesics in human–robot interaction. This study is part of a novel research project on sign language (SL) tutoring through interaction games with humanoid robots. The main goal is to motivate children with communication problems to understand and imitate the signs implemented by the robot using basic upper torso gestures and sound. We present an empirical and exploratory study investigating the effect of basic nonverbal gestures, consisting of hand movements and body and face gestures, expressed by a humanoid robot; having comprehended the word, the participants give relevant feedback in SL. This way the participant is both a passive observer and an active imitator throughout the learning process in different phases of the game. A five-fingered R3 robot platform and a three-fingered Nao H-25 robot are employed within the games. Vision-, sound-, touch- and motion-based cues are used for multimodal communication between the robot, child and therapist/parent within the study. This paper presents the preliminary results of the proposed game tested with adult participants. The aim is to evaluate the SL learning ability of participants from a robot, and to compare different robot platforms within this setup.
Conference Paper
This work presents the preliminary results of an ongoing project which aims to use humanoid robots as sign language tutors. The study mainly focuses on children who have problems communicating with other individuals, such as hearing-impaired or autistic children. In this paper, an interactive sign-language game between a humanoid robot and a human participant is introduced. The game consists of an imitation-based learning phase in which the signs are taught in the first step and tested in the second within the frame of an interaction game. The goal of the interactive game is to reinforce the semantic meaning of the signs in a motivating and engaging way, as well as to test the learning performance of the participants. We aim to design a comfortable learning environment by using the humanoid robot as an educational medium. The game also improves the participants' imitation and turn-taking skills and teaches the semantic meanings of the signs.
Article
The results are from an ongoing study which aims to assist in the teaching of Sign Language (SL) to hearing-impaired children by means of non-verbal communication and imitation-based interaction games between a humanoid robot and the child. In this study, the robot is able to express a word in SL, among a set of chosen words, using hand movements, body and face gestures, and having comprehended the word, the child gives relevant feedback to the robot. This paper reports the findings of such an evaluation on a subset of sample words chosen from Turkish Sign Language (TSL), via the comparison of their video representations carried out by human teachers and the Nao H25 robot. Within this study, several surveys and user studies have been conducted to reveal the resemblance between the two types of videos involving the performance of the robot simulator and the human teacher for each chosen word. In order to investigate the perceived level of similarity between human and robot behavior, participants of different sign language acquaintance levels and age groups were asked to evaluate the videos using paper-based and online questionnaires. The results of these surveys are summarized, and the most significant factors affecting the comprehension of TSL words are discussed.
Book
For more information, see the editor's website: http://www.hup.harvard.edu/catalog.php?recid=25615. Excerpts are available on Google Books.
Conference Paper
Many human–robot interaction methods rely on a visual sensor such as a camera. In this paper, we implement a rock-paper-scissors game using the robot's camera. The main difficulty arises from the robot's height, or more precisely the camera height, because observed hand shapes depend on the camera angle. If the robot's camera is lower than the human, it is difficult to distinguish the rock and scissors shapes when the human stretches a hand out toward the camera. To address this, we ask the human to raise the hand to shoulder level. In this configuration, we can build a skin model to segment the human hands, using a face detector based on the AdaBoost algorithm included in the OpenCV library. Once a face is detected, a skin model built from the detected face region yields skin information that is robust to illumination variance.
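A face-seeded skin model of this kind can be sketched as a per-channel Gaussian in normalized chromaticity space. The threshold `k` and the sample pixel values below are illustrative assumptions, not the paper's calibration:

```python
def build_skin_model(face_pixels):
    """Estimate a simple Gaussian skin model in normalized (r, g)
    chromaticity space from pixels sampled inside a detected face box.
    face_pixels: iterable of (R, G, B) tuples."""
    rs, gs = [], []
    for R, G, B in face_pixels:
        s = (R + G + B) or 1          # guard against all-black pixels
        rs.append(R / s)
        gs.append(G / s)
    n = len(rs)
    mr, mg = sum(rs) / n, sum(gs) / n
    sr = (sum((x - mr) ** 2 for x in rs) / n) ** 0.5
    sg = (sum((x - mg) ** 2 for x in gs) / n) ** 0.5
    return mr, mg, sr, sg

def is_skin(pixel, model, k=2.5):
    """Classify a pixel as skin if its chromaticity lies within
    k standard deviations of the face-derived model."""
    R, G, B = pixel
    s = (R + G + B) or 1
    r, g = R / s, G / s
    mr, mg, sr, sg = model
    return abs(r - mr) <= k * sr and abs(g - mg) <= k * sg

# Hypothetical pixels sampled from a detected face region.
face_pixels = [(200, 120, 90), (210, 130, 100), (190, 115, 85)]
model = build_skin_model(face_pixels)
```

Normalizing by R+G+B discards overall brightness, which is what makes the model tolerant to illumination changes; in the real system the face box would come from an OpenCV Haar-cascade detector.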
Article
Fisher kernels combine the powers of discriminative and generative classifiers by mapping variable-length sequences to a new fixed-length feature space, called the Fisher score space. The mapping is based on a single generative model and the classifier is intrinsically binary. We propose a multi-class classification strategy that applies a multi-class classifier on each Fisher score space and combines their decisions. We experimentally show that the Fisher scores of one class provide discriminative information for the other classes as well. We compare several multi-class classification strategies for Fisher scores generated from the hidden Markov models of sign sequences. The proposed multi-class classification strategy increases the classification accuracy in comparison with state-of-the-art strategies based on combining binary classifiers. To reduce the computational complexity of the Fisher score extraction and the training phases, we also propose a score space selection method and show that similar or even higher accuracies can be obtained using only a subset of the score spaces. Based on the proposed score space selection method, a signer adaptation technique is also presented that does not require any re-training.
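The core idea, mapping a variable-length sequence to a fixed-length gradient vector, can be sketched with a single univariate Gaussian standing in for the paper's HMMs (an assumption made purely to keep the example short):

```python
def fisher_score(seq, mu, sigma):
    """Map a variable-length sequence to a fixed-length Fisher score:
    the gradient of its log-likelihood under a univariate Gaussian
    generative model N(mu, sigma), taken w.r.t. (mu, sigma)."""
    d_mu = sum((x - mu) / sigma ** 2 for x in seq)
    d_sigma = sum((x - mu) ** 2 / sigma ** 3 - 1 / sigma for x in seq)
    return (d_mu, d_sigma)

# Sequences of different lengths map to vectors of the same dimension,
# which is what lets a fixed-length classifier operate on them.
score_short = fisher_score([1.0, 3.0], mu=2.0, sigma=1.0)
score_long = fisher_score([0.5, 1.5, 2.0, 2.5, 4.0], mu=2.0, sigma=1.0)
```

With an HMM as the generative model the gradient is taken with respect to the transition and emission parameters instead, but the principle, one fixed-length score vector per sequence per model, is the same.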
Article
STARS is a vision-based real-time gestural interface that allows both communicative and manipulative 3D hand gestures, which vary in motion and appearance, to control generic target personal-computer applications. This input–output HMM-based framework attains high recognition rates on a database consisting of 20 complex hand gestures.
Conference Paper
This paper introduces a video-based system that recognizes gestures of Turkish Sign Language (TSL). Hidden Markov Models (HMMs) were chosen for the sign language recognizer because their ability to handle dynamic motion makes them well suited to gesture recognition. Sampling only four key-frames proves sufficient to detect a gesture. Concentrating only on the global features of the generated signs, the system achieves a word accuracy of 95.7%.
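The paper does not state how the four key-frames are chosen; uniform sampling across the clip is one minimal option, sketched here as an assumption:

```python
def key_frames(n_frames, k=4):
    """Indices of k key-frames sampled uniformly across a clip of
    n_frames frames; the first and last frames are always included."""
    if k == 1:
        return [0]
    return [round(i * (n_frames - 1) / (k - 1)) for i in range(k)]

# e.g. a 3-second clip at 25 fps
indices = key_frames(75)
```

Only the features extracted at these indices would then feed the HMM, which keeps the observation sequence short and the recognizer fast.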
Conference Paper
This is part of an ongoing study which aims to assist in teaching Sign Language (SL) to hearing-impaired children by means of non-verbal communication and imitation-based interaction games between a humanoid robot and the child. In this study, the robot is able to express a word in SL, among a set of chosen words, using hand movements, body and face gestures. Having comprehended the word, the child gives relevant feedback to the robot. In the current study, we propose an interactive storytelling game between a Nao H25 humanoid robot and preschool children based on Turkish Sign Language (TSL). Since most of the children cannot yet read and write, and they are not familiar with sign language, we prepared a short story including specially selected words, which the robot performs both verbally and in sign language. The children are expected to give feedback to the robot with matching colour flashcards when it performs a word in sign language. The robotic event covered 106 preschool children. The aim is to evaluate the children's ability to learn sign language from a robot, and to compare these results with those of video-based studies.
Article
We present results from an empirical study investigating the effect of embodiment and minimal gestures in an interactive drumming game consisting of an autonomous child-sized humanoid robot (KASPAR) playing with child participants. In this study, each participant played three games with a humanoid robot that played a drum whilst simultaneously making (or not making) head gestures. The three games included the participant interacting with the real robot (physical embodiment condition), interacting with a hidden robot when only the sound of the robot is heard (disembodiment condition; note that the term 'disembodiment' is used in this paper specifically to refer to an experimental condition where a physical robot produces the sound cues, but is not visible to the participants), or interacting with a real-time image of the robot (virtual embodiment condition). We used a mixed design where repeated measures were used to evaluate embodiment effects and independent-groups measures were used to study the gestures effects. Data from the implementation of a human–robot interaction experiment with 66 children are presented, and statistically analyzed in terms of participants' subjective experiences and drumming performance of the human–robot pair. The subjective experiences showed significant differences for the different embodiment conditions when gestures were used in terms of enjoyment of the game, and perceived intelligence and appearance of the robot. The drumming performance also differed significantly within the embodiment conditions and the presence of gestures increased these differences significantly. The presence of a physical, embodied robot enabled more interaction, better drumming and turn-taking, as well as enjoyment of the interaction, especially when the robot used gestures.
Article
People who are both deaf and blind can experience extreme social and informational isolation due to their inability to converse easily with others. To communicate, many of these individuals employ a tactile version of fingerspelling and/or sign language, gesture systems representing letters or words, respectively. These methods are far from ideal, however, as they permit interaction only with others who are in physical proximity, knowledgeable in sign language or fingerspelling, and willing to engage in one of these "hands-on-hands" communication techniques. The problem is further exacerbated by the fatigue of the fingers, hands, and arms during prolonged conversations. Mechanical hands that fingerspell may offer a solution to this communication situation. These devices can translate messages typed at a keyboard in person-to-person communication, receive TDD (Telecommunication Devices for the Deaf) telephone calls, and gain access to local and remote computers and the information they contain.
Conference Paper
This research explores the feasibility of using intelligent robots as a language-instruction tool for young children. Since intelligent robots have several sensors that can recognize the status of the other party, they can perform various bi-directional interaction strategies. This study verified how such bi-directional interaction with intelligent robots affects the improvement of linguistic ability. The subjects were 34 4-year-old children, 17 of whom were in a traditional media-assisted reading program and 17 of whom were in a robot-assisted reading program. The results indicated that the children in the robot-assisted group improved significantly compared with the media-assisted group in linguistic ability (story making, understanding, word recognition, PPVT (Peabody Picture Vocabulary Test)). Furthermore, the benefits and limitations of using intelligent robots in linguistic education are discussed.
Conference Paper
This paper gives a short overview of new human-centered robotic approaches applied to the rehabilitation of gait and upper-extremity function in patients with movement disorders. So-called "patient-cooperative" strategies can take into account the patient's intention and efforts rather than imposing a predefined movement. It is hypothesized that such human-centered robotic approaches can improve the therapeutic outcome compared with classical rehabilitation strategies.
Article
Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children, an ability that current robot-assisted ASD intervention systems lack, in order to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention in which the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing "understanding" robots for use in future ASD intervention. Experimental results from a proof-of-concept experiment (a robot-based basketball game) with six children with ASD are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted the individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.
Morasso P, Casadio M, Giannoni P, Masia L, Sanguineti V, Squeri V, Vergaro E (2009) Desirable features of a "humanoid" robot-therapist. In: Proc. of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2418-2421
Ismail L, Shamsuddin S, Yussof H, Hashim H, Bahari S, Jaafar A, Zahari I (2011) Face detection technique of humanoid robot Nao for application in robotic assistive therapy. In: Proc. of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pp. 517-521
Braffort A, Bolot L, Segouat J (2011) Virtual signer co-articulation in Octopus, a sign language generation platform. In: Proc. of the 9th International Gesture Workshop: Gesture in Embodied Communication and Human-Computer Interaction
Anastasiou D (2012) Gestures in assisted living environments. In: Efthimiou E, Kouroupetroglou G, Fotinea SE (eds) Gesture and Sign Language in Human-Computer Interaction and Embodied Communication. Lecture Notes in Computer Science, Springer, 7206:1-12
Hersh MA, Johnson MA (2003) Anatomy and physiology of hearing, hearing impairment and treatment. In: Hersh MA, Johnson MA (eds) Assistive Technology for the Hearing-impaired, Deaf and Deafblind. Springer, pp. 1-39
Gibet S (2011) Analysis and synthesis of sign language gestures: from meaning to movement production. In: Proc. of the 9th International Gesture Workshop: Gesture in Embodied Communication and Human-Computer Interaction
Huenerfauth MA (2004) A multi-path architecture for machine translation of English text into American Sign Language animation. In: Proc. of the Student Research Workshop at HLT-NAACL, Association for Computational Linguistics, pp. 25-30