
Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

Abstract

The World Health Organization (WHO) reported that there are 285 million visually-impaired people worldwide, of whom 39 million are totally blind. Several systems have been designed to support visually-impaired people and to improve the quality of their lives; unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group. The contribution of this literature survey is to discuss in detail the most significant devices presented in the literature, highlighting their improvements, advantages, disadvantages, and accuracy. Our aim is to address most of the issues of these systems and pave the way for other researchers to design devices that ensure safety and independent mobility for visually-impaired people.
... Trained dogs and the white cane are the most basic and cost-effective navigational instruments. Despite their popularity, these technologies cannot offer the blind all of the information and functionality that persons with sight have access to for safe movement [15,16]. ...
... In addition, we gave a quantitative assessment of the proposed system based on the key criteria that any system serving visually impaired persons must satisfy. It has been established that a system designed for a blind person must have several features: it must deliver clear, concise information within seconds; perform consistently day and night, indoors and outdoors; detect objects at both close and far distances; and detect static and dynamic objects to handle any sudden appearance of obstacles; otherwise, the user's life is in danger [15]. Table 4 shows the results of the assessment and the scores of the Smart Cane navigation system's characteristics, such as solar battery charging, Bluetooth connectivity, sensor detection responses, GPS/SMS processing, and so on. ...
Article
Full-text available
The Blind Navigation System using Arduino and 1Sheeld is a system that aims to enhance blind people's access to the environment, particularly in Ghana, Africa. This research aimed to design a safe navigation system that allows seamless transitions for visually impaired people from one location to another, as well as a tool to assist them in communicating with their surroundings and guardians when in a difficult situation. The design uses PVC pipe as the cane; a 1Sheeld board, an Arduino Uno, and ultrasonic and water sensors for processing and monitoring; a buzzer and a vibration motor to provide an alarm via vibration and sound, housed within a circuit box and the handle; and a portable mini solar panel with a rechargeable battery for power. The blind person's phone is connected to the 1Sheeld board via a Bluetooth link, with the 1Sheeld app installed on the phone. The guardian receives a call or an SMS with the GPS coordinates, so the blind person can be tracked through Google Maps when lost. The simulations related to the design's overall purpose were precise, and the trial findings from volunteers in the final test were encouraging, ensuring safety and speed of mobility. As a result, the goal of designing a safe navigation system that detects impediments, provides the exact location of the visually impaired person through GPS/SMS processing, and is powered by a mini solar panel with a rechargeable battery was achieved.
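The obstacle-ranging step in such a cane reduces to timing an ultrasonic echo's round trip. As a rough Python illustration (a generic sketch, not the authors' firmware; the timing value is hypothetical):

```python
# Convert an ultrasonic sensor's round-trip echo time to a distance.
# The half factor accounts for the pulse travelling out and back.
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees Celsius

def echo_to_distance_cm(echo_time_s: float) -> float:
    """Distance to the obstacle in centimetres for a given echo time."""
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0 * 100.0

# A 10 ms echo corresponds to an obstacle about 1.7 m away.
print(echo_to_distance_cm(0.010))
```

On a real cane, the firmware would compare this distance against a threshold and drive the buzzer or vibration motor accordingly.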
... For a visually impaired person, unassisted traveling requires two levels of navigation: micro-navigation and macro-navigation [4]. Macro-navigation is the ability of the user to know their current location, orientation, and to have information about the route to follow to reach the destination. ...
Article
Full-text available
People with visual impairment are the second largest category affected by limited access to assistive products. A complete, portable, and affordable smart assistant for helping visually impaired people navigate indoors and outdoors and interact with the environment is presented in this paper. The prototype of the smart assistant consists of a smart cane and a central unit; communication between the user and the assistant is carried out through voice messages, making the system suitable for any user regardless of their IT skills. The assistant is equipped with GPS, an electronic compass, Wi-Fi, ultrasonic sensors, an optical sensor, and an RFID reader to help the user navigate safely. Navigation functionalities work offline, which is especially important in areas where Internet coverage is weak or missing altogether. Physical condition monitoring, medication, shopping, and weather information facilitate the interaction between the user and the environment, supporting daily activities. The proposed system uses different components for navigation and provides independent navigation indoors and outdoors, both day and night, regardless of weather conditions. Preliminary tests provide encouraging results, indicating that the prototype has the potential to help visually impaired people achieve a high level of independence in daily activities.
... Over the years, several researchers have approached the substitution of the visual sense using the hearing or tactile senses [14,15]. For visual-to-audio SSDs, two of the most popular devices are "the vOICe" [16][17][18] and "EyeMusic" [19]. ...
Article
Full-text available
This paper introduces the design of a novel indoor and outdoor mobility assistance system for visually impaired people. This system is named the MAPS (Mobility Assistance Path Planning and orientation in Space), and it is based on the theoretical frameworks of mobility and spatial cognition. Its originality comes from the assistance of two main functions of navigation: locomotion and wayfinding. Locomotion involves the ability to avoid obstacles, while wayfinding involves orientation in space and ad hoc path planning in an (unknown) environment. The MAPS architecture proposes a new low-cost system for indoor–outdoor cognitive mobility assistance, relying on two cooperating hardware feedbacks: the Force Feedback Tablet (F2T) and the TactiBelt. F2T is an electromechanical tablet using haptic effects that allow the exploration of images and maps. It is used to assist with map learning, the emergence of space awareness, path planning, wayfinding, and effective journey completion, helping a VIP construct a mental map of their environment. TactiBelt is a vibrotactile belt providing active support for the path integration strategy while navigating; it helps the VIP localize the nearest obstacles in real time and provides the ego-directions to reach the destination. The technology used for acquiring information about the surrounding space is based on vision (cameras) and is combined with localization on a map. The preliminary evaluations of the MAPS focused on the interaction with the environment and on feedback from the users (blindfolded participants) to confirm its effectiveness in a simulated environment (a labyrinth). These lead users easily interpreted the data provided by the system, which they considered relevant for effective independent navigation.
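The ego-directions conveyed by a belt like the TactiBelt presuppose computing a compass bearing from the user's position to the destination. A minimal sketch of the standard great-circle initial-bearing formula (illustrative only, not the MAPS implementation):

```python
import math

def bearing_deg(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing, in degrees clockwise from true north,
    from (lat1, lon1) to (lat2, lon2), all coordinates in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```

A belt controller would compare this bearing against the electronic-compass heading and activate the vibration motor closest to the difference.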
... Smartphone apps can facilitate teachers' efforts to meet disabled students' unique learning needs and use differentiated instruction strategies (Watanabe, Yamaguchi, and Minatani 2015). For example, apps have been developed for taking notes, accessing materials, identifying objects, reading braille, and converting text to voice, which provide teachers with a wide range of options for including visually impaired students in individual and group activities (Beal and Rosenblum 2018;Elmannai and Elleithy 2017). To date, studies exploring the use of smartphone apps by visually impaired individuals have focused on sensory modalities such as speech or voice recognition (Nuanmeesri 2020;Siagian and Hutauruk 2018), haptic (touch-based) feedback (Ozioko et al. 2020), multimodal input (Duarte et al. 2017), and non-speech auditory feedback (Adebiyi et al. 2017). ...
Article
Full-text available
Assistive technologies are frequently used to help disabled students to access core and expanded core curricula and to improve their functional, communication and literacy skills. Non-optical low vision devices and optical and tactile graphics technologies are some of the assistive technologies used by visually impaired students in today’s classrooms. In this study, we used a smartphone app, a Braille printer, and the Braille Spotdot embosser to design and develop a science activity to teach the concept of constant speed to a visually impaired student and two non-impaired peers using the 5E instructional model. The goal of the activity was to help students develop such skills as defining a problem, designing an experiment, and drawing graphs as well as to learn to compute unit rates and comprehend proportional relationships among variables.
... The social interaction and natural development of autistic children are negatively affected by their communication disorder. This can be overcome by improving communication skills, allowing the children to express themselves better via signals, pictures, signs, or gestures (Elmannai & Elleithy, 2017). ...
Article
Full-text available
The study aimed to investigate the effectiveness of an educational intervention in improving teachers' knowledge of and attitudes toward the use of assistive technology devices. Methods: A quasi-experimental research design was applied with a purposive sample of 68 teachers selected conveniently from four settings located in Jeddah, Saudi Arabia. Three tools were used, covering participants' demographic and personal data, a knowledge scale, and an attitudes questionnaire. Results: A highly significant difference was found between pre- and post-test among the studied teachers: total knowledge rose from 66.1 ± 11.4 in the pre-assessment to 72.9 ± 12.0 in the post-test, and attitude from a pre-test total score of 77.9 ± 11.2 to a post-test score of 86.4 ± 11.2 (p < .05). Conclusion: The program is effective in developing the knowledge and attitudes of the participants, with a highly statistically significant difference between the pre- and post-interventions. Therefore, a well-planned and structured educational program should be undertaken to improve the level of awareness of special education teachers.
Article
Vision is very important in our life, and loss of vision is a very serious problem for anyone. We propose a helmet-based system that aids the visually impaired. It consists of three subsystems: face recognition, fall detection, and obstacle detection. Face recognition is implemented on a Raspberry Pi. Fall detection, intended for emergency situations, is implemented using the ATmega16 IC; upon a sudden fall, the fall-detection module transfers the individual's location coordinates to the emergency contacts via a GSM module. An ultrasonic sensor, a widely used obstacle-detection sensor, is included in the system to ensure safety without the help of another person.
Article
Background: Individuals with visual impairment currently rely on walking sticks and guide dogs for mobility. However, both tools require the user to have a mental map of the area and cannot help the user establish detailed information about their surroundings, including weather, location, and businesses. Purpose and Methods: This study designed a navigation and recommendation system with context awareness for individuals with visual impairment. The study used the Process for Agent Societies Specification and Implementation (PASSI), a multiagent development methodology that follows the Foundation for Intelligent Physical Agents framework. The model used the Agent Unified Modeling Language (AUML). Results: The developed system contains a context awareness module and a multiagent system. The context awareness module collects data on user context through sensors and constructs a user profile. The user profile is transferred to the multiagent system for service recommendations. The multiagent system has four agents (a consultant agent, a search agent, a combination agent, and a dispatch agent) and integrates machine learning and deep learning. AUML tools were used to describe the implementation and structure of the system through use-case graphics and kit, sequence, class, and status diagrams. Conclusions: The developed system understands the needs of the user through the context awareness module and finds services that best meet the user's needs through the agent recommendation mechanism. The system can be used on Android phones and tablets and improves the ease with which individuals with visual impairment can obtain the services they need.
Article
Since the emergence of the pandemic caused by the SARS-CoV-2 virus (coronavirus disease, COVID-19), much has been reported on the disease, from the clinical picture to the findings observed with AI (artificial intelligence) diagnostic methods applied to personalized medicine. This article is a literature review of the use of personalized medicine combined with artificial intelligence to monitor people with COVID-19. The continuous evolution of intelligent systems aims to provide better reasoning and more efficient use of collected data. This use is not restricted to retrospective interpretation, that is, to providing diagnostic conclusions; it can also be extended to prospective interpretation, providing an early prognosis. That said, the physicians who could be assisted by these systems find themselves in the gap between the clinical case and in-depth technical analyses. What is missing is a clear starting point for approaching the world of machine learning in medicine.
Conference Paper
Remote sighted assistance (RSA) has emerged as a conversational assistive technology for people with visual impairments (VI), where remote sighted agents provide real-time navigational assistance to users with visual impairments via video-chat-like communication. In this paper, we conducted a literature review and interviewed 12 RSA users to comprehensively understand technical and navigational challenges in RSA for both the agents and users. Technical challenges are organized into four categories: agents’ difficulties in orienting and localizing the users; acquiring the users’ surroundings and detecting obstacles; delivering information and understanding user-specific situations; and coping with a poor network connection. Navigational challenges are presented in 15 real-world scenarios (8 outdoor, 7 indoor) for the users. Prior work indicates that computer vision (CV) technologies, especially interactive 3D maps and real-time localization, can address a subset of these challenges. However, we argue that addressing the full spectrum of these challenges warrants new development in Human-CV collaboration, which we formalize as five emerging problems: making object recognition and obstacle avoidance algorithms blind-aware; localizing users under poor networks; recognizing digital content on LCD screens; recognizing texts on irregular surfaces; and predicting the trajectory of out-of-frame pedestrians or objects. Addressing these problems can advance computer vision research and usher in the next generation of RSA service.
Article
Full-text available
Sport is one of the best ways to promote the social integration of people affected by physical disability, because it helps them increase their self-esteem by facing difficulties and overcoming their disabilities. Nowadays, a large number of sports can easily be played by visually impaired and blind athletes without any special support, but there are some disciplines that require the presence of a sighted guide. In this work, the attention is focused on marathons, during which athletes with visual disorders have to be linked to a sighted guide by means of a non-stretchable elbow tether, with an evident reduction of their performance and autonomy. In this context, this paper presents a fixed electromagnetic infrastructure to equip a standard running racetrack in order to help a blind athlete run safely without the presence of a sighted guide. The athlete runs inside an invisible hallway, wearing only a light and comfortable sensor unit. The patented system was designed, realized in-house, and finally tested by a blind Paralympic marathon champion, with encouraging results and interesting suggestions for technical improvements. In this paper (Part I), the transmitting unit, whose main task is to generate the two magnetic fields that delimit the safe hallway, is presented and discussed.
Article
Full-text available
This paper presents the development of a portable system aimed at allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi and a Raspberry Pi NoIR (No Infrared filter) camera equipped with additional infrared lighting, embedded into a pair of sunglasses, which permits blind and visually impaired people to independently handle Euro banknotes, especially when receiving cash back while shopping. Banknote detection is based on a modified Viola–Jones algorithm, while banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
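Value recognition with keypoint descriptors such as SURF is typically decided by nearest-neighbour descriptor matching filtered with Lowe's ratio test. The sketch below shows the ratio test itself in pure Python over toy two-dimensional descriptors (an illustration of the general technique, not the authors' pipeline):

```python
import math

def ratio_test_matches(query_desc, train_desc, ratio=0.75):
    """Keep a query descriptor only when its best match in the training set
    is clearly closer than its second-best match (Lowe's ratio test).
    Returns (query_index, train_index) pairs for surviving matches."""
    matches = []
    for qi, q in enumerate(query_desc):
        # Distances from this query descriptor to every training descriptor.
        dists = sorted((math.dist(q, t), ti) for ti, t in enumerate(train_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches
```

In a banknote recognizer, the denomination whose reference descriptors collect the most surviving matches would be the one announced to the user.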
Article
Full-text available
In the most recent report published by the World Health Organization concerning people with visual disabilities, it is highlighted that by the year 2020, worldwide, the number of completely blind people will reach 75 million, while the number of visually impaired (VI) people will rise to 250 million. Within this context, the development of dedicated electronic travel aid (ETA) systems, able to increase the safe displacement of VI people in indoor/outdoor spaces while providing additional cognition of the environment, becomes of utmost importance. This paper introduces a novel wearable assistive device designed to facilitate the autonomous navigation of blind and VI people in highly dynamic urban scenes. The system exploits two independent sources of information: ultrasonic sensors and the video camera embedded in a regular smartphone. The underlying methodology exploits computer vision and machine learning techniques and makes it possible to identify accurately both static and highly dynamic objects present in a scene, regardless of their location, size, or shape. In addition, the proposed system is able to acquire information about the environment, semantically interpret it, and alert users about possible dangerous situations through acoustic feedback. To determine the performance of the proposed methodology, we performed an extensive objective and subjective experimental evaluation with the help of 21 VI subjects from two blind associations. The users pointed out that our prototype is highly helpful in increasing mobility, while being friendly and easy to learn.
Article
Full-text available
In this paper, we introduce a real-time face recognition (and announcement) system aimed at aiding blind and low-vision people. The system uses a Microsoft Kinect sensor as a wearable device, performs face detection, and uses temporal coherence along with a simple biometric procedure to generate a sound associated with the identified person, virtualized at his/her estimated 3-D location. Our approach uses a variation of the K-nearest-neighbors algorithm over histogram-of-oriented-gradients (HOG) descriptors dimensionally reduced by principal component analysis. The results show that our approach, on average, outperforms traditional face recognition methods while requiring much less computational resources (memory, processing power, and battery life) when compared with existing techniques in the literature, deeming it suitable for the wearable hardware constraints. We also show the performance of the system in the dark, using depth-only information acquired with Kinect’s infrared camera. The validation uses a new dataset available for download, with 600 videos of 30 people, containing variation of illumination, background, and movement patterns. Experiments with existing datasets in the literature are also considered. Finally, we conducted user experience evaluations on both blindfolded and visually impaired users, showing encouraging results.
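The classification step described above, a K-nearest-neighbors vote over PCA-reduced HOG descriptors, can be illustrated with a toy pure-Python sketch (the descriptor values and labels are hypothetical stand-ins, not the authors' data):

```python
import math

def knn_classify(query, gallery, k=3):
    """Majority vote among the k gallery entries nearest to the query.
    gallery holds (descriptor_vector, label) pairs, where the vectors
    stand in for PCA-reduced HOG descriptors."""
    neighbours = sorted(gallery, key=lambda item: math.dist(query, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical 2-D "descriptors" for two enrolled identities.
gallery = [([0.0, 0.0], "alice"), ([0.1, 0.0], "alice"),
           ([5.0, 5.0], "bob"), ([5.1, 5.0], "bob"), ([5.0, 5.1], "bob")]
print(knn_classify([0.05, 0.0], gallery))
```

The real system additionally enforces temporal coherence across frames, so a single misclassified frame does not change the announced identity.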
Book
Equal access to services and public places is now required by law in many countries. In the case of the visually impaired, it is often the use of assistive technology that facilitates their full participation in many societal activities ranging from meetings and entertainment to the more personal activities of reading books or making music. In this volume, the engineering techniques and design principles used in many solutions for vision-impaired and blind people are described and explained. Features: • a new comprehensive assistive technology model structures the volume into groups of chapters on vision fundamentals, mobility, communications and access to information, daily living, education and employment, and finally recreational activities; • contributions by international authors from the diverse engineering and scientific disciplines needed to describe and develop the necessary assistive technology solutions; • systematic coverage of the many different types of assistive technology devices, applications and solutions used by visually impaired and blind people; • chapters open with learning objectives and close with sets of test questions and details of practical projects that can be used for student investigative work and self-study. Assistive Technology for Vision-impaired and Blind People is an excellent self-study and reference textbook for assistive technology and rehabilitation engineering students and professionals. The comprehensive presentation also allows engineers and health professionals to update their knowledge of recent assistive technology developments for people with sight impairment and loss.
Conference Paper
We propose an improved version of a wearable lightweight device to support visually impaired people during their everyday lives by facilitating autonomous navigation and obstacle avoidance. The system deploys two retina-inspired Dynamic Vision Sensors for visual information gathering. These sensors are characterized by very low power consumption, low latency, and a drastically reduced data rate in comparison with regular CMOS/CCD cameras, which makes them well suited for real-time mobile applications. Event-based algorithms operating on the visual data stream extract depth information in real time, which is translated into the acoustic domain. Spatial auditory signals are simulated at the computed origin of visual events in the real world. These sounds are modulated according to the position in the field of view, which the user can change by moving their head. Here, tests with eleven subjects are conducted to evaluate the performance of the system. These tests show that the modulation helps to improve object localization performance significantly in comparison to prior experiments. Further trials estimate the visual acuity a user of the device would have using the Landolt C test. The low power consumption of all integrated components in a final system will allow for a long battery life in a small portable device, which might ultimately combine perceived visual information and environmental knowledge to provide a higher quality of life for the visually impaired.
Conference Paper
Proposed is a prototype of a wearable mobility device which aims to assist the blind with navigation and object avoidance via auditory vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with different processing strategies is evaluated via initial tests with ten subjects. The outcomes of these tests demonstrate promising performance of the system after only very short training times of a few minutes, owing to the minimal encoding of outputs from the vision sensors, which are translated into simple sound patterns easily interpretable by the user. The envisioned system will allow for efficient real-time algorithms on a hands-free and lightweight device with exceptional battery lifetime.
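The mapping from a visual event's direction to a spatial sound cue can be illustrated, in much-simplified form, by constant-power stereo panning; this is a stand-in for the individualized head-related transfer function these sonification devices actually use:

```python
import math

def pan_gains(azimuth_deg: float):
    """Constant-power panning: map an azimuth in [-90, 90] degrees
    (negative = left of the listener) to (left, right) channel gains
    whose squared sum is always 1, keeping perceived loudness constant."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)
```

An event detected straight ahead yields equal gains in both channels; one at -90 degrees plays only in the left channel.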
Chapter
One of the most debilitating human conditions is blindness or severe visual impairment. Many assistive devices have been developed during the past decades to ameliorate the two primary challenges associated with this disability: access to reading material and independent mobility. This article presents an overview of the consequences of this impairment and examines the various technological aids currently available and under development to address these problems. It discusses the consequences of blindness and severe visual impairment; assistive technology that facilitates access to written information; assistive devices that enhance independence in the activities of daily living; and mobility and navigational aids that enable persons who are blind or severely visually impaired to achieve safer and more independent travel. The article concludes with a prospective look at assistive technology likely to become available in the not-too-distant future. Keywords: electronic devices; braille; mobility aids; wayfinding; laser cane; navigational guide