Article

Abstract and Figures

This paper reports on a study that helps visually-impaired people walk more confidently. The study hypothesizes that a smart cane that alerts visually-impaired people to obstacles in front of them could help them walk with fewer accidents. The aim of the paper is to describe the development of a cane, named Smart Cane, that communicates with its users through voice alerts and vibration. The development work involves coding and physical installation. A series of tests was carried out on the Smart Cane and the results are discussed. The study found that the Smart Cane functions as intended, alerting users to obstacles in front of them.
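The abstract above describes an ultrasonic cane that alerts the user by voice and vibration. A minimal sketch of that kind of obstacle-alert logic is shown below; the function names, the alert thresholds (50 cm and 100 cm), and the alert levels are illustrative assumptions, not details taken from the paper.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s in air at room temperature

def echo_to_distance_cm(echo_pulse_us: float) -> float:
    """Convert an ultrasonic echo pulse width (microseconds) to distance in cm.
    The pulse spans the round trip, so the one-way distance is half."""
    return echo_pulse_us * SPEED_OF_SOUND_CM_PER_US / 2

def choose_alert(distance_cm: float,
                 vibrate_below_cm: float = 100.0,
                 voice_below_cm: float = 50.0) -> str:
    """Map a measured distance to an alert level: very close obstacles trigger
    both a voice message and vibration, mid-range obstacles vibration only."""
    if distance_cm <= voice_below_cm:
        return "voice+vibration"
    if distance_cm <= vibrate_below_cm:
        return "vibration"
    return "none"
```

For example, a 2000 µs echo corresponds to roughly 34 cm, which under these assumed thresholds would trigger both voice and vibration.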
... The origins of object detection in computer vision trace back to 1966, when a platform was developed to automatically differentiate between foreground and background in images and extract distinct, non-overlapping objects from real-world scenes [7]. Since then, object detection has evolved from simple detection algorithms to sophisticated deep learning networks, revolutionizing various sectors, including autonomous navigation [8]- [10], robotics [11]- [13], surveillance systems [14]- [16], and assistive technologies [17], [18]. ...
... Technological advancements have led to various solutions designed to assist individuals with visual impairments in their daily lives. These solutions include smartphone apps [160], wearable devices [18], smart canes [17], indoor navigation systems utilizing computer vision [161], guide dogs [162], and smart glasses [163]. Each of these tools addresses the challenges of autonomous navigation for the blind and visually impaired. ...
... For BVI navigation, smart canes with SLAM, RFID, and ultrasonic sensors improve indoor navigation, but have a limited detection range [17], [198]- [200]. Wearable AI, using YOLO-based vision models and IoT, provides real-time guidance, but is hindered by battery life constraints [18], [173], [201]- [203]. ...
Article
Full-text available
Object detection plays a pivotal role in advancing computer vision systems by enabling machines to perceive and interact intelligently with their environment. Despite significant advancements, a comprehensive exploration of its evolution and applications in navigation remains underrepresented. This review paper examines the evolution of object detection technologies, from early methodologies to contemporary advancements, with particular focus on their critical role in navigation tasks. The emphasis is on the significance of contextual learning in enhancing object detection performance by leveraging spatial and temporal information. Furthermore, the limitations of conventional approaches that rely heavily on hand-engineered features are examined. It is then demonstrated how contextual learning facilitates automated feature extraction, resulting in accuracy improvements exceeding 50% and greater adaptability in diverse applications. The review concludes by outlining future trends and opportunities for further advancements in object detection, underscoring its transformative impact on autonomous navigation and beyond. In summary, this review contributes to a comprehensive understanding of object detection technologies by offering insights into their evolution, highlighting their application in navigation, and providing guidance for future research in context-aware systems.
... and a 5V battery to run the apparatus; the components have been connected as in Ref. [25]. The full circuit schematic of the proposed Smart Stick for the blind is shown in Figure 4. Since only one JQ6500 voice module and three ultrasonic sensors need to be connected, the wiring is fairly simple. ...
... To step down the voltage from the Arduino, a 1 kΩ resistor is placed between digital pin 9 of the Arduino UNO and the RX pin of the MP3 module (Ref. [25]). Figure 5 illustrates where the sensors are located; each sensor is positioned at a different angle to maximize coverage of the area in front and to enable various obstacle detection techniques. ...
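The excerpt above describes three ultrasonic sensors angled to cover the area ahead. A simple way to fuse such readings is to warn about the closest obstacle; the sketch below illustrates that idea. The sensor labels and the 40 cm warning threshold are assumptions for the example, not values from the cited work.

```python
def nearest_obstacle(readings_cm: dict, warn_below_cm: float = 40.0):
    """Given distance readings (cm) from several angled sensors, return
    (direction, distance) of the closest obstacle if it lies within the
    warning threshold, otherwise None."""
    direction = min(readings_cm, key=readings_cm.get)  # sensor with the smallest reading
    distance = readings_cm[direction]
    return (direction, distance) if distance <= warn_below_cm else None
```

With readings of 120 cm (left), 35 cm (center), and 90 cm (right), only the center sensor is inside the assumed threshold, so the warning would point there.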
Article
Full-text available
This study focuses on developing an assistive system for blind individuals for collision avoidance of obstacles by combining artificial intelligence techniques: Convolutional Neural Networks (CNN), fuzzy logic control (FLC), and genetic algorithms (GA). The integrated system is named NFG (Neural Fuzzy Genetic). The proposed system combines these techniques to detect and track objects, measure the distance between objects and the blind person, and provide movement guidance using three ultrasonic sensors with FLC and GA optimization. The integration of these technologies offers an innovative solution to enhance the mobility and safety of blind individuals. Specifically, object detection and tracking are carried out with a CNN, with an obstacle detection range of up to 40 meters. The obstacle recognition system is trained on the ResNet50 model, which covers 50 million training images and more than 1,000 obstacle classes, resulting in high accuracy in identifying and detecting obstacles. When tested, the accuracy of the trained model reached 99.9%. FLC is then used to provide motor guidance, help make appropriate decisions in the presence of obstacles, navigate safely and independently, and determine movement directions along obstacle-free paths with the help of the three sensors.
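The abstract above uses fuzzy logic control over three ultrasonic distances to pick a movement direction. The sketch below shows the general flavor of such a step: fuzzify each distance into a "near" degree, then make a crisp steering decision. The membership breakpoints and the 0.8 stop rule are invented for illustration and are not taken from the NFG system.

```python
def membership_near(d_cm: float) -> float:
    """Shoulder membership for 'near': 1.0 at 0 cm, falling linearly to 0.0 at 100 cm."""
    if d_cm <= 0:
        return 1.0
    if d_cm >= 100:
        return 0.0
    return 1.0 - d_cm / 100.0

def steer(left_cm: float, center_cm: float, right_cm: float) -> str:
    """Crisp decision after fuzzification: head in the direction that is
    'least near' an obstacle; stop if every direction is very near one."""
    near = {"left": membership_near(left_cm),
            "center": membership_near(center_cm),
            "right": membership_near(right_cm)}
    if min(near.values()) > 0.8:        # all directions blocked close by
        return "stop"
    return min(near, key=near.get)      # direction with the smallest 'near' degree
```

A full FLC would use several linguistic terms per input and a rule base; this two-function version only illustrates the fuzzify-then-decide structure.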
... The most studied device has been the technology-assisted white cane, or 'smart cane'. Earlier versions of the smart cane were built with ultrasonic sensors to detect objects at various distances, and the cane would then alert the user through audio output, such as a voice message or beeping tones [3], [4]. Later iterations have included more advanced obstacle detection systems, smartphone connectivity, and embedded navigation software, paired with a control panel to allow inputs from the user [5]- [7]. ...
Preprint
Dog guides offer an effective mobility solution for blind or visually impaired (BVI) individuals, but conventional dog guides have limitations including the need for care, potential distractions, societal prejudice, high costs, and limited availability. To address these challenges, we seek to develop a robot dog guide capable of performing the tasks of a conventional dog guide, enhanced with additional features. In this work, we focus on design research to identify functional and aesthetic design concepts to implement into a quadrupedal robot. The aesthetic design remains relevant even for BVI users due to their sensitivity toward societal perceptions and the need for smooth integration into society. We collected data through interviews and surveys to answer specific design questions pertaining to the appearance, texture, features, and method of controlling and communicating with the robot. Our study identified essential and preferred features for a future robot dog guide, which are supported by relevant statistics aligning with each suggestion. These findings will inform the future development of user-centered designs to effectively meet the needs of BVI individuals.
... For example, technologies such as ultrasonic canes and GPS navigators help users orient themselves in space, but they have significant limitations. Among the most common shortcomings is an over-reliance on audio channels, which can place an additional load on the auditory senses that become critically important for people with vision loss [3]. ...
Article
The article examines the design features, operating principles, and functions of a new device for people with vision loss based on optoelectronic analyzers. The main advantage of this technology is its ability to convey information through a sensation of light pressure, without overloading the user's other engaged sensory channels. The technology uses 6 ultrasonic and 4 infrared sensors, which convert impulses into mechanical force, allowing users to obtain important information about the surrounding environment, recognize objects, and aid navigation. Structurally, the device consists of three main modules: a data acquisition module, a processing module, and an information output module. The data acquisition module includes the ultrasonic and infrared sensors located on the device body. The processing module uses an ARM Cortex-M4-based microcontroller to analyze the sensor signals and generate the corresponding control commands. The signal is transmitted over a Bluetooth wireless interface to the information output module, which delivers these commands as sensations of light pressure on the user's skin through 16 miniature vibration motors.
... Different strategies have been suggested for helping the blind and visually impaired, including the cane or guide dog, infrared cane-based aids, voice-activated navigation systems, laser-based walkers, and ultrasonic canes [44]- [47]. This initiative received the 2018 ICT Award. ...
Article
Full-text available
Mobility, navigation, and reading are some of the major problems for Blind or Visually Impaired People (BVIP), and BVIP still struggle with all of them. In the current boom of Artificial Intelligence (AI), where everything is becoming "smart", the problems of BVIP remain largely unsolved: they cannot even carry out their daily chores properly, while the rest of us operate our houses through voice assistants. Wearable technologies have taken off since merging with AI, and this shift can greatly ease the lives of BVIP. In this paper a prototype is presented. It is wearable like a badge on the chest, or can be fitted on spectacles, and assists BVIP in reading documents, object identification, and face recognition. It is implemented using state-of-the-art models provided by the Nvidia NGC. It carries out real-time video processing of visual input from the camera installed in it. After processing, it classifies different objects and measures their distances, using the people net and B13D proximity segmentation models of Nvidia NGC. Once the image or video analysis is done, auditory feedback is given to the user. A reading assistant in the gadget can read any document placed in front of it; reading is performed using text extraction models from images.
Chapter
In this work, we propose a novel indoor navigation system based on wearable haptic technologies, aimed at increasing the autonomy and improving the quality of life for blind individuals. Despite promising research outcomes in the field of technological travel aids, these solutions have seen limited acceptance in real-world applications, due in part to the insufficient involvement of end-users in the conceptual and design phases. Our proposed system was developed with continuous feedback from visually impaired persons. It consists of an RGB-D camera, a processing unit that computes visual information for obstacle avoidance, and the CUFF device, which provides normal and tangential force cues for guidance in unfamiliar indoor environments. Experiments with blindfolded subjects and visually impaired participants demonstrate that our system could be an effective support during indoor navigation and a viable tool for training blind individuals in the use of travel aids.
Article
Full-text available
This paper describes a wheelchair for physically disabled people developed within the UMIDAM Project. A dependent-user voice recognition system and ultrasonic and infrared sensor systems have been integrated into this wheelchair. In this way we have obtained a wheelchair that can be driven using voice commands, with the ability to avoid obstacles and to detect descending stairs or holes. The wheelchair has also been developed to allow autonomous driving (for example, following walls). The project, in which two prototypes have been produced, has been carried out entirely in the Electronics Department of the University of Alcalá (Spain) and financed by the ONCE. Electronic system configuration, the sensor system, a mechanical model, control (low-level control, control by voice commands), voice recognition, and autonomous control are considered. The results of the experiments carried out on the two prototypes are also given.
Article
Full-text available
This paper introduces error eliminating rapid ultrasonic firing (EERUF), a new method for firing multiple ultrasonic sensors in mobile robot applications. EERUF allows ultrasonic sensors to fire at rates that are five to ten times faster than those customary in conventional applications. This is possible because EERUF reduces the number of erroneous readings due to ultrasonic noise by one to two orders of magnitude. While faster firing rates improve the reliability and robustness of mobile robot obstacle avoidance and are necessary for safe travel at higher speeds (e.g., V > 0.3 m/sec), they introduce more ultrasonic noise and increase the occurrence rate of crosstalk. However, EERUF almost eliminates crosstalk, making fast firing feasible. Furthermore, EERUF's unique noise rejection capability allows multiple mobile robots to collaborate in the same environment, even if their ultrasonic sensors operate at the same frequencies. The authors have implemented and tested the EERUF method on a mobile robot and present experimental results. With EERUF, a mobile robot was able to traverse an obstacle course of densely spaced, pencil-thin (8 mm diameter) poles at up to 1 m/sec.
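One ingredient of EERUF's noise rejection is comparing consecutive readings so that one-off crosstalk spikes are discarded. The sketch below illustrates that idea in isolation; it is a simplified illustration with an assumed 10 cm tolerance, not Borenstein and Koren's implementation (which also alternates per-sensor firing delays).

```python
def filter_crosstalk(readings_cm, tol_cm=10.0):
    """Accept reading i only if it agrees with reading i-1 or i-2 within
    tol_cm; isolated outliers (e.g., crosstalk echoes) are rejected."""
    accepted = []
    for i, r in enumerate(readings_cm):
        prev = readings_cm[max(0, i - 2):i]     # up to two predecessors
        if i == 0 or any(abs(r - p) <= tol_cm for p in prev):
            accepted.append(r)
    return accepted
```

For a sensor staring at a wall at ~1 m, a single spurious 250 cm echo is dropped while the consistent readings around it survive.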
Article
An understanding of radiation patterns and the target's effect on echoes is essential to evaluating candidate sensors in terms of frequency variations, accuracy and resolution, target range, effective beam angle, and the influence of ambient temperature variations on sensor performance.
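The abstract above notes that ambient temperature variations influence sensor performance. One concrete reason is that the speed of sound in air is temperature dependent; a common first-order approximation (an assumption of this sketch, not a formula from the article) is c ≈ 331.3 + 0.606·T m/s, so a time-of-flight range estimate made at an assumed temperature is biased when the actual temperature differs.

```python
def speed_of_sound_m_s(temp_c: float) -> float:
    """First-order approximation of the speed of sound in air (m/s)."""
    return 331.3 + 0.606 * temp_c

def range_error_percent(assumed_temp_c: float, actual_temp_c: float) -> float:
    """Relative ranging error (%) caused by assuming the wrong air temperature
    in a time-of-flight calculation."""
    c_assumed = speed_of_sound_m_s(assumed_temp_c)
    c_actual = speed_of_sound_m_s(actual_temp_c)
    return (c_actual - c_assumed) / c_assumed * 100.0
```

Assuming 20 °C while operating at 35 °C, for instance, overstates ranges by a few percent, which matters for close-range obstacle thresholds.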
Article
The PIC microcontroller is enormously popular both in the U.S. and abroad, and the first edition of this book was a tremendous success because of that. However, in the 4 years since the book was first published, the electronics hobbyist market has become more sophisticated. Many users of the PIC are now comfortable shelling out the $250 price of the Professional version of PICBasic (the regular version sells for $100). This new edition is fully updated and revised to include detailed directions on using both versions of the microcontroller, with no-nonsense recommendations on which is better suited to different situations. Table of contents: Chapter 1: Microcontrollers; Chapter 2: Installing the Compiler; Chapter 3: Installing the EPIC Software; Chapter 4: CodeDesigner; Chapter 5: How to Use DOS Instead of Windows to Code, Compile, and Program; Chapter 6: Testing the PIC Microcontroller; Chapter 7: PIC 16F84 Microcontroller; Chapter 8: Reading I/O Lines; Chapter 9: PICBasic Language Reference; Chapter 10: Additional Command Reference for PICBasic Pro; Chapter 11: Speech Synthesizer; Chapter 12: Creating a New I/O Port; Chapter 13: Liquid Crystal Display (LCD); Chapter 14: Reading Resistive Sensors; Chapter 15: Analog-to-Digital (A/D) Converters; Chapter 16: DC Motor Control; Chapter 17: Stepper Motors; Chapter 18: Servomotors; Chapter 19: Controlling AC Appliances; Chapter 20: A Few More Projects (Binary Check, Setting the Clock, Digital Geiger Counter, Frequency Generator, In Closing); Suppliers List; Hexadecimal Numbers; Index.
Conference Paper
Advances in technology provide an opportunity for visually-impaired (VI) people to learn and grasp new knowledge, serving as a medium that can fill the gap between VI and sighted people. Even though various technologies are available to VI people, there is no Assistive Courseware (AC) designed especially for them. This paper aims to develop an AC for VI learners based on guidelines that were tested with users in a preliminary study and later revised in this study. In the introduction, the term AC is discussed. The objectives of the paper are to redesign the storyboard of the earlier-designed prototype and to develop a new version of the AC prototype for VI learners. The study is based on the Iterative Triangulation Methodology combined with the IntView Methodology. A summary of the methodology is also given, consisting of four phases, from specification identification to prototype development based on the revised guidelines. The final part of the paper discusses the problems gathered during preliminary testing, implementation issues, and the new features added to the AC to overcome those problems. The findings this paper addresses include the guidelines, the storyboard, and the prototype.
Article
Ultrasonic sensors have been widely used in recognizing the working environment for a mobile robot. However, their intrinsic problems, such as the specular reflection, the wide beam angle, and the slow propagation velocity, require an excessive number of sensors to be integrated to achieve the various sensing goals. This paper proposes a new measurement scheme which uses only two sets of ultrasonic sensors to determine the location and the type of target surface. By measuring the time difference between the returned signals from the target surface, which are generated by two transmitters with 1 ms difference, it classifies the type and determines the pose of the target surface. Since the proposed sensor system uses only the two sets of ultrasonic sensors to recognize and localize the target surface, it significantly simplifies the sensing system and reduces the signal processing time so that the working environment can be recognized in real time.
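The timing idea in the two-transmitter scheme above can be sketched as follows: the second transmitter fires 1 ms after the first, so subtracting that offset from the measured gap between the two returned signals leaves the extra acoustic path length, from which the surface pose and type can be inferred. The classification rule below is a toy threshold for illustration, not the authors' actual criterion.

```python
FIRING_OFFSET_S = 1e-3     # second transmitter fires 1 ms after the first
SPEED_OF_SOUND = 343.0     # m/s, assumed room temperature

def path_difference_m(t1_s: float, t2_s: float) -> float:
    """Extra acoustic path travelled by the second transmitter's echo (m),
    given the two echo arrival times."""
    return ((t2_s - t1_s) - FIRING_OFFSET_S) * SPEED_OF_SOUND

def classify(t1_s: float, t2_s: float, plane_tol_m: float = 0.005) -> str:
    """Toy rule: a near-zero path difference suggests a plane seen head-on;
    otherwise the surface is tilted or of another type."""
    if abs(path_difference_m(t1_s, t2_s)) < plane_tol_m:
        return "plane (head-on)"
    return "tilted/other"
```

The real scheme distinguishes planes, corners, and edges and estimates pose; this sketch only shows how the 1 ms firing offset is removed from the raw timing.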
Conference Paper
Research into aiding disabled people with electronic applications should be one of the focuses of human-computer interaction. The population of disabled people is large and increasing, and they should be well supported so that they can become valuable contributors to the nation. This paper discusses Assistive Courseware (AC) in terms of how visually-impaired (VI) people can make full use of it, and aims to propose guidelines for developing AC for VI people. The introduction discusses current issues regarding disabilities and the size of this population, and outlines the objectives of the paper. Next, the research methodology is outlined: there are three phases, with AC development in the second phase, which adapts the IntView courseware development methodology. The final part of the paper contains the results of the test, which propose basic guidelines for developing AC for VI people.
Conference Paper
Ultrasonic sensors have been widely used in recognizing the working environment for a mobile robot. However, their intrinsic problems, such as specular reflection, the wide beam angle, and the slow propagation velocity, require an excessive number of sensors to be integrated to achieve the various sensing goals. This paper proposes a new measurement scheme which uses only two sets of ultrasonic sensors to determine the location and the type of target surface. By measuring the time difference between the returned signals from the target surface, which are generated by two transmitters fired 1 ms apart, it classifies the type and determines the pose of the target surface. Since the proposed sensor system uses only two sets of ultrasonic sensors to recognize and localize the target surface, it significantly simplifies the sensing system and reduces the signal processing time so that the working environment can be recognized in real time.
Article
The GuideCane is a device designed to help blind or visually impaired users navigate safely and quickly among obstacles and other hazards. During operation, the user pushes the lightweight GuideCane forward. When the GuideCane's ultrasonic sensors detect an obstacle, the embedded computer determines a suitable direction of motion that steers the GuideCane and the user around it. The steering action results in a very noticeable force felt in the handle, which easily guides the user without any conscious effort on his/her part