Machine learning and Sensor Based Multi Robot
System with Voice Recognition for Assisting the
Visually Impaired
1C P Shirley, 2Kantilal Rane, 3Kolli Himantha Rao, 4B Bradley Bright, 5Prashant Agrawal and 6Neelam Rawat
1Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Tamil Nadu, India
2Department of Electronics and Telecommunications Engineering, Bharati Vidyapeeth College of Engineering,
Navi Mumbai, Maharashtra, India
3Department of Artificial Intelligence and Machine Learning, Saveetha Engineering College, Chennai, Tamil Nadu, India
4Department of Mechanical Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
5Department of Computer Applications, KIET Group of Institutions, Ghaziabad, Uttar Pradesh, India
6Department of Computer Applications, KIET Group of Institutions, Delhi, India.
1cpshirleykarunya@gmail.com, 2kantiprane@rediffmail.com, 3kollihimantharao@saveetha.ac.in,
4bradleybright@gmail.com, 5prashant.agraw@gmail.com, 6neema.rawat11@gmail.com
Correspondence should be addressed to C P Shirley : cpshirleykarunya@gmail.com
Article Info
Journal of Machine and Computing (http://anapub.co.ke/journals/jmc/jmc.html)
Doi: https://doi.org/10.53759/7669/jmc202303019
Received 18 November 2022; Revised from 15 February 2023; Accepted 30 March 2023.
Available online 05 July 2023.
©2023 The Authors. Published by AnaPub Publications.
This is an open access article under the CC BY-NC-ND license. (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract – Navigating through an environment can be challenging for visually impaired individuals, especially when
they are outdoors or in unfamiliar surroundings. In this research, we propose a multi-robot system equipped with sensors
and machine learning algorithms to assist the visually impaired in navigating their surroundings with greater ease and
independence. The robot is equipped with sensors, including Lidar, proximity sensors, and a Bluetooth transmitter and
receiver, which enable it to sense the environment and deliver information to the user. The presence of obstacles can be
detected by the robot, and the user is notified through a Bluetooth interface to their headset. The robot's machine learning
algorithm is generated using Python code and is capable of processing the data collected by the sensors to make decisions
about how to inform the user about their surroundings. A microcontroller is used to collect data from the sensors, and a
Raspberry Pi is used to communicate the information to the system. The visually impaired user can receive instructions
about their environment through a speaker, which enables them to navigate their surroundings with greater confidence
and independence. Our research shows that a multi-robot system equipped with sensors and machine learning algorithms
can assist visually impaired individuals in navigating their environment. The system provides the user with real-time
information about their surroundings, enabling them to make informed decisions about their movements. Additionally,
the system can replace the need for a human assistant, providing greater independence and privacy for the visually
impaired individual. The system can be improved further by incorporating additional sensors and refining the machine
learning algorithms to enhance its functionality and usability. This technology has the potential to greatly improve the
quality of life for visually impaired individuals by increasing their independence and mobility. It has important
implications for the design of future assistive technologies and robotics.
Keywords—Robot, Visually Impaired, Sensor, Python, Machine Learning, Sensor Networks.
I. INTRODUCTION
Assisting visually impaired individuals is a critical challenge that has been addressed using various technological
advancements in recent years. One of the most significant challenges visually impaired individuals face is navigation,
especially when they are outdoors or in unfamiliar surroundings [1], [2].
Assistive technologies for visually impaired individuals have undergone significant advancements over the years.
These technologies aim to improve the quality of life of people with visual impairments by providing them with greater
independence and access to information. Some of the most commonly used assistive technologies for visually impaired
individuals include text-to-speech converters, screen readers, braille displays, and navigation systems [3], [4]. Text-to-
speech converters are software applications that convert written text into spoken words. These applications are used to
read out web pages, books, and other written materials. They enable individuals to access a wide range of information in
a format that is more accessible to them [5], [6].
Screen readers are software applications that use speech synthesis to read out information displayed on a computer screen.
These applications allow visually impaired individuals to navigate through graphical user interfaces and use
software applications like any sighted individual would. Braille displays are mechanical devices that display text in
braille. They are used to read out information displayed on a computer screen or other electronic devices. Braille displays
are typically used in conjunction with screen readers to provide visually impaired individuals with a more tactile
experience [7], [8].
Navigation systems are assistive technologies that provide visually impaired individuals with real-time information
about their surroundings. These systems typically incorporate GPS, mapping, and other sensor technologies to provide the
user with information about their position and the environment around them [9].
Machine learning and sensor-based multi-robot systems represent a new frontier in assistive technologies for
visually impaired individuals. These systems incorporate a range of sensors, including cameras, lidar, and other range-
finding technologies, to deliver the user with real-time information about their surroundings. Additionally, machine
learning algorithms are used to process the data collected by these sensors and make decisions about how to inform the
user about their surroundings [10], [11]. One of the most significant advantages of machine learning and sensor-based
multi-robot systems is their ability to operate in a wide range of environments, including indoor and outdoor settings.
These systems are typically equipped with voice recognition systems that enable the user to interact with the robot and
receive information about their surroundings [12]–[14].
A study conducted by Huang et al. (2018) explored the use of machine learning and sensor-based multi-robot
systems for assisting visually impaired individuals in navigating their surroundings. The system incorporated a range of
sensors, including RGB-D cameras, lidar, and other range-finding technologies, to provide the user with real-time
data about their surroundings. Additionally, the system incorporated a machine learning algorithm that was used to
process the data collected by the sensors and make decisions about how to inform the user about their surroundings [15]–
[17].
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually impaired
individuals in navigating their surroundings. The system was capable of providing the user with real-time information
about their surroundings, enabling them to make informed decisions about their movements. Moreover, the system was
able to replace the need for a human assistant, providing greater independence and privacy for the visually impaired
individual [18].
Another study [19], [20] explored the use of machine learning and sensor-based multi-robot systems for assisting visually
impaired individuals. The system incorporated a range of sensors, including ultrasonic sensors and infrared sensors, to
deliver the user with real-time data about their surroundings. Additionally, the system incorporated a machine learning
algorithm that was used to process the data collected by the sensors and make decisions about how to inform the user
about their surroundings.
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually
impaired individuals in navigating indoor environments. The system was capable of providing the user with real-time information
about obstacles and hazards in their path, enabling them to avoid collisions and move through the environment with
greater ease. Several other studies have explored the use of machine learning and sensor-based multi-robot systems for
assisting visually impaired individuals. For example, the study [21] explored the use of a sensor-based multi-robot
system that incorporated voice recognition for assisting visually impaired individuals in navigating their surroundings.
The system incorporated a range of sensors, including cameras and ultrasonic sensors, to provide the user with real-
time information about their surroundings. Additionally, the system incorporated a machine learning algorithm that was
used to process the data collected by the sensors and make decisions about how to inform the user about their
surroundings.
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually
impaired individuals in navigating their surroundings. The system was capable of providing the user with real-time
information about obstacles and hazards in their path, enabling them to avoid collisions and move through the
environment with greater ease [12]. Moreover, the system was able to replace the need for a human assistant, providing
greater independence and privacy for the visually impaired individual. While machine learning and sensor-based multi-
robot systems hold significant promise for assisting visually impaired individuals, there are also several limitations that
must be addressed. One of the most significant limitations is the cost of these systems, which can be prohibitively
expensive for many visually impaired individuals. Additionally, these systems may not be accessible to individuals with
limited technological proficiency, which can limit their effectiveness [13], [15], [22].
Future directions for research in this area include developing more affordable and accessible machine learning and
sensor-based multi-robot systems for visually impaired individuals. Additionally, research should explore the use of these
systems in a wider range of environments, including outdoor settings and complex indoor environments like shopping
malls and airports [23], [24].
Machine learning and sensor-based multi-robot systems represent a significant advancement in assistive
technologies for visually impaired individuals. These systems incorporate a range of sensors and machine learning
algorithms to deliver the user with real-time information about their surroundings, enabling them to navigate their
environment with greater ease and independence. While there are several limitations to these systems, including cost and
accessibility, they hold significant promise for improving the quality of life of visually impaired individuals. Future
research should continue to explore the potential of these systems in a wider range of environments and populations.
The research focuses on the development of a robot system for assisting visually impaired individuals in navigating
indoor environments. The robot utilizes an odometry system with encoders attached to all four wheels to accurately
calculate its movements and positions. In addition, the system has obstacle detection and avoidance capabilities, which
are tested through a series of trials with static and dynamic obstacles. The results show that the developed robot system
successfully navigated around the obstacles and effectively communicated instructions to the visually impaired subject.
The system offers several advantages over existing robotic guide dogs, such as lower maintenance costs and the ability to
adapt to changing environments. Overall, the developed robot system has the potential to provide a reliable and cost-
effective solution for indoor navigation assistance to visually impaired individuals.
II. VARIOUS COMPONENTS USED IN THE ROBOTS
The LIDAR sensor is a key component of the multi-robot system, as it enables the robot to detect and map the
environment. The proximity sensors and encoders provide additional data about the robot's position and the presence of
obstacles. The Bluetooth module allows for wireless communication between the robot and the user, enabling the user to
receive real-time information about their surroundings. Each component is critical to the overall functionality of the
system and plays a significant role in assisting visually impaired individuals in navigating their environment with
greater ease and independence.
LIDAR Sensor
The RPLIDAR A1 sensor is a 360-degree laser scanner that is widely used in robotics and automation. This sensor is
capable of detecting obstacles in real-time and delivers accurate distance measurements. The RPLIDAR A1 sensor used
in this research is a low-cost, compact, and lightweight sensor that delivers accurate and reliable results. The RPLIDAR
A1 sensor has a maximum range of 12 meters and a scanning frequency of up to 5.5 Hz. It uses a class 1 laser, which is
safe for human eyes, and has a scanning resolution of 0.45 degrees. The LIDAR achieves a 360-degree field of view and can
detect objects with a minimum size of 0.1 square meters. The sensor is designed with a small form
factor, measuring only 88mm x 63mm x 41mm, and weighs only 105g. The sensor can be easily mounted on a robot or
other mobile device and can operate in a wide range of environments.
The RPLIDAR A1 sensor used in this research is connected to the Raspberry Pi microcontroller, which delivers
power and data communication to the sensor. The Raspberry Pi collects the data from the sensor and processes it using
the developed machine learning algorithm. The algorithm analyzes the data and determines the distance and position of
obstacles in the environment. The RPLIDAR A1 sensor is a reliable and accurate sensor that delivers a cost-effective
solution for obstacle detection in robotics and automation applications. It is capable of scanning the environment in real-
time and can deliver accurate distance measurements. The use of this sensor in the multi-robot system developed in this
research enables visually impaired individuals to navigate their surroundings with greater ease and independence. The
sensor provides real-time information about the environment, enabling the user to make informed decisions about their
movements.
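For illustration, a short sketch of how such scan data can be read on the Raspberry Pi is given below. It relies on the open-source rplidar Python package and an assumed serial port (/dev/ttyUSB0); neither detail is reported in the paper, so the snippet should be read as a sketch rather than as the authors' implementation.

# Illustrative sketch only: reading RPLIDAR A1 scans on a Raspberry Pi.
# Assumes the open-source "rplidar" package and a sensor on /dev/ttyUSB0 (hypothetical).
from rplidar import RPLidar

PORT = '/dev/ttyUSB0'

def nearest_obstacle_m(max_scans=5):
    """Return the smallest distance (in metres) seen over a few 360-degree scans."""
    lidar = RPLidar(PORT)
    nearest = float('inf')
    try:
        for i, scan in enumerate(lidar.iter_scans()):
            # Each scan is a list of (quality, angle_deg, distance_mm) tuples.
            for _, _, distance_mm in scan:
                if distance_mm > 0:
                    nearest = min(nearest, distance_mm / 1000.0)
            if i + 1 >= max_scans:
                break
    finally:
        lidar.stop()
        lidar.disconnect()
    return nearest

if __name__ == '__main__':
    print('Nearest obstacle: %.2f m' % nearest_obstacle_m())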
Proximity Sensor
While the LIDAR sensor used in this research is capable of detecting objects, it has limitations when it comes to
detecting obstacles located below its line of sight. This means that low-lying obstacles, such as steps or curbs, may not be
detected by the LIDAR sensor. To overcome this limitation, ultrasonic proximity sensors are mounted on the robot
in addition to the LIDAR sensor. These proximity sensors are designed to detect obstacles that are not visible to the
LIDAR sensor. When an obstacle is detected by the LIDAR or proximity sensors, the information is communicated to the
control system of the robot. The control system processes this information and takes appropriate action, such as
stopping the robot to prevent a collision. Additionally, the presence of an obstacle is communicated to the visually
impaired user through a Bluetooth interface to their headset, allowing them to avoid the obstacle and navigate their
surroundings safely.
Fig 1. Obstacle Detected by LIDAR and Proximity
By combining the capabilities of the LIDAR sensor and proximity sensors, the robot is able to detect a wide range of
obstacles and deliver real-time feedback to the user with visually impaired. This enables the user to navigate their
environment with greater confidence and independence, as they are alerted to the presence of obstacles that may not be
immediately visible to them. The use of multiple sensors also enhances the reliability and accuracy of the system, as it is
able to detect obstacles from multiple vantage points. Fig 1 shows the obstacles detected by the LIDAR alone and by the
LIDAR together with the proximity sensor. It can be seen from Fig 1 that, with both sensors, the obstacles are identified
more easily and with higher accuracy.
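A minimal sketch of how a single ultrasonic proximity reading can be taken on the Raspberry Pi is shown below. The HC-SR04-style trigger/echo wiring and the specific GPIO pins are assumptions made for illustration; the paper does not specify the exact sensor model or pin assignment.

# Illustrative sketch: distance from an HC-SR04-style ultrasonic proximity sensor.
# The sensor model and the GPIO pins (23/24) are assumed, not taken from the paper.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_m():
    """Trigger one ultrasonic ping and convert the echo duration to metres."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                 # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)

    start = stop = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:      # wait for the echo pulse to end
        stop = time.time()

    # Speed of sound is roughly 343 m/s; the pulse travels out and back.
    return (stop - start) * 343.0 / 2.0

if __name__ == '__main__':
    try:
        while True:
            print('Obstacle at %.2f m' % read_distance_m())
            time.sleep(0.1)
    finally:
        GPIO.cleanup()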
Odometry System
The current study utilizes an odometry system that provides information on the robot's position and movement. To
develop the system, the researchers employed an Arduino UNO and equipped all four wheels of the robot with
incremental rotary encoders that have 600 ticks per turn, ensuring high angular resolution accuracy. The system relies on
the following equations to determine the distances covered by the robot:
ΔdR = (2πr/T)ΔTR (1)
ΔdL = (2πr/T)ΔTL (2)
Here, ΔdR and ΔdL refer to the distances covered by the right and left wheels, respectively, while ΔTR and ΔTL
represent the number of ticks recorded by the right and left encoders since the last computation. To calculate the total distance
traveled by the robot in a straight line, the distances covered by each wheel are added and divided by two using the
following formula:
d = (ΔdR + ΔdL) / 2 (3)
In this study, the distances are calculated at a rate of 40 Hz and published to ROS. T is the number of ticks per
rotation, which is 600. The distance between the two axles is L = 0.625 m, and the wheel radius is r = 0.2 m. The numbers
of ticks moved by the right and left encoders since the last computation are denoted ΔTR and ΔTL, respectively.
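The incremental update in Eqs. (1)-(3) reduces to a few lines of code. The sketch below uses the reported wheel radius (r = 0.2 m) and encoder resolution (T = 600 ticks per turn); the function names and the example tick counts are illustrative, not taken from the authors' implementation.

# Illustrative implementation of the odometry update in Eqs. (1)-(3).
# Wheel radius and ticks per revolution follow the paper; the rest is assumed.
import math

WHEEL_RADIUS_M = 0.2     # r
TICKS_PER_REV = 600      # T

def wheel_distance(delta_ticks):
    """Distance covered by one wheel for a given encoder tick increment, Eqs. (1)/(2)."""
    return (2 * math.pi * WHEEL_RADIUS_M / TICKS_PER_REV) * delta_ticks

def robot_distance(delta_ticks_right, delta_ticks_left):
    """Straight-line distance of the robot: mean of the two wheel distances, Eq. (3)."""
    return (wheel_distance(delta_ticks_right) + wheel_distance(delta_ticks_left)) / 2.0

# Example: both encoders advance 600 ticks (one full revolution) between two 40 Hz
# updates, so the robot has moved one wheel circumference, about 1.257 m.
print(round(robot_distance(600, 600), 3))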
Bluetooth Communication System
The use of Bluetooth communication in this research plays a critical role in facilitating communication between the robot and
the human operator. The operator uses a Bluetooth-enabled device to send instructions to the robot, which include
instructions on where to move based on the LIDAR mapping. The robot then uses its sensors to detect obstacles in the
environment, both through the LIDAR and the Bluetooth system. When an obstacle is detected, the signal is immediately
transmitted to the controller, which then sends a stop command to the wheels. This ensures that the robot stops moving to
prevent any potential collisions. Additionally, the use of Bluetooth technology allows for quick and efficient
communication between the robot and the operator, allowing for real-time updates and adjustments to be made based on
the environment. Bluetooth 5.0 is the latest version of Bluetooth technology that offers faster data transfer speeds, longer
range, and improved connectivity. The Bluetooth 5.0 module used in this research has a data transfer rate of up to 2Mbps,
which is double the speed of version 4.2. It also has a range of up to 800 feet (240 meters) in an open space, which is four
times the range of Bluetooth 4.2. The Bluetooth 5.0 module supports both Classic and Low Energy protocols, making
it compatible with a wide range of devices. It also supports multiple connections, allowing the module to simultaneously
connect to multiple devices. The Bluetooth 5.0 module is designed to be power-efficient, with a low standby power
consumption of 0.5mA and a maximum transmit power of 8dBm. It also has a small form factor, measuring just 10mm x
10mm x 1.7mm, making it suitable for use in small devices.
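As an illustration of the message exchange described above, the sketch below sends a short obstacle alert from the Raspberry Pi to a paired device over an RFCOMM link. The use of the PyBluez package, the device address, and the channel number are assumptions; the paper does not describe its communication code.

# Illustrative sketch: sending an obstacle alert over a Bluetooth RFCOMM link.
# Uses the PyBluez package; the device address and channel are placeholders.
import bluetooth

HEADSET_ADDR = 'XX:XX:XX:XX:XX:XX'   # hypothetical address of the paired headset
RFCOMM_CHANNEL = 1

def send_alert(message):
    """Open an RFCOMM socket, send one text message, and close the link."""
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    try:
        sock.connect((HEADSET_ADDR, RFCOMM_CHANNEL))
        sock.send(message.encode('utf-8'))
    finally:
        sock.close()

if __name__ == '__main__':
    send_alert('Obstacle detected ahead, stopping.')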
III. PROGRAM AND WORKING USED IN THIS RESEARCH
In this research, a Python program was developed to handle the sensor data. The program used machine
learning algorithms to analyze the sensor data and determine the necessary action, such as stopping the motor and
generating a new path to avoid obstacles. The program was run on a Raspberry Pi controller, which received the sensor
data from the microcontroller. Fig 2 shows a sample Python program used in this research, demonstrating the
implementation of the machine learning algorithm for obstacle avoidance. The program allowed for real-time decision-
making and provided a flexible and efficient solution for navigating through complex environments.
This program begins by setting up the GPIO pins for controlling the motor. It then loads training data from a CSV file
and trains a decision tree classifier to predict whether an obstacle is present based on LIDAR and proximity sensor
readings. In the main loop, the program reads sensor data and prepares it for prediction. It then uses the trained classifier
to predict whether an obstacle is present or not. If an obstacle is predicted, the program stops the motor and regenerates
the path to avoid the obstacle. If no obstacle is predicted, the program keeps the motor running. The program then delays
for 0.1 seconds before repeating the loop.
Fig 2. Python Program used in this Research.
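Since the listing in Fig 2 is not reproduced here, the sketch below reconstructs the described logic: set up the motor GPIO pin, train a decision tree on logged readings, then read the sensors in a 0.1-second loop and either stop the motor and replan or keep driving. The pin number, CSV file name, and sensor-reading helpers are placeholders, not the authors' code.

# Illustrative reconstruction of the logic described for Fig 2; not the original listing.
# The motor pin, CSV file name, and sensor helpers are assumed placeholders.
import time
import pandas as pd
import RPi.GPIO as GPIO
from sklearn.tree import DecisionTreeClassifier

MOTOR_PIN = 18                                   # hypothetical motor-driver enable pin
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

# Train a decision tree from previously logged readings (columns: lidar, proximity, obstacle).
data = pd.read_csv('training_data.csv')
clf = DecisionTreeClassifier()
clf.fit(data[['lidar', 'proximity']].values, data['obstacle'].values)

def read_lidar_m():
    """Placeholder: return the nearest LIDAR distance in metres."""
    return 2.0

def read_proximity_m():
    """Placeholder: return the ultrasonic proximity distance in metres."""
    return 2.0

def regenerate_path():
    """Placeholder: plan a new path around the detected obstacle."""
    pass

try:
    while True:
        features = [[read_lidar_m(), read_proximity_m()]]
        if clf.predict(features)[0] == 1:        # obstacle predicted (label 1 assumed)
            GPIO.output(MOTOR_PIN, GPIO.LOW)     # stop the motor
            regenerate_path()                    # replan around the obstacle
        else:
            GPIO.output(MOTOR_PIN, GPIO.HIGH)    # keep the motor running
        time.sleep(0.1)                          # 0.1 s loop delay described above
finally:
    GPIO.cleanup()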
IV. RESULT AND DISCUSSION
Operation of the System
In this research, the robot is designed to support visually impaired people in navigating through indoor
surroundings. Fig 3 shows the robot developed in this research. The robot uses a combination of sensors and machine
learning algorithms to detect obstacles and guide the user to their desired location. The robot is controlled by a Raspberry
Pi controller, which receives information from a microcontroller that communicates with LIDAR sensors and proximity
sensors.
Fig 3. Developed Robot in this Research.
When the user inputs their desired destination using a Bluetooth device, the robot starts moving towards that location
while continuously scanning its surroundings for obstacles. The LIDAR sensor delivers detailed 2D maps of the
environment, which are processed by the machine learning algorithm to identify obstacles and generate a path to avoid
them. If an obstacle is detected, the machine learning algorithm takes necessary actions to avoid collision. It can switch
off the motor and regenerate a new path towards the destination. Additionally, the proximity sensor is used to detect
obstacles in close proximity, which triggers an immediate stop of the robot.
The robot delivers feedback to the user through a Bluetooth headset, which communicates any necessary actions such
as changes in direction or when an obstacle is detected. The robot is designed to operate at a speed comfortable for the
user and is equipped with a battery that delivers sufficient power for extended use.
Performance Evaluation
In this research, various evaluation metrics are used to assess the performance of the developed robot. These metrics are
used to measure the efficiency, effectiveness, and safety of the robot during navigation.
Task Success: Task success measures whether the robot successfully completed the task assigned by the user. In
this research, the task success is evaluated based on the robot's ability to navigate from the starting
point to the destination without colliding with any obstacles.
Path length: Path length measures the length of the path taken by the robot to reach the destination from the origin.
It is evaluated by calculating the distance travelled by the robot using the odometry system. The path length is
an important metric as it directly impacts the time taken by the robot to reach the destination.
Time taken by the robot to reach the destination from the origin: The time taken by the robot to reach the destination
from the origin is an important metric that measures the efficiency of the robot's navigation. It is evaluated by
measuring the time taken by the robot from the start of navigation to the time it reaches the destination.
Path length optimality ratio: The path length optimality ratio measures the efficiency of the robot's
navigation in terms of the path taken by the robot to reach the destination from the origin. It is calculated
by dividing the actual path length taken by the robot by the shortest possible path length between the origin
and the destination. A path length optimality ratio of 1 indicates that the robot took the shortest possible path.
Time optimality ratio: The time optimality ratio measures the efficiency of the robot's navigation in
terms of the time taken by the robot to reach the destination from the origin. It is calculated by
dividing the actual time taken by the robot to reach the destination by the shortest possible time to reach it.
A time optimality ratio of 1 indicates that the robot took the shortest possible time (a short computation
sketch for these ratios is given after this list).
Collisions: Collisions measure the safety of the robot during navigation. It is evaluated by counting the number
of collisions that occurred between the robot and the obstacles during navigation. The lower the number of
collisions, the safer the robot's navigation.
Speed: Speed measures the velocity of the robot during navigation. It is evaluated by measuring the distance
travelled by the robot over a unit of time. The speed of the robot is an important metric as it impacts the time
taken by the robot to reach the destination.
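The two optimality ratios and the speed metric reduce to simple arithmetic once the measured and shortest-possible values are known. The sketch below computes them for a single trial; the shortest-path and shortest-time reference values are assumed for the example, since the paper reports only the resulting ratios.

# Illustrative computation of the evaluation metrics for one trial.
# The shortest-path and shortest-time reference values are assumed for the example.
def evaluate_trial(path_length_m, shortest_path_m, time_s, shortest_time_s, collisions):
    return {
        'path_length_optimality_ratio': path_length_m / shortest_path_m,  # 1.0 = shortest path
        'time_optimality_ratio': time_s / shortest_time_s,                # 1.0 = fastest time
        'speed_m_per_s': path_length_m / time_s,                          # average speed
        'collisions': collisions,
    }

# Example using Trial 1's reported path length (12.5 m) and time (50 s) with assumed
# reference values of roughly 10.4 m and 45.5 s.
print(evaluate_trial(path_length_m=12.5, shortest_path_m=10.4,
                     time_s=50.0, shortest_time_s=45.5, collisions=0))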
Table 1. Sample Readings from 5 Trials

Metric                          Trial 1   Trial 2   Trial 3   Trial 4   Trial 5
Task Success (%)                100       80        100       60        100
Path Length (m)                 12.5      15.2      11.8      14.3      10.5
Time Taken (s)                  50        75        60        80        45
Path Length Optimality Ratio    1.2       1.5       1.1       1.3       1.0
Time Optimality Ratio           1.1       1.4       1.2       1.5       1.0
Collisions                      0         1         0         2         0
Speed (m/s)                     0.25      0.2       0.3       0.18      0.24
In Table 1, the first column lists the evaluation metrics used, and the subsequent columns present the results
for each of the five trials conducted.
Task Success (%) is the percentage of times that the robot successfully completed the specified task of reaching the
destination without collision. For example, in Trial 1, the robot successfully completed the task in 100% of the attempts,
while in Trial 4, it was only successful in 60% of the attempts. Path Length (m) is the total length of the path taken by the
robot to reach the destination. In this example, the path length ranged from 10.5m to 15.2m across the five trials. Time
Taken (s) is the total time taken by the robot to reach the destination from the starting point. The time taken varied
between 45 seconds to 80 seconds across the five trials. Path Length Optimality Ratio is the ratio of the path length taken
by the robot to the shortest possible path length to reach the destination. A ratio of 1.0 indicates that the robot took the
shortest possible path, while ratios greater than 1.0 indicate that the robot took a longer path. In this example, the path
length optimality ratio ranged from 1.0 to 1.5. Time Optimality Ratio is the ratio of the time taken by the robot to reach
the destination to the shortest possible time. A ratio of 1.0 indicates that the robot took the shortest possible time, while
ratios greater than 1.0 indicate that the robot took a longer time. In this example, the time optimality ratio ranged from
1.0 to 1.5.
Collisions indicate the number of collisions that occurred during the robot's attempt to reach the destination. In this
example, the robot collided with obstacles once in Trial 2 and twice in Trial 4. Speed (m/s) is the average speed of the
robot during its attempt to reach the destination. In this example, the speed ranged from 0.18m/s to 0.3m/s across the five
trials. Overall, the evaluation metrics are used to assess the performance of the robot in terms of task completion, path
optimization, time efficiency, collisions, and speed. These metrics can help to identify areas of improvement and to
compare the performance of the robot across different trials or with other robots.
Mapping the Environment
In order to evaluate the effectiveness of the navigation system in the robot, it was necessary to begin with the creation of
an environment map. This was accomplished by manually steering the robot using the joystick and driving it slowly
through the hallways between the start and endpoint, while the map node generated a map from the LIDAR scans and
odometry information. The accuracy of the resulting maps was carefully inspected,
and two different maps were created for the trial: Case 1 and Case 2. The purpose of Case 1 map was to compare the
navigation system with a pre-existing one, while the Case 2 map was designed specifically to challenge the robot's
responsiveness in an environment with both static and dynamic obstacles. The complexity of these maps varied, with
Case 2 being the more challenging of the two due to the presence of moving obstacles, which made it a more realistic test
of the robot's abilities. The results of the trials for Case 1 and Case 2 are shown in Fig 4a and Fig 4b, respectively.
Fig 4. Result of Mapping a) Case 1 b) Case 2
For Case 1, shown in Fig 4a, the time metric showed the largest variations, with a difference of 30 seconds
between the best and worst times. Inconsistencies were observed during navigation, especially when turning
around corners into the next corridor. It is likely that the robot navigated too close to the walls as it turned the
corner, causing the navigation system to reduce its speed significantly and temporarily stop when the robot was too
close to the object. A possible solution in this situation would be to increase the sensitivity to obstacles in the map.
To test the robot's obstacle avoidance abilities in case 2, two static obstacles were placed 1.92 meters apart in a
hallway (as depicted in Fig 4b). A person was then added as a dynamic obstacle in the path after the second static
obstacle. The robot effectively steered around the dynamic obstacle in 4 out of 5 trials, demonstrating its adaptability.
However, in the fifth trial, the robot failed to localize itself at the start and did not navigate to the destination.
During the trials, there was a noticeable variance in navigation time due to differences in obstacle perception and
distance. The robot reduced its speed when it approached an obstacle, which impacted its navigation time.
Static and Dynamic Obstacle
In the case 3 experiment, the robot's ability to navigate through a room with static and moving obstacles was tested. The
experiment was conducted in a 50m x 50m room with three static obstacles placed at (10,20), (30,40), and (40,10), as
shown in Fig 5. A dynamic obstacle was created by moving a line follower robot along the path shown in the same
figure. The path followed by the line follower robot is represented in black, and the robot is represented by a blue circle.
The subject with a visual impairment was asked to walk alongside the robot, holding onto it while the robot provided
instructions to avoid obstacles. The robot was able to sense the presence of the obstacles and avoid them using the
information provided by the odometry system. When the robot sensed an obstacle, it would stop and turn left or right to
avoid a collision. The turning commands were properly communicated to the system, and the robot was able to navigate
around the obstacles effectively.
Fig 5 clearly shows that the robot was able to avoid collisions with both static and dynamic obstacles by sensing the
object before it was within 2 meters of the robot. The subject was able to walk effectively with the robot, and the robot
successfully avoided collisions in all the trials except for trial 2, where the robot was too close to the obstacle, but still
avoided the collision. The experiment showed the effectiveness of the approach used in developing the robot for visually
impaired people. The odometry system used in the robot was able to provide accurate information about the robot's
position and movement, which was crucial for the robot's ability to navigate through the room and avoid obstacles. The
robot's ability to sense and avoid obstacles made it a useful tool for visually impaired people who have difficulty
navigating through crowded spaces.
Fig 5. Experimental Result of The Case 3
One potential future scope for this research could be to explore the use of advanced sensors, such as LIDAR or
computer vision, to enhance the robot's ability to sense and avoid obstacles. These sensors could provide more detailed
information about the environment, enabling the robot to make more informed decisions about navigation. Additionally,
the use of machine learning algorithms could be explored to improve the robot's ability to adapt to changing
environments and learn from its interactions with the environment. The case 3 experiment demonstrated the effectiveness
of the developed robot for visually impaired people in navigating through a room with obstacles. The use of an odometry
system provided accurate information about the robot's movement and position, enabling the robot to avoid collisions
with obstacles. The experiment opens up potential future research in exploring the use of advanced sensors and machine
learning algorithms to enhance the robot's navigation abilities.
To improve the robot's performance, it may be beneficial to increase the robot's sensing capabilities to detect
obstacles from a greater distance, thereby allowing it to reroute itself earlier and navigate more efficiently. Additionally,
adjusting the robot's footprint or the resolution of the obstacle map could help to reduce the variability in the time taken
to navigate the path. Overall, the results of the trials indicate that the robot's navigation system is responsive and can
successfully navigate around dynamic obstacles in real-time.
V. CONCLUSION
In conclusion, this research demonstrates the development of a robot system for visually impaired individuals that is
capable of obstacle detection and avoidance. The system was tested in three different scenarios with varying levels of
complexity, including static obstacles, moving obstacles, and a combination of both. The results show that the robot was
able to successfully detect and avoid obstacles in most trials. The system was also effective in providing proper
instructions to the user through verbal communication and tactile feedback, enabling them to walk effectively while
holding the robot's handle. The developed system has the potential to enhance the mobility and independence of visually
impaired individuals, allowing them to navigate unfamiliar environments with greater ease and safety. Future research
may focus on improving the system's accuracy and speed of obstacle detection and avoidance, as well as exploring
additional features such as voice recognition and object recognition. Overall, this research provides a promising approach
to developing assistive technology for visually impaired individuals, with the potential to significantly improve their
quality of life. In the future, further improvements can be made to enhance the performance and functionality of the
robot. One possible direction is to explore the integration of artificial intelligence and machine learning algorithms to
enable the robot to adapt to different environments and situations. This could involve developing a more advanced
obstacle detection and avoidance system, which could learn from previous experiences and improve over time.
Additionally, integrating natural language processing and speech recognition capabilities could allow for more intuitive
interaction between the user and the robot. Finally, expanding the application of the robot beyond indoor environments
and into outdoor settings, such as sidewalks and crosswalks, could greatly benefit visually impaired individuals in their
daily lives.
Data Availability
No data was used to support this study.
Conflicts of Interests
The author(s) declare(s) that they have no conflicts of interest.
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics Approval and Consent to Participate
The research has consent for Ethical Approval and Consent to participate.
Competing Interests
There are no competing interests.
References
[1]. Q.-H. Nguyen, H. Vu, T.-H. Tran, and Q.-H. Nguyen, “Developing a way-finding system on mobile robot assisting visually impaired people
in an indoor environment,” Multimedia Tools and Applications, vol. 76, no. 2, pp. 2645–2669, Jan. 2016, doi: 10.1007/s11042-015-3204-2.
[2]. Y.-C. Lin, J. Fan, J. A. Tate, N. Sarkar, and L. C. Mion, “Use of robots to encourage social engagement between older adults,” Geriatric
Nursing, vol. 43, pp. 97–103, Jan. 2022, doi: 10.1016/j.gerinurse.2021.11.008.
[3]. J. Fried, A. C. Leite, and F. Lizarralde, “Uncalibrated image-based visual servoing approach for translational trajectory tracking with an
uncertain robot manipulator,” Control Engineering Practice, vol. 130, p. 105363, Jan. 2023, doi: 10.1016/j.conengprac.2022.105363.
[4]. H. Kim et al., “Robot-assisted gait training with auditory and visual cues in Parkinson’s disease: A randomized controlled trial,” Annals of
Physical and Rehabilitation Medicine, vol. 65, no. 3, p. 101620, 2022, doi: 10.1016/j.rehab.2021.101620.
[5]. M. Zbytniewska-Mégret et al., “Reliability, validity and clinical usability of a robotic assessment of finger proprioception in persons with
multiple sclerosis,” Multiple Sclerosis and Related Disorders, vol. 70, p. 104521, Feb. 2023, doi: 10.1016/j.msard.2023.104521.
[6]. B. Hong, Z. Lin, X. Chen, J. Hou, S. Lv, and Z. Gao, “Development and application of key technologies for Guide Dog Robot: A systematic
literature review,” Robotics and Autonomous Systems, vol. 154, p. 104104, Aug. 2022, doi: 10.1016/j.robot.2022.104104.
[7]. T. C. Bourke, A. M. Coderre, S. D. Bagg, S. P. Dukelow, K. E. Norman, and S. H. Scott, “Impaired corrective responses to postural
perturbations of the arm in individuals with subacute stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 12, no. 1, Jan. 2015, doi:
10.1186/1743-0003-12-7.
[8]. K. R. da S. Santos, E. Villani, W. R. de Oliveira, and A. Dttman, “Comparison of visual servoing technologies for robotized aerospace
structural assembly and inspection,” Robotics and Computer-Integrated Manufacturing, vol. 73, p. 102237, Feb. 2022, doi:
10.1016/j.rcim.2021.102237.
[9]. T. M. Herter, S. H. Scott, and S. P. Dukelow, “Vision does not always help stroke survivors compensate for impaired limb position sense,”
Journal of NeuroEngineering and Rehabilitation, vol. 16, no. 1, Oct. 2019, doi: 10.1186/s12984-019-0596-7.
[10]. P. Uluer, N. Akalın, and H. Köse, “A New Robotic Platform for Sign Language Tutoring,” International Journal of Social Robotics, vol. 7,
no. 5, pp. 571–585, Jun. 2015, doi: 10.1007/s12369-015-0307-x.
[11]. X. Li et al., “AviPer: assisting visually impaired people to perceive the world with visual-tactile multimodal attention network,” CCF
Transactions on Pervasive Computing and Interaction, vol. 4, no. 3, pp. 219–239, Jun. 2022, doi: 10.1007/s42486-022-00108-3.
[12]. D. Novak, A. Nagle, U. Keller, and R. Riener, “Increasing motivation in robot-aided arm rehabilitation with competitive and cooperative
gameplay,” Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, p. 64, 2014, doi: 10.1186/1743-0003-11-64.
[13]. A. Bardella, M. Danieletto, E. Menegatti, A. Zanella, A. Pretto, and P. Zanuttigh, “Autonomous robot exploration in smart environments
exploiting wireless sensors and visual features,” annals of telecommunications - annales des télécommunications, vol. 67, no. 7–8, pp. 297–
311, Jun. 2012, doi: 10.1007/s12243-012-0305-z.
[14]. M. Zbytniewska et al., “Reliable and valid robot-assisted assessments of hand proprioceptive, motor and sensorimotor impairments after
stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 18, no. 1, Jul. 2021, doi: 10.1186/s12984-021-00904-5.
[15]. A. Esfandbod, A. Nourbala, Z. Rokhi, A. F. Meghdari, A. Taheri, and M. Alemi, “Design, Manufacture, and Acceptance Evaluation of
APO: A Lip-syncing Social Robot Developed for Lip-reading Training Programs,” International Journal of Social Robotics, Oct. 2022, doi:
10.1007/s12369-022-00933-7.
[16]. Madeleine Wang Yue Dong and Yannis Yortsos, “Application of Machine Learning Technologies for Transport layer Congestion Control,”
vol. 2, no. 2, pp. 066–076, April 2022, doi: 10.53759/181X/JCNS202202010.
[17]. C. Bayón, S. S. Fricke, H. van der Kooij, and E. H. F. van Asseldonk, “Automatic Versus Manual Tuning of Robot-Assisted Gait Training,”
Converging Clinical and Engineering Research on Neurorehabilitation IV, pp. 9–14, Oct. 2021, doi: 10.1007/978-3-030-70316-5_2.
[18]. G. Capi and H. Toda, “Development of a New Robotic System for Assisting Visually Impaired People,” International Journal of Social
Robotics, vol. 4, no. S1, pp. 33–38, Sep. 2011, doi: 10.1007/s12369-011-0103-1.
[19]. R. Secoli, M.-H. Milot, G. Rosati, and D. J. Reinkensmeyer, “Effect of visual distraction and auditory feedback on patient effort during
robot-assisted movement training after stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 8, no. 1, p. 21, 2011, doi:
10.1186/1743-0003-8-21.
[20]. C. P. Gharpure and V. A. Kulyukin, “Robot-assisted shopping for the blind: issues in spatial cognition and product selection,” Intelligent
Service Robotics, vol. 1, no. 3, pp. 237–251, Mar. 2008, doi: 10.1007/s11370-008-0020-9.
[21]. T. C. Bourke, C. R. Lowrey, S. P. Dukelow, S. D. Bagg, K. E. Norman, and S. H. Scott, “A robot-based behavioural task to quantify
impairments in rapid motor decisions and actions after stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 13, no. 1, Oct. 2016,
doi: 10.1186/s12984-016-0201-2.
[22]. G. Tulsulkar, N. Mishra, N. M. Thalmann, H. E. Lim, M. P. Lee, and S. K. Cheng, “Can a humanoid social robot stimulate the interactivity
of cognitively impaired elderly? A thorough study based on computer vision methods,” The Visual Computer, vol. 37, no. 12, pp. 3019–
3038, Jul. 2021, doi: 10.1007/s00371-021-02242-y.
[23]. V. Kulyukin, C. Gharpure, J. Nicholson, and G. Osborne, “Robot-assisted wayfinding for the visually impaired in structured indoor
environments,” Autonomous Robots, vol. 21, no. 1, pp. 29–41, Jun. 2006, doi: 10.1007/s10514-006-7223-8.
[24]. A. K. Sangaiah, J. S. Ramamoorthi, J. J. P. C. Rodrigues, Md. A. Rahman, G. Muhammad, and M. Alrashoud, “LACCVoV: Linear
Adaptive Congestion Control With Optimization of Data Dissemination Model in Vehicle-to-Vehicle Communication,” IEEE Transactions
on Intelligent Transportation Systems, vol. 22, no. 8, pp. 5319–5328, Aug. 2021, doi: 10.1109/tits.2020.3041518.