Machine Learning and Sensor-Based Multi-Robot System with Voice Recognition for Assisting the Visually Impaired
1C P Shirley, 2Kantilal Rane, 3Kolli Himantha Rao, 4B Bradley Bright, 5Prashant Agrawal and 6Neelam Rawat
1Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Tamil Nadu, India
2Department of Electronics and Telecommunications Engineering, Bharati Vidyapeeth College of Engineering,
Navi Mumbai, Maharashtra, India
3Department of Artificial Intelligence and Machine Learning, Saveetha Engineering College, Chennai, Tamil Nadu, India
4Department of Mechanical Engineering, Panimalar Engineering College, Chennai, Tamil Nadu, India
5Department of Computer Applications, KIET Group of Institutions, Ghaziabad, Uttar Pradesh, India
6Department of Computer Applications, KIET Group of Institutions, Delhi, India.
1cpshirleykarunya@gmail.com, 2kantiprane@rediffmail.com, 3kollihimantharao@saveetha.ac.in,
4bradleybright@gmail.com, 5prashant.agraw@gmail.com, 6neema.rawat11@gmail.com
Correspondence should be addressed to C P Shirley : cpshirleykarunya@gmail.com
Article Info
Journal of Machine and Computing (http://anapub.co.ke/journals/jmc/jmc.html)
Doi: https://doi.org/10.53759/7669/jmc202303019
Received 18 November 2022; Revised from 15 February 2023; Accepted 30 March 2023.
Available online 05 July 2023.
©2023 The Authors. Published by AnaPub Publications.
This is an open access article under the CC BY-NC-ND license. (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract Navigating through an environment can be challenging for visually impaired individuals, especially when
they are outdoors or in unfamiliar surroundings. In this research, we propose a multi-robot system equipped with sensors
and machine learning algorithms to assist the visually impaired in navigating their surroundings with greater ease and
independence. The robot is equipped with sensors, including Lidar, proximity sensors, and a Bluetooth transmitter and
receiver, which enable it to sense the environment and deliver information to the user. The presence of obstacles can be
detected by the robot, and the user is notified through a Bluetooth interface to their headset. The robot's machine learning
algorithm is implemented in Python and processes the data collected by the sensors to decide how to inform the user
about their surroundings. A microcontroller collects data from the sensors, and a Raspberry Pi communicates the
information to the system. The visually impaired user can receive instructions
about their environment through a speaker, which enables them to navigate their surroundings with greater confidence
and independence. Our research shows that a multi-robot system equipped with sensors and machine learning algorithms
can assist visually impaired individuals in navigating their environment. The system provides the user with real-time
information about their surroundings, enabling them to make informed decisions about their movements. Additionally,
the system can replace the need for a human assistant, providing greater independence and privacy for the visually
impaired individual. The system can be improved further by incorporating additional sensors and refining the machine
learning algorithms to enhance its functionality and usability. This technology has the potential to greatly improve the
quality of life for visually impaired individuals by increasing their independence and mobility, and it has important
implications for the design of future assistive technologies and robotics.
Keywords: Robot, Visually Impaired, Sensor, Python, Machine Learning, Sensor Networks.
I. INTRODUCTION
Assisting visually impaired individuals is a critical challenge that has been addressed using various technological
advancements in recent years. One of the most significant challenges visually impaired individuals face is navigation,
especially when they are outdoors or in unfamiliar surroundings [1], [2].
Assistive technologies for visually impaired individuals have undergone significant advancements over the years.
These technologies aim to improve the quality of life of people with visual impairments by providing them with greater
independence and access to information. Some of the most commonly used assistive technologies for visually impaired
individuals include text-to-speech converters, screen readers, braille displays, and navigation systems [3], [4]. Text-to-
speech converters are software applications that convert written text into spoken words. These applications are used to
read out web pages, books, and other written materials. They enable individuals to access a wide range of information in
a format that is more accessible to them [5], [6].
Screen readers are software applications that use voice synthesis to read out information displayed on a computer screen.
These applications allow visually impaired individuals to navigate through graphical user interfaces and use
software applications like any sighted individual would. Braille displays are mechanical devices that display text in
braille. They present information displayed on a computer screen or other electronic devices in a tactile form. Braille displays
are typically used in conjunction with screen readers to provide visually impaired individuals with a more tactile
experience [7], [8].
Navigation systems are assistive technologies that provide visually impaired individuals with real-time information
about their surroundings. These systems typically incorporate GPS, mapping, and other sensor technologies to provide the
user with data about their position and the environment around them [9].
Machine learning and sensor-based multi-robot systems represent a new frontier in assistive technologies for
visually impaired individuals. These systems incorporate a range of sensors, including cameras, lidar, and other range-
finding technologies, to provide the user with real-time information about their surroundings. Additionally, machine
learning algorithms are used to process the data collected by these sensors and make decisions about how to inform the
user about their surroundings [10], [11]. One of the most significant advantages of machine learning and sensor-based
multi-robot systems is their ability to operate in a wide range of environments, including indoor and outdoor settings.
These systems are typically equipped with voice recognition systems that enable the user to interact with the robot and
receive information about their surroundings [12]–[14].
A study conducted by Huang et al. (2018) explored the use of machine learning and sensor-based multi-robot
systems for assisting visually impaired individuals in navigating their surroundings. The system incorporated a range of
sensors, including RGB-D cameras, lidar, and other range-finding technologies, to provide the user with real-time
data about their surroundings. Additionally, the system incorporated a machine learning algorithm that was used to
process the data collected by the sensors and make decisions about how to inform the user about their surroundings [15]–[17].
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually impaired
individuals in navigating their surroundings. The system was capable of providing the user with real-time information
about their surroundings, enabling them to make informed decisions about their movements. Moreover, the system was
able to replace the need for a human assistant, providing greater independence and privacy for the visually impaired
individual [18].
Another study [19], [20] explored the use of machine learning and sensor-based multi-robot systems for assisting visually
impaired individuals. The system incorporated a range of sensors, including ultrasonic sensors and infrared sensors, to
deliver the user with real-time data about their surroundings. Additionally, the system incorporated a machine learning
algorithm that was used to process the data collected by the sensors and make decisions about how to inform the user
about their surroundings.
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually impaired
individuals in navigating indoor surroundings. The system was capable of providing the user with real-time information
about obstacles and hazards in their path, enabling them to avoid collisions and move through the environment with
greater ease. Several other studies have explored the use of machine learning and sensor-based multi-robot systems for
assisting visually impaired individuals. For example, the study [21] explored the use of a sensor-based multi-robot
system that incorporated voice recognition for assisting visually impaired individuals in navigating their surroundings.
The system incorporated a range of sensors, including cameras and ultrasonic sensors, to provide the user with real-
time information about their surroundings. Additionally, the system incorporated a machine learning algorithm that was
used to process the data collected by the sensors and make decisions about how to inform the user about their
surroundings.
The study found that the machine learning and sensor-based multi-robot system was effective in assisting visually
impaired individuals in navigating their surroundings. The system was capable of providing the user with real-time
information about obstacles and hazards in their path, enabling them to avoid collisions and move through the
environment with greater ease [12]. Moreover, the system was able to replace the need for a human assistant, providing
greater independence and privacy for the visually impaired individual. While machine learning and sensor-based multi-
robot systems hold significant promise for assisting visually impaired individuals, there are also several limitations that
must be addressed. One of the most significant limitations is the cost of these systems, which can be prohibitively
expensive for many visually impaired individuals. Additionally, these systems may not be accessible to individuals with
limited technological proficiency, which can limit their effectiveness [13], [15], [22].
Future directions for research in this area include developing more affordable and accessible machine learning and
sensor-based multi-robot systems for visually impaired individuals. Additionally, research should explore the use of these
systems in a wider range of environments, including outdoor settings and complex indoor environments like shopping
malls and airports [23], [24].
Machine learning and sensor-based multi-robot systems represent a significant advancement in assistive
technologies for visually impaired individuals. These systems incorporate a range of sensors and machine learning
algorithms to provide the user with real-time information about their surroundings, enabling them to navigate their
environment with greater ease and independence. While there are several limitations to these systems, including cost and
accessibility, they hold significant promise for improving the quality of life of visually impaired individuals. Future
research should continue to explore the potential of these systems in a wider range of environments and populations.
The research focuses on the development of a robot system for assisting visually impaired individuals in navigating
indoor environments. The robot utilizes an odometry system with encoders attached to all four wheels to accurately
calculate its movements and positions. In addition, the system has obstacle detection and avoidance capabilities, which
are tested through a series of trials with static and dynamic obstacles. The results show that the developed robot system
successfully navigated around the obstacles and effectively communicated instructions to the visually impaired subject.
The system offers several advantages over existing robotic guide dogs, such as lower maintenance costs and the ability to
adapt to changing environments. Overall, the developed robot system has the potential to provide a reliable and cost-
effective solution for indoor navigation assistance to visually impaired individuals.
II. VARIOUS COMPONENTS USED IN THE ROBOTS
The LIDAR sensor is a key component of the multi-robot system, as it enables the robot to detect and map the
environment. The proximity sensors and encoders provide additional data about the robot's position and the presence of
obstacles. The Bluetooth module allows for wireless communication between the robot and the user, enabling the user to
receive real-time information about their surroundings. Each component is critical to the overall functionality of the
system and plays a significant role in assisting visually impaired individuals in navigating their environment with
greater ease and independence.
LIDAR Sensor
The RPLIDAR A1 sensor is a 360-degree laser scanner that is widely used in robotics and automation. This sensor is
capable of detecting obstacles in real-time and delivers accurate distance measurements. The RPLIDAR A1 sensor used
in this research is a low-cost, compact, and lightweight sensor that delivers accurate and reliable results. The RPLIDAR
A1 sensor has a maximum range of 12 meters and a scanning frequency of up to 5.5 Hz. It uses a class 1 laser, which is
safe for human eyes, and has a scanning resolution of 0.45 degrees. The LIDAR achieves a full 360-degree field of view
and can detect objects with a minimum size of 0.1 square meters. The sensor is designed with a small form
factor, measuring only 88mm x 63mm x 41mm, and weighs only 105g. The sensor can be easily mounted on a robot or
other mobile device and can operate in a wide range of environments.
The RPLIDAR A1 sensor used in this research is connected to the Raspberry Pi microcontroller, which delivers
power and data communication to the sensor. The Raspberry Pi collects the data from the sensor and processes it using
the developed machine learning algorithm. The algorithm analyzes the data and determines the distance and position of
obstacles in the environment. The RPLIDAR A1 sensor is a reliable and accurate sensor that delivers a cost-effective
solution for obstacle detection in robotics and automation applications. It is capable of scanning the environment in real-
time and can deliver accurate distance measurements. The use of this sensor in the multi-robot system developed in this
research enables visually impaired individuals to navigate their surroundings with greater ease and independence. The
sensor provides real-time information about the environment, enabling the user to make informed decisions about their
movements.
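The scan data can be read on the Raspberry Pi over the sensor's serial interface. As an illustration only (not the authors' code), the following sketch assumes the open-source rplidar Python package and that the sensor is attached at /dev/ttyUSB0; it simply flags any return closer than one metre.

```python
# Illustrative sketch: read RPLIDAR A1 scans on a Raspberry Pi.
# Assumptions (not stated in the paper): the "rplidar" Python package is used
# and the sensor enumerates as /dev/ttyUSB0.
from rplidar import RPLidar

lidar = RPLidar('/dev/ttyUSB0')

try:
    # Each scan is a list of (quality, angle in degrees, distance in mm) tuples.
    for scan in lidar.iter_scans():
        # Keep only valid returns and convert distances to metres.
        distances_m = [dist / 1000.0 for _, _, dist in scan if dist > 0]
        if distances_m and min(distances_m) < 1.0:
            print("Obstacle within 1 m detected by LIDAR")
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()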
Proximity Sensor
While the LIDAR sensor used in this research is capable of detecting objects, it has limitations when it comes to
detecting obstacles located below its line of sight. This means that low-lying obstacles, such as steps or curbs, may not be
noticed by the LIDAR sensor. To overcome this limitation, proximity sensors of ultrasonic type are attached on the robot
in addition to the LIDAR sensor. These proximity sensors are designed to detect obstacles that are not visible to the
LIDAR sensor. When an obstacle is detected by the LIDAR or proximity sensors, the information is communicated to the
control system of the robot. The control system processes this information and takes appropriate action, such as
stopping the robot to prevent a collision. Additionally, the presence of an obstacle is communicated to the visually
impaired user through a Bluetooth interface to their headset, allowing them to avoid the obstacle and navigate their
surroundings safely.
Fig 1. Obstacle Detected by LIDAR and Proximity
By combining the capabilities of the LIDAR sensor and proximity sensors, the robot is able to detect a wide range of
obstacles and provide real-time feedback to the visually impaired user. This enables the user to navigate their
environment with greater confidence and independence, as they are alerted to the presence of obstacles that may not be
immediately visible to them. The use of multiple sensors also enhances the reliability and accuracy of the system, as it is
able to detect obstacles from multiple vantage points. Fig 1 shows the obstacles detected by the LIDAR alone and by the
LIDAR and proximity sensor together. It can be seen from Fig 1 that with both sensors the obstacles are identified more
easily and with higher accuracy.
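As an illustration of how an ultrasonic proximity reading could be combined with the LIDAR in the stop decision, the sketch below assumes an HC-SR04-style sensor on hypothetical GPIO pins and a 0.5 m stop threshold; the exact sensor model, wiring, and threshold are not specified in the paper.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical pin assignment for an HC-SR04-style ultrasonic sensor.
TRIG_PIN, ECHO_PIN = 23, 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def ultrasonic_distance_m():
    """Trigger one ultrasonic ping and return the measured distance in metres."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(10e-6)                       # 10 microsecond trigger pulse
    GPIO.output(TRIG_PIN, False)
    start = stop = time.time()
    while GPIO.input(ECHO_PIN) == 0:        # wait for the echo to start
        start = time.time()
    while GPIO.input(ECHO_PIN) == 1:        # wait for the echo to end
        stop = time.time()
    return (stop - start) * 343.0 / 2.0     # speed of sound, halved for the round trip

def obstacle_detected(lidar_min_m, stop_threshold_m=0.5):
    """Report an obstacle if either the LIDAR or the proximity sensor sees one nearby."""
    return lidar_min_m < stop_threshold_m or ultrasonic_distance_m() < stop_threshold_m
```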
Odometry System
The current study utilizes an odometry system that provides information on the robot's position and movement. To
develop the system, the researchers employed an Arduino UNO and equipped all four wheels of the robot with
incremental rotary encoders that have 600 ticks per turn, ensuring high angular resolution accuracy. The system relies on
the following equations to determine the distances covered by the robot:
ΔdR = (2πr/T)ΔTR (1)
ΔdL = (2πr/T)ΔTL (2)
Here, ΔdR and ΔdL refer to the distances covered by the right and left wheels, respectively, while ΔTR and ΔTL
represent the number of ticks moved by the right and left encoders since the last computation. To calculate the total distance
traveled by the robot in a straight line, the distances covered by each wheel are added and divided by two using the
following formula:
d = (ΔdR + ΔdL) / 2 (3)
In this study, the distances are calculated at a rate of 40 Hz and published to ROS. T is the number of ticks per
rotation, which is 600 ticks. The distance between the two axles is L = 0.625 m, and the wheel radius is r =
0.2 m. The researchers denote the number of ticks moved by the encoders since the last computation as ΔTR and ΔTL,
respectively.
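A minimal sketch of the odometry computation defined by Eqs. (1)–(3), using the constants reported above (r = 0.2 m, T = 600 ticks) and a 40 Hz update rate; the encoder-reading function is a placeholder for the actual Arduino interface, which is not detailed in the paper.

```python
import math
import time

WHEEL_RADIUS = 0.2      # r, wheel radius in metres (from the paper)
TICKS_PER_REV = 600     # T, encoder ticks per wheel revolution
UPDATE_RATE_HZ = 40     # odometry update rate used in the study

def wheel_distance(delta_ticks):
    """Distance covered by one wheel for a given tick delta, Eqs. (1) and (2)."""
    return (2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV) * delta_ticks

def read_encoder_deltas():
    """Placeholder: return (delta_ticks_right, delta_ticks_left) since the last call."""
    return 0, 0  # replace with real encoder reads from the Arduino

total_distance = 0.0
while True:
    d_ticks_r, d_ticks_l = read_encoder_deltas()
    d_r = wheel_distance(d_ticks_r)          # ΔdR
    d_l = wheel_distance(d_ticks_l)          # ΔdL
    total_distance += (d_r + d_l) / 2.0      # Eq. (3), straight-line distance
    # In the actual system this value is published to ROS at this point.
    time.sleep(1.0 / UPDATE_RATE_HZ)
```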
Bluetooth Communication System
The use of Bluetooth communication in this research plays a critical role in facilitating communication between the robot and
the human operator. The operator uses a Bluetooth-enabled device to send instructions to the robot, which include
instructions on where to move based on the LIDAR mapping. The robot then uses its sensors to detect obstacles in the
environment, both through the LIDAR and the Bluetooth system. When an obstacle is detected, the signal is immediately
transmitted to the controller, which then sends a stop command to the wheels. This ensures that the robot stops moving to
prevent any potential collisions. Additionally, the use of Bluetooth technology allows for quick and efficient
communication between the robot and the operator, allowing for real-time updates and adjustments to be made based on
the environment. Bluetooth 5.0 is the latest version of Bluetooth technology that offers faster data transfer speeds, longer
range, and improved connectivity. The Bluetooth 5.0 module used in this research has a data transfer rate of up to 2Mbps,
which is double the speed of version 4.2. It also has a range of up to 800 feet (240 meters) in an open space, which is four
times the range of Bluetooth 4.2. The Bluetooth 5.0 module supports both Classic and Low Energy protocols, making
it compatible with a wide range of devices. It also supports multiple connections, allowing the module to simultaneously
connect to multiple devices. The Bluetooth 5.0 module is designed to be power-efficient, with a low standby power
consumption of 0.5mA and a maximum transmit power of 8dBm. It also has a small form factor, measuring just 10mm x
10mm x 1.7mm, making it suitable for use in small devices.
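As a sketch of the user-notification side of this link (not the authors' implementation), the following assumes the Bluetooth module is paired with the Raspberry Pi and exposed as an RFCOMM serial port, and uses pyserial to exchange short text messages; the port name, baud rate, and message format are assumptions.

```python
import serial  # pyserial

# Assumption: the paired Bluetooth module appears as an RFCOMM serial port.
bt = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1)

def notify_user(message: str) -> None:
    """Send a short text message towards the user's Bluetooth headset/device."""
    bt.write((message + "\n").encode("utf-8"))

def receive_command() -> str:
    """Read a destination or command string sent by the user, if any."""
    return bt.readline().decode("utf-8", errors="ignore").strip()

notify_user("Obstacle ahead, stopping.")
```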
III. PROGRAM AND WORKING USED IN THIS RESEARCH
In this research, a Python program was developed to handle the sensor information. The program used machine
learning algorithms to analyze the sensor data and determine the necessary action, such as stopping the motor and
generating a new path to avoid obstacles. The program was run on a Raspberry Pi controller, which received the sensor
data from the microcontroller. Fig 2 shows a sample Python program used in this research, demonstrating the
implementation of the machine learning algorithm for obstacle avoidance. The program allowed for real-time decision-
making and provided a flexible and efficient solution for navigating through complex environments.
This program begins by setting up the GPIO pins for controlling the motor. It then loads training data from a CSV file
and trains a decision tree classifier to predict whether an obstacle is present based on LIDAR and proximity sensor
readings. In the main loop, the program reads sensor data and prepares it for prediction. It then uses the trained classifier
to predict whether an obstacle is present or not. If an obstacle is predicted, the program stops the motor and regenerates
the path to avoid the obstacle. If no obstacle is predicted, the program keeps the motor running. The program then delays
for 0.1 seconds before repeating the loop.
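A minimal sketch along the lines of the program described above, with hypothetical pin numbers, file names, and sensor-reading stubs (the actual code is the one shown in Fig 2):

```python
import time
import pandas as pd
import RPi.GPIO as GPIO
from sklearn.tree import DecisionTreeClassifier

# Hypothetical motor-control pin and training-data file name.
MOTOR_PIN = 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

# Train a decision tree on previously recorded LIDAR/proximity readings.
data = pd.read_csv("training_data.csv")  # assumed columns: lidar, proximity, obstacle
clf = DecisionTreeClassifier()
clf.fit(data[["lidar", "proximity"]].values, data["obstacle"].values)

def read_sensors():
    """Placeholder: return (lidar_distance, proximity_distance) from the microcontroller."""
    return 5.0, 2.0

try:
    while True:
        lidar_d, prox_d = read_sensors()
        obstacle = clf.predict([[lidar_d, prox_d]])[0]
        if obstacle:
            GPIO.output(MOTOR_PIN, GPIO.LOW)   # stop the motor
            # ...regenerate the path around the obstacle here...
        else:
            GPIO.output(MOTOR_PIN, GPIO.HIGH)  # keep the motor running
        time.sleep(0.1)                        # 0.1 s loop delay, as described
finally:
    GPIO.cleanup()
```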
Fig 2. Python Program used in this Research.
IV. RESULT AND DISCUSSION
Operation of the System
In this research, the robot is designed to support visually impaired people in navigating through indoor
surroundings. Fig 3 shows the robot developed in this research. The robot uses a combination of sensors and machine
learning algorithms to detect obstacles and guide the user to their desired location. The robot is controlled by a Raspberry
Pi controller, which receives information from a microcontroller that communicates with LIDAR sensors and proximity
sensors.
Fig 3. Developed Robot in this Research.
When the user inputs their desired destination using a Bluetooth device, the robot starts moving towards that location
while continuously scanning its surroundings for obstacles. The LIDAR sensor delivers detailed 2D maps of the
environment, which are processed by the machine learning algorithm to identify obstacles and generate a path to avoid
them. If an obstacle is detected, the machine learning algorithm takes necessary actions to avoid collision. It can switch
off the motor and regenerate a new path towards the destination. Additionally, the proximity sensor is used to detect
obstacles in close proximity, which triggers an immediate stop of the robot.
The robot delivers feedback to the user through a Bluetooth headset, which communicates any necessary actions such
as changes in direction or when an obstacle is detected. The robot is designed to operate at a speed comfortable for the
user and is equipped with a battery that delivers sufficient power for extended use.
Performance Evaluation
In this research, various evaluation metrics are used to assess the performance of the developed robot. These metrics are
used to measure the efficiency, effectiveness, and safety of the robot during navigation.
Task Success: Task success measures whether the robot successfully completed the task assigned by the user. In
this research, task success is evaluated based on the robot's ability to navigate from the starting point to the
destination without colliding with any obstacles.
Path Length: Path length measures the length of the path taken by the robot to reach the destination from the origin.
It is evaluated by calculating the distance travelled by the robot using the odometry system. The path length is
an important metric as it directly affects the time taken by the robot to reach the destination.
Time taken by the robot to reach the destination from the origin: This metric measures the efficiency of the robot's
navigation. It is evaluated by measuring the time from the start of navigation to the moment the robot reaches the
destination.
Path length optimality ratio: The path length optimality ratio measures the efficiency of the robot's navigation in
terms of the path taken to reach the destination from the origin. It is calculated by dividing the actual path length
travelled by the robot by the shortest possible path length between the origin and the destination. A path length
optimality ratio of 1 indicates that the robot took the shortest possible path.
Time optimality ratio: The time optimality ratio measures the efficiency of the robot's navigation in terms of the
time taken to reach the destination from the origin. It is calculated by dividing the actual time taken by the robot
to reach the destination by the shortest possible time. A time optimality ratio of 1 indicates that the robot took the
shortest possible time. A short sketch of how these two ratios can be computed is given after this list.
Collisions: Collisions measure the safety of the robot during navigation. They are evaluated by counting the number
of collisions that occurred between the robot and obstacles during navigation. The lower the number of collisions,
the safer the robot's navigation.
Speed: Speed measures the velocity of the robot during navigation. It is evaluated by measuring the distance
travelled by the robot per unit of time. The speed of the robot is an important metric as it affects the time taken by
the robot to reach the destination.
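The two optimality ratios follow directly from their definitions; the sketch below uses illustrative values only, not measurements from the trials.

```python
def path_length_optimality_ratio(actual_path_m: float, shortest_path_m: float) -> float:
    """Actual path length divided by the shortest possible path length (>= 1.0)."""
    return actual_path_m / shortest_path_m

def time_optimality_ratio(actual_time_s: float, shortest_time_s: float) -> float:
    """Actual navigation time divided by the shortest possible time (>= 1.0)."""
    return actual_time_s / shortest_time_s

# Illustrative values: a 10.5 m shortest path covered in 12.6 m,
# and a 45 s best-case time achieved in 54 s.
print(path_length_optimality_ratio(12.6, 10.5))  # 1.2
print(time_optimality_ratio(54.0, 45.0))         # 1.2
```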
Table 1. Sample Readings from 5 Trials
Metric                         | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
Task Success (%)               | 100     | 80      | 100     | 60      | 100
Path Length (m)                | 12.5    | 15.2    | 11.8    | 14.3    | 10.5
Time Taken (s)                 | 50      | 75      | 60      | 80      | 45
Path Length Optimality Ratio   | 1.2     | 1.5     | 1.1     | 1.3     | 1.0
Time Optimality Ratio          | 1.1     | 1.4     | 1.2     | 1.5     | 1.0
Collisions                     | 0       | 1       | 0       | 2       | 0
Speed (m/s)                    | 0.25    | 0.2     | 0.3     | 0.18    | 0.24
In Table 1, the first column lists the evaluation metrics used, and the subsequent columns represent the results
for each of the five trials conducted.
Task Success (%) is the percentage of times that the robot successfully completed the specified task of reaching the
destination without collision. For example, in Trial 1, the robot successfully completed the task in 100% of the attempts,
while in Trial 4, it was only successful in 60% of the attempts. Path Length (m) is the total length of the path taken by the
robot to reach the destination. In this example, the path length ranged from 10.5m to 15.2m across the five trials. Time
Taken (s) is the total time taken by the robot to reach the destination from the starting point. The time taken varied
between 45 seconds to 80 seconds across the five trials. Path Length Optimality Ratio is the ratio of the path length taken
by the robot to the shortest possible path length to reach the destination. A ratio of 1.0 indicates that the robot took the
shortest possible path, while ratios greater than 1.0 indicate that the robot took a longer path. In this example, the path
length optimality ratio ranged from 1.0 to 1.5. Time Optimality Ratio is the ratio of the time taken by the robot to reach
the destination to the shortest possible time. A ratio of 1.0 indicates that the robot took the shortest possible time, while
ratios greater than 1.0 indicate that the robot took a longer time. In this example, the time optimality ratio ranged from
1.0 to 1.5.
Collisions indicate the number of collisions that occurred during the robot's attempt to reach the destination. In this
example, the robot collided with obstacles once in Trial 2 and twice in Trial 4. Speed (m/s) is the average speed of the
robot during its attempt to reach the destination. In this example, the speed ranged from 0.18m/s to 0.3m/s across the five
trials. Overall, the evaluation metrics are used to assess the performance of the robot in terms of task completion, path
optimization, time efficiency, collisions, and speed. These metrics can help to identify areas of improvement and to
compare the performance of the robot across different trials or with other robots.
Mapping the Environment
In order to evaluate the effectiveness of the navigation system in the robot, it was necessary to begin with the creation of
an environment map. This was accomplished by manually steering the robot using the joystick and driving it slowly
through the hallways between the start and endpoint, while the map node generated a map from the LIDAR sensor scans
and information from the odometry system. The accuracy of the resulting maps was carefully inspected,
and two different maps were created for the trial: Case 1 and Case 2. The purpose of Case 1 map was to compare the
navigation system with a pre-existing one, while the Case 2 map was designed specifically to challenge the robot's
responsiveness in an environment with both static and dynamic obstacles. The complexity of these maps varied, with
Case 2 being the more challenging of the two due to the presence of moving obstacles, which made it a more realistic test
of the robot's abilities. The results of the trials for Case 1 and Case 2 are shown in Fig 4a and Fig 4b, respectively.
Fig 4. Result of Mapping a) Case 1 b) Case 2
For Case 1, shown in Fig 4a, the time metric showed the largest variation, with a difference of 30 seconds
between the best and worst times. Inconsistencies were observed during navigation, especially when turning
around corners into the next corridor. It is likely that the robot navigated too close to the walls as it turned the
corner, causing the navigation system to reduce its speed significantly and temporarily stop when the robot was too
close to the object. A possible solution in this situation would be to increase the sensitivity to obstacles on the map.
To test the robot's obstacle avoidance abilities in case 2, two static obstacles were placed 1.92 meters apart in a
hallway (as depicted in Fig 4b). A person was then added as a dynamic obstacle in the path after the second static
obstacle. The robot effectively steered around the dynamic obstacle in 4 out of 5 trials, demonstrating its adaptability.
However, in the fifth trial, the robot failed to localize itself at the start and did not navigate to the destination.
During the trials, there was a noticeable variance in navigation time due to differences in obstacle perception and
distance. The robot reduced its speed when it approached an obstacle, which impacted its navigation time.
Static and Dynamic Obstacle
In the case 3 experiment, the robot's ability to navigate through a room with static and moving obstacles was tested. The
experiment was conducted in a 50m x 50m room with three static obstacles placed at (10,20), (30,40), and (40,10), as
shown in Fig 5. A dynamic obstacle was created by moving a line follower robot along the path shown in the same
figure. The path followed by the line follower robot is represented in black, and the robot is represented by a blue circle.
The subject with a visual impairment was asked to walk alongside the robot, holding onto it while the robot provided
instructions to avoid obstacles. The robot was able to sense the presence of the obstacles and avoid them using the
information provided by the odometry system. When the robot sensed an obstacle, it would stop and turn left or right to
avoid a collision. The turning commands were properly communicated to the system, and the robot was able to navigate
around the obstacles effectively.
Fig 5 clearly shows that the robot was able to avoid collisions with both static and dynamic obstacles by sensing the
object before it was within 2 meters of the robot. The subject was able to walk effectively with the robot, and the robot
successfully avoided collisions in all the trials except for trial 2, where the robot was too close to the obstacle, but still
avoided the collision. The experiment showed the effectiveness of the approach used in developing the robot for visually
impaired people. The odometry system used in the robot was able to provide accurate information about the robot's
position and movement, which was crucial for the robot's ability to navigate through the room and avoid obstacles. The
robot's ability to sense and avoid obstacles made it a useful tool for visually impaired people who have difficulty
navigating through crowded spaces.
Fig 5. Experimental Result of Case 3
One potential future scope for this research could be to explore the use of advanced sensors, such as LIDAR or
computer vision, to enhance the robot's ability to sense and avoid obstacles. These sensors could provide more detailed
information about the environment, enabling the robot to make more informed decisions about navigation. Additionally,
the use of machine learning algorithms could be explored to improve the robot's ability to adapt to changing
environments and learn from its interactions with the environment. The case 3 experiment demonstrated the effectiveness
of the developed robot for visually impaired people in navigating through a room with obstacles. The use of an odometry
system provided accurate information about the robot's movement and position, enabling the robot to avoid collisions
with obstacles. The experiment opens up potential future research in exploring the use of advanced sensors and machine
learning algorithms to enhance the robot's navigation abilities.
To improve the robot's performance, it may be beneficial to increase the robot's sensing capabilities to detect
obstacles from a greater distance, thereby allowing it to reroute itself earlier and navigate more efficiently. Additionally,
adjusting the robot's footprint or the resolution of the obstacle map could help to reduce the variability in the time taken
to navigate the path. Overall, the results of the trials indicate that the robot's navigation system is responsive and can
successfully navigate around dynamic obstacles in real-time.
V. CONCLUSION
In conclusion, this research demonstrates the development of a robot system for visually impaired individuals that is
capable of obstacle detection and avoidance. The system was tested in three different scenarios with varying levels of
complexity, including static obstacles, moving obstacles, and a combination of both. The results show that the robot was
able to successfully detect and avoid obstacles in most trials. The system was also effective in providing proper
instructions to the user through verbal communication and tactile feedback, enabling them to walk effectively while
holding the robot's handle. The developed system has the potential to enhance the mobility and independence of visually
impaired individuals, allowing them to navigate unfamiliar environments with greater ease and safety. Future research
may focus on improving the system's accuracy and speed of obstacle detection and avoidance, as well as exploring
additional features such as voice recognition and object recognition. Overall, this research provides a promising approach
to developing assistive technology for visually impaired individuals, with the potential to significantly improve their
quality of life. In the future, further improvements can be made to enhance the performance and functionality of the
robot. One possible direction is to explore the integration of artificial intelligence and machine learning algorithms to
enable the robot to adapt to different environments and situations. This could involve developing a more advanced
obstacle detection and avoidance system, which could learn from previous experiences and improve over time.
Additionally, integrating natural language processing and speech recognition capabilities could allow for more intuitive
interaction between the user and the robot. Finally, expanding the application of the robot beyond indoor environments
and into outdoor settings, such as sidewalks and crosswalks, could greatly benefit visually impaired individuals in their
daily lives.
Data Availability
No data was used to support this study.
Conflicts of Interests
The author(s) declare(s) that they have no conflicts of interest.
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics Approval and Consent to Participate
The research has consent for Ethical Approval and Consent to participate.
Competing Interests
There are no competing interests.
References
[1]. Q.-H. Nguyen, H. Vu, T.-H. Tran, and Q.-H. Nguyen, “Developing a way-finding system on mobile robot assisting visually impaired people
in an indoor environment,” Multimedia Tools and Applications, vol. 76, no. 2, pp. 2645–2669, Jan. 2016, doi: 10.1007/s11042-015-3204-2.
[2]. Y.-C. Lin, J. Fan, J. A. Tate, N. Sarkar, and L. C. Mion, “Use of robots to encourage social engagement between older adults,” Geriatric Nursing, vol. 43, pp. 97–103, Jan. 2022, doi: 10.1016/j.gerinurse.2021.11.008.
[3]. J. Fried, A. C. Leite, and F. Lizarralde, “Uncalibrated image-based visual servoing approach for translational trajectory tracking with an
uncertain robot manipulator,” Control Engineering Practice, vol. 130, p. 105363, Jan. 2023, doi: 10.1016/j.conengprac.2022.105363.
[4]. H. Kim et al., “Robot-assisted gait training with auditory and visual cues in Parkinson’s disease: A randomized controlled trial,” Annals of
Physical and Rehabilitation Medicine, vol. 65, no. 3, p. 101620, 2022, doi: 10.1016/j.rehab.2021.101620.
[5]. M. Zbytniewska-Mégret et al., “Reliability, validity and clinical usability of a robotic assessment of finger proprioception in persons with
multiple sclerosis,” Multiple Sclerosis and Related Disorders, vol. 70, p. 104521, Feb. 2023, doi: 10.1016/j.msard.2023.104521.
[6]. B. Hong, Z. Lin, X. Chen, J. Hou, S. Lv, and Z. Gao, “Development and application of key technologies for Guide Dog Robot: A systematic
literature review,” Robotics and Autonomous Systems, vol. 154, p. 104104, Aug. 2022, doi: 10.1016/j.robot.2022.104104.
[7]. T. C. Bourke, A. M. Coderre, S. D. Bagg, S. P. Dukelow, K. E. Norman, and S. H. Scott, “Impaired corrective responses to postural
perturbations of the arm in individuals with subacute stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 12, no. 1, Jan. 2015, doi:
10.1186/1743-0003-12-7.
[8]. K. R. da S. Santos, E. Villani, W. R. de Oliveira, and A. Dttman, “Comparison of visual servoing technologies for robotized aerospace
structural assembly and inspection,” Robotics and Computer-Integrated Manufacturing, vol. 73, p. 102237, Feb. 2022, doi:
10.1016/j.rcim.2021.102237.
[9]. T. M. Herter, S. H. Scott, and S. P. Dukelow, “Vision does not always help stroke survivors compensate for impaired limb position sense,”
Journal of NeuroEngineering and Rehabilitation, vol. 16, no. 1, Oct. 2019, doi: 10.1186/s12984-019-0596-7.
[10]. P. Uluer, N. Akalın, and H. Köse, “A New Robotic Platform for Sign Language Tutoring,” International Journal of Social Robotics, vol. 7, no. 5, pp. 571–585, Jun. 2015, doi: 10.1007/s12369-015-0307-x.
[11]. X. Li et al., “AviPer: assisting visually impaired people to perceive the world with visual-tactile multimodal attention network,” CCF
Transactions on Pervasive Computing and Interaction, vol. 4, no. 3, pp. 219–239, Jun. 2022, doi: 10.1007/s42486-022-00108-3.
[12]. D. Novak, A. Nagle, U. Keller, and R. Riener, “Increasing motivation in robot-aided arm rehabilitation with competitive and cooperative
gameplay,” Journal of NeuroEngineering and Rehabilitation, vol. 11, no. 1, p. 64, 2014, doi: 10.1186/1743-0003-11-64.
[13]. A. Bardella, M. Danieletto, E. Menegatti, A. Zanella, A. Pretto, and P. Zanuttigh, “Autonomous robot exploration in smart environments exploiting wireless sensors and visual features,” Annals of Telecommunications – Annales des Télécommunications, vol. 67, no. 7–8, pp. 297–311, Jun. 2012, doi: 10.1007/s12243-012-0305-z.
[14]. M. Zbytniewska et al., “Reliable and valid robot-assisted assessments of hand proprioceptive, motor and sensorimotor impairments after
stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 18, no. 1, Jul. 2021, doi: 10.1186/s12984-021-00904-5.
[15]. A. Esfandbod, A. Nourbala, Z. Rokhi, A. F. Meghdari, A. Taheri, and M. Alemi, “Design, Manufacture, and Acceptance Evaluation of
APO: A Lip-syncing Social Robot Developed for Lip-reading Training Programs,” International Journal of Social Robotics, Oct. 2022, doi:
10.1007/s12369-022-00933-7.
[16]. Madeleine Wang Yue Dong and Yannis Yortsos, “Application of Machine Learning Technologies for Transport Layer Congestion Control,” vol. 2, no. 2, pp. 066–076, April 2022, doi: 10.53759/181X/JCNS202202010.
[17]. C. Bayón, S. S. Fricke, H. van der Kooij, and E. H. F. van Asseldonk, “Automatic Versus Manual Tuning of Robot-Assisted Gait Training,”
Converging Clinical and Engineering Research on Neurorehabilitation IV, pp. 9–14, Oct. 2021, doi: 10.1007/978-3-030-70316-5_2.
[18]. G. Capi and H. Toda, “Development of a New Robotic System for Assisting Visually Impaired People,” International Journal of Social
Robotics, vol. 4, no. S1, pp. 33–38, Sep. 2011, doi: 10.1007/s12369-011-0103-1.
[19]. R. Secoli, M.-H. Milot, G. Rosati, and D. J. Reinkensmeyer, “Effect of visual distraction and auditory feedback on patient effort during
robot-assisted movement training after stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 8, no. 1, p. 21, 2011, doi:
10.1186/1743-0003-8-21.
[20]. C. P. Gharpure and V. A. Kulyukin, “Robot-assisted shopping for the blind: issues in spatial cognition and product selection,” Intelligent
Service Robotics, vol. 1, no. 3, pp. 237–251, Mar. 2008, doi: 10.1007/s11370-008-0020-9.
[21]. T. C. Bourke, C. R. Lowrey, S. P. Dukelow, S. D. Bagg, K. E. Norman, and S. H. Scott, “A robot-based behavioural task to quantify
impairments in rapid motor decisions and actions after stroke,” Journal of NeuroEngineering and Rehabilitation, vol. 13, no. 1, Oct. 2016,
doi: 10.1186/s12984-016-0201-2.
[22]. G. Tulsulkar, N. Mishra, N. M. Thalmann, H. E. Lim, M. P. Lee, and S. K. Cheng, “Can a humanoid social robot stimulate the interactivity of cognitively impaired elderly? A thorough study based on computer vision methods,” The Visual Computer, vol. 37, no. 12, pp. 3019–3038, Jul. 2021, doi: 10.1007/s00371-021-02242-y.
[23]. V. Kulyukin, C. Gharpure, J. Nicholson, and G. Osborne, “Robot-assisted wayfinding for the visually impaired in structured indoor
environments,” Autonomous Robots, vol. 21, no. 1, pp. 29–41, Jun. 2006, doi: 10.1007/s10514-006-7223-8.
[24]. A. K. Sangaiah, J. S. Ramamoorthi, J. J. P. C. Rodrigues, Md. A. Rahman, G. Muhammad, and M. Alrashoud, “LACCVoV: Linear
Adaptive Congestion Control With Optimization of Data Dissemination Model in Vehicle-to-Vehicle Communication,” IEEE Transactions
on Intelligent Transportation Systems, vol. 22, no. 8, pp. 5319–5328, Aug. 2021, doi: 10.1109/tits.2020.3041518.