
A Novel Obstacle Avoidance System for Guiding the
Visually Impaired through the Use of Fuzzy Control Logic
Wafa M. Elmannai, Member, IEEE; Khaled M. Elleithy, Senior Member, IEEE
Department of Computer Science and Engineering
University of Bridgeport
Bridgeport, USA
Abstract—This paper presents an intelligent framework that
includes several types of sensors embedded in a wearable device to
support the visually impaired (VI) community. The proposed
work is based on an integration of sensor-based techniques and a
computer vision-based technology in order to introduce an
efficient and economical visual device. The proposed sequence
achieves a 98% accuracy rate, based on a wide detection view that
uses two camera modules and a detection range of approximately
9 meters. In addition, we introduce a novel obstacle avoidance
approach based on image depth and fuzzy control rules. In this
approach, each frame is divided into three areas. Using fuzzy
logic, we were able to provide precise information to help the VI
user avoid front obstacles. The proposed approach enabled VI
users to avoid 100% of all identified obstacles. Once the device
is initialized, the VI user can confidently enter unfamiliar
surroundings. Therefore, the implemented device can be described
as accurate, reliable, user-friendly, lightweight, and economically
accessible; it facilitates the indoor and outdoor mobility of VI
people and requires no prior knowledge of the surrounding
environment.
Keywords— assistive wearable devices, computer vision
techniques, obstacle detection, obstacle avoidance, visual
impairment, fuzzy logic.
I. INTRODUCTION
In 2014, the World Health Organization (WHO) [1] reported
that 285 million people worldwide are visually impaired (VI);
of this number, 39 million people are completely blind. In the
USA, approximately 8.7 million people are VI, and approximately
1.3 million people are blind [2]. Both the National Federation of
the Blind [2] and the American Foundation for the Blind [3]
reported that 100,000 VI people are students. During the last
decade, one accomplishment of public health efforts was a
decrease in the number of diseases that cause blindness.
Globally, ninety percent of VI people have low incomes and live
in developing countries [1]. In addition, 82% of VI people are
older than 50 years [1]. The number of VI people is estimated to
increase by approximately 2 million per decade and, by 2020, is
estimated to double [4].
VI people encounter many challenges when performing
most natural activities that are performed by human beings, such
as detecting static or dynamic objects and safely navigating
through paths. These activities are highly difficult and may be
dangerous for VI people, especially in unknown environments.
Therefore, VI people tend to use the same route every time by
remembering its unique elements.
We have investigated several solutions that assist VI people.
Our intensive study produced a taxonomy that provides a
technical classification for comparing systems with one another;
this taxonomy is presented in a literature survey published in
[5]. None of these studies provides a complete solution that can
assist VI people in all aspects of their lives. Thus, the objective
of this work is to design an efficient framework that significantly
improves the lives of VI people. Our framework overcomes the
limitations of previous systems by providing a wider detection
range that works indoors and outdoors and by providing a
navigational service.
The organization of this paper is as follows: Section II
presents a study of state-of-the-art assistive technologies for
VI people. The proposed framework is described in Section III.
Implementation and experimental setup results are presented in
Section IV. Section V demonstrates the results and evaluation.
Section VI concludes the paper.
II. RELATED WORK
Although several solutions have been proposed in the last
decade, none of them offers a complete solution that can assist
VI people in all aspects of their lives. This section presents a
representative set of the proposed techniques.
Recently, we have noticed a rapid proliferation of assistive
systems due to the progress of computer vision techniques,
which add more value and services with flexibility. Among
these proposed systems, [6] introduced a fusion of artificial
vision, map matching [7], and GPS as an enhanced navigational
system that supports VI people in their mobility; the SpikNet
recognition algorithm was employed for image processing [8].
In addition, cognitive guidance devices have been designed by
integrating the Kinect sensor's output, vanishing points, and
fuzzy decisions to navigate the VI person through a known
environment [9]. An independent mobility aid was proposed in
[10] using indoor and outdoor obstacle avoidance [11] for
static/dynamic object detection.
Lucas-Kanade, RANSAC, an adapted HOG descriptor, Bag of
Visual Words (BoVW), and SVM are employed for object
detection and recognition.

A $20,000 fellowship grant from the American Association of University Women
(AAUW) partially supported Wafa Elmannai in conducting this research.

2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC)
978-1-5386-4790-5/18/$31.00 ©2018 IEEE

The authors of [12] introduced an
object detection system using an RGB-D Sensor and computer
vision techniques. The classification process was based on the
use of a Canny edge detector [13], the Hough Line Transform,
and RANSAC, which are applied to the RGB-D sensor's output.
The work in [14] is based on the integration of ultrasonic-based
technology and computer vision algorithms, which are used for
the proposed obstacle detection and recognition system.
Details about the systems presented in this section are
provided in [5]. To overcome the limitations of the discussed
systems, we propose a novel system that integrates both sensor-
based techniques and computer vision techniques to provide a
complete solution for VI people indoors and outdoors with other
complementary features. The proposed approach is described in
the following section.
III. PROPOSED FRAMEWORK
A. Proposed Methodology of the Data Fusion Algorithm
The proposed framework includes hardware and software
components. The hardware design is composed of two camera
modules, a compass module, a GPS module, a gyroscope
module, a music (audio output) module, a microphone module,
and a Wi-Fi module. The aim of the software is to develop an
efficient data fusion algorithm, based on the sensory data, that
provides a highly accurate object detection/avoidance and
navigational system with the help of computer vision methods
to ensure safe mobility. Fig. 1 shows the communication
methodology between the components.
Fig. 1. Proposed methodology of the interaction process among the hardware components
The system is designed to navigate the user to their desired
location and to avoid any obstacles in front of the user after they
are detected. Using the fused data from multiple sensors and the
computer vision methods, the user will receive navigational
instructions in an audio format. Two camera sensors are used for
object detection, which is processed using computer vision
methods. The remote server handles the image processing
component. Based on the depth of the image and fuzzy logic, we
can approximately measure the distance between the obstacle
and the VI person. A compass is employed for orientation
adjustment. A gyroscope (gyro) sensor is employed for rotation
measurements in degrees. The GPS provides the user’s location.
All components are connected with the microcontroller board,
which communicates with a remote server. Route guidance is
provided by a GIS. Thus, we use a gyro, compass, and GPS to
track the user’s directions, locations, and orientations to provide
accurate route guidance information.
A.1. Static/Dynamic Obstacle Detection
A sequence of well-known algorithms is used to provide a
robust obstacle detection system.
Oriented FAST and Rotated BRIEF (ORB) is the approach
that we use for static/dynamic object detection. ORB is
characterized by fast computation for panorama stitching and
low power consumption. The ORB algorithm is open source and
was presented by Ethan Rublee, Vincent Rabaud, Kurt Konolige,
and Gary R. Bradski in 2011 as a suitable alternative to SIFT
due to its effective matching performance and low computing
cost [15]. Unlike other extraction algorithms, ORB includes its
own descriptor. ORB is used to extract the interest points in
each frame.
In addition, we use the K-Nearest Neighbor (KNN)
algorithm where k = 2 to match the descriptors of extracted
interest points between two frames of an object’s presence. In
this paper, we use the Brute Force matcher, which is a simple
version of KNN matching. In our case, the Brute Force matcher
matches the two closest corresponding descriptors to each
descriptor in the current frame by trying every descriptor in the
corresponding frame. The Hamming distance is applied between
each pair, since the ORB descriptor is a binary string. Each
interest-point descriptor is represented as a vector f generated by
BRIEF. If the descriptors of two interest points are equal at a
given position, the result is 0; otherwise, the result is 1. The
Hamming distance ensures correct matching by counting the
number of attributes in which the pair of instances differ. The
K-Nearest Neighbor algorithm finds the best match between the
descriptor of an extracted interest point and the corresponding
descriptor [16]. The RANSAC algorithm then reduces false
positive matches, in which the presence of an object is
determined but no actual object exists [17].
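The matching step can be sketched in Python (a minimal illustration rather than the authors' implementation; descriptors are modeled as short bit-strings instead of full 256-bit ORB descriptors):

```python
def hamming(d1, d2):
    # Count bit positions where two binary descriptors differ.
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def knn_match(query_descs, train_descs, k=2):
    """Brute-force k-nearest-neighbour matching (k = 2, as in the paper):
    every query descriptor is tried against every train descriptor and
    the k closest by Hamming distance are kept."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train_descs))
        matches.append([(qi, ti, d) for d, ti in dists[:k]])
    return matches

# Toy 8-bit "descriptors" standing in for ORB/BRIEF binary strings.
frame_a = ["10110010", "01100111"]
frame_b = ["10110011", "01100111", "11111111"]
for best, second in knn_match(frame_a, frame_b):
    # Keeping two neighbours allows a ratio test before accepting a match.
    qi, ti, d = best
    print(f"query {qi} -> train {ti} (Hamming distance {d})")
```

Keeping k = 2 is what makes a ratio test between the best and second-best candidates possible, which is the usual way ambiguous binary-descriptor matches are rejected before RANSAC.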
The final step applies the K-Means clustering technique to
cluster the n extracted points of a particular frame. However,
more than one cluster may represent the same object. Therefore,
a merging method must be applied in the case of any intersection
among the clusters. We made a slight modification to the
K-Means clustering technique: we merge two intersecting
clusters if the first cluster's centroid is within the second cluster
or vice versa. For example, let S1 and S2 be clusters with
centroid points Cs1 and Cs2, respectively. The two clusters are
merged into one cluster if (S1 ∩ S2 is not null) AND (the
centroid Cs1 is within S2 or the centroid Cs2 is within S1);
otherwise, merging does not occur even if S1 ∩ S2 is not null.
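The merge criterion can be sketched as follows; since the paper does not specify the containment test, treating "within the cluster" as inside the cluster's bounding box is an assumption made here for illustration:

```python
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def bbox_contains(points, p):
    # "Within the cluster" is interpreted as inside the cluster's
    # bounding box; the paper does not specify the containment test.
    xs, ys = zip(*points)
    return min(xs) <= p[0] <= max(xs) and min(ys) <= p[1] <= max(ys)

def maybe_merge(s1, s2):
    """Merge two intersecting clusters if either centroid lies within
    the other cluster, mirroring the modified K-Means step above."""
    if bbox_contains(s2, centroid(s1)) or bbox_contains(s1, centroid(s2)):
        return s1 + s2   # one merged cluster
    return None          # keep the clusters separate

a = [(0, 0), (2, 0), (2, 2), (0, 2)]   # centroid (1, 1)
b = [(1, 1), (3, 1), (3, 3), (1, 3)]   # centroid (2, 2), overlaps a
print(len(maybe_merge(a, b)))  # 8: the two clusters are merged
```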
The result of the combination of adopted algorithms is
represented in Fig. 2. The two green dots, which are represented
on the floor in Fig. 2 (b), denote the detection range from where
the user is standing.
Fig. 2. Representation of the proposed object detection technique; (a) original frame, (b) the frame after applying proposed sequence of algorithms for object
detection and (c) the frame after applying K-means clustering and merging to identify each object
A.2. Reliable Obstacle Avoidance System using Fuzzy Logic
Existing systems use sensors to measure the distance
between the user and the obstacle; a comparable distance-measurement
technique is not available for this type of vision-based system. In
this section, we propose a proximity measurement methodology
to approximately measure the distance between the user and the
obstacle using mathematical models and an obstacle avoidance
approach based on fuzzy logic. The proposed approach uses a
camera angled slightly downward so that there is a fixed distance
between the VI person and the ground in view. This view gives
us a reference for determining whether an object is an
obstruction. We have determined that the average distance
between the VI person and the visible ground is 9 meters with
the device angled down. This result enables us to identify an
obstacle within a 9-meter range; however, a VI person only
needs to react to an object within a 3-meter range. Our
proposed method divides the frame into three areas—left, right,
and center—as shown in Fig. 3.
Fig. 3. Approximate distance measurement for object avoidance
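The three-area split and the 9 m / 3 m near-far distinction described above can be sketched as follows (equal thirds and the lower-third "near" test are illustrative assumptions; the paper's exact middle-area bounds are given by its own corner equation):

```python
def classify_region(x, y, frame_w, frame_h):
    """Place a detected object's reference point (e.g. a lower corner)
    into one of the three areas and flag it near/far.  Equal thirds
    are an assumption made for this sketch."""
    if x < frame_w / 3:
        area = "left"
    elif x < 2 * frame_w / 3:
        area = "center"
    else:
        area = "right"
    # Objects in the lower third of the frame are treated as obstructions,
    # roughly the 3 m reaction range out of the 9 m view.
    rng = "near" if y > 2 * frame_h / 3 else "far"
    return area, rng

# 320x240 frame, as in the experimental setup.
print(classify_region(160, 230, 320, 240))  # ('center', 'near')
```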
We have assumed that an object in the upper part of the
frame is farther away than an object in the lower 1/3 and that an
object detected in the lower 1/3 is an obstruction to the VI
person. We can represent the frame in an x-y plane. Let W be
the width and H be the height. The corners of the middle area
are expressed as follows:

(1/3 H, 2/3 H, 3/5 W)    (1)

Equation (1) represents the corners of the middle area, where
we detect objects and inform the VI person that an obstacle is in
front of them. Two green dots, placed at 1/3 of the frame
height, represent the threshold of the collision-free area. Objects
between the two green dots and the start point must be avoided.
An object is deemed an obstruction if and when the lower
corners of the object enter below the area defined by equation
(1). If an object exists in front of the VI person, an alternative
path is required. We determine this path by searching for an
object on the left or right of the area enclosed by (1). If no objects
are detected on the left side of equation (1), the system issues a
turn left and go straight command to the VI person. If an object
on the left is detected, then the system searches for an object on
the right. If an object on the right is not detected, then a turn right
and go straight command is issued to the VI person. If objects
are detected in all three areas, the system issues a stop and wait
command until a suitable path is identified for the VI to
continue. We calculated the 20% strips of the middle quadrant to
provide more precise information to the user. If the obstacle
exists within one of the 20% strips of the middle quadrant but
not both, the user only needs to move slightly to the other side.
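The search sequence described above (go straight when the middle is clear, otherwise detour left, then right, else stop) can be written as a plain rule cascade; this is an illustrative sketch, not the authors' implementation:

```python
def avoidance_command(left_blocked, middle_blocked, right_blocked):
    """Rule cascade from the prose above: prefer going straight, then
    a left detour, then a right detour, and stop when all three areas
    within the reaction range are blocked."""
    if not middle_blocked:
        return "go straight"
    if not left_blocked:
        return "turn left and go straight"
    if not right_blocked:
        return "turn right and go straight"
    return "stop and wait"

print(avoidance_command(left_blocked=True, middle_blocked=True,
                        right_blocked=False))  # turn right and go straight
```

The fuzzy controller in the next subsection refines this crisp cascade by weighting each condition with membership values instead of hard booleans.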
A.2.1. Fuzzy Control Logic
In order to implement the abovementioned strategy, we use
fuzzy logic to determine the precise decision that the user will
take in order to avoid front obstacles based on multiple inputs.
Fig. 4 shows the fuzzy controller system for the obstacle
avoidance algorithm, which includes: a fuzzifier that converts
the inputs into a number of fuzzy sets based on the defined
variables and membership functions; an inference engine that
generates fuzzy results based on the fuzzy rules; and a
defuzzifier that maps each fuzzy output through the membership
functions to obtain the precise output the user should follow
[18]. We used MATLAB R2017b software to implement the
fuzzy logic rules.
Fig. 4. The fuzzy structure for the obstacle avoidance system
Step 1: Input and Output Determination
The proposed system has seven input variables. These inputs
are based on the position of the detected obstacles, the obstacle
range, and the user position. They are denoted as: {ObsRange,
UserPosition, ObsLeft, Obs20%LeftMid, ObsMiddle,
Obs20%RightMid, and ObsRight}. The output is the feedback
that the user needs to follow a path to the endpoint. Fig. 5(a) is
a representation of the proposed obstacle avoidance (Mamdani)
system in the Fuzzy Inference System (FIS) tool of MATLAB.
Fig. 5(b) displays the design of the fuzzy system for the obstacle
avoidance approach, illustrating the inputs and outputs and their
membership functions.
Step 2: Fuzzification
We have divided each input into membership functions.
Since the user wears the device on his/her chest, there are only
three options in terms of the user's position: {Left, Middle, and
Right}. However, since we are using two cameras, and the
processed frames of the two cameras are stitched together each
time as one, the user's position is always in the middle.
Therefore, the membership function of the user's position is
denoted as shown in Fig. 6. The range of this membership
function is 300 cm, which is considered the width of the scene.
The membership function used is the Gaussian function,
represented in (2) with center value c and σ > 0; as σ gets
smaller, the bell gets narrower:

μ(x) = exp(−(x − c)² / (2σ²)),  σ > 0    (2)
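The Gaussian membership function of (2) can be sketched directly; the centre at 150 cm (the middle of the 300 cm scene width) and the σ value are illustrative assumptions, not values from the paper:

```python
import math

def gaussian_mf(x, c, sigma):
    # Standard Gaussian membership function: exp(-(x - c)^2 / (2 sigma^2)).
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# User-position "Middle" membership over the 300 cm scene width;
# c = 150 cm and sigma = 50 cm are assumed values for illustration.
print(gaussian_mf(150, c=150, sigma=50))            # 1.0 at the centre
print(round(gaussian_mf(200, c=150, sigma=50), 3))  # 0.607 one sigma away
```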
Table 2 describes the terms of the obstacle's position. The
obstacle's range is divided into two membership functions,
{Near, Far}, within the scene's height, which is [0, 900 cm].
The threshold is set to 300 cm. Consequently, the obstacle is
near if it exists within the range [0, 300 cm] and far if it is
farther than 300 cm. Fig. 7 represents the membership function
of the obstacle's range within the height of the scene (frame or
view). In addition, the obstacle's position is divided into
{ObsLeft, Obs20%LeftMid, ObsMiddle, Obs20%RightMid,
ObsRight}. However, in order to have more control over the
fuzzy rules, we divided each part of the obstacle's position into
two membership functions indicating whether the obstacle
exists or does not exist: {ObsEx, Obs_NEx}.
Fig. 5(a). The Fuzzy Inference System (FIS) in MATLAB for the proposed
obstacle avoidance (Mamdani) system
Fig. 5(b). A high-level diagram of an FIS for the proposed obstacle
avoidance system
Fig. 6. Membership function for the user’s position
Table 1. Terms of the user's position

Term   | Meaning
-------|----------------------------------------------------------------
Left   | User is in the left 0 to 1/3 of the scene's width [0:50:100]
Middle | User is in the middle, from 1/3 to 2/3 of the scene's width
Right  | User is in the last 1/3 of the scene's width [200:250:300]
Fig. 7. Membership function of the object's presence in two ranges of the scene
Fig. 8 represents the membership function of the obstacle's
position in the left side of the scene. The same function is used
for the remaining obstacle-position inputs. The negative values
indicate that the obstacle does not exist on that side, whereas the
positive values indicate the existence of the obstacle on that
side. Assume the value of the obstacle's position is x, where
x ∈ R. Consequently, four parameters k, l, m, n are used to
express the trapezoidal-shaped membership function in the
following equation (3):

μ(x) = max(min((x − k) / (l − k), 1, (n − x) / (n − m)), 0)    (3)

Fig. 8. Membership function for the obstacle's position in the left side
Table 2. Terms of the obstacle's position

Term           | Meaning
---------------|--------------------------------------------------------
ObsLeft        | Obstacle is in the left side, [0, 100] cm
Obs20%LeftMid  | Obstacle is in the left side, yet within the 20% of the middle quadrant from the left side, [75, 125] cm
ObsMiddle      | Obstacle is in the middle, [100, 200] cm
Obs20%RightMid | Obstacle is in the right side, yet within the 20% of the middle quadrant from the right side, [175, 225] cm
ObsRight       | Obstacle is in the right side, [200, 300] cm
The output is divided into six membership functions that are
based on the fused input variables. The output can be:
{MoveLeft, SlightLeftStraight, GoStraight, SlightRightStraight,
MoveRight, and Stop}. We used the trapezoidal-shaped
membership function for the MoveRight and MoveLeft
membership values, and the triangular membership function, as
shown in Fig. 9, to represent {SlightLeftStraight, GoStraight,
SlightRightStraight, and Stop}. The triangular membership
function is defined by the modal value m, the lower limit a, and
the upper limit b, where a < m < b. This function can be
expressed in (4):

μ(x) = 0 if x ≤ a;  (x − a)/(m − a) if a < x ≤ m;  (b − x)/(b − m) if m < x < b;  0 if x ≥ b    (4)
Fig. 9. Membership function of the output (feedback/directions)
Step 3: Creating Fuzzy Rules
The fuzzy rules are produced by observing and employing
the knowledge introduced in Table 1, Table 2, and the
membership functions and variables. The rules were
implemented using five conditions of the obstacle's position,
the obstacle's range, and the user's position. There are 18 rules
for the fuzzy controller system. The implemented 18 rules are
presented in Appendix A.1, and the rule viewer for these
implemented rules is displayed in Appendix A.2.

We used the standard fuzzy set operations to connect the
membership values: AND represents the minimum of two
values, whereas OR represents the maximum of two values. Let
μA and μB be two membership values; then the fuzzy AND is
described as min(μA, μB) and the fuzzy OR as max(μA, μB).
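A rule's firing strength under these min/max connectives can be sketched as follows (one Mamdani-style rule, not the full 18-rule base; the membership values are made up for illustration):

```python
def fuzzy_and(*mu):
    # Fuzzy AND: minimum of the membership values.
    return min(mu)

def fuzzy_or(*mu):
    # Fuzzy OR: maximum of the membership values.
    return max(mu)

# Firing strength of a rule like:
#   IF range is Near AND middle is ObsEx AND left is Obs_NEx THEN MoveRight
mu_near, mu_middle_ex, mu_left_nex = 0.8, 0.6, 0.9
print(fuzzy_and(mu_near, mu_middle_ex, mu_left_nex))  # 0.6
```

In a Mamdani system, this firing strength then clips the rule's output membership function before all rule outputs are aggregated with fuzzy OR.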
Step 4: Defuzzification
Defuzzification is the last step of the fuzzy controller
system. The output is produced based on the set of inputs, the
membership functions and values, and the fuzzy rules. The
defuzzified effect of the user's position and the obstacles'
positions on the feedback is calculated using the Largest of
Maximum (LOM) defuzzification method. Fig. 10 illustrates
the surface viewer, which displays the output boundary for
combinations of the obstacle's position and the user's position.
The user thus receives accurate and precise feedback for
avoiding front obstacles based on the combination of the
described membership values.
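The LOM method picks the largest domain value at which the aggregated output membership reaches its maximum; a discrete sketch (the axis values below are illustrative, not the paper's output universe):

```python
def lom_defuzzify(xs, mus):
    """Largest of Maximum: among all points where the aggregated
    output membership is maximal, return the largest x."""
    peak = max(mus)
    return max(x for x, mu in zip(xs, mus) if mu == peak)

# Aggregated output membership over a toy command axis.
xs = [0, 10, 20, 30, 40]
mus = [0.2, 0.7, 0.7, 0.4, 0.1]
print(lom_defuzzify(xs, mus))  # 20: the larger of the two maxima
```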
Furthermore, fuzzy logic is used to keep the VI person from
colliding with obstacles in front of them. The proposed system
was built based on the user's position, the obstacle's position,
and one output. After the device's initialization step, the
information on the obstacle and user positions is fed to the fuzzy
controller. The decision is then made based on the 18 fuzzy
rules, and the feedback is sent to the user through their
headphones. The whole process is employed recursively. If no
obstacle exists, the user continues on his/her path (straight) with
no change.
Fig. 10. The Surface Viewer that examines the output surface of an FIS for
obstacle’s position and user’s position using fuzzy logic toolbox
IV. IMPLEMENTATION AND EXPERIMENTAL SETUP
Fig. 11 illustrates the designed platform, which aims to
detect static and dynamic objects, avoid obstacles, and provide
navigational information. Fig. 12 demonstrates the hardware
diagram designed using .NET Gadgeteer in Visual Studio, the
hardware components, and a snapshot of one of the real-time
scenarios we performed.
Fig. 11. Assistive System for the Visually Impaired
Components: 2 serial camera modules, compass module, GPS module, gyro
module, Wi-Fi RS21 module, music module, PIR motion detection module, and
the FEZ Spider mainboard; the hardware design on .NET Gadgeteer using
Visual Studio.
Fig. 12. The hardware design, used components and the snapshot of one of the
real-time scenarios
The device was designed to facilitate the user's mobility by
providing appropriate navigational information. We used the C#
programming language, and we used MATLAB software to
display the fuzzy rules. The system is built using a .NET
Gadgeteer-compatible mainboard and modules from GHI
Electronics [20]. The software implementation is built on top of
the following SDKs using Visual Studio 2013:
NETMF and Gadgeteer Package 2013 R3
The device was tested in indoor and outdoor scenarios by a
blindfolded person. Our system was tested in a number of
real-time scenarios as well as on a video dataset that was fed
directly to the system. The dataset contains 30 videos, each with
700 frames on average. The results for the tested dataset are
illustrated in Table 3.
We divided the objects in each scene into two subgroups
because we realized that some objects should not be considered
obstacles unless they are located in front of the user or are
blocking his/her path; otherwise, the item is considered an
object, not an obstacle.
Table 3. Results for the tested dataset (best rate: 100%)
V. RESULTS AND EVALUATION
The focus of this evaluation is to show that the device
efficiently and economically assists VI people in navigating
indoors and outdoors. The experiments were run on Windows 7
with a Core i7 processor; the camera resolution is 320×240 with
a maximum frame rate of 20 fps.
A. Computational Analysis of the Collision Avoidance
The collision avoidance approach is used in this framework
to avoid the detected obstacles that the user may collide with.
The user is provided with the avoidance instructions at a
predefined distance (1/3 of the frame height). Therefore, we
have one scanning level (Fig. 3), where we scan for a free path
for the user.
The scanning level is divided into three areas: left, right, and
middle. The searching approach in each area is based on fuzzy
logic under one “for loop”. We first search for the detected
obstacles. If there is no obstacle within the scanning level, the
user will continue straight. If there is an obstacle, the fuzzy rules
will be applied. Thus, the proposed obstacle avoidance approach
has a complexity of linear time (O(n), where n is the number of
detected obstacles). Every time the user passes the threshold
(1/3 of the frame height), a new window (1/3 of the frame
height) is scanned.
To obtain the time complexity of the whole system, we add
the time complexity of the detection algorithm to the time
complexity of the obstacle avoidance algorithm. Most obstacle
avoidance algorithms are applied after the SIFT or SURF
algorithms for object detection. Our obstacle avoidance
approach is applied after the ORB algorithm, which requires
less memory and computation time than the alternatives [21].
According to [22, 23], the time complexity of the ORB
algorithm is almost half the time complexity of SIFT and
SURF. We conclude that our overall system provides a faster
and more reliable obstacle avoidance system.
Fig. 13 represents the time taken to process five frames,
each containing a number of obstacles; it demonstrates the time
taken for the detection/avoidance algorithm, establishing HTTP
requests, and playing the audio feedback, i.e., the actual time to
detect obstacles, avoid them, and send the audio feedback to the
user through their headphones. Thus, the complete processing
time increases with the number of detected objects in terms of
O(n log n).
The required processing time for 50 obstacles in one frame
is 0.35 s. Thus, our system is capable of processing more than
three frames per second. This indicates that the proposed system
operates in real time, as we have designed it for pedestrians.
Fig. 14 demonstrates the performance of the obstacle
avoidance system. Seven scenarios were considered in Fig. 14.
The blue column expresses the average number of objects that
appeared in the scenario. The orange column is a representation
of the detected obstacles. In some scenarios, the detection
system did not detect all of the objects present in the frame. For
example, in scenario 4, 10 obstacles were detected out of a total
of 12 objects. The reason behind the mismatch is the large size
of some objects in the frame. The grey column demonstrates the
average number of obstacles avoided using the proposed
obstacle avoidance system. This indicates that the collision
avoidance approach is 100% accurate in providing a free path to
the user as long as the obstacle was detected in the set of
objects.
Fig. 13. The cost of the proposed approach as a function of time for avoiding
the number of obstacles in each frame
Fig. 14. The performance evaluation of the obstacle avoidance system
VI. CONCLUSION
In this work, we developed a hardware and software
implementation that provides a framework to assist VI people.
The system was implemented using a .NET Gadgeteer-compatible
mainboard and modules from GHI Electronics. This novel
electronic travel aid facilitates the mobility of VI people indoors
and outdoors using computer vision and sensor-based
techniques.
A high accuracy for the static and dynamic detection system
is achieved based on the proposed sequence of well-known
algorithms. Our proposed obstacle avoidance system enabled
the user to traverse his/her path and avoid 100% of the detected
obstacles. We conducted numerous experiments to test the
accuracy of the system. The proposed system exhibits
outstanding performance when comparing the expected decision
with the actual decision. Based on the extensive evaluation of
other systems, our system exhibits accurate performance and an
improved interaction structure with VI people.
REFERENCES
[1] World Health Organization. Visual Impairment and Blindness. Available
online: (Accessed on June 2017).
[2] National Federation of the Blind. Available online:
(Accessed on January 2016).
[3] American Foundation for the Blind. Available online:
(Accessed on January 2017).
[4] V. Ramiro. "Wearable assistive devices for the blind," Wearable and
autonomous biomedical devices and systems for smart environment 75, pp.
331-349, Oct. 2010.
[5] W. Elmannai and K. Elleithy. "Sensor-Based Assistive Devices for
Visually-Impaired People: Current Status, Challenges, and Future
Directions," Sensors 17, no. 3, p.565, 2017
[6] A. Brilhault, S. Kammoun, O. Gutierrez, P. Truillet, C. Jouffrais, “Fusion
of artificial vision and GPS to improve blind pedestrian positioning,” 4th
IFIP International Conference on New Technologies, Mobility and
Security (NTMS), Paris, France, pp. 1–57–10, February 2011.
[7] C.E. White, D. Bernstein, A.L. Kornhauser, “Some map matching
algorithms for personal navigation assistants,” Transportation research part
c: emerging technologies, 8(1). pp.91-108, Dec 31, 2000.
[8] J.M. Loomis, R.G. Golledge, R.L. Klatzky, J.M. Speigle and J. Tietz,
“Personal guidance system for the visually-impaired,” In Proceedings of
the First Annual ACM Conference on Assistive Technologies, Marina Del
Rey, CA, USA, 31 October–1 November 1994.
[9] A. Landa-Hernández and E. Bayro-Corrochano, “Cognitive guidance
system for the blind,” The IEEE World Automation Congress (WAC),
Puerto Vallarta, Mexico, 24–28 June 2012.
[10] R. Tapu, B. Mocanu, T. Zaharia, “A computer vision system that ensure
the autonomous navigation of blind people,” The IEEE E-Health and
Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2013.
[11] R. Tapu, B. Mocanu, T. Zaharia, “Real time static/dynamic obstacle
detection for visually-impaired persons,” The 2014 IEEE International
Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–
13 January 2014.
[12] A. Aladren, G. Lopez-Nicolas, L. Puig, J.J. Guerrero, “Navigation
Assistance for the Visually-Impaired Using RGB-D Sensor with Range
Expansion,” IEEE Syst. J., 10, pp. 922–932, 2016.
[13] N. Kiryati, Y. Eldar and M. Bruckstein, “A probabilistic Hough transform,”
Pattern Recogn, 24, pp.303–316, 1991.
[14] B. Mocanu, R. Tapu and T. Zaharia, “When Ultrasonic Sensors and
Computer Vision Join Forces for Efficient Obstacle Detection and
Recognition,” Sensors, 16, p. 1807, 2016.
[15] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, “ORB: An efficient
alternative to SIFT or SURF,” In Computer Vision (ICCV), 2011 IEEE
international conference on, IEEE, pp. 2564-2571, November 2011.
[16] L. Yu, Z. Yu, Y. Gong, “An Improved ORB Algorithm of Extracting and
Matching Features,” International Journal of Signal Processing, Image
Processing and Pattern Recognition, 8(5), pp.117-126, 2015.
[17] A. Vinay, A.S. Rao, V.S. Shekhar, A. Kumar, K.B. Murthy, S. Natarajan,
“Feature Extraction using ORB-RANSAC for Face Recognition,” Procedia
Computer Science, 70, pp.174-184, 2015.
[18] L. Zadeh, “Fuzzy sets, fuzzy logic, and fuzzy systems”, selected papers by
Lotfi A. Zadeh. NJ: World Scientific Publishing Co., Inc, pp. 94–102,
[19] M. Hol, A. Bilgin, T. Bosse, & B. Bredeweg, A Fuzzy Logic Approach for
Anomaly Detection in Energy Consumption Data. In BNAIC (Vol. 28).
Vrije Universiteit, Department of Computer Sciences. June, 2016.
[20] G.H.I Electronics, Catalog .NET Gadgeteer - GHI Electronics.
May 5, 2013.
[21] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, “ORB: An efficient
alternative to SIFT or SURF,” In Computer Vision (ICCV), 2011 IEEE
international conference on, IEEE, pp. 2564-2571, November 2011.
[22] A. Canclini, M. Cesana, A. Redondi, M. Tagliasacchi, J. Ascenso and R. Cilla, "Evaluation of low-complexity visual feature detectors and descriptors," in Proc. 2013 18th International Conference on Digital Signal Processing (DSP), IEEE, pp. 1–7, July 2013.
[23] L. Yang and Z. Lu, "A new scheme for keypoint detection and description," Mathematical Problems in Engineering, vol. 2015, 2015.
2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC)
Fuzzy control rules (ObsEx = obstacle exists in the zone; Obs_NEx = no obstacle in the zone):

Rule  | User's Position | Distance | ObsLeft | Obs20%LeftMid | ObsMiddle | Obs20%RightMid | ObsRight | User's Feedback
1     | Middle          | Near     | ObsEx   | Obs_NEx       | Obs_NEx   | Obs_NEx        | Obs_NEx  | GoStraight
2     | Middle          | Near     | ObsEx   | Obs_NEx       | Obs_NEx   | Obs_NEx        | ObsEx    | GoStraight
3     | Middle          | Near     | Obs_NEx | Obs_NEx       | Obs_NEx   | Obs_NEx        | ObsEx    | GoStraight
4     | Middle          | Near     | ObsEx   | ObsEx         | Obs_NEx   | Obs_NEx        | Obs_NEx  | SlightRightStraight
5     | Middle          | Near     | Obs_NEx | Obs_NEx       | Obs_NEx   | ObsEx          | ObsEx    | SlightLeftStraight
6     | Middle          | Near     | ObsEx   | ObsEx         | Obs_NEx   | ObsEx          | ObsEx    | Stop
7     | Middle          | Near     | ObsEx   | Obs_NEx       | ObsEx     | Obs_NEx        | Obs_NEx  | MoveRight
8     | Middle          | Near     | Obs_NEx | Obs_NEx       | ObsEx     | Obs_NEx        | ObsEx    | MoveLeft
9     | Middle          | Near     | ObsEx   | Obs_NEx       | ObsEx     | Obs_NEx        | ObsEx    | Stop
10–18 | Middle          | Far      | Obs_NEx | Obs_NEx       | Obs_NEx   | Obs_NEx        | Obs_NEx  | GoStraight
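For a Near obstacle, the rule base above reduces to a lookup from the five zone-occupancy flags to an audio command, while any Far reading yields GoStraight. The following Python sketch illustrates that mapping; the function and variable names are illustrative rather than taken from the authors' .NET Gadgeteer implementation, and the crisp dictionary lookup stands in for the fuzzy membership evaluation used in the actual system:

```python
# Sketch of the avoidance rule base from the table above (illustrative names).
# Each key is the occupancy of (Left, 20%LeftMid, Middle, 20%RightMid, Right);
# True corresponds to ObsEx, False to Obs_NEx.
NEAR_RULES = {
    (True,  False, False, False, False): "GoStraight",           # rule 1
    (True,  False, False, False, True ): "GoStraight",           # rule 2
    (False, False, False, False, True ): "GoStraight",           # rule 3
    (True,  True,  False, False, False): "SlightRightStraight",  # rule 4
    (False, False, False, True,  True ): "SlightLeftStraight",   # rule 5
    (True,  True,  False, True,  True ): "Stop",                 # rule 6
    (True,  False, True,  False, False): "MoveRight",            # rule 7
    (False, False, True,  False, True ): "MoveLeft",             # rule 8
    (True,  False, True,  False, True ): "Stop",                 # rule 9
}

def feedback(distance: str, zones: tuple) -> str:
    """Return the audio command for a (distance, zone-occupancy) reading."""
    if distance == "Far":
        # Rules 10-18: far obstacles never block the immediate path.
        return "GoStraight"
    # Unlisted Near patterns fall back to Stop as a conservative default.
    return NEAR_RULES.get(zones, "Stop")

print(feedback("Near", (True, True, False, False, False)))   # SlightRightStraight
print(feedback("Far",  (False, False, False, False, False))) # GoStraight
```

A crisp table like this maps directly onto a fuzzy controller: each boolean flag becomes a membership degree, and the dictionary rows become IF-THEN rules whose firing strengths are aggregated before defuzzification.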