A Novel Obstacle Avoidance System for Guiding the
Visually Impaired through the use of Fuzzy Control
Logic
Wafa M. Elmannai, Member, IEEE; Khaled M. Elleithy, Senior Member, IEEE
Department of Computer Science and Engineering
University of Bridgeport
Bridgeport, USA
welmanna@my.bridgeport.edu; elleithy@bridgeport.edu
Abstract—This paper presents an intelligent framework that
includes several types of sensors embedded in a wearable device to
support the visually impaired (VI) community. The proposed
work is based on the integration of sensor-based techniques with
computer vision-based technology in order to introduce an
efficient and economical visual device. The 98% accuracy rate of
the proposed sequence is based on a wide detection view, obtained
with two camera modules, and a detection range of approximately
9 meters. In addition, we introduce a novel obstacle avoidance
approach based on image depth and fuzzy control rules. In this
approach, each frame is divided into three areas. Using fuzzy
logic, we were able to provide precise information to help the VI
user avoid front obstacles. The strength of this proposed
approach aids VI users in avoiding 100% of all identified
objects. Once the device is initialized, the VI user can confidently
enter unfamiliar surroundings. The implemented device can
therefore be described as accurate, reliable, friendly, light, and
economically accessible; it facilitates the indoor and outdoor
mobility of VI people and requires no prior knowledge of the
surrounding environment.
Keywords— assistive wearable devices, computer vision
techniques, obstacle detection, obstacle avoidance, visual
impairment, fuzzy logic.
I. INTRODUCTION
In 2014, the World Health Organization (WHO) [1]
reported that 285 million people worldwide are visually
impaired (VI); of this number, 39 million people are
completely blind. In the USA, approximately 8.7 million
people are VI, of whom approximately 1.3 million are blind
[2]. Both the National Federation of the Blind [2] and the
American Foundation for the Blind [3] reported that 100,000
VI people are students. During the last decade, a notable
accomplishment of public health efforts was a decrease in the
number of diseases that cause blindness. Globally, ninety
percent of VI people have low incomes and live in developing
countries [1]. In addition, 82% of VI people are older than 50
years [1]. The number of VI people is estimated to increase by
approximately 2 million per decade and is expected to double
by 2020 [4].
VI people encounter many challenges when performing
most of the natural activities performed by human beings, such
as detecting static or dynamic objects and safely navigating
along paths. These activities are very difficult and may be
dangerous for VI people, especially if the environment is
unknown. Therefore, VI people tend to use the same route
every time, remembering its unique elements.
We have investigated several solutions that assist VI people.
Our intensive study produced a taxonomy that provides a
technical classification for comparing any system with other
systems; this taxonomy is presented in a literature survey
published in [5]. None of these studies provides a complete
solution that can assist VI people in all aspects of their lives.
Thus, the objective of this work is to design an efficient
framework that significantly improves the lives of VI people.
Our framework overcomes the limitations of previous systems
by providing a wider detection range, working both indoors and
outdoors, and providing a navigational service.
The organization of this paper is as follows: Section II
presents a study of the state-of-the art assistive technologies for
VI people. The proposed framework is described in Section III.
Implementation and experimental setup results are presented in
Section IV. Section V demonstrates the results and evaluation.
Section VI concludes the paper.
II. RELATED WORK
Although several solutions have been proposed in the last
decade, none of them offers a complete solution that can assist
VI people in all aspects of their lives. This section presents a
representative set of the proposed techniques.
Recently, we have noticed a rapid proliferation of assistive
systems, owing to the improvement and progress of computer
vision techniques that add more value and services with
flexibility. Among these systems, [6] introduced a fusion of
artificial vision, map matching [7], and GPS as an enhanced
navigational system that supports VI people in their mobility;
the SpikNet recognition algorithm was employed for image
processing [8]. In addition, cognitive guidance devices have
been designed by integrating the Kinect sensor’s output,
vanishing points, and fuzzy decisions to navigate the VI
person through a known environment [9]. An independent
mobility aid was proposed in [10] using indoor and outdoor
obstacle avoidance [11] for static/dynamic object detection.
Lucas-Kanade, RANSAC, an adapted HOG descriptor, Bag of
Visual Words (BoVW), and SVM are employed for object
detection and recognition. The authors of [12] introduced an
object detection system using an RGB-D sensor and computer
vision techniques; the classification process was based on the
use of a Canny edge detector [13], the Hough Line Transform,
and RANSAC, applied to the RGB-D sensor’s output. The
work in [14] is based on the integration of ultrasonic-based
technology and computer vision algorithms, which are used for
the proposed obstacle detection and recognition system.
Details about the systems presented in this section are
provided in [5]. To overcome the limitations of the discussed
systems, we propose a novel system that integrates both sensor-
based techniques and computer vision techniques to provide a
complete solution for VI people indoors and outdoors with other
complementary features. The proposed approach is described in
the following section.
III. PROPOSED WORK
A. Proposed Methodology of the Data Fusion Algorithm
The proposed framework includes hardware and software
components. The hardware design is composed of two camera
modules, a compass module, a GPS module, a gyroscope
module, a music (audio output) module, a microphone module,
and a Wi-Fi module. The aim of the software is to develop an
efficient data fusion algorithm, based on the sensory data and
aided by computer vision methods, that provides highly
accurate object detection/avoidance and navigation for safe
mobility. Fig. 1 shows the communication methodology
between the components.
Fig. 1. Proposed methodology of interaction process among the hardware
components
The system is designed to navigate the user to their desired
location and to avoid any obstacles in front of the user once they
are detected. Using the data fused from multiple sensors
together with computer vision methods, the user receives
navigational instructions in an audio format. Two camera
sensors are used for object detection, which is processed using
computer vision methods; the remote server handles the image
processing component. Based on the depth of the image and
fuzzy logic, we can approximately measure the distance
between the obstacle and the VI person. A compass is employed
for orientation adjustment, a gyroscope (gyro) sensor for
rotation measurements in degrees, and GPS for the user’s
location. All components are connected to the microcontroller
board, which communicates with a remote server. Route
guidance is provided by a GIS. Thus, we use the gyro, compass,
and GPS to track the user’s direction, location, and orientation
to provide accurate route guidance information.
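To make this interaction flow concrete, the following minimal Python sketch mirrors the loop described above using stub functions; every name here is an illustrative placeholder rather than the authors' actual C#/.NET Gadgeteer implementation.

```python
def capture_stitched_frame():
    # Stub: grab one frame from each of the two cameras and stitch
    # them side by side into a single wide view.
    return "stitched-frame"

def detect_obstacles(frame):
    # Stub: ORB -> KNN matching -> RANSAC -> K-means (Section III.A.1).
    # Returns a list of bounding boxes for detected obstacles.
    return []

def fuzzy_feedback(obstacles):
    # Stub: the fuzzy controller of Section III.A.2.1 maps obstacle and
    # user positions to a spoken command.
    return "GoStraight" if not obstacles else "Stop"

def play_audio(command):
    # Stub: the real device plays this through the audio module.
    print("audio feedback:", command)

def navigation_loop(num_frames=3):
    # The real device loops continuously; a few iterations suffice here.
    for _ in range(num_frames):
        frame = capture_stitched_frame()
        play_audio(fuzzy_feedback(detect_obstacles(frame)))

navigation_loop()
```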
A.1. Static/Dynamic Obstacle Detection
A sequence of well-known algorithms is used to provide a
robust obstacle detection system.
Oriented FAST and Rotated BRIEF (ORB) is the approach
that we use for static/dynamic object detection. ORB is
characterized by fast computation for panorama stitching and
low power consumption. ORB is an open-source algorithm that
was presented by Ethan Rublee, Vincent Rabaud, Kurt
Konolige, and Gary R. Bradski in 2011 as a suitable alternative
to SIFT due to its effective matching performance and low
computing cost [15]. ORB provides its own binary (BRIEF-
based) descriptor. ORB is used to extract the interest points in
each frame.
In addition, we use the K-Nearest Neighbor (KNN)
algorithm with k = 2 to match the descriptors of the extracted
interest points between two frames in which an object is
present. In this paper, we use the Brute Force matcher, a simple
implementation of KNN matching. In our case, the Brute Force
matcher finds the two closest corresponding descriptors for
each descriptor in the current frame by trying every descriptor
in the corresponding frame. The Hamming distance is used to
compare each pair, since the ORB descriptor is a binary string.
Each interest point descriptor is represented as a vector f
generated by BRIEF. If two bits of the descriptors are equal,
the result is 0; otherwise, the result is 1. The Hamming distance
ensures correct matching by counting the number of attributes
in which the pair of instances differ. The K-Nearest Neighbor
algorithm finds the best match between the descriptor of an
extracted interest point and the corresponding descriptor [16].
The RANSAC algorithm then reduces false positive matches,
in which the presence of an object is indicated even though no
actual object exists [17].
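As a concrete illustration of this matching chain, the following Python/OpenCV sketch runs ORB extraction, Brute Force KNN matching with the Hamming distance, a ratio test, and RANSAC on two consecutive frames. The file names and the 0.75 ratio threshold are assumptions for the example; the authors' implementation is in C#.

```python
import cv2
import numpy as np

# Hypothetical input: two consecutive grayscale frames from the camera.
prev_img = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr_img = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)            # FAST keypoints + BRIEF descriptors
kp1, des1 = orb.detectAndCompute(prev_img, None)
kp2, des2 = orb.detectAndCompute(curr_img, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING)           # Hamming distance on binary strings
matches = bf.knnMatch(des1, des2, k=2)         # k = 2 nearest neighbors

# Keep matches whose best distance clearly beats the second best.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# RANSAC rejects geometrically inconsistent (false positive) matches.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(len(good), "tentative matches,", len(inliers), "RANSAC inliers")
```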
The final step is the K-Means clustering technique, which
clusters the n extracted points of a particular frame. However,
more than one cluster may represent the same object; therefore,
a merging method must be applied when clusters intersect. We
made a slight modification to the K-Means clustering
technique: we merge two intersecting clusters if the first
cluster’s centroid is within the second cluster, or vice versa. For
example, let S1 and S2 be clusters with centroid points C_S1
and C_S2, respectively. The two clusters are merged into one
cluster if ((S1 ∩ S2) is not null AND the centroid C_S2 is
within S1); then S2 is merged into S1. Otherwise, merging does
not occur, even if (S1 ∩ S2) is not null.
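The following Python sketch illustrates one way to realize this merge rule on top of standard K-Means; the bounding-box test used for "centroid within a cluster" is our assumption for the example, not the authors' exact criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

def bbox_contains(cluster, point):
    # "Within" test approximated by the cluster's bounding box; this also
    # implies the two clusters intersect (assumption for this sketch).
    lo, hi = cluster.min(axis=0), cluster.max(axis=0)
    return bool(np.all(point >= lo) and np.all(point <= hi))

def cluster_and_merge(points, k):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
    clusters = [points[km.labels_ == i] for i in range(k)]
    merged = True
    while merged:                      # repeat until no pair can be merged
        merged = False
        for a in range(len(clusters)):
            for b in range(len(clusters)):
                if a == b:
                    continue
                c_b = clusters[b].mean(axis=0)        # centroid of S2
                if bbox_contains(clusters[a], c_b):   # C_S2 lies within S1
                    clusters[a] = np.vstack([clusters[a], clusters[b]])
                    del clusters[b]                   # merge S2 into S1
                    merged = True
                    break
            if merged:
                break
    return clusters

# Example: two overlapping point blobs collapse into a single cluster.
pts = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 0.5])
print(len(cluster_and_merge(pts, k=2)))
```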
The result of the combination of the adopted algorithms is
represented in Fig. 2. The two green dots shown on the floor in
Fig. 2(b) denote the detection range from where the user is
standing.
Fig. 2. Representation of the proposed object detection technique; (a) original frame, (b) the frame after applying proposed sequence of algorithms for object
detection and (c) the frame after applying K-means clustering and merging to identify each object
A.2. Reliable Obstacle Avoidance System using Fuzzy Logic
Existing systems use dedicated sensors to measure the
distance between the user and the obstacle; a comparable
distance measurement technique is not available for a camera-
based system of this type. In this section, we propose a
proximity measurement methodology that approximately
measures the distance between the user and the obstacle using
mathematical models, together with an obstacle avoidance
approach based on fuzzy logic. The proposed approach uses a
camera tilted slightly downward, which yields a fixed viewing
distance between the VI person and the ground. This view gives
us a reference for determining whether an object is an
obstruction. We have determined that the average viewing
distance between a VI person and the ground is 9 meters with
the device angled down. This enables us to identify an obstacle
within a 9-meter range; however, a VI person only needs to
react to an object within a 3-meter range. Our proposed method
divides the frame into three areas (left, right, and center), as
shown in Fig. 3.
Fig. 3. Approximate distance measurement for object avoidance
We have assumed that an object in the upper part of the
frame is further away than an object in the lower 1/3, and that
an object detected in the lower 1/3 is an obstruction to the VI
person. We represent the frame in an x-y coordinate system.
Let W be the width and H be the height. The two corner points
that separate the left and right areas from the middle area are
expressed as follows:

$\left(\tfrac{1}{3}H, \tfrac{2}{5}W\right) \;\&\; \left(\tfrac{1}{3}H, \tfrac{3}{5}W\right)$  (1)
Equation (1) represents the corners of the middle area, where
we detect objects and inform the VI person that an obstacle is in
front of them. Two green dots, placed at 1/3 of the frame height,
represent the threshold of the collision-free area; objects
between the two green dots and the start point must be avoided.
An object is deemed an obstruction if and when the lower
corners of the object, represented by
$(x_{\min 1}, y_{\min}) \;\&\; (x_{\min 2}, y_{\min})$, enter below the area of equation
(1). If an object exists in front of the VI person, an alternative
path is required. We determine this path by searching for an
object to the left or right of the area enclosed by (1). If no objects
are detected on the left side of equation (1), the system issues a
“turn left and go straight” command to the VI person. If an
object is detected on the left, then the system searches for an
object on the right; if no object is detected on the right, a “turn
right and go straight” command is issued. If objects are detected
in all three areas, the system issues a “stop and wait” command
until a suitable path is identified for the VI person to continue.
We also calculated a 20% band on each side of the middle
quadrant to provide more precise information to the user: if the
obstacle exists within only one of the two 20% bands of the
middle quadrant, but not both, the user only needs to move
slightly toward the other side.
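The following Python sketch expresses this decision strategy as crisp rules; the paper realizes it with the 18 fuzzy rules of Section III.A.2.1, so the boolean inputs and the function itself are only an illustration of the logic.

```python
def avoidance_command(obs_left, obs_20_left, obs_middle, obs_20_right, obs_right):
    """Crisp rendering of the avoidance strategy; each flag marks an
    obstacle inside the lower-1/3 scanning window of that area."""
    in_middle = obs_middle or obs_20_left or obs_20_right
    if not in_middle:
        return "GoStraight"
    if obs_20_left and not (obs_middle or obs_20_right):
        return "SlightRightStraight"   # shift slightly away from the left band
    if obs_20_right and not (obs_middle or obs_20_left):
        return "SlightLeftStraight"
    if not obs_left:
        return "MoveLeft"              # left area is free
    if not obs_right:
        return "MoveRight"             # right area is free
    return "Stop"                      # all three areas blocked

# Middle and right blocked, left free -> turn left and go straight.
print(avoidance_command(False, False, True, False, True))  # MoveLeft
```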
A.2.1. Fuzzy Control Logic
In order to implement the abovementioned strategy, we use
fuzzy logic to determine the precise decision that the user
should take to avoid front obstacles, based on multiple inputs.
Fig. 4 shows the fuzzy controller system for the obstacle
avoidance algorithm, which includes: a fuzzifier, which
converts the inputs to a number of fuzzy sets based on the
defined variables and membership functions; and an inference
engine, which generates fuzzy results based on the fuzzy rules.
Each fuzzy output is then mapped by the membership functions
to obtain the precise output the user should follow [18]. We
used MATLAB R2017b software to implement the fuzzy logic
rules.
Fig. 4. The fuzzy structure of the obstacle avoidance system
• Step 1: Input and Output Determination
The proposed system has seven input variables, based on
the position of the detected obstacles, the obstacle range, and
the user position. They are denoted as: {ObsRange,
UserPosition, ObsLeft, Obs20%LeftMid, ObsMiddle,
Obs20%RightMid, and ObsRight}. The output is the feedback
that the user needs to follow a path to the endpoint. Fig. 5(a)
shows the proposed obstacle avoidance (Mamdani) system
applied in the Fuzzy Inference System (FIS) in MATLAB. Fig.
5(b) displays the design of the fuzzy system for the obstacle
avoidance approach, illustrating the inputs and outputs and
their membership functions.
• Step 2: Fuzzification
We have divided each input into membership functions.
Since the user wears the device on his/her chest, there are only
three options in terms of the user’s position: {Left, Middle, and
Right}. However, since we are using two cameras, and the
processed frames of the two cameras are stitched together each
time as one, the user’s position is always in the middle.
Therefore, the membership function of the user’s position is
denoted as shown in Fig. 6. The range of this membership
function is 300 cm, which is considered the width of the scene.
The membership function used is the Gaussian function,
represented in (2) with middle value m and σ > 0. As σ gets
smaller, the bell gets narrower.

$G(x) = \exp\left[\frac{-(x-m)^2}{2\sigma^2}\right]$  (2)
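A minimal sketch of this membership function, assuming Python/NumPy; the σ value below is an illustrative choice, not taken from the paper.

```python
import numpy as np

def gaussian_mf(x, m, sigma):
    # Eq. (2): bell-shaped membership centered at m;
    # a smaller sigma yields a narrower bell.
    return np.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))

# User position over the 300 cm scene width, centered on "Middle" at 150 cm.
x = np.linspace(0, 300, 7)
print(gaussian_mf(x, m=150, sigma=50))  # sigma = 50 is an assumption
```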
Table 2 describes the terms for the obstacle’s position. The
obstacle’s range is divided into two membership functions,
{Near, Far}, within the scene’s height, which is [0 – 900 cm].
The threshold is set to 300 cm; consequently, an obstacle is
near if it exists within the range [0 – 300 cm] and far if it is
farther than 300 cm. Fig. 7 represents the membership function
of the obstacle’s range within the height of the scene (frame or
view). In addition, the obstacle’s position is divided into
{ObsLeft, Obs20%LeftMid, ObsMiddle, Obs20%RightMid,
ObsRight}. However, in order to have more control over the
fuzzy rules, we divided each part of the obstacle’s position into
two membership functions indicating whether the obstacle
exists or does not exist: {ObsEx, Obs_NEx}.
Fig. 5(a). The Fuzzy Inference System (FIS) in MATLAB for the proposed
obstacle avoidance (Mamdani) system.
Fig. 5(b). A high-level diagram of an FIS for the proposed obstacle
avoidance system
Fig. 6. Membership function for the user’s position
TABLE 1: DEFINITION OF MEMBERSHIP FUNCTION FOR USER POSITION

Term   | Meaning                                                    | Range
Left   | User is in the left 0 – 1/3 of the scene’s width           | [0:50:100]
Middle | User is in the middle 1/3 to 2/3 of the scene’s width      | [100:150:200]
Right  | User is in the last (right) 1/3 of the scene’s width       | [200:250:300]
Fig. 7. Membership function of object presence in the two ranges of the scene
Fig. 8 represents the membership function of the obstacle’s
position on the left side of the scene. The same function is used
for the remaining obstacle-position inputs. Negative values
indicate that the obstacle does not exist on that side, whereas
positive values indicate the existence of the obstacle on that
side. Assume the value of the obstacle’s position is x within the
given range, where x ∈ R. Four parameters [i, j, k, l] are then
used to express the trapezoidal-shaped membership function in
the following equation (3):

$\mu_{trap}(x;\, i, j, k, l) = \max\left(\min\left(\frac{x-i}{j-i},\; 1,\; \frac{l-x}{l-k}\right),\; 0\right)$  (3)
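A minimal Python/NumPy sketch of equation (3); the breakpoints [i, j, k, l] below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def trapezoid_mf(x, i, j, k, l):
    # Eq. (3): ramps up on [i, j], flat at 1 on [j, k], ramps down on [k, l].
    return np.maximum(
        np.minimum(np.minimum((x - i) / (j - i), 1.0), (l - x) / (l - k)),
        0.0)

# Illustrative "ObsLeft" membership over the 300 cm scene width; negative
# x values model obstacle absence on that side, as in Fig. 8.
x = np.linspace(-50, 300, 8)
print(trapezoid_mf(x, i=-25, j=0, k=75, l=125))  # breakpoints are assumptions
```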
Fig. 8. Membership function for the obstacle’s position on the left side
TABLE 2: DEFINITION OF THE OBSTACLE POSITION’S VARIABLES

Term           | Meaning                                                                              | Range
ObsLeft        | Left                                                                                 | [0 – 100] cm
ObsRight       | Right                                                                                | [200 – 300] cm
ObsMiddle      | Middle                                                                               | [100 – 200] cm
Obs20%LeftMid  | Obstacle is on the left side, within the 20% of the middle quadrant from the left    | [75 – 125] cm
Obs20%RightMid | Obstacle is on the right side, within the 20% of the middle quadrant from the right  | [175 – 225] cm
The output is divided into six membership functions that are
based on the fused input variables. The output can be:
{MoveLeft, SlightLeftStraight, GoStraight, SlightRightStraight,
MoveRight, and Stop}. We used the trapezoidal-shaped
membership function for the MoveRight and MoveLeft
membership values, and the triangular membership function,
shown in Fig. 9, for {SlightLeftStraight, GoStraight,
SlightRightStraight, and Stop}. The triangular membership
function is defined by the modal value m, the lower limit a, and
the upper limit b, where a < m < b. This function can be
expressed as in (4):

$A(x) = \begin{cases} 0 & \text{if } x \le a \\ \dfrac{x-a}{m-a} & \text{if } x \in (a, m] \\ \dfrac{b-x}{b-m} & \text{if } x \in (m, b) \\ 0 & \text{if } x \ge b \end{cases}$  (4)
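A minimal Python/NumPy sketch of equation (4); the limits a, m, b below are illustrative assumptions.

```python
import numpy as np

def triangle_mf(x, a, m, b):
    # Eq. (4): 0 outside [a, b], rising on (a, m], falling on (m, b); a < m < b.
    up = (x - a) / (m - a)
    down = (b - x) / (b - m)
    return np.maximum(np.minimum(up, down), 0.0)

# Illustrative output term centered at m = 50 on an assumed output axis.
x = np.linspace(0, 100, 6)
print(triangle_mf(x, a=30, m=50, b=70))  # limits are assumptions
```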
Fig. 9. Membership function of the output (feedback/directions)
• Step 3: Creating Fuzzy Rules
The fuzzy rules are produced by observing and employing
the knowledge introduced in Table 1, Table 2, and the
membership functions and variables. The rules were
implemented using five conditions on the obstacle’s position,
together with the obstacle’s range and the user’s position. There
are 18 rules in the fuzzy controller system. The implemented 18
rules are presented in Appendix A.1, and the rule viewer for
these implemented rules is displayed in Appendix A.2.
We use the standard fuzzy set operations to connect the
membership values: AND takes the minimum of two values
(intersection), whereas OR takes the maximum of two values
(union). Let $\mu_{A}$ and $\mu_{B}$ be two membership
values; the fuzzy AND is then described as follows:

$\mu_{A} \;\mathrm{AND}\; \mu_{B} = \min(\mu_{A}, \mu_{B})$  (5)
• Step 4: Defuzzification
Defuzzification is the last step of the fuzzy controller system.
The output is produced based on the set of inputs, the
membership functions and values, and the fuzzy rules. The
defuzzified effect of the user’s position and the obstacles’
positions on the feedback is calculated using the Largest of
Maximum (LOM) defuzzification method. Fig. 10 illustrates the
surface viewer, which displays the boundaries of the different
combinations of the obstacle’s position and the user’s position.
The user thus receives accurate and precise feedback for
avoiding front obstacles, based on the combination of the
described membership values.
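A minimal Python/NumPy sketch of LOM defuzzification on a Mamdani-style aggregated output; the two output terms and the rule firing strengths below are illustrative assumptions, not the paper's 18-rule system.

```python
import numpy as np

def defuzz_lom(x, mu):
    # Largest of Maximum: the largest x at which the aggregated
    # membership reaches its peak value.
    return x[mu == mu.max()].max()

# Tiny Mamdani step: fire two rules with min (fuzzy AND), clip the output
# terms, aggregate with max (fuzzy OR), then take the LOM.
x = np.linspace(0, 100, 1001)
go_straight = np.maximum(np.minimum((x - 30) / 20, (70 - x) / 20), 0)  # triangular
stop        = np.maximum(np.minimum((x - 60) / 20, (100 - x) / 20), 0)
w1, w2 = 0.8, 0.3                       # illustrative rule firing strengths
aggregated = np.maximum(np.minimum(go_straight, w1), np.minimum(stop, w2))
print(defuzz_lom(x, aggregated))        # rightmost point of the 0.8 plateau
```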
In summary, fuzzy logic is used to prevent the VI person
from colliding with obstacles in front of them. The proposed
system was built based on the user’s position, the obstacle’s
position, and one output. After the device’s initialization step,
the information on the obstacle and user positions is fed to the
fuzzy controller; the decision is then made based on the 18
fuzzy rules, and the feedback is sent to the user through their
headphones. The whole process is applied recursively. If no
obstacle exists, the user continues on his/her path (straight)
with no change.
Fig. 10. The surface viewer, which examines the output surface of an FIS for
the obstacle’s position and the user’s position, using the fuzzy logic toolbox
IV. IMPLEMENTATION AND EXPERIMENT SETUP
Fig. 11 illustrates the designed platform, which aims to
detect static and dynamic objects, avoid obstacles, and provide
navigational information. Fig. 12 shows the hardware diagram
designed using .NET Gadgeteer in Visual Studio, the hardware
components, and a snapshot of one of the real-time scenarios
we performed.
Fig. 11. Assistive System for the Visually Impaired
[Fig. 12 components: 2 Serial Camera L2 Modules, Compass Module, GPS
Module, Gyro Module, Wi-Fi RS21 Module, Music Module, PIR Motion
Detection Module, and the FEZ Spider Mainboard; hardware design on
.NET Gadgeteer using Visual Studio]
Fig. 12. The hardware design, the components used, and a snapshot of one of
the real-time scenarios
The device was designed to facilitate the user’s mobility by
providing appropriate navigational information. We used the
C# programming language, and we used MATLAB software to
display the fuzzy rules. The system is built using the .NET
Gadgeteer-compatible mainboard and modules from GHI
Electronics [20]. The software implementation is built on top
of the following SDKs using Visual Studio 2013:
• NETMF SDK 4.3
• NETMF and Gadgeteer Package 2013 R3
The device was tested in indoor and outdoor scenarios by a
blindfolded person. Our system was tested in a number of real-
time scenarios, as well as on a video dataset that was fed
directly to the system. The dataset contains 30 videos, each
with 700 frames on average. The results for the tested dataset
are shown in Table 3.
We divided the objects in each scene into two subgroups
because we realized that some objects should not be considered
obstacles unless they are located in front of the user or are
blocking his/her path; otherwise, the item is considered an
object, not an obstacle.
TABLE 3. EVALUATION RESULTS FOR THE TESTED DATASET

No. of Videos                         | 30
Average Number of Frames per Video    | 700
Detection Rate for Detected Objects   | Worst: 85.71%; Average: 96.72%; Best: 100%
Detection Rate for Detected Obstacles | 100.00%
Average Accuracy                      | 98.36%
V. RESULTS AND EVALUATION
The focus of this evaluation is to provide an efficient and
economically accessible device that assists VI people in
navigating indoors and outdoors. The experiments were run on
Windows 7 with a Core i7 processor; the camera resolution is
320×240, with a maximum frame rate of 20 fps.
A. Computational Analysis of the Collision Avoidance
Algorithm
The collision avoidance approach is used in this framework
to avoid detected obstacles with which the user may collide.
The user is provided with avoidance instructions at a
predefined distance (1/3 of the frame height). Therefore, we
have one scanning level (Fig. 3), in which we scan for a free
path for the user.
The scanning level is divided into three areas: left, right, and
middle. The search in each area is based on fuzzy logic inside a
single for loop. We first search for detected obstacles. If there is
no obstacle within the scanning level, the user continues
straight; if there is an obstacle, the fuzzy rules are applied. Thus,
the proposed obstacle avoidance approach has linear time
complexity (O(n), where n is the number of detected obstacles).
Every time the user passes the threshold (1/3 of the frame
height), a new window (1/3 of the height) is calculated.
To obtain the time complexity of the whole system, we add
the time complexity of the detection algorithm to the time
complexity of the obstacle avoidance algorithm. Most obstacle
avoidance algorithms are applied after SIFT or SURF object
detection; our obstacle avoidance approach is applied after
ORB object detection, which requires less memory and
computation time than other systems [21]. According to [22,
23], the time complexity of the ORB algorithm is almost half
that of SIFT and SURF. We conclude that our overall system
provides a faster and more reliable obstacle avoidance system.
Fig. 13 represents the time taken to process five frames,
where each frame includes a number of obstacles. It shows the
actual time taken to detect and avoid obstacles, as well as to
send the audio feedback to the user through their headphones;
that is, the time for the detection/avoidance algorithms, for
establishing HTTP requests, and for playing the audio
feedback. Thus, the complete processing time increases with
the number of detected objects on the order of O(n log n).
The required processing time for 50 obstacles in one frame
is 0.35 s. Thus, our system is capable of processing more than
three frames per second. This indicates that the proposed
system operates in real time, as it was designed for pedestrians.
Fig. 14 demonstrates the performance of the obstacle
avoidance system across the seven scenarios considered. The
blue column shows the average number of objects that appeared
in the scenario, and the orange column represents the detected
obstacles. In some scenarios, the detection system did not detect
all of the objects present in the frame; for example, in scenario
4, 10 obstacles were detected out of a total of 12 objects. The
reason for the mismatch is the large size of some objects in the
frame. The grey column shows the average number of obstacles
avoided using the proposed obstacle avoidance system. This
indicates that the collision avoidance approach is 100% accurate
in providing a free path to the user, as long as the obstacle was
detected in the set of objects.
Fig. 13. The cost of the proposed approach as a function of the number of
obstacles avoided in each frame
Fig. 14. The performance evaluation of the obstacle avoidance system
VI. CONCLUSION
In this work, we developed a hardware and software
implementation that provides a framework that can assist VI
people. The system was implemented using a .NET Gadgeteer-
compatible mainboard and modules from GHI Electronics. This
novel electronic travel aid facilitates the mobility of VI people
indoors and outdoors using computer vision and sensor-based
approaches.
A high accuracy for the static and dynamic detection system
is achieved based on the proposed sequence of well-known
algorithms. Our proposed obstacle avoidance system enabled
the user to traverse his/her path and avoid 100% of the detected
obstacles. We conducted numerous experiments to test the
accuracy of the system. The proposed system exhibits
outstanding performance when comparing the expected decision
with the actual decision. Based on the extensive evaluation of
other systems, our system exhibits accurate performance and an
improved interaction structure with VI people.
REFERENCES
[1] World Health Organization. Visual Impairment and Blindness. Available
online: http://www.who.int/mediacentre/factsheets/fs282/en/ (Accessed on
June 2017).
[2] National Federation of the Blind. Available online: http://www.nfb.org/
(Accessed on January 2016).
[3] American Foundation for the Blind. Available online: http://www.afb.org/
(Accessed on January 2017).
[4] V. Ramiro. "Wearable assistive devices for the blind," Wearable and
autonomous biomedical devices and systems for smart environment 75, pp.
331-349, Oct. 2010.
[5] W. Elmannai and K. Elleithy. "Sensor-Based Assistive Devices for
Visually-Impaired People: Current Status, Challenges, and Future
Directions," Sensors 17, no. 3, p.565, 2017
[6] A. Brilhault, S. Kammoun, O. Gutierrez, P. Truillet, C. Jouffrais, “Fusion
of artificial vision and GPS to improve blind pedestrian positioning,” 4th
IFIP International Conference on New Technologies, Mobility and
Security (NTMS), Paris, France, pp. 1–5, 7–10 February 2011.
[7] C.E. White, D. Bernstein, A.L. Kornhauser, “Some map matching
algorithms for personal navigation assistants,” Transportation research part
c: emerging technologies, 8(1). pp.91-108, Dec 31, 2000.
[8] J.M. Loomis, R.G. Golledge, R.L. Klatzky, J.M. Speigle and J. Tietz,
“Personal guidance system for the visually-impaired,” In Proceedings of
the First Annual ACM Conference on Assistive Technologies, Marina Del
Rey, CA, USA, 31 October–1 November 1994.
[9] A. Landa-Hernández and E. Bayro-Corrochano, “Cognitive guidance
system for the blind,” The IEEE World Automation Congress (WAC),
Puerto Vallarta, Mexico, 24–28 June 2012.
[10] R. Tapu, B. Mocanu, T. Zaharia, “A computer vision system that ensure
the autonomous navigation of blind people,” The IEEE E-Health and
Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2013.
[11] R. Tapu, B. Mocanu, T. Zaharia, “Real time static/dynamic obstacle
detection for visually-impaired persons,” The 2014 IEEE International
Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–
13 January 2014.
[12] A. Aladren, G. Lopez-Nicolas, L. Puig, J.J. Guerrero, “Navigation
Assistance for the Visually-impaired Using RGB-D Sensor with Range
Expansion,” IEEE Syst. J., 10, pp. 922–932, 2016.
[13] N. Kiryati, Y. Eldar and M. Bruckstein, “A probabilistic Hough transform,”
Pattern Recogn, 24, pp.303–316, 1991.
[14] B. Mocanu, R. Tapu and T. Zaharia, “When Ultrasonic Sensors and
Computer Vision Join Forces for Efficient Obstacle Detection and
Recognition,” Sensors, 16, p. 1807, 2016.
[15] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, “ORB: An efficient
alternative to SIFT or SURF,” In Computer Vision (ICCV), 2011 IEEE
international conference on, IEEE, pp. 2564-2571, November 2011.
[16] L. Yu, Z. Yu, Y. Gong, “An Improved ORB Algorithm of Extracting and
Matching Features,” International Journal of Signal Processing, Image
Processing and Pattern Recognition, 8(5), pp.117-126, 2015.
[17] A. Vinay, A.S. Rao, V.S. Shekhar, A. Kumar, K.B. Murthy, S. Natarajan,
“Feature Extraction using ORB-RANSAC for Face Recognition,” Procedia
Computer Science, 70, pp.174-184, 2015.
[18] L. Zadeh, “Fuzzy sets, fuzzy logic, and fuzzy systems”, selected papers by
Lotfi A. Zadeh. NJ: World Scientific Publishing Co., Inc, pp. 94–102,
1996.
[19] M. Hol, A. Bilgin, T. Bosse, & B. Bredeweg, A Fuzzy Logic Approach for
Anomaly Detection in Energy Consumption Data. In BNAIC (Vol. 28).
Vrije Universiteit, Department of Computer Sciences. June, 2016.
[20] G.H.I Electronics, Catalog .NET Gadgeteer - GHI Electronics.
[Online:]Available:http://www.ghielectronics.com/catalog/category/265/,
May 5, 2013.
[21] E. Rublee, V. Rabaud, K. Konolige and G. Bradski, “ORB: An efficient
alternative to SIFT or SURF,” In Computer Vision (ICCV), 2011 IEEE
international conference on, IEEE, pp. 2564-2571, November 2011.
[22] A. Canclini , M.Cesana, A. Redondi, M. Tagliasacchi, J. Ascenso, & R.
Cilla, (2013, July). Evaluation of low-complexity visual feature detectors
and descriptors. In Digital Signal Processing (DSP), 2013 18th
International Conference on (pp. 1-7). IEEE.
[23] L. Yang, and Z. Lu, 2015. A new scheme for keypoint detection and
description. Mathematical Problems in Engineering, 2015.
APPENDIX A.1: FUZZY RULES FOR THE PROPOSED OBSTACLE AVOIDANCE SYSTEM

Rule | User’s Position | Obstacle’s Range | ObsLeft | Obs20%LeftMid | ObsMiddle | Obs20%RightMid | ObsRight | Feedback
1    | Middle | Near | ObsEx   | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
2    | Middle | Near | ObsEx   | Obs_NEx | Obs_NEx | Obs_NEx | ObsEx   | GoStraight
3    | Middle | Near | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | ObsEx   | GoStraight
4    | Middle | Near | ObsEx   | ObsEx   | Obs_NEx | Obs_NEx | Obs_NEx | SlightRightStraight
5    | Middle | Near | Obs_NEx | Obs_NEx | Obs_NEx | ObsEx   | ObsEx   | SlightLeftStraight
6    | Middle | Near | ObsEx   | ObsEx   | Obs_NEx | ObsEx   | ObsEx   | Stop
7    | Middle | Near | ObsEx   | Obs_NEx | ObsEx   | Obs_NEx | Obs_NEx | MoveRight
8    | Middle | Near | Obs_NEx | Obs_NEx | ObsEx   | Obs_NEx | ObsEx   | MoveLeft
9    | Middle | Near | ObsEx   | Obs_NEx | ObsEx   | Obs_NEx | ObsEx   | Stop
10   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
11   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
12   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
13   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
14   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
15   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
16   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
17   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
18   | Middle | Far  | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | Obs_NEx | GoStraight
APPENDIX A.2: THE RULE VIEWER FOR THE OBSTACLE AVOIDANCE SYSTEM USING AN FIS IN MATLAB