IEEE SENSORS JOURNAL, VOL. 16, NO. 12, JUNE 15, 2016 5021
Trajectory Planning and Collision Avoidance
Algorithm for Mobile Robotics System
Marwah M. Almasri, Student Member, IEEE, Abrar M. Alajlan, Student Member, IEEE,
and Khaled M. Elleithy, Senior Member, IEEE
Abstract The field of autonomous mobile robotics has
recently gained many researchers’ interests. Due to the specific
needs required by various applications of mobile robot systems,
especially in navigation, designing a real time obstacle avoidance
and path following robot system has become the backbone
of controlling robots in unknown environments. Therefore, an
efficient collision avoidance and path following methodology is
needed to develop an intelligent and effective autonomous mobile
robot system. This paper introduces a new technique for line
following and collision avoidance in the mobile robotic systems.
The proposed technique relies on the use of low-cost infrared
sensors, and involves a reasonable level of calculations, so that
it can be easily used in real-time control applications. The
simulation setup is implemented on multiple scenarios to show the
ability of the robot to follow a path, detect obstacles, and navigate
around them to avoid collision. It also shows that the robot has
been successfully following very congested curves and has avoided
any obstacle that emerged on its path. Webots simulator was used
to validate the effectiveness of the proposed technique.
Index Terms—Collision avoidance, path planning, robotics
control, Webots, mobile robot, e-puck.
I. INTRODUCTION
There has been a spurt of interest in recent years in the area of autonomous mobile robots, which are mechanical devices capable of completing scheduled tasks, making decisions, and navigating without any involvement from humans [1], [2]. This has raised serious concerns about the interaction between mobile robots and the environment, including autonomous mobile navigation, path planning, obstacle avoidance, etc.
Deploying autonomous mobile robots is coupled with the use of external sensors that assist in detecting obstacles in advance. The mobile robot uses these sensors to receive information about the tested area through digital image processing or distance measurements in order to recognize any possible obstacle [3]. Several ways of sensing the surroundings have been introduced in the path planning literature for mobile robots. Although ultrasonic sensors, positioning systems, and cameras are the most widely used means of navigating an unknown environment, they are not always a suitable solution when a simple and compact robot structure is desired. Therefore, infrared sensors can be used to follow an optimum collision-free path from source to destination according to particular performance objectives [4].
Manuscript received August 18, 2015; accepted March 31, 2016. Date of publication April 12, 2016; date of current version May 17, 2016. The associate editor coordinating the review of this paper and approving it for publication was Prof. Julian C. C. Chan.
The authors are with the Computer Science and Engineering Department, University of Bridgeport, Bridgeport, CT 06604 USA (e-mail: maalmasr@my.bridgeport.edu; aalajlan@my.bridgeport.edu; elleithy@bridgeport.edu).
Digital Object Identifier 10.1109/JSEN.2016.2553126
In addition, path planning for mobile robots can be divided into two types based on the robot's knowledge of the environment. In global path planning, the environmental information is predefined; it is therefore also called static collision avoidance planning. In local path planning, the environmental information is not known beforehand; it is therefore also called dynamic collision avoidance planning. Local path planning is more demanding than global path planning, since obstacles can change direction and the position of a dynamic obstacle must be predicted at every time step in order to meet the requirements of time-critical trajectory planning. Local path planning also requires measuring properties of the moving obstacle, such as its position, size, and shape, through sensors as quickly as possible in order to avoid unknown obstacles that arise while the robot is moving toward the goal state [5].
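For illustration only (this is not a method from the paper), the simplest such prediction is a constant-velocity extrapolation of the obstacle's last observed motion, as in the short Python sketch below; the time step and horizon values are arbitrary assumptions.

```python
def predict_obstacle(prev_pos, curr_pos, dt, horizon):
    """Constant-velocity prediction of a moving obstacle's future position.
    prev_pos, curr_pos: (x, y) at the last two time steps; dt: step length in
    seconds; horizon: how far ahead (in seconds) to extrapolate."""
    vx = (curr_pos[0] - prev_pos[0]) / dt
    vy = (curr_pos[1] - prev_pos[1]) / dt
    return curr_pos[0] + vx * horizon, curr_pos[1] + vy * horizon

# Example: an obstacle moving 0.2 m per 0.1 s step is expected 1 m farther
# along its track after 0.5 s.
print(predict_obstacle((1.0, 0.0), (1.2, 0.0), dt=0.1, horizon=0.5))
```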
The most well-known sensors used to follow a specific path
while detecting obstacles and measuring the distance between
robots and objects are infrared, ultrasonic, and laser sensors.
Given the specific needs required by different applications of mobile robots, especially in navigation, it is crucial to develop an autonomous robotic system that is capable of avoiding obstacles while following a path in real-time applications.
Consequently, an efficient collision avoidance and path following technique is essential to ensure an intelligent and effective autonomous mobile robot system.
This paper presents the results of research aimed at developing a new technique for line following and obstacle avoidance that relies on low-cost infrared sensors and involves a reasonable level of computation, so that it can be easily used in real-time control applications with microcontrollers.
The paper is organized as follows. Section II reviews state-of-the-art literature on collision avoidance and line following techniques. Section III discusses a novel technique for collision avoidance and line following. Section IV describes the robotic platform used. Section V describes the simulation setup, and Section VI discusses and presents simulation results to validate the proposed approach. Finally, Section VII offers conclusions.
II. REVIEW AND ANALYSIS OF PREVIOUS TECHNIQUES
Mobile robot motion planning in an unknown environment has always been a main research focus in the mobile robotics area due to its practical importance and the complex nature
of the problem. Several collision avoidance and line following
techniques have been introduced lately. Each of these tech-
niques was developed to be used in specific applications for
the purposes of education, entertainment, business and so on.
The ability to detect obstacles in real time mobile robotics
systems is a very critical requirement for any practical applica-
tion of autonomous vehicles. The main objective behind using
the obstacle avoidance approach is to obtain a collision-free
trajectory from the starting point to the target in the monitored environment. There are two types of obstacles: static obstacles, which have fixed positions and for which a priori knowledge is available, and dynamic obstacles (moving objects), for which no a priori knowledge of the motion is available and whose motion patterns are uncertain. Indeed,
detecting dynamic obstacles is more challenging than detecting
static obstacles since the dynamic obstacle has a changeable
direction and requires a prediction of the obstacle position
at every time step in order to achieve the requirement of a
time-critical trajectory planning [5].
Moreover, path tracking is a major aspect of mobile robotics
systems that is expected to be widely used in industries and
airports to enhance the automatic transportation procedure.
In general, the line-following technique is used to track a
predefined path with zero steady state error [6].
In terms of the research literature, there have been a number
of techniques proposed and studied which addressed collision
avoidance and line-following approaches in mobile robotics
systems. A brief overview of some of these approaches is
presented in the following sections.
A. Collision Avoidance Approach
Many collision avoidance algorithms have been proposed in
the literature of robotics motion planning that allow the robot
to reach its target without colliding with any obstacles that
may exist in its path. Each algorithm differs in the way of
avoiding static/dynamic obstacles.
The vector field histogram (VFH) [7] is a real-time obstacle avoidance method that uses a two-dimensional Cartesian histogram grid to perform a two-stage data reduction process. First, it converts the two-dimensional histogram grid into a one-dimensional polar histogram. Second, it selects the most suitable sector with low polar obstacle density and calculates the steering angle in that direction. A known limitation of the VFH is that the robot can only detect static obstacles.
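The following Python sketch illustrates the two-stage VFH idea in a simplified form; it is not the implementation of [7], and the sector count, density model, and threshold are assumptions chosen for illustration.

```python
import math

def vfh_steering(readings, target_angle, num_sectors=36, density_threshold=1.0):
    """Minimal VFH-style sketch: readings is a list of (angle_rad, distance_m)
    pairs; closer obstacles contribute more density to their sector."""
    sector_width = 2 * math.pi / num_sectors
    histogram = [0.0] * num_sectors

    # Stage 1: reduce 2-D range data to a 1-D polar obstacle density histogram.
    for angle, distance in readings:
        sector = int((angle % (2 * math.pi)) / sector_width)
        histogram[sector] += 1.0 / max(distance, 0.05)  # nearer -> denser

    # Stage 2: among low-density (free) sectors, pick the one whose center
    # is closest to the target direction and steer toward it.
    best_sector, best_cost = None, float("inf")
    for s, density in enumerate(histogram):
        if density < density_threshold:
            center = (s + 0.5) * sector_width
            cost = abs(math.atan2(math.sin(center - target_angle),
                                  math.cos(center - target_angle)))
            if cost < best_cost:
                best_sector, best_cost = s, cost
    if best_sector is None:
        return None  # no free sector: stop or fall back to another behavior
    return (best_sector + 0.5) * sector_width  # steering angle (rad)


# Example: obstacle straight ahead, goal straight ahead -> steer slightly aside.
scan = [(0.0, 0.3), (0.1, 0.35), (-0.1, 0.4), (math.pi / 2, 2.0)]
print(vfh_steering(scan, target_angle=0.0))
```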
The Artificial Potential Field (APF) is presented in [8].
The algorithm is used to find the shortest path between the
starting and target points. The obstacle produces a repulsive
force to repel the robot while the target produces an attractive
force to attract the robot. That is, the total force on the robot
can be calculated based on the attractive and repulsive forces.
However, this approach is very sensitive to local minima, for instance in the case of a symmetric environment [9].
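A minimal sketch of the attractive/repulsive force computation is shown below, assuming the standard textbook potential functions rather than the exact formulation of [8]; the gains and the repulsive influence radius are illustrative.

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Classical artificial potential field sketch in 2-D.
    robot, goal: (x, y); obstacles: list of (x, y)."""
    # Attractive force pulls the robot toward the goal.
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])

    # Each obstacle inside the influence radius rho0 pushes the robot away;
    # the magnitude grows sharply as the distance rho shrinks.
    for ox, oy in obstacles:
        rho = math.hypot(robot[0] - ox, robot[1] - oy)
        if 1e-6 < rho < rho0:
            mag = k_rep * (1.0 / rho - 1.0 / rho0) / rho**2
            fx += mag * (robot[0] - ox) / rho
            fy += mag * (robot[1] - oy) / rho
    return fx, fy  # total force = attractive + repulsive


# Example: an obstacle between the robot and the goal bends the force vector.
print(apf_force(robot=(0.0, 0.0), goal=(2.0, 0.0), obstacles=[(0.6, 0.1)]))
```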
Another obstacle avoidance method is the Bug algorithm [10], in which the robot follows the boundary of each obstacle in its way until the path is free. The robot then resumes moving toward the target without taking into account any other parameters. There are some significant shortcomings in the Bug algorithm. To illustrate, the algorithm does not consider any other obstacles during the edge detection process. Also, it only considers the most recent sensor readings, which might be affected by sensor noise.
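Conceptually, a Bug-style planner is a two-state machine that alternates between heading to the goal and following an obstacle boundary. The sketch below is a generic Bug2-like illustration, not the specific variants compared in [10]; the clearance threshold and the returned motion commands are hypothetical placeholders.

```python
from enum import Enum

class Mode(Enum):
    GO_TO_GOAL = 1
    FOLLOW_BOUNDARY = 2

def bug_step(mode, front_clearance, path_is_clear, safe_dist=0.25):
    """One decision step of a Bug-style planner. Returns (new_mode, command).
    front_clearance: distance reported ahead (m); path_is_clear: True once the
    direct motion toward the goal is no longer blocked."""
    if mode == Mode.GO_TO_GOAL:
        if front_clearance < safe_dist:
            return Mode.FOLLOW_BOUNDARY, "turn_and_hug_obstacle_edge"
        return Mode.GO_TO_GOAL, "drive_toward_goal"
    # FOLLOW_BOUNDARY: keep hugging the edge until the way to the goal opens.
    if path_is_clear:
        return Mode.GO_TO_GOAL, "leave_boundary_and_resume"
    return Mode.FOLLOW_BOUNDARY, "keep_following_edge"

# Example: blocked ahead -> switch to following the obstacle boundary.
print(bug_step(Mode.GO_TO_GOAL, front_clearance=0.1, path_is_clear=False))
```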
B. Line Following Approach
Various methods have been proposed as solutions for path following problems, which generally include line following, object following, path tracking, and so on. A line follower in robotics is simply an autonomous mobile robot that detects a particular line and keeps following it. This line can be as visible as a black line on a white surface or as invisible as a magnetic field.
Line following in mobile robots can be achieved with three basic operations: first, capturing the line width by a camera (image processing) or by reflective sensors mounted at the front of the robot; second, adjusting the robot to follow the predefined line using infrared sensors placed around the robot; and third, controlling the robot speed based on the line condition.
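As an illustration of the second and third operations, the sketch below converts three downward-facing reflectance readings into a line-position error and then into differential wheel speeds with a simple proportional rule; the sensor scale, gain, and base speed are assumptions rather than values from any cited work.

```python
def line_follow_speeds(gs_left, gs_mid, gs_right,
                       base_speed=200, gain=0.5, white=900):
    """Proportional line-follower sketch for three downward-facing IR sensors.
    Lower readings mean darker ground (the line); 'white' is the bright-floor
    level, so (white - reading) acts as a 'how much line do I see' weight."""
    w_l = max(white - gs_left, 0)
    w_m = max(white - gs_mid, 0)
    w_r = max(white - gs_right, 0)
    total = w_l + w_m + w_r
    if total == 0:
        return base_speed, base_speed  # line lost: keep going straight

    # Weighted line position in [-1, 1]: negative = line is to the left.
    position = (-1 * w_l + 0 * w_m + 1 * w_r) / total
    correction = gain * base_speed * position
    return base_speed + correction, base_speed - correction  # (left, right)


# Example: the line is under the right sensor -> the left wheel speeds up,
# turning the robot back toward the line.
print(line_follow_speeds(gs_left=880, gs_mid=700, gs_right=300))
```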
Bakker et al. proposed a new technique for path following
control for mobile robotics systems that sets up the robot to
navigate autonomously along its path [11]. The robot utilizes
a real time Kinematic Differential Global Positioning System
to determine both the position and orientation that correspond
to the path. The controller shows sufficient performance when tested on different path shapes, including a step, a ramp, and a headland path.
Another type of line following technique is described
in [12], where the mobile robot is able to choose a desired line
among multiple lines autonomously. This robot can detect not
only black and white colors but also can differentiate between
multiple colors. Each line has a specific color and the robot
can select the desired color to reach its destination. With this
technique, the robot is also able to follow very congested
curves as it moves toward its target.
Another path following approach, introduced in [13], relies on digital image processing techniques to track the robot path. It uses computer vision (a webcam) as its main sensor for surveying the environment. Moreover, a Proportional-Integral-Derivative control algorithm is employed to keep the robot on the line. The demonstrated approach proved robust against darkness, camera distortion, and lighting changes.
Several other obstacle avoidance and line following
approaches are suitable for real-time applications but will not
be discussed here due to space limitations.
Among the reported approaches, the proposed technique stands out for its simplicity. It is intended to develop a line follower robot that has the ability to detect and avoid any obstacles that emerge on its path. The mobile robot is equipped with multiple sensors and a microcontroller that receives information about the surrounding area and then makes decisions based on the sensor readings.
III. PROPOSED TECHNIQUE: ARCHITECTURE AND DESIGN
In this brief, a fairly general technique is developed that has components of formation development, line following, and obstacle detection. The contribution of this work relies on the use of low-cost infrared sensors, so that it can be easily used in real-time robotic applications. The block diagram of the proposed technique is given in Fig. 1.
Fig. 1. Block diagram of the proposed technique.
The controller receives input values directly from the
infrared sensors. The robot controller applies the line follower (LFA) and collision avoidance (CAA) approaches. The line follower approach (LFA) receives the ground sensor readings as input values, and the controller then issues a signal to the robot to adjust the motor speeds and follow the line, whereas the collision avoidance approach (CAA) receives the distance sensor readings as input values. When an object is detected in front of the robot, the CAA is responsible for changing the robot's direction and adjusting its speed according to the obstacle's position in order to avoid a collision. By applying both approaches, the robot follows the line and detects obstacles simultaneously. In other words, if an obstacle is detected, the robot must maneuver around the obstacle until it finds the line again.
An efficient algorithm for the proposed technique, shown in Fig. 2, is developed to give the robot the ability to follow the path and avoid obstacles along its way.
As shown in Fig. 2, the global variables, such as the number of distance and ground sensors used (8 distance sensors and 3 ground sensors) and the collision avoidance threshold value, must be initialized before the line follower and collision avoidance robot starts. After the number of sensors of each type is identified, these sensors are enabled. Then, at each time step of the simulation, all eight distance sensor values and all three ground sensor values are obtained. The three ground sensors are responsible for following the line, and the motor speeds are adjusted based on their values. To handle possible collisions, the front distance sensor values are compared with a predefined collision avoidance threshold; if one of these sensors reaches the threshold, a front obstacle is detected. Subsequently, the readings of the right and left distance sensors are taken to determine the direction of the robot movement in the case of a front obstacle, and they are compared with the threshold to check for a detected right/left obstacle. Once the robot's path is determined, the left and right motor speeds are adjusted accordingly. After avoiding the obstacle, the robot returns to the line and continues following its path.
Fig. 2. Line follower and collision avoidance Algorithm.
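To make the flow of Fig. 2 concrete, a condensed Python sketch of one control step is given below. It paraphrases the description above rather than reproducing the authors' Webots controller: the sensor index layout, threshold, and speed constants are illustrative assumptions, and reading the sensors and driving the motors are left to the (simulator-specific) caller.

```python
COLLISION_THRESHOLD = 1000  # distance-sensor value taken to mean "obstacle close"
BASE_SPEED = 200            # illustrative wheel speed, not a value from the paper
TURN_GAIN = 0.5

def control_step(distance, ground):
    """One time step of the combined loop in Fig. 2.
    distance: 8 proximity readings; index layout assumed here: 0 and 7 are the
    front sensors, 1-2 the right side, 5-6 the left side.
    ground: 3 downward readings (gs1 left, gs2 middle, gs3 right).
    Returns (left_speed, right_speed)."""
    if max(distance[0], distance[7]) > COLLISION_THRESHOLD:
        # Collision avoidance (CAA): a front obstacle was detected, so check
        # the side sensors against the threshold and spin toward the free side.
        left_blocked = max(distance[5], distance[6]) > COLLISION_THRESHOLD
        if left_blocked:
            return BASE_SPEED, -BASE_SPEED   # left side occupied: spin right
        return -BASE_SPEED, BASE_SPEED       # otherwise spin left (as in Fig. 6)

    # Line following (LFA): steer with the right/left ground-sensor difference
    # (see Section VI-B); a brighter right sensor means the line drifted left.
    delta = ground[2] - ground[0]
    correction = TURN_GAIN * delta
    return BASE_SPEED - correction, BASE_SPEED + correction


# Example: clear path, line slightly to the left -> the robot eases left.
print(control_step([50] * 8, [450, 300, 700]))
```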
IV. ROBOTIC PLATFORM
Using simulations to test the proposed technique is very
useful prior to investigations with real robots. They are
more convenient to use, less expensive, and easier to set up.
Fig. 3. Schematic drawing of distance sensors in the e-puck robot.
In this work, the Webots simulator is used to develop a line follower and collision avoidance environment. It is one of the most well-known simulation packages for mobile robots and is developed by Cyberbotics [14]. Webots is employed in this work because it provides a number of sensor and actuator devices, such as infrared sensors for proximity and light measurements, differential wheels, camera modules, motors, touch sensors, emitters, receivers, etc. It also allows importing 3D models into its scene tree from most 3D modeling software through the VRML97 standard. Moreover, the user can set up the properties of each object used in the simulation, such as its shape, color, texture, mass, friction, etc. [15].
The distance sensors used for collision avoidance are 8 infrared sensors placed around the robot. Fig. 3 shows a schematic drawing of the top view of the robot used in this work, which is called the e-puck robot. The red lines represent the directions of the infrared distance sensors. For simplicity, we grouped all sensors based on their positions. The robot detects obstacles based on the values returned by these sensors, which range between 0 and 2000. That is, the values returned from the distance sensors depend on the distance between the robot and the obstacle. In other words, the returned value is 0 (no reflected light) if no obstacle is detected, while a value around 1000 means that an obstacle is very close to the robot (a large amount of reflected light is measured). For the line-following approach, another type of infrared sensor, called ground sensors, is used. These are three proximity sensors mounted on the frontal area of the robot and pointed toward the ground in order to detect the line, as shown in Fig. 4. These sensors allow the robot to see the color level of the ground at three locations along the line across its front.
V. SIMULATION SETUP
In this work, we use the Webots simulator and the e-puck robot for our experiments, as shown in Fig. 5. We show several simulations to illustrate the proposed approach. All simulations were programmed using Matlab as the controller. First, the robot starts sensing the environment with its three infrared ground sensors and follows a black line drawn on the ground, along which two obstacles are placed. When the robot detects an object with its eight infrared distance sensors, it navigates around it to avoid a collision, based on the proposed algorithm stated above. Lastly, the robot recovers its path afterwards. The robot follows the same steps each time it detects an obstacle and then recovers its path again. The robot's performance in different situations is illustrated in Fig. 6.
Fig. 4. Ground sensors for the e-puck robot.
Fig. 5. Robot platform of the simulation, where there are two obstacles placed on the robot's path.
Fig. 6(a) shows the moment in the simulation when the robot detects the first object and stops. The robot checks both directions, left and right, to decide which way to go, much like a pedestrian crossing a roadway. The robot then obtains the left and right sensor readings in order to compare them with the threshold. Once these comparisons are made, the robot turns around the object to avoid the obstacle, as shown in Fig. 6(b), (c), and (d).
As a final point, the robot successfully avoids the obstacles and recovers its path again, as depicted in Fig. 6(e). The robot executes the same steps with each obstacle detected along its way, as shown in Fig. 6(f), (g), (h), and (i).
VI. RESULTS AND DATA ANALYSIS
This section focuses on the results obtained from the simula-
tion and the analysis of data gathered which validates both the
performance and the effectiveness of the proposed technique.
The following subsections discuss in detail the simulation
results.
Fig. 6. Snapshots of the simulation run at different simulation times.
Fig. 7. Distance sensors values at 3 seconds of the simulation time.
A. Distance Sensors Data Analysis
The e-puck is equipped with 8 distance sensors (infrared sensors) for collision avoidance in the environment. These sensors return values varying from 0 to 2000, where 1000 or more means that the obstacle is very close, so the e-puck should avoid it accordingly. Various sensor measurements have been taken at different simulation times. Fig. 7 shows the distance sensor readings at the beginning of the simulation (3 seconds of the simulation time), where all 8 distance sensors have low values (50 or less) because there are no obstacles along the robot path. However, once one of these 8 sensors detects an obstacle, its value increases to 1000 or more. As depicted in Fig. 8, distance sensors 1 and 8 (the front sensors) have higher values than the remaining sensors (in particular, the left front sensor reads more than 1000). This indicates that there is an obstacle in front of the robot.
Fig. 8. Distance sensors values at 9 seconds of the simulation time.
Fig. 9. Distance sensors values at 35 seconds of the simulation time.
Fig. 10. Distance sensors values at 40 seconds of the simulation time.
According to the proposed algorithm, when one of the
distance sensors reaches the threshold, the robot will avoid
the obstacle and move around it to recover its path.
Fig. 9 and Fig. 10 summarize the distance sensor readings once the robot returns to the line. They also show that at 35 and 40 seconds of the simulation time, all sensor readings are low, which means that there is no obstacle in front of or around the robot. At 64 seconds of the simulation time, another obstacle is detected by one of the front sensors, which reaches
Fig. 11. Distance sensors values at 46 seconds of the simulation time.
Fig. 12. Distance sensors values at 1 min and 13 sec of the simulation time.
the threshold, as represented in Fig. 11. Fig. 12 and Fig. 13 show the sensor measurements at 1 minute and 13 seconds and at 1 minute and 18 seconds, respectively. Both figures indicate low sensor readings, and thus no obstacles are detected. In addition, Table I summarizes all 8 distance sensor values at various simulation times.
B. Ground Sensors Data Analysis
The e-puck robot is also equipped with three ground sensors, which are infrared sensors facing the ground. Their role is to detect the black line on a white surface in order to guide the robot. After the ground sensor readings are obtained, the difference between the right and left ground sensor values is computed, and the left and right motor speeds are then adjusted accordingly. Table II shows all three ground sensor readings at several simulation times.
Fig. 14 displays all three ground sensor measurements during the simulation, whereas Fig. 15 shows the left and right motor speeds. As indicated in these two figures, the highest ground sensor values and motor speeds are recorded at 35 s, 1:13 s, and 1:18 s. At 35 s and 1:13 s, when the robot detects the first and second obstacles, the right ground sensor (gs3) has a very high value, which indicates that the robot is turning sharply left to avoid the obstacles (Fig. 14). Furthermore, Fig. 15 validates this outcome at 35 s and 1:13 s of the simulation time: the left motor speeds take negative values, whereas the right motor speeds take very large positive values.
Fig. 13. Distance sensors values at 1 min and 18 sec of the simulation time.
Fig. 14. All three ground sensors measurements at different simulation times.
Fig. 15. Left and right motor speeds.
TABLE I. All distance sensor measurements at different simulation times.
TABLE II. All ground sensor readings and values at different simulation times.
This demonstrates that the robot is turning sharply left to avoid the obstacle in front of it. Finally, as shown in Fig. 14, at 1:18 s the left ground sensor (gs1) reads a much higher value than the right ground sensor (gs3) because the robot is turning right due to the curvy line (see also Fig. 6(i)). In addition, Fig. 15 shows that at 1:18 s the left motor speed is much higher than the right motor speed, which again indicates that the robot is turning right to follow the curvy line.
VII. CONCLUSIONS
Recently, mobile robot navigation in an unknown environment has become a major research focus in the mobile robot intelligent control domain. In this paper, we developed a line follower robot that has the ability to detect and avoid any obstacles that emerge on its path. It relies on low-cost infrared sensors (distance sensors and ground sensors), which are used to measure and obtain the distance and orientation of the robot. The data is utilized in the Webots simulator to validate the effectiveness and the performance of the proposed technique. In general, unlike other simple techniques, the proposed approach can be easily used in different real-time robotic applications.
REFERENCES
[1] L. E. Zarate, M. Becker, B. D. M. Garrido, and H. S. C. Rocha, “An arti-
ficial neural network structure able to obstacle avoidance behavior used
in mobile robots,” in Proc. IEEE 28th Annu. Conf. Ind. Electron. Soc.,
Nov. 2002, pp. 2457–2461.
[2] S. X. Yang and C. Luo, “A neural network approach to complete
coverage path planning,” IEEE Trans. Syst., Man, Cybern. B, Cybern.,
vol. 34, no. 1, pp. 718–724, Feb. 2004.
[3] I. Ullah, F. Ullah, Q. Ullah, and S. Shin, “Integrated tracking and accident avoidance system for mobile robots,” Int. J. Control, Autom. Syst., vol. 11, no. 6, pp. 1253–1265, 2013.
[4] N. D. Phuc and N. T. Thinh, “A solution of obstacle collision avoidance
for robotic fish based on fuzzy systems,” in Proc. IEEE Int. Conf. Robot.
Biomimetics (ROBIO), Dec. 2011, pp. 1707–1711.
[5] L. Zeng and G. M. Bone, “Mobile robot collision avoidance in human
environments,” Int. J. Adv. Robot. Syst., vol. 9, pp. 1–14, Jan. 2013, doi:
10.5772/54933.
[6] M. A. Zakaria, H. Zamzuri, R. Mamat, and S. A. Mazlan, “A path
tracking algorithm using future prediction control with spike detection
for an autonomous vehicle robot,” Int. J. Adv. Robot. Syst., vol. 10,
pp. 1–9, Jan. 2013.
[7] J. Borenstein and Y. Koren, “The vector field histogram—Fast obstacle
avoidance for mobile robots,” IEEE J. Robot. Autom., vol. 7, no. 3,
pp. 278–288, Jun. 1991.
[8] M. Zohaib, M. Pasha, R. A. Riaz, N. Javaid, M. Ilahi, and R. D. Khan,
“Control strategies for mobile robot with obstacle avoidance,” J. Basic
Appl. Sci. Res., vol. 3, no. 4, pp. 1027–1036, 2013.
[9] J. A. Oroko and G. N. Nyakoe, “Obstacle avoidance and path planning
schemes for autonomous navigation of a mobile robot: A review,” in
Proc. Sustain. Res. Innov. Conf., vol. 4, 2012, pp. 314–318.
[10] A. Yufka and O. Parlaktuna, “Performance comparison of bug algorithms
for mobile robots,” in Proc. 5th Int. Adv. Technol. Symp., May 2009,
pp. 1–5.
[11] T. Bakker, K. van Asselt, J. Bontsema, J. Müller, and G. van Straten,
“A path following algorithm for mobile robots,” Auto. Robots, vol. 29,
no. 1, pp. 85–97, 2010.
[12] K. M. Hasan, A. Al-Nahid, K. J. Reza, S. Khatun, and M. R. Basar,
“Sensor based autonomous color line follower robot with obstacle
avoidance,” in Proc. IEEE Bus. Eng. Ind. Appl. Colloq. (BEIAC),
Apr. 2013, pp. 598–603.
[13] W. E. Elhady, “Implementation and evaluation of image processing
techniques on a vision navigation line follower robot,” J. Comput. Sci.,
vol. 10, no. 6, pp. 1036–1044, 2014.
[14] (2015). Cyberbotics.com, Webots, accessed on Mar. 31, 2015. [Online].
Available: https://www.cyberbotics.com/overview
[15] O. Michel, “Webots™: Professional mobile robot simulation,” Int. J. Adv. Robot. Syst., vol. 1, no. 1, pp. 39–42, 2004.
Marwah M. Almasri received the bachelor’s
degree in computer science and engineering from
Taibah University, Medina, Saudi Arabia, and the
M.B.A. degree in management information system
from the University of Scranton, PA, in 2011.
She is currently pursuing the Ph.D. degree with
the Computer Science and Engineering Department,
University of Bridgeport. She has been recognized
by Upsilon Pi Epsilon and Phi Kappa Phi honor
society organizations for her academic accomplish-
ments. She received a Scholarship Award from the
Honor Society of Upsilon Pi Epsilon Organization, 2013. She also received the
Engineering Academic Achievement Award from the University of Bridgeport,
2014-2015. She received the Best Paper Award at the IEEE Long Island
Systems, Applications and Technology Conference, 2015. She also received
the Third Position Award in Faculty Research Day at the University of
Bridgeport, 2016. She also received an award from the MIS Department,
University of Scranton, for her outstanding work. Her research interests
include wireless sensor networks, computer networks, mobile computing,
autonomous mobile robots, and data fusion. She published a number of quality
research papers in national and international conferences and journals.
Abrar M. Alajlan received the B.Sc. degree
in computer science from Umm Al-Qura
University, Makkah, Saudi Arabia, in 2008, and the
M.Sc. degree in management information systems
from Troy University, Alabama, USA, in 2012. She
is currently pursuing the Ph.D. degree in computer
science and engineering with the University of
Bridgeport. She has been recognized by the Honor
Society of Upsilon Pi Epsilon Organization and
the Honor Society of Phi Kappa Phi Organization.
She also received the Engineering Academic
Achievement Award from the University of Bridgeport, 2015. She received
the Best Paper Award in the applications track from the IEEE Long Island
Systems, Applications and Technology Conference 2015. She is a member of
technical program committees of some international conferences. Her fields
of interest are autonomous ground robots, energy efficiency, obstacle/collision
avoidance, dynamic motion control, and real-time programming. She has
published several papers in influential international journals and conferences.
Khaled M. Elleithy received the B.Sc. degree in
computer science and automatic control and the
M.S. degree in computer networks from Alexandria
University, in 1983 and 1986, respectively, and the
M.S. and Ph.D. degrees in computer science from
the Center for Advanced Computer Studies, Uni-
versity of Louisiana–Lafayette, in 1988 and 1990,
respectively. He is the Associate Vice President for
Graduate Studies and Research with the University
of Bridgeport. He is a Professor of Computer Sci-
ence and Engineering. His research interests include
wireless sensor networks, mobile communications, network security, quantum
computing, and formal approaches for design and verification. He has pub-
lished more than 350 research papers in national/international journals and
conferences in his areas of expertise. He is the Editor or Co-Editor for 12
books published by Springer.
Dr. Elleithy has more than 25 years of teaching experience. He was a
recipient of the Distinguished Professor of the Year, University of Bridgeport,
from 2006 to 2007. He supervised hundreds of senior projects, M.S. theses,
and Ph.D. dissertations. He developed and introduced many new undergrad-
uate/graduate courses. He also developed new teaching/research laboratories
in his area of expertise. His students have won more than 20 prestigious
national/international awards from the IEEE, ACM, and ASEE.
Dr. Elleithy is a member of the technical program committees of many inter-
national conferences as recognition of his research qualifications. He served
as a Guest Editor for several international journals. He was the Chairperson
for the International Conference on Industrial Electronics, Technology and
Automation. Furthermore, he is the Co-Chair and Co-Founder of the Annual
International Joint Conferences on Computer, Information, and Systems
Sciences, and Engineering Virtual Conferences from 2005 to 2014.