IEEE SENSORS JOURNAL, VOL. 16, NO. 12, JUNE 15, 2016 5021
Trajectory Planning and Collision Avoidance
Algorithm for Mobile Robotics System
Marwah M. Almasri, Student Member, IEEE, Abrar M. Alajlan, Student Member, IEEE,
and Khaled M. Elleithy, Senior Member, IEEE
Abstract—The field of autonomous mobile robotics has
recently gained many researchers’ interests. Due to the specific
needs required by various applications of mobile robot systems,
especially in navigation, designing a real time obstacle avoidance
and path following robot system has become the backbone
of controlling robots in unknown environments. Therefore, an
efficient collision avoidance and path following methodology is
needed to develop an intelligent and effective autonomous mobile
robot system. This paper introduces a new technique for line
following and collision avoidance in the mobile robotic systems.
The proposed technique relies on the use of low-cost infrared
sensors, and involves a reasonable level of calculations, so that
it can be easily used in real-time control applications. The
simulation setup is implemented on multiple scenarios to show the
ability of the robot to follow a path, detect obstacles, and navigate
around them to avoid collision. It also shows that the robot has
been successfully following very congested curves and has avoided
any obstacle that emerged on its path. Webots simulator was used
to validate the effectiveness of the proposed technique.
Index Terms—Collision avoidance, path planning, robotics
control, Webots, mobile robot, e-puck.
I. INTRODUCTION
THERE has been a spurt of interest in recent years in the
area of autonomous mobile robots, which are mechanical
devices capable of completing scheduled tasks, making decisions,
and navigating without any involvement from humans [1], [2].
This has raised some serious concerns about the interaction
between mobile robots and the environment, including autonomous
mobile navigation, path planning, obstacle avoidance, etc.
Deploying autonomous mobile robots is coupled with the
use of external sensors that assist in detecting obstacles in
advance. The mobile robot uses these sensors to receive
information about the tested area through digital image
processing or distance measurements to recognize any pos-
sible obstacle [3]. Several ways of testing the surroundings
have been introduced in the literature on path planning for
mobile robots. Although ultrasonic sensors, positioning systems,
and cameras are the most widely used means of sensing an
unknown environment, they are not a suitable solution for
simplifying and streamlining the robot structure. Therefore, some
infrared sensors are used to follow an optimum collision-free path
Manuscript received August 18, 2015; accepted March 31, 2016. Date
of publication April 12, 2016; date of current version May 17, 2016. The
associate editor coordinating the review of this paper and approving it for
publication was Prof. Julian C. C. Chan.
The authors are with the Computer Science and Engineering Department,
University of Bridgeport, Bridgeport, CT 06604 USA (e-mail: maalmasr@
my.bridgeport.edu; aalajlan@my.bridgeport.edu; elleithy@bridgeport.edu).
Digital Object Identifier 10.1109/JSEN.2016.2553126
from source to destination according to particular performance
objectives [4].
In addition, path planning in mobile robots can be divided
into two types based on the robot’s knowledge of the environment.
In global path planning, also called static collision avoidance
planning, the environmental information is predefined; in local
path planning, also called dynamic collision avoidance planning,
the environmental information is not known in advance. Local
path planning is more demanding than global path planning,
since the direction of travel can change and the position of a
dynamic obstacle must be predicted at every time step in order
to achieve the requirement of time-critical trajectory planning.
Local path planning also measures the properties of a moving
obstacle, such as its position, size, and shape, through sensors
as fast as possible, so as to avoid an unknown obstacle that
arises while the robot is moving toward the goal state [5].
The most well-known sensors used to follow a specific path
while detecting obstacles and measuring the distance between
robots and objects are infrared, ultrasonic, and laser sensors.
Given the specific needs required by different applications
of mobile robots, especially in navigation, it is crucial to develop
an autonomous robotic system that is capable of avoiding
obstacles while following a path in real-time applications.
Consequently, an efficient collision avoidance and path fol-
lowing technique is essential to assure an intelligent and effective
autonomous mobile robot system.
This paper presents the results of research aimed at
developing a new technique for line following and obstacle
avoidance, relying on the use of low-cost infrared sensors,
and involving a reasonable level of calculations, so that it
can be easily used in real time control applications with
microcontrollers.
The paper is organized as follows. Section II reviews state-
of-the-art literature for collision avoidance and line follow-
ing techniques. Section III discusses a novel technique for
collision avoidance and line-following. Section IV describes
the robotic platform used. Section V describes the simulation
setup. Section VI presents and analyzes the simulation results
to validate the proposed approach. Finally, Section VII offers
conclusions.
II. REVIEW AND ANALYSIS OF PREVIOUS TECHNIQUES
Mobile robot motion planning in an unknown environment
has always been the main research focus in the mobile robotics
area due to its practical importance and the complex nature
1558-1748 © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
of the problem. Several collision avoidance and line following
techniques have been introduced lately. Each of these tech-
niques was developed to be used in specific applications for
the purposes of education, entertainment, business and so on.
The ability to detect obstacles in real time mobile robotics
systems is a very critical requirement for any practical applica-
tion of autonomous vehicles. The main objective behind using
the obstacle avoidance approach is to obtain a collision-free
trajectory from the starting point to the target in monitoring
environments. There are two types of obstacles: static obstacles,
which have a fixed position and require a priori knowledge
of the obstacle, and dynamic obstacles (moving objects), which
do not require any a priori knowledge of the obstacle’s motion
and have uncertain motion patterns. Indeed,
detecting dynamic obstacles is more challenging than detecting
static obstacles since the dynamic obstacle has a changeable
direction and requires a prediction of the obstacle position
at every time step in order to achieve the requirement of a
time-critical trajectory planning [5].
Moreover, path tracking is a major aspect of mobile robotics
systems that is expected to be widely used in industries and
airports to enhance the automatic transportation procedure.
In general, the line-following technique is used to track a
predefined path with zero steady state error [6].
In terms of the research literature, there have been a number
of techniques proposed and studied which addressed collision
avoidance and line-following approaches in mobile robotics
systems. A brief overview of some of these approaches is
presented in the following sections.
A. Collision Avoidance Approach
Many collision avoidance algorithms have been proposed in
the literature of robotics motion planning that allow the robot
to reach its target without colliding with any obstacles that
may exist in its path. Each algorithm differs in the way of
avoiding static/dynamic obstacles.
The vector field histogram (VFH) [7] is a real-time obstacle
avoidance method that uses the two-dimensional Cartesian
histogram grid to perform a two-stage data reduction
process. First, it converts the two-dimensional histogram to
a one-dimensional polar histogram. Second, it selects the
most suitable sector with low polar density and calculates
the steering angle in that direction. A known problem of the
VFH is that the robot can only detect static obstacles.
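As an illustration, the two-stage data reduction described above can be sketched as follows. This is a minimal interpretation, not the implementation of [7]; the grid representation, sector count, and density threshold are all assumed values.

```python
import math

def vfh_steering(grid, cell_size, robot_xy, target_angle,
                 n_sectors=72, threshold=1.0):
    """Collapse a 2-D certainty grid (dict of (ix, iy) -> certainty) into a
    1-D polar histogram, then steer toward a low-density sector near the
    target heading. Illustrative sketch; parameters are assumptions."""
    hist = [0.0] * n_sectors
    rx, ry = robot_xy
    for (ix, iy), certainty in grid.items():
        dx, dy = ix * cell_size - rx, iy * cell_size - ry
        dist = math.hypot(dx, dy) or cell_size   # avoid division by zero
        angle = math.atan2(dy, dx) % (2 * math.pi)
        sector = int(angle / (2 * math.pi) * n_sectors) % n_sectors
        hist[sector] += certainty / dist          # nearer cells weigh more
    target_sector = int((target_angle % (2 * math.pi))
                        / (2 * math.pi) * n_sectors) % n_sectors
    free = [s for s in range(n_sectors) if hist[s] < threshold]
    if not free:                                  # fully blocked: keep heading
        return target_angle
    # pick the free sector whose heading is circularly closest to the target
    best = min(free, key=lambda s: min(abs(s - target_sector),
                                       n_sectors - abs(s - target_sector)))
    return best * 2 * math.pi / n_sectors         # steering angle in radians
```

With an obstacle directly on the target heading, the selected sector deflects slightly to one side; with the obstacle elsewhere, the target heading is returned unchanged.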
The Artificial Potential Field (APF) is presented in [8].
The algorithm is used to find the shortest path between the
starting and target points. The obstacle produces a repulsive
force to repel the robot while the target produces an attractive
force to attract the robot. That is, the total force on the robot
can be calculated based on the attractive and repulsive forces.
Accordingly, this approach is very sensitive to local minima
in case of a symmetric environment [9].
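The attractive/repulsive composition of the APF can be illustrated with a short sketch. The gains, the influence radius, and the exact repulsive gradient used here are common textbook choices, not values taken from [8].

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Total artificial-potential-field force on the robot: linear attraction
    toward the goal plus repulsion from every obstacle closer than d0.
    Gains and d0 are assumed values for illustration."""
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # classic repulsive gradient magnitude: grows sharply as d -> 0
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

The local-minimum problem mentioned above is visible in this formulation: when the attractive and repulsive terms cancel, the total force is zero and the robot stalls short of the goal.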
Another obstacle avoidance method is the Bug algo-
rithm [10] where the robot follows the boundary of each
obstacle in its way until the path is free. The robot then restarts
moving toward the target without taking into account any other
parameters. There are some significant shortcomings in the
Bug algorithm. To illustrate, the algorithm does not consider
any other obstacles during the edge detection process. Also,
it only considers the most recent sensor readings that might
be affected by sensor noise.
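The boundary-following behavior can be caricatured as a two-state machine; the state names and predicates below are illustrative, not the actual Bug algorithm of [10].

```python
def bug_step(state, goal_visible, obstacle_ahead):
    """Toy state machine in the spirit of the Bug family: head to the goal
    until blocked, then follow the obstacle boundary until the way toward
    the goal is clear again."""
    if state == "to_goal":
        return "follow_boundary" if obstacle_ahead else "to_goal"
    # state == "follow_boundary"
    if goal_visible and not obstacle_ahead:
        return "to_goal"
    return "follow_boundary"
```

The shortcomings noted above show up directly: the transition depends only on the current predicates (most recent sensor readings), so a single noisy reading can flip the state.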
B. Line Following Approach
Various methods have been proposed as solutions for
line following problems that generally include line-following,
object-following, path tracking and so on. The line follower
in robotics is simply an autonomous mobile robot that detects
a particular line and keeps following it. This line can be as
visible as a black line in a white area or as invisible as a
magnetic field.
Line following in mobile robots can be achieved with three
basic operations. First, the line is captured by a camera
(image processing) or by reflective sensors mounted at the
front of the robot. Second, the robot is adjusted to follow the
predefined line using infrared sensors placed around
the robot. Third, the robot speed is controlled based on the line
condition.
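The three operations above can be sketched as a single control step. The sensor scale, gain, and sign convention are assumptions, loosely modeled on reflective ground sensors where darker ground yields lower readings.

```python
def line_follow_speeds(gs_left, gs_mid, gs_right, base=300, gain=0.5,
                       dark=500):
    """One line-following step from three reflective ground sensors.
    Sensor scale (0-1000), gains, and the 'darker reads lower' convention
    are assumed for illustration."""
    # 1) sense: the lowest (darkest) reading tells us the line is in view
    on_line = min(gs_left, gs_mid, gs_right) < dark
    # 2) steer: signed error from the outer sensors (negative = line drifted left)
    error = gs_left - gs_right
    # 3) speed control: slow the wheel on the side the line drifted toward
    left = base + gain * error
    right = base - gain * error
    return on_line, left, right
```

When the line drifts under the left sensor, the left wheel slows and the right wheel speeds up, turning the robot back onto the line.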
Bakker et al. proposed a new technique for path following
control for mobile robotics systems that sets up the robot to
navigate autonomously along its path [11]. The robot utilizes
a real time Kinematic Differential Global Positioning System
to determine both the position and orientation that correspond
to the path. The controller shows sufficient performance
when tested on different path shapes, including a step,
ramp, and headland path.
Another type of line following technique is described
in [12], where the mobile robot is able to choose a desired line
among multiple lines autonomously. This robot can detect not
only black and white colors but also can differentiate between
multiple colors. Each line has a specific color and the robot
can select the desired color to reach its destination. With this
technique, the robot is also able to follow very congested
curves as it moves toward its target.
Another path following approach was introduced in [13],
which relies on the use of digital image processing
techniques to track the robot path. It uses computer vision
as its main sensor (web cam) for surveying the environment.
Moreover, the Proportional-Integral-Derivative control
algorithm is also employed to adjust the robot on the line.
The demonstrated approach proved its robustness against
darkness, camera distortion and lighting.
Several other obstacle avoidance and line following
approaches are suitable for real-time applications but will not
be discussed here due to space limitations.
In contrast to the approaches reported above, the proposed
technique remains simple. It aims to develop a line follower
robot that has the ability to detect
and avoid any obstacles that emerge on its path. The mobile
robot is equipped with multiple sensors, and a microcontroller
that is used to receive information about the surrounding area
and then make a decision based on the sensors readings.
III. PROPOSED TECHNIQUE:ARCHITECTURE AND DESIGN
Fig. 1. Block diagram of the proposed technique.
In this paper, a fairly general technique is developed that
has components of formation development, line following, and
obstacle detection. The contribution of this work relies on
the use of low-cost infrared sensors, so that it can be easily
used in real-time robotic applications. The block diagram of
the proposed technique is given in Fig. 1.
The controller receives input values directly from the
infrared sensors. The robot controller applies the line
follower (LFA) and collision avoidance (CAA) approaches.
The line follower approach (LFA) receives ground sensor
readings as input; the controller then issues a
signal to the robot to adjust the motor speeds and follow the
line. The collision avoidance approach (CAA) receives
distance sensor readings as an input value. When an object is
detected in front of the robot, CAA is responsible for changing
the robot’s direction and adjusting its speed according to the
obstacle’s position in order to avoid collision. By applying both
approaches, the robot follows the line and detects obstacles
simultaneously. In other words, if an obstacle is detected, the
robot must spin around the obstacle until it finds the line again.
An efficient algorithm for the proposed technique is developed,
as shown in Fig. 2, to give the robot the ability to follow
the path and avoid obstacles along its way.
As shown in Fig. 2, initialization of the global variables is
needed before starting the line follower and collision avoidance
robot: the number of distance and ground sensors used
(8 distance sensors and 3 ground sensors) and the collision
avoidance threshold value. After identifying the number of
sensors of each type, the next step is to enable these sensors.
Then, at each time step of the simulation, all eight distance
sensor values and all three ground sensor values are obtained.
The three ground sensors are responsible for following the
line, and the motor speeds are adjusted based on their values.
To check for possible collisions, the front distance sensor
values are compared with a predefined collision avoidance
threshold; if one of these sensors reaches the threshold, a
front obstacle is detected.
Subsequently, the readings of the right and left distance
sensors are taken to determine the direction of the robot’s
movement in case of a front obstacle, and are then compared
with the threshold to check for a right/left obstacle. When
the robot’s path is determined, the left and right motor speeds
are adjusted accordingly. After avoiding the obstacle, the robot
will return to the line and continue following its path.
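One time step of the algorithm described above might be sketched as follows. The sensor indexing, the threshold of 1000, and the speed values are assumptions based on the e-puck description elsewhere in this paper, not the authors' actual Matlab controller.

```python
def control_step(distance, ground, threshold=1000,
                 base_speed=300, turn_speed=400, gain=0.5):
    """One controller tick: collision avoidance first, line following
    otherwise. distance[0]/distance[7] are taken as the front sensors and
    distance[2]/distance[5] as right/left, mirroring the e-puck layout;
    thresholds and speeds are assumed values."""
    if max(distance[0], distance[7]) >= threshold:    # front obstacle
        right_blocked = distance[2] >= threshold
        left_blocked = distance[5] >= threshold
        if right_blocked and not left_blocked:
            return -turn_speed, turn_speed            # spin left around it
        return turn_speed, -turn_speed                # otherwise spin right
    # no obstacle: steer from the outer ground sensors (darker reads lower)
    error = ground[0] - ground[2]
    return base_speed + gain * error, base_speed - gain * error
```

Called once per simulation step, this reproduces the priority in Fig. 2: the collision check overrides line following, and the robot resumes following the line once no distance sensor exceeds the threshold.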
Fig. 2. Line follower and collision avoidance algorithm.
IV. ROBOTIC PLATFORM
Using simulations to test the proposed technique is very
useful prior to investigations with real robots. They are
more convenient to use, less expensive, and easier to set up.
Fig. 3. Schematic drawing of distance sensors in the e-puck robot.
In this work, Webots simulator is used to develop a line
follower and collision avoidance environment. Developed by
Cyberbotics [14], it is one of the most widely used simulation
packages for mobile robots. Webots is employed in this work
because it is equipped with a number of sensor and actuator
devices, such as infrared sensors for
proximity and light measurements, differential wheels, camera
module, motors, touch sensors, emitters, receivers, etc. It also
allows importing 3D models in its scene tree from most 3D
modeling software through the VRML97 standard. Moreover,
the user can set up the properties of each object used in
the simulation, such as the shape, color, texture, mass, friction,
etc. [15].
The distance sensors used for collision avoidance are
8 infrared sensors placed around the robot. Fig. 3 shows a
schematic drawing of the top view of the robot used in this
work, which is called the “e-puck” robot. The red lines
represent the directions of the infrared distance sensors. For simplicity
purposes, we grouped all sensors based on their positions.
Moreover, the robot detects obstacles based on the values
returned by these sensors, which range between 0 and 2000.
That is, the values returned from the distance sensors depend
on the distance between the robot and the obstacle: a value
of 0 (no reflected light) means that no obstacle is detected,
while a value of 1000 or more means the obstacle is very close
to the robot (a large amount of light is measured). For the
line-following approach, another type of infrared sensor,
called ground sensors, is used. These are three proximity sensors
mounted on the frontal area of the robot and pointed at the
ground in order to detect the line as shown in Fig. 4. These
sensors allow the robot to see the color level of the ground at
three locations in the line across its front.
V. SIMULATION SETUP
In this work, we use the Webots simulator and the e-puck
robot for our experiment, as shown in Fig. 5. We present several
simulations to illustrate the proposed approach. The robot
controller was programmed in Matlab. First, the robot starts
sensing the environment with its three infrared sensors (ground
sensors) and follows a black line drawn on the ground, where
Fig. 4. Ground sensors for the e-puck robot.
Fig. 5. Robot platform of the simulation where there are two obstacles placed
on the robot path.
there are two obstacles along its path. When the robot detects
an object with its eight infrared sensors (distance sensors),
it navigates around it to avoid collision based on the proposed
algorithm stated above. Lastly, the robot recovers its path
afterwards. The robot will follow the same steps each time
it detects an obstacle and recovers its path again. The robot
performance in different situations is illustrated by Fig. 6.
Fig. 6(a) shows the moment in the simulation at which the
robot detects the first object and stops. The robot checks both
directions, left and right, to decide which way to go, much
like a pedestrian crossing a roadway. The robot then takes the
left and right sensor readings in order to compare
them with the threshold. Once these calculations are obtained,
the robot turns around the object to avoid the obstacle, as shown
in Fig. 6(b), (c), and (d).
Finally, the robot successfully avoids the obstacles
and recovers its path again, as depicted in Fig. 6(e). The robot
executes the same steps for each obstacle detected along
its way, as in Fig. 6(f), (g), (h), and (i).
VI. RESULTS AND DATA ANALYSIS
This section focuses on the results obtained from the simula-
tion and the analysis of data gathered which validates both the
performance and the effectiveness of the proposed technique.
The following subsections discuss in detail the simulation
results.
Fig. 6. Snapshots of the simulation run at different simulation times.
Fig. 7. Distance sensors values at 3 seconds of the simulation time.
A. Distance Sensors Data Analysis
The e-puck is equipped with 8 distance sensors (infrared
sensors) for collision avoidance in the environment. These
sensors return values varying from 0 to 2000, where 1000 or
more means that the obstacle is very close, so the e-puck should
avoid it accordingly. Various sensor measurements have been
taken at different simulation times. Fig. 7 shows the distance
sensor readings at the beginning of the simulation (3 seconds
of simulation time), where all 8 distance sensors have low
values (50 or less) due to the absence of obstacles along the
robot’s path. However, once one of these 8 sensors detects an
obstacle, its value increases to 1000 or more. As depicted
in Fig. 8, distance sensors 1 and 8 (the front sensors) have higher
values than the remaining sensors (in particular, the left front
sensor exceeds 1000). This indicates that there is an
obstacle in front of the robot.
Fig. 8. Distance sensors values at 9 seconds of the simulation time.
Fig. 9. Distance sensors values at 35 seconds of the simulation time.
Fig. 10. Distance sensors values at 40 seconds of the simulation time.
According to the proposed algorithm, when one of the
distance sensors reaches the threshold, the robot will avoid
the obstacle and move around it to recover its path.
Fig. 9 and Fig. 10 summarize the distance sensor readings
once the robot returns to the line. They show that at
35 and 40 seconds of the simulation time, all sensor readings
are low, which means that there is no obstacle in front of or
around the robot. At 46 seconds of the simulation time, another
obstacle is detected by one of the front sensors, which reaches
Fig. 11. Distance sensors values at 46 seconds of the simulation time.
Fig. 12. Distance sensors values at 1 min and 13 sec of the simulation time.
the threshold, as represented in Fig. 11. In Fig. 12 and Fig. 13,
the sensor measurements are shown at 1 minute and
13 seconds and at 1 minute and 18 seconds, respectively. Both
figures indicate low sensor readings; thus, no obstacles are
detected. In addition, Table I summarizes all 8 distance sensor
values at various simulation times.
B. Ground Sensors Data Analysis
The e-puck robot is also equipped with three ground sensors,
which are infrared sensors facing the ground. Their role is
to detect the black line on a white surface in order to guide
the robot. After obtaining the ground sensor readings, an error
value is computed as the difference between the right and left
ground sensors. After this, the left and right motor speeds are
adjusted accordingly. Table II shows all three ground sensor
values at several simulation times.
Fig. 14 displays all three ground sensor measurements
during the simulation, whereas Fig. 15 shows the left and right
motor speeds of the robot. As indicated in these two figures,
the highest values of the ground sensors and speeds are recorded
at 35 s, 1 min 13 s, and 1 min 18 s. At 35 s and 1 min 13 s, where
the robot detects the first and second obstacles, the right ground
Fig. 13. Distance sensors values at 1 min and 18 sec of the simulation time.
Fig. 14. All three ground sensors measurements at different simulation times.
Fig. 15. Left and right motor speeds.
sensor (gs3) has a very high value, which indicates that the
robot is turning sharply left to avoid the obstacles (Fig. 14).
Furthermore, Fig. 15 validates this outcome at 35 s
and 1 min 13 s of the simulation time: the left motor speeds
take negative values, whereas the right motor speeds take very
large positive values.
TABLE I
SUMMARY OF ALL DISTANCE SENSOR MEASUREMENTS AT DIFFERENT SIMULATION TIMES
TABLE II
SUMMARY OF ALL GROUND SENSOR READINGS AND VALUES
AT DIFFERENT SIMULATION TIMES
This demonstrates that the robot is turning sharply left to
avoid the obstacle in front of it. Finally, as shown in Fig. 14, at
1 min 18 s the left ground sensor (gs1) reads a much higher
value than the right ground sensor (gs3) because the robot
is turning right due to the curvy line (see also Fig. 6(i)).
In addition, Fig. 15 shows that at 1 min 18 s the left motor speed
is much higher than the right motor speed, which again
indicates that the robot is turning right to follow the
curvy line.
VII. CONCLUSIONS
Recently, mobile robot navigation in an unknown environment
has been a major research focus in the mobile robot intelligent
control domain. In this paper, we developed a line follower
robot that has the ability to detect and avoid any obstacles that
emerge on its path. It relies on low-cost infrared sensors
(distance sensors and ground sensors), which are used to
measure the distance to obstacles and the orientation of the robot.
The Webots simulator was used to validate the
effectiveness and performance of the proposed technique.
Because the proposed approach is simpler than comparable
techniques, it can be easily used in different real-time robotic
applications.
REFERENCES
[1] L. E. Zarate, M. Becker, B. D. M. Garrido, and H. S. C. Rocha, “An arti-
ficial neural network structure able to obstacle avoidance behavior used
in mobile robots,” in Proc. IEEE 28th Annu. Conf. Ind. Electron. Soc.,
Nov. 2002, pp. 2457–2461.
[2] S. X. Yang and C. Luo, “A neural network approach to complete
coverage path planning,” IEEE Trans. Syst., Man, Cybern. B, Cybern.,
vol. 34, no. 1, pp. 718–724, Feb. 2004.
[3] I. Ullah, F. Ullah, Q. Ullah, and S. Shin, “Integrated tracking and
accident avoidance system for mobile robots,” Int. J. Control, Autom.
Syst., vol. 11, no. 6, pp. 1253–1265, 2013.
[4] N. D. Phuc and N. T. Thinh, “A solution of obstacle collision avoidance
for robotic fish based on fuzzy systems,” in Proc. IEEE Int. Conf. Robot.
Biomimetics (ROBIO), Dec. 2011, pp. 1707–1711.
[5] L. Zeng and G. M. Bone, “Mobile robot collision avoidance in human
environments,” Int. J. Adv. Robot. Syst., vol. 9, pp. 1–14, Jan. 2013, doi:
10.5772/54933.
[6] M. A. Zakaria, H. Zamzuri, R. Mamat, and S. A. Mazlan, “A path
tracking algorithm using future prediction control with spike detection
for an autonomous vehicle robot,” Int. J. Adv. Robot. Syst., vol. 10,
pp. 1–9, Jan. 2013.
[7] J. Borenstein and Y. Koren, “The vector field histogram—Fast obstacle
avoidance for mobile robots,” IEEE J. Robot. Autom., vol. 7, no. 3,
pp. 278–288, Jun. 1991.
[8] M. Zohaib, M. Pasha, R. A. Riaz, N. Javaid, M. Ilahi, and R. D. Khan,
“Control strategies for mobile robot with obstacle avoidance,” J. Basic
Appl. Sci. Res., vol. 3, no. 4, pp. 1027–1036, 2013.
[9] J. A. Oroko and G. N. Nyakoe, “Obstacle avoidance and path planning
schemes for autonomous navigation of a mobile robot: A review,” in
Proc. Sustain. Res. Innov. Conf., vol. 4, 2012, pp. 314–318.
[10] A. Yufka and O. Parlaktuna, “Performance comparison of bug algorithms
for mobile robots,” in Proc. 5th Int. Adv. Technol. Symp., May 2009,
pp. 1–5.
[11] T. Bakker, K. van Asselt, J. Bontsema, J. Müller, and G. van Straten,
“A path following algorithm for mobile robots,” Auton. Robots, vol. 29,
no. 1, pp. 85–97, 2010.
[12] K. M. Hasan, A. Al-Nahid, K. J. Reza, S. Khatun, and M. R. Basar,
“Sensor based autonomous color line follower robot with obstacle
avoidance,” in Proc. IEEE Bus. Eng. Ind. Appl. Colloq. (BEIAC),
Apr. 2013, pp. 598–603.
[13] W. E. Elhady, “Implementation and evaluation of image processing
techniques on a vision navigation line follower robot,” J. Comput. Sci.,
vol. 10, no. 6, pp. 1036–1044, 2014.
[14] (2015). Cyberbotics.com, Webots, accessed on Mar. 31, 2015. [Online].
Available: https://www.cyberbotics.com/overview
[15] O. Michel, “Webots™: Professional mobile robot simulation,” Int. J.
Adv. Robot. Syst., vol. 1, no. 1, pp. 39–42, 2004.
Marwah M. Almasri received the bachelor’s
degree in computer science and engineering from
Taibah University, Medina, Saudi Arabia, and the
M.B.A. degree in management information system
from the University of Scranton, PA, in 2011.
She is currently pursuing the Ph.D. degree with
the Computer Science and Engineering Department,
University of Bridgeport. She has been recognized
by Upsilon Pi Epsilon and Phi Kappa Phi honor
society organizations for her academic accomplish-
ments. She received a Scholarship Award from the
Honor Society of Upsilon Pi Epsilon Organization, 2013. She also received the
Engineering Academic Achievement Award from the University of Bridgeport,
2014-2015. She received the Best Paper Award at the IEEE Long Island
Systems, Applications and Technology Conference, 2015. She also received
the Third Position Award in Faculty Research Day at the University of
Bridgeport, 2016. She also received an award from the MIS Department,
University of Scranton, for her outstanding work. Her research interests
include wireless sensor networks, computer networks, mobile computing,
autonomous mobile robots, and data fusion. She published a number of quality
research papers in national and international conferences and journals.
Abrar M. Alajlan received the B.Sc. degree
in computer science from Umm Al-Qura
University, Makkah, Saudi Arabia, in 2008, and the
M.Sc. degree in management information systems
from Troy University, Alabama, USA, in 2012. She
is currently pursuing the Ph.D. degree in computer
science and engineering with the University of
Bridgeport. She has been recognized by the Honor
Society of Upsilon Pi Epsilon Organization and
the Honor Society of Phi Kappa Phi Organization.
She also received the Engineering Academic
Achievement Award from the University of Bridgeport, 2015. She received
the Best Paper Award in the applications track from the IEEE Long Island
Systems, Applications and Technology Conference 2015. She is a member of
technical program committees of some international conferences. Her fields
of interest are autonomous ground robots, energy efficiency, obstacle/collision
avoidance, dynamic motion control, and real-time programming. She has
published several papers in influential international journals and conferences.
Khaled M. Elleithy received the B.Sc. degree in
computer science and automatic control and the
M.S. degree in computer networks from Alexandria
University, in 1983 and 1986, respectively, and the
M.S. and Ph.D. degrees in computer science from
the Center for Advanced Computer Studies, Uni-
versity of Louisiana–Lafayette, in 1988 and 1990,
respectively. He is the Associate Vice President for
Graduate Studies and Research with the University
of Bridgeport. He is a Professor of Computer Sci-
ence and Engineering. His research interests include
wireless sensor networks, mobile communications, network security, quantum
computing, and formal approaches for design and verification. He has pub-
lished more than 350 research papers in national/international journals and
conferences in his areas of expertise. He is the Editor or Co-Editor for 12
books published by Springer.
Dr. Elleithy has more than 25 years of teaching experience. He was a
recipient of the Distinguished Professor of the Year, University of Bridgeport,
from 2006 to 2007. He supervised hundreds of senior projects, M.S. theses,
and Ph.D. dissertations. He developed and introduced many new undergraduate/graduate courses. He also developed new teaching/research laboratories
in his area of expertise. His students have won more than 20 prestigious
national/international awards from the IEEE, ACM, and ASEE.
Dr. Elleithy is a member of the technical program committees of many international conferences in recognition of his research qualifications. He served
as a Guest Editor for several international journals. He was the Chairperson
for the International Conference on Industrial Electronics, Technology and
Automation. Furthermore, he is the Co-Chair and Co-Founder of the Annual
International Joint Conferences on Computer, Information, and Systems
Sciences, and Engineering Virtual Conferences from 2005 to 2014.
... The foregoing is represented by the use of control algorithms such as Villela [3] and IPC [4], which seek to regulate the robot's motion by controlling its linear and angular speeds. Other systems can be added to these algorithms to support their tasks, for example obstacle avoidance systems using sensor arrays mounted on the robot [5] and line trackers that improve position control [6,7]. In the previous cases, feedback models are used that allow the control law to be adjusted through its inputs, calculating the next action and gradually decreasing the error associated with the measurements. ...
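The speed-manipulation idea above can be sketched as a simple proportional controller on a unicycle model. This is a minimal sketch under assumed gains and saturation, not the actual Villela or IPC control laws cited in the snippet:

```python
import math

def position_control(x, y, theta, gx, gy, k_v=0.5, k_w=1.5, v_max=0.3):
    """Proportional control of linear and angular speed toward a goal.

    Illustrative assumptions: gains k_v, k_w and the saturation v_max
    are arbitrary; the cited algorithms use their own control laws.
    """
    dx, dy = gx - x, gy - y
    rho = math.hypot(dx, dy)                # distance to the goal
    alpha = math.atan2(dy, dx) - theta      # heading error
    # wrap the heading error into [-pi, pi]
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))
    v = min(k_v * rho, v_max)               # saturated linear speed
    w = k_w * alpha                         # angular speed from heading error
    return v, w
```

The controller slows down near the goal (linear speed proportional to distance) and turns toward it (angular speed proportional to heading error), which is the generic structure the snippet describes.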
Article
Full-text available
This article proposes the use of reinforcement learning (RL) algorithms to control the position of a simulated Khepera IV mobile robot in a virtual environment. The simulated environment uses the OpenAI Gym library in conjunction with CoppeliaSim, a 3D simulation platform, to perform the experiments and control the position of the robot. The RL agents used correspond to the deep deterministic policy gradient (DDPG) and deep Q network (DQN), and their results are compared with two control algorithms called Villela and IPC. The results obtained from the experiments in environments with and without obstacles show that DDPG and DQN manage to learn and infer the best actions in the environment, allowing the position control of different target points to be performed effectively and the best results to be obtained based on different metrics and indices.
... When there are no new frontiers left to detect, the entire environment is deemed to have been explored thoroughly [24]. Some other techniques based on frontiers that reduce the computational requirements and exploration turnaround time can be found in [15,39,40]. Frontier-based exploration can perform well not only on a single mobile platform but also on multiple robots [41,42]. ...
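The frontier-scoring idea referenced above can be sketched as minimizing a cost that grows with the agent's distance to a frontier and shrinks with the frontier's size. The exact weighting and the tuple representation are assumptions for illustration:

```python
def select_frontier(robot, frontiers):
    """Pick the frontier with minimal cost = distance / size.

    `robot` is an (x, y) position; each frontier is an assumed
    (x, y, size) tuple. Penalizing distance and rewarding size is
    the generic idea; the cited papers use their own cost functions.
    """
    def cost(f):
        cx, cy, size = f
        dist = ((cx - robot[0]) ** 2 + (cy - robot[1]) ** 2) ** 0.5
        return dist / max(size, 1)          # large, nearby frontiers win
    return min(frontiers, key=cost)
```

A large frontier slightly farther away can therefore outrank a tiny frontier next to the robot, which keeps exploration from chasing noise-sized gaps in the map.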
Preprint
In mobile robotics, area exploration and coverage are critical capabilities. In most of the available research, a common assumption is global, long-range communication and centralised cooperation. This paper proposes a novel swarm-based coverage control algorithm that relaxes these assumptions. The algorithm combines two elements: swarm rules and frontier search algorithms. Inspired by natural systems in which large numbers of simple agents (e.g., schooling fish, flocking birds, swarming insects) perform complicated collective behaviors, the first element uses three simple rules to maintain a swarm formation in a distributed manner. The second element provides the means to select promising regions to explore (and cover) by minimizing a cost function involving the agent's relative position to the frontier cells and the frontier's size. We tested our approach's performance on both heterogeneous and homogeneous groups of mobile robots in different environments. We measure both coverage performance and swarm formation statistics that permit the group to maintain communication. Through a series of comparison experiments, we demonstrate that the proposed strategy outperforms recently presented map coverage methodologies and the conventional artificial potential field in terms of percentage of cell coverage, turnaround time, and safe paths, while maintaining a formation that permits short-range communication.
... Manipulators with many degrees of freedom are used to perform a wide range of sophisticated manipulations in the work environment [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. ...
... In this paper, we did not address the sensor fusion model as we already presented a collision-free mobile robot navigation based on the fuzzy logic fusion model in [21,22]. Eight distance sensors and a range finder camera are used for the collision avoidance approach, where three ground sensors are used for the line or path following approach. ...
Chapter
Due to the rise of e-commerce material handling industry has been experiencing significant changes, especially in the COVID-19 pandemic. Notwithstanding the broad utilization of Automated Guided Vehicles (AGVs) for many years, the demand for Autonomous Mobile Robot (AMR) is rapidly increasing. One of the main challenges in autonomous operation in an unstructured environment is gapless perception. In this paper, we present a concept for reactive collision avoidance using Capacitive Proximity Sensor (CPS), with the goal to augment robot perception in close proximity situations. We propose a proximity-based potential field method using capacitive measurement for collision avoidance. A local minima problem is solved by applying tangential forces around the virtual obstacle points. We evaluate the proof-of-concept both in simulation and on a real mobile robot equipped with CPS. The results have shown that capacitive sensing technology can compensate localization tolerance and odometry drift closing the perception gap in close proximity scenarios.
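The tangential-force escape from local minima described in the abstract above can be sketched by rotating part of the repulsive force 90 degrees so the robot slides around a virtual obstacle point instead of stalling. The potential-field form and gains below are generic assumptions, not the paper's exact formulation:

```python
def repulsive_with_tangent(px, py, ox, oy, d0=1.0, k_rep=1.0, k_tan=0.5):
    """Repulsive force from a virtual obstacle point plus a tangential term.

    (px, py) is the robot, (ox, oy) the obstacle point, d0 the assumed
    influence radius. The classic repulsive magnitude is rotated in part
    by 90 degrees to create a sliding component along the obstacle.
    """
    dx, dy = px - ox, py - oy
    d = (dx * dx + dy * dy) ** 0.5
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)                   # outside the influence range
    mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)  # repulsive magnitude
    ux, uy = dx / d, dy / d                 # unit vector away from obstacle
    fx = mag * ux + k_tan * mag * (-uy)     # add 90-degree rotated component
    fy = mag * uy + k_tan * mag * ux
    return (fx, fy)
```

Because the tangential part is perpendicular to the pure repulsion, the summed force can never vanish at a point where attraction and repulsion would otherwise cancel, which is the local-minimum case the abstract mentions.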
Article
Mobile robotic systems are used in a wide range of applications. Especially in the assistive field, they can enhance the mobility of elderly and disabled people. Modern robotic technologies have been implemented in wheelchairs to give them intelligence. Thus, by equipping wheelchairs with intelligent algorithms, controllers, and sensors, it is possible to share the wheelchair control between the user and the autonomous system. The present research proposes a methodology for intelligent wheelchairs based on head movements and vector fields. In this work, the user indicates where to go, and the system performs obstacle avoidance and planning. The focus is on developing an assistive technology for people with quadriplegia who retain partial movements, such as of the shoulder and neck musculature. The developed system uses shared control of velocity. It employs a depth camera to recognize obstacles in the environment and an inertial measurement unit (IMU) sensor to recognize the desired movement pattern by measuring the user's head inclination. The proposed methodology computes a repulsive vector field and works to increase maneuverability and safety. Thus, global localization and mapping are unnecessary. The results were evaluated through simulated models and practical tests using a Pioneer P3-DX differential robot to show the system's applicability.
Article
This paper highlights some limitations of the VFH+ algorithm in the domain of local obstacle avoidance. An enhanced algorithm dubbed VFH+D is proposed, which considers a different way of calculating the obstacle vector magnitude and a decay algorithm for dynamic obstacle avoidance. Experiments were conducted to compare both algorithms on two different mecanum-wheeled robots; VFH+D achieved higher average speeds and a lower distance traveled to reach the goal.
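The decay mechanism attributed to VFH+D above can be illustrated with a minimal histogram-fade step: sectors of the polar obstacle histogram that are not re-observed lose weight each control cycle, so a moving obstacle stops blocking sectors it has left. The decay factor, threshold, and flat-list representation are assumptions:

```python
def decay_histogram(hist, decay=0.8, threshold=0.1):
    """Exponentially decay a polar obstacle histogram.

    `hist` holds one obstacle-evidence value per angular sector.
    Values that fall below `threshold` after decay are zeroed so
    stale evidence from a departed obstacle frees the sector.
    """
    return [h * decay if h * decay >= threshold else 0.0 for h in hist]
```

In a real VFH-style loop this step would run before new sensor readings are merged into the histogram, so fresh evidence keeps occupied sectors saturated while unconfirmed ones fade.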
Article
Full-text available
In a fast-growing industrial world, carriers are required to carry products from one manufacturing plant to another, which are usually in different buildings or separate blocks. This study intends to automate this sector using vision-controlled mobile robots instead of laying railway tracks, which are both expensive and inconvenient. To achieve this purpose, an autonomous robot is developed with computer vision as its primary sensor for gaining information about its environment for path following. The proposed Line Follower Robot (LFR) consists of a web cam mounted on the vehicle and connected to the Matlab platform. A PID control algorithm is applied to adjust the robot on the line. The proposed LFR is accomplished through the following stages: firstly, the image is acquired using the web cam. The acquired RGB image is converted to other color coordinates for testing and comparison to choose the best color space. After that, the image contrast is enhanced using histogram equalization, and then Wiener, Lee, and Kuan filters are implemented to decide the best filter to use. Subsequently, the basic morphological operations are carried out to choose the suitable operation to utilize. The results are evaluated qualitatively and quantitatively in terms of Peak Signal-to-Noise Ratio (PSNR), entropy, and image smoothness. The results show that the closing process is more suitable for the vision enhancement purpose, and the Wiener filter gives the best result as regards time and efficiency. Besides, the demonstrated LFR is capable of tracking a pre-specified colored line as long as it is different from the surroundings.
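The PID correction described in the abstract above is the standard three-term loop acting on the line's lateral offset in the image. The sketch below uses assumed gains and an assumed error convention, not values from the cited study:

```python
class PID:
    """Minimal PID loop for steering a line follower.

    The error is assumed to be the lateral offset of the detected line
    from the image center; the output is a steering correction. Gains
    and the time step are illustrative assumptions.
    """
    def __init__(self, kp=1.0, ki=0.0, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error, dt=0.05):
        self.integral += error * dt               # accumulated offset
        deriv = (error - self.prev) / dt          # rate of change of offset
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Each camera frame yields one `error` sample; the returned correction is then mapped to a differential wheel-speed command, turning the robot back toward the line.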
Article
Full-text available
Trajectory tracking is an important aspect of autonomous vehicles. The idea behind trajectory tracking is the ability of the vehicle to follow a predefined path with zero steady state error. The difficulty arises due to the nonlinearity of vehicle dynamics. Therefore, this paper proposes a stable tracking control for an autonomous vehicle. An approach that consists of steering wheel control and lateral control is introduced. This control algorithm is used for a non-holonomic navigation problem, namely tracking a reference trajectory in a closed loop form. A proposed future prediction point control algorithm is used to calculate the vehicle's lateral error in order to improve the performance of the trajectory tracking. A feedback sensor signal from the steering wheel angle and yaw rate sensor is used as feedback information for the controller. The controller consists of a relationship between the future point lateral error, the linear velocity, the heading error and the reference yaw rate. This paper also introduces a spike detection algorithm to track the spike error that occurs during GPS reading. The proposed idea is to take the advantage of the derivative of the steering rate. This paper aims to tackle the lateral error problem by applying the steering control law to the vehicle, and proposes a new path tracking control method by considering the future coordinate of the vehicle and the future estimated lateral error. The effectiveness of the proposed controller is demonstrated by a simulation and a GPS experiment with noisy data. The approach used in this paper is not limited to autonomous vehicles alone since the concept of autonomous vehicle tracking can be used in mobile robot platforms, as the kinematic model of these two platforms is similar.
Article
Full-text available
In the intelligent transportation field, various accident avoidance techniques have been applied. One of the most common issues is collision, which remains an unsolved problem. To this end, we developed a Collision Warning and Avoidance System (CWAS), which was implemented in a wheeled mobile robot. Path planning is crucial for a mobile robot to perform a given task correctly. Here, a tracking system for mobile robots that follow an object is presented. Thus, we implemented an integrated tracking system and CWAS in a mobile robot. Both systems can be activated independently. Using the CWAS, the robot is controlled through a remotely controlled device, and collision warning and avoidance functions are performed. Using the tracking system, the robot performs tasks autonomously and maintains a constant distance from the followed object. Information on the surroundings is obtained through range sensors, and the control functions are performed through the microcontroller. The front, left, and right sensors are activated to track the object, and all the sensors are used for the CWAS. The proposed system was tested using a binary logic controller and a Fuzzy Logic Controller (FLC). The efficiency of the robot was improved by increasing the smoothness of motion via the FLC, achieving accuracy in tracking and increasing the safety of the CWAS. Finally, simulations and experimental outcomes have shown the usefulness of the system.
Article
Full-text available
Collision avoidance is a fundamental requirement for mobile robots. Avoiding moving obstacles (also termed dynamic obstacles) with unpredictable direction changes, such as humans, is more challenging than avoiding moving obstacles whose motion can be predicted. Precise information on the future moving directions of humans is unobtainable for use in navigation algorithms. Furthermore, humans should be able to pursue their activities unhindered and without worrying about the robots around them. In this paper, both active and critical regions are used to deal with the uncertainty of human motion. A procedure is introduced to calculate the region sizes based on worst‐case avoidance conditions. Next, a novel virtual force field‐based mobile robot navigation algorithm (termed QVFF) is presented. This algorithm may be used with both holonomic and nonholonomic robots. It incorporates improved virtual force functions for avoiding moving obstacles and its stability is proven using a piecewise continuous Lyapunov function. Simulation and experimental results are provided for a human walking towards the robot and blocking the path to a goal location. Next, the proposed algorithm is compared with five state‐of‐the‐art navigation algorithms for an environment with one human walking with an unpredictable change in direction. Finally, avoidance results are presented for an environment containing three walking humans. The QVFF algorithm consistently generated collision‐free paths to the goal.
Conference Paper
Full-text available
This paper introduces the multiple-source Multiple Destination Robot (MDR-1), which has the ability to choose a desired line among multiple lines autonomously. Every line has a different color as its identity. The robot can differentiate among various colors and choose the desired one to find its target. Unlike a simple line-follower robot, this robot can be considered a true autonomous line follower, having the ability to detect the presence of obstacles on its path. A powerful closed-loop control system is used in the robot. The robot senses a line and steers itself toward the desired target, correcting wrong moves through a simple yet very effective closed-loop feedback mechanism. The robot is capable of following very congested curves as it receives continuous data from the sensors.
Conference Paper
Full-text available
In this study, the Bug1, Bug2, and DistBug motion planning algorithms for mobile robots are simulated and their performances compared. These motion planning algorithms are applied to a Pioneer mobile robot in the MobileSim simulation environment, with sonar range sensors used as the sensing elements. This study shows that mobile robots build a new motion plan using the Bug algorithms only when they meet an unknown obstacle during their motion to the goal. Each of the Bug algorithms is tested separately on an identical configuration space, and a performance comparison of the Bug algorithms is given at the end. Keywords: Bug, Pioneer, robots, sonar, MobileSim. Paper videos can be reached from this URL: http://www.youtube.com/playlist?list=PLENSkat0854tTGxwtIYy3JOrSsPGFx074
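The Bug2 behavior compared in the study above can be sketched as a two-state rule: drive along the start-goal line (the m-line) until an obstacle blocks the way, follow the obstacle boundary, and rejoin the m-line only at a point strictly closer to the goal than where it was left. The state names and function interface are assumptions for illustration:

```python
def bug2_step(on_mline, obstacle_ahead, dist_to_goal, hit_dist, state):
    """One decision step of the Bug2 strategy.

    `hit_dist` is the distance to the goal recorded when the boundary
    was first hit. Returns the next state, either "follow_line"
    (drive along the m-line) or "follow_wall" (circumnavigate).
    """
    if state == "follow_line":
        return "follow_wall" if obstacle_ahead else "follow_line"
    # Wall-following: leave the boundary only when back on the m-line
    # and strictly closer to the goal than the original hit point.
    if on_mline and dist_to_goal < hit_dist:
        return "follow_line"
    return "follow_wall"
```

The "strictly closer" test is what guarantees progress: each boundary-following episode ends nearer the goal than it began, so the robot cannot loop forever around a reachable goal.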
Article
Full-text available
Obstacle avoidance is an important task in the field of robotics, since the goal of an autonomous robot is to reach its destination without collision. Several algorithms have been proposed for obstacle avoidance, each with its own drawbacks and benefits. In this survey paper, we mainly discuss different algorithms for robot navigation with obstacle avoidance. We also compare the provided algorithms and describe their characteristics, advantages, and disadvantages, so that a final efficient algorithm can be selected by fusing the discussed algorithms. A comparison table is provided to justify the areas of interest.
Article
Full-text available
Cyberbotics Ltd. develops Webots™, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots™ lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots™ has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.
Article
This paper addresses fuzzy decision making for changing the trajectory direction of a robot's path. The generation of membership functions for fuzzy systems is a challenging problem. While a robotic fish is swimming in an environment with a high density of hurdles, collisions are possible because there are many kinds of obstructions in water. Therefore, achieving natural and smooth movements for robotic fish depends on detecting and recognizing obstacles and trying to avoid any kind of collision. Two fundamental inputs are the measured distance from the sensor to obstacles and the possible existence of obstacles. Because these data are nonlinear, they can be handled with a fuzzy decision on trajectory direction. The trajectory direction of the robot should be changed so that the robot moves in a direction with no hurdles. The experimental results indicate that the robotic fish changes its planned trajectory following the proposed fuzzy decision results, even at higher obstacle densities, without collision.
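The fuzzy decision on trajectory direction described above can be sketched with triangular membership functions and weighted-average defuzzification. All breakpoints, labels, and output turn rates below are assumptions for illustration, not the paper's tuned values:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def turn_decision(distance, max_range=2.0):
    """Fuzzify obstacle distance into 'near'/'far' and blend a turn rate.

    A close obstacle fires the 'near' set (turn rate 1.0), a distant one
    fires 'far' (turn rate 0.0); intermediate distances blend smoothly
    via a weighted-average defuzzification.
    """
    near = triangular(distance, -0.5, 0.0, 1.0)            # peaks at contact
    far = triangular(distance, 0.5, max_range, max_range + 1.5)
    total = near + far
    if total == 0.0:
        return 0.0                                          # no set fires
    # weighted average: sharp turn when near, straight ahead when far
    return (near * 1.0 + far * 0.0) / total
```

The smooth overlap between the two sets is what yields the "natural and smooth movements" the abstract describes, in contrast to a binary near/far threshold that would make the fish turn abruptly.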