IEEE SENSORS JOURNAL, VOL. 16, NO. 12, JUNE 15, 2016 5021
Trajectory Planning and Collision Avoidance
Algorithm for Mobile Robotics System
Marwah M. Almasri, Student Member, IEEE, Abrar M. Alajlan, Student Member, IEEE,
and Khaled M. Elleithy, Senior Member, IEEE
Abstract— The field of autonomous mobile robotics has recently attracted considerable research interest. Due to the specific needs of various mobile robot applications, especially in navigation, designing a real-time obstacle avoidance and path-following robot system has become the backbone of controlling robots in unknown environments. Therefore, an efficient collision avoidance and path-following methodology is needed to develop an intelligent and effective autonomous mobile robot system. This paper introduces a new technique for line following and collision avoidance in mobile robotic systems. The proposed technique relies on low-cost infrared sensors and involves a reasonable level of calculation, so that it can be easily used in real-time control applications. The simulation setup covers multiple scenarios to show the ability of the robot to follow a path, detect obstacles, and navigate around them to avoid collision. The results also show that the robot successfully follows very congested curves and avoids any obstacle that emerges on its path. The Webots simulator was used to validate the effectiveness of the proposed technique.
Index Terms—Collision avoidance, path planning, robotics
control, Webots, mobile robot, e-puck.
I. INTRODUCTION
THERE has been a spurt of interest in recent years in the
area of autonomous mobile robots that are considered
as mechanical devices capable of completing scheduled tasks,
decision-making and navigating without any involvement from
humans [1], [2]. This has raised important questions about the interaction between mobile robots and the environment, including autonomous navigation, path planning, and obstacle avoidance.
Deploying autonomous mobile robots is coupled with the
use of external sensors that assist in detecting obstacles in
advance. The mobile robot uses these sensors to receive
information about the tested area through digital image
processing or distance measurements to recognize any possible obstacle [3]. Several ways of sensing the surroundings have been introduced in the path-planning literature for mobile robots. Although ultrasonic sensors, positioning systems, and cameras are the most widely used means of navigating an unknown environment, they are not always suitable when a simple, compact robot structure is required. Therefore, infrared sensors can be used to follow an optimum collision-free path
Manuscript received August 18, 2015; accepted March 31, 2016. Date
of publication April 12, 2016; date of current version May 17, 2016. The
associate editor coordinating the review of this paper and approving it for
publication was Prof. Julian C. C. Chan.
The authors are with the Computer Science and Engineering Department,
University of Bridgeport, Bridgeport, CT 06604 USA (e-mail: maalmasr@
my.bridgeport.edu; aalajlan@my.bridgeport.edu; elleithy@bridgeport.edu).
Digital Object Identifier 10.1109/JSEN.2016.2553126
from source to destination according to particular performance
objectives [4].
In addition, path planning for mobile robots can be divided into two types based on the robot's knowledge of the environment. In global path planning, the environmental information is predefined; it is therefore also called static collision avoidance planning. In local path planning, the environmental information is not known in advance; it is therefore also called dynamic collision avoidance planning. Local path planning is more demanding than global path planning, since the direction of travel is changeable and the position of each dynamic obstacle must be predicted at every time step to meet the requirements of time-critical trajectory planning. Local path planning also takes measurements of the moving obstacle, such as its position, size, and shape, through sensors as quickly as possible in order to avoid unknown obstacles that arise while the robot moves toward the goal state [5].
The most well-known sensors used to follow a specific path
while detecting obstacles and measuring the distance between
robots and objects are infrared, ultrasonic, and laser sensors.
Given the specific needs of different mobile robot applications, especially in navigation, it is crucial to develop an autonomous robotic system that is capable of avoiding obstacles while following a path in real time. Consequently, an efficient collision avoidance and path-following technique is essential to ensure an intelligent and effective autonomous mobile robot system.
This paper presents the results of research aimed at developing a new technique for line following and obstacle avoidance. The technique relies on low-cost infrared sensors and involves a reasonable level of calculation, so that it can be easily used in real-time control applications with
microcontrollers.
The paper is organized as follows. Section II reviews state-
of-the-art literature for collision avoidance and line follow-
ing techniques. Section III discusses a novel technique for
collision avoidance and line-following. Section IV describes
the robotic platform used. Section V discusses and presents
simulation results to validate the proposed approach. Finally,
Section VI offers conclusions.
II. REVIEW AND ANALYSIS OF PREVIOUS TECHNIQUES
Mobile robot motion planning in an unknown environment has always been a main research focus in the mobile robotics area due to its practical importance and the complex nature
1558-1748 © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
of the problem. Several collision avoidance and line following
techniques have been introduced lately. Each of these tech-
niques was developed to be used in specific applications for
the purposes of education, entertainment, business and so on.
The ability to detect obstacles in real time mobile robotics
systems is a very critical requirement for any practical applica-
tion of autonomous vehicles. The main objective behind using
the obstacle avoidance approach is to obtain a collision-free
trajectory from the starting point to the target in monitoring
environments. There are two types of obstacles: static obstacles, which have fixed positions and require a priori knowledge; and dynamic obstacles (moving objects), which have uncertain motion patterns for which no a priori knowledge is available. Indeed,
detecting dynamic obstacles is more challenging than detecting
static obstacles since the dynamic obstacle has a changeable
direction and requires a prediction of the obstacle position
at every time step in order to achieve the requirement of a
time-critical trajectory planning [5].
Moreover, path tracking is a major aspect of mobile robotics
systems that is expected to be widely used in industries and
airports to enhance the automatic transportation procedure.
In general, the line-following technique is used to track a
predefined path with zero steady state error [6].
In terms of the research literature, there have been a number
of techniques proposed and studied which addressed collision
avoidance and line-following approaches in mobile robotics
systems. A brief overview of some of these approaches is
presented in the following sections.
A. Collision Avoidance Approach
Many collision avoidance algorithms have been proposed in
the literature of robotics motion planning that allow the robot
to reach its target without colliding with any obstacles that
may exist in its path. Each algorithm differs in the way of
avoiding static/dynamic obstacles.
The vector field histogram (VFH) [7] is a real-time obstacle
avoidance method that uses the two-dimensional Cartesian
histogram grid to perform a two-stage data reduction
process. First, it converts the two-dimensional histogram into a one-dimensional polar histogram. Second, it selects the most suitable sector, one with low polar obstacle density, and calculates the steering angle in that direction. A known limitation of VFH is that the robot can only detect static obstacles.
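As a rough illustration of the two-stage reduction (an illustrative Python sketch; the sector count, density threshold, and reading format are assumptions, and the real VFH additionally smooths the histogram and weights cells by distance):

```python
import math

def vfh_steering(readings, n_sectors=72, threshold=2.0):
    """readings: iterable of (angle_rad, certainty) obstacle cells from
    the Cartesian histogram grid. Returns a steering angle (rad) toward
    the open sector with the lowest polar obstacle density, or None."""
    width = 2 * math.pi / n_sectors
    polar = [0.0] * n_sectors  # stage 1: 2-D grid -> 1-D polar histogram
    for angle, certainty in readings:
        polar[int((angle % (2 * math.pi)) // width)] += certainty
    # stage 2: pick the least-occupied sector below the density threshold
    open_sectors = [k for k in range(n_sectors) if polar[k] < threshold]
    if not open_sectors:
        return None  # fully blocked: no admissible direction
    best = min(open_sectors, key=lambda k: polar[k])
    return (best + 0.5) * width  # steer toward the sector's center
```

Returning `None` when every sector is saturated makes the blocked case explicit, which a real controller would handle by stopping or reversing.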
The Artificial Potential Field (APF) is presented in [8].
The algorithm is used to find the shortest path between the
starting and target points. The obstacle produces a repulsive
force to repel the robot while the target produces an attractive
force to attract the robot. That is, the total force on the robot
can be calculated based on the attractive and repulsive forces.
However, this approach is very sensitive to local minima in the case of a symmetric environment [9].
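The force balance can be sketched as follows (an illustrative Python snippet using the classic attractive/repulsive potential form; the gains and the influence radius `d0` are assumed values, not parameters from [8]):

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Total force on the robot: an attractive pull toward the goal plus
    a repulsive push from every obstacle within the influence radius d0."""
    fx = k_att * (goal[0] - robot[0])   # attractive component
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # repulsion grows sharply as the robot approaches the obstacle
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

In a symmetric environment the attractive and repulsive terms can cancel, leaving zero net force: exactly the local-minimum sensitivity just described.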
Another obstacle avoidance method is the Bug algo-
rithm [10] where the robot follows the boundary of each
obstacle in its way until the path is free. The robot then restarts
moving toward the target without taking into account any other
parameters. There are some significant shortcomings in the
Bug algorithm. To illustrate, the algorithm does not consider
any other obstacles during the edge detection process. Also,
it only considers the most recent sensor readings that might
be affected by sensor noise.
B. Line Following Approach
Various methods have been proposed as solutions for path-tracking problems, which generally include line following, object following, and so on. A line follower in robotics is simply an autonomous mobile robot that detects a particular line and keeps following it. This line can be as visible as a black line on a white surface or as invisible as a magnetic field.
Line following in mobile robots involves three basic operations. First, the line is captured by a camera (image processing) or by reflective sensors mounted at the front of the robot. Second, the robot is adjusted to follow the predefined line using infrared sensors placed around the robot. Third, the robot's speed is controlled based on the line condition.
Bakker et al. proposed a new technique for path following
control for mobile robotics systems that sets up the robot to
navigate autonomously along its path [11]. The robot utilizes
a real time Kinematic Differential Global Positioning System
to determine both the position and orientation that correspond
to the path. The controller performed well when tested on different path shapes, including a step, a ramp, and a headland path.
Another type of line following technique is described
in [12], where the mobile robot is able to choose a desired line
among multiple lines autonomously. This robot can detect not
only black and white colors but also can differentiate between
multiple colors. Each line has a specific color and the robot
can select the desired color to reach its destination. With this
technique, the robot is also able to follow very congested
curves as it moves toward its target.
Another path-following approach was introduced in [13], which relies on digital image processing techniques to track the robot's path. It uses computer vision (a webcam) as its main sensor for surveying the environment. A Proportional-Integral-Derivative (PID) control algorithm is also employed to keep the robot on the line. The demonstrated approach proved robust against darkness, camera distortion, and lighting changes.
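A PID line-keeping step of this kind can be sketched as follows (illustrative Python; the gains, time step, and error convention are assumptions, not values from [13]):

```python
class PID:
    """Discrete PID controller: turns the line-position error (e.g.
    pixels off-center in the camera image) into a steering correction."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# the correction is added to one wheel and subtracted from the other,
# so a positive error (line seen to one side) steers the robot back
pid = PID(kp=0.5, ki=0.01, kd=0.1, dt=0.05)
base = 300.0
u = pid.step(error=40.0)
left_speed, right_speed = base + u, base - u
```

The derivative term damps oscillation around the line, which is why PID is preferred over pure proportional control for sharp curves.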
Several other obstacle avoidance and line following
approaches are suitable for real-time applications but will not
be discussed here due to space limitations.
Unlike the approaches reported above, the proposed technique is simple yet complete: it develops a line-follower robot that can detect and avoid any obstacle that emerges on its path. The mobile robot is equipped with multiple sensors and a microcontroller, which receives information about the surrounding area and then makes decisions based on the sensor readings.
III. PROPOSED TECHNIQUE: ARCHITECTURE AND DESIGN
In this paper, a fairly general technique is developed that
has components of formation development, line follower and
Fig. 1. Block diagram of the proposed technique.
obstacles detection. The contribution of this work relies on
the use of low-cost infrared sensors, so that it can be easily
used in real-time robotic applications. The block diagram of
the proposed technique is given in Fig. 1.
The controller receives input values directly from the
infrared sensors. The robot controller applies the line
follower (LFA) and collision avoidance (CAA) approaches.
The line-follower approach (LFA) receives the ground sensor readings as inputs, and the controller then issues a signal to the robot to adjust the motor speeds and follow the line. The collision avoidance approach (CAA) receives the distance sensor readings as inputs. When an object is detected in front of the robot, the CAA is responsible for turning the robot and adjusting its speed according to the obstacle's position in order to avoid a collision. By applying both approaches, the robot follows the line and detects obstacles simultaneously. In other words, if an obstacle is detected, the robot spins around the obstacle until it finds the line again.
An efficient algorithm for the proposed technique is developed, as shown in Fig. 2, to give the robot the ability to follow the path and avoid obstacles along its way.
As shown in Fig. 2, the global variables must first be initialized, such as the number of distance and ground sensors used (8 distance sensors and 3 ground sensors) and the collision avoidance threshold value, before the line-following and collision avoidance behavior starts. After the number of sensors of each type is identified, these sensors are enabled. Then, at each time step of the simulation, the values of all eight distance sensors and all three ground sensors are obtained. The three ground sensors are responsible for following the line, and the motor speeds are adjusted based on their values. To detect possible collisions, the front distance sensor values are compared with a predefined collision avoidance threshold; if one of these sensors reaches the threshold, a front obstacle is detected.
Subsequently, the readings of the right and left distance sensors are taken to determine the direction of the robot's movement in the case of a front obstacle, and are compared with the threshold to check for right/left obstacles. Once the robot's path is determined, the left and right motor speeds are adjusted accordingly. After avoiding the obstacle, the robot returns to the line and continues following its path.
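The per-time-step logic just described can be sketched as follows. This is an illustrative Python rendering, not the authors' Matlab controller; the sensor indexing (0 and 7 as front, 2 and 5 as sides), the threshold of 1000, and the speed values are assumptions based on the description:

```python
THRESHOLD = 1000  # distance reading at/above this means an obstacle is close

def control_step(distance, ground, base_speed=300.0, gain=0.5):
    """One simulation time step: the collision avoidance approach (CAA)
    overrides the line-follower approach (LFA) when a front obstacle is
    detected. distance: 8 proximity readings; ground: 3 ground readings
    (left, center, right). Returns (left_speed, right_speed)."""
    if distance[0] >= THRESHOLD or distance[7] >= THRESHOLD:
        # CAA: front obstacle detected; compare the side sensors to
        # choose the freer direction and spin in place toward it
        if distance[2] < THRESHOLD:        # right side is clear
            return base_speed, -base_speed
        if distance[5] < THRESHOLD:        # left side is clear
            return -base_speed, base_speed
        return -base_speed, -base_speed    # both blocked: back up
    # LFA: steer by the difference between right and left ground sensors
    delta = gain * (ground[2] - ground[0])
    return base_speed + delta, base_speed - delta
```

Putting the CAA check before the LFA branch gives obstacle avoidance priority, matching the rule that the robot spins around an obstacle until it finds the line again.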
Fig. 2. Line follower and collision avoidance Algorithm.
IV. ROBOTIC PLATFORM
Using simulations to test the proposed technique is very useful prior to experiments with real robots: simulations are more convenient, less expensive, and easier to set up.
Fig. 3. Schematic drawing of distance sensors in the e-puck robot.
In this work, the Webots simulator is used to develop a line-following and collision avoidance environment. Webots, developed by Cyberbotics, is one of the most widely used simulation packages for mobile robots [14]. It is employed in this work because it supports a number of sensor and actuator devices, such as infrared sensors for proximity and light measurements, differential wheels, camera modules, motors, touch sensors, emitters, and receivers. It also allows 3D models to be imported into its scene tree from most 3D modeling software through the VRML97 standard. Moreover, the user can set the properties of each object used in the simulation, such as its shape, color, texture, mass, and friction [15].
The distance sensors used for collision avoidance are 8 infrared sensors placed around the robot. Fig. 3 shows a schematic top view of the robot used in this work, the “e-puck” robot. The red lines represent the directions of the infrared distance sensors. For simplicity, all sensors are grouped based on their positions. The robot detects obstacles based on the values returned by these sensors, which range between 0 and 2000. The returned value depends on the distance between the robot and the obstacle: a value of 0 (no reflected light) means no obstacle is detected, while a value of 1000 or more (a large amount of reflected light) means an obstacle is very close to the robot. For the line-following approach, another type of infrared sensor, the ground sensor, is used. Three such proximity sensors are mounted on the front of the robot and pointed at the ground in order to detect the line, as shown in Fig. 4. These sensors allow the robot to see the gray level of the ground at three locations across its front.
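For illustration, the three ground readings can be collapsed into a single line-position estimate (a Python sketch; the white/black reference values are assumed, not calibrated e-puck readings):

```python
def line_position(gs, white=900.0, black=300.0):
    """Estimate where the dark line lies under the three ground sensors:
    -1.0 = fully under the left sensor, 0.0 = centered, +1.0 = fully
    under the right sensor. Returns None if no line is seen."""
    weights = (-1.0, 0.0, 1.0)  # left, center, right sensor positions
    # normalize each reading to a 0..1 "darkness" (a dark line reflects less)
    darkness = [max(0.0, min(1.0, (white - v) / (white - black))) for v in gs]
    total = sum(darkness)
    if total == 0.0:
        return None  # line lost: all three sensors see white
    return sum(w * d for w, d in zip(weights, darkness)) / total
```

A controller can then steer proportionally to this position, and treat `None` as the signal to search for the line again.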
V. SIMULATION SETUP
In this work, we use the Webots simulator and the e-puck robot for our experiments, as shown in Fig. 5. Several simulations illustrate the proposed approach; all robot controllers were programmed in Matlab. First, the robot starts
sensing the environment with its three infrared sensors (ground
sensors) and follows a black line drawn on the ground where
Fig. 4. Ground sensors for the e-puck robot.
Fig. 5. Robot platform of the simulation where there are two obstacles placed
on the robot path.
there are two obstacles along its path. When the robot detects
an object with its eight infrared sensors (distance sensors),
it navigates around it to avoid collision based on the proposed
algorithm stated above. Lastly, the robot recovers its path
afterwards. The robot will follow the same steps each time
it detects an obstacle and recovers its path again. The robot
performance in different situations is illustrated by Fig. 6.
Fig. 6(a) shows the moment in the simulation when the robot detects the first object and stops. The robot checks both directions, left and right, to decide which way to go, much like a pedestrian crossing a road. The robot then takes the left and right sensor readings and compares them with the threshold. Once these comparisons are made, the robot turns around the object to avoid it, as shown in Fig. 6(b), (c), and (d).
Finally, the robot successfully avoids the obstacles and recovers its path, as depicted in Fig. 6(e). The robot executes the same steps for each obstacle detected along its way, as in Fig. 6(f), (g), (h), and (i).
VI. RESULTS AND DATA ANALYSIS
This section focuses on the results obtained from the simula-
tion and the analysis of data gathered which validates both the
performance and the effectiveness of the proposed technique.
The following subsections discuss in detail the simulation
results.
Fig. 6. Snapshots of the simulation run at different simulation times.
Fig. 7. Distance sensors values at 3 seconds of the simulation time.
A. Distance Sensors Data Analysis
The e-puck is equipped with 8 distance sensors (infrared sensors) for collision avoidance. These sensors return values from 0 to 2000, where 1000 or more means the obstacle is very close and the e-puck should avoid it. Sensor measurements were taken at different simulation times. Fig. 7 shows the distance sensor readings at the beginning of the simulation (3 seconds in), where all 8 distance sensors have low values (50 or less) because no obstacles are present along the robot's path. Once one of the 8 sensors detects an obstacle, however, its value increases to 1000 or more. As depicted in Fig. 8, distance sensors 1 and 8 (the front sensors) have higher values than the remaining sensors (the front-left sensor, in particular, exceeds 1000). This indicates that there is an obstacle in front of the robot.
Fig. 8. Distance sensors values at 9 seconds of the simulation time.
Fig. 9. Distance sensors values at 35 seconds of the simulation time.
Fig. 10. Distance sensors values at 40 seconds of the simulation time.
According to the proposed algorithm, when one of the
distance sensors reaches the threshold, the robot will avoid
the obstacle and move around it to recover its path.
Fig. 9 and Fig. 10 summarize the distance sensor readings once the robot returns to the line. At 35 and 40 seconds of the simulation time, all sensor readings are low, which means there is no obstacle in front of or around the robot. At 46 seconds of the simulation time, another obstacle is detected by one of the front sensors, which reaches
Fig. 11. Distance sensors values at 46 seconds of the simulation time.
Fig. 12. Distance sensors values at 1 min and 13 sec of the simulation time.
the threshold, as shown in Fig. 11. Fig. 12 and Fig. 13 show the sensor measurements at 1 minute and 13 seconds, and at 1 minute and 18 seconds, respectively. Both figures indicate low sensor readings, so no obstacles are detected. In addition, Table I summarizes all 8 distance sensor values at various simulation times.
B. Ground Sensors Data Analysis
The e-puck robot is also equipped with three ground sensors, which are infrared sensors facing the ground. Their role is to detect the black line on a white surface in order to guide the robot. After the ground sensor readings are obtained, the difference between the right and left ground sensor values is computed, and the left and right motor speeds are then adjusted accordingly. Table II lists all three ground sensor values at several simulation times.
Fig. 14 displays all three ground sensor measurements during the simulation, whereas Fig. 15 shows the left and right motor speeds. As indicated in these two figures, the highest ground sensor values and speeds are recorded at 35 s, 1:13 s, and 1:18 s. At 35 s and 1:13 s, when the robot detects the first and second obstacles, the right ground
Fig. 13. Distance sensors values at 1 min and 18 sec of the simulation time.
Fig. 14. All three ground sensors measurements at different simulation times.
Fig. 15. Left and right motor speeds.
sensor (gs3) has a very high value, which indicates that the robot is turning sharply left to avoid the obstacles (Fig. 14). Fig. 15 validates this outcome at 35 s and 1:13 s of the simulation time: the left motor speeds are negative, whereas the right motor speeds are large and positive.
TABLE I
ALL DISTANCE SENSOR MEASUREMENTS AT DIFFERENT SIMULATION TIMES
TABLE II
ALL GROUND SENSOR READINGS AT DIFFERENT SIMULATION TIMES
This demonstrates that the robot is turning sharply left to avoid the obstacle in front of it. Finally, as shown in Fig. 14, at 1:18 s, the left ground sensor (gs1) reads a much higher value than the right ground sensor (gs3) because the robot is turning right to follow the curvy line (see also Fig. 6(i)). In addition, Fig. 15 shows that at 1:18 s, the left motor speed is much higher than the right motor speed, which again indicates that the robot is turning right to follow the curvy line.
VII. CONCLUSIONS
Recently, mobile robot navigation in unknown environments has been a central research focus in the mobile robot intelligent control domain. In this paper, we developed a line-follower robot that can detect and avoid any obstacle that emerges on its path. It relies on low-cost infrared sensors (distance sensors and ground sensors) to measure the distance to obstacles and the orientation of the robot relative to the line. The Webots simulator was used to validate the effectiveness and performance of the proposed technique. Being simple, the proposed approach can be easily used in a variety of real-time robotic applications.
REFERENCES
[1] L. E. Zarate, M. Becker, B. D. M. Garrido, and H. S. C. Rocha, “An arti-
ficial neural network structure able to obstacle avoidance behavior used
in mobile robots,” in Proc. IEEE 28th Annu. Conf. Ind. Electron. Soc.,
Nov. 2002, pp. 2457–2461.
[2] S. X. Yang and C. Luo, “A neural network approach to complete
coverage path planning,” IEEE Trans. Syst., Man, Cybern. B, Cybern.,
vol. 34, no. 1, pp. 718–724, Feb. 2004.
[3] I. Ullah, F. Ullah, Q. Ullah, and S. Shin, “Integrated tracking and
accident avoidance system for mobile robots,” Int. J. Control, Autom.
Syst., vol. 11, no. 6, pp. 1253–1265, 2013.
[4] N. D. Phuc and N. T. Thinh, “A solution of obstacle collision avoidance
for robotic fish based on fuzzy systems,” in Proc. IEEE Int. Conf. Robot.
Biomimetics (ROBIO), Dec. 2011, pp. 1707–1711.
[5] L. Zeng and G. M. Bone, “Mobile robot collision avoidance in human
environments,” Int. J. Adv. Robot. Syst., vol. 9, pp. 1–14, Jan. 2013, doi:
10.5772/54933.
[6] M. A. Zakaria, H. Zamzuri, R. Mamat, and S. A. Mazlan, “A path
tracking algorithm using future prediction control with spike detection
for an autonomous vehicle robot,” Int. J. Adv. Robot. Syst., vol. 10,
pp. 1–9, Jan. 2013.
[7] J. Borenstein and Y. Koren, “The vector field histogram—Fast obstacle
avoidance for mobile robots,” IEEE J. Robot. Autom., vol. 7, no. 3,
pp. 278–288, Jun. 1991.
[8] M. Zohaib, M. Pasha, R. A. Riaz, N. Javaid, M. Ilahi, and R. D. Khan,
“Control strategies for mobile robot with obstacle avoidance,” J. Basic
Appl. Sci. Res., vol. 3, no. 4, pp. 1027–1036, 2013.
[9] J. A. Oroko and G. N. Nyakoe, “Obstacle avoidance and path planning
schemes for autonomous navigation of a mobile robot: A review,” in
Proc. Sustain. Res. Innov. Conf., vol. 4, 2012, pp. 314–318.
[10] A. Yufka and O. Parlaktuna, “Performance comparison of bug algorithms
for mobile robots,” in Proc. 5th Int. Adv. Technol. Symp., May 2009,
pp. 1–5.
[11] T. Bakker, K. van Asselt, J. Bontsema, J. Müller, and G. van Straten,
“A path following algorithm for mobile robots,” Auto. Robots, vol. 29,
no. 1, pp. 85–97, 2010.
[12] K. M. Hasan, A. Al-Nahid, K. J. Reza, S. Khatun, and M. R. Basar,
“Sensor based autonomous color line follower robot with obstacle
avoidance,” in Proc. IEEE Bus. Eng. Ind. Appl. Colloq. (BEIAC),
Apr. 2013, pp. 598–603.
[13] W. E. Elhady, “Implementation and evaluation of image processing
techniques on a vision navigation line follower robot,” J. Comput. Sci.,
vol. 10, no. 6, pp. 1036–1044, 2014.
[14] Webots, Cyberbotics.com, accessed on Mar. 31, 2015. [Online]. Available: https://www.cyberbotics.com/overview
[15] O. Michel, “Webots™: Professional mobile robot simulation,” Int. J. Adv. Robot. Syst., vol. 1, no. 1, pp. 39–42, 2004.
Marwah M. Almasri received the bachelor’s
degree in computer science and engineering from
Taibah University, Medina, Saudi Arabia, and the
M.B.A. degree in management information systems
from the University of Scranton, PA, in 2011.
She is currently pursuing the Ph.D. degree with
the Computer Science and Engineering Department,
University of Bridgeport. She has been recognized
by Upsilon Pi Epsilon and Phi Kappa Phi honor
society organizations for her academic accomplish-
ments. She received a Scholarship Award from the
Honor Society of Upsilon Pi Epsilon Organization, 2013. She also received the
Engineering Academic Achievement Award from the University of Bridgeport,
2014-2015. She received the Best Paper Award at the IEEE Long Island
Systems, Applications and Technology Conference, 2015. She also received
the Third Position Award in Faculty Research Day at the University of
Bridgeport, 2016. She also received an award from the MIS Department,
University of Scranton, for her outstanding work. Her research interests
include wireless sensor networks, computer networks, mobile computing,
autonomous mobile robots, and data fusion. She has published a number of
research papers in national and international conferences and journals.
Abrar M. Alajlan received the B.Sc. degree
in computer science from Umm Al-Qura
University, Makkah, Saudi Arabia, in 2008, and the
M.Sc. degree in management information systems
from Troy University, Alabama, USA, in 2012. She
is currently pursuing the Ph.D. degree in computer
science and engineering with the University of
Bridgeport. She has been recognized by the Honor
Society of Upsilon Pi Epsilon Organization and
the Honor Society of Phi Kappa Phi Organization.
She also received the Engineering Academic
Achievement Award from the University of Bridgeport, 2015. She received
the Best Paper Award in the applications track from the IEEE Long Island
Systems, Applications and Technology Conference 2015. She is a member of
technical program committees of some international conferences. Her fields
of interest are autonomous ground robots, energy efficiency, obstacle/collision
avoidance, dynamic motion control, and real-time programming. She has
published several papers in influential international journals and conferences.
Khaled M. Elleithy received the B.Sc. degree in
computer science and automatic control and the
M.S. degree in computer networks from Alexandria
University, in 1983 and 1986, respectively, and the
M.S. and Ph.D. degrees in computer science from
the Center for Advanced Computer Studies, Uni-
versity of Louisiana–Lafayette, in 1988 and 1990,
respectively. He is the Associate Vice President for
Graduate Studies and Research with the University
of Bridgeport. He is a Professor of Computer Sci-
ence and Engineering. His research interests include
wireless sensor networks, mobile communications, network security, quantum
computing, and formal approaches for design and verification. He has pub-
lished more than 350 research papers in national/international journals and
conferences in his areas of expertise. He is the Editor or Co-Editor for 12
books published by Springer.
Dr. Elleithy has more than 25 years of teaching experience. He was a
recipient of the Distinguished Professor of the Year, University of Bridgeport,
from 2006 to 2007. He supervised hundreds of senior projects, M.S. theses,
and Ph.D. dissertations. He developed and introduced many new undergrad-
uate/graduate courses. He also developed new teaching/research laboratories
in his area of expertise. His students have won more than 20 prestigious
national/international awards from the IEEE, ACM, and ASEE.
Dr. Elleithy is a member of the technical program committees of many inter-
national conferences as recognition of his research qualifications. He served
as a Guest Editor for several international journals. He was the Chairperson
for the International Conference on Industrial Electronics, Technology and
Automation. Furthermore, he is the Co-Chair and Co-Founder of the Annual
International Joint Conferences on Computer, Information, and Systems
Sciences, and Engineering Virtual Conferences from 2005 to 2014.