3D Perception with Low-cost 2D LIDAR and
Edge Computing for Enhanced Obstacle Detection
Victor Kathan Sarker, Li Qingqing and Tomi Westerlund
Department of Future Technologies, University of Turku, Turku, Finland
Email: {vikasar, qingqli, tovewe}@utu.fi
Abstract—Autonomous small robots are receiving considerable attention for industrial and domestic use. They are equipped with anywhere from a few to numerous sensors that help them understand the operating environment, make appropriate next-step decisions, prioritize actions and operate with little to no human intervention. Light detection and ranging (LIDAR) sensors are widely used for detecting objects around an autonomous robot and for driving while effectively avoiding possible obstacles. However, most low-cost LIDARs available on the market measure the distance of surrounding objects with a rotating LASER beam and an accompanying sensor, thus obtaining only a two-dimensional (2D) map. This is insufficient for extracting enough contextual data for autonomous robot movement. In this paper, we propose a novel approach for enhancing the detection capabilities of a typical 2D LIDAR to yield perception in three dimensions (3D). This can facilitate more efficient object detection while keeping the cost of small robots at an affordable level. The proposed approach requires little customization to achieve reasonably enhanced mapping. Our experiments show that the proposed method provides better mapping and detection capabilities using a commonly available low-cost 2D LIDAR. Furthermore, the integration of Edge computing makes data processing efficient and noticeably increases the battery life of a robot, resulting in a longer run-time.
Index Terms—LIDAR, Low-cost, Obstacle, Detection, 2D, 3D,
Mapping, Enhanced, Autonomous, Robots, Vehicles.
I. INTRODUCTION
In recent years, powerful embedded hardware systems and sensor technologies have been emerging due to the rapid growth of electronics. Small robots and vehicles are becoming ubiquitous, especially in commercial and home environments, for different purposes such as carrying goods and cleaning [1], [2]. These robots usually have small footprints and comprise a relatively simple set of sensors and embedded controllers. In most cases, such robots move from one place to another in a fully-autonomous or semi-autonomous manner to perform the specified tasks. In a multi-robot environment, the robots can communicate with a central controller and with each other to share contextual information such as location, navigational route and possible obstacles. For navigating a certain environment, a robot's controller depends on onboard sensors to collect different kinds of data about the environment it is traversing. For example, ambient light sensors, passive-infrared (PIR) motion sensors, ultrasonic distance sensors, 2D and 3D light detection and ranging (LIDAR) scanners, RGB cameras and depth cameras are commonly used for this purpose, with the latter ones being the most expensive solutions.
For small robots, particularly in a multi-robot system, the 2D LIDAR is most suitable due to its reasonably fast measurement speed, measurement accuracy and cost-effectiveness. A 2D LIDAR has a rotating setup comprising a LASER emitter and a receiver. The distance of an object is calculated by measuring the time between LASER emission and reception of the ray reflected from the object. Fitted on a robot, the LIDAR rotates continuously while performing this procedure and retrieves a 360 degree distance-map constituting a 2D plane [3], [4]. However, with only one distance measurement per direction, i.e., at a specific rotational angle, this procedure can be easily fooled by objects of inconsistent and irregular shape. For example, if a robot is passing underneath a low-height table, the LIDAR might only acquire data about its four legs and not the horizontal part of the table, and thus the robot can end up in a collision. In such a situation, multiple scans at different heights in the same direction can help understand the obstacle better and more accurately.
Small mobile robots are mostly operated by battery power, and sometimes by means of fuel-powered small-scale engines combined with electrical controllers. However, they are highly limited in terms of resources such as computational power, memory and available energy when compared to personal computers and servers. Edge computing is a popular concept where complex processing of data is moved to a nearby local computational unit, such as a powerful gateway, to relieve the processing burden of sensor nodes and devices which need to extract specific information by quickly processing large amounts of data [5]. As mobile robots need to act in near-real-time to safely navigate by avoiding obstacles, a lot of sensor data needs to be processed. In most cases, the processing capability of the onboard embedded controllers is insufficient, or barely sufficient, to do so within a reasonable time. The Edge gateway plays an important role by receiving the data from the robots' sensors, performing complex computation and returning the desired information to the robots. This can be very useful when a large number of robots need to work simultaneously in the same local environment. Also, as individual onboard computers are not required when Edge computing is utilized, it can reduce the cost of implementation [4].
In this paper, we propose a novel way of acquiring 3D perception data to realize a better understanding of the objects in the direction of robot movement by using a varied-angle mirror and a low-cost 2D LIDAR. In addition, we utilize the computing power of the Edge to process the raw data and generate an enhanced object-distance plane within a certain field of view. Although we focus on small surface robots and vehicles in this work, the aforementioned method equally suits marine vehicles for detecting objects and avoiding obstacles, hence yielding smooth navigation. The specific contributions of this work are listed below:
• Propose a novel method of producing 3D-like map data (3D perception) using a 2D LIDAR through experiment,
• Demonstrate a proof-of-concept and visualization of 3D perception from 2D data,
• Integrate Edge computing and compare performance.
The rest of the paper is arranged as follows: Section II reviews related work on obstacle avoidance and mapping using 2D LIDARs while discussing their limitations and the motivation for this work. Section III describes the overall setup, architecture and hardware prototype of our system. Section IV presents the experiments and resulting data, and Section V goes through the limitations of the system and implementation considerations. Finally, Section VI concludes the paper and provides directions for future work.
II. RELATED WORK AND MOTIVATION
To navigate smoothly, small mobile robots need to know about their surroundings, particularly towards the direction the robot moves. However, obstacle detection, and subsequently avoidance, is a challenge for small autonomous and semi-autonomous robots. Many studies have been carried out and different approaches have been proposed.
Catapang et al. presented an obstacle detection method with a 2D LIDAR for an Advanced Driver Assistance System (ADAS) in autonomous vehicles [6]. The authors used the low-cost LIDAR-Lite v1 for distance measurements, and the collected raw data was median-filtered and pre-processed. Using a clustering method, detection was performed to find obstacles with a width of 1 meter at a distance of up to 10 meters. During the experiments, the authors placed the LIDAR at an elevation of 1.5 meters from the ground. However, having results for a single 2D plane is of limited use when obstacles are situated higher or lower than that plane. For example, when moving forward, if an obstacle shorter than 1.5 meters is placed on the ground, the system will not be able to detect it and hence the robot will collide. This makes it less practical when the robot needs to navigate in unknown territory with randomly-shaped objects in the surroundings.
Peng et al. presented an algorithm for effectively avoiding obstacles using an LMS511 2D LIDAR [7]. They incorporated 2D median filtering, point-cloud data preprocessing and obstacle clustering to achieve better obstacle detection while the robot moves. In their implementation, the authors placed the LIDAR at a certain angle to the horizon to detect small objects near the ground. Although the presented simulation shows that obstacles can be adequately identified in complex scenarios, the 2D planar data highly limits the real-life usage of the system. If the algorithm could be used in combination with data from multiple 2D planes, the resulting obstacle detection would be more accurate and useful where the obstacle shape varies, i.e., different widths at different heights.
In a different work, Zheng et al. presented a method to adequately detect obstacles with a low-cost 2D LIDAR and a Raspberry Pi fitted to a UAV [8]. The authors used a velocity estimation method based on polynomial fitting while scanning the surroundings of a drone for easier clustering of the unevenly dense point cloud. They compared experimental results with simulated outcomes to show that their method for detecting obstacles is effective. Although it can find obstacles, the drone remains susceptible to collision as it flies in three-dimensional (3D) space. In such cases, multiple scans at different levels within the vertical span of the drone can ensure that it will be able to avoid an obstacle when heading in a certain direction. For this reason, a technique is required which provides a 3D-like point cloud based on which the drone can extract a 3D surface perpendicular to the direction of flight.
Baras et al. presented an obstacle avoidance system with a Raspberry Pi and a 2D RPLIDAR-A2 to safely navigate in environments with low luminosity [9]. The authors chose LIDAR over camera-based detection using computer vision to make it more affordable and effective for miniature autonomous vehicles. While it does not require high computation and energy, the 2D LIDAR limits the system's capability to sufficiently perceive the surroundings. In addition, if a highly illuminated object or a light source is in the scanning direction while the LIDAR scans, it can totally fail to measure the object distance due to sensor saturation. As before, with multiple scans in one direction at different angles, this problem can be effectively alleviated. Furthermore, as the number of robots increases, the deployment cost will also rise, as each robot needs to have a computer onboard. This can be easily reduced if the data processing is moved to a nearby Edge computational unit.
Mobile robots, especially small indoor ones, often move through complex environments with random obstacles. Keeping the limitations of the aforementioned systems in mind, it is evident that there is a need for a simple and affordable scanning system for perceiving the surroundings of a small robot to enhance obstacle detection and avoidance.
III. SYSTEM DESIGN
In this paper, we propose a novel way of using a typical 2D LIDAR and a mirror to obtain a 3D perception. The following subsections describe the system architecture, illustrate the working principle of our method and clarify the related mathematical operations on the acquired LIDAR data.
A. Architecture
We put forward an architecture for reducing the processing burden of the resource-limited robots, leveraging the computing power of the Edge, and efficiently managing the robots [4]. As shown in Figure 1, it involves three parts: the robots, one or multiple Edge gateways, and an optional user terminal to view the extracted data. In addition, a simplified flowchart of the working principle in a setup consisting of a robot and an Edge gateway is shown in Figure 2.

Fig. 1. Proposed system architecture consisting of the robots, one or more Edge gateways and optional user terminal for visualization.
Fig. 2. Simplified workflow of our setup.
1) Robot: Although a robot can have various sensors and related electronics, we only focus on realizing a 3D perception to enhance obstacle avoidance. A micro-controller (MCU) is used for controlling the LIDAR, and the acquired data is sent to the Edge gateway via a suitable communication medium. The robot is powered by a 4 Ah Li-ion battery. In general, a typical low-cost 2D LIDAR provides raw data on parameters such as the angle of scan, the measured object distance, the intensity of the reflected LASER beam and the quality of the received signal. The LIDAR we used is a low-cost 360-degree 2D LASER scanner, the RPLIDAR A1, developed by Slamtec [10]. For our work, we have used the scan angle and object distance, which are provided in floating point format. When sending the data to the Edge gateway for processing, 360 pairs of angle and distance values are sent in one packet. However, in our case, we limit the scan window according to preset horizontal and vertical angles. In the Arduino IDE with the C language, a float takes 4 bytes. If we have a scan window with an $\alpha_n$ degree vertical and $\beta_n$ degree horizontal angular resolution, then the total transferable data forming a packet is:

Packet size = 2 (types of values) × 4 (bytes) × $\alpha_n$ × $\beta_n$

For example, a 10 × 10 degree scan window will yield 800 bytes of raw data per packet.
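To make this packet layout concrete, below is a minimal Python sketch that builds one such packet. The interleaved (angle, distance) ordering and the little-endian float encoding are illustrative assumptions rather than a specification of our firmware's exact wire format.

import struct

ALPHA_N = 10  # vertical span of the scan window in degrees (example)
BETA_N = 10   # horizontal span of the scan window in degrees (example)

def build_packet(samples):
    # Pack (angle, distance) float pairs into one binary packet.
    # Assumes little-endian 4-byte floats; the firmware's actual
    # byte order is an implementation detail not fixed here.
    return b"".join(struct.pack("<ff", angle, dist) for angle, dist in samples)

# One dummy sample per (vertical, horizontal) step of the scan window.
samples = [(float(b), 1000.0) for a in range(ALPHA_N) for b in range(BETA_N)]
packet = build_packet(samples)
print(len(packet))  # 2 values * 4 bytes * 10 * 10 = 800 bytes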
We have chosen Bluetooth communication for connecting the robot to the Edge gateway. Low-cost Bluetooth modules often suffer from transmission jitter, and the efficiency of communication can be affected by interference from other transceivers which use the 2.4 GHz band. Also, the shorter the transmission period, the lower the probability of interference. Therefore, to ensure smooth transfer of data and to have extra bandwidth margin, a baud rate of 921.6 kbps is chosen. In addition, this keeps the energy drawn by the communication module from the robot's power source low, owing to the shorter transmission period compared to slower baud rates. Furthermore, it leaves time for other required pre-processing of the data, such as encryption, while ensuring data packets are ready at the gateway for timely processing.
2) Edge Gateway: The Edge gateway is a powerful computer powered from a 220 V AC mains outlet and connected to the robots via Bluetooth for processing scan data, performing analysis and accordingly sending the robots' navigation control commands. It is also responsible for data pre-processing, such as encryption and compression before sending data to a robot, and decryption and decompression after receiving data from it. The connected robots send scan data to the gateway continuously while simultaneously receiving navigational control commands such as the speed and direction of the robot. In our case, we have used the Aaeon UP-GWS01 with an Intel Up board as the Edge gateway [11], running the Ubuntu 16 operating system. If the number of robots is large, one gateway may not be sufficient to perform the data processing in a timely manner. Therefore, multiple gateways should be deployed to span the whole area the robots need to roam. In addition, this allows distribution of the processing tasks among the gateways and seamless handover of a robot from one gateway to another when it moves between coverage areas.
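As an illustration of the gateway's receive path, the following Python sketch reads fixed-size scan packets over a serial link and unpacks the (angle, distance) pairs. The device path /dev/rfcomm0 (a typical binding for an HC-05 link on Linux) and the fixed 800-byte framing are assumptions for this sketch; a real deployment needs explicit packet synchronization.

import struct
import serial  # pyserial

PACKET_SIZE = 800  # 2 values * 4 bytes * 10 * 10 window (see above)

# /dev/rfcomm0 is a typical path after binding the HC-05; adjust as needed.
with serial.Serial("/dev/rfcomm0", baudrate=921600, timeout=1.0) as link:
    while True:
        buf = link.read(PACKET_SIZE)
        if len(buf) < PACKET_SIZE:
            continue  # incomplete packet; real framing/sync is needed here
        # Unpack the interleaved (angle, distance) float pairs.
        values = struct.unpack("<%df" % (PACKET_SIZE // 4), buf)
        pairs = list(zip(values[0::2], values[1::2]))
        # ... hand `pairs` to the 3D reconstruction step (Section III-C)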
3) User Terminal: Although autonomous and semi-autonomous systems are expected to work with little human intervention, authorities and users are provided with a control panel or terminal. This makes it possible to define actions and rules while providing the possibility of manual takeover in case of emergency. In our work, a simple GUI-based terminal is implemented at the gateway in the Python language for configuring scan parameters and viewing the results. Whilst not implemented in this paper, the Edge gateway can be connected to the cloud via a high-speed Internet connection to provide virtually unlimited storage and enable global access to robot control.
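As an example of what such a viewer can look like, the sketch below plots received scan points as a 3D scatter with Matplotlib. The point list is a placeholder, and the plot layout is our choice for illustration rather than the GUI described above.

import matplotlib.pyplot as plt

# Placeholder points (x, y, z), e.g., as produced by Equation 5.
points = [(120.0, 50.0, 30.0), (125.0, 55.0, 32.0), (130.0, 60.0, 35.0)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs, ys, zs = zip(*points)
ax.scatter(xs, ys, zs, marker=".")
ax.set_xlabel("x (mm)")
ax.set_ylabel("y (mm)")
ax.set_zlabel("z (mm)")
plt.show()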
B. Prototype
To provide a proof-of-concept and validate the performance of our method for realizing a 3D scan from 2D LIDAR data, a prototype was implemented, which is shown in Figure 3. The prototype's mechanical structure consists of a 2D LIDAR, a servo, and a mirror fixed to the servo. The rotation of the servo changes the orientation/tilt angle of the mirror with respect to the LIDAR's direction of LASER ray emission. The mirror is placed in such a way that both the emitted ray and the ray returning to the LIDAR's receiver after being reflected from a possible object are incident on the mirror surface across the whole span of the mirror's tilt movement. For collecting data from the LIDAR and controlling the servo, we have used an Arduino Uno R3 [12], which is based on the energy-efficient and low-cost Microchip ATMega328P [13] running at a 5 V supply voltage. The default 16 MHz crystal oscillator is replaced with a 14.7456 MHz variant to ensure a 0 % baud rate error within the MCU and to match the baud rate of the communication module. For reducing the power consumption, a lower clock frequency such as 1.8432 MHz to 3.6864 MHz can be used, however, at the cost of slightly increased processing latency. For connecting to the Edge gateway, we have used an HC-05 serial Bluetooth adapter [14]. However, if long range is required, an nRF24L01+ module [15] with an external antenna from Nordic Semiconductor should be used.
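As a worked check of the crystal choice, using the standard AVR normal-speed asynchronous UART divisor formula (which applies to the ATmega328P), $UBRR = f_{osc}/(16 \cdot baud) - 1$, we get $14{,}745{,}600/(16 \times 921{,}600) - 1 = 0$ exactly. With the stock 16 MHz crystal, the nearest divisor would instead produce $16{,}000{,}000/16 = 1{,}000{,}000$ baud, roughly 8.5 % above the target rate.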
Fig. 3. Implemented prototype of our method.
As the ATMega328P has only one hardware serial (UART) port, a software-based serial port was used for the infrequent and small-sized data such as control commands. In contrast, the hardware UART receive line is used for getting data from the LIDAR, and the UART transmit line is used for connecting to the Bluetooth module for sending data to the gateway. This eliminates the need for a more expensive MCU with two hardware serial ports. The firmware is written in the C language and compiled using the Arduino IDE. For controlling the parameters and fetching measurement data from the LIDAR, RPLidarDriver from RoboPeak is used [16]. Scanning parameters such as the horizontal and vertical scan window sizes are customizable on-the-fly. The scan window is set towards the forward movement direction of a small robot.
When our proposed setup is fitted in an upright position, the 3D scan window is above the horizontal level at which the LIDAR is placed. However, if the setup is mounted upside down, the window is below the horizon level and can hence be used for marine surface boats, where obstacles are near the water level. The two scenarios can be visualized in Figure 4.
Fig. 4. Example scenario for land and marine vehicles, respectively.
Fig. 5. Working principle of our proposed method.
C. Data Processing
After acquiring the raw data from the 2D LIDAR, it must be processed to construct the 3D perception and provide planar information for the scan window. Figure 5 shows the working procedure and related notation of our setup. The mirror is inclined with respect to the LIDAR scan plane at an angle $\alpha$. The scan data points provided by the LIDAR are in polar coordinates $(r, \theta)$, where $\theta = 0$ when the LASER ray is perpendicular to the line of intersection between the LIDAR scan plane and the mirror plane. While scanning, the rotational frequency of the RPLIDAR A1 [10] can reach 7.5 Hz on average while acquiring 360 data points with each rotation.
As Figure 5 shows, when the LASER ray from the LIDAR hits the mirror surface at point $M$ and reflects to point $P$, we can get the distance $L$ from the LIDAR's raw data, which equals $|OM| + |MP|$. The point $H$ is hit by the LASER ray when the LIDAR's rotation angle $\beta$ is zero. Our method is based on the law of reflection as shown in Equation 1,
$$\hat{k}_2 = \hat{k}_1 - 2(\hat{k}_1 \cdot \hat{n})\,\hat{n} \quad (1)$$
where the hats indicate unit vectors, $\hat{k}_1$ is the incident ray, $\hat{k}_2$ is the reflected ray, and $\hat{n}$ is the mirror's surface normal. For a plane mirror, its normal vector can be represented with $(x, y, z)$ components $(n_x, n_y, n_z)$. If we use the standard vector representation with unit vectors, this translates to

$$\hat{n} = n_x\hat{i} + n_y\hat{j} + n_z\hat{k}$$

Furthermore, if we express the normal vector of the mirror as a matrix, it can be defined as

$$\mathbf{n} = \begin{bmatrix} n_x & n_y & n_z \end{bmatrix}^T$$
Besides, the vector law of reflection can be conveyed in matrix form as

$$\mathbf{K}_2 = \mathbf{M}\mathbf{K}_1$$

Here, $\mathbf{K}_1$ is the incident ray, $\mathbf{K}_2$ is the reflected ray, and $\mathbf{M}$ is the reflection matrix of the mirror. The reflection matrix $\mathbf{M} = \mathbf{I} - 2\mathbf{n}\mathbf{n}^T$ can be expanded as

$$\mathbf{M} = \begin{bmatrix} 1-2n_x^2 & -2n_xn_y & -2n_xn_z \\ -2n_xn_y & 1-2n_y^2 & -2n_yn_z \\ -2n_xn_z & -2n_yn_z & 1-2n_z^2 \end{bmatrix} \quad (2)$$
Fig. 6. Typical scan using a 2D LIDAR fixed on the top of a robot. The
red line indicates the axis of the rotating LIDAR sensor. The perimeter of
surrounding obstacles is shown in blue.
As shown in Figure 5, we utilize the LIDAR coordinate system, in which the normal vector of the mirror can be expressed as $\begin{bmatrix} -\sin\alpha & 0 & -\cos\alpha \end{bmatrix}^T$. In our case, $n_x = -\sin\alpha$, $n_y = 0$, and $n_z = -\cos\alpha$. In addition, the unit vector of the incident ray, in matrix form, can be represented as shown in Equation 3.

$$\mathbf{I}_r = \begin{bmatrix} -\cos\beta & \sin\beta & 0 \end{bmatrix}^T \quad (3)$$
Therefore, the unit vector of the reflected ray $\mathbf{R}_r$, in matrix form, is:

$$\mathbf{R}_r = \mathbf{M}\mathbf{I}_r = \begin{bmatrix} 1-2\sin^2\alpha & 0 & -2\sin\alpha\cos\alpha \\ 0 & 1 & 0 \\ -2\sin\alpha\cos\alpha & 0 & 1-2\cos^2\alpha \end{bmatrix} \begin{bmatrix} -\cos\beta \\ \sin\beta \\ 0 \end{bmatrix}$$

or,

$$\mathbf{R}_r = \begin{bmatrix} -(1-2\sin^2\alpha)\cos\beta & \sin\beta & 2\sin\alpha\cos\alpha\cos\beta \end{bmatrix}^T \quad (4)$$

From (3) and (4), we can get the coordinates of the point $P$ as follows:

$$P(\alpha, \beta) = (|OH|\tan\beta)\,\mathbf{I}_r + (L - |OH|\tan\beta)\,\mathbf{R}_r \quad (5)$$
It is observable that Equation 5 is a function of $\alpha$ and $\beta$ and can be used for locating a point of an obstacle within the scan window of the system. As the mirror angle $\alpha$ tends to zero, the reflected rays are increasingly affected by the quality and thickness of the glass used in the mirror, eventually reflecting nothing due to total internal reflection of light passing through media of varying density. When using this system, the scanner unit should be placed in such a way that the mirror can cover the target scan window, i.e., the area of which we want to construct a 3D perception.
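To make the chain from a raw measurement $(\alpha, \beta, L)$ to a 3D point concrete, the following Python sketch implements Equations 1 through 5 for a single sample. It is a minimal illustration that follows the formulas above; the numeric inputs are placeholders, and this is not the code used in our experiments.

import numpy as np

def reflection_matrix(n):
    # M = I - 2 n n^T for a unit mirror normal n (Equation 2).
    n = np.asarray(n, dtype=float).reshape(3, 1)
    return np.eye(3) - 2.0 * (n @ n.T)

def scan_point(alpha, beta, L, OH):
    # alpha: mirror tilt, beta: LIDAR rotation angle (both in radians);
    # L: total path |OM| + |MP| from the raw data; OH: distance |OH|.
    n = np.array([-np.sin(alpha), 0.0, -np.cos(alpha)])  # mirror normal
    i_r = np.array([-np.cos(beta), np.sin(beta), 0.0])   # incident ray (Eq. 3)
    r_r = reflection_matrix(n) @ i_r                     # reflected ray (Eq. 4)
    om = OH * np.tan(beta)                               # per Equation 5
    return om * i_r + (L - om) * r_r                     # point P (Eq. 5)

# Example: 30-degree mirror tilt, 5-degree scan angle, 1500 mm total path,
# mirror 100 mm away at beta = 0 (all values illustrative).
print(scan_point(np.radians(30), np.radians(5), 1500.0, 100.0))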
IV. EXPERIMENTAL RESULTS
In general, a 360 degree scan consisting of the distance and angle of objects surrounding the LIDAR is obtained to create a 2D map, as shown in Figure 6. On the other hand, Figure 7 shows a scan output constructed using our method. From the figure, it is evident that the method introduced in this work can provide a 3D perception within the scan window of interest, e.g., the view area towards the driving direction of the robot. During our experiment, the scan achieved a measurement accuracy of 96 % to 99.5 %. The drift in accuracy is observed to be least at the center and greatest near the scan window border (marked with green dots in Figure 4). This inaccuracy occurred due to the modest quality of the mirror used, because the reflection preciseness changes with the LASER ray's angle of incidence.

Fig. 7. Scan result (within the scan window of interest) with our method using a 2D LIDAR. The red dot implies the placement of our prototype.
Table I shows a comparison of the operational latency for processing a single scan, the current draw and the battery life between two different setups: (a) a LIDAR with onboard processing on a Raspberry Pi 4 (RPi4), and (b) a LIDAR, an MCU-based controller and an Edge Gateway (Edge GW). It is observable that an additional transmission latency is incurred when processing data at the Edge gateway. However, the data processing performance of the system and the operational run-time of the robot are improved. Furthermore, for additional security, the ChaCha8-256 stream cipher [17] can be used to encrypt the serial data. However, during the experiment, we observed a slight increase in processing latency of about 7.4 ms per data packet when applying encryption.
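For context, a sketch of stream-cipher encryption of one scan packet is shown below. Since ChaCha8 is rarely exposed by mainstream Python libraries, this example uses the related ChaCha20 cipher from PyCryptodome as a stand-in, so the latency figure above does not apply to it directly.

from Crypto.Cipher import ChaCha20  # pip install pycryptodome
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)    # 256-bit key, pre-shared with the gateway
nonce = get_random_bytes(12)  # must be unique for every packet

packet = b"\x00" * 800  # one raw scan packet (see Section III)
ciphertext = ChaCha20.new(key=key, nonce=nonce).encrypt(packet)

# The receiver rebuilds the cipher with the same key/nonce to decrypt.
assert ChaCha20.new(key=key, nonce=nonce).decrypt(ciphertext) == packet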
TABLE I
COMPARISON OF OPERATIONAL LATENCY AND BATTERY LIFE OF ROBOT WHEN PROCESSING ONBOARD VS AT THE EDGE GATEWAY

Operation    Processing Latency  Transmission  Average       Battery
mode         (ms / scan)         Latency (ms)  Current (mA)  Life (h)
L + RP       11.37               0             941.6         4.23
L + C + EGW  7.42                3.47          601.3         6.65

Here, L = LIDAR, RP = Raspberry Pi 4, C = Controller, EGW = Edge Gateway.
V. DISCUSSION
When designing a mapping system that allows a mobile robot to sufficiently understand the environment it is navigating in by actively bypassing obstacles in its way, measurement precision plays a key role. Although our system provides better contextual data compared to that from a plain 2D LIDAR, the accuracy of the system depends on several factors. When implementing our proposed method, these must be considered to obtain optimal acquisition results.
In our experiment, we have used the RPLIDAR A1, which is a low-cost LIDAR. Although its quality is sufficient for extracting a rudimentary 360 degree distance map, due to the low-cost mechanical construction of the device, the positions of the detected points can fluctuate slightly even when the LIDAR and the objects are stationary. A higher quality but still affordable LIDAR can be used to improve the measurement accuracy. In addition, we used a mirror to guide the LASER from the LIDAR to different angles. The quality of the mirror plays an important role due to factors such as the thickness of the glass, the quality of the reflective material and the uniformity of the mirror glass. These can result in variations in the intensity and angle of the reflected LASER beam. In addition, the servo used for tilting the mirror has an angular precision of 1 degree, which is sufficient for perceiving nearby obstacles. However, for objects at a distance, it is better to have more precise control of the mirror angle to yield a higher resolution.
A LIDAR measures distance by measuring the strength of, and the time difference between, the LASER emission and its reception after reflecting off an object's surface. While experimenting, we observed that the quality of the received ray varies depending on the shape, color and reflective properties of the surface, especially when the object is not near. While this does not affect the small robot's obstacle avoidance capability, care should be taken so that the speed of acquiring and processing data is considerably higher than the speed of the vehicle, to ensure safe and smooth navigation.
Although the use of a powerful Edge gateway can take the burden off the small robots, the intermediate communication can affect the performance of the system. When selecting a data transfer method, factors such as available bandwidth, channel congestion, latency and energy consumption should be carefully considered. In our experiments, we used a higher data transfer rate than the actual requirement to have some bandwidth and transmission-jitter headroom. This should be adjusted in accordance with the chosen communication medium to maximize the use of available bandwidth while ensuring data integrity. In addition, as robots and humans often operate side-by-side, any breach of the system can cause damage to property and injury to humans. Therefore, strict authorization and encryption of data, especially when transferring control commands, must be ensured. Furthermore, when the number of robots increases (not shown in this work), parallel communication can become more difficult to handle. Besides, the gateway must be chosen in such a way that it can process data from all the robots connected to it within a certain deadline.
VI. CONCLUSION AND FUTURE WORK
With the improvement of embedded systems, powerful computing capability and low-cost LIDAR scanners, autonomous mobile robots and vehicles are becoming more popular day by day. One key aspect of autonomous movement is detecting surrounding objects and effectively avoiding them while navigating. The cost of implementing such robots equipped with obstacle detection can be high, especially when deploying a large number of robots. In this paper, we have proposed a novel way of obtaining 3D perception using a 2D LIDAR for enhanced obstacle detection. The method is low-cost to build and can be adapted with little modification to existing systems which only use 2D LIDARs. Results from our experiments show that our proposed method can be used to obtain an enhanced perception of the robot's surroundings. In addition, we demonstrated an example of processing raw data at an Edge gateway with significantly more powerful computational capability and ample resources. We compared our proposed Edge computing-based architecture against typical onboard processing of data in terms of latency and energy-efficiency. When deploying several robots, this method can be especially useful for simultaneous processing of raw data and smooth navigation of the robots.
In the future, we plan to implement a 2D LIDAR and prism-based scanning mechanism to produce a 360 degree 3D obstacle map. Furthermore, the system will be improved by implementing error correction methods so that the results are more accurate.
REFERENCES
[1] Robopeak. Rplidar arduino. [Online] Available: https://github.com/robopeak/rplidar_arduino. Accessed: Jan. 2, 2020.
[2] iRobot. iRobot Floor Cleaning Robots. [Online] Available: https://witt.zone/irobot/home-uk/. Accessed: Jan. 20, 2020.
[3] S. Chan, P. Wu, and L. Fu. Robust 2D Indoor Localization Through Laser SLAM and Visual SLAM Fusion. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 1263–1268, Oct 2018.
[4] V. K. Sarker, J. Peña Queralta, T. N. Gia, H. Tenhunen, and T. Westerlund. Offloading SLAM for Indoor Mobile Robots with Edge-Fog-Cloud Computing. In 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), pages 1–6, May 2019.
[5] General Electric. What is Edge Computing? [Online] Available: https://www.ge.com/digital/blog/what-edge-computing. Accessed: Jan. 10, 2020.
[6] A. N. Catapang and M. Ramos. Obstacle detection using a 2D LIDAR system for an Autonomous Vehicle. In 2016 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pages 441–445, Nov 2016.
[7] Y. Peng, D. Qu, Y. Zhong, S. Xie, J. Luo, and J. Gu. The obstacle detection and obstacle avoidance algorithm based on 2-D lidar. In 2015 IEEE International Conference on Information and Automation, pages 1648–1653, Aug 2015.
[8] L. Zheng, P. Zhang, J. Tan, and F. Li. The Obstacle Detection Method of UAV Based on 2D Lidar. IEEE Access, 7:163437–163448, 2019.
[9] N. Baras, G. Nantzios, D. Ziouzios, and M. Dasygenis. Autonomous Obstacle Avoidance Vehicle Using LIDAR and an Embedded System. In 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST), pages 1–4, May 2019.
[10] Slamtec. RPLIDAR A1. [Online] Available: http://www.slamtec.com/en/lidar/a1. Accessed: Dec. 15, 2019.
[11] Aaeon. UP-GWS01. [Online] Available: https://www.aaeon.com/en/p/tiny-gateway-system-with-upboard. Accessed: Jan. 5, 2020.
[12] Arduino. Arduino Uno Rev.3. [Online] Available: https://store.arduino.cc/arduino-uno-rev3. Accessed: Nov. 20, 2019.
[13] Microchip. ATmega328P - 8-bit AVR Microcontrollers. [Online] Available: https://www.microchip.com/wwwproducts/en/ATmega328p. Accessed: Nov. 20, 2019.
[14] HC-05 Serial Bluetooth Module. [Online] Available: https://electrosome.com/hc-05-serial-bluetooth-module/. Accessed: Jan. 5, 2020.
[15] Nordic Semiconductor. nRF24 Series. [Online] Available: https://www.nordicsemi.com/Products/Low-power-short-range-wireless/nRF24-series. Accessed: Jan. 5, 2020.
[16] Mobile Industrial Robots. Five Collaborative Mobile Robots Applications. [Online] Available: https://www.mobile-industrial-robots.com/en/resources/whitepapers/5-collaborative-mobile-robots-applications/. Accessed: Jan. 20, 2020.
[17] Internet Research Task Force (IRTF). ChaCha20 and Poly1305 for IETF Protocols. [Online] Available: https://tools.ietf.org/html/rfc7539. Accessed: Dec. 15, 2019.