Collaborative Mapping with IoE-based
Heterogeneous Vehicles for Enhanced Situational
Awareness
Jorge Peña Queralta1, Tuan Nguyen Gia1, Hannu Tenhunen2, Tomi Westerlund1
1Department of Future Technologies, University of Turku, Turku, Finland
2Department of Electronics, KTH Royal Institute of Technology, Sweden
Email: {jopequ, tunggi, tovewe},
Abstract—The development of autonomous vehicles or ad-
vanced driving assistance platforms has had a great leap for-
ward getting closer to human daily life over the last decade.
Nevertheless, it is still challenging to achieve an efficient and
fully autonomous vehicle or driving assistance platform due to
many strict requirements and complex situations or unknown
environments. One of the main remaining challenges is a ro-
bust situational awareness in autonomous vehicles in unknown
environments. An autonomous system with a poor situation
awareness due to low quantity or quality of data may directly or
indirectly cause serious consequences. For instance, a person’s
life might be at risk due to a delay caused by a long or
incorrect path planning of an autonomous ambulance. Internet of
Everything (IoE) is currently becoming a prominent technology
for many applications such as automation. In this paper, we
propose an IoE-based architecture consisting of a heterogeneous
team of cars and drones for enhancing situational awareness
in autonomous cars, especially when dealing with critical cases
of natural disasters. In particular, we show how an autonomous
car can plan in advance the possible paths to a given destination,
and send orders to other vehicles. These, in turn, perform terrain
reconnaissance for avoiding obstacles and dealing with difficult
situations. Together with a map merging algorithm deployed into
the team, the proposed architecture can help to save traveling
distance and time significantly in case of complex scenarios.
Index Terms—swarm robotics, heterogeneous swarms, cooperative mapping, Internet-of-Everything (IoE), situational awareness

I. INTRODUCTION
The development of autonomous vehicles, such as self-driving
cars, has advanced significantly over the last
decade. However, it remains challenging to achieve fully autonomous
vehicles due to a variety of open problems related to technical,
ethical, cyber-security, and legal issues [1–5]. Autonomous
vehicles need to fulfill strict requirements of reliability and
be efficient in terms of energy consumption, path planning, or
obstacle avoidance. In addition, autonomous vehicles must be
able to achieve high levels of situational awareness. In some
critical cases related to natural disasters such as sinkholes
or earthquakes, serious consequences may occur due to a
delay caused by the inefficiency of path planning and lack of
situational awareness. For instance, endangered citizens may
not be saved in time because an emergency vehicle cannot
reach its destination. Although many autonomous vehicle systems
offer some level of situational awareness, they fail to model
the complexity of the infrastructure (e.g., roads blocked due
to fallen trees or collapsed houses) during or after natural
disasters. These situations might also occur in rescue missions
in remote areas where detailed maps are not available, or
exploration missions in underdeveloped countries, including
those for delivery of humanitarian aid.
Internet of Everything (IoE) can be defined as a virtual
platform where virtual objects, physical objects, humans, data,
and processes are interconnected and communicate with
each other. IoE can be considered an expansion of the Internet-
of-Things (IoT) where several advanced technologies such
as compressed sensing, mesh wireless communication and
hybrid cloud/fog computing architectures are involved. With
an increasingly ubiquitous IoE, connected vehicles are becoming
a closer reality, as an essential part of realizing fully
autonomous operation [6, 7]. Therefore, it is expected that
autonomous vehicles will no longer rely only on data from
on-board sensors for local mapping, localization, and opera-
tion. A large network of interconnected vehicles will provide
enhanced situational awareness and more accurate and efficient
mapping. A better understanding of the environment is crucial
for the improvement of autonomous operation technology
[1, 8, 9].
In order to provide situational awareness for autonomous
vehicles, we combine IoE with a swarm of vehicles.
In particular, we propose a new architecture for
cooperative mapping in an unknown environment with a target
destination by a group of heterogeneous vehicles. We focus on
scenarios where the main vehicle has been given an objective
destination and uses a heterogeneous team of support vehicles
to gain an enhanced situational awareness via map merging.
We give examples of a real scenario in which the support units
significantly influence the global path planning.
The main contribution of this work is to provide a proof of
concept for the coordination of collaborative mapping within
an autonomous team of heterogeneous aerial and land robots.
We propose an IoE-based architecture for task assignment
and coordination for local map merging. We put a focus
on mapping of unknown areas, with a potential application
in search-and-rescue missions in post-disaster scenarios. The
architecture we propose is meant for optimizing path planning
and minimizing traveling time and distance towards a known
destination.

Fig. 1. System architecture

The exploration work is meant to be carried out
in an open environment, thus we assume that GNSS-based
positioning is available for all vehicles in the reconnaissance
mission. If GPS is not available during short periods of time,
then inertial measurement units (IMU) and other information
such as vision odometry or lidar odometry will be used to
estimate the position based on the last known global position.
The paper is organized as follows: Section II presents
related work. Section III introduces an IoE-based architecture
for enhancing situational awareness. Section IV discusses the
implementation and experimental results. Section V concludes
the work.
II. RELATED WORK

Multi-vehicle localization and mapping was first proposed
in 2002 by Williams et al. [10]. The authors propose a
novel methodology for fusing local maps from multiple agents
onto a shared global map. In particular, they simulated a
generalization of a Constrained Local Submap Filter to a
multi-agent system. The authors also consider the addition of
new agents after the mapping has already started and provide
a solution for estimating the relationship between the relative
frames of reference.
Li et al. demonstrate in [11] the benefits that cooperative
mapping can offer in challenging environments in comparison
with a single robot. They propose a procedure for merging
occupancy grid maps in outdoor environments. Their approach
includes both estimating indirect relative positions of different
vehicles, and a merging function based on occupancy likeli-
hood. The authors demonstrate the utility of their proposed
architecture in challenging scenarios where two autonomous
vehicles might be too close to each other and therefore
partially block each other's vision. Map merging techniques can be
applied in such a scenario to enhance the situational awareness
of both vehicles and improve their path planning and obstacle
avoidance capabilities.
When considering the application of a collaborative mapping
algorithm in a real scenario, one must take into
account the amount of data that needs to be transferred and
the bandwidth of the network that is used for coordinating a
team of multiple robots and merging their local maps in real
time. In particular, if the deployment of such a team occurs in
a post-disaster scenario, networking infrastructure might be
damaged and only mobile networks with lower bandwidth
are available. Mostofi et al. have demonstrated the efficiency
of a collaborative mapping algorithm for a team of UAVs
using compressed sensing to reduce the amount of data to
be transferred among the agents [12].
Cooperative mapping of unknown environments by a team
of heterogeneous robots has already been proposed. For in-
stance, Masehian et al. recently present a solution for the
coordination and assignment of tasks within the cooperation
of multiple robots with the objective of completing a map of
an unknown environment [13]. They deploy a team of land
robots with different sensor capabilities. The authors specif-
ically focus on the assignment of different tasks to optimize
the amount of information gathered by different robots as a
function of their sensor capabilities. Also, they introduce an
enhanced line merging methodology with fuzzy membership
functions for the different robots in order to properly decide
on the mergeability of lines from different maps.
As mentioned in Section I, one of the potential application
areas for collaborative mapping of a heterogeneous team of
robots is a post-disaster scenario. An earthquake, a typhoon
or a tsunami can cause a transformation of a well pre-
defined or established map into an unknown environment.
Moreover, it is often unsafe for humans such as inspectors or
lifeguards to explore the damaged area soon after the disaster
and before the damage has been properly evaluated. This is
so because of the potentially unstable structures that might
remain in the area.

Fig. 2. System Operation

Michael et al. carried out an experiment to
collaboratively create a 3D map of a compromised building
in Sendai, Japan, that was affected by the 2011 Tohoku
earthquake [14]. Their experiment produced
several 3D voxel grid-based maps of the top three floors of the
building. However, the authors discuss that the maps obtained
from the scans might be too coarse to be used in a real search-
and-rescue operation. Nonetheless, high-quality 3D maps can
be achieved with more advanced sensors and improved sensor
fusion. While the vehicles that they used in the experiment
were tele-operated and not autonomous, the authors stressed
how autonomous operation would improve the outcome of
their mission.
III. SYSTEM ARCHITECTURE

We propose an IoE-based system architecture for enhancing
situational awareness in autonomous vehicles. The architecture
shown in Fig. 1 consists of a swarm of heterogeneous support
units, the main vehicle, cloud servers, and an end-user terminal.
The main decisions are taken by the main vehicle, which
is given a target position to travel to. In order to optimize the
path planning and reduce the traveling time and distance, the
main vehicle surveys the positions of potential support units
and sends initial commands to those that might be near the
expected route. The support units can also be other common
vehicles (e.g., cars) or special support units (e.g., robots, cars,
and drones). The support units gather data and generate local
maps, which are sent back to the main vehicle for map
merging. Depending on the application, special support units
can be deployed at specific points or sent to the target area
to check for obstacles and free paths. Some of the special
support units can have UAVs in order to survey and collect
information of large areas. The collected data from on-board
sensors of the support units and drones is processed at the
special support units in order to generate a geographical local
map. Due to the limited battery capacity, UAVs controlled by
the special support units only fly when needed. After each
mission, the batteries of the UAVs are charged with the assistance
of the special support units. UAVs can also be deployed from
the main vehicle, but this is not necessary, as UAVs cannot fly
over the entire course of the main vehicle's mission and are more
efficient at key points during short reconnaissance missions.
The support units are interconnected with each other and
connected to cloud services via 4G/5G wireless networks.
Processed data (i.e., a local map) from all vehicles including
main and support units will be transmitted to cloud servers for
storing and further processing. End-users such as system
administrators or drivers can access the map with real-time
positions of the main vehicle via a browser.
We put the focus on the system-level architecture and defi-
nition of commands and data flow. However, the architecture
assumes that the positioning of all units with respect to a global
coordinate system, such as GPS, is known when the data is
shared. In Section IV, we use several ground vehicles equipped
with GNSS sensors, Lidars and IMUs to test different parts of
the proposed architecture.
In this section, we introduce the algorithms for path plan-
ning, coordination of support units and map merging. The pro-
posed mapping algorithm achieves the best results when some
a priori information of the objective environment is available
in advance, so that support units can perform reconnaissance
in predetermined areas and therefore minimize the time for
mapping. This relates to those applications in which a general
map of the area is given, but details are unknown.
Due to the constrained computational resources of small
UAVs, while the operation is autonomous, the drone path
planning is performed on the special support land units which
host the drone. Moreover, data analysis, compression, and
sensor fusion algorithms are implemented on the drone hosts.
A. Mapping and merging local maps
One of the key aspects of the proposed architecture is
merging of local maps from the main vehicle and different
support units into a single global map. In order to achieve
the target, it is required that all vehicles are interconnected
and communicate with each other, and connected to remote
cloud servers if remote monitoring or control is necessary.
Accordingly, they share their positions (e.g., GPS data)
referenced in a global coordinate system. In case these vehicles
cannot connect to the Internet, they have to be connected to
the same local network, such as a Bluetooth 5 or LoRa mesh
network. A position in a global reference may be estimated via
simultaneous localization and mapping (SLAM) algorithms,
IMU integration, odometry or other methods if GNSS sensors
or other similar methods are not available. However, in all
those cases, the initial position of the vehicle must be known.
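As a brief, hedged sketch of this fallback (the function and variable names are illustrative, not from the paper), a position can be dead-reckoned from the last known global fix by integrating odometry samples:

```python
import math

def dead_reckon(last_fix, samples, dt):
    """Estimate the current (x, y) position from the last known
    global fix by integrating odometry (speed, heading) samples.

    last_fix : (x, y) in meters, last known global position
    samples  : list of (speed_mps, heading_rad) odometry readings
    dt       : sampling interval in seconds
    """
    x, y = last_fix
    for speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Example: driving due east at 2 m/s for 3 samples of 1 s each
print(dead_reckon((0.0, 0.0), [(2.0, 0.0)] * 3, 1.0))  # (6.0, 0.0)
```

In practice IMU biases make pure integration drift quickly, which is why the architecture treats this only as a short-term fallback until the GNSS fix returns.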
Each of the special support units gathers data about its
environment using on-board sensors. Obstacles are detected
and stored in a grid occupancy map.

(a) Local map 1 (b) Local map 2 (c) Local map 3 (d) Merged map
Fig. 3. Real-time map merging in an indoor environment with a known relative positioning. The first three graphs (a-c) show a 12 m × 12 m map with each cell representing 1/100 m². The last graph (d) shows the merged map. The units in all four maps are 1/10 m.

We take into account
divergent measurements in consecutive mappings by using a
value in the interval [0,1] to represent each cell in the grid.
The cell value is increased by a fixed value δ > 0 whenever an
obstacle is detected in that cell, and decreased by a fixed value
ε < δ whenever it is detected as free. When merging maps
from support units into the main vehicle’s global map, the
main vehicle’s map values are given preference with respect
to new values. This is because of the larger inherent error of
the support unit’s map due to transmission latency and error in
the estimation of its relative position. New values are assumed
true for unknown cells, and cells with a higher value in the
new local map are given the average value.
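A minimal sketch of this update and merging rule, assuming a dictionary-based grid where None marks unknown cells, and illustrative values for δ and ε (the paper only requires 0 < ε < δ):

```python
DELTA, EPSILON = 0.2, 0.1  # illustrative values; the paper only requires 0 < ε < δ

def update_cell(value, obstacle_detected):
    """Update one grid cell in [0, 1]: increase by DELTA when an
    obstacle is detected, decrease by EPSILON when observed free."""
    value += DELTA if obstacle_detected else -EPSILON
    return min(1.0, max(0.0, value))

def merge_maps(main_map, local_map):
    """Merge a support unit's local map into the main vehicle's map.
    The main map is preferred; unknown cells (None) take the new
    value, and cells with a higher value in the local map are set
    to the average of the two values."""
    merged = {}
    for cell, main_val in main_map.items():
        new_val = local_map.get(cell)
        if main_val is None:                      # unknown in the main map
            merged[cell] = new_val
        elif new_val is not None and new_val > main_val:
            merged[cell] = (main_val + new_val) / 2
        else:                                     # keep the main map's value
            merged[cell] = main_val
    return merged

print(update_cell(0.5, True))   # 0.7
print(merge_maps({(0, 0): None, (0, 1): 0.4}, {(0, 0): 0.9, (0, 1): 0.8}))
```

Giving the main map precedence, as described above, keeps a single delayed or misaligned support-unit map from overwriting fresher local observations.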
Figure 3 shows an example of map merging in an indoor
environment with a known relative position between the main
vehicle and a single support unit. The last graph (d) illustrates
how the main vehicle can have a much clearer understanding
of its environment after merging local maps from support units.
B. Path planning and task allocation
The mission starts when the main vehicle is given an
objective position to travel to. There might be details that
are already known about the area between the initial and
objective positions. For instance, if the mission is to be carried
out after an earthquake, then hills or forests have probably
been less affected than roads due to their size. Therefore, it
may be assumed that a path through a known forest will still
be impracticable for a search-and-rescue operation. The same
applies to major constructions. However, it is unknown to the
vehicles whether roads and previous paths are still accessible
or have been destroyed or blocked due to the natural disaster.
In order to perform path planning and assignment of routes
for support units, the grid occupancy map is converted into
an undirected graph. The nodes are generated by grouping to-
gether small numbers of free adjacent cells. Then, we find how
many connected components are in the subgraphs contained in
the different paths between the origin and objective positions.
If several paths are found, then the support units are sent
in advance for early reconnaissance while the main vehicle
dynamically chooses the shortest path. Cells with unknown
value are considered to be empty at this step.
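The conversion and connected-component step can be sketched as follows; for brevity this illustrative version uses one graph node per free cell rather than grouping several adjacent cells, and treats unknown cells as free, as described above (the threshold value is an assumption):

```python
from collections import deque

FREE_THRESHOLD = 0.5  # assumed: cells at or above this value count as occupied

def free_components(grid):
    """Count connected components of free cells in an occupancy grid.
    grid[r][c] holds an occupancy value in [0, 1] or None for unknown;
    unknown cells are treated as free, as in the planning step."""
    rows, cols = len(grid), len(grid[0])
    free = lambda r, c: grid[r][c] is None or grid[r][c] < FREE_THRESHOLD
    seen, components = set(), 0
    for r in range(rows):
        for c in range(cols):
            if not free(r, c) or (r, c) in seen:
                continue
            components += 1                     # flood-fill a new free region
            queue = deque([(r, c)])
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr+1, cc), (cr-1, cc), (cr, cc+1), (cr, cc-1)):
                    if 0 <= nr < rows and 0 <= nc < cols and free(nr, nc) \
                            and (nr, nc) not in seen:
                        seen.add((nr, nc))
                        queue.append((nr, nc))
    return components

# A wall of occupied cells (1.0) splits the 3x3 grid into two free regions
grid = [[0.0, 1.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 1.0, None]]
print(free_components(grid))  # 2
```

When more than one free component touches a candidate route, each component becomes a candidate path for a support unit to scout.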
Fig. 4. The autonomous cars used in the experiments
Global path planning is achieved by running an adapted
Breadth-First Search (BFS) algorithm using the undirected
graph generated in the previous step. Then, local path planning
with collision avoidance is calculated between the current
position and the center position of the next node in the
graph. Once the objective positions for the support units are
calculated, the instructions are sent over the network (via a
direct connection or a cloud server, depending on the deployed
network topology). These support units, in turn, can generate
instructions for drones that they might be hosting. The raw
data obtained from a drone’s camera and the support units’
sensors is not sent over the network. Instead, the raw data is analyzed,
processed, and compressed into a local map that is transmitted
to the main vehicle and Cloud servers for map merging and
real-time monitoring, respectively. The raw data can be also
stored on the support units so that it can be analyzed after the
mission has ended and the algorithms can be optimized.
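A hedged sketch of the BFS-based global planner over such an undirected graph (the adjacency structure and node numbering here are illustrative assumptions):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search over an undirected graph given as an
    adjacency dict {node: [neighbors]}; returns the shortest path
    in number of hops, or None if the goal is unreachable."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []                 # walk parents back to the start
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                queue.append(neighbor)
    return None

# Toy graph loosely following the route numbering of Fig. 6
graph = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3], 5: [3, 6], 6: [5]}
print(bfs_path(graph, 1, 6))  # [1, 2, 3, 5, 6]
```

As support units report blocked cells, edges are dropped from the graph and the main vehicle simply reruns the search to re-plan dynamically.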
IV. IMPLEMENTATION AND EXPERIMENTAL RESULTS

In order to assess the efficiency of the proposed system
architecture, we compare the distance an autonomous car takes
to travel between two points in an unknown environment
in different cases. When the traveling distance is longer, it
implies that more time is also needed. First, a single vehicle
dynamically performs mapping and path planning. In the
second experiment, the vehicle performs a mapping with the
assistance of support vehicles and a drone.
A. Implementation
The vehicles (e.g., cars) used in the experiments are 1:10
Elektro-Monstertruck ”NEW1” BL models. The radio controller
is replaced by a Raspberry Pi with Wi-Fi to communicate
with other vehicles. The car is controlled via two servo motor
control signals, which control the turning angle of the front
wheels and the turning speed of all 4 wheels. On top of
that, the Raspberry Pi is connected to an RPLiDAR A1M8
LiDAR with a range of 12 m, offering a 360-degree
view of the car's environment. The car is equipped with a 9-axis
MPU9250 IMU in order to properly transform the LiDAR scan into
an oriented map. In addition, the car has a NEO-M8 GNSS
module which can concurrently receive data from up to three
GNSS constellations (e.g., GPS, Galileo, GLONASS, and
BeiDou). The GNSS module is used for positioning the car as
well as for calculating the relative positions of support vehicles
when local maps referenced in local coordinate systems are merged.
Figure 4 shows the cars that are used during the experiments.
The LiDAR and GNSS modules are placed outside the car
body while battery, Raspberry Pi and MPU9250 are inside.
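The two servo control signals mentioned above are typically PWM pulses; the following sketch maps a steering angle to a standard hobby-servo pulse width, where the pulse range and maximum steering angle are assumptions rather than values from the paper:

```python
def steering_pulse_us(angle_deg, max_angle_deg=30.0,
                      center_us=1500.0, span_us=500.0):
    """Map a steering angle in [-max_angle_deg, max_angle_deg] to a
    standard hobby-servo pulse width in microseconds, where 1500 us
    is center and 1000/2000 us are the extremes (assumed range)."""
    # Clamp the requested angle to the mechanical limits first
    angle = max(-max_angle_deg, min(max_angle_deg, angle_deg))
    return center_us + (angle / max_angle_deg) * span_us

print(steering_pulse_us(0))    # 1500.0 (wheels straight)
print(steering_pulse_us(30))   # 2000.0 (full lock one way)
print(steering_pulse_us(-45))  # 1000.0 (clamped to the other lock)
```

The second channel, controlling wheel speed, would follow the same pattern with a throttle value in place of the steering angle.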
The drone used in the experiments is built from a DJI F450
frame, a Pixhawk controller, and a Raspberry Pi with a camera.
Images collected by the drone’s camera are transmitted to the
special support unit (i.e., a car). At the support unit, images
will be processed by the real-time algorithm for object and
distance identification proposed by Ilas et al. [15].
For testing purposes only, we connect all cars and drones
to the same Wi-Fi network and place the access point near
the starting point in the area that is being mapped. In
a real scenario, this communication layer would be replaced
by a mesh network using a mobile connection such as 4G/5G
or another wireless solution with a higher range. During our
experiments, we assume that the IP address of the main vehicle
is known, while support vehicles can register at any time via
a predefined endpoint. In addition to a web server running on
the cloud servers, a web server written in Python using Flask runs
on the main vehicle to provide endpoints for receiving the local
maps from support units. We also provide a simple monitoring
panel to be able to see the evolution of the mapping process
in real time. This helps to reduce the latency of transmitting
the global map from the cloud to the main vehicle. In this case, a
driver of the main vehicle can access the real-time global map
with minimum latency. When a system administrator wants to
monitor the global map, he or she can access the web server on
the cloud server via a browser. Fig. 5 shows the web interface
which consists of instant vehicle orientation, LiDAR data,
local and global maps, and path planning decisions.
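The registration and map-receiving endpoints described above can be sketched framework-free as follows (the paper uses a Flask server; the function names and payload fields here are illustrative assumptions standing in for the HTTP routes):

```python
# In-memory registry of support units and their latest local maps,
# standing in for the Flask endpoints on the main vehicle.
registry = {}

def register_unit(unit_id, address):
    """Support units can register at any time via a known endpoint."""
    registry[unit_id] = {"address": address, "local_map": None}
    return {"status": "registered", "unit": unit_id}

def receive_local_map(unit_id, local_map):
    """Store the latest local map posted by a registered support unit;
    the main vehicle then merges it into its global map."""
    if unit_id not in registry:
        return {"status": "error", "reason": "unknown unit"}
    registry[unit_id]["local_map"] = local_map
    return {"status": "ok", "cells": len(local_map)}

print(register_unit("drone-1", "192.168.0.12"))
print(receive_local_map("drone-1", {(0, 0): 0.9, (0, 1): 0.1}))
# {'status': 'ok', 'cells': 2}
```

In the Flask version each of these functions would back a POST route, with the unit id and map payload carried in the request body.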
B. Experimental results
All the experiments are carried out on a floor whose map is
shown in Fig. 6. On this floor, there are rooms and furniture.
In order to create complex situations which represent chaotic
streets caused by natural disasters, several objects are added
into the environment. These objects are cardboard boxes that are
higher than the total height of the car and the LiDAR. In
the experiments, vehicles, objects, and LiDAR measurement
are scaled down 14 times with respect to their actual size
and measurement range. As can be seen in the map shown in Fig.
6, in order to reach the destination via the shortest path, a
conventional autonomous vehicle follows the route 1→2→
3→4. At position 4, the car detects that the route is
blocked, so it will take the route 5→6→7. At position
7, the car will go to 8 due to the shorter path to reach
the destination point. At position 8, the car detects that the
way is blocked; it then follows the route 7→9→10→
11→12 to the destination. The entire route requires a large
amount of traveling time and distance.
When the car is between positions 2 and 3, the path
planning algorithm detects two possible paths to the destination.
In this experiment, the path to 4 is blocked very soon,
but in a real scenario this could be a distance of hundreds
to thousands of meters. If there are vehicles near those
two possible paths that have previously registered via the
known endpoint in the main vehicle, the latter sends them
instructions to explore those two areas in advance. The same
occurs between points 6 and 7.
An aerial support vehicle (i.e., a drone) can significantly
contribute since it is able to see beyond relatively short
obstacles that completely block the view of ground vehicles.
For instance, a drone flying tens of meters above a car can
see what is behind a long wall, or a wrecked building after
a natural disaster. Due to this extended vision from above,
a drone might easily detect additional obstacles behind the
nearest one. This can save time for the ground car as it would
need to travel around the obstacle in order to see if the path
continues behind or is also blocked. This situation is illustrated
in the map in Figure 6 in the area between points 2, 3 and
4, and, more clearly, in the area between points 6, 7 and
8. In the latter case, for instance, a drone with a camera flying
over a car moving between 6 and 7 can detect
the blocked path ahead of point 8 in advance. Experimental
results show that, in the same scenario with the same destination,
the traveling distance of an autonomous car without the
support of the proposed architecture is 30% longer than with
it. It can be inferred that approximately 30% of the traveling
time can also be saved with the support of the proposed
architecture. The efficiency of the proposed architecture can
vary depending on the maps and scenarios.
V. CONCLUSION

In this paper, we presented an architecture definition for
the coordination of an autonomous team of heterogeneous
aerial and land robots that work together on collaborative
mapping. Based on the presented concept, we proposed an IoE
architecture having heterogeneous support units for enhancing
the situational awareness of autonomous vehicles in an un-
known environment. In addition, a complete implementation
of heterogeneous vehicles including cars and drones using
cameras and sensors such as GPS, and LiDAR was carried
out. The results show that the proposed architecture helps
to significantly enhance situational awareness.

Fig. 5. Web interface with monitoring panel in the main vehicle.

Fig. 6. A map of the experiment environment (Map 1)

With the global map and the information related to it, an autonomous
vehicle can plan in advance for avoiding obstacles and dealing
with difficult situations. Furthermore, the results show that the
proposed architecture helps to increase the efficiency of an
autonomous vehicle by reducing traveling time and distance.
In the experiments, 30% traveling distance and time can be
saved by deploying the proposed architecture. The efficiency
of the proposed architecture can vary depending on the maps and scenarios.
REFERENCES

[1] R. W. Wolcott et al. Visual localization within lidar maps
for automated urban driving. In IEEE IROS 2014, Sept
2014.
[2] J. Bryson and A. Winfield. Standardizing ethical design
for artificial intelligence and autonomous systems. Com-
puter, 50(5), May 2017.
[3] N. E. Vellinga. From the testing to the deployment of
self-driving cars: Legal challenges to policymakers on the
road ahead. Computer Law & Security Review, 2017.
[4] J. Bruyne and J. Werbrouck. Merging self-driving cars
with the law. Computer Law & Security Review, 34(5),
[5] J. Petit and S. E. Shladover. Potential cyberattacks on
automated vehicles. IEEE Transactions on Intelligent
Transportation Systems, 16(2), April 2015.
[6] E. Uhlemann. Introducing connected vehicles [connected
vehicles]. IEEE Vehicular Technology Magazine, 10(1),
March 2015.
[7] E. Uhlemann. Connected-vehicles applications are
emerging [connected vehicles]. IEEE Vehicular Tech-
nology Magazine, 11(1), March 2016.
[8] O. McAree et al. Towards artificial situation awareness
by autonomous vehicles. IFAC-PapersOnLine, 50(1),
2017. 20th IFAC World Congress.
[9] D. Sirkin et al. Toward measurement of situation
awareness in autonomous vehicles. In Proceedings of the
2017 CHI Conference on Human Factors in Computing
Systems, CHI ’17. ACM, 2017.
[10] S. B. Williams et al. Towards multi-vehicle simultaneous
localisation and mapping. In Proceedings 2002 IEEE
International Conference on Robotics and Automation,
volume 3, May 2002.
[11] H. Li et al. Multivehicle cooperative local mapping:
A methodology based on occupancy grid map merging.
IEEE Transactions on Intelligent Transportation Systems,
15(5), Oct 2014.
[12] Y. Mostofi et al. Compressive cooperative sensing and
mapping in mobile networks. IEEE Transactions on
Mobile Computing, 10(12), Dec 2011.
[13] E. Masehian et al. Cooperative mapping of unknown
environments by multiple heterogeneous mobile robots
with limited sensing. Robotics and Autonomous Systems,
87, 2017.
[14] N. Michael et al. Collaborative mapping of an earthquake
damaged building via ground and aerial robots. Journal of
Field Robotics, 29(5), 2012.
[15] C. Ilas et al. Real-time image processing algorithms
for object and distances identification in mobile robot
trajectory planning. Journal of Control Engineering and
Applied Informatics, 13(2), 2011.
... Moreover, IoE pays more attention to intelligent network connection and technology based on IoT infrastructure. Therefore, IoE has become the promising network paradigm, and it can offer wide application in many fields, such as industry [3,4], transportation [5,6], commerce [7], and education [8]. ...
... Then, they determine whether to use the data provided by the device for preprocessing and caching through the reputation of the corresponding device. Step (6): The data transmission from Edge Cloud to Core Clouds. The Edge Clouds store the data in IPFS, and then the hash address of the data is returned to the Ethereum blockchain by calling the function of the smart contract deployed on Ethereum blockchain. ...
... This enables applications such as collaborative sensing for enhanced situational awareness, where vehicles send large amounts of Light Detection and Ranging (LiDAR) data and other high-resolution sensor information to applications that fuse them together to build a high-precision view of the environment in real time. This information is then distributed to all the vehicles in the area so that they can avoid obstacles or make decisions [12][13][14]. All these procedures (collection of information, processing, and distribution of the resulting information) have to be completed under strict time constraints (as low as 10 ms [15]). ...
Full-text available
Vehicle automation is driving the integration of advanced sensors and new applications that demand high-quality information, such as collaborative sensing for enhanced situational awareness. In this work, we considered a vehicular sensing scenario supported by 5G communications, in which vehicle sensor data need to be sent to edge computing resources with stringent latency constraints. To ensure low latency with the resources available, we propose an optimization framework that deploys User Plane Functions (UPFs) dynamically at the edge to minimize the number of network hops between the vehicles and them. The proposed framework relies on a practical Software-Defined-Networking (SDN)-based mechanism that allows seamless re-assignment of vehicles to UPFs while maintaining session and service continuity. We propose and evaluate different UPF allocation algorithms that reduce communications latency compared to static, random, and centralized deployment baselines. Our results demonstrated that the dynamic allocation of UPFs can support latency-critical applications that would be unfeasible otherwise.
... Active research areas in TIERS include multi-robot coordination [1], [2], [3], [4], [5], swarm design [6], [7], [8], [9], UWB-based localization [10], [11], [12], [13], [14], [15], localization and navigation in unstructured environments [16], [17], [18], lightweight AI at the edge [19], [20], [21], [22], [23], distributed ledger technologies at the edge [24], [25], [26], [27], [28], [29], edge architectures [30], [31], [32], [33], [34], [35], offloading for mobile robots [36], [37], [38], [39], [40], [41], [42], LPWAN networks [43], [44], [45], [46], sensor fusion algorithms [47], [48], [49], and reinforcement and federated learning for multi-robot systems [50], [51], [52], [53]. ...
... In general terms, IoT is a collection of physical devices, computers, servers, and small objects embedded within a network system [14]. Some of the most prominent IoT application areas are smart homes [15] and smart cities [16], vehicular systems [17], and smart healthcare networks [18]. All these systems are highly distributed. ...
Providing security and privacy to Internet of Things (IoT) networks while meeting minimum performance requirements is an open research challenge. Blockchain technology, as a distributed and decentralized ledger, is a potential solution to tackle the limitations of current peer-to-peer IoT networks. This paper presents the development of an integrated IoT system implementing the permissioned blockchain Hyperledger Fabric (HLF) to secure edge computing devices by employing a local authentication process. In addition, the proposed model provides traceability for the data generated by the IoT devices. The presented solution also addresses the scalability challenges of IoT systems and the processing-power and storage limitations of IoT edge devices in the blockchain network. A set of built-in queries is leveraged by smart-contract technology to define the rules and conditions. The paper validates the performance of the proposed model with a practical implementation by measuring metrics such as transaction throughput and latency, resource consumption, and network usage. The results show that the proposed platform with the HLF implementation is promising for securing resource-constrained IoT devices and is scalable for deployment in various IoT scenarios.
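The local authentication idea above can be sketched with a toy registry: devices are enrolled with a credential hash (standing in for the permissioned ledger's world state), and a gateway admits only devices whose presented credential matches. This is a stand-in illustration; `enroll` and `authenticate` are hypothetical names, not Hyperledger Fabric's actual API.

```python
import hashlib

registry = {}  # device id -> credential hash (mock ledger world state)

def enroll(device_id: str, credential: str) -> None:
    """Record a device's credential hash; the secret itself is never stored."""
    registry[device_id] = hashlib.sha256(credential.encode()).hexdigest()

def authenticate(device_id: str, credential: str) -> bool:
    """Admit a device only if its credential hashes to the registered value."""
    expected = registry.get(device_id)
    return expected == hashlib.sha256(credential.encode()).hexdigest()

enroll("sensor-42", "s3cret")
```

Storing only hashes keeps the ledger auditable without exposing device secrets, which is the traceability-plus-authentication combination the abstract describes.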
... Collaborative multi-robot systems (MRS) need to be able to communicate to stay coordinated, but also need to be aware of each other's position in order to make the most of the shared data [83], [84]. Situated communication refers to wireless communication technologies that enable simultaneous data transfer while locating the data source [85]. ...
Search and rescue (SAR) operations can take significant advantage from supporting autonomous or teleoperated robots and multi-robot systems. These can aid in mapping and situational assessment, monitoring and surveillance, establishing communication networks, or searching for victims. This paper provides a review of multi-robot systems supporting SAR operations, with system-level considerations and focusing on the algorithmic perspectives for multi-robot coordination and perception. This is, to the best of our knowledge, the first survey paper to cover (i) heterogeneous SAR robots in different environments, (ii) active perception in multi-robot systems, while (iii) giving two complementary points of view from the multi-agent perception and control perspectives. We also discuss the most significant open research questions: shared autonomy, sim-to-real transferability of existing methods, awareness of victims' conditions, coordination and interoperability in heterogeneous multi-robot systems, and active perception. The different topics in the survey are put in the context of the different challenges and constraints that various types of robots (ground, aerial, surface, or underwater) encounter in different SAR environments (maritime, urban, wilderness, or other post-disaster scenarios). The objective of this survey is to serve as an entry point to the various aspects of multi-robot SAR systems to researchers in both the machine learning and control fields by giving a global overview of the main approaches being taken in the SAR robotics area.
... Al-dhubhani et al. [78] have proposed a smart border security system where sensors and different sources of data are used to make decisions and take actions. Queralta et al. [79] proposed an IoE-based architecture that employs a heterogeneous group of vehicles to improve traveling quality. Alam et al. [80] have developed an object recognition method for autonomous driving to improve the accuracy of vehicle recognition. ...
Artificial intelligence (AI) has taken us by storm, helping us to make decisions in everything we do, even in finding our "true love" and the "significant other". While 5G promises us high-speed mobile internet, 6G pledges to support ubiquitous AI services through next-generation softwarization, heterogeneity, and configurability of networks. The work on 6G is in its infancy and requires the community to conceptualize and develop its design, implementation, deployment, and use cases. Towards this end, this paper proposes a framework for Distributed AI as a Service (DAIaaS) provisioning for Internet of Everything (IoE) and 6G environments. The AI service is "distributed" because the actual training and inference computations are divided into smaller, concurrent, computations suited to the level and capacity of resources available with cloud, fog, and edge layers. Multiple DAIaaS provisioning configurations for distributed training and inference are proposed to investigate the design choices and performance bottlenecks of DAIaaS. Specifically, we have developed three case studies (e.g., smart airport) with eight scenarios (e.g., federated learning) comprising nine applications and AI delivery models (smart surveillance, etc.) and 50 distinct sensor and software modules (e.g., object tracker). The evaluation of the case studies and the DAIaaS framework is reported in terms of end-to-end delay, network usage, energy consumption, and financial savings with recommendations to achieve higher performance. DAIaaS will facilitate standardization of distributed AI provisioning, allow developers to focus on the domain-specific details without worrying about distributed training and inference, and help systemize the mass-production of technologies for smarter environments.
... Collaborative multi-robot systems need to be able to communicate to keep coordinated, but also need to be aware of each other's position in order to make the most out of the shared data [87], [88]. Situated communication refers to wireless communication technologies that enable simultaneous data transfer while locating the data source [89]. ...
Autonomous or teleoperated robots have been playing increasingly important roles in civil applications in recent years. Across the different civil domains where robots can support human operators, one of the areas where they can have the most impact is in search and rescue (SAR) operations. In particular, multi-robot systems have the potential to significantly improve the efficiency of SAR personnel with faster search of victims, initial assessment and mapping of the environment, real-time monitoring and surveillance of SAR operations, or establishing emergency communication networks, among other possibilities. SAR operations encompass a wide variety of environments and situations, and therefore heterogeneous and collaborative multi-robot systems can provide the most advantages. In this paper, we review and analyze the existing approaches to multi-robot SAR support, from an algorithmic perspective and putting an emphasis on the methods enabling collaboration among the robots as well as advanced perception through machine vision and multi-agent active perception. Furthermore, we put these algorithms in the context of the different challenges and constraints that various types of robots (ground, aerial, surface or underwater) encounter in different SAR environments (maritime, urban, wilderness or other post-disaster scenarios). This is, to the best of our knowledge, the first review considering heterogeneous SAR robots across different environments, while giving two complementary points of view: control mechanisms and machine perception. Based on our review of the state-of-the-art, we discuss the main open research questions, and outline our insights on the current approaches that have potential to improve the real-world performance of multi-robot SAR systems.
Mobile edge computing (MEC) and next-generation mobile networks are set to disrupt the way intelligent and autonomous systems are interconnected. This will have an effect on a wide range of domains, from the Internet of Things to autonomous mobile robots. The integration of such a variety of MEC services in an inherently distributed architecture requires a robust system for managing hardware resources, balancing the network load and securing the distributed applications. Blockchain technology has emerged as a solution for managing MEC services, with consensus protocols and data integrity checks that enable transparent and efficient distributed decision-making. In addition to transparency, the benefits from a security point of view are evident. Nonetheless, blockchain technology faces significant challenges in terms of scalability. In this chapter, we review existing consensus protocols and scalability techniques in both well-established and next-generation blockchain architectures. From this, we evaluate the most suitable solutions for managing MEC services and discuss the benefits and drawbacks of the available alternatives.
AI is here now, available to anyone with access to digital technology and the Internet. But its consequences for our social order aren't well understood. How can we guide the way technology impacts society?
Until vehicles are fully autonomous, safety, legal and ethical obligations require that drivers remain aware of the driving situation. Key decisions about whether a driver can take over when the vehicle is confused, or its capabilities are degraded, depend on understanding whether he or she is responsive and aware of external conditions. The leading techniques for measuring situation awareness in simulated environments are ill-suited to autonomous driving scenarios, and particularly to on-road testing. We have developed a technique, named Daze, to measure situation awareness through real-time, in-situ event alerts. The technique is ecologically valid: it resembles applications people use in actual driving. It is also flexible: it can be used in both simulator and on-road research settings. We performed simulator-based and on-road test deployments to (a) check that Daze could characterize drivers' awareness of their immediate environment and (b) understand practical aspects of the technique's use. Our contributions include the Daze technique, examples of collected data, and ways to analyze such data.
While recent advancements in using sophisticated onboard sensors on robotic platforms have made it possible to consign various tasks like exploration, mapping, and flocking to teams of mobile robots, some issues like handling extensive amounts of data, high dependency on sensors’ performance, and high expenses emerge. In this paper, the problem of mapping unknown environments by a team of heterogeneous mobile robots with limited and inexpensive sensing abilities is addressed. The concepts of Information Space and sensor models have been employed to plan the motions of robots with limited sensory data in order to accomplish the common goal of mapping the entire workspace as completely as possible. Also, a cooperation architecture is proposed to fuse and interrelate the dissimilar data obtained by individual heterogeneous robots and allocate various exploratory tasks to each of them in order to complete the map. The algorithm works with various limited sensing models, such as depth-limited boundary distance sensor, quadridirectional depth sensor, depth-limited gap sensor, and depth-limited radially-bounded depth sensor. Based on each sensor model, the best moving strategy is introduced to maximize the workspace coverage for each robot. The proposed algorithm, which yields a geometric map of the environment, is implemented in diverse simulated problems both with and without sensing noise, and the results and comparisons with a recent related work show that it is able to reliably construct maps of simply-connected and multiply-connected environments with convex and concave obstacles. In the presence of noise, the produced maps had about 12.4% false positive and 3.3% false negative errors on average. Also, some sensitivity analyses are done on the effects of workspace size and number of robots on the mapping time.
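The cooperative-mapping idea above can be illustrated with a minimal occupancy-grid merge: each robot reports a grid where -1 means unknown, 0 free and 1 occupied, observed cells override unknowns, and conflicts are resolved conservatively as occupied. The encoding and the merge rule are illustrative assumptions, not the paper's exact fusion algorithm.

```python
import numpy as np

def merge_maps(maps):
    """Fuse partial occupancy grids (-1 unknown, 0 free, 1 occupied)."""
    merged = np.full(maps[0].shape, -1, dtype=int)
    for m in maps:
        known = m != -1
        # Conservative fusion: once any robot marks a cell occupied, keep it.
        merged[known] = np.maximum(merged[known], m[known])
    return merged

robot_a = np.array([[0, 1, -1], [-1, -1, -1]])   # partial map from robot A
robot_b = np.array([[-1, 0, 0], [1, -1, -1]])    # partial map from robot B
world = merge_maps([robot_a, robot_b])
# world == [[0, 1, 0], [1, -1, -1]]
```

Cells still marked -1 after the merge are exactly the regions no robot has sensed, which is where an exploratory-task allocator would send the team next.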
Self-driving cars are gradually being introduced in the United States and in several Member States of the European Union. Policymakers will thus have to make important choices regarding the application of the law. One important aspect relates to the question who should be held liable for the damage caused by such vehicles. Arguably, product liability schemes will gain importance considering that the driver's fault as a cause of damage will become less likely with the increase of autonomous systems. The application of existing product liability legislation, however, is not always straightforward. Without a proper and effective liability framework, other legal or policy initiatives concerning technical and safety matters related to self-driving cars might be in vain. The article illustrates this conclusion by analysing the limitation periods for filing a claim included in the European Union Product Liability Directive, which are inherently incompatible with the concept of autonomous vehicles. On a micro-level, we argue that every aspect of the Directive should be carefully considered in the light of the autonomisation of our society. On the macro-level, we believe that ongoing technological evolutions might be the perfect moment to bring the European Union closer to its citizens.
This paper presents a novel approach to artificial situation awareness for an autonomous vehicle operating in complex dynamic environments populated by other agents. A key aspect of situation awareness is the use of mental models to predict future states of the environment, allowing safe and rational routing decisions to be made. We present a technique for predicting future discrete state transitions (such as the commencement of a turn) by other agents, based upon an uncertain mental model. Predictions take the form of univariate Gaussian Probability Density Functions which capture the inherent uncertainty in transition time whilst still providing great benefit to a decision making system. The prediction distributions are compared with Monte Carlo simulations and show an excellent correlation over long prediction horizons.
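The univariate-Gaussian prediction described above can be sketched as a small query: model the time at which another agent commences a manoeuvre (such as a turn) as N(mu, sigma^2), and ask for the probability that the transition occurs before a planning horizon t. The parameter values and the function name `transition_cdf` are illustrative, not taken from the paper.

```python
import math

def transition_cdf(t: float, mu: float, sigma: float) -> float:
    """P(transition time <= t) under a Gaussian transition-time model."""
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

# Agent expected to start turning 4 s from now, with 1 s standard deviation.
p_within_4s = transition_cdf(4.0, mu=4.0, sigma=1.0)   # 0.5 at the mean
p_within_6s = transition_cdf(6.0, mu=4.0, sigma=1.0)   # high: two sigmas past the mean
```

A routing layer can threshold such probabilities to decide, for instance, whether a gap in traffic will still exist when the ego vehicle reaches it.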
Self-driving cars and self-driving technology are tested on public roads in several countries on a large scale. With this development not only technical, but also legal questions arise. This article will give a brief overview of the legal developments in multiple jurisdictions – California (USA), United Kingdom, and the Netherlands – and will highlight several legal questions regarding the testing and deployment of self-driving cars. Policymakers are confronted with the question how the testing of self-driving cars can be regulated. The discussed jurisdictions all choose a different approach. Different legal instruments – binding regulation, non-binding regulation, granting exemptions – are used to regulate the testing of self-driving cars. Are these instruments suitable for the objectives the jurisdictions want to achieve? As technology matures, self-driving cars will at some point become available to the general public. Regarding this post-testing phase, two pressing problems arise: how to deal with the absence of a human driver and how does this affect liability and insurance? The Vienna Convention on Road Traffic 1968 and the Geneva Convention on Road Traffic 1949, as well as national traffic laws, are based on the notion that only a human can drive a car. To what extent a different interpretation of the term ‘driver’ in traffic laws and international Conventions can accommodate the deployment of self-driving cars without a human driver present will be discussed in this article. When the self-driving car becomes reality, current liability regimes can fall short. Liability for car accidents might shift from the driver or owner to the manufacturer of the car. This could have a negative effect on the development of self-driving cars. In this context, it will also be discussed to what extent insurance can affect this development.
The age of autonomous vehicles is fast approaching, according to new survey results by the World Economic Forum. Nearly 60% of consumers in cities around the world are willing to travel in self-driving vehicles. In the consumer survey among 5,500 respondents in ten countries, acceptance is highest in emerging markets such as China, India, and the United Arab Emirates; around 50% in the United States and the United Kingdom; and lowest in Japan and Germany. As part of a project, the World Economic Forum also conducted interviews with over 20 city policy makers and transport authorities from cities such as Dubai, Helsinki, New York, Amsterdam, Singapore, and Toronto about their expectations for self-driving vehicles. The survey showed that most city authorities believe that applications such as shared self-driving vehicles are coming very quickly and will have the potential to be the last-mile solution for public transport. City planners and governments thus need to prepare for the introduction of self-driving cars; smart-mobility cities such as Gothenburg and Singapore are already doing so.
This paper reports on the problem of map-based visual localization in urban environments for autonomous vehicles. Self-driving cars have become a reality on roadways and are going to be a consumer product in the near future. One of the most significant roadblocks to autonomous vehicles is the prohibitive cost of the sensor suites necessary for localization. The most common sensor on these platforms, a three-dimensional (3D) light detection and ranging (LIDAR) scanner, generates dense point clouds with measures of surface reflectivity, with which other state-of-the-art localization methods have achieved centimeter-level accuracy. Alternatively, we seek to obtain comparable localization accuracy with significantly cheaper, commodity cameras. We propose to localize a single monocular camera within a 3D prior ground-map, generated by a survey vehicle equipped with 3D LIDAR scanners. To do so, we exploit a graphics processing unit to generate several synthetic views of our belief environment. We then seek to maximize the normalized mutual information between our real camera measurements and these synthetic views. Results are shown for two different datasets, a 3.0 km and a 1.5 km trajectory, where we also compare against the state-of-the-art in LIDAR map-based localization.
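The matching criterion above, normalized mutual information (NMI) between a real camera image and a rendered synthetic view, can be sketched from a joint intensity histogram. The tiny arrays and the bin count are illustrative; the paper evaluates many GPU-rendered views of a LIDAR prior map.

```python
import numpy as np

def nmi(img_a, img_b, bins=8):
    """Normalized mutual information (H(X) + H(Y)) / H(X, Y) of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

camera = np.arange(64, dtype=float).reshape(8, 8)          # stand-in camera image
unrelated = np.random.default_rng(0).random((8, 8)) * 64   # stand-in bad view
# A synthetic view identical to the camera image scores higher than an
# unrelated one, which is what the pose search exploits.
assert nmi(camera, camera) > nmi(camera, unrelated)
```

Because NMI depends only on the statistical dependence of intensities, it tolerates the appearance gap between real photographs and LIDAR-reflectivity renderings better than direct pixel differences would.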
The term connected vehicles refers to applications, services, and technologies that connect a vehicle to its surroundings. Adopting a definition similar to that of AUTO Connected Car News, a connected vehicle is basically the presence of devices in a vehicle that connect to other devices within the same vehicle and/or devices, networks, applications, and services outside the vehicle. Applications include everything from traffic safety and efficiency, infotainment, parking assistance, roadside assistance, remote diagnostics, and telematics to autonomous self-driving vehicles and global positioning systems (GPS). Typically, vehicles that include interactive advanced driver-assistance systems (ADASs) and cooperative intelligent transport systems (C-ITS) can be regarded as connected. Connected-vehicle safety applications are designed to increase situation awareness and mitigate traffic accidents through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. ADAS technology can be based on vision/camera systems, sensor technology, vehicle data networks, V2V, or V2I systems. Features may include adaptive cruise control, automated braking, GPS and traffic warnings, smartphone connectivity, hazard alerts, and blind-spot monitoring. V2V communication technology could mitigate traffic collisions and improve traffic congestion by exchanging basic safety information such as location, speed, and direction between vehicles within range of each other. It can supplement active safety features, such as forward collision warning and blind-spot detection.
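The basic safety information exchanged over V2V, location, speed, and direction, can be sketched as a small message type with round-trip serialization for broadcast. The field names mirror the text but are illustrative only; they do not follow the standardized DSRC basic-safety-message wire format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class BasicSafetyMessage:
    """Minimal V2V safety payload: who, where, how fast, which way."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float

def encode(msg: BasicSafetyMessage) -> bytes:
    """Serialize a message for broadcast."""
    return json.dumps(asdict(msg)).encode()

def decode(payload: bytes) -> BasicSafetyMessage:
    """Reconstruct a message from a received payload."""
    return BasicSafetyMessage(**json.loads(payload))

msg = BasicSafetyMessage("car-7", 60.45, 22.26, 13.9, 90.0)
```

A receiving vehicle would feed such decoded messages into its collision-warning and blind-spot logic, as the paragraph above describes.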