This is the accepted version of the work. The final version will be published in the 4th International Smart Edge Computing and Networking (SmartEdge 2020), jointly held with the 18th Annual IEEE International Conference on Pervasive Computing and Communications (PerCom 2020), March 23rd, Austin, TX, US, 2020. ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Scaling up an Edge Server Deployment
Lauri Lovén, Tero Lähderanta, Leena Ruha, Teemu Leppänen, Ella Peltonen,
Jukka Riekki, and Mikko J. Sillanpää
Center for Ubiquitous Computing
University of Oulu
Oulu, Finland
{first.last}@oulu.fi
Research Unit of Mathematical Sciences
University of Oulu
Oulu, Finland
{first.last}@oulu.fi
Abstract—In this article, we study the scaling up of edge
computing deployments. In edge computing, deployments are
scaled up by adding more computational capacity atop the initial
deployment, as deployment budgets allow. However, without careful consideration, adding new servers may not improve proximity
to the mobile users, crucial for the Quality of Experience of users
and the Quality of Service of the network operators. In this paper,
we propose a novel method for scaling up an edge computing
deployment by selecting the optimal number of new edge servers
and their placement, and re-allocating access points optimally to
the old and new edge servers. The algorithm is evaluated with
two scenarios, using data on a real-world large-scale wireless
network deployment. The evaluation shows that the proposed
method is stable on a real city-scale deployment, resulting in
optimized Quality of Service for the network operator.
Index Terms—edge computing, facility location, service scaling
I. INTRODUCTION
Modern smart devices, from smartphones to home IoT and from industrial applications to smart cities and smart transportation, are ushering in an era of pervasive computing. The massive
amounts of data generated by these devices provide a ground
for various novel applications, while also introducing novel
challenges for data processing and connectivity [1]. Indeed,
current cloud computing systems and network infrastructures
may not provide enough computing capacity to manage latency
and performance requirements set by the modern pervasive
computing systems [2].
Edge computing refers to technologies and development
methodologies for distributing and running computations close
to the user devices, ”on the edges of the network”, at the
network infrastructure devices or dedicated local systems
providing computing resources for the user devices. Such
computations typically include the collection and preprocessing
of application-specific multimodal data [3], [4] or facilitating
real-time user interactions such as augmented or virtual reality
applications [5]. The expected advantages of edge computing
include low latency and high bandwidth between user devices
and edge components, crucial for real-time applications [6],
support for user mobility [7], [8], and increased privacy
especially with applications relying on highly personal data
[9].
Edge computing platforms aim to provide location-aware
services and multitenancy of applications at the close physical
and logical proximity. A number of solutions have already been
proposed, e.g. cloudlets [10] and Fog computing [11], or even
Kubernetes-based over-the-air LTE/5G Edge DevOps platforms
[12], that illustrate different technologies and implementation
strategies [13].
In addition to placing data and computations [1], placing
computational resources [14] is a key issue in edge computing.
Edge computing requires a flexible and scalable deployment of
edge servers to support user mobility, the inherent dynamicity
of the operational environment, and the variety of applications,
some requiring real-time responsiveness. The deployments are
often geographically large, spanning entire cities. The edge
servers are often deployed densely, based on observations
on application workload and usage patterns and the resulting
network traffic. Often, the available deployment budgets lead to
homogeneous hardware in the edge servers, which may result in
under- or overused capacity. The simplest approach is to provide a clustered deployment without initially considering capacity or proximity constraints. However, the operators are expected to ensure efficient resource usage as well as a sufficient Quality of Experience (QoE) for the users, while maintaining the overall Quality of Service (QoS) of their system. Therefore, when planning the deployment architecture, proximity to the mobile users is a crucial element.
The availability of edge-based services and functionality is constantly increasing, and the soon-to-come 5G networks will only accelerate this growth. Further, the increasing usage of smartphones, other personal smart devices, connected vehicles, autonomous devices, and other novel technologies leads to an unforeseen variety of edge applications and services with increasing requirements for computation and communication.
As a result, edge server deployments need to be scaled up
to meet the increasing demands. Scaling up an edge server
deployment, the operator needs to find the optimal number of
new edge servers, their optimal locations, and the allocations
of Access Points (APs) to these servers.
In this paper, we present the following novel contributions:
1) We present a novel method for scaling a deployment, finding the optimal number of new edge servers, placing them in an existing edge network, and allocating APs to the edge servers.
2) We evaluate our method with three scenarios which correspond to real-life use cases for a network operator. The evaluation indicates that the proposed method results in optimized QoS for the network operator.
3) We present the results based on a real-world data set of smart city Wi-Fi usage, collected over nine years and comprising hundreds of millions of connections.
II. RELATED WORK
In previously proposed placement schemes, the best proximity with regard to some metric was often the initial assumption in deployment planning [14]. When the aim is to minimize distances, merely upgrading server capacity in the existing (co-located) deployment is not sufficient, as the distances may not be optimized for the added users and their workload. Moreover, if server capacity is fixed, the solution prohibits over-provisioning and QoE cannot be guaranteed. On-demand provisioning, which increases scalability in response to the online workload, has also been considered, e.g. in [15], [16], but such efforts may be difficult for network operators to realize.
The number of new servers may be dictated by a set budget. Sometimes it is necessary to determine the number of servers that produces the best trade-off between the budget and QoS. To explore this trade-off, some previous work [17], [18] evaluates the average user latency as a function of the number of edge servers.
The survey of Lähderanta et al. [14] studies edge server placement extensively. For example, in the study by Wang et al. [17], a fixed number of edge servers is placed by minimizing the geospatial distances while concurrently seeking a balanced workload distribution. In [18], hierarchical tree-like structures are used to locate a fixed number of edge servers without capacity limits. Yin et al. [19] propose a heuristic decision-support management system for server placement that enables the discovery of unforeseen server locations. Guo et al. [20] place a fixed number of edge servers in a two-step scheme, where the servers are first located using the k-means algorithm and the APs are then allocated to the servers with the aim of minimizing the communication delay and balancing the workload.
The method presented in this paper is based on the capacitated location allocation method, presented in a previous
study of ours [14]. To the best of our knowledge, however, no
articles have studied the scaling up of an existing edge server
deployment.
III. METHODS
A. Placement model
We base our method for the placement of new edge servers on
the PACK algorithm, proposed in our earlier work [14], detailed
below. The algorithm finds optimal locations for a fixed number
of edge servers, minimizing the geospatial distances to APs
and satisfying the capacity constraints. Such an optimization
problem can be considered as a capacitated location allocation
problem [21]–[24], and more specifically as a capacitated p-median type problem [24].
In a p-median problem, the Euclidean distance is typically used. However, PACK applies squared Euclidean distances, producing k-means-type clustering with spherical-like clusters and centralized cluster heads [25]. This results in a star-like
topology with spatially centralized edge servers for both dense
and sparse areas, contributing towards better proximity, i.e.
QoS, particularly in the remote APs. Thus, the approach can
actually be considered as a capacitated k-means clustering,
where the cluster centers are constrained to the data points. Such a discrete variant of the k-means method is generally referred to as a k-medoid method [26].
Given a data set of $n$ APs, let $x_i$ be the coordinates of AP $i$ and $w_i$ the corresponding workload. In practice, the workload $w_i$ is determined by the maximum number of simultaneous users at AP $i$. Let us denote by $c_j$, $j = 1, 2, \ldots, k$, the coordinates of the $k$ edge servers, and by $y_{ij}$ the membership of AP $i$ to edge server $j$.
Our aim is to optimize the locations of the servers and the
allocations of the APs by minimizing the squared Euclidean
distance between the edge servers and the APs they cater,
weighted by the workload of each AP, while taking into
consideration the capacity constraints of each server.
More specifically, the objective function to be minimized is
\[
\operatorname*{arg\,min}_{c_j,\, y_{ij}} \; \sum_{i=1}^{n} \sum_{j=1}^{k} w_i \| x_i - c_j \|^2 \, y_{ij}. \tag{1}
\]
The optimization is carried out with the following constraints:
\[
c_j \in \{x_1, x_2, \ldots, x_n\} \quad \forall j, \tag{2}
\]
\[
y_{ij} \in \{0, 1\} \quad \forall i, j, \tag{3}
\]
\[
\sum_{j=1}^{k} y_{ij} = 1 \quad \forall i, \tag{4}
\]
\[
L \le \sum_{i=1}^{n} w_i y_{ij} \le U \quad \forall j. \tag{5}
\]
These constraints correspond to the following assumptions: an edge server must be co-located with an AP (2), each AP is connected to exactly one edge server (3), (4), and the total workload of the APs connected to one edge server must lie between a lower (L) and an upper (U) limit (5).
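For concreteness, checking a candidate solution against constraints (2)–(5) is straightforward. The following is a minimal Python sketch; the function name `feasible` and the single shared upper limit `U` are our assumptions, and `alloc` holds the index of the server each AP is assigned to:

```python
def feasible(alloc, centers, points, weights, L, U):
    """Check a candidate solution against constraints (2)-(5):
    every server co-located with an AP (2), every AP assigned to
    exactly one existing server (3)-(4), and every server's total
    workload within the interval [L, U] (5)."""
    k = len(centers)
    if any(c not in points for c in centers):        # (2)
        return False
    if any(not (0 <= a < k) for a in alloc):         # (3), (4)
        return False
    loads = [0.0] * k
    for a, w in zip(alloc, weights):
        loads[a] += w
    return all(L <= load <= U for load in loads)     # (5)
```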
The optimization problem is NP-hard, calling for approximate solutions. PACK [14] is an iterative block-coordinate descent algorithm, detailed in Alg. 1, consisting of two main steps: the allocation step on line 4 and the location step on line 5. PACK iterates these two steps until the locations of the edge servers $c_j$ no longer change. However, this type of iteration does not guarantee that the result is the global minimum. Therefore, PACK runs with $N$ initial values for the server locations, which
Algorithm 1 PACK algorithm [14]
Input: x_i, w_i, k, N
Output: Edge server locations c*_j and allocations y*_ij, j = 1, ..., k
1: for i = 1 to N do
2:   Initialize c_j, j = 1, 2, ..., k, using k-means++
3:   while c_j changes do
4:     Allocation step: minimize (1) with respect to y_ij
5:     Location step: minimize (1) with respect to c_j
6:     S <- the value of the objective function
7:   end while
8:   if S < S_min or i = 1 then
9:     S_min <- S
10:    c*_j <- c_j
11:    y*_ij <- y_ij
12:  end if
13: end for
14: return c*_j, y*_ij
are obtained using the k-means++ algorithm [27], and selects the placement from the iteration with the best objective function value.
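The alternating iteration can be sketched in Python. This is a minimal, uncapacitated illustration of the allocation and location steps with random restarts; the capacity constraints (5) and the k-means++ initialization of the actual PACK algorithm are omitted for brevity, and the function name is our own:

```python
import random

def pack_sketch(points, weights, k, n_restarts=5, seed=0):
    """Uncapacitated sketch of the PACK iteration: alternate an
    allocation step (each AP joins its nearest server) and a location
    step (each server moves to the weighted medoid of its APs) until
    the server locations stabilize, over several random restarts."""
    rng = random.Random(seed)

    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    best = (float("inf"), None, None)
    for _ in range(n_restarts):
        centers = rng.sample(points, k)  # random init instead of k-means++
        for _ in range(100):             # iteration cap as a safeguard
            # Allocation step: minimize (1) w.r.t. the memberships.
            alloc = [min(range(k), key=lambda j: d2(p, centers[j]))
                     for p in points]
            # Location step: minimize (1) w.r.t. the server locations,
            # keeping each server co-located with an AP (constraint (2)).
            new_centers = []
            for j in range(k):
                members = [i for i, a in enumerate(alloc) if a == j]
                if not members:
                    new_centers.append(centers[j])
                    continue
                new_centers.append(min(
                    (points[i] for i in members),
                    key=lambda c: sum(weights[m] * d2(points[m], c)
                                      for m in members)))
            if new_centers == centers:
                break
            centers = new_centers
        cost = sum(weights[i] * d2(points[i], centers[alloc[i]])
                   for i in range(len(points)))
        if cost < best[0]:
            best = (cost, centers, alloc)
    return best[1], best[2], best[0]
```

On two well-separated groups of APs, the sketch recovers one medoid per group; enforcing the workload bounds (5) would turn the allocation step into a constrained assignment problem.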
B. Scaling PACK
We modify the PACK algorithm to support adding new servers to an existing server network. Consider a deployment where $k_1$ servers $c_1, \ldots, c_{k_1}$ have previously been placed and the aim is to optimally place $k_2$ more servers $c_{k_1+1}, \ldots, c_{k_1+k_2}$. The placement is optimized by minimizing (1) with respect to $c_{k_1+1}, \ldots, c_{k_1+k_2}$ and $y_{ij_{\mathrm{all}}}$ for all $i = 1, \ldots, n$ and $j_{\mathrm{all}} = 1, \ldots, k_1+k_2$, assuming $c_1, \ldots, c_{k_1}$ fixed. In practice, this means that in Alg. 1, while the allocation step must still consider all APs, the location step should update only the locations of the new servers. The new, scaling PACK algorithm (sPACK) is detailed in Alg. 2.
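Under the same simplifying assumptions as before (no capacity constraints, naive initialization instead of k-means++, hypothetical function name), the scaling modification can be sketched as follows; the key difference to the original iteration is that the location step touches only the new servers:

```python
def spack_sketch(points, weights, old_centers, k2, max_iter=100):
    """Sketch of the scaling step (sPACK): the old server locations
    stay fixed, only the k2 new servers move in the location step,
    while the allocation step reassigns every AP over ALL servers."""
    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    k1 = len(old_centers)
    new_centers = list(points[-k2:])  # naive init instead of k-means++
    alloc = []
    for _ in range(max_iter):
        centers = list(old_centers) + new_centers
        # Allocation step: over all k1 + k2 servers.
        alloc = [min(range(k1 + k2), key=lambda j: d2(p, centers[j]))
                 for p in points]
        # Location step: update ONLY the new servers.
        updated = []
        for j in range(k1, k1 + k2):
            members = [i for i, a in enumerate(alloc) if a == j]
            if not members:
                updated.append(new_centers[j - k1])
                continue
            updated.append(min(
                (points[i] for i in members),
                key=lambda c: sum(weights[m] * d2(points[m], c)
                                  for m in members)))
        if updated == new_centers:
            break
        new_centers = updated
    return new_centers, alloc
```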
C. Selection of the number of servers
We select the number of additional servers based on the
elbow method, often used in clustering analysis for selecting
the optimal number of clusters [28]. In the elbow method, a curve, referred to here as the cost-effectiveness curve, is drawn with the number of clusters on the horizontal axis and the minimum of the objective function on the vertical axis. The number of clusters is then selected visually as the "elbow", the point where increasing the number of clusters no longer appears to produce a considerable decrease in the value of the objective function.
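The elbow is selected visually here. For illustration, one common way to automate the choice is to normalize both axes and pick the point farthest from the straight line joining the curve's endpoints; the following sketch (function name our own) is one such stand-in, not the paper's procedure:

```python
def elbow_k(ks, costs):
    """Automated stand-in for the visual elbow choice: normalize both
    axes to [0, 1] and return the k whose point lies farthest from the
    straight line between the curve's endpoints."""
    xs = [(k - ks[0]) / (ks[-1] - ks[0]) for k in ks]
    ys = [(c - costs[-1]) / (costs[0] - costs[-1]) for c in costs]
    # After normalization the endpoint chord is x + y = 1; the
    # perpendicular distance of (x, y) from it is |x + y - 1| / sqrt(2).
    i = max(range(len(ks)), key=lambda i: abs(xs[i] + ys[i] - 1))
    return ks[i]
```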
D. Measuring the Quality of Service
Following multiple other studies [14], [16]–[18], [20], we
measure QoS by proximity, i.e. the Euclidean distances between
Algorithm 2 sPACK algorithm
Input: x_i, w_i, k_1, k_2, N, c_j_old, j_old = 1, ..., k_1
Output: New allocations y*_i,j_all for all edge servers, j_all = 1, ..., k_1 + k_2, and locations c*_j_new for the new servers, j_new = k_1 + 1, ..., k_1 + k_2
1: for i = 1 to N do
2:   Initialize c_j_new using k-means++
3:   while c_j_new changes do
4:     Allocation step: minimize (1) with respect to y_i,j_all
5:     Location step: minimize (1) with respect to c_j_new
6:     S <- the value of the objective function
7:   end while
8:   if S < S_min or i = 1 then
9:     S_min <- S
10:    c*_j_new <- c_j_new
11:    y*_i,j_all <- y_i,j_all
12:  end if
13: end for
14: return c*_j_new, y*_i,j_all
the APs and their allocated edge servers. The average AP distance, weighted by the workload, is
\[
\mathrm{Mean} = \frac{1}{W} \sum_{i=1}^{n} \sum_{j=1}^{k} w_i \| x_i - c^*_j \| \, y^*_{ij},
\]
where $W = \sum_{i=1}^{n} w_i$ corresponds to the total workload of all the studied APs.
Further, following our previous work [14], we take a closer look at the proximity distributions via the sample quantiles $q_\alpha$, which measure the distance within which a proportion $\alpha$ of the workload lies from the associated edge server. Accordingly, selecting $\alpha$ close to 1 evaluates the worst-case QoS, whereas $\alpha = 0.5$ corresponds to the median QoS. The quantiles $q_\alpha$ can be obtained by solving $F(q_\alpha) = \alpha$, where $F$ is the empirical cumulative distribution function of the workload.
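A minimal sketch of these two QoS measures, computing the workload-weighted mean distance and the weighted empirical quantiles $q_\alpha$ from per-AP distances (the function name is our own):

```python
def qos_metrics(dists, weights, alphas=(0.25, 0.5, 0.75, 0.95)):
    """Workload-weighted QoS measures: the mean AP-to-server distance
    and the quantiles q_alpha solving F(q_alpha) = alpha, where F is
    the workload-weighted empirical CDF of the distances."""
    W = sum(weights)
    mean = sum(w * d for d, w in zip(dists, weights)) / W
    # Weighted empirical CDF: sort APs by distance, accumulate workload.
    order = sorted(range(len(dists)), key=lambda i: dists[i])
    targets = sorted(alphas)
    quantiles, cum, qi = {}, 0.0, 0
    for i in order:
        cum += weights[i] / W
        while qi < len(targets) and cum >= targets[qi] - 1e-12:
            quantiles[targets[qi]] = dists[i]
            qi += 1
    return mean, quantiles
```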
IV. EVALUATION
A. Data
We test our methods with a real-world Wi-Fi network data set,
collected from the PanOULU public network access points (AP)
in the city of Oulu, Finland, in 2007–2015 [29]. The raw data
contains the timestamps, durations and source identifications
of all the connections to the APs during the observation period.
Following our earlier work [14], we use only 2014 data, the
last full year in the data set. The number of individual Wi-Fi connections that year was 257,552,871, across 450 active access points (APs). The PanOULU APs are shown in Fig. 1.
Each connected user is assumed to introduce a workload
of one to an AP and the edge server that AP is connected
to. The number of concurrently connected users for one busy
AP of a local polytechnic on the first week of September is
depicted in Fig. 2. To make sure the edge servers have capacity
for even the peak hours, we choose the highest number of
Fig. 1: Locations of the PanOULU access points.
Fig. 2: Connected users on the first week of Sept. 2014 at one of the PanOULU access points.
concurrent users in 2014 for each AP as the relative workload
the APs introduce on the edge server. Fig. 3 shows that the AP
workloads are distributed roughly exponentially, with a small
number of high workloads and a fat tail of low workloads.
Fig. 3: Workload of the PanOULU access points.
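Deriving the workload $w_i$ from raw connection logs amounts to finding the peak number of overlapping connections per AP. A minimal sweep-line sketch, assuming each connection is given as a (start, end) timestamp pair (function name our own):

```python
def peak_concurrency(intervals):
    """Workload of an AP as the maximum number of simultaneously
    connected users, computed from (start, end) connection intervals
    with a simple sweep line over open/close events."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # connection opens
        events.append((end, -1))    # connection closes
    # Process closes before opens at the same timestamp, so a
    # connection ending exactly when another starts doesn't overlap.
    events.sort(key=lambda e: (e[0], e[1]))
    cur = peak = 0
    for _, delta in events:
        cur += delta
        peak = max(peak, cur)
    return peak
```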
B. Scenarios
We evaluate the proposed method in three scenarios. In two
of the scenarios, there is an existing deployment of edge servers,
which is subsequently grown using the proposed method to
find the number of new edge servers to deploy, as well as their
placements. These are compared to a reference scenario which
places all servers and allocates their APs optimally, without
an intermediate scaling step. The scenarios are described in
detail below and summarized in Table I.
TABLE I: Evaluation scenarios.

Scenario   | Existing | New   | Capacity
Reference  | 0        | 15–25 | [300, 600]
Small      | 5        | 10–20 | [300, 600]
Large      | 15       | 0–10  | [300, 600]
1) Reference deployment: The reference deployment starts
from a clean slate. 15–25 edge servers are placed and their
AP allocations set using the method proposed in our earlier
work [14]. The capacity ranges for edge servers are chosen
to be [300,600] to accommodate the workloads of the AP
allocations in the range of edge servers: the more edge servers,
the fewer APs need to be allocated to each edge server, and
the smaller their combined workloads. The center point in
the capacity range, 450, divides the total workload of all APs
evenly between 20 edge servers.
2) Small deployment: The first scaling scenario assumes
there are five deployed edge servers. This corresponds to a
small-scale testing deployment in strategic locations, such as
the operator R&D center or a university, which is then extended
towards a commercial service. In this scenario, further, the
operator is assumed to have a flexible budget to add 10 to
20 new edge servers. The capacities are set identical to the
reference scenario.
3) Large deployment: The second scenario assumes a setup
of 15 existing edge servers, placed in optimal locations using
the method proposed by Lähderanta et al. [14], and there
are potentially 0–10 new ones to deploy. This corresponds to
a business-as-usual scenario where the operator periodically
checks if edge application user QoS could be improved by
new edge server deployments. The existing deployment as
well as the range of new servers to add in the Large scenario
are chosen to reflect the other two scenarios to make them
comparable.
C. Results
For the Small Deployment, with 5 edge servers placed by a
domain expert, the cost-effectiveness curve in Fig. 4 shows an
elbow at the midpoint of 15 added servers. The resulting APs
and their allocated edge servers are depicted in panel (a) of
Fig. 5, with the fixed edge servers shown as asterisks, the new servers as X's, and the APs allocated to a particular server all colored identically within a convex hull.
For the Large Deployment, with 15 existing edge servers
placed in optimal locations, the cost-effectiveness curve suggests deploying 5 extra servers. These were placed as indicated
in the right panel of Fig. 5.
Table II demonstrates how the QoS measures differ between
the scenarios and a reference deployment of 20 optimally
placed edge servers. The reference scenario, as expected, gives
the best objective function minimum and QoS on all the
(a) Small deployment (5 + [10, 20] edge servers)
(b) Large deployment (15 + [0, 10] edge servers)
(c) Reference deployment ([15, 25] edge servers)
Fig. 4: Cost-effectiveness curves of Small, Large and Reference
deployment scenarios.
measures. The Small deployment scenario performs better than
the Large deployment scenario in terms of the objective function
minimum. However, in terms of QoS, the Small deployment
scenario excels in mean proximity as well as in the 75%
quantile, while the Large scenario has better 25% quantile,
median and worst case behavior, those being equal or nearly-
equal to the reference.
TABLE II: Evaluation results for the three scenarios.

                                  Proximity (km)
Scenario  | Obj. function | Mean  | 25%  | 50%  | 75%  | 95%
Reference | 15.2e+06      | 0.556 | 0.05 | 0.29 | 0.65 | 2.10
Small     | 15.7e+06      | 0.571 | 0.11 | 0.31 | 0.62 | 2.14
Large     | 15.9e+06      | 0.593 | 0.05 | 0.29 | 0.75 | 2.11
V. DISCUSSION
We compared the scaled Small and Large deployment
scenarios with the reference scenario which had no scaling.
The Small deployment scenario turned out to be close to
the reference scenario in terms of the objective function and
the mean proximity, whereas the Large deployment scenario
produced inferior results. However, the results are highly
dependent on the placement of the initial edge servers before
scaling. Indeed, the initial placement affects the placement of
new servers. In this case, both of the scaling scenarios had a
small number of servers placed non-optimally, which overall
causes worse QoS than in the reference scenario.
For all the scenarios, the total number of servers turned out
to be 20, with the number of new servers at the midpoint of
the budget range. This is unsurprising considering the capacity
limits in the scenarios: the midpoint of the capacity range was 450, and the total workload of all APs divided by 450 is 20.
Indeed, in clustering, the objective function typically decreases
while the number of clusters (here edge servers) is increased.
However, as we apply both the lower and the upper limits to
the capacity, increasing the number of servers may not always
decrease the objective function: if a high number of servers
is placed, the algorithm has to make spatial compromises for
fulfilling the lower capacity limit, leading to higher (i.e. worse)
objective function values. Similarly, if a low number of servers
is placed, obeying the upper capacity limit is difficult. Due to
these effects, the elbow approach actually appears to have a
tendency to favor the number of servers that divides the whole
workload to the midpoint of the capacity limits.
Scaling up the deployments occasionally produces allocations
which overlap spatially. For example, in Fig. 5, both the
Small and Large deployments have an edge server from the
original deployment (marked with an asterisk) within the north-easternmost cluster, with some of the nearby APs allocated to
another edge server (marked with an X). Such deployments
are not obvious and are less likely to occur when a domain
expert is placing the servers, and speak in favor of the proposed
method.
On the other hand, spatial overlap seems to occur due to
mistakes made in the original placement. Indeed, in Fig. 7 (a)
and (b), the red boxes illustrate how the edge server placed in
the original deployment is different from the optimal placement
in the reference scenario. In such cases, comparing with the
reference, the network operator could consider the replacement
of some of the original edge servers to further improve QoS.
Typically in the edge server placement literature, the servers
are co-located with APs. However, omitting this constraint
(2) may provide valuable information about new candidate
places where APs may be located. The domain expert could
then detect, close to these optimal locations, new candidate
places where a new server could be set. Then the algorithm
could be re-run with the co-location constraint (2) with the
newly discovered places added to the candidate places, and
the QoS improvement considered. This procedure resembles
the approach taken by Yin et al. [19].
Overall, both scenarios appeared to produce relatively
good QoS compared to the reference scenario. The proposed
algorithm works well both in a case where a small pilot
deployment is extended to an extensive server network, and in
a case where an existing server network is further expanded.
Further, the proposed algorithm provides an important tool
for domain experts to design and analyze different solutions
for edge server deployments. This is illustrated by Fig. 7,
(a) Small deployment (5 + 15 servers). (b) Large deployment (15 + 5 servers).
Fig. 5: Edge server placements and AP allocations with the Small and Large deployment scenarios.
Fig. 6: Edge server placement for Reference deployment (20 servers).
showing the scaled-up Small and Large deployments compared
with the edge server placements of the reference deployments.
Such information is important for making placement decisions
in-situ.
VI. CONCLUSION
Edge server deployments need to scale up along the growth
of edge application usage. In this paper, we presented a novel
method for finding the optimal number of new servers to add,
their optimal locations, as well as the optimal allocations of
the access points for both the old and the new edge servers.
We evaluated the method with a real-world data set of Wi-Fi
access logs, comparing two example scenarios to a reference
setup.
The evaluation showed that the proposed method is stable
in city-wide placement scenarios with small and large initial
edge server deployments, resulting in optimized QoS for the
network operator. As future work we will study, in particular,
the capacity constraints and their effect on the optimization
process.
VII. ACKNOWLEDGEMENTS
This research is supported by the Academy of Finland 6Genesis Flagship (grant 318927), the Infotech Oulu research institute, the Future Makers program of the Jane and Aatos Erkko Foundation and the Technology Industries of Finland Centennial Foundation, by Academy of Finland Profi 5 funding for mathematics and AI: data insight for high-dimensional dynamics, and by a personal grant to Lauri Lovén for Edge-native AI research from the Tauno Tönning Foundation.
REFERENCES
[1]
M. Breitbach, D. Schäfer, J. Edinger, and C. Becker, “Context-aware data and task placement in edge computing environments,” in 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom). IEEE, 2019, pp. 1–10.
[2]
A. Yousefpour, C. Fung, T. Nguyen, K. Kadiyala, F. Jalali, A. Niakanlahiji,
J. Kong, and J. P. Jue, “All one needs to know about fog computing
and related edge computing paradigms: A complete survey,” Journal of
Systems Architecture, 2019.
[3]
L. Lovén et al., “Mobile road weather sensor calibration by sensor fusion and linear mixed models,” PLoS ONE, vol. 14, no. 2, pp. 1–17, 2019.
[4]
——, “Towards EDISON: An edge-native approach to distributed
interpolation of environmental data,” in 28th International Conference
on Computer Communications and Networks (ICCCN2019), 1st Edge of
Things Workshop 2019 (EoT2019). Valencia, Spain: IEEE, 2019.
[5]
L. Lovén, T. Leppänen, E. Peltonen, J. Partala, E. Harjula, P. Porambage, M. Ylianttila, and J. Riekki, “EdgeAI: A vision for distributed, edge-native artificial intelligence in future 6G networks,” in The 1st 6G Wireless Summit, Levi, Finland, 2019, pp. 1–2.
[6]
W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and
challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646,
2016.
(a) Small deployment (5 + 15 servers). (b) Large deployment (15 + 5 servers).
Fig. 7: Comparison of the Small and Large deployment scenarios to the Reference scenario. The red boxes highlight a difference
between both scaling scenarios and the reference, leading to spatial overlap in allocation.
[7]
J. Liu, J. Wan, B. Zeng, Q. Wang, H. Song, and M. Qiu, “A scalable and
quick-response software defined vehicular network assisted by mobile
edge computing,” IEEE Communications Magazine, vol. 55, no. 7, pp.
94–100, 2017.
[8]
T. Leppänen et al., “Developing Agent-Based Smart Objects for IoT Edge
Computing: Mobile Crowdsensing Use Case,” in Internet and Distributed
Computing Systems. IDCS 2018. Lecture Notes in Computer Science, vol
11226, Y. Xiang, J. Sun, G. Fortino, A. Guerrieri, and J. J. Jung, Eds.
Springer, Cham, 2018, pp. 235–247.
[9]
S. Yi, Z. Qin, and Q. Li, “Security and privacy issues of fog computing:
A survey,” in International conference on wireless algorithms, systems,
and applications. Springer, 2015, pp. 685–695.
[10]
M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The case for
VM-based cloudlets in mobile computing,” IEEE Pervasive Computing,
vol. 8, pp. 14–23, 2009.
[11]
F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its
role in the internet of things,” in Proceedings of the first edition of the
MCC workshop on Mobile cloud computing. ACM, 2012, pp. 13–16.
[12]
J. Haavisto, M. Arif, L. Lovén, T. Leppänen, and J. Riekki, “Open-source RANs in practice: An over-the-air deployment for 5G MEC,” in European Conference on Networks and Communications, 2019, [to appear].
[13]
K. Dolui and S. K. Datta, “Comparison of edge computing implementations: Fog computing, cloudlet and mobile edge computing,” in 2017
Global Internet of Things Summit (GIoTS). IEEE, 2017, pp. 1–6.
[14]
T. Lähderanta et al., “Edge server placement with capacitated location allocation,” arXiv preprint, 2019. [Online]. Available: https://arxiv.org/pdf/1907.07349.pdf
[15]
F. Zeng, Y. Ren, X. Deng, and W. Li, “Cost-effective edge server
placement in wireless metropolitan area networks,” Sensors, vol. 19,
no. 1, 2018.
[16]
J. Liu, U. Paul, S. Troia, O. Falowo, and G. Maier, “K-means based
spatial base station clustering for facility location problem in 5G,”
in Proceedings of Southern Africa Telecommunication Networks and
Applications Conference (SATNAC), J. Lewis and Z. Ndlela, Eds., Sept.
2018, pp. 406–409.
[17]
S. Wang, Y. Zhao, J. Xu, J. Yuan, and C.-H. Hsu, “Edge server
placement in mobile edge computing,” Journal of Parallel and Distributed
Computing, vol. 127, pp. 160–168, 2019.
[18]
H. Sinky, B. Khalfi, B. Hamdaoui, and A. Rayes, “Adaptive edge-centric cloud content placement for responsive smart cities,” IEEE Network, vol. 33, no. 3, pp. 177–183, 2019.
[19]
H. Yin, X. Zhang, H. H. Liu, Y. Luo, C. Tian, S. Zhao, and F. Li, “Edge provisioning with flexible server placement,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 4, pp. 1031–1045, 2017.
[20]
Y. Guo, S. Wang, A. Zhou, J. Xu, J. Yuan, and C.-H. Hsu, “User
allocation-aware edge cloud placement in mobile edge computing,”
Software: Practice and Experience, pp. 1–14, 2019.
[21]
R. Z. Farahani and M. Hekmatfar, Facility location: concepts, models,
algorithms and case studies, ser. Contributions to Management Science.
Physica-Verlag Heidelberg, 2009.
[22]
J. Brimberg, P. Hansen, N. Mladenovic, and S. Salhi, “A survey of solution
methods for the continuous location-allocation problem,” International
Journal of Operations Research, vol. 5, pp. 1–12, 01 2008.
[23]
L. Cooper, “Heuristic methods for location-allocation problems,” SIAM Review, vol. 6, no. 1, pp. 37–53, 1964. [Online]. Available: http://www.jstor.org/stable/2027512
[24]
S. Tafazzoli and M. Marzieh, Classification of Location Models and
Location Softwares, ser. Contributions to Management Science. Physica,
Heidelberg, 07 2009, pp. 505–521.
[25]
M. Negreiros and A. Palhano, “The capacitated centred clustering
problem,” Computers & Operations Research, vol. 33, no. 6, pp. 1639–
1663, 2006.
[26]
L. Kaufman and P. Rousseeuw, Clustering by Means of Medoids, ser.
Delft University of Technology : reports of the Faculty of Technical
Mathematics and Informatics. Faculty of Mathematics and Informatics,
1987.
[27] D. Arthur and S. Vassilvitskii, “K-means++: The advantages of careful
seeding,” in Proceedings of the Eighteenth Annual ACM-SIAM Symposium
on Discrete Algorithms, ser. SODA ’07. Philadelphia, PA, USA:
Society for Industrial and Applied Mathematics, 2007, pp. 1027–1035.
[Online]. Available: http://dl.acm.org/citation.cfm?id=1283383.1283494
[28]
T. M. Kodinariya and P. R. Makwana, “Review on determining number
of cluster in K-means clustering,” International Journal of Advance
Research in Computer Science and Management Studies, vol. 1, no. 6,
pp. 90–95, 2013.
[29]
V. Kostakos, T. Ojala, and T. Juntunen, “Traffic in the smart city:
Exploring city-wide sensing for traffic control center augmentation,”
IEEE Internet Computing, vol. 17, no. 6, pp. 22–29, Nov 2013.
... However, edge computing architectures come with several already envisioned challenges, including computational optimization and the physical placement of edge servers in dynamic scenarios with mobile users [5,6]. Load balancing in particular has been seen as a mission-critical challenge for any computing service, from the cloud to local networking capabilities [7]. ...
... In edge computing, workload management must, however, deal with user mobility and higher variance in server and network topologies and capacities, thus making it a distinct research topic. Workload management on the edge can be handled with different strategies, such as the physical placement of edge servers [5,12,36] or reallocating services on the software side with different optimization algorithms [18,37,38]. Reallocation can rely on known edge server features, such as capacity, or on their current state, such as load or even price [39]. ...
... While the study considered the Wi-Fi deployment of one geographical area, our earlier studies [5,12] have shown that the deployment is representative of an edge deployment spanning urban areas with a high AP density as well as suburban areas with a low AP density. ...
Article
Full-text available
Efficient resource usage in edge computing requires clever allocation of the workload of application components. In this paper, we show that under certain circumstances, the number of superfluous workload reallocations from one edge server to another may grow to a significant proportion of all user tasks—a phenomenon we present as a reallocation storm. We showcase this phenomenon on a city-scale edge server deployment by simulating the allocation of user task workloads in a number of scenarios capturing likely edge computing deployments and usage patterns. The simulations are based on a large real-world data set of city-wide Wi-Fi network connections, with more than 47M connections over ca. 560 access points. We study the occurrence of reallocation storms in three common edge-based reallocation strategies and compare the latency–workload trade-offs related to each strategy. As a result, we find that the superfluous reallocations vanish when the edge server capacity is increased above a certain threshold, unique for each reallocation strategy, peaking at ca. 35% of the peak ES workload. Further, while a reallocation strategy aiming to minimize latency consistently resulted in the worst reallocation storms, the two other strategies, namely a random reallocation strategy and a bottom-up strategy which always chooses the edge server with the lowest workload as a reallocation target, behave nearly identically in terms of latency as well as the reallocation storm in dense edge deployments. Since the random strategy requires much less coordination, we recommend it over the bottom-up one in dense ES deployments. Moreover, we study the conditions associated with reallocation storms. We discover that edge servers with the very highest workloads are best associated with reallocation storms, with other servers around the few busy nodes thus mirroring their workload. 
Further, we identify circumstances associated with an elevated risk of reallocation storms, such as summertime (ca. 4 times the risk than on average) and on weekends (ca. 1.5 times the risk). Furthermore, mass events such as popular sports games incurred a high risk (nearly 10 times that of the average) of a reallocation storm in a MEC-based scenario.
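The three reallocation strategies compared in the abstract above can be illustrated with a minimal sketch. The function and data structures below are hypothetical, not the authors' implementation; server workloads and latencies are assumed to be known dictionaries.

```python
import random

def pick_target(servers, strategy, latency_to):
    """Choose a reallocation target edge server.

    servers: dict {server_id: current_workload}
    strategy: 'latency', 'bottom-up', or 'random'
    latency_to: dict {server_id: estimated latency} (used by 'latency')
    """
    if strategy == "latency":
        # Latency-minimizing: always pick the nearest server. Per the
        # study above, this concentrates load and worsens reallocation storms.
        return min(servers, key=lambda s: latency_to[s])
    if strategy == "bottom-up":
        # Pick the least-loaded server; requires global load knowledge.
        return min(servers, key=lambda s: servers[s])
    # Random: no coordination needed; the study found it behaves nearly
    # identically to bottom-up in dense deployments.
    return random.choice(list(servers))
```

Since the random strategy needs no workload information from other servers, it avoids the coordination overhead of the bottom-up strategy, which is why the abstract recommends it for dense deployments.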
... In general, the more people downloaded the app, the better the data for contact tracing and at the time of writing, there have been almost six million downloads-of course, not every download means the app will be active, and so, notifications are sent as reminders to have the app turned on when leaving the home. The app is based on proximity sensing using Bluetooth Low Energy which is equipped in most smartphones on the market-the protocol is called BlueTrace 42 and is based on Singapore's TraceTogether app. 43 There is also MIT's SafePaths contact tracing app which uses GPS and Bluetooth 44 -in this app, the location log data collected on the smartphone is stored on the phone, and leaves the device only when the user sends the information to a public health authority-this is a similar design principle to BlueTrace which aims to defer sending of the data from the phone to the authority until there has been a diagnosis of a COVID-19 case. ...
... [last accessed: 30/6/2020]. 42 BlueTrace is an open source application protocol; https://bluetrace.io/ [last accessed: 30/6/2020]. ...
... 96 IoT is not only about sensor networks city-scale or over vast geographical areas, but also very much sensing and analytics in personal spaces. The quantified self movement 100 encourages self-tracking and analytics of the self [36], e.g., via wearable IoT devices, and one can even apply such ideas to understand the human driver [42]. ...
Chapter
This chapter reviews the notion (and visions) of the Automated City in popular press, and in research publications, and then attempts to outline a conceptualisation of the Automated City. We first discuss what form the Automated City can take, from a mainly technological perspective. But a city is really constituted by its human inhabitants. We then discuss the Automated City in relation to its inhabitants via metaphors as guiding lenses through which one can view and shape developments towards a vision of the humane Automated City.
Chapter
The previous chapter discussed particular issues in relation to Automated Vehicles, urban robots and urban drones. This chapter discusses visions, perspectives and challenges of the Automated City more generally, including aspirational visions of future cities, what must be overcome or addressed towards a favourable notion of the Automated City, and issues of governance, new business models, city transportation, sustainability, real-time tracking, urban edge computing, blockchain, technical challenges of cooperation, and trust, fairness and ethics in relation to AI and algorithms in the city—we elaborate on the last two aspects in more detail.
Book
The book outlines the concept of the Automated City, in the context of smart city research and development. While there have been many other perspectives on the smart city such as the participatory city and the data-centric city, this book focuses on automation for the smart city based on current and emerging technologies such as the Internet of Things, Artificial Intelligence and Robotics. The book attempts to provide a balanced view, outlining the promises and potential of the Automated City as well as the perils and challenges of widespread automation in the city. The book discusses, at some depth, automated vehicles, urban robots and urban drones as emerging technologies that will automate many aspects of city life and operation, drawing on current work and research literature. The book also considers broader perspectives of the future city, in the context of automation in the smart city, including aspirational visions of cities, transportation, new business models, and socio-technological challenges, from urban edge computing, ethics of the Automated City and smart devices, to large scale cooperating autonomous systems in the city.
Chapter
As illustrations of what constitutes the Automated City, this chapter highlights (among many) three types of technologies: (1) automated vehicles, (2) robots in indoor public spaces and outdoors (on city streets, e.g., cleaning robots, delivery robots, and other applications), and (3) drones (Unmanned Aerial Vehicles) in urban environments, discussing their potential and specific issues. Existing advancements and current limitations are highlighted, including technical challenges, human-machine interaction, and socio-technical issues including governance and safety for these three types of technologies.
... An edge server is a server on the edges of the network [29]; it is located where the corresponding function is required and distributed processing is performed [30]. The edge server performs compute offloading, data storage, caching, and processing. ...
Article
Full-text available
Recently, low-latency services for large-capacity data have been studied given the development of edge servers and wireless mesh networks. The 3D data provided for augmented reality (AR) services have a larger capacity than general 2D data. In the conventional WebAR method, a variety of data such as HTML, JavaScript, and service data are downloaded when they are first connected. The method employed to fetch all AR data when the client connects for the first time causes initial latency. In this study, we proposed a prefetching method for low-latency AR services. Markov model-based prediction via the partial matching (PPM) algorithm was applied for the proposed method. Prefetched AR data were predicted during AR services. An experiment was conducted at the Nowon Career Center for Youth and Future in Seoul, Republic of Korea from 1 June 2022 to 31 August 2022, and a total of 350 access data points were collected over three months; the prefetching method reduced the average total latency of the client by 81.5% compared to the conventional method.
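The prefetching idea described above can be sketched with a plain order-1 Markov predictor. This is a simplified stand-in for the PPM (prediction by partial matching) scheme the paper applies, with illustrative class and method names:

```python
from collections import defaultdict, Counter

class Order1Markov:
    """Minimal order-1 Markov next-item predictor, a simplified sketch of
    the PPM-based prefetching described above (PPM additionally blends
    multiple context orders, which is omitted here)."""

    def __init__(self):
        # transitions[prev][next] = observed count
        self.transitions = defaultdict(Counter)

    def observe(self, prev_item, next_item):
        """Record one observed transition between accessed AR assets."""
        self.transitions[prev_item][next_item] += 1

    def predict(self, current_item):
        """Return the most likely next asset to prefetch, or None."""
        counts = self.transitions.get(current_item)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

Prefetching the predicted asset while the user views the current one is what removes the initial download latency the conventional WebAR method incurs.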
... include user location, computational and communication resources, and application data. In essence, edge resources must be placed [58]- [60] and their resources allocated [61] in a way that considers such factors and their trade-offs. ...
Article
Full-text available
Edge Intelligence (EI) is an emerging computing and communication paradigm that enables Artificial Intelligence (AI) functionality at the network edge. In this article, we highlight EI as an emerging and important field of research, discuss the state of research, analyze research gaps and highlight important research challenges with the objective of serving as a catalyst for research and innovation in this emerging area. We take a multidisciplinary view to reflect on the current research in AI, edge computing, and communication technologies, and we analyze how EI reflects on existing research in these fields. We also introduce representative examples of application areas that benefit from, or even demand the use of EI.
... They formulated the edge server placement problem as a multi-objective optimization model and took into account the usage and characteristics of the in-place backhaul network. Lovén et al. studied the edge server scaling up problem in which new edge servers are deployed based on the initial deployment and formulated this problem as a NP-hard optimization problem [28]. They proposed a method that selects the optimal number of new edge servers and their placement, and reallocates APs optimally to the old and new edge servers. ...
Article
Full-text available
In the evolution of Internet of Things and 5G networks, edge computing, as an emerging computing paradigm, can effectively reduce the latency of accessing the cloud service and enhance the computing power for resource-constrained user devices. However, in existing communication scenarios, there are still situations where the infrastructure coverage is limited or devices are not covered. At the same time, device location changes constantly due to users’ uncertain mobility. In response to such situations, mobile and flexible equipment combined with cloudlet is used to achieve mobile deployment of cloudlets and provides computing power support for user devices. In this paper, a dynamic cloudlet deployment method based on clustering algorithm (DCDM-CA) is proposed to solve the problem of deploying mobile cloudlets for mobile applications. DCDM-CA determines the cloudlet deployment destination based on the geographic location of multiple devices and the number of tasks generated by multiple devices in a unit time period. In addition, the task offloading is optimized after deploying cloudlets to minimize the system response latency. Extensive simulations reveal that DCDM-CA can efficiently deploy mobile cloudlets, and the system response latency is minimized through optimizing task offloading.
... As an example, in [179] the authors consider the problem of scaling up an edge computing deployment by selecting the optimal number of new MEHs and their placement and reallocating access points optimally to the old and new MEHs. In this case, the considered performance is the Quality of Experience of users and the QoS of the network operator. ...
Preprint
The main innovation of the Fifth Generation (5G) of mobile networks is the ability to provide novel services with new and stricter requirements. One of the technologies that enable the new 5G services is the Multi-access Edge Computing (MEC). MEC is a system composed of multiple devices with computing and storage capabilities that are deployed at the edge of the network, i.e., close to the end users. MEC reduces latency and enables contextual information and real-time awareness of the local environment. MEC also allows cloud offloading and the reduction of traffic congestion. Performance is not the only requirement that the new 5G services have. New mission-critical applications also require high security and dependability. These three aspects (security, dependability, and performance) are rarely addressed together. This survey fills this gap and presents 5G MEC by addressing all these three aspects. First, we overview the background knowledge on MEC by referring to the current standardization efforts. Second, we individually present each aspect by introducing the related taxonomy (important for the not expert on the aspect), the state of the art, and the challenges on 5G MEC. Finally, we discuss the challenges of jointly addressing the three aspects.
Article
The network topology formation in an Edge Infrastructure-as-a-Service (EIaaS) paradigm must consider the placement of Edge Computational Nodes (ECNs) so as to minimize the delay. Existing ECN placement schemes consider redundant node density, non-optimal location selection, and distance-based association, which affect the ultra-low latency requirement(s) of applications. Further, per ECN to IoT nodes association is key to efficient utilization of ECNs and delay minimization between IoT node(s) and ECN. This work proposes a Cost-aware Edge Computational Node Placement (coECNP) scheme for optimal topology formation in EIaaS paradigm with the objective of IoT nodes delay minimization. It formulates ECN placement problem as a constrained optimization problem. Each iteration in the location discovery module of coECNP identifies optimal placement location by utilizing IoT node’s density on an updated set of IoT nodes and hop-distance among previous iterations’ ECN locations and current candidate locations. As a result, it maximizes the number of IoT nodes that access ECN with minimum hop-distance, leading to end-to-end delay minimization. The assignment module of coECNP takes care of previously assigned nodes in each iteration before associating new IoT nodes to the nearest ECN to attain balanced mapping. Thus, it alleviates total delay from IoT node to respective ECN and enhances edge resource utilization to cater the application(s) near real-time execution requirement(s). The performance comparison indicates that coECNP achieves promising results by reducing IoT nodes delay by 23%–64%, 20%–66%, and 35%–73% on periodic, event-based, and query-based data traffic scenarios, respectively, under various network settings, compared to the benchmark solutions.
Article
Full-text available
The deployment of edge computing infrastructure requires a careful placement of the edge servers, with an aim to improve application latencies and reduce data transfer load in opportunistic Internet of Things systems. In the edge server placement, it is important to consider computing capacity, available deployment budget, and hardware requirements for the edge servers and the underlying backbone network topology. In this paper, we thoroughly survey the existing literature in edge server placement, identify gaps and present an extensive set of parameters to be considered. We then develop a novel algorithm, called PACK, for server placement as a capacitated location–allocation problem. PACK minimizes the distances between servers and their associated access points, while taking into account capacity constraints for load balancing and enabling workload sharing between servers. Moreover, PACK considers practical issues such as prioritized locations and reliability. We evaluate the algorithm in two distinct scenarios: one with high capacity servers for edge computing in general, and one with low capacity servers for Fog computing. Evaluations are performed with a data set collected in a real-world network, consisting of both dense and sparse deployments of access points across a city area. The resulting algorithm and related tools are publicly available as open source software.
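The capacitated location-allocation idea behind PACK, as summarized above, can be sketched with a greedy nearest-feasible assignment. This is an illustrative simplification, not the published PACK algorithm (which also handles prioritized locations, reliability, and workload sharing); all names and the uniform capacity cap are assumptions:

```python
def allocate_capacitated(aps, servers, capacity):
    """Assign access points to edge servers, minimizing distance while
    respecting a per-server capacity cap (in APs served).

    aps, servers: dicts {id: (x, y)} of planar coordinates.
    """
    def dist2(p, q):
        # Squared Euclidean distance; sufficient for ordering candidates.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    load = {s: 0 for s in servers}
    assignment = {}
    for ap, pos in aps.items():
        # Try servers from nearest to farthest, take the first with
        # spare capacity; capacity thus balances load across servers.
        for s in sorted(servers, key=lambda s: dist2(pos, servers[s])):
            if load[s] < capacity:
                assignment[ap] = s
                load[s] += 1
                break
    return assignment
```

In a real placement problem the server coordinates themselves are decision variables, so a full solver alternates between an allocation step like this and relocating each server to the centroid of its assigned APs.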
Conference Paper
Full-text available
Prevalent weather prediction methods are based on sensor data, collected by satellites and a sparse grid of stationary weather stations. Various initiatives improve the prediction models by including additional data sources such as mobile weather sensors, mobile phones, and micro weather stations of, for example, smart homes. The underlying computing paradigm is predominantly centralized, with all data collected and analyzed in the cloud. This solution is not scalable. When the spatial and temporal density of weather sensor data grows, the required data transmission capacities and computational resources become unfeasible. We identify the challenges posed by spatial distribution of a weather prediction model, and suggest solutions for those challenges. We propose EDISON: an edge-native interpolation approach based on AI methods, distributed horizontally on edge servers. Finally, we demonstrate EDISON with a simple, simulated setup.
Conference Paper
Full-text available
Edge computing that leverages cloud resources to the proximity of user devices is seen as the future infrastructure for distributed applications. However, developing and deploying edge applications, that rely on cellular networks, is burdensome. Such network infrastructures are often based on proprietary components, each with unique programming abstractions and interfaces. To facilitate straightforward deployment of edge applications, we introduce open-source software (OSS) based radio access network (RAN) on over-the-air (OTA) commercial spectrum with Development Operations (DevOps) capabilities. OSS allows software modifications and integrations of the system components, e.g., Evolved Packet Core (EPC) and edge hosts running applications, required for new data pipelines and optimizations not addressed in standardization. Such an OSS infrastructure enables further research and prototyping of novel end-user applications in an environment familiar to software engineers without telecommunications background. We evaluated the presented infrastructure with end-to-end (E2E) OTA testing, resulting in 7.5MB/s throughput and latency of 21ms, which shows that the presented infrastructure provides low latency for edge applications.
Article
Full-text available
With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.
Article
Full-text available
Mobile, vehicle-installed road weather sensors are becoming ubiquitous. While mobile sensors are often capable of making observations on a high frequency, their reliability and accuracy may vary. Large-scale road weather observation and forecasting are still mostly based on stationary road weather stations (RWS). Though expensive, sparsely located and making observations on a relatively low frequency, RWS’ reliability and accuracy are well-known and accommodated for in the road weather forecasting models. Statistical analysis revealed that road weather conditions indeed have a great effect on how the observations of mobile and stationary road weather temperature sensors differ from each other. Consequently, we calibrated the observations of mobile sensors with a linear mixed model. The mixed model was fitted by fusing ca. 20 000 pairs of mobile and RWS observations of the same location at the same time, following a rendezvous model of sensor calibration. The calibration nearly halved the MSE between the observations of the mobile and the RWS sensor types. Computationally very light, the calibration can be embedded directly in the sensors.
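The rendezvous-style calibration described above can be sketched with an ordinary least-squares fit of station readings against co-located mobile readings. This is a lightweight stand-in for the paper's linear mixed model, which additionally includes random effects for road weather conditions; the function names and toy data are illustrative:

```python
def fit_linear_calibration(mobile_obs, rws_obs):
    """OLS fit of rws ~ a + b * mobile over rendezvous observation pairs
    (same location, same time). Returns the intercept and slope."""
    n = len(mobile_obs)
    mx = sum(mobile_obs) / n
    my = sum(rws_obs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(mobile_obs, rws_obs))
    sxx = sum((x - mx) ** 2 for x in mobile_obs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def calibrate(x, a, b):
    """Map a raw mobile reading onto the station reference scale."""
    return a + b * x
```

Once fitted, the calibration is just two stored coefficients and one multiply-add per reading, which is what makes it light enough to embed directly in the sensors.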
Conference Paper
Full-text available
Edge computing, a key part of the upcoming 5G mobile networks and future 6G technologies, promises to distribute cloud applications while providing more bandwidth and reducing latencies [1]. The promises are delivered by moving application-specific computations between the cloud, the data producing devices, and the network infrastructure components at the edges of wireless and fixed networks. In stark contrast, current artificial intelligence (AI) and in particular machine learning (ML) methods assume computations are conducted in a homogeneous cloud with ample computational and data storage resources available. Currently, AI's cloud-centric architectural model requires transmitting data from the end-user devices to the cloud, consuming significant data transmission resources and introducing latencies. Previous studies address AI in different perspectives of IoT, edge computing and networks [2], [3], [4], [5]. However, we provide a holistic view of AI methods and capabilities in the context of edge computing. In our vision, a holistic view of AI methods for edge computing comprises the well-known paradigms, such as predictive data analysis, machine learning , reasoning, and autonomous agents with learning and cognitive capabilities. Further, the edge environment with its opportunistic nature, intermittent connectivity, and interplay of numerous stakeholders, presents a unique environment for deploying such applications based on computations units with different degrees of intelligence capabilities. Joint consideration of edge computing and AI methods, EdgeAI, improves both fields in a variety of aspects. We aim to identify the challenges and detail the potential benefits of AI at the edge, building a coherent and overarching vision of what distributed artificial intelligence means in the context of edge computing. 
Further, we aim to find the methods realizing those benefits, testing hypotheses in a real-world setting on the edge platform atop the 5G test network (http://5gtn.fi). Our vision will be realized within the 8-year span of the Academy of Finland 6Genesis Flagship.
Article
Full-text available
In this paper, we propose content-centric, in-network content caching and placement approaches that leverage cooperation among edge cloud devices, content popularity, and GPS trajectory information to improve content delivery speeds, network traffic congestions, cache resource utilization efficiency, and users’ quality of experience in highly populated cities. More specifically, our proposed approaches exploit collaborative filtering theory to provide accurate and efficient content popularity predictions to enable proactive in-network caching of Internet contents. We propose a practical content delivery architecture that consists of standalone edge cloud devices to be deployed in the city to cache and process popular Internet contents as it disseminates throughout the network. We also show that our proposed approaches ensure responsive cloud content delivery with minimized service disruption.
Article
Full-text available
Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.
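The greedy idea in the abstract above, iteratively choosing nodes that can connect as many other nodes as possible, is essentially the classic greedy heuristic for the minimum dominating set. The sketch below shows that core step only, omitting the delay, degree, cluster-size and capacity constraints the paper adds; the representation is an assumed adjacency dictionary:

```python
def greedy_dominating_set(adjacency):
    """Greedy minimum dominating set heuristic: repeatedly pick the node
    covering the most still-uncovered nodes (itself plus its neighbors).

    adjacency: dict {node: set of neighboring nodes}.
    Returns the list of chosen nodes (candidate edge server sites).
    """
    uncovered = set(adjacency)
    chosen = []
    while uncovered:
        best = max(adjacency,
                   key=lambda n: len(({n} | adjacency[n]) & uncovered))
        chosen.append(best)
        uncovered -= {best} | adjacency[best]
    return chosen
```

Each chosen node dominates its neighborhood, so the loop terminates once every access point is within one hop of some selected server site; minimizing the number of chosen nodes corresponds to minimizing the number of edge servers.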