https://doi.org/10.1007/s10723-018-9437-3
A Lightweight Service Placement Approach for Community Network Micro-Clouds

Mennan Selimi · Llorenç Cerdà-Alabern · Felix Freitag · Luís Veiga · Arjuna Sathiaseelan · Jon Crowcroft

Received: 20 July 2017 / Accepted: 12 February 2018
© The Author(s) 2018. This article is an open access publication
Abstract Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only a better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology and the high software and hardware diversity of CNs, a “careful” placement of micro-cloud services over the network is required to optimize service performance. This paper proposes to leverage state information about the network to inform service placement decisions, and to do so through a fast heuristic algorithm, which is critical to react quickly to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 2x in bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service, and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that client response times decrease by up to an order of magnitude when using our heuristic. Since these improvements translate into the QoE (Quality of Experience) perceived by the user, our results are relevant for contributing to higher QoE, a crucial parameter for using services from volunteer-based systems and adopting CN micro-clouds as an ecosystem for service deployment.

M. Selimi (✉) · A. Sathiaseelan · J. Crowcroft
University of Cambridge, Cambridge, UK
e-mail: mennan.selimi@cl.cam.ac.uk

A. Sathiaseelan
e-mail: arjuna.sathiaseelan@cl.cam.ac.uk

J. Crowcroft
e-mail: jon.crowcroft@cl.cam.ac.uk

L. Cerdà-Alabern · F. Freitag
Universitat Politècnica de Catalunya, BarcelonaTech, Barcelona, Spain

L. Cerdà-Alabern
e-mail: llorenc@ac.upc.edu

F. Freitag
e-mail: felix@ac.upc.edu

L. Veiga
Instituto Superior Técnico (IST), INESC-ID Lisboa, Lisbon, Portugal
e-mail: luis.veiga@inesc-id.pt
J Grid Computing (2019) 17:169–189
Published online: 28 February 2018
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Keywords Service placement ·Community
networks ·Micro-clouds ·Edge-clouds ·Wireless
mesh networks
1 Introduction
Since the early 2000s, community networks (CNs), or “Do-It-Yourself” networks, have gained momentum in
response to the growing demands for network con-
nectivity in rural and urban communities. The main
singularity of CNs is that they are built “bottom-up”,
mixing wireless and wired links, with communities
of citizens building, operating and managing the net-
work. The result of this open, agglomerative, organic
process is a very heterogeneous network, with self-
managing links and devices. For instance, devices are
typically “low-tech”, built entirely by off-the-shelf
hardware and open source software, which communi-
cate over wireless links. This poses several challenges,
such as the lack of service guarantees, inefficient use
of the available resources, and absence of security, to
name just a few.
These challenges have not precluded CNs from flourishing. For instance, Guifi.net,1 located in the Catalonia region of Spain, is a successful example of this paradigm.
Guifi.net is a “crowdsourced network”, i.e., a net-
work infrastructure built by citizens and organiza-
tions who pool their resources and coordinate their
efforts to make these networks happen [7]. In this
network, the infrastructure is established by the par-
ticipants and is managed as a common resource [5].
Guifi.net is the largest and fastest-growing CN worldwide. Some measurable indicators are the number of nodes (>34,000), the geographic scope (>50,000 km of links), and the Internet traffic. Regarding the Internet traffic, Fig. 1 depicts the evolution of the total inbound (pink) and outbound (yellow) traffic from and to the Internet over the last two years. A mere inspection of this figure tells us that Guifi.net traffic has tripled (i.e., 3 Gbps peak).
Traffic peaks correspond to the arrival of new users
and deployment of bandwidth-hungry services in the
network. Actually, a significant number of services,
including GuifiTV, graph servers, mail and game ser-
vices, are running within Guifi.net. All these services
1 http://guifi.net/
have been provided by individuals, social groups, and
small non-profit or commercial service providers.
Guifi.net's ultimate aim is to create a full digital ecosystem that covers a highly localized area. But this mission is not so simple. A quick glance at the type of services that users demand reveals that the percentage of Internet services (e.g., proxies) is higher than 50% [13, 30]. This confirms that Guifi.net users are typically interested in mainstream Internet services, which imposes a heavy burden on the “thin” backbone links, with users experiencing high service variability. The main reason why local services have not been developed within CNs, or have not gained traction among the members, is the lack of streamlined mechanisms to exploit all the resources available within the CNs. As a result, the development of these types of services can be very challenging.
The current network deployment model in the Guifi.net CN is based on geographic singularities rather than on QoS (Quality of Service). The resources in the network are not uniformly distributed [41]. Wireless links have asymmetric quality for the services, and there is a highly skewed traffic and bandwidth distribution [10].
Further, the network topology in a wireless CN
such as Guifi.net is organic and different with respect
to conventional ISP (Internet Service Provider) net-
works [44]. Guifi.net is composed of numerous dis-
tributed CNs and they represent different types of
network topologies. The overall topology is constantly
changing and there is no fixed topology as in the
Data Center (DC) environment. The Guifi.net network shows some patterns typical of urban networks (i.e., mesh networks) combined with an unusual deployment that fits completely neither organically grown networks nor planned networks [42]. This implies that a service placement solution (i.e., algorithm) that works in a certain topology might not work in another one.
The infrastructure in the Guifi.net CN is highly
unreliable and heterogeneous [41]. Devices and the
network are very heterogeneous compared to the DCs
where they are very homogeneous. The strong hetero-
geneity is due to the diverse capacity of nodes and
links, as well as the asymmetric quality of wireless
links. Employed technologies in the network vary sig-
nificantly, ranging from very low-cost, off-the-shelf
wireless (WiFi) routers, home gateways, laptops to
expensive optical fiber equipment [4,32]. In terms of
Fig. 1 Guifi.net inbound and outbound traffic (2014–2016)
demand distribution, the demand comes directly from
the edge so there are no central load balancers as in
the DC environments.
Among other issues, the above-mentioned chal-
lenges spurred the invention of “alternative” service
deployment models to cater for users in the Guifi.net.
One of these models was that based on micro-clouds.
A micro-cloud is nothing but a platform to deliver
services to a local community of citizens within the
vast CN. Services can be of any type, ranging from
personal storage [29] to video streaming and P2P-TV
[28]. Observe that this model is different from Fog
computing [9,21], which extends cloud computing by
introducing an intermediate layer between devices and
datacenters. Micro-clouds take the opposite track, by
putting services closer to consumers, so that no further
or minimal action takes place in the Internet. The idea
is to tap into the shorter, faster connectivity between
users to deliver a better service and alleviate overload
in the backbone links.
This approach, however, poses new challenges,
such as that of the optimal placement of micro-clouds
within the CN to overcome suboptimal performance.
And Guifi.net is not an exception. Obviously, a place-
ment algorithm that is agnostic to the state of the
underlying network may lead to important ineffi-
ciencies. Although conceptually straightforward, it is
challenging to calculate an optimal decision due to the
dynamic nature of CNs and usage patterns.
This paper tries to answer the following three
research questions:
1. First, given that sufficient state information is
in place, is network-aware placement enough to
deliver satisfactory performance to CN users?
2. Second, can the redundant placement of services
further improve performance?
3. Third, given a CN micro-cloud infrastructure,
what is an effective and low-complexity service
placement solution that maximizes the end-to-
end performance (e.g., bandwidth), taking into
account the dynamic behavior of the network and
resource availability?
To answer these questions, we contribute in this
work with a new placement heuristic called BASP
(Bandwidth and Availability-aware Service Place-
ment), which uses the state of the underlying CN
to optimize service deployment [27]. In particular,
it considers two sources of information: i) network
bandwidth and ii) node availability to make opti-
mized decisions. Compared with brute-force search,
which takes in the order of hours to complete, BASP
runs much faster; it just takes a few seconds, while
achieving equally good results.
Our results show that the BASP heuristic consis-
tently outperforms random placement, the existing
in-place and naturally fast strategy in Guifi.net, by
2x with respect to end-to-end bandwidth gain. Driven
by these findings, we then ran BASP in a real CN
and quantified the boost in performance achieved after
deploying a live video-streaming and Web 2.0 service
according to BASP. Our experimental results demonstrate that with BASP, the video chunk loss on the peer side decreased by up to 3 percentage points, i.e., a 37% reduction in the packet loss rate, which is a significant improvement. Furthermore, when using BASP with the Web 2.0 service (i.e., a social networking service), the client response times decreased by up to an order of magnitude.
The rest of the paper is organized as follows. In Section 2 we define CN micro-clouds and describe and characterize the performance of a production CN, the QMP (Quick Mesh Project) network, which is a subset of the Guifi.net CN. Section 3 defines our system model and presents our BASP heuristic. In Section 4 we discuss the evaluation results of our BASP heuristic, using the QMP network traces. In Section 5 we present and discuss the real deployment experiments with a video-streaming and a Web 2.0 service. Section 6 describes related work and Section 7 concludes and discusses future research directions.
2 Background and Network Characterization
The adoption of the CN micro-cloud services requires
carefully addressing the service deployment and per-
formance requirements. Our service placement strat-
egy considers two aspects: node availability and net-
work bandwidth. As the first step, it is vital to under-
stand the behavior of these two dimensions in a real
CN. We achieve this by characterizing, over a five-month period, a production wireless CN: the QMP network, which is a subset of Guifi.net. Our
goal is to determine the key features of the network
(e.g. bandwidth, traffic distribution), of the nodes (e.g.,
availability patterns) and service types in the network
that could help us to design new heuristics for intelli-
gent service placement in CNs.
2.1 Micro-Clouds in the Community Networks
CN micro-clouds are built on top of the CNs. In
this model, a cloud is deployed closer to CN users
and other existing network infrastructure (e.g., public
schools, strategic locations etc.). CN micro-clouds
take the opposite track from Fog Computing, by
putting services closer to consumers, so that no fur-
ther or minimal action takes place in the Internet. In
CN micro-clouds, by contrast to other edge comput-
ing models, the users of edge services are enabled to
collaborate and actively participate in the service pro-
vision, and contribute to sustain edge micro-clouds.
They are deployed over a single node or a set of user nodes, and, compared to public clouds, they have a smaller scale, so one still gets high performance due to locality and control over service placement.
The devices forming the CN micro-clouds are co-located either in users' homes (e.g., as home gateways, routers, laptops, parabolic antennas etc., as shown in Fig. 2) or distributed in the CNs. The concept
of micro-clouds can also be introduced in order to
Fig. 2 Devices forming a CN micro-cloud (home gateways,
routers, laptops, set-top boxes, antennas etc.)
split deployed CN nodes into different groups. For
instance, a micro-cloud can refer to those nodes that are within the same service announcement and discovery domain. Different criteria can be applied to determine to which micro-cloud a node belongs. Applying technical criteria (e.g., round-trip time (RTT), bandwidth, number of hops, resource characteristics) for micro-cloud assignment is one possibility to optimize the performance of several services. But social criteria may also be used; e.g., bringing together in a micro-cloud the resources of users who are socially close may improve acceptance, the willingness to share resources, and the maintenance of the infrastructure.
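Applying a technical criterion such as RTT for micro-cloud assignment could be sketched as follows (a purely illustrative example; the threshold, data layout, and names are our assumptions, not part of the QMP design):

```python
def assign_by_rtt(rtt_to_heads, threshold_ms):
    """Assign each node to the micro-cloud head with the lowest RTT,
    provided it is under the threshold; otherwise leave it unassigned.
    rtt_to_heads: {node: {head: rtt_ms}} (hypothetical layout)."""
    assignment = {}
    for node, rtts in rtt_to_heads.items():
        head, rtt = min(rtts.items(), key=lambda kv: kv[1])
        assignment[node] = head if rtt <= threshold_ms else None
    return assignment

assignment = assign_by_rtt({"n1": {"h1": 4.0, "h2": 9.0},
                            "n2": {"h1": 25.0, "h2": 30.0}},
                           threshold_ms=10.0)
# n1 joins h1 (4 ms); n2 stays unassigned (best RTT 25 ms > 10 ms)
```

The same skeleton accepts any other criterion (bandwidth, hop count) by swapping the metric and the comparison.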
2.2 The QMP Network: an Urban CN of the Guifi.net
The QMP network began to operate in 2009 in a quarter of the city of Barcelona, Spain, called Sants, as part of the Quick Mesh Project (QMP).2 The QMP network is an urban mesh network and a subset of the Guifi.net CN, sometimes called GuifiSants. At the time of writing, the QMP has around 77 nodes. There are two gateways (i.e., proxies) distributed in the network that connect the QMP to the rest of Guifi.net and the Internet (highlighted in Fig. 3). A detailed description of QMP can be found in [10].
Typically, the QMP users have an outdoor router (OR) with a Wi-Fi interface on the roof, connected through Ethernet to an indoor AP (access point) as a premises network. The most common OR in the QMP is the NanoStation M5, shown in Fig. 2,
2 http://qmp.cat
Fig. 3 QMP network topology
which is used to build point-to-point links in the network and integrates a sectorial antenna with a router furnished with a wireless 802.11an interface. Some strategic locations have several NanoStations, which provide larger coverage. In addition, some links of several kilometers are set up with parabolic antennas (NanoBridges). ORs in the QMP are flashed with a Linux distribution developed within the QMP project, a branch of OpenWrt,3 and use BMX6 and BMX7 as the routing protocols [25].
The user devices connected to the ORs consist of Minix Neo Z64 and Jetway mini PCs, which are equipped with an Intel Atom CPU. They run the Cloudy4 operating system, which leverages the Docker containerization technology and allows CN users to launch their favorite or predefined Docker images in a few clicks from their browser. This rapid application provisioning allows room for new, very dynamic ways to deploy services and share resources in a digital community.
Methodology and Data Collection The measurements
have been obtained by connecting via SSH to each
QMP OR and running basic system commands avail-
able in the QMP distribution. This method has the
advantage that no additional software needs to be
installed on the nodes. Live measurements have been taken hourly over a five-month period, from July 2016 to November 2016, and our live monitoring page and data are publicly available on the Internet.5 We use this data to analyze the main aspects of the QMP network.

3 https://openwrt.org/
4 http://cloudy.community/

Fig. 4 Number of local services in the Guifi.net (network and user-focused)
2.3 Services in the QMP Network
In the Guifi.net (QMP) CN, the Internet cloud ser-
vices have equivalent alternatives that are owned and
operated at the community level. There are two types of services in the network: network-focused and user-focused services. Figure 4 depicts the evolution of user- and network-focused services during the last 10 years.
Considering that network management is of interest to all users in the network (i.e., to keep the network up and running), Fig. 4 reveals that services related to the network operation outnumber the local services intended for end-users. However, in recent years the local user services have also been gaining traction, as Fig. 4 demonstrates.
Moreover, the most frequent of all the services,
whether user-focused or network-focused, are the
proxy services [12]. Proxies act as free gateways to
the Internet for the CN users. Specifically for the user-
focused services, the percentage of the Internet access
services (i.e., proxies and tunnel-based) is higher than
55%, confirming that the users of Guifi.net are typi-
cally interested in accessing the Internet [30]. Further,
other important services are web hosting, data stor-
age, VoIP, and video streaming. From the service
placement point of view, we focus on both types of services in the network.
5 http://dsg.ac.upc.edu/qmpsu/
2.4 Node Availability
The quality and state of the heterogeneous hardware
used in the QMP influences the stability of the links
and network performance. Availability of the QMP
nodes is used as an indirect metric for the quality
of connectivity that new members expect from the
network.
Figure 5 shows the Empirical Cumulative Dis-
tribution Function (ECDF) of the node availability
collected for a period of five months. We define the
availability of a node as the percentage of times that
the node appears in a capture, counted since the node
shows up for the first time. A capture is an hourly net-
work snapshot that we take from the QMP network
(i.e., we took 2718 captures in total). Figure 5 reveals that 25% of the nodes have an availability lower than 90%, while the remaining nodes have an availability between 90–100%. In a CN such as QMP, users do not tend to
deliberately reboot the device unless they have to per-
form an upgrade, which is not very common. Hence,
the percentage of times that a node appears in a capture
is a relatively good measure of the node availability
due to random failures.
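The availability score described above can be computed directly from the hourly captures. A minimal sketch (the capture layout and function name are our assumptions; the QMP monitoring format is not specified here):

```python
def availability_scores(captures):
    """R_n: fraction of captures in which node n appears, counted
    from the first capture where n shows up."""
    first_seen = {}   # index of the first capture containing the node
    seen_count = {}   # number of captures containing the node
    for idx, capture in enumerate(captures):
        for node in capture:
            if node not in first_seen:
                first_seen[node] = idx
            seen_count[node] = seen_count.get(node, 0) + 1
    total = len(captures)
    return {n: seen_count[n] / (total - first_seen[n]) for n in first_seen}

# Four hourly snapshots; node C only shows up in the last one.
captures = [{"A", "B"}, {"A"}, {"A", "B"}, {"A", "B", "C"}]
scores = availability_scores(captures)
# A: 4/4 = 1.0, B: 3/4 = 0.75, C: 1/1 = 1.0
```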
Compared with the availability distribution reported in a similar study and environment on PlanetLab [43], a QMP node has a higher probability of being disconnected or unreachable from the network. The fact that PlanetLab showed a higher average availability (i.e., sysUpTime) on its nodes may be because it is an experimental testbed running
Fig. 5 Node availability in the QMP network
on much more stable computers and environment. Fur-
thermore, the QMP members are not only responsible
for the maintenance of their nodes, but also for ensur-
ing a minimum standard of connectivity with other
parts of the network.
Figure 6 depicts the number of nodes and links across the captures. The figure shows that the QMP is growing. Overall, 77 different nodes were detected. Of those, 71 were alive during the entire measurement period. Around 6 nodes were missing in the majority of the captures. These are temporarily working nodes from other mesh networks and laboratory devices used for various experiments. Figure 6 also reveals that on average 175 of the links used between nodes are bidirectional and 34 are unidirectional. For bidirectional links, we count both links in opposite directions as a single link.
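The counting rule above, where a pair of opposite directed links counts once as bidirectional, can be sketched as follows (an illustrative helper, not from the paper):

```python
def classify_links(directed_links):
    """Count opposite-direction pairs once as bidirectional; the rest
    are unidirectional."""
    links = set(directed_links)
    bidirectional = {frozenset((u, v)) for (u, v) in links if (v, u) in links}
    unidirectional = [(u, v) for (u, v) in links if (v, u) not in links]
    return len(bidirectional), len(unidirectional)

bi, uni = classify_links([("a", "b"), ("b", "a"), ("a", "c")])
# a<->b counts once as bidirectional; a->c is unidirectional
```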
In summary, node availability is important to iden-
tify those nodes that will minimize service interrup-
tions over time. Based on the measurements, we assign
availability scores (Rn) to each of the nodes. The
highly available nodes are the possible candidates for deploying the micro-cloud services.
2.5 Bandwidth Characterization
A significant amount of services that run on the
QMP and Guifi.net network are network-intensive
(i.e., bandwidth and delay sensitive), transferring large
amounts of data between the network nodes [8,30].
The performance of such kind of services depends not
Fig. 6 Node and link presence in the QMP network
just on computational and disk resources but also on
the network bandwidth between the nodes on which
they are deployed. Therefore, considering the network
bandwidth when placing services in the network is of
high importance.
First, we characterize the wireless links of the QMP network by studying their bandwidth. Figure 7 shows the average bandwidth distribution of all the links. The figure shows that the link throughput distribution has a mean of 21.8 Mbps. At the same time, Fig. 7 reveals that 60% of the nodes have a throughput of 10 Mbps or less. The average bandwidth of 21.8 Mbps obtained in the network allows many popular bandwidth-hungry services to run without big interruptions. This high performance can be attributed to the 802.11an devices used in the network.
In order to see the variability of the bandwidth, Fig. 8 shows the bandwidth averages in both directions of the three busiest links. The upload direction is depicted with a solid line and the download direction with a dashed line. The nodes of the three busiest links are highlighted at the top of the figure. We noted that the asymmetry of the bandwidths measured in both directions is not always due to the asymmetry of the user traffic (not shown in the graphs). For instance, for node GSgranVia255 around 6 am, when the user traffic is at its lowest and equal in both directions, the asymmetry of the link bandwidths observed in Fig. 8 remains the same. We thus conclude that even though the bandwidth is from time to time slightly affected by the traffic, the asymmetry of the links that we see might be due to the
Fig. 7 Bandwidth distribution of the links
Fig. 8 Bandwidth in the three busiest links (GSgV-nsl-b828/GSgranVia255nl-c493, GSgranVia255-db37/GScallao3Rd1-9090, UPCc6-ab/UPC-ETSEIB-NS-7094)
link characteristics, such as the level of interference present at each end, or different transmission powers.
In order to measure the link asymmetry, Fig. 9 depicts the bandwidth measured in each direction. A boxplot of the absolute value of the deviation over the mean is also depicted on the right. The figure shows that around 25% of the links have a deviation higher than 40%. At the same time, another 25% of the links have a deviation of less than 10%. After performing some measurements regarding the signal power of the devices, we discovered that some of the community members have re-tuned the radios of their devices (e.g., transmission power, channel and other parameters), trying to achieve better performance, thus changing the characteristics of the links. We can therefore conclude that the symmetry of links, an assumption often used in the literature on wireless mesh networks, is not very realistic in our case, and service placement algorithms unquestionably need to take this into account.
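The deviation-over-the-mean statistic behind Fig. 9 can be sketched as follows (our reading of the metric; the function name is ours):

```python
def asymmetry_deviation(bw_forward, bw_reverse):
    """Absolute deviation of one direction's bandwidth from the link
    mean, as a percentage; 0% means a perfectly symmetric link."""
    mean = (bw_forward + bw_reverse) / 2.0
    return abs(bw_forward - mean) / mean * 100.0

# A link measuring 30 Mbps one way and 10 Mbps the other:
# mean = 20 Mbps, deviation = |30 - 20| / 20 = 50%
deviation = asymmetry_deviation(30.0, 10.0)
```

Under this metric, the observation above reads: for about 25% of the links, each direction differs from the link mean by more than 40%.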
2.6 Discussion
Here are some observations (features) that we have
derived from the measurements in the QMP network:
Dynamic Topology The QMP network is highly dynamic and diverse for many reasons, e.g., its community nature in an urban area; its decentralized organic growth with extensive diversity in the
technological choices for hardware, wireless media,
link protocols, channels, routing protocols etc.; its
Fig. 9 Bandwidth asymmetry
mesh topology etc. The current network deployment
model is based on geographic singularities rather than
QoS. The network is not scale-free. The topology is
organic and different with respect to conventional ISP
networks.
Non-uniform Resource Distribution The resources are not uniformly distributed in the network. Wireless links have asymmetric quality for services (25% of the links have a deviation higher than 40%). We observed a highly skewed traffic pattern and a highly skewed bandwidth distribution (Fig. 7).
The organic (i.e., random) placement scheme currently used in the QMP, and in Guifi.net in general, is utterly inefficient: it fails to capture the dynamics of the network and therefore fails to deliver satisfying QoS. The strong assumption underlying random service placement, i.e., a uniform distribution of resources, does not hold in such environments.
Furthermore, the deployed services have different QoS requirements. Services that require intensive inter-component communication (e.g., a streaming service) can perform better if the replicas (i.e., service components) are placed close to each other on high-capacity links [28]. On the other side, bandwidth-intensive services (e.g., distributed storage, video-on-demand) can perform much better if their replicas are as close as possible to their final users (i.e., an overall reduction of bandwidth for service provisioning) [31].
Our goal is to build on this insight and design a
network-aware service placement heuristic that will
improve the service quality and network performance
by optimizing the usage of scarce resources in CNs
such as bandwidth.
3 Context and Problem
Based on the network measurements we performed on the QMP network, in this section we first describe our model of the network and service graphs. Subsequently, we build on this to describe the service placement problem. The symbols used in this section are listed in Table 1.
3.1 Network Graph
The deployment and sharing of services in CNs is
made available through community network micro-
clouds (CNMCs). The idea of CNMC is to place the
Table 1 Input variables

Symbol    Description
N         Set of physical nodes in the network
E         Set of edges (physical links) in the network
S         Set of services
D         Set of service copies
k         Max number of service copies
Be        Bandwidth capacity of link e
βs1,s2    Bandwidth requirement between services s1 and s2
Rn        Availability of node n
λ         Availability threshold
cloud at the edge closer to community end-users, so
users can have fast and reliable access to the ser-
vice. To reach its full potential, a CNMC needs to be
carefully deployed in order to effectively take advan-
tage and utilize efficiently the available bandwidth
resources.
In a CNMC, a server or low-power device (i.e.,
home gateway) is directly connected to the wireless
base-station (ORs) providing cloud services to users
that are either within a reasonable distance or directly
connected to the base-station.
We call the CN the underlay to distinguish it from the overlay network that is built by the services. The underlay network is assumed to be connected, and we assume each node knows whether other nodes can be reached (i.e., the next hop is known). We model the underlay graph as G(N, E), where N is the set of nodes connected to the outdoor routers (ORs) present in the CN and E is the set of wireless links that connect them. Physical links between nodes are characterized by a given bandwidth (Bi). Furthermore, each link has a bandwidth capacity (Be) (i.e., its theoretical capacity). Each node in the network has an availability score (Rn) derived from the real measurements in the QMP network.
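Using the notation of Table 1, the underlay model can be captured in a small structure (a sketch; the class and method names are ours):

```python
class Underlay:
    """Underlay graph G(N, E): nodes with availability scores R_n and
    directed wireless links with bandwidth capacities B_e."""

    def __init__(self):
        self.availability = {}   # R_n per node
        self.capacity = {}       # B_e per directed link (u, v)

    def add_node(self, n, r_n):
        self.availability[n] = r_n

    def add_link(self, u, v, b_e):
        # Capacities are stored per direction, since QMP links are
        # often asymmetric (Section 2.5).
        self.capacity[(u, v)] = b_e

    def candidates(self, lam):
        """Nodes meeting the availability threshold lambda."""
        return [n for n, r in self.availability.items() if r >= lam]

g = Underlay()
g.add_node("n1", 0.95)
g.add_node("n2", 0.80)
g.add_link("n1", "n2", 21.8)
selected = g.candidates(0.90)  # only n1 clears the 0.90 threshold
```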
3.2 Service Graph
The services targeted in this work are at the infrastructure level (IaaS), like cloud services in current dedicated datacenters. Therefore, the services are deployed directly over the core resources of the network and accessed by the clients. Services can be deployed by QMP users or administrators.
The services we consider in this work are distributed services (i.e., independently deployable services, as in the Microservices Architecture6). The distributed services can be composite services (non-monolithic) built from simpler parts, e.g., video streaming (built from the source and the peer component), a web service (built from the database, the memcached and the client component), etc. In the real deployment, one service component corresponds to one Docker container. These parts or components of the services create an overlay and interact with each other to offer more complex services. The bandwidth requirement between two services s1 and s2 is given by β_{s1,s2}. At most k copies can be placed for each service s.
6http://microservices.io/patterns/
A service may or may not be tied to a specific node of the network. Each node can host one or more types of services. In this work we assume an offline service placement approach, where a single application or a set of applications is placed "in one shot" onto the underlying physical network, i.e., different from online placement [45]. We might rearrange (migrate) the placement of the same service over time because of service performance fluctuations (e.g., weather conditions, node availability, changes in use patterns, etc.). We do not consider real-time service migration.
3.3 Service Placement Problem
The concepts of service and network graph allow us to formulate the problem statement more precisely: given a service and network graph, how to place a service on a network so as to maximize user QoS and QoE, while satisfying a required level of availability for each node (N) and considering a maximum of k service copies?
Let B_ij be the bandwidth of the path from node i to node j. We want a partition of k clusters (i.e., services) C = {C_1, C_2, C_3, ..., C_k} of the set of nodes in the mesh network. The cluster head i of cluster C_i is the node where the service will be deployed. The partition maximizing the bandwidth from the cluster head to the other nodes in the cluster is given by the objective function:

\arg\max_{C} \sum_{i=1}^{k} \sum_{j \in C_i} B_{ij}   (1)

with respect to the following constraints:

1. The total bandwidth used per link cannot exceed the total link capacity:

\forall e \in E: \sum_{s_1, s_2 \in S} \beta_{s_1,s_2}(e) \leq B_e   (2)

2. Availability-awareness: the node availability should be higher than the predefined threshold λ:

\forall n \in N: R_n \geq \lambda   (3)

3. Admission control: at most k copies can be placed for each service:

|D| \leq k   (4)
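As a quick sanity check, the objective (1) and constraints (3)–(4) can be evaluated for a candidate partition in a few lines. This is an illustrative sketch (the function and variable names are ours); `path_bw` stands in for the path-bandwidth estimate B_ij:

```python
# Evaluate objective (1): sum of path bandwidths from each cluster head
# to the other members of its cluster.
def objective(partition, path_bw):
    # partition: {head: [member, ...]};  path_bw: {(i, j): Mbps}
    return sum(path_bw[(head, j)]
               for head, members in partition.items()
               for j in members)

def feasible(partition, availability, lam, k):
    # Constraint (3): every cluster head meets the availability threshold.
    heads_ok = all(availability[h] >= lam for h in partition)
    # Constraint (4): at most k service copies are placed.
    copies_ok = len(partition) <= k
    return heads_ok and copies_ok

# Toy instance: two cluster heads F and N (names are illustrative).
bw = {("F", "A"): 30.0, ("F", "B"): 20.0, ("N", "C"): 25.0}
part = {"F": ["A", "B"], "N": ["C"]}
avail = {"F": 0.97, "N": 0.95}
```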
M. Selimi et al.
3.4 Proposed Heuristic Algorithm: BASP
Solving the problem stated in (1) by brute force for any number of N and k is NP-hard and very costly. The cost of the naive brute-force method can be estimated by calculating the Stirling number of the second kind [1], which counts the number of ways to partition a set of n elements into k nonempty subsets, i.e., \frac{1}{k!}\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^n \in O(n^k k^n). Thus, due to the obvious combinatorial explosion, we propose a low-cost and fast heuristic called BASP.
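The combinatorial explosion is easy to verify numerically. A small sketch (ours, not the paper's code) evaluating the same Stirling-number formula:

```python
from math import comb, factorial

def stirling2(n, k):
    # Stirling number of the second kind:
    # S(n, k) = (1/k!) * sum_{j=0}^{k} (-1)^j * C(k, j) * (k - j)^n
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

# Even for the paper's n = 77 nodes and only k = 3 replicas, the number
# of candidate partitions is astronomically large.
partitions = stirling2(77, 3)
```

A heuristic that inspects only a handful of candidate heads per run is therefore the only practical option.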
BASP (Bandwidth- and Availability-aware Service Placement) allocates services taking into account the bandwidth of the network and the node availability. BASP is executed every time a (new) service deployment is about to be made. In every run, BASP partitions the network topology into k clusters (the maximum allowed number of service replicas) and removes the nodes that are under the pre-defined availability threshold (Phase 1); estimates and computes the bandwidth of the nodes (Phase 2); and finally re-assigns nodes to selected clusters (Phase 3). Algorithm 1 depicts the pseudo-code and Fig. 10 illustrates the phases of BASP.
The BASP runs in three phases:
1. Phase 1: Availability-awareness and K-Means: Initially, in this phase we check the availability of the nodes in the network. The nodes that are under the predefined availability threshold are removed. Then, we use the naive K-Means partitioning algorithm in order to group nodes based on their geo-location. The idea is to obtain clusters of nodes that are close to each other. The K-Means algorithm forms clusters of nodes based on the Euclidean distances between them, where the distance metrics in our case are the geographical coordinates of the nodes. In the traditional K-Means algorithm, first, k out of n nodes are randomly selected as the cluster centroids, depicted with a purple color in Fig. 10 (e.g., nodes E, Z and T). Each of the remaining nodes joins the cluster whose centroid is nearest to it according to the Euclidean distance. After each of the nodes in the network is assigned to one of the k clusters, the centroid of each cluster is re-calculated. Each cluster contains a full replica of a service, i.e., the algorithm in this phase partitions the network topology into k (the maximum allowed number of service replicas) clusters. Grouping nodes based on geo-location
Algorithm 1 BASP

Require: G(N, E) — network graph (qmpTopology.xml)
         k — k-partition of clusters C = {C_1, C_2, C_3, ..., C_k}
         B_i — bandwidth of node i
         R_n — availability of node n
         λ — availability threshold

Phase 1 – Availability-awareness and K-Means
 1: procedure AvailabilityAwarenessKMeans(G, R_n, k)
 2:   if R_n ≥ λ then
 3:     PerformKMeans(G, k)
 4:     return C
 5:   end if
 6: end procedure

Phase 2 – Aggregate Bandwidth Maximization
 7: procedure FindClusterHeads(C)
 8:   clusterHeads ← list()
 9:   for all k ∈ C do
10:     for all i ∈ C_k do
11:       B_i ← 0
12:       for all j ∈ setdiff(C, i) do
13:         B_i ← B_i + estimate.route.bandwidth(G, i, j)
14:       end for
15:       clusterHeads ← max B_i
16:     end for
17:   end for
18:   return clusterHeads
19: end procedure

Phase 3 – Cluster Re-Computation
20: procedure RecomputeClusters(clusterHeads, G)
21:   C ← list()
22:   for all i ∈ clusterHeads do
23:     cluster_i ← list()
24:     for all j ∈ setdiff(G, i) do
25:       B_j ← estimate.route.bandwidth(G, j, i)
26:       if B_j is best among the cluster heads then
27:         cluster_i ← j
28:       end if
29:       C ← cluster_i
30:     end for
31:   end for
32:   return C
33: end procedure
Fig. 10 Phases of the BASP algorithm (node diagram: purple initial centroids after Phase 1, black cluster heads after Phase 2, orange final cluster heads after Phase 3)
is in line with how the QMP is organized. The nodes in the QMP are organized into a tree hierarchy of zones. A zone can represent nodes from a neighborhood or a city. Each zone can be further divided into child zones that cover smaller geographical areas where nodes are close to each other. From the service perspective, we consider placements inside a particular zone. We use K-Means with geo-coordinates as an initial heuristic for our algorithm. As an alternative, clustering based on network locality can be used. Several graph community detection techniques are available for our environment [20].
2. Phase 2: Aggregate Bandwidth Maximization: The second phase of the algorithm finds, for each cluster C_k formed in the first phase, the cluster head maximizing the bandwidth between it and the member nodes of the cluster. The cluster heads computed are depicted with a black color in Fig. 10 (e.g., nodes F, N and L). The bandwidth between two nodes is estimated as the bandwidth of the link having the minimum bandwidth in the shortest path. The cluster heads computed are the candidate nodes for the service placement. This is plotted as Naive K-Means in Fig. 11.
Fig. 11 Average bandwidth
to the cluster heads
3. Phase 3: Cluster Re-Computation: The third and last phase of the algorithm reassigns the nodes to the selected cluster heads having the maximum bandwidth, since the geo-location of the nodes in the clusters formed during phase one is not always correlated with their bandwidth. The final cluster heads computed are depicted with an orange color in Fig. 10 (e.g., nodes F, N and J). This way the clusters are formed based on the nodes' bandwidth. This is plotted as BASP in Fig. 11.
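Putting the three phases together, the algorithm can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the K-Means step is replaced by a caller-supplied geo-clustering (`clusters`), and `bottleneck_bw` implements the paper's path-bandwidth estimate (the minimum-capacity link on the shortest path):

```python
from collections import deque

def shortest_path(links, src, dst):
    # BFS over the directed link map {(u, v): capacity_mbps}.
    adj = {}
    for (u, v) in links:
        adj.setdefault(u, []).append(v)
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

def bottleneck_bw(links, src, dst):
    # Paper's estimate: bandwidth of the weakest link on the shortest path.
    path = shortest_path(links, src, dst)
    if path is None or len(path) < 2:
        return 0.0
    return min(links[(a, b)] for a, b in zip(path, path[1:]))

def basp(nodes, links, availability, lam, clusters):
    # Phase 1 (simplified): drop nodes below the availability threshold;
    # `clusters` stands in for the geo-based K-Means result.
    alive = {n for n in nodes if availability[n] >= lam}
    clusters = [[n for n in c if n in alive] for c in clusters]
    # Phase 2: per cluster, choose the head with maximum aggregate bandwidth.
    heads = [max(c, key=lambda i: sum(bottleneck_bw(links, i, j)
                                      for j in c if j != i))
             for c in clusters]
    # Phase 3: re-assign every remaining node to its best-bandwidth head.
    final = {h: [] for h in heads}
    for n in nodes:
        if n in alive and n not in heads:
            best = max(heads, key=lambda h: bottleneck_bw(links, n, h))
            final[best].append(n)
    return final

# Toy 4-node line topology A -10- B -2- C -10- D (Mbps, both directions).
nodes = ["A", "B", "C", "D"]
links = {("A", "B"): 10, ("B", "A"): 10, ("B", "C"): 2, ("C", "B"): 2,
         ("C", "D"): 10, ("D", "C"): 10}
availability = {n: 1.0 for n in nodes}
placement = basp(nodes, links, availability, 0.9, [["A", "B"], ["C", "D"]])
```

In the toy topology, Phase 3 keeps D attached to head C rather than A, because the path D–C–B–A is throttled by the 2 Mbps middle link.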
Complexity The complexity of BASP is as follows: finding the optimal solution to the K-Means clustering problem (phase one), if k and d (the dimension) are fixed (in our case n = 77 and d = 2), can be done exactly in time O(n^{dk+1} log n), where n is the number of entities to be clustered. The complexity of computing the cluster heads in phase two is O(n^2), and that of reassigning the clusters in phase three is O(n). Therefore, the overall complexity of BASP is polynomial, O(n^{2k+1} log n), which is significantly smaller than the brute-force method and thus practical for commodity processors.
4 Evaluation
Setup We take a network snapshot (i.e., capture) of 77 physical nodes of the QMP network, recording the bandwidth of the links and node availability. The data obtained has been used to build the topology graph of the QMP. The QMP topology graph is constructed by considering only operational nodes, marked with "working" status and having one or more links pointing to another node. Additionally, we have discarded some disconnected clusters. Links can be bidirectional or unidirectional, thus we use a directed graph. The nodes of QMP consist of an Intel Atom N2600 CPU, 4 GB of RAM and 120 GB of disk space. Our experiment comprises 5 runs and the presented results are averaged over all the runs. Each run consists of 15 repetitions.
4.1 Comparison
To emphasize the importance of the different phases
of the Algorithm 1, we compare in this section two
phases of our heuristic algorithm with the Random
Placement, i.e., the default placement at the QMP.
Random Placement Currently, service deployment (much like network deployment) at the QMP is not centrally planned but initiated individually by the CN members. Public, user and community-oriented services are placed randomly on super-nodes and users' premises, respectively. The only parameter taken into account when placing services is that the devices must be in "production" state. The network is not taken into consideration at all. All nodes in the production state appear equal to the users.
Naive K-Means Placement This corresponds to the
second phase of the heuristic Algorithm 1. The service
is placed on the node having the maximum bandwidth
on the initial clusters formed by K-Means. We limit
the choice of the cluster heads to be inside the sets of
clusters obtained using K-Means.
BASP Placement It includes the three phases of the
heuristic Algorithm 1. The service is placed on the
node having the maximum bandwidth after the clus-
ters are re-computed.
4.2 Results
Figure 11 depicts the average bandwidth to the cluster heads obtained with the Random, Naive K-Means and BASP heuristic algorithms. This value reflects the average bandwidth computed from the obtained cluster heads to the other non-cluster-head nodes within each cluster.
Figure 11 reveals that, for the considered numbers of services k, BASP outperforms both Naive K-Means and Random placement. For k = 2, the average bandwidth to the cluster heads increases from 18.3 Mbps (Naive K-Means) to 27.7 Mbps (BASP), which represents a 50% improvement. The highest increase of 67% is achieved when k = 7. On average, when having up to 7 services in the network, the gain of BASP over Naive K-Means is 45%. Based on the observations from Fig. 11, the gap between the two algorithms grows as k increases. We expect that k will increase as the network grows, and hence BASP will presumably render better results for larger networks than the other strategies.
Table 2 Centrality measures for the cluster heads

                                  k=1       k=2              k=3                      k=5
Cluster [Cluster Head ID]         C1 [27]   C1 [20] C2 [39]  C1 [20] C2 [39] C3 [49]  C1 [20] C2 [4] C3 [49] C4 [51] C5 [39]
Cluster head degree               20        6 6              6 6 10                   6 10 10 12 6
Neighborhood connectivity         7.7       9.6 9.6          9.6 9.6 10.8             9.6 8.7 10.8 8.1 9.6
Diameter                          6         5 3              4 3 5                    4 2 3 1 3
Random QMP - bandwidth [Mbps]     5.3       6.34             13.4                     11.9
Naive K-Means - bandwidth [Mbps]  16.6      18.3             23                       23.4
BASP - bandwidth [Mbps]           16.9      27.7             32.9                     38.5
BASP - running time [seconds]     46        28               17                       9
Regarding the comparison between BASP and Random placement, we find that Random placement leads to an inefficient use of the network's resources, and consequently to suboptimal performance. As depicted in Fig. 11, the average gain of BASP over naive Random placement is 211% (i.e., a 2× bandwidth gain).
Comparison to the Optimal Solution Note that our heuristic enables us to select cluster heads that provide much higher bandwidth than any random or naive approach. But if we were to look for the optimum bandwidth within the clusters (i.e., the optimum average bandwidth per cluster), then this problem would be NP-hard. The reason is that finding the optimal solution entails running our algorithm for all the combinations of size k from a set of size n. This is a combinatorial problem that becomes intractable even for small sizes of k or n (e.g., k = 5, n = 71). For instance, if we wanted to find the optimum bandwidth for a cluster of size k = 3, then the algorithm would need to run for every possible (non-repeating) combination of size 3 from a set of 71 elements, i.e., choose(71, 3) = 57K combinations. We managed to do so and found that the optimum average was 62.7 Mbps. For k = 2, the optimum was 49.1 Mbps. For k = 1, it was 16.9 Mbps.
The downside was that the computation of the optimal solution took a very long time on a commodity machine. Concretely, it took 5 hours for k = 3 and 30 minutes for k = 2. Instead, BASP spent only 17 seconds for k = 3 and 28 seconds for k = 2. Table 2 shows the improvement of BASP over Random and Naive K-Means. To summarize, BASP is able to achieve good bandwidth performance with very low computation complexity.
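The size of the exhaustive search is easy to reproduce: for k = 3 heads out of n = 71 candidate nodes, the enumeration alone visits 57,155 combinations, each of which the brute force must then score for bandwidth (a small sketch, not the paper's code):

```python
from itertools import combinations
from math import comb

# Brute force must evaluate every non-repeating set of k cluster heads
# drawn from the n candidate nodes; BASP evaluates only k heads per run.
n, k = 71, 3
num_combinations = comb(n, k)                           # closed form C(n, k)
enumerated = sum(1 for _ in combinations(range(n), k))  # explicit walk
```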
Correlation with Centrality Metrics Table 2 shows some centrality measures and some graph properties obtained for each cluster head. Further, Fig. 12 shows the neighborhood connectivity graph of the QMP network. The neighborhood connectivity of a node v is defined as the average connectivity of all neighbors of v.

Fig. 12 Neighborhood connectivity graph of the QMP (nodes with low neighborhood connectivity values are depicted with bright colors, high values with dark colors)

It is interesting to note that some of the nodes with the highest neighborhood connectivity are those chosen by BASP as cluster heads. The cluster heads (for k = 2 and k = 3) are illustrated with a rectangle in the graph. A deeper investigation into the relationship between service placement and network topological properties is out of the scope of this paper and is reserved as future work.
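The metric itself is straightforward to compute from an adjacency list. A minimal sketch (ours, with node connectivity taken as degree; the graph below is illustrative):

```python
def neighborhood_connectivity(graph, v):
    # Average connectivity (degree) over all neighbors of v.
    neighbors = graph[v]
    if not neighbors:
        return 0.0
    return sum(len(graph[u]) for u in neighbors) / len(neighbors)

# Tiny illustrative topology: one well-connected hub and three leaves.
graph = {
    "hub": {"a", "b", "c"},
    "a":   {"hub", "b"},
    "b":   {"hub", "a"},
    "c":   {"hub"},
}
```

A leaf attached to the hub scores high (its only neighbor is well connected), which is the pattern the cluster heads chosen by BASP exhibit in Fig. 12.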
4.3 Dynamic Service Placement
In Guifi.net (i.e., QMP), nodes are added by community members using their homes' rooftops, which are often at non-optimal locations. This produces a high diversity in the quality of the links, making some nodes sporadically unreachable. Figure 13 depicts the number of nodes in the QMP network during the month of March 2017. Figure 13 reveals that there is churn, i.e., change in the set of participating nodes in the network, due to failures, electric cuts, and nodes that have been upgraded, reconfigured, hung, etc. The minimum number of nodes observed in the network is 67 and the maximum is 74.
In order to assess the performance of the BASP heuristic algorithm under node churn, we ran it on every day of March 2017. Figure 14 shows the average bandwidth to the cluster heads obtained with the naive K-Means and the BASP heuristic algorithm when using different numbers of services k (k = 1, k = 2, k = 4 and k = 8). The figure shows that the gap between the two algorithms grows as k increases. For instance, when k = 4 (Fig. 14c), the average bandwidth to the cluster head obtained with the K-Means algorithm is 18.9 Mbps and with the BASP algorithm is 41 Mbps. This is because we keep clustering nodes by their bandwidth, and the clusters are formed from the nodes with higher bandwidth.
Furthermore, we also observed some outliers on specific days of March 2017. For instance, on the 18th of March, Fig. 14a and b reveal a performance (i.e., bandwidth) drop. After performing some measurements, we discovered that during these days one of the gateways (i.e., proxies) in the network got disconnected. Because of this, nodes that use this gateway to connect to the other nodes exhibited worse performance, since different paths were used (i.e., longer and slower). To summarize, BASP outperforms K-Means on every day of March and for all the considered numbers of services k.
5 Experimental Evaluation
In order to foster the adoption and transition of
the community micro-cloud environment, we pro-
vide a real community cloud distribution, codenamed
Fig. 13 Number of nodes in the QMP network in March 2017 (daily average, ranging between 67 and 74 nodes)
Fig. 14 K-Means vs. BASP (March 2017): average bandwidth to the cluster heads (Mbps) over the days of the month, for (a) k = 1, (b) k = 2, (c) k = 4 and (d) k = 8
Cloudy [6], which contains the platform and application services of the community cloud system.

Fig. 15 Cloudy architecture: a web interface and console on top of a service layer (streaming: PeerStreamer, GVoD, VoIP; storage: Tahoe-LAFS, Syncthing, WebDAV; network: Proxy3, SNP service, DNS service) and a network coordination layer (Serf, Avahi, BASP, service discovery, service announcement) over the community network API
5.1 Cloudy: a Service Hub for the Micro-Clouds
Cloudy is the core software of our micro-clouds, because it unifies the different tools and services of the cloud system in a Debian-based Linux distribution. Cloudy is open-source and can be downloaded from public repositories.7
Cloudy's main components can be considered a layered stack, with services residing both inside the kernel and at the user level. Figure 15 reports some of the available services running in Docker containers. Cloudy includes a tool for users to announce and discover services in the micro-clouds based on Serf, which is a decentralized solution for cluster membership and orchestration. On the network coordination layer, having sufficient knowledge about the underlying network topology, BASP decides on the placement of the service, which is then announced via Serf, as shown in Fig. 15. Thus, the service can be discovered by the other users.
7https://github.com/Clommunity/
5.2 Evaluation in a Real Production Community
Network
In order to understand the gains of our network-aware service placement heuristic in a real production CN, we deploy our algorithm on real hardware connected to the nodes of the QMP network, located in the city of Barcelona. We concentrate on benchmarking two of the most popular network-intensive applications: a live video-streaming service, and a Web 2.0 service as run by the most popular websites.
5.2.1 Live-Video Streaming Service
PeerStreamer,8 an open-source live P2P video streaming service, has been established as the live streaming service in Cloudy. This service is based on chunk diffusion, where peers offer a selection of the chunks they own to some peers in their neighborhood. A chunk consists of a part of the video to be streamed (by default, one frame of the video). PeerStreamer differentiates between a source node and a peer node. A source node is responsible for converting the video stream into chunks and sending them to the peers in the network. In our case, both the source nodes and the peers run in Docker containers atop the QMP nodes.
Setup We use 20 real nodes connected to the wireless nodes of the QMP. These nodes are co-located in users' homes (e.g., as home gateways, set-top boxes, etc.) and run the Cloudy operating system. As the controller node, we leverage the experimental infrastructure of Community-Lab.9 Community-Lab provides a central coordination entity that has knowledge of the network topology in real time and allows researchers to deploy experimental services and perform experiments in a production CN. The nodes of the QMP that run the live video streaming service are part of Community-Lab. In our experiments, we connect a live streaming camera (maximum bit-rate of 512 kbps, 30 frames per second) to a local PeerStreamer instance that acts as a source node.
8http://peerstreamer.org/
9https://community-lab.net/
The location of the source in such a dynamic network is therefore crucial. Placing the source in a QMP node with weak connectivity will negatively impact the QoS and QoE of viewers. In order to determine the accuracy of BASP in choosing the appropriate QMP node to host the source, we measure the average chunk loss percentage at the peer side, defined as the percentage of chunks that were lost or did not arrive in time. This simple metric helps us understand the role of the network in the reliable operation of live video streaming over a CN. Our experiment is composed of 20 runs, where each run has 10 repetitions. Results are averaged over all the successful runs. Ninety percent of them were successful. In the 10% of failed runs, the source was unable to stream the captured images from the camera, so peers did not receive the data. This experiment ran for 2 weeks, with roughly 100 hours of live video data and several GBytes of logged content. The presented results are from one hour of continuous live streaming from the PeerStreamer source.
Results Figure 16 shows the average chunk loss for an increasing number of sources k. The data reveals that, for any number of source nodes k, the BASP heuristic outperforms the currently adopted random placement in the QMP network. For k = 1, BASP decreases the average chunk loss from 12% to 10%. This case corresponds to the scenario where a single source node streams to the 20 peers in the QMP network. Based on the observations from Fig. 16, the gap between the two algorithms grows as k increases. For instance, when k = 3, we get an improvement of 3 percentage points in chunk loss, and a significant 37% reduction in the packet loss rate.
Fig. 16 Average video chunk loss
5.2.2 Web 2.0 Service
The second type of service that we experiment with is a Web 2.0 service. The workloads of Web 2.0 websites differ from the workloads of older-generation websites. Older-generation websites typically served static content, while Web 2.0 websites serve dynamic content. The content is dynamically generated from the actions of other users and from external sources, such as news feeds from other websites. We experiment with a social networking service, which is an example of a Microservices architecture, since it is formed by a group of independently deployable service components (i.e., web server, database server, memcached server and clients). In this type of service, the placement of the web server (together with the database server) is decisive for the user QoS.
Setup For the evaluation, we use the dockerized version of the CloudSuite Web Serving benchmark [26]. The CloudSuite benchmark has four tiers: the web server, the database server, the memcached server, and the clients. Each tier has its own Docker image. The web server runs Elgg10 and connects to the memcached server and the database server. The Elgg social networking engine is a Web 2.0 application developed in PHP, similar in functionality to Facebook. The clients (implemented using the Faban workload generator) send requests to log in to the social network and perform different operations. We use 10 available QMP nodes in total, where three of them act as clients. The other seven nodes are candidates for deploying the web server. The web server, database server and memcached server are always collocated on the same host. On the client side, we measure the response time when performing operations such as login, live feed update, message sending, etc. In CloudSuite, each operation is assigned an individual QoS latency limit. If less than 95% of the operations meet the QoS latency limit, the benchmark is considered failed (i.e., marked as × in Table 3). The location of the web server, database server and memcached server has a direct impact on the client response time.
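The benchmark's pass/fail rule (at least 95% of operations within the per-operation latency limit) reduces to a few lines. The sketch below is ours, with made-up latencies, not CloudSuite's own code:

```python
def passes_qos(latencies_ms, limit_ms, required_fraction=0.95):
    # The benchmark passes if >= 95% of operations meet the latency limit.
    within = sum(1 for t in latencies_ms if t <= limit_ms)
    return within / len(latencies_ms) >= required_fraction

# Illustrative run: 96 of 100 operations under a hypothetical 500 ms limit.
sample = [300] * 96 + [900] * 4
```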
Results Figure 17a and b depict the response time observed by the three clients for the update live feed
10https://elgg.org/
Table 3 Cloudsuite benchmark results

Operations          Update live feed            Do login
Threads             10    20    40    80        10    20    40    80
QMP-Random          failed (×) in 5 of the 8 configurations
QMP-BASP            failed (×) in 2 of the 8 configurations
Stdev (sec)         0.02  0.03  0.01  0.01      0.02  0.02  0.01  0.03
Improvement (sec)   0.1   0.2   1.8   6.7       0.1   0.1   1.2   4.2
operation, when placing the web server with the Random approach and with BASP, respectively.

Fig. 17 Response times of clients (Random vs. BASP): response time (s) vs. number of concurrent threads per client for Clients 1–3, under (a) Random and (b) BASP placement

When placing the web server with the Random approach, Fig. 17a reveals that, as we increase the number of threads (i.e., concurrent operations) per client, the response time increases drastically for the three clients. For up to 120 operations per client (i.e., 20 threads), all clients perceive similar response times
(300–350 ms). When performing 160 operations (i.e., 80 threads), the response time increases by more than one order of magnitude for Client 2 and Client 3, and by an order of magnitude for Client 1.
Figure 17b reveals that the client response times for higher workloads decrease by an order of magnitude when using our BASP heuristic compared to the Random approach shown in Fig. 17a (reaching 700 ms on average for 160 operations using BASP). For up to 120 operations per client, the response times that the three clients perceive (200–280 ms) are slightly better than the response times when the web server is deployed with the Random approach. Furthermore, Table 3 reports the successful and failed tests for the update feed and login operations in the CloudSuite benchmark. The table reveals that, using the BASP heuristic, the number of successful tests, i.e., those that met the QoS latency limit, is higher than with the Random approach. Further, it also shows the standard deviation values and average client response time improvements when using the BASP heuristic over the Random approach. We can notice that the gain brought by the BASP heuristic is higher for more intensive workloads.
6 Related Work
Service placement is a key function of cloud management systems. Typically, by monitoring all the physical and virtual resources of a system, service placement aims to balance load through the allocation, migration and replication of tasks. We look at the service placement problem in four different environments: data centers (DC), distributed data centers, wireless networks, and the IoT (Internet of Things) environment.
Data Centers Choreo [19] is a measurement-based
method for placing applications in the cloud infras-
tructures to minimize an objective function such as
application completion time. Choreo makes fast mea-
surements of cloud networks using packet trains as
well as other methods, profiles application network
demands using a machine-learning algorithm, and
places applications using a greedy heuristic. Volley
[2] is a system that performs automatic data place-
ment across geographically distributed datacenters
of Microsoft. Volley analyzes the logs or requests
using an iterative optimization algorithm based on
data access patterns and client locations, and outputs
migration recommendations back to the cloud service.
A large body of work on service placement in data centers has been devoted to finding heuristic solutions [16].
Most of the work in the data center environment is not applicable to our case because we have strong heterogeneity, given by the limited capacity of nodes and links, as well as the asymmetric quality of wireless links. The difference/asymmetry in link capacities across the network makes service placement a very different problem than in a mostly homogeneous cloud datacenter. Our measurement results demonstrate that 25% of the links have a symmetry deviation higher than 40%.
Distributed Data Centers When service placement algorithms decide how the communication between computation entities is routed in the substrate network, we speak of network-aware service placement, which is closely tied to Virtual Network Embedding (VNE). The work in [36] proposes efficient algorithms
for the placement of services in distributed cloud
environment. The algorithms need input on the sta-
tus of the network, computational resources and data
resources which are matched to application require-
ments. In [18] authors propose a selection algorithm
to allocate resources for service-oriented applications
and the work in [3] focuses on resource allocation
in distributed small datacenters. Another example of
a network-aware approach is the work from Moens
in [23] which employs a Service Oriented Architec-
ture (SOA), where applications are constructed as a
collection of services. Their approach performs node
and link mapping simultaneously. The work in [34]
extends the work of Moens in wireless settings taking
into account the IoT. Mycocloud [15] is another work,
which provides elasticity through self-organized ser-
vice placement in decentralized clouds. The work of
Elmroth [38] takes into account rapid user mobility
and resource cost when placing applications in Mobile
Cloud Networks (MCN). A recent work of Tantawi [37] uses biased statistical sampling methods for cloud workload placement. Regarding service placement through migration, the authors in [39] and [46] study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge. They formulate a sequential decision-making problem for service migration using the framework of Markov Decision Processes (MDP) and illustrate the effectiveness of their approach by simulation using real-world mobility traces of taxis in San Francisco. The work in [22] evaluates the migration performance of various real applications in mobile edge clouds (MEC). The authors in [14] propose an approach to the joint optimization problem of scaling and placement of virtual network services. Spinnewyn [35] provides a resilient placement of mission-critical applications on geo-distributed clouds using a heuristic based on subgraph isomorphism detection.
Most of the work on distributed clouds considers micro-datacenters, whereas in our case the CN micro-clouds consist of constrained, low-power devices such as home gateways. Furthermore, in our case we have only partial information regarding the computational devices, so their approaches are not fully applicable to our environment.
Wireless Environment In [17] the authors propose an
optimal allocation solution for ambient intelligence
environments using task replication to avoid network performance degradation. Other works in wireless settings include the work of Vega [40] and our previous work [31], which proposes several placement algorithms that minimize the coordination and overlay cost across a
CN. The work of Coimbra in [11] presents a parallel
and distributed solution designed as a scalable alterna-
tive for the problem of service placement in CNs.
The focus of this paper is to design
a low-complexity service placement heuristic for CN
micro-clouds in order to maximize bandwidth and
improve user QoS and QoE.
IoT Environment The authors in [33] study the place-
ment of IoT services on fog resources taking into
account their QoS requirements. They show that their
optimization model leads to 35% less cost of execu-
tion when compared to a purely cloud-based approach.
The authors in [24] present a data placement strategy
for Fog infrastructures called iFogStor. They formu-
late the data placement problem as a Generalized
Assignment Problem (GAP) and propose a heuristic based on geographical zoning to reduce the solving time. Most of the IoT approaches analyzed are
deployed in simulation environments using modeling
and simulation toolkits such as iFogSim; thus, their results are not easily applicable to our context.
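A GAP-style data placement instance can be illustrated with a small regret-based greedy heuristic. The cost matrix, item sizes, and node capacities below are made-up numbers for the sketch, not iFogStor's actual formulation.

```python
# Toy Generalized Assignment Problem (GAP) instance for data placement.
# All numbers are illustrative assumptions.

cost = [            # cost[i][j]: latency of serving data item i from fog node j
    [4, 2, 9],
    [7, 1, 3],
    [5, 6, 2],
    [8, 4, 4],
]
size = [2, 3, 1, 2]       # storage demand of each data item
capacity = [4, 4, 3]      # storage capacity of each fog node

def greedy_gap(cost, size, capacity):
    """Regret-based greedy: place the items with the largest gap between
    their two cheapest nodes first, each on its cheapest feasible node."""
    cap = list(capacity)
    assign = [None] * len(cost)

    def regret(i):
        c = sorted(cost[i])
        return c[1] - c[0]

    for i in sorted(range(len(cost)), key=regret, reverse=True):
        feasible = [j for j in range(len(cap)) if cap[j] >= size[i]]
        if not feasible:
            raise ValueError("no feasible node for item %d" % i)
        j = min(feasible, key=lambda n: cost[i][n])
        assign[i] = j
        cap[j] -= size[i]
    return assign

placement = greedy_gap(cost, size, capacity)
print(placement)  # → [1, 0, 2, 1]
```

The greedy runs in near-linear time in the number of (item, node) pairs, which is the kind of solving-time reduction a zoning heuristic targets relative to solving the GAP exactly.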
7 Conclusion
In this paper, we motivated the need for bandwidth
and availability-aware service placement in CN micro-
cloud infrastructures. CNs provide a perfect scenario to deploy and use community services in a contributory manner. Previous work done in CNs has focused
on better ways to design the network to avoid hot
spots and bottlenecks, but did not relate to schemes for
network-aware placement of service instances.
However, as services become more network-
intensive, they can become bottlenecked by the net-
work, even in well-provisioned clouds. In the case of
CN micro-clouds, network awareness is even more
critical due to the limited capacity of nodes and links,
and an unpredictable network performance. Without a
network-aware system for placing services, locations
with poor network paths may be chosen while loca-
tions with faster, more reliable paths remain unused,
resulting ultimately in a poor user experience.
We proposed a low-complexity service placement
heuristic called BASP to maximize the bandwidth
allocation when deploying CN micro-clouds. We pre-
sented algorithmic details, analyzed its complexity,
and carefully evaluated its performance with realis-
tic settings. Our experimental results show that the
BASP consistently outperforms the currently adopted random placement in Guifi.net with a 2x bandwidth gain.
Moreover, as the number of services increases, the
gain tends to increase accordingly. Furthermore, we
deployed our service placement algorithm in a real
network segment of the QMP network, a production
CN, and quantified the performance and effects of our
algorithm. We conducted our study on the case of a
live video streaming service and a Web 2.0 service integrated through the Cloudy distribution. Our real experimental results show that when using the BASP heuristic, the video chunk loss on the peer side decreases by up to 3 percentage points, i.e., a 37% reduction in the packet loss rate. When using BASP with the
Web 2.0 service, the client response times decreased
up to an order of magnitude, which is a significant
improvement.
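The core idea of a bandwidth-aware placement heuristic can be sketched as follows. The graph, bandwidth values, and the precomputed grouping are illustrative assumptions and do not reproduce the actual BASP algorithm: nodes are first partitioned into groups (e.g. by k-means or community detection), then one service replica per group is placed on the node with the highest aggregate bandwidth to its group peers.

```python
# Minimal sketch of bandwidth-aware placement (illustrative assumptions only).

# bw[(a, b)]: measured bandwidth (Mbps) between directly connected nodes
bw = {
    ("A", "B"): 80, ("B", "C"): 60, ("A", "C"): 20,
    ("D", "E"): 90, ("E", "F"): 30, ("D", "F"): 50,
}

def bandwidth(a, b):
    # Links are undirected; unmeasured pairs contribute nothing.
    return bw.get((a, b)) or bw.get((b, a)) or 0

# Groups assumed to come from a prior partitioning step (k-means,
# community detection, ...); hard-coded here for the sketch.
groups = [["A", "B", "C"], ["D", "E", "F"]]

def place(groups):
    """Per group, pick the node maximizing total bandwidth to its peers."""
    chosen = []
    for g in groups:
        score = {n: sum(bandwidth(n, m) for m in g if m != n) for n in g}
        chosen.append(max(g, key=score.get))
    return chosen

print(place(groups))  # one service host per group
```

Because each group is scored independently, the per-group selection is quadratic in the group size, which keeps the heuristic fast enough to re-run as network state information is refreshed.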
As future work, we plan to look into live service migration, i.e., the controller needs to decide which micro-cloud should perform the computation for a particular user in the presence of user mobility and other dynamic changes in the network.
Acknowledgements This work was supported by the Euro-
pean H2020 framework program projects RIFE (H2020-
644663), netCommons (H2020-688768), LightKone (H2020-
732505), and by the Spanish government under contract
TIN2016-77836-C2-2-R. This work was also supported by
the national funds through Fundação para a Ciência e a Tecnologia in project ContexTWA with reference PTDC/EEI-SCR/6945/2014.
Open Access This article is distributed under the terms of the
Creative Commons Attribution 4.0 International License (http://
creativecommons.org/licenses/by/4.0/), which permits unre-
stricted use, distribution, and reproduction in any medium,
provided you give appropriate credit to the original author(s)
and the source, provide a link to the Creative Commons license,
and indicate if changes were made.
References
1. Stirling Number of the Second Kind. http://mathworld.
wolfram.com/StirlingNumberoftheSecondKind.html
2. Agarwal, S., et al.: Volley: automated data placement for
geo-distributed cloud services. In: Proceedings of the 7th
USENIX Conference on Networked Systems Design and
Implementation, NSDI’10, pp. 2–2. USENIX Association,
Berkeley (2010)
3. Alicherry, M., Lakshman, T.V.: Network aware resource
allocation in distributed clouds. In: Proceedings of INFO-
COM, IEEE, pp. 963–971 (2012)
4. Apolónia, N., Freitag, F., Navarro, L.: Leveraging deploy-
ment models on low-resource devices for cloud services
in community networks. Simul. Model. Pract. Theory 77,
390–406 (2016)
5. Baig, R., Dalmau, L., Roca, R., Navarro, L., Freitag, F.,
Sathiaseelan, A.: Making Community Networks Econom-
ically Sustainable, the Guifi.Net Experience. GAIA ’16,
pp. 31–36. ACM, New York (2016)
6. Baig, R., Freitag, F., Navarro, L.: Cloudy in guifi.net:
establishing and sustaining a community cloud as open
commons. Futur. Gener. Comput. Syst. (2018)
7. Baig, R., Roca, R., Freitag, F., Navarro, L.: guifi.net,
a crowdsourced network infrastructure held in common.
Comput. Netw. 90, 150–165 (2015). Crowdsourcing
8. Bilalli, B., Abelló, A., Aluja-Banet, T., Wrembel, R.: Intel-
ligent assistance for data pre-processing. Computer Stan-
dards & Interfaces 57, 101–109 (2018)
9. Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog comput-
ing and its role in the internet of things. In: Proceedings of
the First Edition of the MCC Workshop on Mobile Cloud
Computing, MCC ’12, pp. 13–16. ACM, New York (2012)
10. Cerdà-Alabern, L., Neumann, A., Escrich, P.: Experimen-
tal evaluation of a wireless community mesh network. In:
Proceedings of the 16th ACM International Conference on
Modeling, Analysis and Simulation of Wireless and Mobile
Systems, MSWiM ’13, pp. 23–30. ACM, New York (2013)
11. Coimbra, M.E., Selimi, M., Francisco, A.P., Freitag, F.,
Veiga, L.: Gelly-scheduling distributed graph processing
for service placement in community networks. In: 33rd
ACM/SIGAPP Symposium on Applied Computing (SAC
2018). ACM (2018)
12. Dimogerontakis, E., Meseguer, R., Navarro, L.: Internet
Access for All: Assessing a Crowdsourced Web Proxy
Service in a Community Network, pp. 72–84. Springer
International Publishing, Cham (2017)
13. Dimogerontakis, E., Neto, J., Meseguer, R., Navarro, L.:
Client-side routing-agnostic gateway selection for hetero-
geneous wireless mesh networks. In: IFIP/IEEE Interna-
tional Symposium on Integrated Network Management
(IM) (2017)
14. Draxler, S., Karl, H., Mann, Z.A.: Joint optimization of
scaling and placement of virtual network services. In:
2017 17th IEEE/ACM International Symposium on Clus-
ter, Cloud and Grid Computing (CCGRID), pp. 365–370
(2017)
15. Dubois, D.J., Valetto, G., Lucia, D., Di Nitto, E.: Myco-
cloud: elasticity through self-organized service placement
in decentralized clouds. In: 2015 IEEE 8th International
Conference on Cloud Computing, pp. 629–636 (2015)
16. Ghanbari, H., et al.: Replica placement in cloud through
simple stochastic model predictive control. In: 2014 IEEE
7th International Conference on Cloud Computing, pp. 80–
87 (2014)
17. Herrmann, K.: Self-organized service placement in ambient
intelligence environments. ACM Trans. Auton. Adapt. Syst.
5(2), 6:1–6:39 (2010)
18. Klein, A., Ishikawa, F., Honiden, S.: Towards network-
aware service composition in the cloud. In: Proceedings
of the 21st International Conference on World Wide Web,
WWW ’12, pp. 959–968. ACM, New York (2012)
19. LaCurts, K., et al.: Choreo: network-aware task placement
for cloud applications. In: Proceedings of the 2013 Con-
ference on Internet Measurement Conference, IMC ’13,
pp. 191–204. ACM, New York (2013)
20. Lancichinetti, A., Fortunato, S.: Community detection algo-
rithms: a comparative analysis. Phys. Rev. E 80, 056117
(2009)
21. Lertsinsrubtavee, A., Ali, A., Molina-Jimenez, C., Sathi-
aseelan, A., Crowcroft, J.: Picasso: a lightweight edge
computing platform. In: IEEE 6th International Conference
on Cloud Networking (Cloudnet’17) (2017)
22. Machen, A., Wang, S., Leung, K.K., Ko, B.J., Salonidis,
T.: Live service migration in mobile edge clouds. In: IEEE
Wireless Communications (2017)
23. Moens, H., et al.: Hierarchical network-aware placement of
service oriented applications in clouds. In: 2014 IEEE Net-
work Operations and Management Symposium (NOMS),
pp. 1–8 (2014)
24. Naas, M.I., Raipin, P., Boukhobza, J., Lemarchand, L.:
iFogStor: an IoT data placement strategy for fog infras-
tructure. In: IEEE 1st International Conference on Fog and
Edge Computing. Madrid, Spain (2017)
25. Neumann, A., Lopez, E., Navarro, L.: An evaluation of
Bmx6 for community wireless networks. In: 8th IEEE
International Conference on Wireless and Mobile Comput-
ing, Networking and Communications (Wimob 2012),
pp. 651–658 (2012)
26. Palit, T., Shen, Y., Ferdman, M.: Demystifying cloud
benchmarking. In: 2016 IEEE International Symposium on
Performance Analysis of Systems and Software (ISPASS),
pp. 122–132 (2016)
27. Selimi, M., Cerdà-Alabern, L., Sánchez-Artigas, M., Freitag, F., Veiga, L.: Practical service placement approach
for microservices architecture. In: Proceedings of the 17th
IEEE/ACM International Symposium on Cluster, Cloud
and Grid Computing, CCGrid ’17, pp. 401–410. IEEE
Press, Piscataway (2017)
28. Selimi, M., et al.: Integration of an assisted P2p live stream-
ing service in community network clouds. In: Proceedings
of the IEEE 7th International Conference on Cloud Com-
puting Technology and Science (CloudCom 2015). IEEE
(2015)
29. Selimi, M., Freitag, F., Cerdà-Alabern, L., Veiga, L.: Per-
formance evaluation of a distributed storage service in
community network clouds. Concurrency and Computa-
tion: Practice and Experience 28(11), 3131–3148 (2016).
cpe.3658
30. Selimi, M., Khan, A.M., Dimogerontakis, E., Freitag, F.,
Centelles, R.P.: Cloud services in the guifi.net community
network. Comput. Netw. 93, Part 2:373–388 (2015)
31. Selimi, M., Vega, D., Freitag, F., Veiga, L.: Towards
Network-Aware Service Placement in Community Net-
work Micro-Clouds, pp. 376–388. Springer International
Publishing, Berlin (2016)
32. Sharifi, L., Cerdà-Alabern, L., Freitag, F., Veiga, L.: Energy
efficient cloud service provisioning: keeping data cen-
ter granularity in perspective. Journal of Grid Computing
14(2), 299–325 (2016)
33. Skarlat, O., Nardelli, M., Schulte, S., Dustdar, S.: Towards
Qos-aware fog service placement. In: IEEE International
Conference on Fog and Edge Computing (ICFEC 2017).
Madrid, Spain (2017)
34. Spinnewyn, B., Braem, B., Latré, S.: Fault-tolerant appli-
cation placement in heterogeneous cloud environments. In:
Network and Service Management (CNSM), pp. 192–200
(2015)
35. Spinnewyn, B., Mennes, R., Botero, J.F., Latré, S.: Resilient
application placement for geo-distributed cloud networks.
J. Netw. Comput. Appl. 85, 14–31 (2017). Intelligent Sys-
tems for Heterogeneous Networks
36. Steiner, M., et al.: Network-aware service placement in a
distributed cloud environment. In: Proceedings of the ACM
SIGCOMM 2012 Conference, SIGCOMM ’12, pp. 73–74.
ACM, New York (2012)
37. Tantawi, A.N.: Solution biasing for optimized cloud work-
load placement. In: 2016 IEEE International Conference on
Autonomic Computing (ICAC), pp. 105–110 (2016)
38. Tarneberg, W., Mehta, A., Wadbro, E., Tordsson, J., Eker,
J., Kihl, M., Elmroth, E.: Dynamic application placement in
the mobile cloud network. Futur. Gener. Comput. Syst. 70,
163–177 (2017)
39. Urgaonkar, R., Wang, S., He, T., Zafer, M., Chan, K., Leung,
K.K.: Dynamic service migration and workload scheduling
in edge-clouds. Perform. Eval. 91, 205–228 (2015)
40. Vega, D., Meseguer, R., Cabrera, G., Marques, J.M.:
Exploring local service allocation in community net-
works. In: 10th International Conference on Wireless
and Mobile Computing, Networking and Communications
(Wimob’14), IEEE, pp. 273–280 (2014)
41. Vega, D., Baig, R., Cerdà-Alabern, L., Medina, E.,
Meseguer, R., Navarro, L.: A technological overview of
the guifi.net community network. Comput. Netw. 93, Part 2:260–278 (2015)
42. Vega, D., Cerdà-Alabern, L., Navarro, L., Meseguer, R.:
Topology patterns of a community network: Guifi.net.
In: 1st International Workshop on Community Networks
and Bottom-Up-Broadband (CNBub 2012), within IEEE
Wimob. Barcelona, Spain, pp. 612–619 (2012)
43. Verespej, H., Pasquale, J.: A characterization of node
uptime distributions in the Planetlab test bed. In: 2011 IEEE
30th International Symposium on Reliable Distributed Sys-
tems, pp. 203–208 (2011)
44. Wang, L., Bayhan, S., Ott, J., Kangasharju, J., Sathi-
aseelan, A., Crowcroft, J.: Pro-diluvian: understanding
scoped-flooding for content discovery in information-
centric networking, pp. 9–18. ACM, New York (2015)
45. Wang, S., Zafer, M., Leung, K.K.: Online placement of
multi-component applications in edge computing environ-
ments. IEEE ACCESS 5, 2514–2533 (2017)
46. Wang, S., Urgaonkar, R., He, T., Chan, K., Zafer, M.,
Leung, K.K.: Dynamic service placement for mobile micro-
clouds with predicted future costs. IEEE Trans. Parallel
Distrib. Syst. 28(4), 1002–1016 (2017)
... Edge Computing has the potential to reduce latency and bandwidth costs, improve security and privacy [29], with application in various fields, including consumer applications, industrial applications, e-health services, and smart mobility applications [14]. Community Clouds are another example of edge infrastructure gaining attention in the last decade [27]. In these networks, the infrastructure is managed as a common resource and established by the participants. ...
... Finally, groups are recalculated taking into account the bandwidth information to group the nodes with higher bandwidth available between them. Recent work [27] made the system more dynamic (able to incorporate continuously updated network state information) and more responsive, by reacting with a faster heuristicdriven approach to changes in resource availability across the network. ...
Article
Full-text available
Cloud Computing has been successful in providing substantial amounts of resources to deploy scalable and highly available applications. However, there is a growing necessity of lower latency services and cheap bandwidth access to accommodate the expansion of IoT and other applications that reside at the internet’s edge. The development of community networks and volunteer computing, together with the today’s low cost of compute and storage devices, is making the internet’s edge filled with a large amount of still underutilized resources. Due to this, new computing paradigms like Edge Computing and Fog Computing are emerging. This work presents Caravela a Docker’s container orchestrator that utilizes volunteer edge resources from users to build an Edge Cloud where it is possible to deploy applications using standard Docker containers. Current cloud solutions are mostly tied to a centralized cluster environment deployment. Caravela employs a completely decentralized architecture, resource discovery and scheduling algorithms to cope with (1) the large amount of volunteer devices, volatile environment, (2) wide area networks that connects the devices and (3) nonexistent natural central administration.
... The Lyapunov optimization is a centralized approach that may cause the problem of high delay in large-scale edge networks, especially for real-time applications. In the reviewed articles, model-based techniques [71,118,222,236] mostly used mathematics-based techniques for identifying optimal fitness value for a fitness function. Mathematics-based techniques are mature and provide near-optimal results. ...
Article
Full-text available
The fog paradigm extends the cloud capabilities at the edge of the network. Fog computing-based real-time applications (Online gaming, 5G, Healthcare 4.0, Industrial IoT, autonomous vehicles, virtual reality, augmented reality, and many more) are growing at a very fast pace. There are limited resources at the fog layer compared to the cloud, which leads to resource constraint problems. Edge resources need to be utilized efficiently to fulfill the growing demand for a large number of IoT devices. Lots of work has been done for the efficient utilization of edge resources. This paper provided a systematic review of fog resource management literature from the year 2016–2021. In this review paper, the fog resource management approaches are divided into 9 categories which include resource scheduling, application placement, load balancing, resource allocation, resource estimation, task offloading, resource provisioning, resource discovery, and resource orchestration. These resource management approaches are further subclassified based on the technology used, QoS factors, and data-driven strategies. Comparative analysis of existing articles is provided based on technology, tools, application area, and QoS factors. Further, future research prospects are discussed in the context of QoS factors, technique/algorithm, tools, applications, mobility support, heterogeneity, AI-based, distributed network, hierarchical network, and security. A systematic literature review of existing survey papers is also included. At the end of this work, key findings are highlighted in the conclusion section.
... PlanetLab allows multiple services to run concurrently and continuously, each in its slice of PlanetLab. With hundreds of research projects hosted on PlanetLab, some studies in service placement have also utilised this testbed [263,264,265]. PlanetLab was officially shut down in May 2020 [266]. ...
Article
Full-text available
The advent of new cloud-based applications such as mixed reality, online gaming, autonomous driving, and healthcare has introduced infrastructure management challenges to the underlying service network. Multi-access edge computing (MEC) extends the cloud computing paradigm and leverages servers near end-users at the network edge to provide a cloud-like environment. The optimum placement of services on edge servers plays a crucial role in the performance of such service-based applications. Dynamic service placement problem addresses the adaptive configuration of application services at edge servers to facilitate end-users and those devices that need to offload computation tasks. While reported approaches in the literature shed light on this problem from a particular perspective, a panoramic study of this problem reveals the research gaps in the big picture. This paper introduces the dynamic service placement problem and outline its relations with other problems such as task scheduling, resource management, and caching at the edge. We also present a systematic literature review of existing dynamic service placement methods for MEC environments from networking, middleware, applications, and evaluation perspectives. In the first step, we review different MEC architectures and their enabling technologies from a networking point of view. We also introduce different cache deployment solutions in network architectures and discuss their design considerations. The second step investigates dynamic service placement methods from a middleware viewpoint. We review different service packaging technologies and discuss their trade-offs. We also survey the methods and identify eight research directions that researchers follow. Our study categorises the research objectives into six main classes, proposing a taxonomy of design objectives for the dynamic service placement problem. We also investigate the reported methods and devise a solutions taxonomy comprising six criteria. 
In the third step, we concentrate on the application layer and introduce the applications that can take advantage of dynamic service placement. The fourth step investigates evaluation environments used to validate the solutions, including simulators and testbeds. We introduce real-world datasets such as edge server locations, mobility traces, and service requests used to evaluate the methods. We compile a list of open issues and challenges categorised by various viewpoints in the last step.
... As of today, for all major Cloud providers the Edge is totally reliant on the Cloud; it should become autonomous and function even disconnected from the Cloud, and, in that direction, a lot of research is being done. Selimi et al. [31] attack the problem from a bottom-up approach and propose a framework for placing cloud services in Community Networks. Ramachandran et al. identified the challenges to provide a peer-to-peer standing for the Edge to the Cloud [29]. ...
Chapter
Although smart devices markets are increasing their sales figures, their computing capabilities are not sufficient to provide good-enough-quality services. This paper proposes a solution to organize the devices within the Cloud-Edge Continuum in such a way that each one, as an autonomous individual –Agent–, processes events/data on its embedded compute resources while offering its computing capacity to the rest of the infrastructure in a Function-as-a-Service manner. Unlike other FaaS solutions, the described approach proposes to transparently convert the logic of such functions into task-based workflows backing on task-based programming models; thus, agents hosting the execution of the method generate the corresponding workflow and offloading part of the workload onto other agents to improve the overall service performance. On our prototype, the function-to-workflow transformation is performed by COMPSs; thus, developers can efficiently code applications of any of the three envisaged computing scenarios – sense-process-actuate, streaming and batch processing – throughout the whole Cloud-Edge Continuum without struggling with different frameworks specifically designed for each of them.
Article
Internet of Things (IoT) represents a new generation of information and communication technology for anyone, anytime and anywhere. Cloud service‐based IoT applications significantly increase latency and network utilization. The fog environment is closer to the user to perform computing, communication, and storage tasks on network edge devices. Therefore, it can greatly reduce the latency of real‐time applications. It is an essential feature of fog computing and its most important advantage compared to cloud computing. This study proposed a new approach to service placement generated by running applications on IoT devices in the fog computing. IoT devices send applications to the fog environment that each application contains a set of services. The purpose of solving the IoT services placement problem is to efficiently deploy these services on fog cells. For this purpose, it is assumed that the received services from the IoT applications are received as a directed acyclic graph that depicts the communication between the cells within the graph that shows the communication between the services. Then, the imperialist competitive algorithm is used to place and select the destination for IoT services. The simulation results of the iFogSim simulator in different experiments showed that the imperialist competitive algorithm with the proposed graph partitioning approach has improved service placement on the fog infrastructure compared to the genetic algorithm and best‐fit algorithm.
Article
Full-text available
This papers studies a high-performance node-level service grid model, which aims to solve the problem that the current pod-level service grid model affects the service operation and consumes many computing resources. The main method of the node-level service grid model is to improve pod-accompanied service grid sidecar with the node-accompanied service grid sidecar sharing of multiple pods, combined with the cut-through of user mode protocol stack and scaling of node-level service grid sidecar. By the performance comparison of pod-level service grid model and node-level service grid, we can conclude that node-level service grid model can isolate pod services without affecting service operation, significantly reduce memory consumption without multiplying with the number of pods, and largely reduce end-to-end network delay about 30% but the overall CPU consumption as the same as that of the pod service grid model. It indicates that the node service grid model can obtain better business benefits than the pod service grid model in container cloud, cloud service providers can provide grid services for more tenants with less memory resources and network latency, and adding grid services has no impact on the operation of user applications.
Article
Full-text available
With rapid developments of the Internet of Things (IoT) applications in recent years, their use to facilitate day-to-day activities in various domains for enhancing the quality of human life has significantly increased. Fog computing has been developed to overcome the limitations of cloud-based networks and to address the challenges posed by the massive growth of IoT devices. This paradigm can provide better Quality of Service (QoS) in terms of low energy consumption and fast response, and cope with latency and bandwidth limitations. Since IoT applications are offered in the form of multiple IoT services with different QoS requirements, it is essential to develop an efficient IoT service deployment mechanism in a fog environment with distributed fog nodes and centralized fog servers. This is referred to as the Fog Services Placement (FSP) problem. Hence, we propose a QoS-aware IoT services placement policy with different objectives as a multi-objective optimization problem. Given the proven effectiveness of meta-heuristic techniques in solving optimization problems, we have used the Open-source Development Model Algorithm (ODMA) to deploy IoT services on fog nodes called FSP-ODMA. FSP-ODMA uses the service cost, energy consumption, response time, latency, and fog resource utilization as objective functions to find the optimal IoT service placement plan. In addition, we propose a three-layer conceptual computing framework (i.e., cloud-fog-IoT) to describe the interactions between system components and the FSP problem-solving policy. The simulation results obtained demonstrate that the proposed solution increases the resource usage and service acceptance ratio and reduces the service delay and the energy consumption compared with the other metaheuristic-based mechanisms.
Article
Full-text available
Over the last few years, service placement has become a strategic and fundamental management operation that allows cloud providers to deploy and arrange their services on the high-performance computation/storage servers, while taking various constraints (e.g., resource usage, security levels, data transfer time, SLA) into consideration. Despite the huge number of service placement schemes, most of them are static and do not take the cloud changes into account. To cope with this issue, predicting the cloud zones’ performance and availability should precede the placement task. For this purpose, we adopt gated recurrent neural network as a deep learning variant that allows forecasting the next short-term resource consumption on cloud servers and predicting the future service migration traffic between them. Also, to place cloud services’ application/data components on the optimum cloud zones, the frequently used high-performance servers are selected by mining the graph-like placement history, i.e. previous placement plans. To do so, we propose a Frequent Subgraph Mining algorithm that is reinforced with a tuning method to increase the probability of executing the past placement schemes. Experimental results have proved that our predictive approach outperforms state-of-the-art placement schemes in terms of performance and prediction quality.
Article
Volunteer Computing is a type of large-scale distributed system formed by aggregating computers voluntarily donated by volunteers. These computers are usually off-the-shelf heterogeneous resources belonging to different administrative authorities (users) and exhibit uncertain behavior regarding connectivity and failure. Thus, resource allocation methods in such systems are highly dependent on the availability of resources. On the one hand, resources tend to be scarce; on the other hand, computers exhibiting low availability patterns — which are the most frequent type — are discarded, or used at a high cost only when highly available nodes are crowded. This paper presents the Complementary Low-Availability Resource-Allocation (CLARA) mechanism, a novel clustering-based resource allocation mechanism that takes advantage of complementarities between nodes with low availability patterns. Combining them into groups of complementary nodes offers an availability level equivalent to that of a single highly available node. These groups of complementary nodes are maintained using a lazy reassignment algorithm. Consequently, a significant number of nodes with low-availability patterns can be considered by the resource allocation mechanism for service placement. Our method has been validated in a simulation environment of a real volunteer network. The analysis of the results shows how our mechanism maximizes the use of poor-quality computational resources to satisfy user quality requirements while minimizing the number of user service (US) replica reassignments between nodes. Moreover, the capacity of the system to provide user services is greatly increased while the load on the highly available nodes is remarkably reduced.
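The core idea of pairing low-availability nodes can be illustrated with a toy model in which each node's availability is a vector of time slots, and two nodes hosting replicas of the same service are "up" whenever at least one of them is. All names and data here are illustrative assumptions, not CLARA's actual clustering algorithm.

```python
# Toy model of complementary availability: a service replicated on two
# low-availability nodes is reachable whenever either node is online.
# Function names and the slot representation are illustrative assumptions.

def combined_availability(slots_a, slots_b):
    """Fraction of time slots covered when two nodes back each other up."""
    covered = [a or b for a, b in zip(slots_a, slots_b)]
    return sum(covered) / len(covered)

def best_complement(target_slots, candidates):
    """Pick the candidate node whose uptime best complements the target's.

    candidates: {node_name: availability slot vector}
    """
    return max(candidates,
               key=lambda name: combined_availability(target_slots,
                                                      candidates[name]))
```

Two nodes that are each online only half the day, but at opposite times, jointly offer the coverage of a single highly available node.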
Conference Paper
Full-text available
Community networks (CNs) have seen increasing adoption over the last fifteen years. Their members contribute nodes that operate Internet proxies, web servers, user file storage and video streaming services, to name a few. Detecting communities of nodes with shared properties (such as co-location) and assessing node eligibility for service placement is thus a key factor in optimizing the experience of users. We present a novel solution to the problem of service placement as a two-phase approach, based on: 1) community finding using a scalable graph label propagation technique, and 2) a decentralized election procedure to address the multi-objective challenge of optimizing service placement in CNs. Herein we: i) highlight the applicability of leader election heuristics, which are important for service placement in community networks and scheduler-dependent scenarios; ii) present a parallel and distributed solution designed as a scalable alternative for the problem of service placement, which has mostly seen computational approaches based on centralization and sequential execution.
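A minimal label propagation pass of the kind used for community finding can be sketched as follows: each node starts with its own label and repeatedly adopts the most common label among its neighbours until no label changes. The deterministic tie-breaking rule and the iteration bound are assumptions of this sketch, not details from the paper.

```python
# Minimal label propagation for community detection: nodes adopt the
# majority label of their neighbours until stable. The tie-break rule
# (highest count, then lexicographically largest label) is an assumption
# made here to keep the sketch deterministic.
from collections import Counter

def label_propagation(adjacency, rounds=10):
    """adjacency: {node: [neighbour, ...]}. Returns {node: community label}."""
    labels = {node: node for node in adjacency}
    for _ in range(rounds):
        changed = False
        for node in sorted(adjacency):
            neighbours = adjacency[node]
            if not neighbours:
                continue
            counts = Counter(labels[m] for m in neighbours)
            best, _ = max(counts.items(), key=lambda kv: (kv[1], str(kv[0])))
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:  # converged: no node changed its label this round
            break
    return labels
```

On two triangles joined by a single bridge edge, this converges to one label per triangle.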
Conference Paper
Full-text available
Fog computing provides a decentralized approach to data processing and resource provisioning in the Internet of Things (IoT). Particular challenges of adopting fog-based computational resources are adherence to the geographical distribution of IoT data sources, the delay sensitivity of IoT services, and the potentially very large amounts of data emitted and consumed by IoT devices. Despite existing foundations, research on fog computing is still at a very early stage. A major research question is how to exploit the ubiquitous presence of small and cheap computing devices at the edge of the network to successfully execute IoT services. Therefore, in this paper, we study the placement of IoT services on fog resources, taking their QoS requirements into account. We show that our optimization model prevents QoS violations and leads to 35% lower execution cost compared to a purely cloud-based approach.
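A greedy, QoS-aware heuristic in the spirit of this fog-versus-cloud trade-off can be sketched as follows. This is a deliberate simplification of the paper's optimization model; the field names, capacity unit, and greedy strategy are assumptions for illustration.

```python
# Greedy sketch of QoS-aware fog placement: put each service on the
# lowest-latency fog node that meets its deadline and still has capacity,
# falling back to the cloud otherwise. Field names are assumptions.

def place_services(services, fog_nodes):
    """services  : [{"name", "deadline_ms"}, ...]
    fog_nodes : [{"name", "latency_ms", "capacity"}, ...] (mutated in place)
    Returns {service name: node name or "cloud"}."""
    placement = {}
    for svc in services:
        candidates = [n for n in fog_nodes
                      if n["latency_ms"] <= svc["deadline_ms"]
                      and n["capacity"] > 0]
        if candidates:
            node = min(candidates, key=lambda n: n["latency_ms"])
            node["capacity"] -= 1  # one capacity unit per service instance
            placement[svc["name"]] = node["name"]
        else:
            placement[svc["name"]] = "cloud"
    return placement
```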
Conference Paper
Full-text available
Global access to the Internet for all requires a dramatic reduction in Internet access costs, particularly in developing areas. This access is often achieved through several proxy gateways shared across local or regional access networks. These proxies allow individuals or organisations to share the capacity of their Internet connection with other users. We present a measurement study of a crowdsourced Internet proxy service in the guifi.net community network that provides free Web access to a large community through many small proxy servers spread over the network. The dataset consists of Squid proxy logs for one month, combined with network topology and traffic data. Our study focuses on a representative subset of the whole network, with about 900 nodes and roughly 470 users of the web proxy service. We analyse the service from three viewpoints: Web content traffic from users, performance of proxies, and influence of the access network. We find clear daily patterns of usage, excess capacity, and little reuse of content, which makes caching almost unnecessary. We also find variations and small inefficiencies in the distribution of traffic load across proxies and the access network, related to locality and manual proxy choice. Finally, users experience an overall usable Internet access with good throughput for a free crowdsourced service.
Article
Commons are natural or human-made resources that are managed cooperatively. The guifi.net community network is a successful example of a digital infrastructure, a computer network, managed as an open commons. Inspired by the guifi.net case and its commons governance model, we claim that a computing cloud, another digital infrastructure, can also be managed as an open commons if the appropriate tools are put in place. In this paper, we explore the feasibility and sustainability of community clouds as open commons: open user-driven clouds formed by community-managed computing resources. We propose organising the infrastructure as a service (IaaS) and platform as a service (PaaS) cloud service layers as common-pool resources (CPR) for enabling a sustainable cloud service provision. On this basis, we have outlined a governance framework for community clouds, and we have developed Cloudy, a cloud software stack that comprises a set of tools and components to build and operate community cloud services. Cloudy is tailored to the needs of the guifi.net community network, but it can be adopted by other communities. We have validated the feasibility of community clouds in a deployment in guifi.net of some 60 devices running Cloudy for over two years. To gain insight into the capacity of end-user services to generate enough value and utility to sustain the whole cloud ecosystem, we have developed a file storage application and tested it with a group of 10 guifi.net users. The experimental results and the experience from the action research confirm the feasibility and potential sustainability of the community cloud as an open commons.
Conference Paper
Recent trends show that deploying low-cost devices with lightweight virtualisation services is an attractive alternative for supporting computational requirements at the network edge. Examples include supporting the computational needs of local applications such as smart homes, serving applications with stringent Quality of Service (QoS) requirements that are hard to satisfy with traditional cloud infrastructures, and meeting the multi-access edge computing requirements of network-in-a-box solutions. The implementation of such a platform demands precise knowledge of several key system parameters, including the load that a service can tolerate and the number of service instances that a device can host. In this paper, we introduce PiCasso, a platform for lightweight service orchestration at the edge, and discuss benchmarking results aimed at identifying the critical parameters that PiCasso needs to take into consideration.
Conference Paper
Citizens develop Wireless Mesh Networks (WMNs) in many areas as an alternative, or their only way, to obtain local interconnection and access to the Internet. This access is often achieved through the use of several shared web proxy gateways. These network infrastructures consist of heterogeneous technologies and combine diverse routing protocols. Network-aware, state-of-the-art proxy selection schemes for WMNs do not work in this heterogeneous environment. We developed a client-side gateway selection mechanism that optimizes the client-gateway choice, is agnostic to the underlying infrastructure and protocols, and requires no modification of proxies or the underlying network. The choice is sensitive to network congestion and proxy load, without requiring a minimum number of participating nodes. Extended Vivaldi network coordinates are used to estimate client-proxy network performance. The load of each proxy is estimated passively by collecting the Time-to-First-Byte of HTTP requests, and this information is shared across clients. Our proposal was evaluated experimentally with clients and proxies deployed in guifi.net, the largest community wireless network in the world. Our selection mechanism avoids proxies with heavy load and slow internal network paths, with overhead linear in the number of clients and proxies.
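A Vivaldi-style coordinate distance estimate and a TTFB-based load signal can be combined into a simple proxy score, sketched below. The additive scoring formula, field names, and coordinate layout (2-D Euclidean plus a "height" term, a common Vivaldi variant) are illustrative assumptions, not the authors' exact mechanism.

```python
# Sketch: estimate client-proxy latency from Vivaldi-style coordinates
# (2-D Euclidean position plus per-host "height"), then rank proxies by
# estimated latency plus observed Time-to-First-Byte. The additive score
# is an assumption made for illustration.
import math

def vivaldi_distance(coord_a, coord_b):
    """Estimated RTT between two hosts with coordinates (x, y, height)."""
    (xa, ya, ha), (xb, yb, hb) = coord_a, coord_b
    return math.hypot(xa - xb, ya - yb) + ha + hb

def pick_proxy(client_coord, proxies):
    """proxies: {name: {"coord": (x, y, h), "ttfb_ms": median TTFB}}.
    Lower score (estimated path latency + proxy load signal) wins."""
    def score(name):
        proxy = proxies[name]
        return vivaldi_distance(client_coord, proxy["coord"]) + proxy["ttfb_ms"]
    return min(proxies, key=score)
```

A nearby but overloaded proxy (high TTFB) can thus lose to a slightly more distant, lightly loaded one.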
Article
A data mining algorithm may perform differently on datasets with different characteristics; for example, it might perform better on a dataset with continuous attributes than on one with categorical attributes, or the other way around. Typically, a dataset needs to be pre-processed before being mined. Taking into account all the possible pre-processing operators, there exists a staggeringly large number of alternatives. As a consequence, inexperienced users become overwhelmed with pre-processing alternatives. In this paper, we show that the problem can be addressed by automating the pre-processing with the support of meta-learning. To this end, we analyzed a wide range of data pre-processing techniques and a set of classification algorithms. For each classification algorithm we consider and a given dataset, we are able to automatically suggest the transformations that improve the quality of the algorithm's results on that dataset. Our approach helps non-expert users identify the transformations appropriate to their applications more effectively, and hence achieve improved results.