Information-Centric Multi-Access Edge Computing
Platform for Community Mesh Networks
University of Cambridge
Abstract—Edge computing is reshaping the way services are run
in the Internet by making computation available in the user's
proximity. Many implementations have
been recently proposed to facilitate the service delivery in
data centers and distributed networks. However, we argue
that those implementations cannot fully support the operations
in Community Mesh Networks (CMNs) since the network
connection is highly intermittent and unreliable. In this
paper, we propose PiCasso, a novel multi-access edge computing
platform that combines the advances in lightweight virtualisation
and Information-Centric Networking (ICN). PiCasso utilises in-
network caching and name based routing of ICN to optimise the
forwarding path of service delivery. We analyse data collected
from Guifi.net, the biggest CMN worldwide, to develop a smart
heuristic for service deployment. Through a real deployment
in Guifi.net, we show that our service deployment heuristic,
HANET, improves the response time by up to 53% and 28.7%
for stateless and stateful services, respectively. Finally, using
PiCasso for service delivery in Guifi.net, we achieve a 43% traffic
reduction compared to traditional host-centric communication.
I. INTRODUCTION
Community Mesh Networks (CMNs) are self-managed, large-
scale networks that are built and organized in a decentralized
and open manner. As participation in these networks is open,
they grow organically, since new links are created every time
a host is added. Because of this, the network presents a high
degree of heterogeneity with respect to the devices and links used
in the infrastructure and its management. Due to the large
and irregular topology, highly skewed bandwidth/traffic
distribution, and high software and hardware diversity
in CMNs, service provisioning is far from simple.
Unfortunately, the current architectures and platforms in CMNs
fail to capture the dynamics of the network and therefore
fail to deliver satisfactory QoS. These challenges
motivate building infrastructures in CMNs that
support lightweight multi-tenancy at the network edge by
allowing flexible hosting and fast delivery of local services.
The latest advances in lightweight virtualisation technologies
(e.g., Docker , Unikernels ), allow many developers to
build local edge computing platforms that could be used to
deliver services within CMNs. While delivering these
lightweight services within a data center is straightforward,
delivering them across the intermittent connectivity of CMNs
raises many challenges. As a matter of fact, most edge computing
platforms still rely on host-centric communication that binds
connections to fixed endpoints. The host-centric approach
struggles with service delivery, i.e., transporting service
instances to the network edge, as connectivity can fail
at any time. In addition, those platforms do
not have a specific strategy for service deployment in the CMN
environment. This raises several questions: Which services
should be delivered? When should they be delivered? What are
suitable criteria for selecting nodes to host a service? Is
network-aware placement enough to deliver satisfactory perfor-
mance to CMN users? Answering these questions is not trivial
and requires an effective strategy to manage service delivery in CMNs.
On the other hand, Information-Centric Networking
(ICN) has recently emerged as a potential solution for
delivering named content. Instead of using IP addresses for
communication, ICN identifies content by name and forwards
user requests through name-based routing. This decouples
content from its original location, so that content can
be delivered by any host that currently has it in its local
storage. Although ICN brings a lot of flexibility in terms
of content delivery, current ICN implementations focus
on simple static content (e.g., short messages, video
files). In this sense, we argue that ICN should be extended
to better support service delivery in edge computing.
To overcome the above-mentioned challenges, in this paper
we present PiCasso, a unified edge computing platform that
brings together lightweight virtualisation technologies and
a novel ICN paradigm to support both service delivery and
service deployment in the CMN environment. We underpin
PiCasso with a Docker container-based service that can be
seamlessly delivered, cached and deployed at the network
edge. The core of PiCasso is the decision engine component
that deploys services on the basis of the service speciﬁcations
and the status of the resources of the hosting devices. Unlike
other edge computing platforms, PiCasso creates a new service
abstraction layer using ICN to enable more flexibility in service
delivery.
Fig. 1: Outdoor Devices in the qMp Network
Instead of hosting services in a fixed centralised
location (e.g., a service repository), PiCasso benefits from the
inherent name-based routing and in-network caching capabilities of ICN
by allowing edge devices to retrieve services from the nearest
caches. Furthermore, PiCasso is also integrated with a service
controller and a fully functional monitoring system to optimise
service deployment decisions in CMNs. Specifically, our
key contributions are summarized as follows:
First, we characterize the performance of the Guifi.net
CMN in the city of Barcelona. We determine the key
features of the network and the node selection criteria.
Based on that, we identify the key performance indicators
(i.e., metrics) in the network to be used by our service
deployment heuristic.
Second, driven by the findings in the Guifi.net CMN, we
design PiCasso, a multi-access edge computing platform
which deploys QoS-sensitive services at the network
edge. We present a system architecture and demonstrate
the capabilities (i.e., efﬁciency and effectiveness) of the
platform by focusing on its core features. First, we show
how PiCasso achieves a better end-user experience (e.g.,
low latency, great responsiveness) using the HANET
service deployment heuristic. Then, we show how PiCasso
achieves more efﬁcient use of network bandwidth using
its ICN capabilities.
Third, we deploy the PiCasso platform in a production
Guiﬁ.net CMN and quantify the performance of the
platform with real services. To the best of our knowledge,
this is the ﬁrst ICN deployment in a production wireless
CMN such as Guiﬁ.net.
II. COMMUNITY MESH NETWORKS: QMP CASE
qMp (Quick Mesh Project)  is a wireless mesh network
which started to operate in
in the city of Barcelona,
Spain. The qMp network is a subset of Guifi.net (i.e., located
in an urban area) and, at the time of this writing, it has
operating nodes. In the network, there are two gateways (i.e.,
proxies) that connect the qMp network to the rest
of Guifi.net and the Internet. In the rest of the paper, we
use the name qMp to refer also to Guifi.net.
In terms of hardware, the qMp users have an outdoor router
(OR) with a WiFi interface on the roof, as shown in Figure 1.
Fig. 2: qMp Network Topology (Barcelona)
The ORs are used to build P2P (point-to-point) links in the
network. The ORs are connected through Ethernet to an indoor
AP (access point) as a premises network where the edge ser-
vices are running (e.g., on home-gateways, Raspberry Pi’s etc.).
ORs in qMp are using BMX6 as the mesh routing protocol .
For our experimental cases, we deploy (i.e., attach) Raspberry
Pi’s at the ORs in the network and use them as servers.
Methodology and data collection: We have collected
network data by connecting via SSH to each qMp OR
and running basic system commands available in the qMp
distribution. Live measurements have been taken hourly during
the entire month of September
. Our live monitoring
system is operational and can be seen in . Further, the
data collected is publicly available on the Internet. We use
this data to analyse the main aspects of the qMp network.
A. qMp Network Characterisation
The failure of services to gain traction
in the Guifi.net and qMp CMNs was largely due to the difficulty of
implementing the services and, for end-users, of consuming
them. To overcome these issues, one solution for CMN
enthusiasts was to design micro-cloud distributions such as
Cloudy, Guinux etc., where users were able to deploy
their preferred services and share them with others in CMNs.
Key characteristics of these distributions were a set of scripts
that automated the conﬁguration process of services. However,
there was no logic behind the service deployment. Services
were placed randomly in the network (i.e., in VMs) without
considering the performance of the underlying network.
Guifi.net, in general, is composed of numerous
distributed CMNs (e.g., qMp) that exhibit different
types of network topologies. The overall topology is constantly
changing and there is no ﬁxed topology as in the data center
(DC) environment. The network has a mesh topology in the
backbone, and each node of the backbone (i.e., super-node)
provides access to the client nodes . Figure 2 depicts the
topology of the qMp network in Barcelona. The qMp network
shows some typical patterns of urban (i.e., mesh)
networks combined with an unusual deployment that
fits completely neither organically grown networks
nor planned networks.
Figure 3 shows the Empirical
Cumulative Distribution Function (ECDF) of the node
Fig. 3: Node availability (10% of the nodes have < 90% availability)
Fig. 4: Node out-degree (average: 6.9)
Fig. 5: Link bandwidth distribution (average: 11.7 Mbps)
availability collected for a period of one month. We deﬁne the
availability of a node as the percentage of times that the node
appears in a capture. A capture is an hourly network snapshot
that we take from the qMp network (i.e., we took
in total). Figure 3 reveals that
of the nodes have availability
. In a CMN such as qMp, users do not tend to
deliberately reboot the device unless they have to perform an
upgrade, which is not very common. Hence, the percentage of
times that a node appears in a capture is a relatively good measure
of the node availability due to random failures (e.g., electric
cuts, misconﬁgurations etc). Figure 4 shows node out-degree
in the network. Figure 4 reveals that on average, around
of the nodes in the network have more than
links and around
of the nodes have at least
links with an overall average
. This shows that the network is well-connected.
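The availability metric defined above is straightforward to compute from the hourly captures. The sketch below assumes a hypothetical capture layout in which each capture is the set of node IDs observed in that hourly snapshot; the data is illustrative:

```python
# Availability = percentage of hourly captures in which a node appears.
# Each capture is modeled as the set of node IDs seen in that snapshot.
def availability(node, captures):
    """Fraction of captures containing `node`, as a percentage."""
    if not captures:
        return 0.0
    seen = sum(1 for snapshot in captures if node in snapshot)
    return 100.0 * seen / len(captures)

captures = [{"A", "B"}, {"A"}, {"A", "B", "C"}, {"B"}]
print(availability("A", captures))  # A appears in 3 of 4 captures -> 75.0
```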
First, we characterize the wireless
links of the qMp network by studying their bandwidth. Figure
5 shows the average bandwidth distribution of all the links.
The ﬁgure shows that the link throughput can be ﬁtted with
a mean of
Mbps. At the same time Figure 5 reveals
of the nodes have
Mbps or less throughput.
In order to measure the link asymmetry, Figure 6 depicts
the bandwidth measured in each direction. A boxplot of
the absolute value of the deviation over the mean is also
depicted on the right. The ﬁgure shows that around
the links have a deviation higher than
. After performing
some measurements of the signal power of the
devices, we discovered that some community members
have re-tuned the radios of their devices (e.g., transmission
power, channel and other parameters) trying to achieve better
performance, thus changing the characteristics of the links.
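The asymmetry metric used in Figure 6, the absolute deviation of the two link directions over their mean, can be sketched as follows; the function name and the example bandwidths are illustrative:

```python
def asymmetry(bw_forward, bw_reverse):
    """Absolute deviation of each direction from the mean, over the mean.

    Returns a ratio in [0, 1): 0 means a perfectly symmetric link.
    """
    mean = (bw_forward + bw_reverse) / 2.0
    if mean == 0:
        return 0.0
    return abs(bw_forward - mean) / mean

print(asymmetry(10, 30))  # |10 - 20| / 20 -> 0.5
```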
B. Key Observations
Here are some observations that we have derived from the
measurements in the qMp network:
Absence of service-enabler platforms: Despite successfully
sharing bandwidth, the Guifi.net and qMp CMNs
have not been able to widely extend the sharing of ubiquitous
cloud services, such as private data storage and backup, instant
messaging, media sharing, social networking etc., which is a
common practice in today’s Internet through cloud computing.
There have been efforts to develop and promote different
services and applications from within community networks
Fig. 6: Bandwidth asymmetry
through community micro-clouds  but without signiﬁcant
adoption. Currently, there is no open source platform to
bootstrap and manage decentralized community services. Platforms
that are easy to use, reliable, low-cost and equipped with
smart decision-making algorithms can definitely boost the
adoption of local services in the network.
Dynamic and diverse network: The qMp network is highly dynamic
and diverse due to many reasons, e.g., its community nature in
an urban area; its decentralized organic growth with extensive
diversity in the technological choices for hardware, wireless
media, link protocols, channels, routing protocols etc.; its
mesh topology etc. The current network deployment model
is based on geographic singularities rather than QoS. The
network is not scale-free. The topology is organic and different
with respect to conventional ISP networks. This implies that a
solution (i.e., algorithm) that works in a certain topology might
not work in another one. There is a need for fast, adaptive and
effective heuristics that can cope with the topology dynamics.
Non-uniform resource distribution: Resources are
not uniformly distributed in the network. Wireless links
have asymmetric quality of service (
% of the links
have a deviation higher than
%). There is a highly skewed
bandwidth, traffic and latency distribution. The organic
placement scheme currently used in qMp, and Guifi.net in general,
is inefficient, failing to capture the dynamics of the network, and
therefore fails to deliver satisfactory QoS. The symmetry of
the links, an assumption often used in the wireless mesh network
literature, is not very realistic in our case, and algorithms
Fig. 7: The architecture of PiCasso. (a) Overview of the PiCasso
platform. (b) PiCasso's function blocks.
(heuristics) unquestionably need to take this into account.
III. PICASSO: LIGHTWEIGHT EDGE COMPUTING PLATFORM
To overcome the challenges in CMNs, PiCasso is developed
based on three main aspects: lightweight virtualisation, service
abstraction layer over ICN and smart service orchestration.
Lightweight virtualisation technology, such as Docker
containers, substantially reduces the size of a service image,
as the system libraries can be customised for each particular
service. This makes the service deployment process in CMNs
more efficient, as it requires less bandwidth for delivering the
service. We also implement the service abstraction layer over
ICN which decouples the service from its original location.
The node requesting a service image by name can dynamically
choose the optimal forwarding path to retrieve a copy of service
image from the nearest cache. This is very useful for service
delivery in CMNs as the link to the service repository can
be highly intermittent (e.g., link broken, limited bandwidth).
Lastly, deploying services in CMNs requires smart service
orchestration to select a suitable node to host each service. Given
that node availability in CMNs fluctuates greatly, a node
can suddenly become unavailable (e.g., disconnected) or
might not have enough resources to host the service. In this
regard, we build a fully functional monitoring system that
monitors the nodes in the network. Subsequently, the decision
engine combines this monitoring data with smart
algorithms to make optimal service deployment decisions.
A. System Overview
The overview of the PiCasso platform is presented in Figure 7a.
The key entity is the Service Controller (SC), which period-
ically observes the network topology and the resource consumption
of potential nodes for service deployment. In our model,
we assume that service providers upload their services to
a service repository inside the SC before distributing them to the
network edge. To achieve QoS and overcome network
connectivity problems, the SC combines monitoring data
with service deployment algorithms to decide where and when
Gateway (SEG) which provides a virtualisation capability to run
a service instance at the network edge (e.g., users' houses). In
PiCasso, we use Docker, a container-based virtualisation technology,
to build lightweight services and deploy them across the SEGs. Each SEG is
also equipped with the access point daemon (e.g., hostapd )
to act as the point of attachment for the end-users to access
the services via WiFi connection. A prototype of SEG has
been developed on the Raspberry Pi
running the Hypriot OS
Version 1.2.03 . The Forwarding Node (FN) is responsible
for forwarding the requests towards the original content source
or nearby caches. Each FN is equipped with storage and
dynamically caches the content chunks that flow through it. Notice
that an FN does not necessarily need to execute the services.
B. System Architecture
PiCasso's architecture is presented in Figure 7b, which
contains the function blocks of each entity (i.e., SC, SEG, and
FN). Several ICN implementations
have been proposed during the past decade. Among
those implementations, Named Data Networking (NDN)
is the most suitable candidate for PiCasso as it uses a simple
stateful forwarding plane to utilise the distributed in-network
caching without any controlling entity. Currently, PiCasso is
written in Python, and implemented on top of NDN protocol
stack and Docker.
The NFD forwarding plane sits between the application and
transport layers while looking at the content names and oppor-
tunistically forwarding the requests to an appropriate network
interface. It creates an ICN overlay to support name-based
routing over the network. We integrate the NFD forwarding
plane to PiCasso architecture through a python wrapper of NDN
APIs called PyNDN. The NFD maintains three types of data
structure: Forwarding Information Base (FIB), Pending Interest
Table (PIT), and Content Store (CS). The FIB maintains name
prefixes with outgoing interfaces based on routing protocols
(e.g., static, NLSR) and forwarding strategies (e.g., broad-
cast). The PIT keeps track of Interest requests that have already
been forwarded by recording the incoming faces and names of Inter-
est messages. The CS is a local cache integrated in every NFD node.
Fig. 8: PiCasso Monitoring dashboard
In PiCasso, we have also extended the NDN protocol
stack by introducing a DTN face to facilitate operation
in challenged network environments such as post-disaster scenarios.
This new face communicates with an underlying DTN
implementation that handles intermittence by encapsulating
Interest and Data packets into a DTN bundle. The details of
the implementation and evaluation can be found in .
runs on the SEG and has the following major function-
alities: it registers the SEG with the service controller and
receives push commands to instantiate and terminate services
dynamically according to service deployment decisions.
This module uses docker-py, a Python wrapper for Docker,
to relay control messages to the Docker engine.
reports the SEG’s status to SC by
considering two layers of measurements. First, it measures
the underlying hardware resources such as current memory
usage, CPU utilisation, and CPU load. Second, it interfaces
with the Docker engine to report the status of running containers
(e.g., container names) and the resource consumption inside each
container (e.g., CPU and memory usage).
Decision Engine (DE) is the core component of PiCasso;
it makes autonomic service deployment decisions
based on a combination of measurement metrics
such as the resource consumption of the underlying hardware,
the network topology, and the service requirements. The DE also contains an
algorithm repository where the service deployment algorithms
can be dynamically updated for different deployment
scenarios and service level agreements.
is a repository storing dockerized compressed service
images. Our implementation allows third-party
service providers to upload their services along with a
deployment description augmented with specifications and
QoS requirements. This description is written in JSON format.
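As an illustration, such a deployment description might look as follows. The field names and values are hypothetical, since the paper does not give the actual schema; only the image name reuses the hypriot/rpi-busybox-httpd service mentioned later in the evaluation:

```json
{
  "service_name": "rpi-busybox-httpd",
  "image": "hypriot/rpi-busybox-httpd:latest",
  "max_replicas": 3,
  "qos": { "max_response_time_ms": 100 },
  "resources": { "min_memory_mb": 64, "max_cpu_load": 0.8 }
}
```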
The Monitoring Manager periodically collects the monitoring
data from each SEG and stores it in a database (Monitoring
DB). It is implemented on top of a time series database called
InfluxDB. We also implemented a dashboard for the mon-
itoring system using Grafana to visualise the time series data
of SEG measurements and application analytics (Figure 8).
C. PiCasso’s Operations
This section explains main operations used in PiCasso.
1) Collecting Monitoring Data: This operation follows the
native pull-based communication model of NDN. As shown
(a) Pull-based model
(b) Push-based model
Fig. 9: Key operations in PiCasso. (a) Monitoring Manager
retrieves the monitoring data from SEGs. (b) Decision Engine
delivers the service to the SEG
in Figure 9a, the monitoring manager places the pull requests
towards SEG1 and SEG2 while conﬁguring name-preﬁxes
as /picasso/monitoring/SEG1/ and /picasso/monitoring/SEG2/
respectively. When the SEG receives this pull Interest message,
it attaches the current monitoring data with JSON format to
the Data message and forwards to the same path that Interest
message (reverse path forwarding) came from by using infor-
mation in the PIT. To avoid receiving outdated data from the
caches, we set the data freshness to a small value (e.g., 10ms).
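The freshness check can be illustrated with a minimal Content Store sketch: a cached Data packet older than its freshness period must not satisfy an Interest and is evicted, forcing retrieval from the producer. The class and name below are illustrative, not NFD's actual implementation:

```python
import time

# Sketch of a Content Store honoring the Data freshness period.
class ContentStore:
    def __init__(self):
        self._cache = {}  # name -> (data, stored_at, freshness_s)

    def insert(self, name, data, freshness_s):
        self._cache[name] = (data, time.monotonic(), freshness_s)

    def lookup(self, name):
        entry = self._cache.get(name)
        if entry is None:
            return None
        data, stored_at, freshness_s = entry
        if time.monotonic() - stored_at > freshness_s:
            del self._cache[name]  # stale: fetch from the producer instead
            return None
        return data

cs = ContentStore()
cs.insert("/picasso/monitoring/SEG1", b'{"cpu": 0.4}', freshness_s=1.0)
print(cs.lookup("/picasso/monitoring/SEG1") is not None)  # fresh -> True
```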
2) Decision Making for Service Deployment: PiCasso relies
on smart service deployment algorithms that aim to maximise
the QoS as well as the utilisation of network resources. This operation is
controlled by the DE, which dynamically selects the appropriate
algorithm from the repository according to the scenario and
requirements of the network. The output of the algorithm is a list
of nodes selected for service deployment and instantiation.
In this paper, we propose HANET (HArdware and NETwork
Resources) heuristic algorithm, which is designed speciﬁcally
for service deployment in the unreliable network environment
such as CMNs. HANET uses the state of the underlying
CMN (i.e., qMp) to optimise service deployment decisions. In
particular, it considers three sources of information: i) network
bandwidth, ii) node availability, and iii) hardware resources.
First, we test HANET with the
static data obtained from the qMp network (i.e., bandwidth,
availability, and CPU data). Then we run HANET in a real
CMN (i.e., qMp) and quantify the performance achieved after
deploying real services. The HANET heuristic algorithm (see
Algorithm 1) runs in three phases:
Phase 1 - Network Setup Phase: We initially build the
topology graph of the qMp network. The qMp topology graph
is constructed by considering only operational nodes, marked
as being in "working" status and having one or more links pointing to
another node (i.e., we remove the disconnected nodes). Once
the topology graph is constructed, we check the availability
of the nodes in the network. The nodes that are under the
predefined availability threshold (λ) are removed. Then, we use
the K-Means partitioning algorithm to group nodes based on
their geo-location. The idea is to obtain clusters of nodes that
are close to each other. The K-Means algorithm forms clusters
of nodes based on the Euclidean distances between them,
where the distance metric in our case is the geographical
coordinates of the nodes. Each cluster contains a full replica of
a service, i.e., the algorithm in this phase partitions the network
into k (the maximum allowed number of service replicas)
clusters. This is plotted as KMeans C in Figure 10.
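The clustering step of Phase 1 can be sketched in pure Python as below. The coordinates and k are illustrative, and a real deployment would likely use a library implementation of K-Means rather than this minimal version:

```python
import math
import random

# Pure-Python sketch of Phase 1: K-Means on node geo-coordinates.
def kmeans(points, k, iters=50, seed=42):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initial centroids from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest centroid (Euclidean distance).
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

nodes = [(41.38, 2.16), (41.39, 2.17), (41.45, 2.22), (41.44, 2.21)]
clusters = kmeans(nodes, k=2)
print([len(c) for c in clusters])  # two clusters of nearby nodes
```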
Phase 2 - Computation Phase: This phase is based
on the concept of finding the cluster heads that maximise
the total bandwidth ∑_{i=1..k} ∑_{j∈C_i} B_ij between them
and their member nodes in the clusters formed in the first
phase. The bandwidth between two nodes is estimated as the
bandwidth of the link having the minimum bandwidth on the
shortest path. The computed cluster heads are the candidate
nodes for the service deployment.
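The bandwidth estimate of Phase 2 (the bottleneck link on the best path) can be computed with a Dijkstra-like "widest path" search. This is a sketch under the assumption of a simple adjacency-dict graph; the topology and bandwidths below are illustrative:

```python
import heapq

# Maximin ("widest path") bandwidth: the max over paths of the minimum
# link bandwidth along the path, found with a max-bottleneck Dijkstra.
def widest_path_bw(graph, src, dst):
    """Return the bottleneck bandwidth between src and dst, 0 if unreachable."""
    best = {src: float("inf")}
    heap = [(-float("inf"), src)]  # negate for max-heap behavior
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw
        for nbr, link_bw in graph.get(node, {}).items():
            cand = min(bw, link_bw)  # bottleneck along this extension
            if cand > best.get(nbr, 0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return 0

graph = {
    "A": {"B": 10, "C": 4},
    "B": {"A": 10, "D": 6},
    "C": {"A": 4, "D": 20},
    "D": {"B": 6, "C": 20},
}
print(widest_path_bw(graph, "A", "D"))  # A-B-D bottleneck 6 beats A-C-D's 4
```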
Phase 3 - Content Placement Phase: After the cluster
heads are computed in Phase 2, the services are placed on
the selected cluster heads if their CPU load is under the
predefined threshold (α). If this condition is satisfied, the
service image is pulled from the Service Repo and pushed
to the selected edge nodes (i.e., deployed and started). Notice
that the threshold can be set at the monitoring dashboard and
the notiﬁcation will be sent to the DE when the measured
CPU load violates this threshold.
Algorithmic Performance and Complexity: Figure 10 depicts
the average bandwidth to the cluster heads obtained with the
Random (default strategy in Guiﬁ.net), K-Means (Phase 1 of
the algorithm) and the HANET heuristic algorithm. This value
reﬂects the average bandwidth computed from the cluster heads
to the other nodes within each cluster. Figure 10 reveals that
for the considered number of services
, HANET outperforms
both K-Means and Random placement. For
, the average
bandwidth to the cluster heads has increased from
Mbps (HANET), which represents a
improvement. The highest increase of
is achieved when
. On average, when having up to
services (i.e. clusters)
in the network, the gain of HANET over K-Means is of
Based on the observations from Figure 10, the gap between the
two algorithms grows as
increases. We observe that
increase as the network grows. Accordingly, HANET will
presumably render better results for larger networks than the other
strategies. The overall complexity of HANET is polylogarithmic,
which is significantly smaller than the brute-force
method and thus practical for commodity processors.
3) Deliver Service to the Edge: When the DE retrieves a list
of selected node names from the service deployment algorithm,
it will start the service delivery process which requires the
push-based communication model. However, the current
implementation of NDN supports only the pull-based model,
where a consumer (i.e., an SEG) has to initiate the communication.
To support this operation, we have implemented the push com-
munication model based on Interest/Data exchange of primitive
NDN. We follow the publish-subscribe model , where a data
producer (the DE) publishes contents or services via an Interest mes-
sage to a subscribed consumer, which in turn triggers an Interest
back from the consumer to fetch the data. Figure 9b illustrates
Algorithm 1 HANET Algorithm
Require: input = qMpTopology.xml
  R_n: availability of node n
  CPU_ch: CPU load of cluster head
Phase 1 – Network Setup Phase
1: procedure NETWORKSETUP(input)
2:   g = BuildTopology(input)
3:   g' = SanitizeGraph(g)
4:   for each line in g' do  // sanitization process
5:     Remove disconnected nodes
6:     Ensure bidirectional links
7:     Remove nodes with no metrics
8:   end for
9:   return g'
10:  if R_n ≥ λ then
11:    PerformKMeans(g', k)
12:    return C
13:  end if
14: end procedure
Phase 2 – Computation Phase (Bandwidth Max.)
15: procedure COMPUTEHEADS(C)
16:   clusterHeads ← list()
17:   for all k ∈ C do
18:     for all i ∈ C_k do
19:       B_i ← 0
20:       for all j ∈ setdiff(C, i) do
21:         B_i ← B_i + estimate.route.bandw(g', i, j)
22:       end for
23:       clusterHeads ← argmax_C ∑_k B_i
24:     end for
25:   end for
26:   return clusterHeads
27: end procedure
Phase 3 – Content Placement Phase (Hardware)
28: procedure PLACEMENTPHASE
29:   for each clusterHeads do
30:     if CPU_ch ≤ α then
33:     end if
34:     GoForNextClusterHead()
35:   end for
36: end procedure
the Interest/Data exchange of the push-based model, where the
DE initially sends a push Interest message to SEG1 with a name
preﬁx: /picasso/service deployment/push/SEG1/service name.
To distinguish the push Interest message from the NDN
pull model, a name component "push" is added after the
operation name (i.e., "service deployment").
Fig. 10: Average bandwidth to the cluster heads
Consequently,
when SEG1 receives the push Interest message, it discards
the "push" and SEG-ID name components while reconstructing
a new Interest name, /picasso/service deployment/service
name/#00, to request the service image. In NDN, a content is
divided into several chunks, and the last name component is
reserved for the requested chunk ID, which starts from zero (e.g., #00).
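The push-to-pull name rewriting described above can be sketched as follows; the underscore-separated name components are an assumption (the paper prints them with spaces), and the function name is illustrative:

```python
# Sketch of the SEG's push-to-pull name rewriting: strip the "push" and
# SEG-ID components from the push Interest and request the first chunk.
def pull_name_from_push(push_name, seg_id):
    """Rebuild the pull Interest name for chunk #00 from a push Interest name."""
    components = push_name.strip("/").split("/")
    components = [c for c in components if c not in ("push", seg_id)]
    return "/" + "/".join(components) + "/#00"

push = "/picasso/service_deployment/push/SEG1/rpi-busybox-httpd"
print(pull_name_from_push(push, "SEG1"))
# -> /picasso/service_deployment/rpi-busybox-httpd/#00
```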
IV. PICASSO DEPLOYMENT IN GUIFI.NET
In order to understand the feasibility of running the PiCasso
platform and the possible gains of our service deployment
heuristic HANET in a real production CMN, we deploy PiCasso
on real hardware connected to the nodes of the qMp network
located in the city of Barcelona. We have strategically deployed
SEGs to cover the area of the qMp network, as presented in
Figure 11. In our configuration, the SEGs are connected to the
ORs via Ethernet cable and the service controller is centrally
set up inside the main campus of Universitat Politecnica de
Catalunya (UPC) where the Guiﬁ lab is located.
The location of the five SEGs deployed
is chosen based on the output of the HANET algorithm (i.e.,
highlighted in red in Figure 11). This corresponds
to the top-ranked nodes (i.e., cluster heads) selected by
HANET, with higher bandwidth, availability and CPU
resources. Based on this, we deploy five Raspberry Pi's at the
selected ORs given by the HANET algorithm. The other five
ORs in qMp are selected randomly for comparison purposes.
In this set, we cover nodes with different properties: high degree
centrality, nodes that are not well connected, nodes acting as
bridges etc. All nodes are well-distributed in the qMp network.
We follow the ICN-as-an-Overlay
approach to construct an ICN shim layer on top
of the existing qMp routing protocol (i.e., BMX6/7). The
NFD forwarding plane is responsible for managing the name
based routing in this ICN layer. In this deployment trial, we
use static routing to set up the forwarding table (FIB) of
each SEG and of the service controller, based on actual information
taken from the IP routing tables of the ORs in the qMp network.
V. PERFORMANCE EVALUATION
This section analyses the performance of the PiCasso platform
deployed in the qMp network.
Fig. 11: The topology of PiCasso deployment in qMp
We concentrate on benchmarking two types of services:
user-focused and network-focused services.
From the user services, we quantify the performance of the
HANET heuristic using a stateless service (ApacheBench) and
a stateful Web2.0 service (Cloudsuite web serving benchmark).
The evaluation of end-user services is based on web technology,
with response time as the key performance metric.
On the other hand, the evaluation of network services focuses
on the efficiency of service delivery in PiCasso compared with
a traditional host-centric networking (HCN) approach.
A. Evaluation of End-user Services
Undoubtedly, deploying multiple service instances can
significantly improve QoS, since servers or containers can
balance the load and respond to user requests faster. However,
in practice, it is not trivial to deliver a service instance to
every location, as this comes with extra costs such as memory
usage and bandwidth consumption. To balance this trade-off,
we apply the HANET service deployment heuristic to decide
where to place the services. We compare the HANET heuristic
with the Random heuristic, i.e., the existing and naturally
fast in-place strategy in the qMp network.
1) Impact on Stateless User Services: In this evaluation, we focus on the response time of HTTP requests while considering different numbers of replicas. The locations of the replicas are determined by the HANET algorithm using the measurements from the qMp dataset as well as the real-time monitoring data from the PiCasso platform; the SEGs selected by HANET are highlighted in Figure 11. In this experiment, we consider a lightweight web server, namely hypriot/rpi-busybox-httpd, which serves a single static HTML document with a link to a local JPEG image (the payload size is 304 bytes). This service image is delivered to the selected SEGs using the operation in Figure 9b. To generate the HTTP requests, the ApacheBench tool is run on all deployed SEGs acting as client nodes. On each node, we configured ApacheBench to create a number of concurrent active users, sending 500 HTTP requests in total to the closest replica.
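The per-client load generation follows the standard ApacheBench invocation. The sketch below only assembles the command line; the replica URL and the concurrency value are placeholders:

```python
def ab_command(replica_url, total_requests=500, concurrency=10):
    """Build the ApacheBench command run on each client SEG.
    -n: total number of requests, -c: concurrent active users."""
    return ["ab", "-n", str(total_requests), "-c", str(concurrency),
            replica_url]

# Hypothetical replica address; in our setup each client targets
# its closest replica as selected by HANET.
cmd = ab_command("http://10.1.24.1/", concurrency=10)
print(" ".join(cmd))  # ab -n 500 -c 10 http://10.1.24.1/
```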
Figure 12 illustrates the CDF of the response times collected from the ApacheBench client nodes.
Fig. 12: Response time of HTTP requests (CDF over response time in ms)
Generally, HANET achieves significantly lower response times than the Random heuristic, and increasing the number of replicas reduces the response time under both algorithms. Overall, HANET improves the response time by up to 53% compared to the Random case. For HANET, a small number of replicas is quite sufficient, as the vast majority of requests achieve a response time that is widely acceptable for a static web application.
2) Impact on Stateful User Services: The second experiment uses a Web 2.0 service that mimics a social networking application (e.g., Facebook). The content of the Web 2.0 website is generated dynamically from the actions of multiple users. For the evaluation, we use the dockerised version of the CloudSuite Web Serving benchmark. The benchmark has four tiers: the web server, the database server, the memcached server, and the clients, each with its own Docker image. The web server runs the Elgg social networking engine and connects to the memcached server and the database server. The clients (implemented using the Faban workload generator) send requests to log in to the social network and perform different operations.
The SEGs in this experiment are attached to the qMp ORs, and nine of them act as clients. One of the nodes is used to deploy the web server; the web server, database server, and memcached server are always collocated on the same host. On the client side, we measure the response time of operations such as posting on the wall, sending a chat message, updating the live feed, etc. In CloudSuite, each operation is assigned an individual QoS latency limit; if too few of the operations meet their QoS latency limit, the benchmark is considered failed. The location of the web server, database server, and memcached server has a direct impact on the client response time.
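The pass/fail rule described above can be expressed compactly. The latency values and the required fraction below are illustrative assumptions, not CloudSuite's actual configuration:

```python
def benchmark_passes(latencies_ms, qos_limit_ms, required_fraction):
    """CloudSuite-style check: the benchmark fails when too few
    operations meet their QoS latency limit."""
    ok = sum(1 for t in latencies_ms if t <= qos_limit_ms)
    return ok / len(latencies_ms) >= required_fraction

# Illustrative run: 9 of 10 operations within a 100 ms limit.
lat = [40, 55, 61, 70, 72, 80, 88, 90, 95, 250]
print(benchmark_passes(lat, 100, 0.9))   # True
print(benchmark_passes(lat, 100, 0.95))  # False
```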
Figure 13 depicts three CloudSuite operations performed when placing the web server with the HANET and Random heuristics.
Fig. 13: CloudSuite Operations (HANET vs. Random)
Figure 13 reveals that HANET outperforms Random for all the operations: PostingInTheWall, SendChatMessage, and UpdateActivity, with improvements of up to 28.7%. We notice that the gain brought by HANET is higher for more intensive workloads (i.e., more operations per client on average). Further, Figure 13 shows the average CPU load observed on the clients when performing different numbers of operations. The figure reveals that at high operation counts per client the CPU reaches a load of 3, and as a result we observe higher response times.
B. Evaluation of Network Services
To evaluate PiCasso in terms of network services, we focus on the service delivery capability, considering how service instances are made available at the network edge. We focus on the delivery cost, i.e., the total time from when the DE makes a service deployment decision until the service is delivered to the SEG. We compare the delivery cost of our solution (PiCasso) with the classic host-centric networking (HCN) approach, which is commonly used in many edge computing platforms such as Cloudy and Paradrop. To implement this approach, we disable the in-network caching facility of PiCasso and direct the service to be delivered from the service repo to each SEG, similar to IP unicast.
1) Analysis of Service Delivery Cost: In this evaluation, we select four Docker images of different sizes from the Docker Hub (see details in Table I) and migrate them from the service repo to all the deployed SEGs.
Image name                 | Size     | HCN      | PiCasso
hypriot/rpi-nano-httpd     | 88 kB    | 0.401 s  | 0.139 s
hypriot/rpi-busybox-httpd  | 2.16 MB  | 2.566 s  | 1.014 s
armhf-alpine-nginx         | 14.95 MB | 16.021 s | 6.741 s
armbuild/debian            | 145 MB   | 154.94 s | 70.741 s
TABLE I: Comparison of the average delivery cost
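The relative improvement of PiCasso over HCN can be computed directly from the values in Table I:

```python
# Average delivery costs (seconds) from Table I: image -> (HCN, PiCasso).
delivery = {
    "hypriot/rpi-nano-httpd":    (0.401, 0.139),
    "hypriot/rpi-busybox-httpd": (2.566, 1.014),
    "armhf-alpine-nginx":        (16.021, 6.741),
    "armbuild/debian":           (154.94, 70.741),
}

def improvement(hcn, picasso):
    """Relative reduction (%) of the delivery cost achieved by PiCasso."""
    return 100.0 * (hcn - picasso) / hcn

for image, (hcn, pic) in delivery.items():
    print(f"{image}: {improvement(hcn, pic):.1f}% faster")
# armbuild/debian comes out at roughly 54% faster; the smaller
# images at roughly 58-65% faster.
```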
Overall, the average delivery cost achieved by PiCasso is substantially lower than that of the HCN approach. For instance, PiCasso reduces the delivery cost of the armbuild/debian image from 154.94 to 70.741 seconds (Table I), which is about a 54% improvement compared to the HCN solution. To take a closer look at how a service image is delivered, we focus on the Debian image and plot the delivery time for each node, as presented in Figure 14.
Fig. 14: Inspecting the delivery cost of each SEG (delivery cost in seconds)
By comparing HCN and PiCasso, we observe that every SEG is better off thanks to the in-network caching and name-based routing capabilities of PiCasso: the SEGs running PiCasso are able to retrieve the data chunks from the nearest cache (discussed further with Figure 15). The HCN approach, on the other hand, is inefficient in terms of bandwidth utilisation; for SEG6, for example, the delivery time under HCN corresponds to an effective throughput far from the bandwidth that iperf measures between SEG6 and the service repo. As previously stated in the qMp Network Characterisation section, the resources in the qMp network are not uniformly distributed. This indicates that the traditional HCN approach is not sufficient to support service delivery in this dynamic environment.
2) Investigating Traffic Consumption of Service Delivery: The previous results demonstrated that PiCasso efficiently improves service delivery in the qMp network. To investigate this further, we perform a sensitivity analysis on the amount of traffic consumed to deliver the service images to the SEGs. We inspect the amount of traffic among the SEGs and the service controller from the nfd-status reports. However, the information in these reports covers only the traffic of the overlay network. To reconstruct the actual traffic spread over the qMp network, we map the paths of the PiCasso overlay onto the routing tables of the BMX6/7 routing protocol used in qMp. For instance, the path between the service controller and SEG5 (see Figure 11) can be mapped to UPC-Portal - UPC-Alix - GSgVrb - GSgranVia - CanBruixa (the names denote OR nodes).
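This overlay-to-underlay mapping can be sketched as follows. The next-hop table below is a toy stand-in for the BMX6/7 routing tables we actually queried:

```python
def underlay_path(src, dst, next_hop):
    """Expand one overlay hop (src -> dst) into the sequence of ORs
    it traverses, by following the IP routing tables hop by hop."""
    path, node = [src], src
    while node != dst:
        node = next_hop[(node, dst)]
        path.append(node)
    return path

# Toy next-hop table: (current OR, destination OR) -> next OR,
# reproducing the example path from the text.
next_hop = {
    ("UPC-Portal", "CanBruixa"): "UPC-Alix",
    ("UPC-Alix",   "CanBruixa"): "GSgVrb",
    ("GSgVrb",     "CanBruixa"): "GSgranVia",
    ("GSgranVia",  "CanBruixa"): "CanBruixa",
}

print(underlay_path("UPC-Portal", "CanBruixa", next_hop))
# ['UPC-Portal', 'UPC-Alix', 'GSgVrb', 'GSgranVia', 'CanBruixa']
```

Once every overlay hop is expanded this way, the per-link traffic counters from nfd-status can be accumulated onto the physical OR-to-OR links, yielding the heat maps of Figure 15.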
Figure 15 presents the distribution of the data traffic sent among the ORs to deliver a service image to all the deployed SEGs. Here, we solely present the results of delivering the armbuild/debian image (the largest image in the experiments) due to space constraints. Compared to the total amount of traffic consumed by the HCN approach, PiCasso achieves a reduction of about 43%. In the case of HCN, the most dominant traffic path is the link between GSgVrb and UPC-Portal, since this is the bottleneck link between the nodes deployed in qMp and the service controller at UPC Campus North. In contrast, PiCasso significantly reduces the traffic over this link. The reason is that PiCasso takes advantage of edge caching by allowing SEGs to retrieve the service image from a closer node. As illustrated in Figure 11, we deployed SEG1 at the node GSgVrb, which has the highest degree centrality (i.e., it is well connected to other nodes). In this manner, several nodes (e.g., SEG2, SEG5, SEG6, SEG8, SEG9) can directly retrieve the data chunks from the cache of SEG1. This is very useful as the cache is utilised close to the network edge.
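The choice of GSgVrb as a cache location follows from its degree centrality; a minimal computation over a toy adjacency list (the real graph comes from the qMp dataset, and the neighbour lists below are invented) looks like this:

```python
def highest_degree(adjacency):
    """Return the node with the most direct neighbours, i.e. the
    highest degree centrality, as a candidate cache location."""
    return max(adjacency, key=lambda n: len(adjacency[n]))

# Toy topology fragment; node names follow the ORs mentioned in the
# text, but the neighbour lists are illustrative only.
adjacency = {
    "GSgVrb":    ["UPC-Alix", "GSgranVia", "CanBruixa", "SEG2", "SEG5"],
    "UPC-Alix":  ["GSgVrb", "UPC-Portal"],
    "GSgranVia": ["GSgVrb", "CanBruixa"],
}

print(highest_degree(adjacency))  # GSgVrb
```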
Our deployment indicates that PiCasso is effective in terms of traffic reduction, where most of the gain comes from in-network caching and name-based routing. PiCasso utilises native multicast support to achieve efficient network utilisation during service deployment across several distributed devices. Technically, a PiCasso node (e.g., a SEG) is able to discover the closest node and dynamically retrieve the service image from the nearest cache. This is crucial for CMNs, as the network bandwidth fluctuates heavily and links become congested, especially during peak hours. To achieve even better performance, PiCasso requires more participating nodes to form a larger ICN overlay. The results in Figure 15 indicate that the traffic reduction is not yet optimal: taking the GSgranVia OR node as an example, redundant traffic is generated by many peers. In theory, if we could deploy a SEG at this OR, PiCasso would be able to reduce the data traffic even further.
From our experience in deploying PiCasso, many issues hinder increasing the number of PiCasso nodes in the network. Some owners of the ORs were not willing to plug Raspberry Pis into their nodes due to the traffic and electricity consumption, some ORs do not have enough ports for a Raspberry Pi, and some owners are away from the community. These are a few examples of problems that cannot be solved by technology alone. Overcoming these challenges requires substantial support from the community, which emphasises the importance of a collaborative model in CMNs.
The inherent in-network caching capabilities of PiCasso also provide strong support for service caching (data + computation), which enables the localisation of services in CMNs. PiCasso is further integrated with a decision engine and a fully functional monitoring system, which motivate and enable multiple local community service providers to use our system for service deployment. Overall, the PiCasso platform could empower local communities to bootstrap their own service infrastructures, enable efficient pooling of their common resources and build a sustainable service ecosystem.
VII. RELATED WORK
PiCasso brings together many building blocks aiming at an efficient platform for service delivery in challenging network environments. From this perspective, we classify three main related areas of work as follows:
Fig. 15: The data traffic distributed over the qMp network: (a) experiments with HCN; (b) experiments with PiCasso. The X and Y axes denote the names of the qMp routers, while the gradient at each coordinate represents the amount of traffic (MBytes) over the link between two routers.
Information Centric Networks: The clean-slate approach of Information-Centric Networking (ICN) has recently emerged, inherently integrating content delivery capabilities into the network architecture. Several research projects have been proposed to improve the efficiency of content delivery and have also been considered as candidate future Internet architectures. Among these ICN realisations, NDN aims to utilise widely distributed caching in the network by delivering content based on name-based routing with a simple stateful forwarding plane. In contrast, the PURSUIT and RIFE architectures are designed around a centralised solution, where a central entity controls the published and subscribed requests. In PiCasso, we have extended the NDN code base to leverage distributed in-network caching, while integrating a new service abstraction layer to support the delivery of services rather than static content.
Edge Computing Platforms: Many researchers have leveraged lightweight virtualisation technologies (e.g., Docker, unikernels) to propose edge computing platforms that improve QoS, security and privacy. Sathiaseelan et al. propose Cloudrone, an edge computing platform for delivering services over a cluster of flying drones. However, this work reports only a feasibility study of the system and an evaluation of scaling massive numbers of Docker containers on a single Raspberry Pi. Similarly, Yehia et al. only study the scalability of Docker containers on different generations of the Raspberry Pi. Accordingly, these works still lack vital components of edge computing platforms such as orchestration, monitoring and communication modules. A prototype of PiCasso was introduced in earlier work; however, the evaluation of the communication protocol for delivering the services was not discussed there. In contrast, this paper presents the complete architecture of PiCasso and evaluates the performance of service delivery with the HANET algorithm and the NDN solution. Paradrop is a specific edge computing platform that allows third-party developers to flexibly create new types of services. Cloudy is the core software of community clouds, as it unifies the different tools and services for a distributed cloud system in a Debian-based Linux distribution. The common limitation of these two platforms is the lack of a service controller that automatically applies complex algorithms for service deployment with regard to network conditions and hardware resources. Furthermore, they rely on host-centric communication, which is not efficient for CMNs, as discussed in our results. Similar to our work is SCANDEX, a service-centric networking framework for challenged decentralised networks that brings together lightweight virtualisation, ICN and DTN technologies. However, the authors propose only a conceptual design of the architecture. NFaaS is another platform that aims to leverage information-centric communication. The NFaaS architecture is based on unikernels and NDN, enabling the seamless execution of stateless microservices across the network. However, the authors evaluate the system only through simulation, while a real implementation is still under development.
Service Placement: Al-Arnaout et al. propose a content replication scheme for wireless mesh networks. The proposed scheme is divided into two phases: the selection of replica nodes (network setup phase) and content placement, where content is cached in the replicas based on popularity. The work of Elmroth et al. takes into account rapid user mobility and resource cost when placing applications in Mobile Cloud Networks (MCN). Spinnewyn et al. provide a resilient placement of mission-critical applications on geo-distributed clouds using a heuristic based on subgraph isomorphism detection. Tantawi uses biased statistical sampling methods and hierarchical placement policies for cloud workload placement. Wang et al. study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge. Coimbra et al. propose a novel service placement approach based on community finding (using a scalable graph label propagation technique) and a decentralised election procedure. Most of the work on data centers and distributed clouds considers micro-datacenters, whereas in our case CMNs such as the qMp network consist of constrained, low-power devices such as Raspberry Pis. Further, most of the above-mentioned works are not applicable to our case because of the strong heterogeneity given by the limited capacity of nodes and links, as well as the asymmetric quality of wireless links.
VIII. CONCLUSION
A particularity of CMNs is that they are heterogeneous in nature, with a high level of node and network diversity, including different topologies. As a result, they face several technical challenges, including problems related to resource management, instability, and unavailability. In this paper, we have analysed the characteristics of a production CMN, Guifi.net, to identify the key requirements for developing an edge computing platform. Based on this analysis, we argued that most of the existing platforms are not suitable for CMNs since they rely on host-centric communication. We therefore proposed PiCasso, a flexible edge computing platform that harnesses the strengths of lightweight virtualisation technology and Information-Centric Networking (ICN) to overcome the challenges in CMNs. Unlike other platforms, PiCasso contains a Decision Engine that manages the service deployment operation in CMNs. We augmented the Decision Engine with a service deployment heuristic called HANET, which considers both hardware and network resources when placing services. Based on the results, HANET selects suitable nodes to host the services and ensures that the end-users achieve an improved QoS. Apart from improving the QoS of end-users, our results show that ICN plays a key role in improving the service delivery time as well as reducing the traffic consumption in CMNs.
In future work, we intend to develop several algorithms (e.g., for different topologies) that could support different scenarios and requirements for service deployment. Furthermore, we wish to deploy PiCasso in other CMNs, which might have different characteristics.
A Python library for the Docker Engine API. https://github.com/docker/docker-py. Accessed: 2018-02-10.
Docker technology. https://www.docker.com/what-docker. Accessed: 2018-02-10.
Grafana: The open platform for analytics and monitoring. https://grafana.com/. Accessed: 2018-02-10.
Guinux. https://guifi.net/en/node/29320. Accessed: 2018-02-10.
Hostapd: Host access point daemon. https://wiki.gentoo.org/wiki/Hostapd. Accessed: 2018-02-10.
Hypriot Docker Image for Raspberry Pi. https://blog.hypriot.com/downloads/. Accessed: 2018-02-10.
InfluxDB: The Time Series Database. https://www.influxdata.com/time-series-platform/influxdb/. Accessed: 2018-02-10.
Introducing a powerful open source social networking engine. https://elgg.org/. Accessed: 2018-02-10.
NDN client library with TLV wire format support in native Python. https://github.com/named-data/PyNDN2. Accessed: 2018-02-10.
NetInf - Network of Information. http://www.netinf.org. Accessed: 2018-02-10.
PURSUIT: a Pub/Sub Internet. http://www.fp7-pursuit.eu/PursuitWeb/. Accessed: 2018-02-10.
qMp live monitoring. http://dsg.ac.upc.edu/qmpsu/index.php. Accessed: 2018-02-10.
RIFE: Architecture for an Internet for everybody. https://rife-project.eu/. Accessed: 2018-02-10.
Scalable and Adaptive Internet Solutions (SAIL). http://www.sail-project.eu. Accessed: 2018-02-10.
Afanasyev, A. NFD Developer's Guide. Tech. rep., Feb. 2018.
Al-Arnaout, Z., Fu, Q., and Frean, M. A content replication scheme for wireless mesh networks. In Proceedings of the 22nd International Workshop on Network and Operating System Support for Digital Audio and Video (New York, NY, USA, 2012), NOSSDAV '12, ACM, pp. 39–.
Al-Arnaout, Z., Fu, Q., and Frean, M. An efficient replica placement heuristic for community WMNs. In 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC) (Sept 2014), pp. 2076–2081.
Madhavapeddy, A., and Scott, D. J. Unikernels: Rise of the virtual library operating system. Queue 11, 11 (Dec. 2013), 30:30–30:44.
Baig, R., Centelles, R. P., Freitag, F., and Navarro, L. On edge microclouds to provide local container-based services. In 2017 Global Information Infrastructure and Networking Symposium, GIIS 2017, Saint Pierre, France, October 25-27, 2017 (2017), pp. 31–36.
Baig, R., Freitag, F., and Navarro, L. Cloudy in guifi.net: Establishing and sustaining a community cloud as open commons. Future Generation Computer Systems (2018).
Cerdà-Alabern, L., Neumann, A., and Escrich, P. Experimental evaluation of a wireless community mesh network. In Proceedings of the 16th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (New York, NY, USA, 2013), MSWiM '13, ACM, pp. 23–30.
Coimbra, M. E., Selimi, M., Francisco, A. P., Freitag, F., and Veiga, L. Gelly-scheduling: Distributed graph processing for service placement in community networks. In 33rd ACM/SIGAPP Symposium on Applied Computing (SAC 2018) (Apr. 2018), ACM.
De Silva, U., Lertsinsrubtavee, A., Sathiaseelan, A., Molina-Jimenez, C., and Kanchanasut, K. Implementation and evaluation of an information centric-based smart lighting controller. In Proceedings of the 12th Asian Internet Engineering Conference (2016), AINTEC '16.
Elkhatib, Y., Porter, B., Ribeiro, H. B., Zhani, M. F., Qadir, J., and Rivière, E. On using micro-clouds to deliver the fog. IEEE Internet Computing 21, 2 (Mar 2017), 8–15.
Hoque, A. K. M. M., Amin, S. O., Alyyan, A., Zhang, B., Zhang, L., and Wang, L. NLSR: Named-data link state routing protocol. In Proceedings of the 3rd ACM SIGCOMM Workshop on Information-Centric Networking (New York, NY, USA, 2013), ICN '13, ACM, pp. 15–20.
Jacobson, V., Smetters, D. K., Thornton, J. D., Plass, M. F., Briggs, N. H., and Braynard, R. L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (New York, NY, USA, 2009), CoNEXT '09, ACM, pp. 1–12.
Król, M., and Psaras, I. NFaaS: Named function as a service. In Proceedings of the 4th ACM Conference on Information-Centric Networking (New York, NY, USA, 2017), ICN '17, ACM, pp. 134–144.
Lertsinsrubtavee, A., Ali, A., Molina-Jimenez, C., Sathiaseelan, A., and Crowcroft, J. PiCasso: A lightweight edge computing platform. In Proceedings of the 6th IEEE International Conference on Cloud Networking (2017), CloudNet '17.
Liu, P., Willis, D., and Banerjee, S. Paradrop: Enabling lightweight multi-tenancy at the network's extreme edge. In 2016 IEEE/ACM Symposium on Edge Computing (SEC) (Oct. 2016), pp. 1–13.
Maccari, L., and Lo Cigno, R. A week in the life of three large wireless community networks. Ad Hoc Networks 24 (2015), 175–190. Modeling and Performance Evaluation of Wireless Ad-Hoc Networks.
Neumann, A., Lopez, E., and Navarro, L. An evaluation of BMX6 for community wireless networks. In 8th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob) (Oct 2012), pp. 651–658.
Palit, T., Shen, Y., and Ferdman, M. Demystifying cloud benchmarking. In 2016 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS) (April 2016), pp. 122–132.
Rahman, A., Trossen, D., Kutscher, D., and Ravindran, R. Deployment Considerations for Information-Centric Networking (ICN). Internet-Draft, Jan. 2018.
Sarros, C.-A., Lertsinsrubtavee, A., Molina-Jimenez, C., Prasopoulos, K., Diamantopoulos, S., Vardalis, D., and Sathiaseelan, A. ICN-based edge service deployment in challenged networks. In Proceedings of the 4th ACM Conference on Information-Centric Networking (New York, NY, USA, 2017), ICN '17, ACM, pp. 210–211.
Sathiaseelan, A., Lertsinsrubtavee, A., Jagan, A., Baskaran, P., and Crowcroft, J. Cloudrone: Micro clouds in the sky. In Proc. 2nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use (DroNet '16) (2016).
Sathiaseelan, A., Wang, L., Aucinas, A., Tyson, G., and Crowcroft, J. SCANDEX: Service centric networking for challenged decentralised networks. In Proc. 2015 Workshop on Do-it-yourself Networking: an Interdisciplinary Approach (DIYNetworking '15) (2015).
Selimi, M., Cerdà-Alabern, L., Freitag, F., Veiga, L., Sathiaseelan, A., and Crowcroft, J. A lightweight service placement approach for community network micro-clouds. Journal of Grid Computing (Feb 2018).
Selimi, M., Khan, A. M., Dimogerontakis, E., Freitag, F., and Centelles, R. P. Cloud services in the guifi.net community network. Computer Networks 93, Part 2 (2015), 373–388.
Spinnewyn, B., Mennes, R., Botero, J. F., and Latré, S. Resilient application placement for geo-distributed cloud networks. Journal of Network and Computer Applications 85 (2017), 14–31. Intelligent Systems for Heterogeneous Networks.
Tantawi, A. N. Quantitative placement of services in hierarchical clouds. In Proceedings of the 12th International Conference on Quantitative Evaluation of Systems - Volume 9259 (New York, NY, USA, 2015), QEST 2015, Springer-Verlag New York, Inc., pp. 195–210.
Tantawi, A. N. Solution biasing for optimized cloud workload placement. In 2016 IEEE International Conference on Autonomic Computing (ICAC) (July 2016), pp. 105–110.
Tärneberg, W., Mehta, A., Wadbro, E., Tordsson, J., Eker, J., Kihl, M., and Elmroth, E. Dynamic application placement in the mobile cloud network. Future Generation Computer Systems 70 (2017), 163–177.
Vega, D., Baig, R., Cerdà-Alabern, L., Medina, E., Meseguer, R., and Navarro, L. A technological overview of the guifi.net community network. Computer Networks 93, Part 2 (2015), 260–.
Vega, D., Cerdà-Alabern, L., Navarro, L., and Meseguer, R. Topology patterns of a community network: Guifi.net. In 1st International Workshop on Community Networks and Bottom-up-Broadband (CNBuB 2012), within IEEE WiMob (Barcelona, Spain, Oct. 2012), pp. 612–619.
Wang, S., Urgaonkar, R., He, T., Chan, K., Zafer, M., and Leung, K. K. Dynamic service placement for mobile micro-clouds with predicted future costs. IEEE Trans. Parallel Distrib. Syst. 28, 4 (Apr. 2017).
Xylomenos, G., Ververidis, C. N., Siris, V. A., Fotiou, N., Tsilopoulos, C., Vasilakos, X., Katsaros, K. V., and Polyzos, G. C. A survey of information-centric networking research. IEEE Communications Surveys & Tutorials 16, 2 (May 2014), 1024–1049.