Information-Centric Multi-Access Edge Computing
Platform for Community Mesh Networks
Adisorn Lertsinsrubtavee
University of Cambridge
Cambridge, UK
Llorenç Cerdà-Alabern
UPC BarcelonaTech
Barcelona, Spain
Mennan Selimi
University of Cambridge
Cambridge, UK
Leandro Navarro
UPC BarcelonaTech
Barcelona, Spain
Arjuna Sathiaseelan
University of Cambridge
Cambridge, UK
Jon Crowcroft
University of Cambridge
Cambridge, UK
Abstract—Edge computing is reshaping how services are run in the Internet by making computation available in the users' proximity. Many implementations have recently been proposed to facilitate service delivery in data centers and distributed networks. However, we argue that those implementations cannot fully support operation in Community Mesh Networks (CMNs), whose network connections are highly intermittent and unreliable. In this paper, we propose PiCasso, a novel multi-access edge computing platform that combines advances in lightweight virtualisation and Information-Centric Networking (ICN). PiCasso utilises the in-network caching and name-based routing of ICN to optimise the forwarding path of service delivery. We analyse data collected from Guifi.net, the biggest CMN worldwide, to develop a smart heuristic for service deployment. Through a real deployment in Guifi.net, we show that our service deployment heuristic, HANET, improves response time by up to 53% for stateless services and 28.7% for stateful services. Finally, using PiCasso for service delivery in Guifi.net, we achieve a 43% traffic reduction compared to traditional host-centric communication.
I. INTRODUCTION
Community Mesh Networks (CMNs) are self-managed, large-
scale networks that are built and organized in a non-centralized
and open manner. As participation in these networks is open,
they grow organically, since new links are created every time
a host is added. Because of this, the network presents a high
degree of heterogeneity with respect to devices and links used
in the infrastructure and its management. Due to the large
and irregular topology [44], highly skewed bandwidth/traffic
distribution [30] and high software and hardware diversity [43]
in CMNs, provisioning services is far from simple.
Unfortunately, the current architectures and platforms in CMNs fail to capture the dynamics of the network and therefore fail to deliver satisfactory QoS [38]. These challenges have drawn attention to building CMN infrastructures that support lightweight multi-tenancy at the network edge, allowing flexible hosting and fast delivery of local services.
The latest advances in lightweight virtualisation technologies
(e.g., Docker [2], Unikernels [18]), allow many developers to
build local edge computing platforms that could be used to
deliver services within CMNs [19]. While delivering these lightweight services within a data center is trivial, delivering them across the intermittent connectivity of CMNs poses many challenges [36]. In fact, most edge computing platforms still rely on host-centric communication that binds the connection to a fixed entity. The host-centric approach struggles with service delivery, i.e., transporting service instances to the network edge, as the connectivity can fail at any time [21], [34]. Moreover, those platforms have no specific strategy for service deployment in the CMN environment. This raises several questions: Which services should be delivered? When should they be delivered? What are the suitable criteria for selecting a node to host a service? Is network-aware placement enough to deliver satisfactory performance to CMN users? Answering these questions is not trivial and requires an effective strategy to manage service delivery in CMNs.
On the other hand, Information-Centric Networking
(ICN) [46] has recently emerged as a potential solution for
delivering named contents. Instead of using IP address for
communication, ICN identifies a content by name and forwards
a user request through name-based routing. This decouples
the content from its original location, where the content can
be delivered from any host that currently has it in its local
storage [26]. Although ICN brings much flexibility to content delivery, current ICN implementations rather focus on simple static content (e.g., short messages, video files). In this sense, we argue that ICN should be extended to better support service delivery in edge computing.
To overcome the above-mentioned challenges, in this paper we present PiCasso, a unified edge computing platform that brings together lightweight virtualisation technologies and a novel ICN paradigm to support both service delivery and service deployment in the CMN environment. We underpin PiCasso with Docker container-based services that can be seamlessly delivered, cached and deployed at the network edge. The core of PiCasso is the decision engine component that deploys services on the basis of the service specifications and the status of the resources of the hosting devices. Unlike other edge computing platforms, PiCasso creates a new service abstraction layer using ICN to enable more flexibility in service delivery.
Fig. 1: Outdoor Devices in the qMp Network
Instead of hosting services in a fixed centralised location (e.g., a service repository), PiCasso benefits from the inherent name-based routing and in-network caching capabilities of ICN, allowing edge devices to retrieve services from the nearest caches. Furthermore, PiCasso is also integrated with a service controller and a fully functional monitoring system to optimise the service deployment decision in CMNs. Specifically, our key contributions are summarized as follows:
First, we characterize the performance of the Guifi.net CMN in the city of Barcelona. We determine the key features of the network and the node selection criteria. Based on that, we identify the key performance indicators (i.e., metrics) in the network to be used by our service deployment heuristic.
Second, driven by the findings in the Guifi.net CMN, we design PiCasso, a multi-access edge computing platform which deploys QoS-sensitive services at the network edge. We present a system architecture and demonstrate the capabilities (i.e., efficiency and effectiveness) of the platform by focusing on its core features. First, we show how PiCasso achieves a better end-user experience (e.g., low latency, great responsiveness) using the HANET service deployment heuristic. Then, we show how PiCasso achieves more efficient use of network bandwidth using its ICN capabilities.
Third, we deploy the PiCasso platform in a production Guifi.net CMN and quantify the performance of the platform with real services. To the best of our knowledge, this is the first ICN deployment in a production wireless CMN such as Guifi.net.
II. COMMUNITY MESH NETWORKS: QMP CASE
qMp (Quick Mesh Project) [21] is a wireless mesh network which started operating in 2009 in the city of Barcelona, Spain. The qMp network is a subset of Guifi.net (located in an urban area) and, at the time of this writing, it has 80 operating nodes. The network has two distributed gateways (i.e., proxies) that connect qMp to the rest of Guifi.net and the Internet. In the rest of the paper, we will use the name qMp to refer to this part of Guifi.net as well.
In terms of hardware, the qMp users have an outdoor router (OR) with a WiFi interface on the roof, as shown in Figure 1.
Fig. 2: qMp Network Topology (Barcelona)
The ORs are used to build P2P (point-to-point) links in the network. The ORs are connected through Ethernet to an indoor AP (access point) in the premises network where the edge services run (e.g., on home gateways, Raspberry Pis etc.). ORs in qMp use BMX6 as the mesh routing protocol [31]. For our experiments, we attach Raspberry Pis to the ORs in the network and use them as servers.
Methodology and data collection: We collected network data by connecting via SSH to each qMp OR and running basic system commands available in the qMp distribution. Live measurements were taken hourly during the entire month of September 2017. Our live monitoring system is operational and can be seen in [12]. Further, the collected data is publicly available on the Internet. We use this data to analyse the main aspects of the qMp network.
A. qMp Network Characterisation
Service-Enablers: The failure of services to gain traction in Guifi.net and the qMp CMN was largely due to the difficulty of implementing the services and, for end-users, of consuming them. To overcome these issues, one solution for CMN enthusiasts was to design micro-cloud distributions such as Cloudy [20], Guinux [4] etc., where users were able to deploy their preferred services and share them with others in CMNs. A key characteristic of these distributions was a set of scripts that automated the configuration process of services. However, there was no logic behind the service deployment: services were placed randomly in the network (i.e., in VMs) without considering the performance of the underlying network.
Topology: Guifi.net, in general, is composed of numerous distributed CMNs (e.g., qMp) that represent different types of network topologies. The overall topology is constantly changing, and there is no fixed topology as in a data center (DC) environment. The network has a mesh topology in the backbone, and each backbone node (i.e., super-node) provides access to the client nodes [44]. Figure 2 depicts the topology of the qMp network in Barcelona. The qMp network shows some typical patterns of urban (mesh) networks combined with an unusual deployment that fits neither organically grown networks nor planned networks completely.
Node Characteristics:
Fig. 3: Node availability (ECDF; 10% of the nodes have less than 90% availability)
Fig. 4: Node out-degree (ECDF; min/mean/max: 1.0/6.9/22.0)
Fig. 5: Bandwidth distribution (ECDF of link throughput, log scale; min/mean/max: 0.02/11.7/91.6 Mbps)
Figure 3 shows the Empirical Cumulative Distribution Function (ECDF) of the node
availability collected over a period of one month. We define the availability of a node as the percentage of times that the node appears in a capture. A capture is an hourly network snapshot that we take from the qMp network (we took 744 captures in total). Figure 3 reveals that 10% of the nodes have availability lower than 90%. In a CMN such as qMp, users do not tend to deliberately reboot the device unless they have to perform an upgrade, which is not very common. Hence, the percentage of times that a node appears in a capture is a relatively good measure of the node availability under random failures (e.g., power cuts, misconfigurations etc.). Figure 4 shows the node out-degree in the network: around 90% of the nodes have more than 2 links and around 40% of the nodes have at least 5 links, with an overall average degree of 6.9. This shows that the network is well-connected.
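The availability metric defined above can be sketched in a few lines. This is our illustration, not the paper's code; the appearance counts below are hypothetical, while the total of 744 hourly captures comes from the text.

```python
# Sketch: node availability as the percentage of hourly captures in which
# the node appears, as defined in the text (744 captures in total).
TOTAL_CAPTURES = 744  # hourly snapshots over September 2017

def availability(appearances, total=TOTAL_CAPTURES):
    """Percentage of captures in which the node was seen."""
    return 100.0 * appearances / total

def fraction_below(availabilities, threshold):
    """ECDF value: fraction of nodes with availability below `threshold`%."""
    return sum(1 for a in availabilities if a < threshold) / len(availabilities)

# Hypothetical appearance counts, for illustration only.
counts = [744, 740, 700, 600, 744, 730]
avails = [availability(c) for c in counts]
```

With real capture data, `fraction_below(avails, 90)` would reproduce the 10% figure read off the ECDF in Figure 3.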
Network Performance: First, we characterize the wireless links of the qMp network by studying their bandwidth. Figure 5 shows the average bandwidth distribution of all the links. The figure shows that the link throughput has a mean of 11.7 Mbps, while 60% of the nodes have a throughput of 10 Mbps or less. In order to measure link asymmetry, Figure 6 depicts the bandwidth measured in each direction, together with a boxplot of the absolute value of the deviation over the mean on the right. The figure shows that around 25% of the links have a deviation higher than 40%. After performing some measurements of the signal power of the devices, we discovered that some community members have re-tuned the radios of their devices (e.g., transmission power, channel and other parameters) trying to achieve better performance, thus changing the characteristics of the links.
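One plausible reading of the asymmetry metric ("absolute value of the deviation over the mean") can be sketched as follows; the exact formula is not given in the text, so this is an assumption on our part.

```python
# Sketch (assumption): per-link asymmetry as the absolute deviation of one
# direction's throughput from the link mean, relative to that mean.
def asymmetry_deviation(fwd_mbps, rev_mbps):
    mean = (fwd_mbps + rev_mbps) / 2.0
    return 100.0 * abs(fwd_mbps - mean) / mean  # symmetric in both directions

# A link carrying 30 Mbps one way and 10 Mbps back deviates 50% from its mean.
```

Under this definition, the "25% of links above 40% deviation" observation corresponds to links whose two directions differ by more than a factor of about 2.3.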
B. Key Observations
Here are some observations that we have derived from the
measurements in the qMp network:
Absence of service-enabler platforms: Despite achieving bandwidth sharing, the Guifi.net and qMp CMNs have not been able to widely extend the sharing of ubiquitous cloud services, such as private data storage and backup, instant messaging, media sharing, social networking etc., which is common practice in today's Internet through cloud computing. There have been efforts to develop and promote different services and applications from within community networks through community micro-clouds [20], but without significant adoption. Currently, there is no open-source platform to bootstrap and manage decentralized community services. Platforms that are easy to use, reliable, low-cost and equipped with smart decision-making algorithms could significantly boost the adoption of local services in the network.
Fig. 6: Bandwidth asymmetry (throughput in each direction; boxplot of the absolute deviation over the mean)
Dynamic topology:
The qMp network is highly dynamic
and diverse due to many reasons, e.g., its community nature in
an urban area; its decentralized organic growth with extensive
diversity in the technological choices for hardware, wireless
media, link protocols, channels, routing protocols etc.; and its mesh topology. The current network deployment model is based on geographic singularities rather than QoS. The network is not scale-free; its topology is organic and differs from conventional ISP networks. This implies that a solution (i.e., an algorithm) that works in a certain topology might not work in another. There is a need for fast, adaptive and effective heuristics that can cope with the topology dynamics.
Non-uniform resource distribution: Resources are not uniformly distributed in the network. Wireless links have asymmetric quality of service (25% of the links have a deviation higher than 40%), and there is a highly skewed bandwidth, traffic and latency distribution. The organic placement scheme currently used in qMp, and in Guifi.net in general, is inefficient: it fails to capture the dynamics of the network and therefore fails to deliver satisfactory QoS. Link symmetry, an assumption often used in the literature on wireless mesh networks, is not very realistic in our case, and algorithms
Fig. 7: The architecture of PiCasso. (a) Platform overview: end users and service providers interact with Service Execution Gateways (SEGs, each with an AP daemon and Docker engine), Forwarding Nodes (FNs), and a Service Controller (SC) hosting the Monitoring Manager, Decision Engine and Service Repository. (b) Function blocks: SC, SEG and FN each run the NFD forwarding plane over TCP/UDP/DTN faces and the network interfaces; the SC adds the Monitoring Manager, Decision Engine, Monitoring DB and Service Repo, while the SEG adds the Monitoring Agent and Service Execution on top of the Docker engine.
(heuristics) unquestionably need to take this into account.
III. PICASSO: LIGHTWEIGHT EDGE COMPUTING PLATFORM
To overcome the challenges in CMNs, PiCasso is built on three main pillars: lightweight virtualisation, a service abstraction layer over ICN, and smart service orchestration. Lightweight virtualisation technology such as Docker containers [2] substantially reduces the size of a service image, as the system libraries can be customised for each particular service. This makes the service deployment process in CMNs more efficient, since less bandwidth is required to deliver the service. We also implement a service abstraction layer over ICN which decouples a service from its original location. A node requesting a service image by name can dynamically choose the optimal forwarding path to retrieve a copy of the image from the nearest cache. This is very useful for service delivery in CMNs, as the link to the service repository can be highly intermittent (e.g., broken links, limited bandwidth). Lastly, deploying services in CMNs requires smart service orchestration to select a suitable node to host each service. Given that node availability in CMNs fluctuates widely, a node can suddenly become unavailable (e.g., disconnected) or might not have enough resources to host the service. In this regard, we build a fully functional monitoring system that observes the nodes in the network. The decision engine then combines this monitoring data with a smart algorithm to make the optimal service deployment decision.
A. System Overview
The overview of the PiCasso platform is presented in Figure 7a. The key entity is the Service Controller (SC), which periodically observes the network topology and the resource consumption of potential nodes for service deployment. In our model, we assume that service providers upload their services to a service repository inside the SC before distribution to the network edge. To achieve QoS and overcome network connectivity problems, the SC combines the monitoring data with service deployment algorithms to decide where and when to place the services. We also introduce the Service Execution Gateway (SEG), which provides a virtualisation capability to run a service instance at the network edge (e.g., in users' houses). In PiCasso, we use Docker, a container-based virtualisation technology, to build lightweight services and deploy them across the SEGs. Each SEG is also equipped with an access point daemon (e.g., hostapd [5]) to act as the point of attachment for end-users to access the services via a WiFi connection. A prototype of the SEG has been developed on the Raspberry Pi 3 running Hypriot OS version 1.2.03 [6]. The Forwarding Node (FN) is responsible for forwarding requests towards the original content source or nearby caches. Each FN is equipped with storage and dynamically caches the content chunks that flow through it. Notice that an FN does not necessarily need to execute services.
B. System Architecture
PiCasso's architecture is presented in Figure 7b, which contains the function blocks of each entity (i.e., SC, SEG, and FN). Several ICN implementations [10], [11], [13], [14], [26] have been proposed during the past decade. Among those, Named Data Networking (NDN) [26] is the most suitable candidate for PiCasso, as it uses a simple stateful forwarding plane to exploit distributed in-network caching without any controlling entity. Currently, PiCasso is written in Python and implemented on top of the NDN protocol stack and Docker.
NFD forwarding plane sits between the application and transport layers, inspecting content names and opportunistically forwarding requests to an appropriate network interface. It creates an ICN overlay to support name-based routing over the network. We integrate the NFD forwarding plane into the PiCasso architecture through PyNDN [9], a Python wrapper for the NDN APIs. The NFD maintains three types of data structure: the Forwarding Information Base (FIB), the Pending Interest Table (PIT), and the Content Store (CS). The FIB maps name prefixes to outgoing interfaces based on routing protocols (e.g., static routes, NLSR [25]) and forwarding strategies (e.g., broadcast). The PIT keeps track of the Interest requests that have already been forwarded by recording the incoming faces and names of Interest messages. The CS is a local cache integrated in every NFD node.
Fig. 8: PiCasso Monitoring dashboard
In PiCasso, we have also extended the NDN protocol stack by introducing a DTN face to facilitate operation in challenged network environments such as post-disaster scenarios. This new face communicates with an underlying DTN implementation that handles intermittence by encapsulating Interest and Data packets into DTN bundles. The details of the implementation and evaluation can be found in [34].
Service Execution runs on the SEG and has two major functions: it registers the SEG with the service controller, and it receives push commands to dynamically instantiate and terminate services according to the service deployment decision. This module uses docker-py [1], a Python wrapper for Docker, to pass control messages to the Docker engine.
Monitoring Agent reports the SEG's status to the SC at two layers of measurement. First, it measures the underlying hardware resources such as current memory usage, CPU utilisation, and CPU load. Second, it interfaces with the Docker engine to report the status of running containers (e.g., container names) and the resource consumption inside each container (e.g., CPU and memory usage).
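The two-layer report described above could take a shape like the following; the field names are our assumption, not taken from the PiCasso implementation.

```python
# Hypothetical shape of a Monitoring Agent report: host-level metrics plus
# per-container stats (field names are illustrative assumptions).
report = {
    "host": {"cpu_util": 0.35, "cpu_load": 0.9, "mem_used_mb": 412},
    "containers": {
        "web-cache": {"cpu_util": 0.10, "mem_used_mb": 64},
    },
}

def hosted_services(rep):
    """Names of the containers currently reported as running."""
    return sorted(rep["containers"])
```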
Decision Engine (DE) is the core component of PiCasso. It makes autonomic service deployment decisions based on a combination of measurement metrics, such as the resource consumption of the underlying hardware, the network topology, and the service requirements. The DE also contains an algorithm repository, where the service deployment algorithms can be dynamically updated for different deployment scenarios and service level agreements.
Service Repo is a repository storing compressed Docker images. Our implementation allows third-party service providers to upload their services along with a deployment description augmented with specifications and QoS requirements. This description is written in JSON format.
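A deployment description might look like the following. The paper only states that the descriptor is JSON carrying specifications and QoS requirements; all field names below are our assumption.

```python
import json

# Hypothetical service deployment descriptor (field names are illustrative).
descriptor = json.loads("""
{
  "service_name": "web-cache",
  "image": "picasso/web-cache.tar.gz",
  "specs": {"cpu_cores": 1, "memory_mb": 256, "storage_mb": 512},
  "qos": {"max_latency_ms": 100, "min_bandwidth_mbps": 5},
  "max_replicas": 3
}
""")
```

The DE would match `specs` against the Monitoring DB and `qos` against the network state when ranking candidate SEGs.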
Monitoring Manager periodically collects the monitoring data from each SEG and stores it in a database (Monitoring DB), implemented on top of InfluxDB [7], a time-series database. We also implemented a monitoring dashboard using Grafana [3] to visualise the time-series data of SEG measurements and application analytics (Figure 8).
C. PiCasso’s Operations
This section explains the main operations of PiCasso.
1) Collecting Monitoring Data: This operation follows the
native pull-based communication model of NDN. As shown
Fig. 9: Key operations in PiCasso. (a) Pull-based model: the Monitoring Manager sends Interests (prefix1, prefix2) to SEG1 and SEG2, which answer with Data. (b) Push-based model: the Decision Engine sends a push Interest to the SEG, which then fetches the data chunks #0 to #n through successive Interest/Data exchanges.
in Figure 9a, the Monitoring Manager sends pull requests towards SEG1 and SEG2 using the name prefixes /picasso/monitoring/SEG1/ and /picasso/monitoring/SEG2/, respectively. When a SEG receives this pull Interest message, it attaches its current monitoring data in JSON format to a Data message and forwards it back along the path the Interest came from (reverse path forwarding), using the information in the PIT. To avoid receiving outdated data from the caches, we set the data freshness to a small value (e.g., 10 ms).
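The pull exchange above can be sketched without any NDN stack (this is a toy model, not PyNDN): the manager "expresses an Interest" under a SEG's monitoring prefix, the SEG answers with its status as JSON, and answers older than the freshness period are treated as stale.

```python
import json, time

# Toy sketch of the pull-based monitoring exchange described above.
FRESHNESS_S = 0.010  # 10 ms freshness, as in the text

class Seg:
    def __init__(self, name, status):
        self.prefix = "/picasso/monitoring/%s/" % name
        self.status = status

    def on_interest(self, name):
        """Answer a matching pull Interest with JSON status and a timestamp."""
        if name.startswith(self.prefix):
            return json.dumps(self.status), time.monotonic()
        return None

def fresh(timestamp, now):
    """A cached answer is usable only within the freshness period."""
    return (now - timestamp) <= FRESHNESS_S

seg1 = Seg("SEG1", {"cpu_load": 0.4, "mem_mb": 512})
data, ts = seg1.on_interest("/picasso/monitoring/SEG1/")
```

The tiny freshness value means intermediate caches effectively never satisfy monitoring Interests, which is exactly the intent stated above.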
2) Decision Making for Service Deployment: PiCasso relies on smart service deployment algorithms that aim to maximise QoS while conserving network resources. This operation is controlled by the DE, which dynamically selects the appropriate algorithm from the repository according to the scenario and the requirements of the network. The output of the algorithm is a list of nodes selected for service deployment and instantiation. In this paper, we propose the HANET (HArdware and NETwork resources) heuristic algorithm, designed specifically for service deployment in unreliable network environments such as CMNs. HANET uses the state of the underlying CMN (i.e., qMp) to optimize the service deployment decision. In particular, it considers three sources of information: i) network bandwidth, ii) node availability, and iii) hardware resources [37]. First, we test HANET with static data obtained from the qMp network (i.e., bandwidth, availability, and CPU data). Then we run HANET in the real CMN (i.e., qMp) and quantify the performance achieved after deploying real services. The HANET heuristic (see Algorithm 1) runs in three phases:
Phase 1 - Network Setup Phase: We initially build the topology graph of the qMp network, considering only operational nodes, i.e., nodes marked as "working" and having one or more links pointing to another node (disconnected nodes are removed). Once the topology graph is constructed, we check the availability of the nodes in the network, removing those below a predefined availability threshold (λ). Then, we use the K-Means partitioning algorithm to group nodes based on their geo-location, so that nodes close to each other end up in the same cluster. K-Means forms clusters based on the Euclidean distances between nodes, where the distance metric in our case is computed from the geographical coordinates of the nodes. Each cluster contains a full replica of a service, i.e., this phase partitions the network topology into k clusters, where k is the maximum allowed number of service replicas. This is plotted as KMeans C in Figure 10.
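A minimal K-Means sketch for this phase (our illustration, with made-up coordinates) groups nodes by Euclidean distance between their geographic coordinates into k clusters, one service replica per cluster:

```python
import math, random

# Minimal K-Means over 2D geo-coordinates (illustrative sketch of Phase 1).
def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initial centroids from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign to nearest centroid
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [                           # recompute centroids
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Three nodes near Barcelona-like coordinates plus two far-away nodes.
nodes = [(41.38, 2.16), (41.39, 2.17), (41.40, 2.15), (2.0, 2.0), (2.1, 2.1)]
clusters = kmeans(nodes, k=2)
```

With k=2 the two well-separated groups end up in different clusters, mirroring how the real algorithm partitions the topology into k replica regions.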
Phase 2 - Computation Phase: This phase finds, in each cluster C_k formed in the first phase, the cluster head maximizing the bandwidth to its member nodes, i.e., argmax_{i ∈ C_k} Σ_{j ∈ C_k, j≠i} B_ij, where B_ij is the bandwidth between nodes i and j. The bandwidth between two nodes is estimated as the bandwidth of the link having the minimum bandwidth along the shortest path. The computed cluster heads are the candidate nodes for service deployment.
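This computation can be sketched as follows (our reading of the text): the node-to-node bandwidth is the bottleneck link on a hop-count shortest path, and the cluster head is the member maximizing the summed bandwidth to its peers.

```python
from collections import deque

# Sketch of Phase 2: bottleneck bandwidth along a BFS shortest path, then
# the cluster head as the bandwidth-maximizing member.
def path_bandwidth(graph, src, dst):
    """graph: {node: {neighbor: link_bandwidth_mbps}}."""
    queue = deque([(src, float("inf"))])
    seen = {src}
    while queue:
        node, bw = queue.popleft()
        if node == dst:
            return bw                       # bottleneck bw on the found path
        for nxt, link_bw in graph[node].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, min(bw, link_bw)))
    return 0.0                              # unreachable

def cluster_head(graph, cluster):
    return max(cluster,
               key=lambda i: sum(path_bandwidth(graph, i, j)
                                 for j in cluster if j != i))

g = {
    "a": {"b": 10, "c": 5},
    "b": {"a": 10, "c": 20},
    "c": {"a": 5, "b": 20},
}
```

In the toy graph, node b sums 10 + 20 = 30 Mbps to its peers, beating a (15) and c (25), so b becomes the cluster head.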
Phase 3 - Content Placement Phase: After the cluster heads are computed in Phase 2, the services are placed on the selected cluster heads if their CPU load is under a predefined threshold (α). If this condition is satisfied, the service image is pulled from the Service Repo and pushed to the selected edge nodes (i.e., deployed and started). Notice that the threshold can be set at the monitoring dashboard, and a notification is sent to the DE when the measured CPU load violates it.
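The placement condition reduces to a simple filter over the cluster heads; the threshold value below is a hypothetical choice for illustration.

```python
# Sketch of Phase 3: deploy only on cluster heads whose CPU load is under
# the threshold alpha (0.8 is a hypothetical value; it is set operationally
# at the monitoring dashboard).
ALPHA = 0.8

def place_services(cluster_heads, cpu_load, alpha=ALPHA):
    """Return the cluster heads selected for deployment."""
    return [h for h in cluster_heads if cpu_load[h] <= alpha]
```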
Algorithmic Performance and Complexity: Figure 10 depicts the average bandwidth to the cluster heads obtained with Random placement (the default strategy in Guifi.net), K-Means (Phase 1 of the algorithm) and the HANET heuristic. This value reflects the average bandwidth from the cluster heads to the other nodes within each cluster. Figure 10 reveals that, for the considered numbers of services k, HANET outperforms both K-Means and Random placement. For k=2, the average bandwidth to the cluster heads increases from 18.3 Mbps (K-Means) to 27.7 Mbps (HANET), a 33.8% improvement. The highest increase, 45.67%, is achieved for k=11. On average, with up to 11 services (i.e., clusters) in the network, the gain of HANET over K-Means is 33%. Figure 10 also shows that the gap between the two algorithms grows as k increases, and k will increase as the network grows. Accordingly, HANET will presumably render even better results for larger networks than the other strategies. The overall complexity of HANET is O(n^(2k+1) log n), which is significantly smaller than the brute-force method and thus practical for commodity processors.
3) Delivering Services to the Edge: When the DE retrieves the list of selected node names from the service deployment algorithm, it starts the service delivery process, which requires a push-based communication model. However, the current NDN implementation supports only the pull-based model, where the consumer (i.e., the SEG) has to initiate the communication. To support this operation, we have implemented a push communication model on top of NDN's primitive Interest/Data exchange. We follow the publish-subscribe model [23]: a data producer (the DE) announces contents or services via an Interest message to a subscribed consumer, which in turn triggers an Interest back from the consumer to fetch the data. Figure 9b illustrates
Algorithm 1 HANET Algorithm
Require: input = qMpTopology.xml
  R_n: availability of node n
  λ: availability threshold
  CPU_ch: CPU load of cluster head
  α: CPU threshold

Phase 1 – Network Setup Phase
1: procedure NETWORKSETUP(input)
2:   g = BuildTopology(input)
3:   g' = SanitizeGraph(g)
4:   for each line in g' do          // sanitization process
5:     Remove disconnected nodes
6:     Ensure bidirectional links
7:     Remove nodes with no metrics
8:   end for
9:   return g'
10:  if R_n ≥ λ then
11:    PerformKMeans(g', k)
12:    return C
13:  end if
14: end procedure

Phase 2 – Computation Phase (Bandwidth Max.)
15: procedure COMPUTEHEADS(C)
16:   clusterHeads ← list()
17:   for all k ∈ C do
18:     for all i ∈ C_k do
19:       B_i ← 0
20:       for all j ∈ setdiff(C_k, i) do
21:         B_i ← B_i + estimate.route.bandw(g', i, j)
22:       end for
23:       clusterHeads ← argmax_{i ∈ C_k} B_i
24:     end for
25:   end for
26:   return clusterHeads
27: end procedure

Phase 3 – Content Placement Phase (Hardware)
28: procedure PLACEMENTPHASE
29:   for each ch ∈ clusterHeads do
30:     if CPU_ch ≤ α then
31:       DeployService()
32:       StartService()
33:     end if
34:     GoForNextClusterHead()
35:   end for
36: end procedure
the Interest/Data exchange of the push-based model, where the DE initially sends a push Interest message to SEG1 with the name prefix /picasso/service deployment/push/SEG1/service name. To distinguish the push Interest message from the NDN pull model, a name component, "push", is added after the operation name (i.e., "service deployment").

Fig. 10: Average bandwidth to the cluster heads

Consequently, when SEG1 receives the push Interest message, it discards the "push" and "SEG ID" components while reconstructing a new Interest name, /picasso/service deployment/service name/#00, to request the service image. In NDN, content is divided into several chunks; the last name component is reserved for the requested chunk ID, which starts from zero (e.g., #00).
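The name manipulation described above can be sketched as plain string handling, without an NDN library. The component layout follows the prefixes quoted in the text; the underscores in `service_deployment` and the example service name are assumptions, since the extracted text renders these components with spaces.

```python
def build_push_name(seg_id, service_name):
    """Producer (DE) side: build the name for the push Interest."""
    return f"/picasso/service_deployment/push/{seg_id}/{service_name}"

def reconstruct_pull_name(push_name, first_chunk="#00"):
    """Consumer (SEG) side: drop the 'push' and SEG-ID components and
    append the first chunk ID to start fetching the service image."""
    parts = push_name.strip("/").split("/")
    assert parts[2] == "push", "not a push Interest name"
    prefix, operation = parts[0], parts[1]
    service = "/".join(parts[4:])  # everything after the SEG ID
    return f"/{prefix}/{operation}/{service}/{first_chunk}"

name = build_push_name("SEG1", "busybox-httpd")
print(name)                         # /picasso/service_deployment/push/SEG1/busybox-httpd
print(reconstruct_pull_name(name))  # /picasso/service_deployment/busybox-httpd/#00
```

Subsequent chunks would be requested by incrementing the final chunk-ID component.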
IV. PICASSO DEPLOYMENT IN GUIFI.NET
In order to understand the feasibility of running the PiCasso platform, and the possible gains of our service deployment heuristic HANET in a real production CMN, we deploy PiCasso on real hardware connected to the nodes of the qMp network located in the city of Barcelona. We have strategically deployed 10 SEGs to cover the area of the qMp network, as presented in Figure 11. In our configuration, the SEGs are connected to the ORs via Ethernet cable, and the service controller is centrally set up inside the main campus of Universitat Politecnica de Catalunya (UPC), where the Guifi lab is located.
Node Selection: The locations of the five SEGs deployed are chosen based on the output of the HANET algorithm (highlighted in red in Figure 11). These correspond to the top-ranked nodes (i.e., cluster heads) selected by HANET, i.e., those with higher bandwidth, availability, and CPU resources. Based on this, we deploy five Raspberry Pi's at the selected ORs given by the HANET algorithm. The other five ORs in qMp are selected randomly for comparison purposes. In this set, we cover nodes with different properties: nodes with high degree centrality, nodes that are not well connected, nodes acting as bridges, etc. All nodes are well distributed in the qMp network.
ICN Overlay: We follow the ICN-as-an-Overlay approach [33] to construct an ICN shim layer on top of the existing qMp routing protocol (i.e., BMX6/7). The NFD forwarding plane is responsible for managing the name-based routing in this ICN layer. In this deployment trial, we use static routing to set up the forwarding table (FIB) of each SEG and the service controller, based on actual information taken from the IP routing tables of the ORs in the qMp network.
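A minimal sketch of how such static FIB entries could be derived from the IP routing tables is shown below. The prefix scheme, the helper names, and the routing-table layout are all assumptions for illustration; in practice the resulting entries would be installed through NFD's management tools rather than computed ad hoc like this.

```python
def build_static_fib(local_node, ip_routes, seg_hosts):
    """Sketch: map each SEG's name prefix to the IP next hop that the
    qMp routing table (BMX6/7) reports for the host running that SEG.
    `ip_routes` maps destination host -> next-hop host as seen from
    `local_node`; `seg_hosts` maps SEG name -> host it is attached to.
    Both structures are hypothetical representations of the real tables."""
    fib = {}
    for seg, host in seg_hosts.items():
        if host == local_node:
            continue  # locally served prefixes need no FIB entry
        fib[f"/picasso/{seg}"] = ip_routes[host]
    return fib

# Toy routing view from UPC-Portal (node names taken from Figure 11).
routes = {"GSgV-rb": "UPC-Alix", "CanBruixa": "UPC-Alix"}
segs = {"SEG1": "GSgV-rb", "SEG5": "CanBruixa"}
print(build_static_fib("UPC-Portal", routes, segs))
# {'/picasso/SEG1': 'UPC-Alix', '/picasso/SEG5': 'UPC-Alix'}
```

Because the routing is static, any change in the underlying qMp topology would require regenerating these entries, which is one motivation for the overlay approach's simplicity in a trial deployment.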
V. PERFORMANCE EVALUATION
This section analyses the performance of the PiCasso platform deployed in the qMp network.

Fig. 11: The topology of PiCasso deployment in qMp

We concentrate on the benchmarking of two types of services: user-focused and network-focused services.
For the user services, we quantify the performance of the HANET heuristic using a stateless service (ApacheBench) and a stateful Web 2.0 service (the CloudSuite web serving benchmark). The evaluation of end-user services is based on web technology, with response time as the key performance metric. The evaluation of network services, on the other hand, focuses on the efficiency of service delivery in PiCasso compared with a traditional host-centric networking (HCN) approach.
A. Evaluation of End-user Services

Undoubtedly, deploying multiple service instances can significantly improve the QoS, since servers or containers can balance the load and respond to user requests faster. However, in practice, it is not trivial to deliver a service instance in every location, as it comes with extra costs such as memory usage and bandwidth consumption. To balance this trade-off, we apply the HANET service deployment heuristic to decide where to place the services. We compare the HANET heuristic with the Random heuristic, i.e., the existing in-place and naturally fast strategy in the qMp network.
1) Impact on Stateless User Services: In this evaluation, we focus on the response time of HTTP requests while considering a different number of replicas (e.g., k=1 and k=2). The location of the replicas is determined by the HANET algorithm using the measurements from the qMp dataset as well as the real-time monitoring data from the PiCasso platform. Based on HANET, {SEG1} and {SEG1, SEG8} are selected for k=1 and k=2, respectively, as highlighted in Figure 11. In this experiment, we consider a lightweight web server, namely hypriot/rpi-busybox-httpd, which contains a single static HTML document with a link to a local jpeg image (the payload size is 304 bytes). This service image is delivered to the selected SEGs by using the operation in Figure 9b. To generate the HTTP requests, the Apache benchmarking tool is run on all 10 deployed SEGs as client nodes. On each node, we configured the tool to create 10 concurrent active users, subsequently sending 500 HTTP requests in total to the closest replica.
Figure 12 illustrates the CDF of the response times collected from the Apache client nodes.

Fig. 12: Response time of HTTP requests

Generally, HANET achieves significantly lower response times compared to the Random heuristic. We observed that, for k=1, 80% of the requests achieve a response time of less than 360 ms when using HANET and 700 ms when using Random. Furthermore, increasing the number of replicas to k=2 also reduces the response time for both algorithms. Considering the 80th percentile of the requests, HANET reduces the response time to 190 ms and Random to 324 ms, which is about a 47.22% and 53.71% improvement, respectively, compared to the k=1 case. For HANET, k=2 is quite sufficient, as almost 90% of the requests achieve a response time of less than 500 ms, which is widely acceptable for a static web application.
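The quoted improvement figures follow directly from the 80th-percentile response times reported above; the short check below reproduces them (numbers taken from the text, two-decimal rounding assumed):

```python
def improvement(before_ms, after_ms):
    """Relative reduction in the 80th-percentile response time (%)."""
    return round((before_ms - after_ms) / before_ms * 100, 2)

print(improvement(360, 190))  # HANET, k=1 -> k=2: 47.22
print(improvement(700, 324))  # Random, k=1 -> k=2: 53.71
```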
2) Impact on Stateful User Services: The second experiment uses a Web 2.0 service which mimics a social networking application (e.g., Facebook). The content of the Web 2.0 website is dynamically generated from the actions of multiple users, i.e., it is dynamic content. For the evaluation, we use the dockerized version of the CloudSuite Web Serving benchmark [32]. The CloudSuite benchmark has four tiers: the web server, the database server, the memcached server, and the clients. Each tier has its own Docker image. The web server runs the Elgg [8] social networking engine and connects to the memcached server and the database server. The clients (implemented using the Faban workload generator) send requests to log in to the social network and perform different operations.
We use 10 SEGs attached to the qMp ORs, where nine of them act as clients and one node is used to deploy the web server. The web server, database server, and memcached server are always collocated on the same host. On the client side, we measure the response time when performing operations such as posting on the wall, sending a chat message, updating the live feed, etc. In CloudSuite, each operation is assigned an individual QoS latency limit. If fewer than 95% of the operations meet the QoS latency limit, the benchmark is considered failed. The location of the web server, database server, and memcached server has a direct impact on the client response time.
Fig. 13: Cloudsuite Operations (HANET vs. Random)

Figure 13 depicts three Cloudsuite operations performed when placing the web server with the HANET and Random heuristics. Figure 13 reveals that HANET outperforms Random for all the operations: for the PostingInTheWall operation the improvement brought by HANET is 26.4%, for the SendChatMessage operation 35.7%, and for the UpdateActivity operation 24%. We can notice that the gain brought by HANET is higher for more intensive workloads (i.e., on average a 53% improvement when performing 40 operations per client). Further, Figure 13 shows the average CPU load observed in the clients when performing a different number of operations. The figure reveals that for 40 operations per client, the CPU reaches a load of 3, which results in higher response times.
B. Evaluation of Network Services

To evaluate PiCasso in terms of network services, we focus on the service delivery capability while considering how service instances are made available at the network edge. We focus on the delivery cost, i.e., the total time from when the DE makes a service deployment decision until the service is delivered to the SEG. We compare the delivery cost of our solution (PiCasso) with the classic host-centric networking approach (HCN), which is commonly used in many edge computing platforms such as Cloudy [20] and Paradrop [29]. To implement this approach, we disable the in-network caching facility of PiCasso and direct the service to be delivered from the service repo to each SEG, similar to IP unicast.

1) Analysis of Service Delivery Cost: In this evaluation, we select four dockerised containers with different image sizes from the Docker Hub (see details in Table I) and migrate them from the service repo to all the deployed SEGs.
Image name                 | Size     | HCN      | PiCasso
---------------------------|----------|----------|---------
hypriot/rpi-nano-httpd     | 88 kB    | 0.401 s  | 0.139 s
hypriot/rpi-busybox-httpd  | 2.16 MB  | 2.566 s  | 1.014 s
armhf-alpine-nginx         | 14.95 MB | 16.021 s | 6.741 s
armbuild/debian            | 145 MB   | 154.94 s | 70.741 s

TABLE I: Comparison of the average delivery cost
Overall, the average delivery cost achieved by PiCasso is substantially lower than with the HCN approach.

Fig. 14: Inspecting the delivery cost of each SEG

For instance, PiCasso can reduce the delivery cost of the armbuild/debian image from 154.94 to 70.74 seconds, which is about a 54% improvement compared to the HCN solution. To take a closer look at how a service image is delivered, we focus on the Debian image and plot the delivery time across each node, as presented in Figure 14. By comparing HCN and PiCasso, we observe that every SEG is better off through the in-network caching and name-based routing capabilities of PiCasso. The SEGs running PiCasso are able to retrieve the data chunks from the nearest cache (a discussion is provided with Figure 15). On the other hand, the HCN approach is inefficient in terms of bandwidth utilisation. Taking SEG6 as an example, HCN requires 295 seconds to deliver the service, which translates to 0.49 MBps throughput. However, from the iperf measurement, the bandwidth between SEG6 and the service repo is approximately 1.32 MBps. As previously stated in the qMp Network Characterisation section, the resources in the qMp network are not uniformly distributed. This indicates that the traditional HCN approach is not sufficient to support service delivery in this dynamic environment.
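The SEG6 throughput figure and the average improvement for the Debian image follow from simple arithmetic on the numbers above (145 MB image, delivery times from Table I); the sketch below reproduces both:

```python
def throughput_mbps(image_mb, seconds):
    """Effective delivery throughput in MBytes per second."""
    return round(image_mb / seconds, 2)

print(throughput_mbps(145, 295))                # 0.49 MBps for SEG6 under HCN
print(round((154.94 - 70.74) / 154.94 * 100))   # ~54 % average improvement
```

The gap between the 0.49 MBps effective throughput and the 1.32 MBps measured link bandwidth is what signals that HCN, not the link, is the bottleneck.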
2) Investigating Traffic Consumption of Service Delivery: The previous results demonstrated that PiCasso efficiently improves service delivery in the qMp network. To further investigate this, we perform a sensitivity analysis on the amount of traffic consumed for delivering the service images to the SEGs. We inspect the amount of traffic among the SEGs and the service controller from the nfd-status reports [15]. However, the information from these reports contains only the traffic of the overlay network. To reconstruct the actual traffic spread over the qMp network, we map the paths from the PiCasso overlay onto the routing tables of the BMX6/7 routing protocol used in qMp. For instance, the path between the service controller and SEG5 (see Figure 11) can be mapped to UPC-Portal - UPC-Alix - GSgV rb - GSgranVia - CanBruixa (the names denote the OR nodes).
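The overlay-to-underlay mapping just described can be sketched as follows. The routing-table structure (`next_hop[a][b]` giving node a's next hop towards b) and the toy numbers are illustrative assumptions; the real mapping uses the BMX6/7 tables collected from the ORs.

```python
def expand_overlay_link(src, dst, next_hop):
    """Sketch: expand one overlay link into the underlay OR path by
    following the routing table hop by hop."""
    path = [src]
    while path[-1] != dst:
        path.append(next_hop[path[-1]][dst])
    return path

def per_link_traffic(overlay_flows, next_hop):
    """Accumulate overlay traffic (MB) onto each traversed underlay link,
    producing the kind of per-link matrix shown in Figure 15."""
    link_mb = {}
    for (src, dst), mb in overlay_flows.items():
        path = expand_overlay_link(src, dst, next_hop)
        for a, b in zip(path, path[1:]):
            link_mb[(a, b)] = link_mb.get((a, b), 0) + mb
    return link_mb

# Toy routing: UPC-Portal -> UPC-Alix -> GSgV-rb (a prefix of the SEG5 path).
nh = {"UPC-Portal": {"GSgV-rb": "UPC-Alix"},
      "UPC-Alix": {"GSgV-rb": "GSgV-rb"}}
print(per_link_traffic({("UPC-Portal", "GSgV-rb"): 145}, nh))
```

Summing the resulting per-link values over all flows yields the total qMp traffic reported below.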
Figure 15 presents the distribution of data traffic sent among the ORs to deliver a service image to all 10 SEGs. Here, we solely present the results of delivering the armbuild/debian image (the largest image in the experiments) due to space constraints. The total amount of traffic consumed by the HCN approach is approximately 5.375 GB, while PiCasso consumed only 3.05 GB, which is about a 43.24% reduction. In the case of HCN, the most dominant traffic path is the link between GSgV rb and UPC-Portal, since this is a bottleneck link between the nodes deployed in qMp and the service controller at UPC Campus North. In contrast, PiCasso significantly reduces the traffic over this link. The reason is that PiCasso benefits from edge caching by allowing the SEGs to retrieve the service image from a closer node. As illustrated in Figure 11, we deployed SEG1 at the node GSgV rb, which has the highest degree centrality (i.e., it is well connected to other nodes). In this manner, several nodes (e.g., SEG2, SEG5, SEG6, SEG8, SEG9) can directly retrieve the data chunks from the cache of SEG1. This is very useful as the cache is utilised closer to the network edge.
VI. DISCUSSIONS

Our deployment indicates that PiCasso is more effective than the host-centric approach in terms of traffic reduction, where most of the gain comes from in-network caching and name-based routing. PiCasso utilises native multicast support to achieve efficient network utilisation during service deployment across several distributed devices. Technically, a PiCasso node (e.g., a SEG) is able to discover the closest node and dynamically retrieve the service image from the nearest cache. This is crucial for CMNs, as the network bandwidth fluctuates heavily and is often congested, especially during peak hours. To achieve even better performance, PiCasso requires a larger number of participating nodes to form a larger ICN overlay. The results in Figure 15 indicate that the traffic reduction is not yet optimal. Taking the GSgranVia OR node as an example, there is redundant traffic generated by many peers. In theory, if we could deploy a SEG at this OR, PiCasso would be able to reduce the data traffic by up to 726 MB.
From our experience in deploying PiCasso, there are many issues that hinder increasing the number of PiCasso nodes in the network. Some of the problems include: some owners of the ORs were not willing to plug Raspberry Pi's into their nodes due to the traffic and electricity consumption, some ORs do not have enough ports to plug in a Raspberry Pi, and some owners are away from the community. These are a few examples that cannot be solved by technology alone. Overcoming these challenges requires substantial support from the community, which emphasises the importance of a collaborative model in CMNs.
The inherent in-network caching capability of PiCasso also provides strong support for service caching (data + computation), which enables the localisation of services in CMNs. PiCasso is also integrated with a decision engine and a fully functional monitoring system that motivate and enable multiple local community service providers to use our system for service deployment. Overall, the PiCasso platform could empower local communities to bootstrap their own service infrastructures, enable efficient pooling of their common-pool resources, and build a sustainable service ecosystem.
VII. RELATED WORK

PiCasso brings together many building blocks, aiming to develop an efficient platform for service delivery in challenging network environments. From this aspect, we can classify three main related areas of work as follows:
(a) Experiments with HCN. (b) Experiments with PiCasso.

Fig. 15: The data traffic distributed over the qMp network. The X and Y axes denote the names of the qMp routers, while the gradient at each coordinate represents the density of traffic (MBytes) over the link between two routers.
Information Centric Networks: The clean-slate approach called Information-Centric Networking (ICN) has recently emerged, inherently integrating content delivery capability into the architecture [46]. Several research projects have been proposed to cope with the efficiency of content delivery, and these have also been considered as future Internet architectures [10], [11], [13], [14], [26]. Among these ICN realisations, NDN [26] aims to utilise widely distributed caching in the network by delivering content based on name-based routing with a simple stateful forwarding plane. In contrast, the PURSUIT [11] and RIFE [13] architectures are designed around a centralised solution, where a central entity controls the publish and subscribe requests. In PiCasso, we have extended the NDN code base in order to leverage distributed in-network caching while integrating a new service abstraction layer to support service delivery rather than static content.
Edge Computing: Many researchers have leveraged the advantages of lightweight virtualisation technologies (e.g., Docker [2], Unikernels [18]) by proposing edge computing platforms to improve QoS, security, and privacy [20], [24], [27], [29], [35], [36]. In [35], Sathiaseelan et al. propose Cloudrone, an edge computing platform for delivering services over a cluster of flying drones. However, this work reports only a feasibility study of the system and an evaluation of scaling massive numbers of Docker containers on a single Raspberry Pi. Similarly, Elkhatib et al. [24] only study the scalability of Docker containers on different generations of the Raspberry Pi. Accordingly, these works still lack vital components of edge computing platforms, such as orchestration, monitoring, and communication modules. The prototype of PiCasso was introduced in [28]; however, the evaluation of the communication protocol for delivering the service was not discussed there. In contrast, this paper presents the complete architecture of PiCasso and evaluates the performance of service delivery with the HANET algorithm and the NDN solution. Paradrop [29] is a specific edge computing platform that allows third-party developers to flexibly create new types of services. Cloudy [20] is the core software of the community clouds [38], as it unifies the different tools and services for a distributed cloud system within a Debian-based Linux distribution. The common limitation of these two platforms is the lack of a service controller that automatically applies complex algorithms for service deployment with regard to network conditions and hardware resources. Furthermore, they rely on host-centric communication, which is not efficient for CMNs, as discussed in our results. Similar to our work is SCANDEX [36], a service-centric networking framework for challenged decentralised networks that brings together lightweight virtualisation, ICN, and DTN technologies. However, the authors propose only a conceptual design architecture. NFaaS [27] is another platform that aims to leverage information-centric communication. The NFaaS architecture is based on unikernels and NDN, enabling the seamless execution of stateless microservices across the network. However, the authors only evaluate the system through simulation, while the real implementation is still under development.
Service Placement: Al-Arnaout et al. [16], [17] propose a content replication scheme for wireless mesh networks. The proposed scheme is divided into two phases: the selection of replica nodes (network setup phase) and content placement, where content is cached in the replicas based on popularity. The work of Tärneberg et al. [42] takes into account rapid user mobility and resource cost when placing applications in Mobile Cloud Networks (MCNs). Spinnewyn et al. [39] provide a resilient placement of mission-critical applications on geo-distributed clouds using a heuristic based on subgraph isomorphism detection. Tantawi [40], [41] uses biased statistical sampling methods and hierarchical placement policies for cloud workload placement. Wang et al. [45] study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge. Coimbra et al. [22] propose a novel service placement approach based on community finding (using a scalable graph label propagation technique) and a decentralised election procedure. Most of the work on data centers and distributed clouds considers micro-datacenters, whereas in our case CMNs such as the qMp network consist of constrained, low-power devices such as Raspberry Pi's. Further, most of the above-mentioned works are not applicable to our case because we have strong heterogeneity given by the limited capacity of nodes and links, as well as the asymmetric quality of wireless links.
VIII. CONCLUSION

A particularity of CMNs is that they are heterogeneous in nature, with a high level of node and network diversity, including different topologies. As a result, they face several technical challenges, including problems related to resource management, instability, and unavailability. In this paper, we have analysed the characteristics of a production CMN, Guifi.net, to identify the key requirements for developing an edge computing platform. From this analysis, we argued that most of the existing platforms are not suitable for CMNs, since they rely on host-centric communication. In this respect, we proposed PiCasso, a flexible edge computing platform that utilises the strengths of lightweight virtualisation technology and Information-Centric Networking (ICN) to overcome the challenges in CMNs. Unlike other platforms, PiCasso contains a Decision Engine that manages the service deployment operation in CMNs. We augmented the Decision Engine with a service deployment heuristic called HANET, which considers both hardware and network resources when placing services. Based on the results, HANET optimally selects the nodes to host the services and ensures that the end users achieve improved QoS. Apart from improving the QoS of end users, our results show that ICN plays a key role in improving the service delivery time as well as reducing the traffic consumption in CMNs. In future work, we intend to develop several algorithms (e.g., for different topologies) that could support different scenarios and requirements for service deployment. Furthermore, we wish to deploy PiCasso in other CMNs, which might have different environments.
REFERENCES

[1] A Python library for the Docker Engine API. https://github.com/docker/docker-py. Accessed: 2018-02-10.
[2] Docker technology. https://www.docker.com/what-docker. Accessed: 2018-02-10.
[3] Grafana: The open platform for analytics and monitoring. https://grafana.com/. Accessed: 2018-02-10.
[4] Guinux. https://guifi.net/en/node/29320. Accessed: 2018-02-10.
[5] Hostapd: Host access point daemon. https://wiki.gentoo.org/wiki/Hostapd. Accessed: 2018-02-10.
[6] Hypriot Docker Image for Raspberry Pi. https://blog.hypriot.com/downloads/. Accessed: 2018-02-10.
[7] InfluxDB: The Time Series Database. https://www.influxdata.com/time-series-platform/influxdb/. Accessed: 2018-02-10.
[8] Introducing a powerful open source social networking engine. https://elgg.org/. Accessed: 2018-02-10.
[9] NDN client library with TLV wire format support in native Python. https://github.com/named-data/PyNDN2. Accessed: 2018-02-10.
[10] NetInf - Network of Information. http://www.netinf.org. Accessed: 2018-02-10.
[11] PURSUIT: a Pub/Sub Internet. http://www.fp7-pursuit.eu/PursuitWeb/. Accessed: 2018-02-10.
[12] qMp live monitoring. http://dsg.ac.upc.edu/qmpsu/index.php. Accessed: 2018-02-10.
[13] RIFE: Architecture for an Internet for everybody. https://rife-project.eu/. Accessed: 2018-02-10.
[14] Scalable and Adaptive Internet Solutions (SAIL). http://www.sail-project.eu. Accessed: 2018-02-10.
[15] Afanasyev, A. NFD Developer's Guide. Tech. rep., Feb. 2018.
[16] Al-Arnaout, Z., Fu, Q., and Frean, M. A content replication scheme for wireless mesh networks. In Proceedings of the 22nd International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV '12), ACM, 2012, pp. 39-44.
[17] Al-Arnaout, Z., Fu, Q., and Frean, M. An efficient replica placement heuristic for community WMNs. In 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC), Sept. 2014, pp. 2076-2081.
[18] Madhavapeddy, A., and Scott, D. J. Unikernels: Rise of the Virtual Library Operating System. Queue 11, 11 (Dec. 2013), 30:30-30:44.
[19] Baig, R., Centelles, R. P., Freitag, F., and Navarro, L. On edge microclouds to provide local container-based services. In 2017 Global Information Infrastructure and Networking Symposium (GIIS 2017), Saint Pierre, France, Oct. 2017, pp. 31-36.
[20] Baig, R., Freitag, F., and Navarro, L. Cloudy in guifi.net: Establishing and sustaining a community cloud as open commons. Future Generation Computer Systems (2018).
[21] Cerdà-Alabern, L., Neumann, A., and Escrich, P. Experimental evaluation of a wireless community mesh network. In Proceedings of the 16th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM '13), ACM, 2013, pp. 23-30.
[22] Coimbra, M. E., Selimi, M., Francisco, A. P., Freitag, F., and Veiga, L. Gelly-scheduling: Distributed graph processing for service placement in community networks. In 33rd ACM/SIGAPP Symposium On Applied Computing (SAC 2018), ACM, Apr. 2018.
[23] De Silva, U., Lertsinsrubtavee, A., Sathiaseelan, A., Molina-Jimenez, C., and Kanchanasut, K. Implementation and evaluation of an information centric-based smart lighting controller. In Proceedings of the 12th Asian Internet Engineering Conference (AINTEC '16), 2016.
[24] Elkhatib, Y., Porter, B., Ribeiro, H. B., Zhani, M. F., Qadir, J., and Rivière, E. On using micro-clouds to deliver the fog. IEEE Internet Computing 21, 2 (Mar. 2017), 8-15.
[25] Hoque, A. K. M. M., Amin, S. O., Alyyan, A., Zhang, B., Zhang, L., and Wang, L. NLSR: Named-data link state routing protocol. In Proceedings of the 3rd ACM SIGCOMM Workshop on Information-Centric Networking (ICN '13), ACM, 2013, pp. 15-20.
[26] Jacobson, V., Smetters, D. K., Thornton, J. D., Plass, M. F., Briggs, N. H., and Braynard, R. L. Networking named content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '09), ACM, 2009, pp. 1-12.
[27] Król, M., and Psaras, I. NFaaS: Named function as a service. In Proceedings of the 4th ACM Conference on Information-Centric Networking (ICN '17), ACM, 2017, pp. 134-144.
[28] Lertsinsrubtavee, A., Ali, A., Molina-Jimenez, C., Sathiaseelan, A., and Crowcroft, J. PiCasso: A lightweight edge computing platform. In Proceedings of the 6th IEEE International Conference on Cloud Networking (CloudNet '17), 2017.
[29] Liu, P., Willis, D., and Banerjee, S. Paradrop: Enabling lightweight multi-tenancy at the network's extreme edge. In 2016 IEEE/ACM Symposium on Edge Computing (SEC), Oct. 2016, pp. 1-13.
[30] Maccari, L., and Lo Cigno, R. A week in the life of three large wireless community networks. Ad Hoc Networks 24 (2015), 175-190.
[31] Neumann, A., Lopez, E., and Navarro, L. An evaluation of BMX6 for community wireless networks. In 8th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Oct. 2012, pp. 651-658.
[32] Palit, T., Shen, Y., and Ferdman, M. Demystifying cloud benchmarking. In 2016 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Apr. 2016, pp. 122-132.
[33] Rahman, A., Trossen, D., Kutscher, D., and Ravindran, R. Deployment Considerations for Information-Centric Networking (ICN). Internet-Draft, Jan. 2018.
[34] Sarros, C.-A., Lertsinsrubtavee, A., Molina-Jimenez, C., Prasopoulos, K., Diamantopoulos, S., Vardalis, D., and Sathiaseelan, A. ICN-based edge service deployment in challenged networks. In Proceedings of the 4th ACM Conference on Information-Centric Networking (ICN '17), ACM, 2017, pp. 210-211.
[35] Sathiaseelan, A., Lertsinsrubtavee, A., Jagan, A., Baskaran, P., and Crowcroft, J. Cloudrone: Micro clouds in the sky. In Proc. 2nd Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use (DroNet '16), 2016.
[36] Sathiaseelan, A., Wang, L., Aucinas, A., Tyson, G., and Crowcroft, J. SCANDEX: Service centric networking for challenged decentralised networks. In Proc. 2015 Workshop on Do-it-yourself Networking: an Interdisciplinary Approach (DIYNetworking '15), 2015.
[37] Selimi, M., Cerdà-Alabern, L., Freitag, F., Veiga, L., Sathiaseelan, A., and Crowcroft, J. A lightweight service placement approach for community network micro-clouds. Journal of Grid Computing (Feb. 2018).
[38] Selimi, M., Khan, A. M., Dimogerontakis, E., Freitag, F., and Centelles, R. P. Cloud services in the guifi.net community network. Computer Networks 93, Part 2 (2015), 373-388.
[39] Spinnewyn, B., Mennes, R., Botero, J. F., and Latré, S. Resilient application placement for geo-distributed cloud networks. Journal of Network and Computer Applications 85 (2017), 14-31.
[40] Tantawi, A. N. Quantitative placement of services in hierarchical clouds. In Proceedings of the 12th International Conference on Quantitative Evaluation of Systems (QEST 2015), Springer, 2015, pp. 195-210.
[41] Tantawi, A. N. Solution biasing for optimized cloud workload placement. In 2016 IEEE International Conference on Autonomic Computing (ICAC), July 2016, pp. 105-110.
[42] Tärneberg, W., Mehta, A., Wadbro, E., Tordsson, J., Eker, J., Kihl, M., and Elmroth, E. Dynamic application placement in the mobile cloud network. Future Generation Computer Systems 70 (2017), 163-177.
[43] Vega, D., Baig, R., Cerdà-Alabern, L., Medina, E., Meseguer, R., and Navarro, L. A technological overview of the guifi.net community network. Computer Networks 93, Part 2 (2015), 260-278.
[44] Vega, D., Cerdà-Alabern, L., Navarro, L., and Meseguer, R. Topology patterns of a community network: Guifi.net. In 1st International Workshop on Community Networks and Bottom-up-Broadband (CNBuB 2012), within IEEE WiMob, Barcelona, Spain, Oct. 2012, pp. 612-619.
[45] Wang, S., Urgaonkar, R., He, T., Chan, K., Zafer, M., and Leung, K. K. Dynamic service placement for mobile micro-clouds with predicted future costs. IEEE Trans. Parallel Distrib. Syst. 28, 4 (Apr. 2017), 1002-1016.
[46] Xylomenos, G., Ververidis, C. N., Siris, V. A., Fotiou, N., Tsilopoulos, C., Vasilakos, X., Katsaros, K. V., and Polyzos, G. C. A survey of information-centric networking research. IEEE Communications Surveys & Tutorials 16, 2 (2014), 1024-1049.
... The programmable environment in the MEC platforms allow the deployment of ICN software components integrating service elements in an information-centric architecture [59]. For instance, information-centric IoT can be realized by adding caches and ICN routers in the MEC platforms and corresponding ICN clients (adapters) in the IoT devices. ...
Article
Full-text available
Multi-access Edge Computing (MEC) is a novel edge computing paradigm that moves cloud-based processing and storage capabilities closer to the mobile users by implementing server resources in the access nodes. MEC helps fulfill the stringent requirements of 5G and beyond networks to offer anytime-anywhere connectivity for many devices with ultra-low delay and huge bandwidths. Information-Centric Networking (ICN) is another prominent network technology that builds on a content-centric network architecture to overcome host-centric routing/operation shortcomings and to realize efficient pervasive and ubiquitous networking. It is envisaged to be employed in Future Internet including Beyond 5G (B5G) networks. The consolidation of ICN with MEC technology offers new opportunities to realize that vision and serve advanced use cases. However, various integration challenges are yet to be addressed to enable the wide-scale co-deployment of ICN with MEC in future networks. In this paper, we discuss and elaborate on ICN MEC integration to provide a comprehensive survey with a forward-looking perspective for Beyond 5G networks. In that regard, we deduce lessons learned from related works (for both 5G and Beyond 5G networks). We present ongoing standardization activities to highlight practical implications of such efforts. Moreover, we render key B5G use cases and highlight the role for ICN MEC integration for addressing their requirements. Finally, we layout research challenges and identify potential research Gürkan Gür is with the
... There are also numerous mesh deployment works [13][14][15][16]. Researchers have also focused on systems for these environments, including Johnson et al. [17] developing tools for sharing media, Raza et al. [18] building caching tools, and a variety of groups focused on platforms for service distribution [19,20]. ...
... Local Customization: In a less technical sense, community networks are owned and operated locally, and have the need to be customized to meet local development, sustainability, or social goals [19,20,45,46]. Traditional centralized telecom architectures prohibit this customization as most services and configuration are placed at the core. ...
Conference Paper
In this paper we introduce CoLTE, a solution for LTE-based community networks. CoLTE is a lightweight, Internet-only LTE core network (EPC) designed to facilitate the deployment and operation of small-scale, community owned and operated LTE networks in rural areas with limited and unreliable backhaul. The key differentiator of CoLTE, when compared to existing LTE solutions, is that in CoLTE the EPC is designed to be located in the field and deployed alongside a small number of cellular radios (eNodeBs), as opposed to the centralized model seen in large-scale telecom networks. We also provide performance results and lessons learned from a real-world CoLTE network deployed in rural Indonesia. This network has been sustainably operating for over six months, currently serves over 40 active users, and provides measured backhaul reductions of up to 45% when compared to cloud-core solutions.
... The consolidation of today's cloud technologies offer CNs the possibility to collectively build CN micro-clouds (Lertsinsrubtavee et al. 2018), building upon user-provided networks, and extending towards an ecosystem of cloud services. In CN micro-clouds, services are hosted at edge nodes with communication, computation, and storage capabilities. ...
Conference Paper
Full-text available
The growing demand for network connectivity has boosted the number of community networks (CNs).CNs are decentralized and self-organized communication networks owned and managed at the edge by volunteers. Due to the heterogeneity of edge node characteristics, high software and hardware diversity,irregular topology and unreliable behavior of the network, the performance of its services varie depending on where they are hosted. These characteristics of CNs and edge platforms running on them require of advanced simulation-optimization methods to place services. In this context, we propose a simheuristic algorithm to address this stochastic problem. The core of this approach relies on a multi-start meta heuristic with a multi-objective optimization method. Our approach combines Monte Carlo simulation and the multi-criteria optimal placement heuristic, The method is tested using real traces of Guifi.netCN, which is considered to be largest CN worldwide.
... Lertsinsrubtavee et al. [94] propose Picasso, an ICN-based MEC framework. Picasso is designed to adapt within the high network dynamic where service delivery can fail due to links' instability. ...
Article
Internet usability is expanded form just human-to-human interactions towards different communication types, while the communication itself is shifting from the host-centric model to the content-centric paradigm. The 5G and beyond networks promise not only to support such changes but also to provide massive data exchange and connectivity with high reliability. The next-generation networking technologies are the key enabled for 5G that aim at building a new ecosystem. One promising piece of this ecosystem is the Information-Centric Network (ICN), which is a future network architecture that tends to tackle the current host-centric model issues. It natively supports several features, including abstraction content naming and transparent in-network content caching that contribute to improve network performance, reduce traffic, and improve the latency. In this paper, we first provide a potential road map by introducing different next-generation active technologies to enable the big picture of 5G, including Mobile Edge Computing (MEC), Software-Defined Networking (SDN), and Network Function Virtualization (NFV). Then, we discuss the need for ICN and its coexistence within this ecosystem. Later, we present an in-depth review of the recent content naming schemes and a comprehensive review of in-network content caching solutions. We classify these solutions into different classes based on the used technologies and their working principle. Finally, we highlight some research challenges and propose promising directions for the research community.
... We observed that the resources are not uniformly distributed in the network. There is a highly skewed bandwidth and traffic distribution 12 . ...
Article
Full-text available
Decentralization, in the form of mesh networking and blockchain, two promising technologies, is coming to the telecommunications industry. Mesh networking allows wider low‐cost Internet access with infrastructures built from routers contributed by diverse owners, whereas blockchain enables transparency and accountability for investments, revenue, or other forms of economic compensations from sharing of network traffic, content, and services. Crowdsourcing network coverage, combined with crowdfunding costs, can create economically sustainable yet decentralized Internet access. This means that every participant can invest in resources and pay or be paid for usage to recover the costs of network devices and maintenance. While mesh networks and mesh routing protocols enable self‐organized networks that expand organically, cryptocurrencies and smart contracts enable the economic coordination among network providers and consumers. We explore and evaluate two existing blockchain software stacks, Hyperledger Fabric (HLF) and Ethereum geth with Proof of Authority (PoA) intended as a local lightweight distributed ledger, deployed in a real city‐wide production mesh network and in laboratory network. We quantify the performance and bottlenecks and identify the current limitations and opportunities for improvement to serve locally the needs of wireless mesh networks, without the privacy and economic cost of relying on public blockchains.
... Comparably, the studies [21] [22][23] [24] [8] propose deployment platforms and programming models for service provisioning in the fog. Similarly, a framework and software implementation for dynamic service deployment based on availability and processing resources of edge clouds are presented in [25]. To model resource cost in edge networks for fog service provisioning, the authors in [26] propose a model for resource contract establishment between edge infrastructure provider and cloud service providers based on auctioning. ...
Article
Full Text: http://www.utdallas.edu/~ashkan/papers/QDFSP.pdf *************************************************************************************************** Recent advances in the areas of Internet of Things (IoT), Big Data, and Machine Learning have contributed to the rise of a growing number of complex applications. These applications will be data-intensive, delay-sensitive, and real-time as smart devices prevail more in our daily life. Ensuring Quality of Service (QoS) for delay-sensitive applications is a must, and fog computing is seen as one of the primary enablers for satisfying such tight QoS requirements, as it puts compute, storage, and networking resources closer to the user. In this paper, we first introduce FogPlan, a framework for QoS-aware Dynamic Fog Service Provisioning (QDFSP). QDFSP concerns the dynamic deployment of application services on fog nodes, or the release of application services that have previously been deployed on fog nodes, in order to meet low latency and QoS requirements of applications while minimizing cost. FogPlan framework is practical and operates with no assumptions and minimal information about IoT nodes. Next, we present a possible formulation (as an optimization problem) and two efficient greedy algorithms for addressing the QDFSP at one instance of time. Finally, the FogPlan framework is evaluated using a simulation based on real-world traffic traces.
Conference Paper
Full-text available
By leveraging resources from the Fed4Fire+ CityLab testbed, we design the \textit{PiGeon} edge computing platform that experiments solution that enable ICN based edge services in wireless mesh networks (WMNs). PiGeon combines into a platform several trends in edge computing namely the ICN (Information-Centric Networking), the containerization of services exemplified by Docker, novel service placement algorithms and the increasing availability of energy efficient but still powerful hardware at user premises (Raspberry Pi, mini-PCs, and enhanced home gateways). We underpin the PiGeon platform with Docker container-based service that can be seamlessly delivered, cached and deployed at the network edge. The core of the PiGeon platform is the Decision Engine making a decision on where and when to deploy a service instance to satisfy the service requirements while considering the network status and available hardware resources. We collect network data from a real citywide mesh network such as CityLab FIRE testbed located at the city of Antwerp, Belgium. The collected data is used to feed our service placement heuristic within the PiGeon platform. Through a real deployment in CityLab testbed, we show that our service placement heuristic improves the response time up to 37% for stateful services (Web2.0 service). Apart from improving the QoS for end-users, our results show that ICN plays a key role in improving the service delivery time as well as reducing the traffic consumption in WMNs. The overall effect of ICN in our platform is that most content and service delivery requests can be satisfied very close to the client device, many times just one hop away, decoupling QoS from intra-network traffic and origin server load.
Technical Report
Full-text available
NDN Forwarding Daemon (NFD) is a network forwarder that implements the Named Data Networking (NDN) protocol. NFD is designed with modularity and extensibility in mind to enable easy experiments with new protocol features, algorithms , and applications for NDN. To help developers extend and improve NFD, this document explains NFD's internals including the overall design, major modules, their implementations, and their interactions. Revision history • Revision 8 (February 19, 2018):-Updated description of face system-Interface whitelist and blacklist for multicast faces-TCP permanent face-IPv6 support in MulticastUdpTransport-New ad hoc link type-Content Store policy configuration and policy API-Unsolicited data policy-Forwarding pipeline updates, including semantics of removing Link from Interest when it reaches producer region-Description of new semantics of NextHopFaceId-Scope control in strategies-Strategy parameters-Updated description of multicast strategy-Command Authenticator-Updated face management to match current NFD implementation-RIB-to-NLSR readvertise-New section on Congestion Control • Revision 7 (October 4, 2016):-Added brief description and reference to the new Adaptive SRTT-based (ASF) forwarding strategy-Update description of Strategy API to reflect latest changes-Miscellaneous updates 1 • Revision 6 (March 25, 2016):-Added description of refactored Face system (Face, LinkService, Transport)-Added description of WebSocket transport-Updated description of RIB management-Added description of Nack processing-Added introductory description of NDNLP-Added description of best-route retransmission suppression-Other updates to synchronize description with current NFD implementation • Revision 5 (Oct 27, 2015):-Add description of CS CachePolicy API, including information about new LRU policy-BroadcastStrategy renamed to MulticastStrategy-Added overview of how forwarder processes Link objects-Added overview of the new face system (incomplete)-Added 
description of the new automatic prefix propagation feature-Added description of the refactored management-Added description of NetworkRegionTable configuration-Added description about client.conf and NFD • Revision 4 (May 12, 2015): New section about testing and updates for NFD version 0.3.2:-Added description of new ContentStore implementation, including a new async lookup model of CS-Added description of the remote prefix registration-Updated Common Services section • Revision 3 (February 3, 2015): Updates for NFD version 0.3.0:-In Strategy interface, beforeSatisfyPendingInterest renamed to beforeSatisfyInterest-Added description of dead nonce list and related changes to forwarding pipelines-Added description of a new strategy_choice config file subsection-Amended unix config text to reflect removal of "listen" option-Added discussion about encapsulationg of NDN packets inside WebSocket messages-Revised FaceManager description, requiring canonical FaceUri in create operations-Added description of the new access router strategy
Article
Full-text available
Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only a better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology, high software and hardware diversity of CNs, a "careful" placement of micro-clouds services over the network is required to optimize service performance. This paper proposes to leverage state information about the network to inform service placement decisions, and to do so through a fast heuristic algorithm, which is critical to quickly react to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 2x in bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service, and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that the client response times decrease up to an order of magnitude when using our heuristic. Since these improvements translate in the QoE (Quality of Experience) perceived by the user, our results are relevant for contributing to higher QoE, a crucial parameter for using services from volunteer-based systems and adapting CN micro-clouds as an ecosystem for service deployment.
Article
Full-text available
The Internet has crossed new frontiers with access to it getting faster and cheaper. Considering that the architectural foundations of today's Internet were laid more than three decades ago, the Internet has done remarkably well until today coping with the growing demand. However, the future Internet architecture is expected to support not only the ever growing number of users and devices, but also a diverse set of new applications and services. Departing from the traditional host-centric access paradigm, where access to a desired content is mapped to its location, an information-centric model enables the association of access to a desired content with the content itself, irrespective of the location where it is being held. UMOBILE tailors the information-centric communication model to meet the requirements of opportunistic communications, integrating those connectivity approaches into a single architecture. By pushing services near the edge of the network, such an architecture can pervasively operate in any networking environment and allows for the development of innovative applications, providing access to data independent of the level of end-to-end connectivity availability.
Conference Paper
Full-text available
Community networks (CNs) have seen an increase in the last fifteen years. Their members contact nodes which operate Internet proxies, web servers, user file storage and video streaming services, to name a few. Detecting communities of nodes with properties (such as co-location) and assessing node eligibility for service placement is thus a key-factor in optimizing the experience of users. We present a novel solution for the problem of service placement as a two-phase approach, based on: 1) community finding using a scalable graph label propagation technique and 2) a decentralized election procedure to address the multi-objective challenge of optimizing service placement in CNs. Herein we: i) highlight the applicability of leader election heuristics which are important for service placement in community networks and scheduler-dependent scenarios; ii) present a parallel and distributed solution designed as a scal-able alternative for the problem of service placement, which has mostly seen computational approaches based on centralization and sequential execution.
Conference Paper
Full-text available
Recent years have seen a trend towards decentralisation - from initiatives on decentralized web to decentralized network infrastructures. In this position paper, we present an architectural vision for decentralising cloud service infrastructures. Our vision is on community cloud infrastructures on top of decentralised access infrastructures i.e. community networks, using resources pooled from the community. Our architectural vision considers some fundamental challenges of integrating the current state of the art virtualisation technologies such as Software Defined Networking (SDN) into community infrastructures which are highly unreliable. Our proposed design goal is to include lightweight network and processing virtualization with fault tolerance mechanisms to ensure sufficient level of reliability to support local services.
Conference Paper
Full-text available
In the past, the Information-centric networking (ICN) community has focused on issues mainly pertaining to traditional content delivery (e.g., routing and forwarding scalability, congestion control and in-network caching). However, to keep up with future Internet architectural trends the wider area of future Internet paradigms, there is a pressing need to support edge/fog computing environments, where cloud functionality is available more proximate to where the data is generated and needs processing. With this goal in mind, we propose Named Function as a Service (NFaaS), a framework that extends the Named Data Networking architecture to support in-network function execution. In contrast to existing works, NFaaSbuilds on very lightweight VMs and allows for dynamic execution of custom code. Functions can be downloaded and run by any node in the network. Functions can move between nodes according to user demand, making resolution of moving functions a first-class challenge. NFaaSincludes a Kernel Store component, which is responsible not only for storing functions, but also for making decisions on which functions to run locally. NFaaSincludes a routing protocol and a number of forwarding strategies to deploy and dynamically migrate functions within the network. We validate our design through extensive simulations, which show that delay-sensitive functions are deployed closer to the edge, while less delay-sensitive ones closer to the core.
Article
Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only a better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology, high software and hardware diversity of CNs, a "careful" placement of micro-clouds services over the network is required to optimize service performance. This paper proposes to leverage state information about the network to inform service placement decisions, and to do so through a fast heuristic algorithm, which is critical to quickly react to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 2x in bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service, and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that the client response times decrease up to an order of magnitude when using our heuristic. Since these improvements translate in the QoE (Quality of Experience) perceived by the user, our results are relevant for contributing to higher QoE, a crucial parameter for using services from volunteer-based systems and adapting CN micro-clouds as an eco-system for service deployment.
Article
Commons are natural or human-made resources that are managed cooperatively. The guifi.net community network is a successful example of a digital infrastructure, a computer network, managed as an open commons. Inspired by the guifi.net case and its commons governance model, we claim that a computing cloud, another digital infrastructure, can also be managed as an open commons if the appropriate tools are put in place. In this paper, we explore the feasibility and sustainability of community clouds as open commons: open user-driven clouds formed by community-managed computing resources. We propose organising the infrastructure as a service (IaaS) and platform as a service (PaaS) cloud service layers as common-pool resources (CPR) for enabling a sustainable cloud service provision. On this basis, we have outlined a governance framework for community clouds, and we have developed Cloudy, a cloud software stack that comprises a set of tools and components to build and operate community cloud services. Cloudy is tailored to the needs of the guifi.net community network, but it can be adopted by other communities. We have validated the feasibility of community clouds in a deployment in guifi.net of some 60 devices running Cloudy for over two years. To gain insight into the capacity of end-user services to generate enough value and utility to sustain the whole cloud ecosystem, we have developed a file storage application and tested it with a group of 10 guifi.net users. The experimental results and the experience from the action research confirm the feasibility and potential sustainability of the community cloud as an open commons.
Conference Paper
Recent trends show that deploying low cost devices with lightweight virtualisation services is an attractive alternative for supporting the computational requirements at the network edge. Examples include inherently supporting the computational needs for local applications like smart homes and applications with stringent Quality of Service (QoS) requirements which are naturally hard to satisfy by traditional cloud infrastructures or supporting multi-access edge computing requirements of network in the box type solutions. The implementation of such platform demands precise knowledge of several key system parameters, including the load that a service can tolerate and the number of service instances that a device can host. In this paper, we introduce PiCasso, a platform for lightweight service orchestration at the edges, and discuss the benchmarking results aimed at identifying the critical parameters that PiCasso needs to take into consideration.