How future buildings could redefine distributed computing
Invited paper
Yanik Ngoko, Nicolas Sainthérant
Qarnot Computing
Montrouge, France
yanik.ngoko@qarnot-computing.com

Christophe Cérin
University of Paris 13
Paris, France
christophe.cerin@lipn.univ-paris13.fr

Denis Trystram
University of Grenoble Alpes
Grenoble, France
trystram@imag.fr
Abstract—One of the most important challenges for the Internet of Things (IoT) is the implementation of edge computing platforms. In order to improve the response time, the confidentiality, or the energy consumption of IoT applications, part of the IoT services must be operated on servers deployed close to connected devices. That said, much remains to be done for the practical realization of this vision. What are edge servers? Where will they be deployed? These questions are poorly addressed in the literature. In this paper, we propose to build an edge computing framework around new datacenter models in which computing servers are seamlessly integrated into buildings. The datacenter models we consider have been developed around the concept of the data furnace, which has been implemented in European cities for district heating. Our paper introduces a new processing model in which edge and distributed cloud computing are operated with the same data furnace servers. We then discuss the challenges in using this model for edge computing. Our discussion covers urban integration, service computing frameworks and performance issues. Finally, we end with an analysis of the impact of our proposition on the future of cloud computing.

Keywords: edge computing; data furnace; distributed cloud computing; system convergence; district heating.
I. INTRODUCTION

In order to improve the response time, the privacy, the autonomy, or the energy consumption of IoT applications, part of the IoT services must be operated on servers located near connected devices [1]. This is known as edge computing, and implementing such platforms is one of the most important challenges for the Internet of Things (IoT). While there is a large consensus on the need for edge computing, there are few propositions of concrete physical architectures for it. What is an edge server? Where will such servers be deployed? How will they be connected? On these questions, the literature is sparse, but not nonexistent.
In the pioneering work of Bonomi et al. on Fog computing [2], the authors propose a heterogeneous vision in which Cisco routers, connected vehicles or smart grids could serve as Fog nodes. This proposition is interesting, but its feasibility is debatable. For instance, while a connected vehicle could nowadays embed huge computing power, it is not certain that its firmware was built to allow the provisioning of this computing power to external applications; this, however, is mandatory for serving as an edge computing node.
Personal computers are also a good edge server opportunity. Past works on desktop grids [3] and volunteer clouds [4] have already demonstrated that an urban network based on desktop computers might be used for distributed cloud computing (DCC) services. However, it is important to notice that the experimental validation of desktop grid architectures has often been done on opportunistic workloads, in which computations are only deployed on personal computers during idle periods [5]. Such workloads do not capture the requirements of the real-time applications that one intends to improve with edge computing. In particular, we believe that the execution of edge computing workloads on personal computers will introduce new discomfort problems for end users, such as unexpected heat, noise, or not being able to fully use their computing power as they wish.
A third option is to consider new models, like the heating servers that emerged around the data furnace model of computing [6] (data furnace servers), in particular the digital heaters promoted by Qarnot computing¹. Deployed in homes, offices and public places, the Qarnot digital heaters, also called Q.rads (see Figure 1), can be considered as classical servers in which the cooling system is replaced by a heat diffusion system. Connected to the Internet through a wired connection and totally silent, each of these servers embeds 3 or 4 microprocessors and integrates a service computing stack that allows external applications to deploy containers or virtual machines on it.
The objective of this work is to convince the reader that data furnace servers (DF servers) are certainly one of the most interesting directions for the implementation of edge computing. For this purpose, we introduce the Data Furnace in three flows (DF3): a new processing model in which DF servers are used for heating, local (edge) and remote (cloud) computing. We analyze the DF3 model against three of the main challenges that, in our viewpoint, will decide the future of edge servers.
¹ https://qarnot.com

Figure 1: The Qarnot digital heater. (a) External view; (b) Some sensors.

The first challenge is urban and building integration. Edge servers should be seamlessly integrated in modern cities. This is important to improve the response time and the energy consumed by IoT applications. The problem, however,
is that computers produce heat and noise, and need minimal logistics (cables, power outlets, space, etc.). How could we then improve the quality of service of IoT applications without deteriorating the quality of life in the cities where edge servers are deployed? The second challenge is the need for a computing model that corresponds to the expectations of edge computing. For instance, real-time applications should be handled in edge computing. The third challenge is performance in terms of running time, storage, privacy, etc. For instance, transportation services in smart cities should be delivered in real (or near real) time with a guarantee on the confidentiality of user data.
Our paper shows that in the DF3 model, each of these challenges can be efficiently addressed. We also propose an analysis of the impact of DF3 on the future of cloud computing. The remainder of this paper is organized as follows. In Section II, we present the data furnace model of computing and motivate the utilization of DF servers for edge computing. We also present the DF3 model. Section III discusses the challenges in the utilization of data furnaces for edge computing. The discussion covers urban integration (Section III-A), the computing model (Section III-B) and performance (Section III-C). In Section IV, we propose an analysis of the impact of our vision on the future of cloud computing. Section V is devoted to other related works, and we conclude in Section VI.
II. DATA FURNACE
A. The model
Data furnace is a DCC model whose core idea is to deploy computing servers in homes and offices, where they serve as a heat source. Data furnace promotes a new paradigm (the compute-and-heat paradigm [7]) in which the goal of computers is not only to process data but also to serve as the sub-stations of a district heating service. The data furnace is not just an idea; it has been successfully implemented in cities.
The pioneering implementation of this concept was proposed in 2010, when Qarnot computing invented the digital heater, a computing server that also serves as a space heater. Qarnot also developed a distributed cloud computing middleware that operates on top of digital heaters. Nowadays, this platform is used by major banks and financial services in France, but also by 3D rendering studios and researchers all over the world. Following the Qarnot initiative, other companies have proposed data furnace services, including Nerdalize in the Netherlands, Stimergy in France and CloudandHeat in Germany.
Data furnace obviously promotes a more energy-efficient approach to cloud computing. In comparison with traditional datacenters, data furnace avoids the energy costs induced by cooling. The resulting energy gain could be huge. For instance, CloudandHeat claims a PUE (Power Usage Effectiveness) value of 1.026 in some of its datacenters, which is better than what Google obtains using deep learning techniques. There is also an obvious economic interest in data furnace: the model makes it possible to build a datacenter by reusing existing infrastructures (buildings, networks, etc.). For the reader interested in a deeper economic analysis, we recommend the paper of Liu et al. [6].
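To make the metric concrete, the minimal sketch below computes PUE using its standard definition, total facility energy divided by IT equipment energy; the sample figures are illustrative, not measurements from the cited operators.

```python
# A back-of-the-envelope illustration of PUE, assuming the standard
# definition: total facility energy divided by IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness of a facility over some period."""
    return total_facility_kwh / it_equipment_kwh

# With almost no cooling overhead, a data furnace deployment stays
# close to the ideal value of 1.0 (figures below are illustrative).
print(pue(1026.0, 1000.0))  # 1.026, the value claimed by CloudandHeat
print(pue(1580.0, 1000.0))  # ~1.58, a typical air-cooled datacenter
```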
Compared to traditional cloud computing approaches, data furnace has some weaknesses. In particular, the model might not be suitable for all types of computing applications. In their foundational paper, Liu et al. [6] recommend using data furnace for three types of applications. The first class comprises seasonal and opportunistic applications like those found in the BOINC middleware [8]. The second class consists of low-bandwidth neighborhood applications, and the third concerns eco-friendly applications. In this classification, low-bandwidth neighborhood applications include Internet television services and location-based services such as map serving, traffic estimation, local navigation, etc. As one can notice, these latter applications are representative of the scope of applications targeted by edge computing. To conclude on data furnace, we present in the next section a representative subset of the computing servers developed for data furnace.
B. Data furnace servers
Data furnace is revolutionizing the district heating service with new models of servers used for heating. We present below the two most widely used classes of servers, namely digital heaters and digital boilers.
1) Digital heaters: The concept of the digital heater was introduced by Qarnot computing with the Q.rad (see Figure 1). Each Q.rad includes 3-4 CPUs, interconnected through an Ethernet connection. Each Q.rad consumes 500 W at 110-230 V. Q.rads also include several sensors, interfaces and actuators for humidity, temperature, noise, wireless charging, light, etc. From a processing viewpoint, the Q.rads support a software stack that interacts with the Qarnot middleware (over an optic fiber connection) for DCC. The stack can run computations embedded in containers or virtual machines. In the homes and offices where they are deployed, the digital heaters are cooled by the ambient temperature (free cooling). The residents can also control the internal temperature. Depending on the target temperature they set, the internal software stack sends these heat requirements to the Qarnot middleware, and the load and frequency of the CPUs of the Q.rad are adjusted accordingly.
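As an illustration of this feedback loop, the sketch below maps the gap between the target and ambient temperatures to a CPU load request. The proportional rule and all names are our assumptions, not Qarnot's actual protocol.

```python
# A hypothetical sketch of the feedback described above: the heater
# turns the gap between target and ambient temperature into a CPU
# load request sent to the middleware. The proportional rule and
# all names are illustrative assumptions.
def requested_cpu_load(target_c: float, ambient_c: float,
                       gain: float = 0.25) -> float:
    """Map the temperature gap (Celsius) to a CPU load in [0, 1]."""
    gap = target_c - ambient_c
    return max(0.0, min(1.0, gain * gap))

print(requested_cpu_load(target_c=20.0, ambient_c=17.0))  # 0.75: heat strongly
print(requested_cpu_load(target_c=20.0, ambient_c=21.0))  # 0.0: CPUs stay idle
```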
The Nerdalize e-radiator² is another type of digital heater. Unlike Q.rads, these heaters consume 1000 W and include a double pipeline for heating: during the winter, the processors' heat is redirected into homes, while in the summer, the heat is expelled outside. From an installation viewpoint, this function poses a constraint: the need to make a hole in the wall.
Finally, digital heaters are receiving growing interest in the community of coin miners. Comino³ and the Qarnot crypto-heater⁴ are special servers built to serve both as a space heater and as a cryptocurrency miner. Each Qarnot crypto-heater consumes 650 W and integrates 2 GPUs.
2) Digital boilers: The concept of the digital boiler takes the idea of the digital heater but applies it to the heating of water or oil. Thus, a digital boiler can be seen as a box or a rack that integrates several computing servers and whose heat is used to produce the hot water or oil required by the heating grid of the building in which it is deployed. In comparison with digital heaters, digital boilers integrate a larger and better connected computing power. For instance, the boiler produced by Asperitas⁵ (see Figure 2) integrates 200 CPUs connected by a 10 Gbps Ethernet connection, for an energy consumption of 20 kW. For some buildings, 20 kW might not be necessary for heating. In these cases, several alternatives exist. For instance, Stimergy⁶ produces oil-immersed systems of 1 to 4 kW that integrate between 20 and 40 computing servers. These systems are designed to serve as boilers in buildings.

Figure 2: AIC24, a digital boiler by Asperitas.

² https://www.nerdalize.com
³ https://comino.io
⁴ https://www.qarnot.com/crypto-heater qc1/
⁵ https://asperitas.com
⁶ https://stimergy.com/
To conclude this section, we would like to emphasize that other types of DF servers will certainly be introduced in the future. For instance, there are already several initiatives to create micro-datacenters in cities that deliver heat to a local heating grid through heat pumps. An obvious consequence of this trend is that future cities will probably embed more computing servers than today. Why not use these servers to build edge computing services? This point is discussed in the next section.
C. Data furnace in three flows
From a request viewpoint, traditional data furnace models are designed to support two flows. The first flow is that of heating requests. The purpose of these requests is to deliver heat to the environment in which the DF server is deployed. With digital heaters, numerical targets can be defined in such requests; for instance, one can ask a Qarnot heater to set the temperature to 20 degrees Celsius. Heating requests can be collaborative or individual. The former case corresponds to the situation where we want to set the mean temperature of the rooms of an apartment to a certain value, while in the latter case, the request only concerns a specific server.
The second flow of requests is that of Internet computing requests. Sent by Internet users, these requests ask to run a computation on DF servers⁷. Most data furnace companies use a cloud computing model to service these requests. Here, the main challenge for the cloud middleware is to make sure that the heat produced by running Internet computing requests was actually requested by those who host the corresponding DF servers. This problem is particularly challenging because it is based on a supply and demand model in which the arrival laws of Internet and heating requests do not necessarily depend on the same parameters. In particular, seasonality clearly affects the law of heating requests, while business opportunities impact the second law.

⁷ Storage is not interesting because it does not produce much heat. This is also an argument against the name data furnace.
Our proposition is to introduce a third flow: that of local computing requests. Sent from the local network in which the DF server is located, these requests do not necessarily use the Internet (an IoT low power network is an option). The goal is to deliver near-real-time computing services to users at the edge of a DF server. There are two sorts of local computing requests: direct and indirect. Direct requests are sent directly to a DF server; in this case, the edge user has a direct connection to the server. As DF servers are also used for Internet requests, direct requests can raise several security issues. For their implementation, it is important to formulate a good resource sharing and network segmentation model. In the case of indirect requests, we assume local computing environments where DF servers are coordinated by a master node. The request is sent to the master node, which schedules its processing on the servers, as sketched below. Indirect requests might be preferable for security. However, they imply paying an additional latency cost in the processing of requests.
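As a rough illustration of indirect requests, the sketch below assumes a master node that keeps a load estimate per DF server and dispatches each request to the least loaded one; all names and the dispatching rule are hypothetical.

```python
# A minimal sketch of indirect local requests: the master node keeps
# an estimated load per DF server and dispatches each new request to
# the least loaded one. All names are hypothetical.
import heapq

class MasterNode:
    def __init__(self, server_ids):
        # Min-heap of (estimated_load, server_id) pairs.
        self._heap = [(0.0, sid) for sid in server_ids]
        heapq.heapify(self._heap)

    def dispatch(self, request_cost: float) -> str:
        """Return the DF server chosen to process the request."""
        load, sid = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + request_cost, sid))
        return sid

master = MasterNode(["qrad-1", "qrad-2", "qrad-3"])
for cost in (1.0, 0.5, 2.0):
    print(master.dispatch(cost))  # qrad-1, qrad-2, qrad-3
```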
Figure 3 summarizes the computing model we envision. The abstract architecture we propose here will be revisited in Section III. With DF3, we propose to operate distributed cloud and edge computing on the same platform. We also suggest having a single middleware for district heating, edge and DCC. To the best of our knowledge, the literature on distributed middleware has not yet addressed such a proposition. In the remainder of this paper, we discuss the feasibility of this proposal, emphasizing the main challenges to overcome.
Figure 3: Illustration of data furnace in three flows. DF servers, coordinated by a master node, receive heating requests, Internet requests, and direct and indirect local requests.
III. CHALLENGES IN USING DATA FURNACE SERVERS FOR EDGE COMPUTING
Data furnace is no longer just a concept. In 2016, the Qarnot rendering platform⁸ (based on digital heaters) had 1100 users who rendered 600,000 images for 11,000,000 hours of computation. Digital heaters and boilers are deployed all around the world. However, in comparison to datacenters, the scale of these deployments remains rather small. In France, one of the leading countries for data furnace, the DF server park does not exceed 30,000 cores, whereas Amazon AWS uses more than 2 million servers. For delivering edge services to a large set of users, the DF3 model assumes a large scale deployment of data furnace servers. This section discusses the associated challenges.

⁸ https://render.qarnot.com
A. Urban integration
An urban heat island is an urban area whose temperature is significantly higher than that of the surrounding areas. Several studies suggest that human activity could be the cause of such islands [9]. At first sight, it can be expected that a broad deployment of DF servers could create urban heat islands or increase their intensity. For instance, some DF servers (the Nerdalize e-radiator) are built to continue heat production (outside) during the summer. This corresponds to the functioning of air conditioners, which contribute to urban heat islands [10]. Fortunately, it is possible to define heat delivery in data furnace as an on-demand service, where heat is only produced according to comfort constraints (target temperature, target humidity, etc.) set by the hosts of the DF servers. Such an approach minimizes waste heat. Qarnot, for example, proposes a hybrid infrastructure for data furnace that combines its digital heaters with datacenter nodes. The embedded motherboards of a heater are turned off when no heat is requested or when the inertia of the heater produces enough heat. In these special settings, if there are still many Internet computing requests to process, these requests are processed on classical datacenter nodes. DF3 adds supplementary challenges in heat production; in particular, edge requests are sometimes issued by real-time or near-real-time applications. We believe, however, that the main challenge remains the calibration of a decision system that states what to do locally and what to do remotely (on a remote DF server or in a datacenter).
Noise pollution and energy consumption are other points to consider in large scale urban integration. Regarding noise, let us notice that most computers emit noise; nonetheless, the loudest component is in general the cooling system, which is not necessary in DF servers. Regarding energy consumption, public policies in today's cities encourage the reduction of energy consumption. Does the large scale deployment of DF servers go against these policies? Not necessarily. The first reason is that electric heaters are already used in cities; a solution is to replace such space heaters with DF servers. Secondly, DF servers would not necessarily consume more energy than classical electric heaters: the Qarnot heater consumes 500 W and the Nerdalize heater 1000 W, which is quite reasonable, if not low, for electric heating.
Finally, the vision we propose can only be realized if people accept being heated with DF servers. In our viewpoint, there are three classes of potential customers to distinguish: those who already have an electric heater, those who use a non-electric heating system, and those who do not have a heating system. For the first class of customers, we believe that through a building renovation process, they can easily accept to acquire a DF server. For the second class, it will be more difficult; the only interesting case here is when these customers are heated by an urban heat network to which we can connect a sub-station of DF servers. Finally, for those who do not yet have a system, it is important to notice that, as shown in [7], DF servers can reach the same level of comfort as other heating systems (see Figure 4 for the average temperature of rooms heated by Qarnot heaters in winter).
Figure 4: Average temperature (°C) by month, from November (11) to May (5) 2016, on Qarnot computing sites.
To conclude this section, there are several ways to achieve a large scale urban integration of DF servers. However, we believe that for a successful integration, one should deploy the servers as part of a smart grid. An obvious task of the smart grid manager is to ensure that the processing of computing requests produces the heat requested by customers. The manager must also negotiate with external systems (e.g., energy operators, edge computing services, smart city services) to calibrate its energy consumption and service delivery to the demand. We end here our discussion on urban integration; next, we discuss the implementation of edge services with DF3.
B. Computing model
One of the objectives of edge computing is to propose a platform for real-time applications based on cloud services. This is achievable with DF servers. For instance, in [11], it is shown that near-real-time applications for audio alarm detection (alarm sounds, fall detection, etc.) can be operated on digital heaters. Technically, we can envision using a cloud or DCC software stack to build edge platforms and services, but we should find the right system organization. At this point, there are two classes of architectures we propose to consider. In both classes, each DF server can run either an edge gateway system, a DCC gateway system or a worker system. The gateways receive external computing requests and assign them to workers (we do not consider direct edge requests). The edge gateway differs from the DCC gateway in the network interface it supports. Indeed, low power networks and communication protocols [12] (Zigbee, Lora, Sigfox, Enocean, etc.) are inevitable in edge computing. In addition, for IoT applications, we must consider the sense-compute-actuate paradigm, which implies frequently collecting data. Both considered architectures imply defining clusters of nodes that state which workers are controlled by which gateways. To decide on the components of clusters, we can either use clustering techniques developed for wireless sensor networks [13] or define clusters as the set of DF servers of a physical building or district, as in the sketch below.
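The sketch below illustrates the second, simpler clustering rule, grouping DF servers by the building in which they are deployed; the inventory format is an illustrative assumption.

```python
# A sketch of the simpler clustering rule mentioned above: group DF
# servers by the building in which they are deployed. The inventory
# format is an illustrative assumption.
from collections import defaultdict

def clusters_by_building(inventory):
    """inventory: iterable of (server_id, building_id) pairs."""
    clusters = defaultdict(list)
    for server_id, building_id in inventory:
        clusters[building_id].append(server_id)
    return dict(clusters)

print(clusters_by_building([("qrad-1", "B1"), ("qrad-2", "B1"),
                            ("boiler-1", "B2")]))
# {'B1': ['qrad-1', 'qrad-2'], 'B2': ['boiler-1']}
```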
In the first class of DF3 architecture, workers can service either edge or DCC requests. This solution poses some problems, in particular regarding the management of context switching. Ideally, the environment deployed on nodes (firmware, base system, containers, etc.) must cover the needs of both edge and DCC requests. Otherwise, we should be able to reboot worker nodes; however, rebooting could have an impact on the processing of edge requests. Another problem is the level of isolation between the edge and DCC systems. For performance in DCC applications, it is better to define a single local network between workers; this will speed up communication in parallel applications. However, to guarantee the privacy of edge data, it is preferable to have two local networks, one for edge and one for DCC. A third problem is the management of request peaks. In the case where there are too many DCC requests, it might be impossible to schedule the processing of an edge request (the cluster is full). In this case, there are two possible solutions. The first one is to use preemption [14] to reschedule some DCC requests. However, as the number of workers connected to a gateway is limited, in the case where there are too many requests, it could be impossible to schedule some of them. The second solution is to use offloading [15]. Offloading can be of two kinds: vertical and horizontal. Vertical offloading is done towards datacenter nodes; horizontal offloading is done towards another cluster of DF servers. The latter case implies defining coordination mechanisms between edge gateways; it also raises questions about the fairness of the cooperation between clusters [16]. Finally, let us observe that we can also decide not to scale but to delay the processing; however, the interest of this choice depends on future resource availability. In all cases, we recommend modeling the computational problem as a decision problem that can be solved by an automated system.
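The sketch below gives one possible shape for such a decision procedure, covering the options discussed above (local execution, preemption, horizontal and vertical offloading, delay); the inputs and thresholds are illustrative assumptions.

```python
# A sketch of the decision problem named above: when an edge request
# arrives and the cluster is full, choose between preemption [14],
# horizontal or vertical offloading [15], and delaying. The inputs
# and thresholds are illustrative assumptions.
def handle_edge_request(free_workers: int, preemptable_dcc: int,
                        neighbor_free: int, deadline_s: float) -> str:
    if free_workers > 0:
        return "run locally"
    if preemptable_dcc > 0:
        return "preempt a DCC request"      # reschedule DCC work [14]
    if neighbor_free > 0:
        return "horizontal offloading"      # to a neighbor DF cluster
    if deadline_s > 1.0:
        return "vertical offloading"        # to datacenter nodes
    return "delay and retry"                # bet on future availability

print(handle_edge_request(0, 0, 2, 0.5))    # horizontal offloading
```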
Figure 5: A component architecture for data furnace in three flows. Edge and DCC gateways control workers within a cluster overseen by a regulation system; vertical offloading targets a datacenter and horizontal offloading targets another edge gateway, over low power networks, the Internet and IoT communication protocols.

In the second class of DF3 architecture, we still have edge and DCC gateways, but a dedicated number of workers
within the set of all workers. With a dedicated number of workers, we can guarantee a minimal quality of service, which is particularly interesting if there are few requests. The solution is also interesting in the case where edge requests consist of short-term tasks. The management of context switching in such an architecture could also be easier. Finally, we can envision putting the dedicated edge servers in a (virtual) private network to ensure that isolation from the DCC workers is guaranteed. This second class of architecture also has weaknesses: How do we decide on the number of workers? How do we manage peaks of requests? These are issues for which there is no ideal solution.
The last important aspect we propose to consider is the management of the heat demand. As already said, heat in data furnace systems is produced by running computations. If we only consider the arrival law of requests, we could easily fall into a situation where the heat produced does not correspond to the expectations. To make sure that the expectations are met, we propose to add a heat regulation system to each DF server. The heat regulator implements a DVFS-based technique (voltage and frequency regulation) [17] to guarantee that the energy consumed corresponds to the heat demand.
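The sketch below illustrates one possible regulator, assuming DVFS exposes a discrete set of frequency levels with known power draws; the levels and the demand-to-level mapping are our assumptions, not a measured model.

```python
# A sketch of the DVFS-based heat regulator, assuming each DF server
# exposes a discrete set of frequency levels with known power draws.
# The levels and the demand-to-level mapping are our assumptions.
FREQ_LEVELS = [(0.8, 120.0), (1.6, 250.0), (2.4, 400.0), (3.2, 500.0)]
# (frequency in GHz, total power draw in watts)

def pick_frequency(heat_demand_w: float) -> float:
    """Return the lowest frequency whose dissipation meets the demand."""
    for freq_ghz, power_w in FREQ_LEVELS:
        if power_w >= heat_demand_w:
            return freq_ghz
    return FREQ_LEVELS[-1][0]  # saturate at the highest level

print(pick_frequency(300.0))  # 2.4 (GHz): ~400 W dissipated as heat
print(pick_frequency(0.0))    # 0.8 (GHz): the floor of this sketch
```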
C. Performance
There is no doubt that with DF servers, we can build systems with near-real-time response times. But at what scale, and with what security level? This is trickier. Ideally, in a territory where edge computing is delivered by DF servers, we should have a uniform distribution of DF servers across the territory. Unfortunately, it is not certain that the demand for DF servers for heating will follow this law. The physical security of DF servers can also be a concern. Nonetheless, let us observe that DF companies have been providing cloud computing services to banks and 3D animation studios for years without any problem. Finally, the availability and stability of DF servers could also be a problem. In particular, the computing power of DF servers depends on the heat demand. With digital boilers, the problem might not be important because we can continue to produce hot water independently of heating requests. However, this will generate waste heat. Some studies reveal that classical boilers already contribute to urban heat islands; with a boiler that always generates heat, the intensity of the rejected waste heat will be greater.
A solution to manage the variability in heat demand is to build a predictive computing platform, with a model that predicts the heat demand and the thermosensitivity of houses equipped with DF servers. Several studies reveal that thermosensitivity is in general correlated with the external weather (a simple predictor along these lines is sketched after this paragraph). To conclude on stability, let us add that economic incentives could play a role. For instance, in the Qarnot computing model, the hosts of DF servers do not pay for electricity. Consequently, during the winter, these hosts generally keep the same target temperature.
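The sketch below shows a deliberately simple predictor of this kind, assuming heat demand is roughly linear in the outdoor temperature below a comfort reference; all coefficients are illustrative.

```python
# A minimal sketch of the predictor, assuming heat demand is roughly
# linear in the outdoor temperature below a comfort reference (the
# thermosensitivity correlation). All coefficients are illustrative.
def predict_heat_demand_kw(outdoor_c: float,
                           thermosensitivity_kw_per_c: float = 0.12,
                           reference_c: float = 18.0) -> float:
    """Demand grows as the outdoor temperature drops below reference."""
    return max(0.0, thermosensitivity_kw_per_c * (reference_c - outdoor_c))

print(predict_heat_demand_kw(5.0))   # ~1.56 kW on a cold day
print(predict_heat_demand_kw(22.0))  # 0.0 kW: no heating needed
```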
We end here our analysis of the main challenges in the large scale deployment of DF servers. It is important to notice that there are several points we did not cover. For instance, the cooling approach of DF servers might accelerate processor aging and, consequently, the need to replace the processors inside DF servers. The large scale deployment of DF servers will also raise maintenance challenges; this will even be the case in buildings with an existing electric heating infrastructure, because we must additionally consider the management of the network and of local central servers. The previous feasibility analysis is incomplete, but it has provided several arguments for the realization of DF3. In the next section, we propose an analysis of the impact of our proposition on the future of distributed computing.
IV. A FUTURE FOR DISTRIBUTED COMPUTING
Over the last few years, clouds have popularized a service oriented view of computing whose ultimate realization is the serverless computing paradigm. The service model carried by clouds has certainly disrupted the field of distributed computing, but the model has some limitations. In particular, it hides the fact that whatever the distributed system, the quality of the delivered services depends on the resources. Performance in cloud gaming is not the same depending on the location of servers and players; a virtual machine does not have the same performance on every physical machine. In addition, this vision confines the role of resources to computing, storing and communicating, whereas the emergence of cyberphysical computing is introducing new roles like sensing or heating. In this paper, we formulated a new distributed computing model from the concept of resources.

We are convinced of the need to reinforce the focus on resources. Computers are machines; their role and function are always to be reinvented. On this point, we recommend reading Mark Weiser's paper on the computer for the 21st century [18]. From a middleware viewpoint, the resource oriented computing (ROC) vision we promote favors decentralization. This means that in the design of a middleware for DF servers, we start by clarifying the software stack of the DF servers and then build an engine to compose interactions between the servers. Such an approach can easily guarantee that the basic services delivered by the resources (heat, for instance) will continue to be delivered even if there are problems at the central point. Historically, RESTful APIs were introduced for defining the uniform resource interfaces that support this ROC view. The goal was to define a generic interface of functions for resources (and abstract resources) in order to transform the design of distributed middlewares into the problem of automatically composing resource functions [19]. For the reader interested in resource oriented computing, we recommend the paper of Peter Rodgers et al. [20].
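To make the ROC view concrete, the sketch below exposes the two functions of a DF server (heat and compute) behind a uniform RESTful interface; the routes and payloads are illustrative assumptions, not Qarnot's actual API.

```python
# A sketch of a uniform RESTful interface on a DF server exposing
# both of its functions, heat and compute. Routes and payloads are
# illustrative assumptions, not an actual vendor API.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class DFServerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each resource (heating state, computing state) gets a URI.
        if self.path == "/heat":
            body = {"target_c": 20.0, "ambient_c": 18.5}
        elif self.path == "/compute":
            body = {"cores_free": 2, "containers": ["render-42"]}
        else:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DFServerHandler).serve_forever()
```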
From an economic viewpoint, data furnace introduces another dimension to classical cloud pricing models: seasonality. One can argue that seasonality already matters in classical pricing because the cost of producing electricity varies. But with data furnace, the variability also affects the computing capacity: in winter, the heat demand increases the available computing power, which is then reduced in the summer. We are convinced that for SLA designers, data furnace is a field of research that can still lead to very innovative proposals.
The proposition of this work encourages the formulation of modern resource sharing techniques. These should not be restricted to virtual machines: virtual private networks, virtual private clouds, techniques to build self-organized wireless sensors, hybrid clouds or systems of systems are promoted by our work. Most of these techniques have already been experimented with in traditional clouds; we are thinking, for instance, of the possibility of creating a virtual private cloud within a cloud. However, DF3 introduces new situations, like resource sharing between a DCC and an edge computing system. By nature, these two classes of systems can differ largely in their hardware and SLA requirements.

Finally, data furnace could disrupt blockchain [21] and crowd computing [22]. In the former domain, DF servers constitute a significant computing power. In the latter domain, users that host DF servers would have additional computing power for contributing to the resolution of crowd computing requests.
One question remains at this point: when do we think that the vision we have described will come true? Our answer is: in the next decade. We participate in initiatives for implementing this vision. One of them is the Greco project⁹, funded by the French national agency for research (ANR). The general objective of the project is to build a cloud middleware that is able to use as computing resources not only classical computing servers but also cyberphysical resources and IoT devices. The Greco project only focuses on two aspects of such a middleware: resource scheduling and storage management. Obviously, the objectives of the project are not enough for a full implementation of the DF3 model, but this is already an important starting point. We are also involved in a European project (Catalyst¹⁰) on smart energy grids; this project has received funding from the EU H2020 innovation action programme under grant agreement No 768739. In this project, we work on the formulation and implementation of the global framework in which the DF3 model will be implemented.

⁹ https://anr-greco.net/
¹⁰ http://project-catalyst.eu/
V. OTHER RELATED WORKS
Distributed cloud computing is an area of growing interest. Propelled by desktop grids and the BOINC project [8], the idea evolved and led to data furnace and volunteer clouds [4]. Some experts consider that DCC is enough to run edge applications. We disagree, since workloads for distributed cloud computing do not have the proximity constraints of edge workloads. There exist alternatives to DF servers for edge computing. For instance, Schneider Electric promotes micro-datacenters [23] that can be distributed in cities. Classical cluster infrastructures, clusters of Raspberry Pis or private cloud infrastructures are also serious options for edge computing. Finally, the infrastructure deployed for content delivery networks (CDN) could also be used. All these architectures are very good candidates; some of them (CDNs in particular) have already been deployed at large scale in cities. However, let us observe that DF servers are more energy efficient. To conclude this section, we would like to emphasize that there is an increasing computing power in connected devices, and that JavaScript frameworks are more and more used to develop server applications, in the perspective of a client-server model where the connected device runs a fat client (see the Google Chromebook project¹¹). For these architectures, there is no need for other edge servers.

¹¹ https://www.google.com/chromebook/
VI. CONCLUSION
Clouds are at the heart of a convergence of computing systems towards machines located in datacenters. Big Data, IoT and Machine Learning are some of the systems that are massively operated in datacenters. The proposition of this paper is to drive another convergence, towards DF servers. We showed that we can envision servicing distributed cloud computing, edge computing and district heating from DF servers. We also proposed a preliminary feasibility analysis of the realization of such a convergence. How far can we go in such a convergence? There are limits to consider. One of them is the market size of electric heating. Electric heating is not the dominant system in Europe, and studies on thermosensitivity suggest reducing its market share. One can argue that a DF server is not just an electric heater, but this might not be enough. One can also argue that the local production of renewable energies is opening interesting perspectives for autonomous buildings equipped with electric heaters; but such systems are not yet widespread. Finally, one can argue that in France alone, in 2010, there were more than 9 million households using electric heaters. Even if this is more than the 2 million servers used by Amazon, let us observe that while the need for edge computing is growing, there is also growing opposition to electric heating.

Another limit is on the type of applications suitable for data furnace. Tightly coupled applications will have poor network performance on data furnace systems. Compute intensive jobs with a huge running time are also not appropriate: processors need to be cooled, and for jobs with a huge running time, the free cooling system of DF servers might not be enough. Finally, storage services are not interesting because they do not produce heat.
REFERENCES
[1] R. Buyya, S. N. Srirama, G. Casale, R. N. Calheiros, Y. Simmhan, B. Varghese, E. Gelenbe, B. Javadi, L. M. Vaquero, M. A. S. Netto, A. N. Toosi, M. A. Rodriguez, I. M. Llorente, S. D. C. di Vimercati, P. Samarati, D. S. Milojicic, C. A. Varela, R. Bahsoon, M. D. de Assunção, O. F. Rana, W. Zhou, H. Jin, W. Gentzsch, A. F. Zomaya, and H. Shen, "A manifesto for future generation cloud computing: Research directions for the next decade," CoRR, vol. abs/1711.09123, 2017. [Online]. Available: http://arxiv.org/abs/1711.09123
[2] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog com-
puting and its role in the internet of things,” in Proceedings
of the First Edition of the MCC Workshop on Mobile Cloud
Computing, ser. MCC ’12. New York, NY, USA: ACM,
2012, pp. 13–16.
[3] C. Cérin and G. Fedak, Desktop Grid Computing, 1st ed. Chapman & Hall/CRC, 2012.
[4] Y. Ngoko, C. Cérin, P. Gianessi, and C. Jiang, "Energy-aware service provisioning in volunteers clouds," IJBDI, vol. 2, no. 4, pp. 262–284, 2015. [Online]. Available: https://doi.org/10.1504/IJBDI.2015.072171
[5] D. P. Anderson, J. Cobb, E. Korpela, M. Lebofsky, and
D. Werthimer, “SETI@home: an experiment in public-
resource computing,” Communications of the ACM, vol. 45,
no. 11, pp. 56–61, Nov. 2002.
[6] J. Liu, M. Goraczko, S. James, C. Belady, J. Lu, and
K. Whitehouse, “The data furnace: Heating up with cloud
computing.” USENIX, June 2011.
[7] Y. Ngoko, "Heating as a cloud-service, a position paper (industrial presentation)," in Proceedings of Euro-Par 2016: International Conference on Parallel and Distributed Computing, August 2016, pp. 389–401.
[8] D. P. Anderson, “Boinc: A system for public-resource com-
puting and storage,” in Proceedings of the 5th IEEE/ACM
International Workshop on Grid Computing, ser. GRID ’04.
Washington, DC, USA: IEEE Computer Society, 2004, pp.
4–10.
[9] B. Zhou, D. Rybski, and J. P. Kropp, "On the statistics of urban heat island intensity," Geophysical Research Letters, vol. 40, no. 20, pp. 5486–5491, 2013.
[10] B. Tremeac, P. Bousquet, C. de Munck, G. Pigeon, V. Masson, C. Marchadier, M. Merchat, P. Poeuf, and F. Meunier, "Influence of air conditioning management on heat island in Paris air street temperatures," Applied Energy, vol. 95, no. C, pp. 102–110, 2012.
[11] A. Durand, Y. Ngoko, and C. Cérin, "Distributed and in-situ machine learning for smart-homes and buildings: Application to alarm sounds detection," in 2017 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPS Workshops 2017, Orlando / Buena Vista, FL, USA, May 29 - June 2, 2017, pp. 429–432.
[12] P. Barker and M. Hammoudeh, “A survey on low power net-
work protocols for the internet of things and wireless sensor
networks,” in Proceedings of the International Conference on
Future Networks and Distributed Systems, ser. ICFNDS ’17.
New York, NY, USA: ACM, 2017, pp. 44:1–44:8.
[13] A. A. Abbasi and M. Younis, "A survey on clustering algorithms for wireless sensor networks," Computer Communications, vol. 30, no. 14, pp. 2826–2841, 2007 (special issue on Network Coverage and Routing Schemes for Wireless Sensor Networks).
[14] P. Dutot, G. Mounié, and D. Trystram, "Scheduling parallel tasks: approximation algorithms," in Handbook of Scheduling: Algorithms, Models, and Performance Analysis, 2004.
[15] P. Sermpezis and T. Spyropoulos, “Offloading on the edge:
Performance and cost analysis of local data storage and
offloading in hetnets,” in 2017 13th Annual Conference on
Wireless On-demand Network Systems and Services (WONS),
Feb 2017, pp. 49–56.
[16] F. Pascual, K. Rzadca, and D. Trystram, "Cooperation in multi-organization scheduling," Concurrency and Computation: Practice and Experience, vol. 21, no. 7, pp. 905–921, 2009.
[17] E. Le Sueur and G. Heiser, “Dynamic voltage and frequency
scaling: The laws of diminishing returns,” in Proceedings of
the 2010 International Conference on Power Aware Comput-
ing and Systems, ser. HotPower’10. Berkeley, CA, USA:
USENIX Association, 2010, pp. 1–8.
[18] M. Weiser, "The computer for the 21st century," SIGMOBILE Mob. Comput. Commun. Rev., vol. 3, no. 3, pp. 3–11, Jul. 1999.
[19] Y. Ngoko, A. Goldman, and D. S. Milojicic, “Service se-
lection in web service compositions optimizing energy con-
sumption and service response time,” J. Internet Services and
Applications, vol. 4, no. 1, pp. 19:1–19:12, 2013.
[20] P. Rodgers, "Introduction to resource-oriented computing," http://resources.1060research.com/docs/IntroductionToResourceOrientedComputing-1.pdf, online; accessed 26 February 2018.
[21] S. Underwood, "Blockchain beyond bitcoin," Commun. ACM, vol. 59, no. 11, pp. 15–17, Oct. 2016.
[22] D. G. Murray, E. Yoneki, J. Crowcroft, and S. Hand, “The
case for crowd computing,” in Proceedings of the Second
ACM SIGCOMM Workshop on Networking, Systems, and
Applications on Mobile Handhelds, ser. MobiHeld ’10. New
York, NY, USA: ACM, 2010, pp. 39–44.
[23] V. Avelar, "Practical options for deploying small server rooms and micro data centers," Schneider Electric, white paper 174, online; accessed 26 February 2018.