Offline-first strategies in heterogeneous, distributed and virtualized
infrastructures
Henry Cocos
Department of Computer Science and Engineering,
Frankfurt University of Applied Sciences,
Nibelungenplatz 1, D-60318 Frankfurt, Germany
cocos@fb2.fra-uas.de
https://orcid.org/0009-0001-7573-0361
Abstract
This Ph.D. project aims to investigate the outsourcing of cloud services to clients by including heterogeneous, client-side resources and cooperation between the cloud and the client in the execution of services. To this end, an offline-first strategy will be formulated, developed, and investigated as a counter-design to the cloud-first strategy, enabling the use of services without a permanent cloud connection. The goal is to achieve application autonomy while still cooperating with cloud services. Methods of service migration are to be researched to enable the provision of services, and the influence of migration on the quality of services will be investigated. From this, methods for determining the quality of service will provide the end user with information about the service provided. In summary, this should increase the resilience and stability of services along the cloud-to-edge supply chain and enable transparent use for the end user.
1 RESEARCH PROBLEM
The Frankfurt University of Applied Sciences is developing an innovative platform for the creation and use of virtual learning environments within the SKILL project (Strategic Competence Platform - Innovative Learning and Teaching) (Baun et al., 2024). This virtual teaching service delivers virtual resources for university teaching. Lecturers, students, and staff can create virtual environments and integrate them into their teaching.
One of the concepts of the SKILL project involves the use of the service by students. The SKILL concept envisages that users run the virtual environments in the infrastructure provided, which means that the virtual machines are operated on server hardware and thus consume the project's resources.
However, using the services consumes the project's server-side resources, as the end users access the server hardware via a client. Many end users own devices with substantial resources of their own, which creates an opportunity to relieve the core service on the server side. A suitable method for relieving the load on cloud services is to outsource the service, or parts of it, to the client. Utilizing the clients' resources therefore holds the potential to reduce the load on the server side, since the client-side resources, which are usually not needed elsewhere while the server-side service runs, can be used as an extension of the core service.
This concept can relieve the cloud service by using the end user's resources and also reduces network latency: running the service locally benefits the responsiveness when interacting with it. One conceivable scenario is the use of the service over a long-distance data connection with low bandwidth, as is still common in many rural areas today.
For example, according to a survey by the Federal Ministry for Digital and Transport (BMDV), only 68% of households in rural areas have a connection of 100 Mbit/s or better (BMDV, 2021); the figures for businesses are at a similar level. Low network bandwidth is problematic when using an online cloud service. These problems are not limited to the SKILL project: they arise in many scenarios in which a slow data connection impairs the use of a cloud service, and they also affect many companies in rural locations, where missing broadband access over considerable distances leads to low bandwidth when connecting to a cloud service.
An extension of the service using the client could
counteract this problem, but this approach raises
many open questions that need to be answered. For
example, the clients are highly heterogeneous com-
puter environments without uniform resources. The
heterogeneity on the client side is determined, among
other things, by resources such as CPU performance,
memory size, and mass storage. This means that an
end user can use the service with comparatively weak
hardware, such as an older notebook, while another
uses a desktop computer with a powerful CPU.
2 OUTLINE OF OBJECTIVES
One of the Ph.D. project’s objectives is to research the
outsourcing and cooperation of cloud services with
the client, including the client-side resources. The
aim is to investigate which services from the cloud
could be outsourced to the client and when this pro-
cedure makes sense. For this purpose, an offline-first
strategy shall be formulated, which, in contrast to the
currently prevailing cloud-first strategy, makes ser-
vices available in the local network and thus enables a network-independent provision of cloud services.
With this approach, end users should be able to use services in their immediate proximity without being dependent on the cloud. This is intended to give local services greater autonomy and resilience without sacrificing the advantages of a cloud environment. Criteria for the decision to outsource computing-intensive tasks are to be defined; determining such metrics and criteria is a task for which there is not yet a generally accepted method in current research.
Another objective is the investigation of service migration (Rodrigues et al., 2021) for the distribution of resources. This method of distributing applications is concerned with placing services at the level of end devices, with the cooperation between the cloud, end devices, and services, and with the use of the services by the end user. The decisive questions are which technical methods exist for local resource utilization and how these positively affect service use.
Generally, service migration can be achieved ver-
tically (between the cloud and end devices) or hori-
zontally (between end devices). The objectives here
are, among other things, to research the influences on
the service associated with the migration and the tech-
nical options for migrating services.
Another aim of the Ph.D. studies is to research the
resilience of services (Eberz-Eder et al., 2021; Har-
chol et al., 2020) and the impact of service migra-
tion on the quality of service. For example, service
migration can induce latency during use, negatively
impacting the service's quality.

Figure 1: Concept of service migration between the cloud, network nodes, and the client, each providing compute, storage, and network resources and hosting services in containers (CT) or virtual machines (VM).

Also, many factors
for the decision to migrate need to be monitored, as
the client’s hardware resources (e.g., processor, main
memory) differ from those of the server. Which qual-
ity characteristics (e.g., performance) are influenced
by migration and how the migration affects the in-
dividual features must be investigated. An estimate of the migration's effect on service quality could be helpful in this context. The notion of quality used here differs from the classical quality of service, since the project examines the quality of the application's content rather than the quality of the connection in terms of network metrics. These are content-related requirements for the quality of the service (e.g., correctness of the results depending on time constraints). Such quality characteristics, observed at the service's runtime, would support decisions for or against the migration of a service.
These and other criteria are to be researched during
the Ph.D. project.
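As a simple illustration of such a runtime quality characteristic, the following Python sketch checks whether a service's results still meet a content-related time constraint; the probe interface, the deadline, and the thresholds are purely hypothetical placeholders and not part of the SKILL implementation.

import time

# Hypothetical sketch of a content-related quality check: a result only counts
# as usable if it is valid AND arrives within its time constraint.
DEADLINE_S = 0.5         # assumed content-level time constraint per request
MIN_ON_TIME_RATIO = 0.9  # assumed required share of valid, on-time results

def probe(request_service, validate, samples=20):
    """Issue sample requests and return the share of valid, on-time results."""
    on_time = 0
    for _ in range(samples):
        start = time.monotonic()
        result = request_service()           # user-supplied request function
        elapsed = time.monotonic() - start
        if validate(result) and elapsed <= DEADLINE_S:
            on_time += 1
    return on_time / samples

def quality_ok(request_service, validate):
    """True if the service currently meets its content-level quality target."""
    return probe(request_service, validate) >= MIN_ON_TIME_RATIO

Such a metric, evaluated before and after a trial migration, could serve as one input to the decision for or against migrating a service.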
3 STATE OF THE ART
One significant focus lies on virtual machine migra-
tion and its effects on the quality of services (Choud-
hary et al., 2017). Service migration can only occur when the surrounding, isolating virtual machine is also migrated from a source server offering the service to the terminal device consuming it. Blackout times during the migration of services can be reduced by using the live migration features of VMs. However, VMs are heavy-weight, so migrating them over larger WAN links is only worthwhile for legacy services that cannot switch to containerized environments (Winkelhofer, 2019). Container virtualization (also called operating-system-level virtualization) is more suitable for operation on terminal devices since containers have a smaller resource footprint. This isolation mechanism for applications is very promising and widely used; however, it is only suitable for newer applications and micro-service architectures.
Another critical point in this Ph.D. project is the network, since service migration can only occur over a stable and efficient network link. One challenge is emulating a local network for the virtual machine or container applications. One possible solution, widely used in industry and science, is overlay networks as employed in Software-Defined Networking (Wang et al., 2019). By using virtual overlay networks, applications can operate transparently with one another over a WAN link. The control plane and the data plane are separated, which makes the administration of such overlay networks easier. However, establishing overlay
networks over WAN to connect terminal devices to
the servers operating the service is a considerable
challenge that needs to be investigated in this Ph.D.
project.
Most of the current research regarding service mi-
gration and resource sharing stems from the field of
edge computing (Gedeon et al., 2019). This Ph.D. project follows a similar principle but differentiates itself by extending the focus to end devices with more resources (e.g., laptops or client PCs), whereas edge computing concentrates heavily on low-power, resource-constrained terminal devices. Outsourcing services and using client-side resources require methods for distributing and providing the components involved.
Edge computing can be seen as an extension
of cloud computing, bringing services closer to the
data producers and users, accelerating communica-
tion, and reducing latencies (Merkl and Cocos, 2020).
The area of wireless sensor networks benefits from the
edge computing paradigm (Cocos and Merkl, 2019).
Figure 2: Dimensions of resource scaling: horizontal scaling (service replication / scale out, e.g. more Kubernetes Pods), vertical scaling (resource scaling / scale up, e.g. more CPU and RAM), and geographic scaling ("scale away", relocation of service instances onto the end device).

Although this project is investigating methods and
technologies of edge computing, unlike the classic
edge computing use case, this project focuses on the
inclusion of clients. In this approach, the client should
use its resources to extend the cloud service and, at the same time, retain the option of working cooperatively with the cloud service.
4 METHODOLOGY
Figure 1 schematically shows the migration between
the individual layers, whereby the migration between
the cloud/network and the client is vertical, and mi-
gration between devices at the same level (e.g., within
the cloud) is referred to as horizontal migration.
The migration of the virtual resources between the different layers takes place over the wide area network (WAN), which has a significantly lower bandwidth than the local area network (LAN). Therefore, a decision has to be made about the virtual resource's exact location and the timing of the migration.
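To illustrate how such a decision could look, the following Python sketch estimates the transfer time of a pre-copy live migration from the memory size, an assumed dirty-page rate, and the measured WAN bandwidth, and migrates only if the estimate stays below a tolerable threshold; all numbers and the threshold are illustrative assumptions rather than results of this project.

# Hypothetical decision sketch: migrate only if the estimated pre-copy
# transfer time over the WAN stays below a tolerable threshold.

def estimate_transfer_time(mem_bytes, dirty_rate_bps, bandwidth_bps, rounds=3):
    """Rough pre-copy estimate: full memory copy plus re-sent dirty pages."""
    if bandwidth_bps <= dirty_rate_bps:
        return float("inf")   # pages are dirtied faster than they can be sent
    total, to_send = 0.0, float(mem_bytes)
    for _ in range(rounds):
        round_time = to_send / bandwidth_bps
        total += round_time
        to_send = dirty_rate_bps * round_time  # memory dirtied during this round
    return total

def should_migrate(mem_bytes, dirty_rate_bps, wan_bandwidth_bps, max_seconds=60.0):
    return estimate_transfer_time(mem_bytes, dirty_rate_bps, wan_bandwidth_bps) <= max_seconds

# Example with placeholder values: 4 GiB VM, 50 Mbit/s dirty rate, 100 Mbit/s WAN
print(should_migrate(4 * 2**30, 50e6 / 8, 100e6 / 8))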
Using methods from edge and fog computing is beneficial in implementing suitable measures for migration and communication in this scenario. The connection and communication between virtual machines or containers place high demands on the underlying networking technology. Overlay networks like VXLAN are a reasonable choice and shall therefore be employed in realizing the project.
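As an illustration, the following Python sketch creates such a VXLAN tunnel between two hosts with Open vSwitch by invoking its standard command-line tool; the bridge and port names, the VXLAN network identifier, and the remote VTEP address are assumptions chosen for the example, not the project's final configuration.

import subprocess

REMOTE_VTEP_IP = "203.0.113.10"  # placeholder address of the peer VTEP
VNI = "100"                      # placeholder VXLAN network identifier

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create an OVS bridge and attach a VXLAN tunnel port pointing to the peer.
run(["ovs-vsctl", "--may-exist", "add-br", "br-overlay"])
run(["ovs-vsctl", "--may-exist", "add-port", "br-overlay", "vxlan0",
     "--", "set", "interface", "vxlan0", "type=vxlan",
     f"options:remote_ip={REMOTE_VTEP_IP}", f"options:key={VNI}"])
# VMs or containers attached to br-overlay on both hosts then share one
# virtual layer-2 segment across the WAN link.

Run with the mirrored configuration on the second host, this yields the virtual local network that migrated virtual machines or containers can keep using.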
Figure 2 presents the dimensions of scalability of virtual resources, namely vertical and horizontal scaling. These two dimensions are implemented either by adding resources to the virtual resource (so-called scaling up) or by replicating virtual resources (so-called scaling out). A third dimension, not yet investigated in research, is geographic scaling, which involves relocating services and their components closer to the end devices, more precisely onto the end device itself.
This dimension is a focal point of research in this Ph.D. project.

Figure 3: Experimental setup for WAN migration of services: a source server and several end devices (each running Debian 12 with QEMU/KVM as hypervisor and Open vSwitch for networking) are connected through a VXLAN overlay between VTEPs, with shared storage (vHDD/vSSD) provided via iSCSI or NFS by a router and storage server in the local area network.
5 EXPECTED OUTCOME
The expected outcome of this Ph.D. project is an increased resilience of cloud service offerings by leveraging the performance of terminal devices and enabling offline operation of services. The terminal devices become an enhancement of the cloud service, extending its service offering onto them and making services more accessible to end users. Another expected outcome is a reduced network latency of service offerings in conjunction with a reduced server load, making the operation of services more effective from a user perspective and increasing the reliability of the service offering. In summary, the Ph.D. project investigates ways to increase the autonomy and resilience of service offerings by using vertical and geographic scalability (see figure 2).
Investigating suitable technologies and methods
for the seamless distribution of services and use of
resources is another expected outcome of the Ph.D.
project. In the end, the findings of the research will be beneficial for the future application of these methods, providing solid arguments for the establishment of new standards and methods for the migration and communication of services in heterogeneous and distributed environments.
6 STAGE OF THE RESEARCH
The initial literature research phase has already been
completed, resulting in a concept for implementing
the Ph.D. project and the choice of technologies and
methods. In the next step, initial experiments will give insight into the feasibility of the chosen technologies and methods.
As a first indicator for further research, an analysis is set up of the migration of virtual units (VMs, containers) between different end devices (servers, laptops, etc.), examining how the running applications and different amounts of resources (vCPU, vRAM, etc.) influence the behavior of the virtual environments during migration. This setup, shown in figure 3, is highly relevant for legacy services and their behavior during migration.
Therefore, a first experiment is carried out to in-
vestigate the feasibility of live virtual machine migra-
tion over WAN using VXLAN overlay networks. This
first experiment will give insights into the applicabil-
ity of the setup and guidance on the direction of future
research in the field.
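A minimal sketch of how such a live migration could be triggered with the libvirt Python bindings is given below; the connection URIs and the domain name are illustrative assumptions, and shared storage between source and destination (cf. figure 3) is presumed so that only the memory state has to be transferred.

import libvirt

SRC_URI = "qemu:///system"                           # placeholder source host
DST_URI = "qemu+ssh://enddevice.example.org/system"  # placeholder end device
DOMAIN = "skill-vm-01"                               # placeholder VM name

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(DOMAIN)

flags = (libvirt.VIR_MIGRATE_LIVE |         # keep the guest running (pre-copy)
         libvirt.VIR_MIGRATE_PERSIST_DEST)  # define the domain on the destination

# Managed live migration; with shared storage only the memory state moves.
dom.migrate(dst, flags, None, None, 0)

dst.close()
src.close()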
ACKNOWLEDGEMENTS
I thank my supervisors, Prof. Dr. Martin Kappes and
Prof. Dr. Christian Baun, for their support and guid-
ance during my research.
REFERENCES
BMDV (2021). Bericht zum Breitbandatlas Teil 1: Ergebnisse (Stand Mitte 2021). Bundesministerium für Digitales und Verkehr. Accessed: 31.05.2022.
Baun, C., Kappes, M., Cocos, H.-N., and Koch, M. M. (2024). Eine Plattform zur Erstellung und Verwendung komplexer virtualisierter IT-Strukturen in Lehre und Forschung mit Open-Source-Software. Informatik-Spektrum.
Choudhary, A., Govil, M. C., Singh, G., Awasthi, L. K.,
Pilli, E. S., and Kapil, D. (2017). A critical survey of
live virtual machine migration techniques. J. Cloud
Comput., 6(1).
Cocos, H.-N. and Merkl, D. (2019). Decentralized Data
Processing on the Edge Accessing Wireless Sensor
Networks with Edge Computing. In Weghorn, H., ed-
itor, Proceedings of the 16th International Conference
on Applied Computing 2019, Cagliari, Italy, pages
265–269. IADIS, Nov. 2019.
Eberz-Eder, D., Kuntke, F., Schneider, W., and Reuter,
C. (2021). Technologische Umsetzung des Resilient
Smart Farming (RSF) durch den Einsatz von Edge
Computing. In 41. GIL-Jahrestagung, Informations-
und Kommunikationstechnologie in kritischen Zeiten.
Gesellschaft für Informatik e.V. Accessed: March 15, 2024.
Gedeon, J., Brandherm, F., Egert, R., Grube, T., and
Mühlhäuser, M. (2019). What the Fog? Edge Com-
puting Revisited: Promises, Applications and Future
Challenges. IEEE Access, 7:152847–152878.
Harchol, Y., Mushtaq, A., Fang, V., McCauley, J., Panda,
A., and Shenker, S. (2020). Making edge-computing
resilient. In Proceedings of the 11th ACM Symposium
on Cloud Computing, SoCC ’20, page 253–266, New
York, NY, USA. Association for Computing Machin-
ery.
Merkl, D. and Cocos, H. (2020). Complex Event Processing
on the Edge - Bringing Data Consolidation and Pro-
cessing closer to Wireless Sensor Networks. In 2020
IEEE International Workshop on Metrology for Indus-
try 4.0 IoT, pages 395–400.
Rodrigues, T. K., Liu, J., and Kato, N. (2021). Applica-
tion of cybertwin for offloading in mobile multiaccess
edge computing for 6g networks. IEEE Internet of
Things Journal, 8(22):16231–16242.
Wang, H., Li, Y., Zhang, Y., and Jin, D. (2019). Vir-
tual machine migration planning in software-defined
networks. IEEE Transactions on Cloud Computing,
7(4):1168–1182.
Winkelhofer, S. (2019). Konzeption und Umsetzung von Live-Migration für Docker-Container.