CLOUD COMPUTING 2016: The Seventh International Conference on Cloud Computing, GRIDs, and Virtualization
Overcome Vendor Lock-In by Integrating Already Available Container Technologies
Towards Transferability in Cloud Computing for SMEs
Peter-Christian Quint, Nane Kratzke
Lübeck University of Applied Sciences, Center of Excellence CoSA
Lübeck, Germany
email: {peter-christian.quint, nane.kratzke}@fh-luebeck.de
Abstract—Container clusters have an inherent complexity. A distributed container application in the cloud can be complex to plan, install, configure, maintain, and troubleshoot. Small and medium enterprises (SMEs) are typically constrained by their personnel and financial resources. Using advanced cloud technologies like a container cluster therefore often requires high personnel expenses or the involvement of an external system builder. In addition to economic, security, and governance issues, there is also the concern of technical vendor lock-in. This paper introduces C4S, an open source system for SMEs to deploy and operate their container applications with features like elasticity, auto-scaling and load balancing. The system also provides transferability features for migrating containers between different Infrastructure as a Service (IaaS) platforms. This paper presents a solution for SMEs to use the benefits of cloud computing without the disadvantages of vendor lock-in.
Keywords—Microservice; Container; Docker; Container Cluster; Software Defined Network; Cloud Computing; SME
I. INTRODUCTION
Infrastructure as a service (IaaS) enables companies to
get resources like computational power, storage and network
connectivity on demand. IaaS can be obtained on public or
private clouds. Public clouds are provided by third parties for
general public use. Type representatives are Amazon’s Elastic
Compute Cloud (EC2) and Google Compute Engine (GCE).
Private clouds are intended for the exclusive use of a single organization [1]. They are mostly installed on the respective company's own infrastructure. OpenStack is a cloud platform for providing (not exclusively) private clouds. One big benefit of cloud computing is elastic scaling. Elasticity means
the possibility to match available resources with the current
demands as closely as possible [2]. Scalability is the ability of
the system to accommodate larger loads by adding resources
or accommodate weakening loads by removing resources [3].
With autoscaling, resources can be added automatically when
they are needed and removed when they are not in use [4].
Resources are allocated on demand, and customers only pay for the resources they actually use. The system described in this paper will support several public and private cloud
environments. Features like elastic scaling and transferability
will also be available. The authors define transferability as the
possibility to migrate some or all containers between different
cloud platforms. This is needed to avoid vendor lock-in by
the cloud providers, which is a major obstacle for small and
medium enterprises (SMEs) in cloud computing [5]. Only a
few research projects deal with the specific needs of SMEs in
cloud computing [6].
In the last few years, container technologies like Docker have become more and more common. Docker is an open source and lightweight virtualization solution that provides application deployment without the overhead of virtual machines [7]. With Docker, applications can be easily deployed on
several machine types. This makes it possible to launch con-
tainers from the same application (image), e.g., on a personal
computer or a datacenter server.
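The paper does not prescribe any particular tooling for this step; purely as an illustration of launching the same image on different hosts, the following sketch uses the Docker SDK for Python (the image name, port mapping and remote host address are placeholder assumptions):

    # Minimal sketch using the Docker SDK for Python ("docker" package).
    # The image name, port mapping and remote host address are placeholders.
    import docker

    # Local Docker daemon, e.g., on a personal computer.
    local_client = docker.from_env()

    # Remote Docker daemon, e.g., on a datacenter server exposing its
    # (test-only, unauthenticated) remote API on port 2375.
    remote_client = docker.DockerClient(base_url="tcp://203.0.113.10:2375")

    # The very same image can be started as a container on both hosts.
    for client in (local_client, remote_client):
        client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})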
Container clusters like Kubernetes (which arose from Google's Borg [8]) and Mesos [9] can deploy a huge number of containers on private and public clouds. Big benefits of cluster technologies are the horizontal scalability of the containers, the fast development, and the included software-defined network, which is often necessary for distributed container applications.
Container and container cluster software are mostly open
source and free to use.
SMEs are mostly financially and personnel-wise restricted
(see the European definition of SME [10]). Since the man-
agement of container cluster applications with features like
transferability and elasticity is complex, it can be very difficult
to achieve for a small (perhaps one-person) IT
department. Getting started using services like Infrastructure
as a Service (IaaS) might be very simple. But the use of
advanced cloud technologies like clusters, containers and cloud
benefits like auto-scaling and load balancing can quickly grow
into complex technical solutions. Services supplied by the cloud provider (e.g., auto-scaling) might pose another issue, since their APIs are often non-standardized. This often results in inherent vendor lock-in [11]. However, there are
products and services to manage these technologies like the
T-Systems Cloud Broker and Amazon EC2 Container Service
(ECS). These management solutions also have disadvantages.
For example, the Cloud Broker is a commercial product, which
is inherently designed for very big companies. This kind of cloud broker service moves the dependency from the cloud provider to the system/service provider, such as T-Systems. ECS works only with Amazon EC2 instances, which means there is still a vendor lock-in. Both solutions merely shift vendor lock-in
to another company. Creating an open source system for easy deployment and management of cloud applications in a container cluster would enable SMEs to use these technologies without worrying about vendor lock-in.
The software presented in this paper is called C4S. The name is an acronym for Container Cluster in Cloud Computing System. C4S is designed to (automatically) deploy and operate a container cluster application without vendor lock-in. Moreover, the system will be able to monitor the cloud platform, the container cluster and the containers themselves. Beyond bare reporting, the system will offer methods to keep the application running in most failure states. Altogether, C4S can make container cluster cloud computing technologies usable for SMEs without large and highly specialized IT departments.
C4S is not exclusively designed for SMEs. The user group of C4S is not limited to specific company types, so even big international companies with small and specialized in-house IT departments can use it, too.
Section II describes the features and the requirements of the C4S system. An overview of the C4S architecture is given in Section III. The intended validation of the concept, which will be performed in the final project phase, is presented in Section IV. Related work, including several business services and products, is described in Section V. Finally, the conclusion is presented in Section VI.
II. GENERAL REQUIREMENTS OF C4S
C4S is designed to handle the high complexity of a container cluster while providing benefits like elasticity, auto-scaling and transferability. Feature requirements and the technical specifications are explained below. By designing and developing a generic cloud service description language, a solution to define secure, transferable and elastic services of typical complexity will be provided. Thus, such services are deployable to any IaaS cloud infrastructure. This work promotes the implementation of easy-to-handle, elastic and transferable cloud applications.
A. Deploying and Controlling Applications
The basic feature of C4S is to deploy a distributed container application on cloud environments. Therefore, the user can easily configure the needed containers, the interfaces and the cloud environments. According to the user-defined configuration, the system will automatically deploy the application on the container cluster and the cloud platforms. Overall, there are three controlling levels the management solution C4S has to support:
The Container Application can be configured, launched, controlled, changed and stopped. Application parts (e.g., container types) must be replaceable (e.g., changing container images to keep the application up to date). Details like the status and the configuration of every single container, as well as an overview of all running containers, should also be visible to the customers.
The Container Cluster is used for the automatic deployment of the containers on the available virtual machines (running on IaaS platforms). Therefore, the cluster solution can create, terminate and transfer containers. The user is able to set values like deployment limitations and restrictions for the container host selection.
Virtual Machines on IaaS platforms form the host system
of the container cluster with the containers. The system should
support a management solution for monitoring the status as
well as creating and terminating virtual machines on several
cloud platforms. The user is able to set values like the
virtual machine limitations and the favored platform for each
container type.
A view of all used machines, their type, running time, and other data is also necessary to observe economic information, e.g., actual costs. The system has to be able to monitor all controlling levels. In case of failure, the monitoring system should trigger reports and automatic actions to keep the application running.
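The concrete configuration format is not defined at this point of the paper; purely as an illustration of the three controlling levels listed above, a user-defined configuration could be thought of along the following lines (all keys, names and values are invented for this sketch and are not the actual C4S deployment language):

    # Hypothetical illustration of the three controlling levels described above.
    # All keys, names and values are invented; this is not the C4S deployment language.
    configuration = {
        "container_application": {            # level 1: the application itself
            "web": {"image": "example/web:1.4", "replicas": 3},
            "db":  {"image": "example/db:2.0", "replicas": 1},
        },
        "container_cluster": {                # level 2: deployment limitations
            "max_containers_per_host": 10,
            "preferred_platform": {"db": "private-openstack"},
        },
        "virtual_machines": {                 # level 3: IaaS platforms and limits
            "aws-ec2":           {"max_instances": 5, "instance_type": "m4.large"},
            "private-openstack": {"max_instances": 3, "flavor": "m1.medium"},
        },
    }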
B. Usage of Cloud Features
As described in Section I, cloud computing enables features like elasticity, scalability and load balancing. C4S enables the user to handle the inherent complexity of these features in an easy way. Auto-scaling of containers on the cluster and of virtual machines on IaaS platforms is also supported.
C. Prevent Dependencies
To avoid vendor lock-in by the cloud provider, the system can install a multi-cloud container cluster with transferability features. On demand, some or all containers can migrate from one cloud provider to another. Accordingly, the user has full control over where the containers are running.
To prevent dependencies on the software and services used, C4S will be published under the MIT license. It is recommended that all third-party parts, like the cluster software, are also open source. Thus, the consuming companies avoid dependencies on the C4S system and can adapt the source code to their special needs. The system has to be designed generically for several cloud platforms. A modular architecture enables extensions for other platforms. Besides the cloud platforms, the users should not be limited in the choice of the container cluster. The modular architecture enables later extensions for missing cluster connectivity.
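The modular architecture is not specified in code in this paper; one conceivable way to keep cloud platforms and cluster backends interchangeable is a small driver interface that every platform module implements. The following sketch uses invented method names and does not reflect the actual C4S interfaces:

    # Sketch of a modular driver abstraction (all names invented for illustration).
    from abc import ABC, abstractmethod


    class IaaSDriver(ABC):
        """Common interface every cloud platform module would implement."""

        @abstractmethod
        def create_instance(self, instance_type: str) -> str:
            """Start a virtual machine and return its identifier."""

        @abstractmethod
        def terminate_instance(self, instance_id: str) -> None:
            """Terminate the virtual machine with the given identifier."""

        @abstractmethod
        def list_instances(self) -> list:
            """Return the identifiers of all running virtual machines."""


    class OpenStackDriver(IaaSDriver):
        """A concrete driver wraps the provider-specific API calls."""

        def create_instance(self, instance_type: str) -> str:
            raise NotImplementedError("provider-specific API call goes here")

        def terminate_instance(self, instance_id: str) -> None:
            raise NotImplementedError("provider-specific API call goes here")

        def list_instances(self) -> list:
            raise NotImplementedError("provider-specific API call goes here")

Additional platforms or cluster backends would then only require a further module implementing the same interface.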
III. C4S ARCHITECTURE
The architecture is divided into four layers. The core of C4S consists of the deployment and the monitoring engine. The user can manage the deployment and receive the monitoring events over two interfaces. The other two parts are the container cluster and the IaaS environments.
Figure 1. C4S architecture overview: (1) user interfaces (graphical user interface and command line interface), (2) deployment engine, monitoring engine and data storage engine, configured via the deployment language, (3) container cluster, (4) virtual machines on IaaS platforms.
A. Interfaces (1)
The management system will provide a web-based graphical user interface (GUI) and a command line interface (CLI), see Figure 1, (1). Here the user can set the account data and limits of the IaaS platforms, the configuration for scaling, and also set transfer orders (e.g., moving containers to another
cloud platform). It is also possible to set rules for creating, terminating and moving containers in response to specific events. By setting these rules, the system can deal with failures like a cloud platform outage; a minimal sketch of such event rules is given below. Other features, like the automatic container cluster software installation, can be started using the interfaces. Status information about the containers, the cluster and the cloud platforms is visible; monitoring events and the triggered actions are also reported.
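How such rules will be expressed is left open in this paper; the following sketch merely illustrates the idea of mapping monitored events to counter-actions (all event and action names are invented):

    # Illustrative only: mapping monitored failure events to counter-actions.
    # Event and action names are invented for this sketch.
    failure_rules = {
        "cloud_platform_unreachable": "move_containers_to_fallback_platform",
        "virtual_machine_lost":       "recreate_containers_on_other_hosts",
        "container_crashed":          "restart_container",
    }


    def handle_event(event: str) -> str:
        """Return the configured action for a monitored event (or a default)."""
        return failure_rules.get(event, "report_only")


    print(handle_event("cloud_platform_unreachable"))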
B. Deployment and Monitoring Engines (2)
This subsection describes the engines shown in Figure 1, (2). The deployment engine is responsible for creating and deleting instances on the cloud platforms, managing the container cluster and deploying the containers on it, making it the main core of C4S. Features like load balancing and container transfer are also controlled by this engine. The deployment language is designed for stating the constraints of the configuration. All needed information about the container application, the cluster and the IaaS platforms can be described by the deployment language. The monitoring engine observes the containers on the cluster and the virtual machines on the IaaS platforms. In addition to reporting states and failures, actions can be triggered by events. For example, if the engine registers an exceptionally high workload of a container, it reports to the deployment engine to scale out containers, and vice versa. The engine can deploy the application on different cloud platforms and is able to transfer containers between them. The data storage engine is compatible with several block and object storage systems to avoid vendor lock-in. The engine also enables scalability and security features for data storage.
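The described interplay between the monitoring and the deployment engine can be pictured as a simple feedback loop. The sketch below illustrates the idea with an invented threshold rule; it is not the actual C4S implementation:

    # Sketch of the feedback described above: the monitoring engine reports the
    # container workload, the deployment engine scales out or in accordingly.
    # The thresholds are invented for illustration.
    SCALE_OUT_THRESHOLD = 0.80   # average CPU utilization above which containers are added
    SCALE_IN_THRESHOLD = 0.20    # average CPU utilization below which containers are removed


    def desired_replicas(current_replicas: int, avg_cpu_utilization: float) -> int:
        """Decide how many container replicas the deployment engine should run."""
        if avg_cpu_utilization > SCALE_OUT_THRESHOLD:
            return current_replicas + 1              # scale out
        if avg_cpu_utilization < SCALE_IN_THRESHOLD and current_replicas > 1:
            return current_replicas - 1              # scale in, keep at least one
        return current_replicas                      # keep as is


    print(desired_replicas(current_replicas=3, avg_cpu_utilization=0.91))  # -> 4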
C. Container Cluster (3)
The container cluster deploys the containers on the virtual cloud machines (see Figure 1, (3)). The management system
can terminate and create the containers in the cluster network.
The system can also transfer a container from one virtual
machine to another, which is not even necessarily running on
the same cloud platform. These actions are controlled by the
deployment engine and supervised by the monitoring engine.
In combination with the container cluster, the engines make it
possible to migrate services from one private or public cloud
infrastructure to another (not necessarily compatible) cloud
infrastructure.
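Since Kubernetes is named in Section IV as the first supported cluster, such a cluster-side action could, for instance, be triggered through the official Kubernetes Python client. This is only a sketch; it assumes a reachable cluster, a valid kubeconfig and an existing Deployment named "web" in the "default" namespace, none of which are C4S specifics:

    # Sketch: scaling an existing Kubernetes Deployment via the official Python client.
    # Assumes a reachable cluster, a kubeconfig, and a Deployment named "web".
    from kubernetes import client, config

    config.load_kube_config()                 # read credentials from ~/.kube/config
    apps = client.AppsV1Api()

    # Read the current Deployment, adjust the replica count, and write it back.
    deployment = apps.read_namespaced_deployment(name="web", namespace="default")
    deployment.spec.replicas = 5
    apps.replace_namespaced_deployment(name="web", namespace="default", body=deployment)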
D. IaaS Platforms (4)
C4S can manage several cloud platforms (see Figure 1, (4)). It is possible to create and terminate IaaS instances on demand. Hence, the system has to communicate with the cloud platforms. Because of missing standardization [12], a dedicated driver has to be designed for every provider API. This requires a modular and easily adaptable software architecture. The deployment language is designed for information like access data, limits and all other settings that can be set via the described interfaces.
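Libraries like Apache Libcloud (see Section V) show how such provider-specific drivers can be hidden behind a single API; the sketch below runs identical node-listing code against two providers. Credentials, region and endpoint values are placeholders, and C4S is not bound to Libcloud:

    # Sketch using Apache Libcloud to address two IaaS providers through one API.
    # Credentials, region and endpoint values are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    ec2_driver = get_driver(Provider.EC2)(
        "ACCESS_KEY_ID", "SECRET_KEY", region="eu-central-1")
    openstack_driver = get_driver(Provider.OPENSTACK)(
        "user", "password",
        ex_force_auth_url="http://openstack.example.org:5000",
        ex_force_auth_version="3.x_password",
        ex_tenant_name="demo")

    # The same calls work against both platforms.
    for driver in (ec2_driver, openstack_driver):
        for node in driver.list_nodes():
            print(driver.type, node.name, node.state)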
IV. INTENDED VALIDATION OF CONCEPT
It will be shown that SMEs can manage a container cluster across (multiple) cloud platforms. At first it will be demonstrated that building a system which provides all the required features is possible. Therefore, a working, open source C4S prototype, which conforms to the requirements set in Section II, will be developed. The system has to be implemented in a modular and extendable way. As cluster platform, C4S will support Kubernetes first; other cluster environments will follow. Presenting interchangeability and the open source nature of C4S will show that dependencies on the software used can be prevented. To avoid vendor lock-in by the cloud provider, the prototype must be able to install a multi-cloud container cluster. First, the system will be compatible with the IaaS cloud platform type representatives Amazon EC2, Google GCE and OpenStack. To support other platforms, appropriate drivers can be implemented. Transferability features like moving all containers from one cloud platform to another will be implemented. Terminating all containers and virtual machines on one provider and creating them on another at the same time, without changes in features like elasticity and auto-scaling, will prove that C4S prevents vendor lock-in. The software will also manage container application deployment. It will deploy a container cluster, create and terminate containers, and will be usable for deploying applications. Also, workloads will be generated to test the auto-scaling features. With enforced failure states, the robustness of the system will be demonstrated. It will be shown that the system is able to keep the applications running even when containers and virtual machines get disconnected. In the second part of the proof of concept, a company will employ the software. Thus, the expense for a small business using the container cluster manager will be evaluated. Finally, a proof of concept will be realized with several companies. These companies will use the C4S system on their own for testing a productive application deployment with real workloads. Load balancing, elasticity, auto-scaling and transferability features will be applied in production. This way it will be shown that SMEs can handle the complexity of a container cluster application running on multiple cloud platforms without vendor lock-in and without dispensing with features like auto-scaling.
V. RELATED WORK
There are several solutions with overlapping features and/or
usage scenarios available. However, a system which fits all
requirements and features set in Section II for the C4S deployment manager does not exist.
A. Container Cluster, Load Balancing and Scaling
A Container Cluster should run on homogeneous machine
types to provide fine-grained resource allocation capabilities
In previous work, the similarity of different cloud provider instance types was analyzed. It was concluded that only a few instance pairs are really similar, so only a few virtual machine types should be used when running an application in a multi-cloud environment [13]. Another issue to consider is the network performance impact of technologies like containers and software-defined networks (SDN). Previous investigations found that the performance impact depends, among other things, on the machine types used and the message sizes [14]. Using an (encrypted) cross-provider SDN also causes performance impacts, especially when using low-core machines [15].
B. IaaS Management and Transferability
Container migration from one cloud provider to another is
an important feature of C4S. Vendor lock-in is caused, among other things, by a lack of standards [12]. Currently, the proprietary EC2 API is the de facto standard specification for managing cloud infrastructure.
However, open standards like OCCI and CIMI are important to reduce vendor lock-in situations [16]. C4S includes a dedicated IaaS driver for each supported cloud provider. Other research approaches in cloud migration are reviewed in [17]. There are several solutions, like Apache Libcloud, KOALA [18], Scalr, Apache jclouds, Deltacloud and the T-Systems Cloud Broker, for managing and deploying virtual machines on IaaS platforms. Except for the T-Systems Cloud Broker, these solutions are open source but mostly offer paid services, reduced functionality or limited virtual machine quantities. These systems support features like creating, stopping and scaling virtual machines on IaaS cloud platforms. Some of them, like the T-Systems Cloud Broker, Scalr and Apache jclouds, are designed for cross-platform IaaS deployment. In contrast to the C4S requirements, the presented cloud managers are limited to IaaS management and do not offer container deployment services. Some of them do not prevent vendor lock-in by cloud providers or create new dependencies themselves (e.g., the T-Systems Cloud Broker; KOALA is limited to Amazon AWS API-compatible services).
C. Application Deployment
Peinl and Holzschuher [19] have defined requirements for a container application deployment system. These coincide strongly with the requirements for the C4S system, which have been discussed in Section II-A. They also give an overview of container cluster management. For easily deploying a container application with monitoring, scaling and controlling benefits, there exist several commercial solutions like the Amazon EC2 Container Service (ECS), Microsoft Azure Container Service and Giant Swarm. Limited to the provider's own IaaS infrastructure, these solutions are not designed for multi-cloud usage, especially between public clouds (a requirement of C4S). Open source cluster managers are Apache Mesos and Kubernetes. These systems are designed to run workloads across tens of thousands of machines. The benefits of using cluster technologies are very high reliability, availability and scalability [9][8]. However, they are not designed to create and terminate virtual machines (like AWS instances), but to deploy applications on given resources. So, they cannot prevent cloud provider dependencies on their own, but provide essential ingredients to do so. Another cluster management tool for increasing the efficiency of datacenter servers is Quasar, which was developed at Stanford University and is designed for maximizing resource utilization. The system performs coordinated resource allocation. Several techniques analyze performance interference, scaling (up and out) and resource heterogeneity [20].
VI. CONCLUSION
C4S is in the planning stage, although some parts are already implemented, like the cloud platform driver for quickly deploying IaaS instances. The next steps are the creation of a deployment language for dedicated containers to run on a Kubernetes container cluster, finding solutions for container cluster scaling problems, and handling stateful tasks like file storage. The system will be implemented in a modular and generic way to allow an easy adaptation to different cloud platforms and container cluster software. With C4S, SMEs will be able to deploy and operate their container applications on an elastic, auto-scaling and load-balancing multi-cloud cluster with transferability features to prevent vendor lock-in.
ACKNOWLEDGMENT
This research is funded by the German Federal Ministry of Education and Research (Project Cloud TRANSIT, 03FH021PX4). The authors thank the University of Lübeck (Institute of Telematics) and fat IT solution GmbH (Kiel) for their support of Cloud TRANSIT.
REFERENCES
[1] P. Mell and T. Grance, “The NIST definition of cloud computing,” 2011.
[2] M. Armbrust et al., “A view of cloud computing,” Communications of
the ACM, vol. 53, no. 4, 2010, pp. 50–58.
[3] L. M. Vaquero, L. Rodero-Merino, and R. Buyya, “Dynamically scaling
applications in the cloud,” ACM SIGCOMM Computer Communication
Review, vol. 41, no. 1, 2011, pp. 45–52.
[4] M. Mao and M. Humphrey, “Auto-scaling to minimize cost and meet
application deadlines in cloud workflows,” in Proceedings of 2011 In-
ternational Conference for High Performance Computing, Networking,
Storage and Analysis. ACM, 2011, p. 49.
[5] N. Kratzke, “Lightweight virtualization cluster - howto overcome cloud
vendor lock-in,” Journal of Computer and Communication (JCC), vol. 2,
no. 12, oct 2014.
[6] R. Sahandi, A. Alkhalil, and J. Opara-Martins, “Cloud computing from
SMEs perspective: A survey-based investigation,” Journal of Information
Technology Management, vol. 24, no. 1, 2013, pp. 1–12.
[7] J. Turnbull, The Docker Book: Containerization is the new virtualiza-
tion. James Turnbull, 2014.
[8] A. Verma, L. Pedrosa, M. Korupolu, D. Oppenheimer, E. Tune, and
J. Wilkes, “Large-scale cluster management at Google with Borg,” in
Proceedings of the Tenth European Conference on Computer Systems.
ACM, 2015, p. 18.
[9] B. Hindman et al., “Mesos: A platform for fine-grained resource sharing
in the data center.” in NSDI, vol. 11, 2011, pp. 22–22.
[10] Definition recommendation of micro, small and medium-sized
enterprises by the European Communities. Last accessed 12th Nov.
2015. [Online]. Available: http://eur-lex.europa.eu/legal-content/EN/
TXT/?uri=uriserv:OJ.L .2003.124.01.0036.01.ENG
[11] N. Kratzke, “A lightweight virtualization cluster reference architecture
derived from open source PaaS platforms,” Open J. Mob. Comput. Cloud
Comput, vol. 1, 2014, pp. 17–30.
[12] J. Opara-Martins, R. Sahandi, and F. Tian, “Critical review of vendor
lock-in and its impact on adoption of cloud computing,” in Information
Society (i-Society), 2014 International Conference on. IEEE, 2014,
pp. 92–97.
[13] N. Kratzke and P.-C. Quint, “How to operate container clusters more
efficiently? some insights concerning containers, software-defined-
networks, and their sometimes counterintuitive impact on network
performance,” International Journal on Advances in Networks and
Services, vol. 8, no. 3&4, 2015, (in press).
[14] N. Kratzke and P.-C. Quint, “About automatic benchmarking of IaaS
cloud service providers for a world of container clusters,” Journal of
Cloud Computing Research, vol. 1, no. 1, 2015, pp. 16–34.
[15] N. Kratzke, “About microservices, containers and their underestimated
impact on network performance,” CLOUD COMPUTING 2015, 2015,
p. 180.
[16] C. Pahl, L. Zhang, and F. Fowley, “Interoperability standards for cloud
architecture,” in 3rd International Conference on Cloud Computing and
Services Science, CLOSER. Dublin City University, 2013, pp. 8–10.
[17] P. Jamshidi, A. Ahmad, and C. Pahl, “Cloud migration research: a
systematic review,” Cloud Computing, IEEE Transactions on, vol. 1,
no. 2, 2013, pp. 142–157.
[18] C. Baun, M. Kunze, and V. Mauch, “The KOALA cloud manager: Cloud
service management the easy way,” in Cloud Computing (CLOUD),
2011 IEEE International Conference on. IEEE, 2011, pp. 744–745.
[19] R. Peinl and F. Holzschuher, “The Docker ecosystem needs consolida-
tion.” in CLOSER, 2015, pp. 535–542.
[20] C. Delimitrou and C. Kozyrakis, “Quasar: Resource-efficient and QoS-
aware cluster management,” ACM SIGPLAN Notices, vol. 49, no. 4,
2014, pp. 127–144.