Conference Paper
Towards a Description of Elastic Cloud-native
Applications for Transferable Multi-Cloud-Deployments
Peter-Christian Quint, Nane Kratzke
Lübeck University of Applied Sciences
Center of Excellence for Communication, Systems and Applications (CoSA)
Lübeck, Germany
Extended Abstract
Elastic container platforms (ECP) like Docker Swarm, Kubernetes (k8s) and Apache Mesos
have received increasing attention from practitioners in recent years [1]. Elastic container
platforms fit very well with existing cloud-native application (CNA) architecture approaches
[6]. Corresponding system designs often follow a microservice-based architecture [8, 5].
Nevertheless, the reader should be aware that the effective and elastic operation of such
container platforms is still an open research question, although there are interesting
approaches making use of bare metal [1] as well as public and private cloud infrastructures
[6]. What is more, there are even more severe open issues, such as how to design, define
and operate cloud applications on top of such container platforms. This is especially true
for multi-cloud contexts. Such open issues in scheduling microservices to the cloud come
along with questions regarding interoperability, application topology and composition
aspects¹ [7] as well as elastic runtime adaptation aspects of such cloud-native applications
[2]. The combination of these three aspects (multi-cloud interoperability, application
topology definition/composition and elastic runtime adaptation) is, to the best of the
authors' knowledge, not solved satisfactorily so far. These three problems are often
treated in isolation. In consequence, topology-based multi-cloud approaches often do not
consider elastic runtime adaptation of deployments (see [7] as a representative of this
line of research). Conversely, elastic runtime-adaptive solutions focusing on multi-cloud
capability do not make use of topology-based approaches either (take [6] as an example).
And finally, (topology-based) cloud-native applications making use of elastic runtime
adaptation are often inherently bound to specific cloud infrastructure services (like
cloud-provider-specific monitoring, scaling and messaging services), making it hard to
transfer these cloud applications to another cloud provider or even operate them across
providers at the same time [5]. Furthermore, Heinrich et al. mention several research
challenges and directions that come along with microservice-focused monitoring and runtime
adaptation approaches [3]. All in all, it seems that cloud engineers (and researchers as
well) settle for picking only two out of these three options.
In our further research we have developed an approach for deploying CNAs on a transferable,
auto-scaling multi-cloud platform. Figure 1 illustrates such an ECP-based CNA deployment.
The ECP (Layer 4 according to the CNA reference model [5]) enables multi-cloud deployments.
To avoid vendor lock-in, the ECP has to be transferable at runtime. While a descriptive
cluster definition model can be used for describing the elastic platform [4], we have
identified the need to also describe the application topologies, without dependence on a
specific, static ECP, with a domain-specific language (DSL) (see the DSL mark in the
figure). The focus of the presentation is on the identification of description
possibilities for Layers 5 and 6. First, the particular characteristics of such a
deployment have to be identified. To this end, we analyzed and compared the architectures
and concepts of container cluster platforms. As representatives, we have chosen the three
most widely used container cluster platforms: Kubernetes, Docker Swarm
¹ These approaches are mostly TOSCA-based.
Figure 1: An ECP based CNA deployment.
Mode and Apache Mesos with Marathon. After defining requirements for the DSL, we
identified, investigated and analyzed approaches like TOSCA, CAML, MOCCA, MULTICLAPP,
CloudMIG, CloudML and MODACloudsML.
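To make the idea of a platform-agnostic application description more concrete, the following is a minimal, hypothetical sketch (it is not the authors' DSL, and all names, fields and the manifest subsets shown are our own illustrative assumptions): a single service specification that can be rendered into both a Kubernetes-style Deployment and a Docker Compose/Swarm-style service entry, which is the kind of ECP independence the DSL requirement above asks for.

```python
from dataclasses import dataclass


@dataclass
class ServiceSpec:
    """Platform-agnostic description of one microservice (illustrative only)."""
    name: str
    image: str
    replicas: int
    port: int


def to_kubernetes(spec: ServiceSpec) -> dict:
    """Render the spec as a (simplified) Kubernetes Deployment structure."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": spec.name},
        "spec": {
            "replicas": spec.replicas,
            "template": {"spec": {"containers": [
                {"name": spec.name, "image": spec.image,
                 "ports": [{"containerPort": spec.port}]},
            ]}},
        },
    }


def to_swarm_compose(spec: ServiceSpec) -> dict:
    """Render the same spec as a (simplified) Compose/Swarm service entry."""
    return {"services": {spec.name: {
        "image": spec.image,
        "deploy": {"replicas": spec.replicas},
        "ports": [f"{spec.port}:{spec.port}"],
    }}}


# One description, two target platforms:
web = ServiceSpec(name="web", image="nginx:1.25", replicas=3, port=80)
```

The point of the sketch is the direction of the mapping: the application is defined once, independently of any particular ECP, and platform-specific deployment formats are derived from it rather than written by hand.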
In our presentation we will give an introduction to ECP-based CNAs and explain why they
can help small and medium-sized enterprises to use cloud services without vendor lock-in.
We will talk about creating a DSL for CNAs based on transferable ECPs and about our
investigation of existing approaches. After presenting our requirements, the analysis and
the evaluation, we will derive our results. In the end we will present our conclusion and
outlook, including several lessons learned.
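The runtime transferability discussed above rests on a descriptive, intended-state cluster definition [4]. The following is an illustrative sketch under our own assumptions, not the authors' implementation: one reconciliation step that compares the intended node distribution of an elastic container platform across providers with the current one and derives the provisioning actions needed to converge.

```python
def reconcile(intended: dict, current: dict) -> list:
    """Return (provider, action, count) steps that move the platform from
    its current node distribution to the intended one (illustrative only)."""
    actions = []
    for provider in sorted(set(intended) | set(current)):
        diff = intended.get(provider, 0) - current.get(provider, 0)
        if diff > 0:
            actions.append((provider, "provision", diff))
        elif diff < 0:
            actions.append((provider, "terminate", -diff))
    return actions


# Transferring the platform between providers is just another intended state:
# scale the target provider up and the source provider down, while the
# platform itself keeps running.
plan = reconcile({"aws": 0, "gce": 5}, {"aws": 5, "gce": 0})
```

Describing only the intended state and letting one control process converge toward it avoids having to hand-write separate multi-cloud workflows for deploying, migrating, scaling up and scaling down.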
References
[1] Carlos de Alfonso, Amanda Calatrava, and Germán Moltó. Container-based virtual elastic clusters.
Journal of Systems and Software, 127(January):1–11, 2017.
[2] Maria Fazio, Antonio Celesti, Rajiv Ranjan, Chang Liu, Lydia Chen, and Massimo Villari. Open
Issues in Scheduling Microservices in the Cloud. IEEE Cloud Computing, 3(5):81–88, sep 2016.
[3] Robert Heinrich, André van Hoorn, Holger Knoche, Fei Li, Lucy Ellen Lwakatare, Claus Pahl, Stefan
Schulte, and Johannes Wettinger. Performance Engineering for Microservices: Research Challenges
and Directions. In 8th ACM/SPEC Int. Conf. on Performance Engineering Companion, pages
223–226, 2017.
[4] Nane Kratzke. Smuggling Multi-cloud Support into Cloud-native Applications using Elastic Con-
tainer Platforms. 2016.
[5] Nane Kratzke and Rene Peinl. ClouNS - a Cloud-Native Application Reference Model for Enterprise
Architects. In 2016 IEEE 20th International Enterprise Distributed Object Computing Workshop
(EDOCW), pages 198–207, Vienna, sep 2016. IEEE.
[6] Nane Kratzke and Peter-Christian Quint. Understanding cloud-native applications after 10 years
of cloud computing - A systematic mapping study. The Journal of Systems and Software, 126:1–16,
2017.
[7] Karoline Saatkamp, Uwe Breitenbücher, Oliver Kopp, and Frank Leymann. Topology Splitting
and Matching for Multi-Cloud Deployments. In 8th Int. Conf. on Cloud Computing and Service
Sciences (CLOSER 2017), 2017.
[8] Alan Sill. The Design and Architecture of Microservices. IEEE Cloud Computing, 3(5):76–80, 2016.