Conference Paper

Investigation of Impacts on Network Performance in the Advance of a Microservice Design

Abstract

Due to REST-based protocols, microservice architectures are inherently horizontally scalable. That might be why the microservice architectural style is getting more and more attention for cloud-native application engineering. Corresponding microservice architectures often rely on a complex technology stack which includes containers, elastic platforms and software-defined networks. Astonishingly, there are almost no specialized tools to figure out the performance impacts that come along with the microservice architectural style before a microservice design is fixed. Therefore, we propose a benchmarking solution intentionally designed for this upfront design phase. Furthermore, we evaluate our benchmark and present performance data that reflects some often-heard cloud-native application performance rules (or myths).
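To make the intent of such an upfront benchmark concrete, the following Go sketch shows the client side of a simple request/response measurement: it posts payloads of growing size to an echo-style endpoint and reports round-trip time and a rough transfer-rate proxy. The endpoint URL, payload sizes and echo semantics are illustrative assumptions, not the concrete benchmark described in the paper.

```go
// Hypothetical client-side sketch of an upfront microservice benchmark:
// post payloads of growing size to an echo-style REST endpoint and report
// round-trip time and a rough transfer-rate proxy.
package main

import (
    "bytes"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // endpointURL is an assumption; point it at any echo-style service
    // deployed natively, in a container, or behind an overlay network.
    endpointURL := "http://localhost:8080/echo"

    for size := 1 << 10; size <= 1<<20; size *= 4 {
        payload := bytes.Repeat([]byte("x"), size)
        start := time.Now()
        resp, err := http.Post(endpointURL, "application/octet-stream", bytes.NewReader(payload))
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        resp.Body.Close()
        elapsed := time.Since(start)
        // rough proxy: payload size over round-trip time, in MiB/s
        rate := float64(size) / elapsed.Seconds() / (1 << 20)
        fmt.Printf("size=%7d B  rtt=%v  rate=%.2f MiB/s\n", size, elapsed, rate)
    }
}
```

Running the same client against a service deployed natively, inside a container and behind a software-defined network would expose the relative overhead of each additional layer.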
... Different studies focused only on product characteristics ([13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23]), on process characteristics ([13], [24], [20], [22], [23]) or on both ([13], [15], [20], [22], [23], [25], [26], [27]). Moreover, other studies ([13], [24], [16]) investigated and compared costs. ...
... As for the product characteristics, the most frequently addressed one is performance (see Table 2). In detail, the papers [13], [14], [15], [16], [17], [18], [19], [20], [22] have a focus on performance. This is followed by scalability, which is discussed by the papers [14], [15], [16], [17], [18], [19], [21], and [22]. ...
... In detail, the papers [13], [14], [15], [16], [17], [18], [19], [20], [22] have a focus on performance. This is followed by scalability, which is discussed by the papers [14], [15], [16], [17], [18], [19], [21], and [22]. Other characteristics like availability ([15], [20]) or maintenance ([13], [16], [18], [23]) are considered only in a few papers. ...
Article
Full-text available
Context. Re-architecting monolithic systems with a Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, such an important decision as re-architecting an entire system must be based on real facts and not only on gut feelings. Objective. The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method. We conducted a survey in the form of interviews with professionals to derive the assessment framework, based on Grounded Theory. Results. We identified a set of information and metrics that companies can use to decide whether or not to migrate to Microservices. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies that need to migrate to Microservices and do not want to run the risk of failing to consider some important information.
... For example, regarding design, MSA requires microservice identification and afterwards careful balancing of microservices' granularity. Too fine-grained microservices increase network load, thereby decreasing performance [9], whereas too coarse-grained services counteract scalability. In addition, MSA fosters technology heterogeneity [1] by enabling teams to independently decide for implementation technologies, e.g., frameworks and databases, which may result in additional maintainability cost and steeper learning curves for new team members [10]. ...
... Lines 2 and 3 add imports for the Kafka technology model and the CQRS technology model (cf. ...
... Researchers have chosen 3 fundamental stages in the software lifecycle to construct the taxonomy [80]: development, deployment, and operation (Figure 5). After a critical review, they identified the three main categories that microservice research is moving towards in the near future: AI, cloud, and architecture. ...
Article
Full-text available
The software industry widely used monolithic system architecture in the past to build enterprise-grade software. Such software is deployed on self-managed on-premises servers. Monolithic architecture systems introduced many difficulties when transitioning to cloud platforms and new technologies due to scalability, flexibility, and performance issues, and lower business value. As a result, practitioners are bound to consider a new software paradigm built around the separation-of-concerns concept. Microservice architecture was introduced to the world as an emerging software architecture style for overcoming monolithic architectural limitations. This paper illustrates a taxonomical classification of microservice architecture and a systematic review of its current state, comparing it to the past and future using the PRISMA model. Conference and journal papers were selected from well-known research publishers based on the defined keywords. The results showcase that most researchers and enterprise-grade companies use microservice architecture to develop cloud-native applications. On the other hand, they are struggling with certain performance issues in the overall application. The acquired results can facilitate researchers and architects in the software engineering domain who are concerned with new technology trends in service-oriented architecture and cloud-native development.
... Too coarse-grained microservice capabilities neglect the aforementioned benefits of MSA in terms of service-specific independence. Too fine-grained microservices, on the other hand, may require an inefficiently high amount of communication and thus network traffic at runtime [45]. Although there exist approaches such as Domain-driven Design (DDD) [25] to support the systematic decomposition and granularity determination of a microservice architecture [53], their perceived complexity hampers widespread adoption in practice [27,8]. ...
Preprint
Full-text available
Purpose: Microservice Architecture (MSA) denotes an increasingly popular architectural style in which business capabilities are wrapped into autonomously developable and deployable software components called microservices. Microservice applications are developed by multiple DevOps teams, each owning one or more services. In this article, we explore the state of how DevOps teams in small and medium-sized organizations (SMOs) cope with MSA and how they can be supported. Methods: We show through a secondary analysis of an exploratory interview study comprising six cases that the organizational and technological complexity resulting from MSA poses particular challenges for small and medium-sized organizations (SMOs). We apply Model-Driven Engineering to address these challenges. Results: As results of the secondary analysis, we identify the challenge areas of building and maintaining a common architectural understanding, and dealing with deployment technologies. To support DevOps teams of SMOs in coping with these challenges, we present a model-driven workflow based on LEMMA - the Language Ecosystem for Modeling Microservice Architecture. To implement the workflow, we extend LEMMA with the functionality to (i) generate models from API documentation; (ii) reference remote models owned by other teams; (iii) generate deployment specifications; and (iv) generate a visual representation of the overall architecture. Conclusion: We validate the model-driven workflow and our extensions to LEMMA through a case study showing that the added functionality to LEMMA can bring efficiency gains for DevOps teams. To develop best practices for applying our workflow to maximize efficiency in SMOs, we plan to conduct more empirical research in the field in the future.
Chapter
We present MSDBench – a set of benchmarks designed to illuminate the effects of deployment choices and operating system abstractions on microservices performance in IoT settings. The microservices architecture has emerged as a mainstay set of design principles for cloud-hosted, network-facing applications. Its utility as a design pattern for “The Internet of Things” (IoT) is less well understood. We use MSDBench to show the performance impacts of different deployment choices and isolation domain assignments for Linux and Ambience, an experimental operating system specifically designed to support microservices for IoT. These results indicate that deployment choices can have a dramatic impact on microservices performance, and thus, MSDBench is a useful tool for developers and researchers in this space.
Article
Full-text available
Microservice architecture (MSA) denotes an increasingly popular architectural style in which business capabilities are wrapped into autonomously developable and deployable software components called microservices. Microservice applications are developed by multiple DevOps teams, each owning one or more services. In this article, we explore the state of how DevOps teams in small and medium-sized organizations (SMOs) cope with MSA and how they can be supported. We show through a secondary analysis of an exploratory interview study comprising six cases that the organizational and technological complexity resulting from MSA poses particular challenges for small and medium-sized organizations (SMOs). We apply model-driven engineering to address these challenges. As results of the secondary analysis, we identify the challenge areas of building and maintaining a common architectural understanding, and dealing with deployment technologies. To support DevOps teams of SMOs in coping with these challenges, we present a model-driven workflow based on LEMMA—the Language Ecosystem for Modeling Microservice Architecture. To implement the workflow, we extend LEMMA with the functionality to (i) generate models from API documentation; (ii) reference remote models owned by other teams; (iii) generate deployment specifications; and (iv) generate a visual representation of the overall architecture. We validate the model-driven workflow and our extensions to LEMMA through a case study showing that the added functionality to LEMMA can bring efficiency gains for DevOps teams. To develop best practices for applying our workflow to maximize efficiency in SMOs, we plan to conduct more empirical research in the field in the future.
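As a rough illustration of the "generate deployment specifications" step, the following sketch derives a Docker-Compose-style specification from a minimal, hand-rolled service model. LEMMA uses its own modeling languages and code generators; the structs, service names and images below are hypothetical and only mimic the general idea.

```go
// Illustrative only: deriving a container deployment specification from a
// minimal, hand-rolled service model. This does not use LEMMA's notation;
// it merely mimics the idea with plain Go structs and Compose-style output.
package main

import "fmt"

// ServiceModel is a hypothetical, drastically simplified stand-in for a
// modeled microservice (name, container image, exposed port).
type ServiceModel struct {
    Name  string
    Image string
    Port  int
}

func main() {
    services := []ServiceModel{
        {Name: "order-service", Image: "example/order:1.0", Port: 8080},
        {Name: "billing-service", Image: "example/billing:1.0", Port: 8081},
    }

    // Emit a Docker-Compose-style deployment specification.
    fmt.Println("services:")
    for _, s := range services {
        fmt.Printf("  %s:\n", s.Name)
        fmt.Printf("    image: %s\n", s.Image)
        fmt.Printf("    ports:\n")
        fmt.Printf("      - \"%d:%d\"\n", s.Port, s.Port)
    }
}
```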
Chapter
Model-driven Development (MDD) is an approach to software engineering that aims to enable analysis, validation, and code generation of software on the basis of models expressed with dedicated modeling languages. MDD is particularly useful in the engineering of complex, possibly distributed software systems. It is therefore sensible to investigate the adoption of MDD to support and facilitate the engineering of distributed software systems based on Microservice Architecture (MSA). This chapter presents recent insights from studying and developing two approaches for employing MDD in MSA engineering. The first approach uses a graphical notation to model the topology and interactions of MSA-based software systems. The second approach emerged from the first one and exploits viewpoint-based modeling to better cope with MSA's inherent complexity. It also considers the distributed nature of MSA teams, as well as the technology heterogeneity introduced by MSA adoption. Both approaches are illustrated and discussed in the context of a case study. Moreover, we present a catalog of research questions for subsequent investigation of employing MDD to support and facilitate MSA engineering.
Preprint
Full-text available
Context. Re-architecting monolithic systems with Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, making such an important decision like re-architecting an entire system must be based on real facts and not only on gut feelings. Objective. The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method. We designed this study with a mixed-methods approach combining a Systematic Mapping Study with a survey done in the form of interviews with professionals to derive the assessment framework based on Grounded Theory. Results. We identified a set consisting of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information.
Article
Web service compositions have been widely applied in different applications. A service composition is usually implemented in either a centralized or a decentralized manner. Compared with a centralized service composition, a decentralized composition has no central control component, and its components interact with each other directly, thereby achieving better performance. Process partitioning is a technique that divides a process into multiple parts, and it has been shown that it can be successfully applied to decentralizing process-driven service compositions. This paper proposes a new process partitioning technique for constructing decentralized service compositions. The proposed technique, which is based on typed digraphs and a graph transformation technique, is used for exploring available process partitioning solutions. For applications, this paper discusses the topology and interaction features of the partitioning solutions and summarizes a ranking method for them. Three experiments are conducted to evaluate the proposed methods. The experimental results show that the proposed methods can be applied to constructing decentralized service compositions effectively. In addition, the results also show that the decentralized compositions can have lower average response times and higher throughputs than the corresponding centralized compositions in the experiments.
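The following sketch illustrates why the choice of partitioning solution matters for performance: a composition is modeled as a directed task graph, tasks are assigned to partitions, and the edges crossing partition boundaries (each implying remote communication) are counted. The typed-digraph and graph-transformation machinery of the paper is not reproduced here; task names and assignments are invented.

```go
// Rough illustration of partitioning impact: model a composition as a
// directed task graph, assign tasks to partitions, and count the edges that
// cross partition boundaries (each such edge implies remote communication).
package main

import "fmt"

type edge struct{ from, to string }

// crossPartitionEdges counts edges whose endpoints live in different partitions.
func crossPartitionEdges(edges []edge, partition map[string]int) int {
    count := 0
    for _, e := range edges {
        if partition[e.from] != partition[e.to] {
            count++
        }
    }
    return count
}

func main() {
    edges := []edge{
        {"receiveOrder", "checkStock"},
        {"checkStock", "reserveItems"},
        {"reserveItems", "charge"},
        {"charge", "ship"},
    }
    // Two candidate assignments of tasks to decentralized partitions.
    a := map[string]int{"receiveOrder": 0, "checkStock": 0, "reserveItems": 1, "charge": 1, "ship": 1}
    b := map[string]int{"receiveOrder": 0, "checkStock": 1, "reserveItems": 0, "charge": 1, "ship": 0}

    fmt.Println("assignment a crosses:", crossPartitionEdges(edges, a)) // 1 remote hop
    fmt.Println("assignment b crosses:", crossPartitionEdges(edges, b)) // 4 remote hops
}
```

A ranking method like the one summarized in the paper would prefer solutions with fewer and cheaper cross-partition interactions, which is what the lower response times of the decentralized compositions reflect.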
Conference Paper
Full-text available
The capability to operate cloud-native applications can create enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native applications and for vendor lock-in-aware enterprise architecture engineering methodologies.
Conference Paper
Full-text available
Companies like Netflix, Google, Amazon, and Twitter have successfully exemplified elastic and scalable microservice architectures for very large systems. Microservice architectures are often realized by deploying services as containers on container clusters. Containerized microservices often use lightweight and REST-based communication mechanisms. However, this lightweight communication is often routed by container clusters through heavyweight software-defined networks (SDN). Services are often implemented in different programming languages, adding additional complexity to a system, which might end in decreased performance. Astonishingly, it is quite complex to figure out these impacts ahead of a microservice design process due to missing specialized benchmarks. This contribution proposes a benchmark intentionally designed for this microservice setting. We advocate that it is more useful to reflect fundamental design decisions and their performance impacts at the beginning of microservice architecture development and not in the aftermath. We present some findings regarding the performance impacts of some TIOBE TOP 50 programming languages (Go, Java, Ruby, Dart), containers (Docker as type representative) and SDN solutions (Weave as type representative).
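The counterpart to the measurement client shown above is a deliberately trivial echo service. A minimal Go version might look as follows; such a service would be deployed natively, inside a Docker container and behind an SDN such as Weave to compare the three setups. The route and port are illustrative assumptions.

```go
// Minimal echo service: the kind of trivial REST endpoint a ping-pong style
// benchmark would deploy natively, in a container, and behind an overlay
// network to compare the setups. Route and port are illustrative.
package main

import (
    "io"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/echo", func(w http.ResponseWriter, r *http.Request) {
        defer r.Body.Close()
        // Write the request body straight back to the caller.
        if _, err := io.Copy(w, r.Body); err != nil {
            log.Println("echo failed:", err)
        }
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}
```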
Article
Full-text available
In previous work, we concluded that container technologies and overlay networks typically have negative performance impacts, mainly due to the additional networking layer they introduce. This is what everybody would expect; only the degree of impact might be questionable. These negative performance impacts can be accepted (if they stay moderate), due to the better flexibility and manageability of the resulting systems. However, we drew our conclusion only from data covering small-core machine types. This extended work additionally analyzed the impact of various (high-core) machine types of different public cloud service providers (Amazon Web Services, AWS, and Google Compute Engine, GCE) and comes to a more differentiated view and some astonishing results for high-core machine types. Our findings suggest that a simple and cost-effective strategy is to operate container clusters with highly similar high-core machine types (even across different cloud service providers). This strategy should appropriately cover the major relevant and complex data-transfer-rate-reducing effects of containers, container clusters and software-defined networks.
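A small sketch of how such relative data-transfer-rate losses can be expressed: the loss of a containerized and an SDN-routed setup is reported relative to a bare reference, per machine type. All numbers are placeholders for illustration, not measurements from the article.

```go
// Toy calculation of relative data-transfer-rate loss per machine type.
// The machine names and rates below are invented placeholders, not data
// from the article.
package main

import "fmt"

// loss returns the percentage by which measured falls short of reference.
func loss(reference, measured float64) float64 {
    return (1 - measured/reference) * 100
}

func main() {
    // transfer rates in MByte/s: [bare, container, container+SDN] (placeholders)
    machines := map[string][3]float64{
        "small-core (hypothetical)": {450, 410, 330},
        "high-core (hypothetical)":  {980, 950, 610},
    }
    for name, r := range machines {
        fmt.Printf("%s: container loss %.1f%%, container+SDN loss %.1f%%\n",
            name, loss(r[0], r[1]), loss(r[0], r[2]))
    }
}
```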
Article
Full-text available
Cloud service selection can be a complex and challenging task for a cloud engineer. Most current approaches try to identify the best cloud service provider by evaluating several relevant criteria like prices, processing, memory, disk, network performance, quality of service and so on. Nevertheless, the decision-making problem involves so many variables that it is hard to model it appropriately. We present an approach that is not about selecting the best cloud service provider; it is about selecting the most similar resources provided by different cloud service providers. This fits the practical needs of cloud service engineers much better, especially if container clusters are involved. EasyCompare, an automated benchmarking tool suite to compare cloud service providers, is able to benchmark and compare virtual machine types of different cloud service providers using a Euclidean distance measure. It turned out that only 1% of the theoretically possible machine pairs have to be considered in practice. These relevant machine types can be identified by systematic benchmark runs in less than three hours. We present some expected but also some astonishing evaluation results of EasyCompare, used to evaluate two major and representative public cloud service providers: Amazon Web Services and Google Compute Engine.
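The similarity measure itself is straightforward to sketch: each virtual machine type is described by a vector of normalized benchmark scores, and the Euclidean distance between two vectors expresses how similar the machine types are. The feature names and values below are invented for illustration; EasyCompare's actual feature set and normalization are described in the article.

```go
// Sketch of the core comparison idea: machine types as vectors of normalized
// benchmark scores, similarity as Euclidean distance. All values are invented.
package main

import (
    "fmt"
    "math"
)

// euclidean assumes both vectors have the same length.
func euclidean(a, b []float64) float64 {
    sum := 0.0
    for i := range a {
        d := a[i] - b[i]
        sum += d * d
    }
    return math.Sqrt(sum)
}

func main() {
    // normalized scores, e.g. [cpu, memory, disk, network] in [0,1] (hypothetical)
    awsMachine := []float64{0.55, 0.60, 0.40, 0.50}
    gceSmall := []float64{0.57, 0.58, 0.45, 0.48}
    gceLarge := []float64{0.90, 0.85, 0.70, 0.65}

    fmt.Printf("aws vs gce (small): %.3f\n", euclidean(awsMachine, gceSmall))
    fmt.Printf("aws vs gce (large): %.3f\n", euclidean(awsMachine, gceLarge))
    // The pair with the smaller distance is the more similar one.
}
```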
Article
Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.
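As a toy illustration of the task-packing aspect mentioned in the abstract, the following best-fit sketch places each task on the machine that leaves the least free CPU behind. Borg's real scheduler combines scoring, over-commitment, admission control and many other constraints; the machines and task sizes here are invented.

```go
// Toy best-fit task packing: place each task on the machine that leaves the
// least free CPU behind. This is not Borg's actual algorithm; all numbers
// are invented for illustration.
package main

import (
    "fmt"
    "math"
)

type machine struct {
    name    string
    freeCPU float64
}

// bestFit returns the index of the machine with the tightest fit, or -1 if none fits.
func bestFit(machines []machine, taskCPU float64) int {
    best := -1
    bestLeft := math.MaxFloat64
    for i, m := range machines {
        left := m.freeCPU - taskCPU
        if left >= 0 && left < bestLeft {
            best, bestLeft = i, left
        }
    }
    return best
}

func main() {
    cluster := []machine{{"m1", 4.0}, {"m2", 2.5}, {"m3", 8.0}}
    for _, task := range []float64{2.0, 3.5, 1.0} {
        i := bestFit(cluster, task)
        if i < 0 {
            fmt.Printf("task %.1f CPU: no machine fits\n", task)
            continue
        }
        cluster[i].freeCPU -= task
        fmt.Printf("task %.1f CPU -> %s (%.1f CPU left)\n", task, cluster[i].name, cluster[i].freeCPU)
    }
}
```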