Eötvös Loránd University
Faculty of Informatics
Dept. of Programming Languages and Compilers
Optimizing Containerized Spring Boot
Microservices in Kubernetes: Development,
Experimentation, and Performance Analysis
Supervisor: Author:
Kitlei Róbert László Zeyad AbouShanab
Assistant Lecturer Software and Service Architectures MSc
Budapest, 2024
Contents

Acknowledgment 5
Abstract 6
1 Introduction 7
1.1 Background and motivation 7
1.2 Problem statement 8
1.3 Objectives and scope of the thesis 10
1.3.1 Objective 10
1.3.2 Scope 11
1.4 Research Methodology 12
1.4.1 Research methods 12
1.4.2 Technologies employed 12
1.4.3 General Structure of the Thesis 12
1.4.4 Timeline or Schedule 13
2 Literature Review 14
2.1 Microservices architecture 14
2.1.1 Introduction to microservices 14
2.2 Overview of containerization and orchestration technologies 16
2.2.1 Containerization 16
2.2.2 Docker 17
2.2.3 Orchestration 18
2.3 Previous research on containerized microservices and Kubernetes 19
2.3.1 Migration to microservices 19
2.3.2 Microservices and Containers 20
2.3.3 Deployment using Kubernetes 20
2.4 Related work on optimizing Spring Boot applications in Kubernetes 22
2.4.1 Kubernetes Scalability 22
2.4.2 Microservices Monitoring and Testing 22
2.4.3 Resource Allocation 23
2.4.4 Load-balancing 23
2.4.5 Caching methodologies 23
3 Technologies and Tools 24
3.1 Introduction to the Spring Boot framework 25
3.1.1 Spring Core Container 26
3.1.2 Spring Data Access/Integration 26
3.1.3 Spring Cloud 26
3.1.4 Spring Web MVC 27
3.1.5 Maven 28
3.2 Containerization and Orchestration technologies 28
3.2.1 Docker Overview 28
3.2.2 Introduction to Kubernetes 29
3.3 Overview of other tools and technologies used in the project 31
3.3.1 Service Discovery and API Gateway 31
3.3.2 Event-Driven Architecture 31
3.3.3 Resilience and Fault Tolerance 31
3.3.4 Security 31
3.3.5 Monitoring and Observability 32
3.3.6 Databases 32
3.3.7 Testing 32
4 Development Phase 33
4.1 Environment Setup and Configuration 33
4.2 Microservices creation 34
4.2.1 Product Service 34
4.2.2 Order Service 39
4.2.3 Inventory Service 41
4.3 Adding all projects together 43
4.4 Communication between services 44
4.4.1 Configuration 44
4.5 Service discovery 45
4.5.1 Creation & configuration of the project 45
4.5.2 Registration of the other services 46
4.5.3 Eureka server dashboard 46
4.6 API Gateway 47
4.6.1 Creating the project 47
4.6.2 Configuration 47
4.7 Securing our services 48
4.7.1 Setting up Keycloak 48
4.7.2 Keycloak realms 49
4.7.3 Configuring the gateway 49
4.8 Implementing Circuit breaker pattern 50
4.8.1 What is Circuit breaker pattern? 50
4.8.2 Adding Resilience4j to the project 51
4.9 Distributed tracing 53
4.10 Event driven architecture 54
4.10.1 Configuring the order-service 54
4.10.2 Notification service 55
4.11 Monitoring and Performance Tuning 55
5 Containerization and Deployment 57
5.1 Dockerization of Microservices 57
5.1.1 Dockerfile Approach 57
5.1.2 Jib Approach 59
5.2 Orchestration with Docker Compose 62
5.2.1 Creating a Docker Compose file 62
5.2.2 Running the containers 63
5.3 Orchestration with Kubernetes 64
5.3.1 Introduction to Kubernetes 64
5.3.2 Kubectl 65
5.3.3 Deployments 65
5.3.4 Running our deployment 67
5.3.5 Creating Kubernetes service 67
5.3.6 Persistent Volume Claim 68
5.3.7 Orchestration for the whole application 69
5.4 Comparison between Docker Compose and Kubernetes 70
6 Optimization Techniques 71
6.1 Resource allocation 72
6.1.1 Resource Requests and Limits 73
6.2 Load balancing mechanisms 73
6.2.1 Load balancing in Spring Boot 73
6.2.2 Load-balancing in Kubernetes 74
6.3 Caching mechanisms 75
6.3.1 Spring Boot Caching 75
6.3.2 Caching in container technologies 76
6.4 Dynamic scaling strategies 77
6.4.1 Horizontal Pod Autoscaling (HPA) 78
7 Results and Evaluation 79
7.1 Metrics used for evaluating system performance 80
7.1.1 Latency 80
7.1.2 Scalability 80
7.1.3 Resource utilization 81
7.1.4 Fault Tolerance 81
7.1.5 Conclusion 82
7.2 Impact of software optimization techniques 84
7.2.1 Enhancing System Performance 84
7.2.2 Boosting Revenue Generation 84
7.2.3 User Satisfaction 84
7.2.4 Empowering Developers 84
7.2.5 Facilitating Scalability 84
7.2.6 Driving User Acquisition 85
7.2.7 Real-world examples 85
8 Conclusion 87
8.1 Summary of key findings 87
8.2 Contributions to the field 88
8.3 Limitations and future work 88
Bibliography 89
List of Figures 92
List of Codes 93
Project links 95
Acknowledgment
I would like to express my deepest gratitude to the professors and staff members
of the ELTE IK for their invaluable support and guidance throughout my master’s
degree program.
Over the past two years, their dedication to teaching and mentorship has been
crucial in shaping my academic journey and helping my professional growth. Their
expertise, passion, and commitment have inspired me to push the boundaries of my
knowledge.
I am truly grateful for the opportunities they have provided me to expand my
understanding of Software Development and related fields. Their encouragement
and constructive feedback have challenged me to think critically, solve complex
problems, and pursue innovative ideas.
As I move on to the next phase of my journey, I carry with me the invaluable lessons
and experiences that I have gained from my time at ELTE. I am confident that the
knowledge and skills I have acquired under their guidance will continue to serve me
well in my future challenges.
I am deeply grateful to each and every professor at the ELTE IK for their
contributions to my academic and personal development. I will always cherish the
memories of our interactions and the lasting impact they have had on my life.
Thank you for your support, encouragement, and dedication. I am honored to
have had the opportunity to learn from you, and I will always be thankful for the
valuable knowledge and wisdom you have imparted to me.
Sincerely,
Zeyad AbouShanab
Abstract
This thesis aims to develop, containerize, orchestrate, monitor and optimize a
microservices-based project, exploring the effects of various optimization techniques.
It focuses on a small-scale e-commerce ecosystem composed of interconnected mi-
croservices, with efficient communication, security, resilience, and observability.
Initially, we focus on developing microservices handling e-commerce tasks like
ordering, product listing, inventory, and notifications. We use basic inter-service
communication methods such as RESTful APIs and Kafka messaging to ensure
smooth data flow and real-time updates across the system.
For service discovery, we integrate Netflix’s Eureka, enabling inter-communication
between microservices. Spring Cloud Gateway, with Keycloak for security, acts as
the system’s gateway, for authentication, authorization, and access control.
To enhance system resilience, we implement Resilience4j’s circuit breaker pattern,
handling faults and preventing system-wide failures. Distributed tracing via Zipkin
provides insight into request flows, supporting effective troubleshooting and
performance optimization.
System health and performance are monitored with Prometheus and Grafana.
Prometheus gathers metrics from Dockerized microservices, while Grafana offers
visualizations and alerts for proactive system management and optimization.
With microservices containerized in Docker, Kubernetes is used for orchestration
and scaling. Kubernetes automates deployment, load balancing, and resource
management, providing a stable platform for the ecosystem to operate efficiently in
dynamic environments.
The thesis presents practical insights and methodologies for developing, orchestrat-
ing, deploying, optimizing and managing small microservices-based e-commerce sys-
tems. It shows the significance of efficient communication, security measures, system
resilience, and observability in modern software architectures.
Chapter 1
Introduction
1.1 Background and motivation
This section offers an overview of microservices architecture, containerization, and
orchestration, the foundations of this thesis research. In modern software
development, the adoption of microservices architecture has marked a significant
shift, enabling organizations to build scalable, resilient, and agile systems. A 2020
survey by O'Reilly found that 61% of organizations were using microservices
in one or more of their products.
Microservices architecture focuses on the decomposition of monolithic applications
into smaller, independently deployable services, each focused on a specific business
domain. This approach promotes flexibility, allowing teams to develop, deploy, and
scale services autonomously, and encourages reusability, loose coupling, and decomposition.
Containerization, too, has become a game-changing technology for packaging,
distributing, and running applications consistently across different environments.
Containers encapsulate dependencies, ensuring portability and reproducibility
while simplifying deployment workflows. Docker, in particular, has transformed
containerization, offering a lightweight, efficient, and standard platform for building
and managing containerized applications, allowing developers to shift from virtual
machines to containers.
Furthermore, orchestrating containerized applications at scale requires efficient
management and coordination mechanisms. Kubernetes, an open-source container
orchestration platform, has become the de facto standard for automating the
deployment, scaling, and management of containerized services. Kubernetes
provides a declarative approach to infrastructure management, abstracting away the
underlying complexity and enabling easy deployment and scaling of microservices
across clusters of machines.
Within this domain, the motivation for this thesis research becomes clear. As or-
ganizations increasingly shift to microservices architecture and Kubernetes orches-
tration, there is a need to explore and optimize the deployment, management, and
performance of microservices-based systems. By addressing this need, the research
contributes valuable insights and methodologies that enhance the efficiency,
scalability, and resilience of modern software architectures.
1.2 Problem statement
Software development is characterized by an array of challenges and pain points
across the stages of the development and deployment lifecycle. Addressing
these challenges is crucial for ensuring the efficiency, reliability, and security of soft-
ware systems. This thesis aims to investigate and propose solutions for the following
challenges:
Tight Coupling In microservices architecture, tight coupling refers to strong
dependencies between services, often arising when service endpoints or
dependencies are hard-coded. Tight coupling makes it difficult to modify or
replace services without impacting others, hindering flexibility and
maintainability in a dynamic microservices environment.
Routing, Load Balancing and Asynchronous Communication
Microservices architecture requires efficient routing of requests to appropri-
ate services, load balancing to evenly distribute traffic across service instances,
and support for asynchronous communication to improve responsiveness. Without
proper routing, load balancing, and asynchronous communication mechanisms,
microservices may face issues with scalability, performance, and fault tolerance.
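To ground the routing concern: gateways such as Spring Cloud Gateway (adopted later in this thesis) can express routing and client-side load balancing declaratively. The following route definition is a sketch only; the service name and path are illustrative, not taken from the project:

```yaml
# Sketch of a Spring Cloud Gateway route (illustrative names).
spring:
  cloud:
    gateway:
      routes:
        - id: product-service
          # lb:// delegates to the client-side load balancer, which resolves
          # service instances through service discovery (e.g. Eureka).
          uri: lb://product-service
          predicates:
            # Requests whose path matches are routed to this service.
            - Path=/api/product/**
```

With routes declared this way, adding or moving a service instance requires no change to callers; the gateway and the discovery registry absorb the difference.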
Fault Tolerance and Resilience Microservices are inherently distributed sys-
tems, and failures can occur at any point in the system. Fault tolerance and
resilience are crucial for ensuring that the system can continue to operate despite
failures. Microservices architectures employ techniques such as circuit breakers to
handle failures gracefully and prevent them from cascading through the system,
ensuring high availability and reliability.
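The circuit breaker idea can be illustrated with a deliberately minimal, framework-free sketch (the project itself uses Resilience4j; the class and method names below are hypothetical, invented for illustration):

```java
import java.util.function.Supplier;

/**
 * Minimal circuit breaker sketch: after a threshold of consecutive failures
 * the circuit opens, and further calls short-circuit to a fallback instead
 * of hitting the failing downstream service.
 */
public class CircuitBreaker {
    public enum State { CLOSED, OPEN }

    private final int failureThreshold; // consecutive failures before opening
    private int consecutiveFailures = 0;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public State state() {
        return state;
    }

    /** Run the protected call, falling back when it fails or the circuit is open. */
    public <T> T call(Supplier<T> protectedCall, Supplier<T> fallback) {
        if (state == State.OPEN) {
            // Short-circuit: do not touch the failing service at all.
            return fallback.get();
        }
        try {
            T result = protectedCall.get();
            consecutiveFailures = 0; // success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                state = State.OPEN; // stop cascading load onto the failing service
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        Supplier<String> failingInventoryCall =
                () -> { throw new RuntimeException("inventory-service unreachable"); };
        for (int i = 1; i <= 5; i++) {
            String response = breaker.call(failingInventoryCall, () -> "cached fallback");
            System.out.println("call " + i + ": " + response + " (state=" + breaker.state() + ")");
        }
    }
}
```

Production libraries such as Resilience4j build on this core idea with half-open probing, call timeouts, sliding-window failure rates, and metrics.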
Security Concerns In a microservices architecture, security concerns arise from
the distributed nature of the system and the need to secure communication be-
tween services. Without proper security measures, such as secure authentication,
authorization, and encryption of data in transit and at rest, microservices archi-
tectures are vulnerable to various security threats, including unauthorized access,
data breaches, and data manipulation.
Distributed tracing and Observability Microservices architectures consist of
multiple interconnected services that communicate with each other to fulfill user
requests. Distributed tracing and observability tools are essential for monitoring
the health, performance, and behavior of microservices. They provide visibility
into the flow of requests across services, help identify performance bottlenecks,
and facilitate effective debugging and optimization of the system.
Dynamic Scaling Challenges Microservices architectures are designed to be
scalable, allowing services to scale up or down dynamically based on demand.
However, dynamically scaling microservices while ensuring efficient communica-
tion and service discovery can be challenging. Without proper mechanisms in
place, such as service discovery and load balancing, microservices may face issues
with scalability, reliability, and performance.
Data Management Challenges Microservices architectures often lead to dis-
tributed data management challenges due to the decentralized nature of the sys-
tem. Each microservice may have its own database or data store, leading to is-
sues such as data consistency, integrity, scalability, and efficient access. Without
proper data management strategies, microservices may face difficulties in main-
taining data consistency across services, ensuring data integrity, scaling databases
to handle increasing loads, and efficiently accessing and querying data across ser-
vice boundaries.
Deployment and Orchestration Deployment and orchestration in a microser-
vices architecture involve automating the deployment, scaling, and management of
microservices across various environments, such as development, testing, staging,
and production. Microservices need to be deployed and orchestrated efficiently
to ensure consistent environments, service discovery, configuration management,
and lifecycle management. Without proper deployment and orchestration tools
and practices, microservices may face challenges in managing dependencies, han-
dling versioning, scaling services, and ensuring high availability and reliability
across the system.
1.3 Objectives and scope of the thesis
1.3.1 Objective
The objective of this thesis is to investigate the design, development, and deployment
of a microservices architecture for an e-commerce platform. Specifically, the research
aims to address the following objectives:
Architectural Design: Analyze the principles and best practices of microservices
architecture and design a scalable and resilient architecture tailored for an e-
commerce platform. This includes defining services, communication protocols, and
data management strategies.
Implementation: Develop and implement key microservices components for the
e-commerce platform, including product management, order processing, inventory
management, notification services, and service discovery.
Scalability and Performance: Evaluate the scalability and performance char-
acteristics of the microservices architecture under varying workload conditions.
Investigate techniques for horizontal scaling, load balancing, and optimizing ser-
vice communication to ensure responsiveness and reliability.
Fault Tolerance and Resilience: Assess the fault tolerance and resilience of
the microservices architecture against failures and outages. Implement fault tol-
erance mechanisms such as circuit breakers, retry policies, and fallback strategies
to mitigate the impact of failures and ensure system stability.
Security and Compliance: Address security concerns related to authentication,
authorization, data protection, and secure communication between microservices.
Implement security measures such as encryption, token-based authentication, and
role-based access control to protect sensitive data and prevent unauthorized ac-
cess.
Observability and Monitoring: Implement distributed tracing, logging, and
monitoring solutions to provide visibility into the performance and behavior of
microservices. Utilize tools such as Zipkin, Prometheus, and Grafana to monitor
key metrics, detect anomalies, and troubleshoot issues in real-time.
Load Balancing: Investigate load balancing techniques optimized for Kubernetes
to evenly distribute incoming traffic across microservices instances. Implement
load-balancing strategies in the Spring Boot framework.
Resource Allocation: Explore resource allocation strategies for Kubernetes clusters
to optimize resource utilization and enhance system efficiency. Analyze techniques
for allocating CPU, memory, and storage resources dynamically based on workload
demands, ensuring optimal performance while minimizing resource wastage and
cost overheads.
Containerization: Explore containerization technologies such as Docker for
packaging microservices into lightweight, portable containers. Investigate the ben-
efits of containerization in terms of environment consistency, deployment agility,
and resource efficiency.
Orchestration: Study container orchestration platforms such as Kubernetes for
automating the deployment, scaling, and management of containerized applica-
tions. Evaluate Kubernetes’ features for service discovery, load balancing, self-
healing, and declarative configuration management.
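To make the declarative style named above concrete: the desired state of a service can be captured in a Deployment manifest like the sketch below (the names, image, and replica count are illustrative, not the project's actual configuration), and Kubernetes continuously reconciles the cluster toward that state:

```yaml
# Sketch of a Kubernetes Deployment in the declarative style (illustrative names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
spec:
  replicas: 2                 # desired state; self-healing restores it on failure
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
        - name: product-service
          image: example/product-service:1.0   # hypothetical image tag
          ports:
            - containerPort: 8080
```

Scaling then becomes a one-line change to `replicas` (or an autoscaler decision) rather than a manual provisioning step.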
1.3.2 Scope
The scope of this thesis covers the design, development, and deployment of a
microservices-based e-commerce platform, covering the following aspects:
Microservices Architecture: Design and implement a modular architecture
composed of independent microservices for product management, order process-
ing, inventory management, notification services, and service discovery.
Technological Stack: Utilize modern technologies and frameworks including
Spring Boot, Docker, Kubernetes, and related ecosystem tools for building, de-
ploying, and managing microservices.
Functional Requirements: Develop core functionalities for the e-commerce
platform, including product catalog management, order placement and fulfillment,
inventory tracking, user notifications, and service discovery.
Non-Functional Requirements: Address non-functional requirements such as
scalability, fault tolerance, security, observability, and compliance with industry
standards and best practices.
Evaluation and Validation: Evaluate the performance, scalability, fault tol-
erance, and security of the microservices architecture through experimentation,
benchmarking, and real-world testing scenarios.
Limitations: The research will focus primarily on the technical aspects of mi-
croservices architecture, containerization, and orchestration using Docker and
Kubernetes. Business-specific requirements such as pricing, promotions, and
customer management are considered beyond the scope of this thesis.
1.4 Research Methodology
In this section, we provide an overview of the approach undertaken to achieve the
objectives outlined in this thesis. The methodology covers the research methods
employed, the technologies utilized, and the general structure of this thesis.
1.4.1 Research methods
Our research methodology adopts a mixed-methods approach, combining both quan-
titative and qualitative techniques to comprehensively investigate the design, devel-
opment, and deployment of a microservices architecture for e-commerce applications.
Quantitative methods include empirical studies and performance evaluations to as-
sess scalability, fault tolerance, and security aspects of the architecture. Qualitative
methods, such as case studies and interviews, are employed to gather insights into
practical challenges and industry best practices.
1.4.2 Technologies employed
In this section, we discuss the modern technologies and frameworks used
for building, deploying, and managing microservices. Using the Spring Boot frame-
work, we develop the core microservices components for the e-commerce platform,
including product management, order processing, inventory management, notifica-
tion services, and service discovery. Containerization of microservices is facilitated
through Docker, providing lightweight and portable environments for deployment.
Kubernetes serves as the orchestration platform, automating deployment, scaling,
and management tasks to ensure healthy operation of containerized applications.
1.4.3 General Structure of the Thesis
The thesis is structured to provide a coherent narrative of the research process and
findings. Following the introduction, the literature review section offers a comprehen-
sive survey of existing literature and research relevant to microservices architecture
and e-commerce applications. The methodology section, presented herein, describes
the research methods, technologies employed, and the overall approach taken to
achieve the objectives. Subsequent chapters detail the implementation of the mi-
croservices architecture, containerization and deployment for each. The results and
analysis chapter presents the findings of the research, followed by a discussion of
their implications and comparisons with existing literature. Finally, the conclusion
summarizes key findings, contributions, limitations, and outlines future research di-
rections.
1.4.4 Timeline or Schedule
While not explicitly outlined in this section, a detailed timeline or schedule has been
developed to guide the research process, ensuring timely completion of each stage of
the thesis. This includes milestones for literature review, methodology development,
implementation, data collection, analysis, and thesis writing.
Chapter 2
Literature Review
2.1 Microservices architecture
2.1.1 Introduction to microservices
The microservice architectural style is an approach to developing a single applica-
tion as a suite of small services, each running in its own process and communicating
with lightweight mechanisms, often an HTTP resource API. These services are built
around business capabilities and independently deployable by fully automated de-
ployment machinery. There is a bare minimum of centralized management of these
services, which may be written in different programming languages and use different
data storage technologies. [1]
Figure 2.1: Microservices Architecture
Shifting to microservices
In today’s rapidly evolving digital market, businesses are increasingly turning to
microservices architecture as a modern approach to designing and deploying soft-
ware applications. Microservices architecture represents a departure from traditional
monolithic architectures, offering a more flexible and scalable alternative for building
complex systems.
The idea behind microservices
Microservices architecture is based on the principle of breaking down large, mono-
lithic applications into smaller, independently deployable services. Each service is
designed to perform a specific business function and operates as a standalone entity.
These services communicate with each other through well-defined APIs, enabling
easy interaction and collaboration within the application ecosystem.
Decoupling
One of the main features of microservices architecture is decoupling. Unlike mono-
lithic architectures, where all components are tightly integrated into a single code-
base, microservices architecture promotes loose coupling between services. This al-
lows individual services to be developed, deployed, and scaled independently, without
impacting other parts of the system.
Advantages over monolithic architecture
Microservices architecture offers several key advantages over traditional monolithic
architectures. Firstly, it enables greater agility and flexibility in software develop-
ment. By breaking down applications into smaller, manageable services, development
teams can iterate more quickly, release updates faster, and respond more effectively
to changing business requirements.
Resilience
Additionally, microservices architecture enhances scalability and resilience. Each
service can be scaled independently based on demand, allowing for better resource
utilization and improved performance. Furthermore, because services are isolated
from one another, failures in one service do not necessarily affect the entire applica-
tion, resulting in greater fault tolerance and system reliability.
Flexibility
Another benefit of microservices architecture is its support for technology diversity.
Unlike monolithic architectures, which typically use a single technology stack, mi-
croservices architectures allow teams to use different technologies and programming
languages for each service. This enables teams to choose the best tool for the job
without worrying about compatibility.
2.2 Overview of containerization and orchestration
technologies
2.2.1 Containerization
Introduction to Containerization
Containerization simplifies the way software applications are developed, deployed,
and managed by encapsulating them along with their dependencies into lightweight,
portable units called containers. Unlike traditional virtualization methods that vir-
tualize entire operating systems, containerization operates at the application level,
isolating applications from their environment while sharing the host OS kernel. This
approach offers greater efficiency, consistency, and portability across different envi-
ronments.
Role of Containers in Modern Software Development
Containers serve as the building blocks of modern software development, offering
several critical functions:
Isolation: Containers provide a high degree of isolation, ensuring that applica-
tions run consistently regardless of the underlying infrastructure. Each container
encapsulates its runtime environment, dependencies, and libraries, reducing con-
flicts and ensuring application stability.
Portability: Containers are highly portable, enabling developers to package ap-
plications once and run them anywhere, whether it’s on a developer’s laptop, in a
test environment, or in production. This portability streamlines the development
process and facilitates consistent deployment across different environments.
Resource Efficiency: Containers are lightweight and efficient, consuming fewer
resources than traditional virtual machines. They start up quickly, use minimal
memory and CPU, and can be scaled up or down rapidly to meet fluctuating
demand, resulting in improved resource utilization and cost efficiency.
The following figure shows how the architecture of containerized applications
compares with that of virtual machines:
Figure 2.2: Virtual machines vs Containers
2.2.2 Docker
Introduction to Docker
Docker is the leading containerization platform that popularized containerization
and standardized container workflows. It provides a comprehensive set of tools and
services for building, managing, and deploying containers, making containerization
accessible to developers and organizations of all sizes.
Role of Docker in Containerization
Docker simplifies the containerization process by offering a user-friendly interface
and a powerful set of command-line tools. One of the main components in Docker
is Docker Engine, the container runtime that enables developers to create, run, and
manage containers on any infrastructure, whether it’s a developer’s local machine,
a data center, or the cloud. Docker also provides a centralized registry, Docker Hub,
where users can discover, share, and collaborate on container images.
With Docker, developers focus on their applications running inside containers, while
operations teams focus on managing those containers. Docker is designed to enhance con-
sistency by ensuring that the environment in which your developers write code matches
the environments into which your applications are deployed. This reduces the risk
of "worked in dev, now an OPS problem." [2]
Advantages of Docker
Docker offers numerous advantages for developers, operations teams, and organiza-
tions:
Developer Productivity: Docker streamlines the development workflow by pro-
viding a consistent environment for building, testing, and deploying applications.
Developers can package their applications and dependencies into portable con-
tainers, ensuring that they run consistently across different environments.
Standardization: Docker promotes standardization by providing a common plat-
form for packaging and distributing applications. Developers can use Docker to
create immutable container images that contain everything needed to run an ap-
plication, from the code to the runtime environment and dependencies.
Ecosystem Support: Docker has a strong ecosystem of pre-built images, plugins,
and third-party tools that extend its functionality and support various use cases.
Developers can make use of this ecosystem to accelerate development, improve
security, and enhance operational efficiency.
2.2.3 Orchestration
Introduction to Orchestration
As organizations adopt containerization at scale, the need for orchestration becomes
increasingly important. Orchestration is the process of automating the deployment,
scaling, and management of containerized applications, ensuring that they run reli-
ably and efficiently in production environments.
Role of Orchestration in Containerized Environments
Orchestration tools like Kubernetes provide a platform for managing containerized
applications at scale. They automate tasks such as deployment, scaling, load bal-
ancing, and health monitoring, enabling organizations to deploy and manage ap-
plications with confidence. Kubernetes, in particular, offers a rich set of features
for orchestrating containers, including declarative configuration, service discovery,
self-healing, and rolling updates.
Advantages of Orchestration with Kubernetes
Kubernetes offers several advantages for organizations looking to deploy container-
ized applications in production:
Scalability: Kubernetes enables organizations to scale applications efficiently by
automatically provisioning and managing resources based on demand. It ensures
that applications have the resources they need to handle increased traffic and
maintain performance and reliability.
Fault Tolerance: Kubernetes enhances application reliability by automatically
detecting and recovering from failures. It monitors the health of containers and
nodes, automatically restarting failed containers, and rescheduling workloads to
healthy nodes to ensure high availability.
Resource Optimization: Kubernetes optimizes resource utilization by intelli-
gently scheduling containers across the cluster. It dynamically adjusts resource
allocation based on application requirements, ensuring that resources are used
efficiently and cost-effectively.
Kubernetes is such a container orchestration platform: you can think of it as an
operating system for containers. It automates the deployment, scaling, and operation
of containerized applications across a cluster of host machines. [3]
2.3 Previous research on containerized microser-
vices and Kubernetes
In recent years, there has been considerable research into containerized microservices
and Kubernetes. Many organizations are adopting these technologies to keep up
with modern software development demands, which has led to a growing interest in
understanding the challenges, best practices, and emerging trends in this area.
This section provides an overview of existing literature and research studies
on containerized microservices and Kubernetes.
2.3.1 Migration to microservices
In Tiago Costa Santos’ thesis, "Adopting Microservices: Migrating an HR tool from
a monolithic architecture" [4], the process of migrating an application from a mono-
lithic architecture to a microservices-based approach was explored in depth. The
goals of the thesis included executing the migration process, documenting challenges,
proposing solutions, and evaluating the resulting product using well-defined metrics.
The thesis discussed the significance of good practices and modular software design
in simplifying the migration process. Santos and the team emphasized the value
of feedback mechanisms and the Strangler Application pattern in facilitating a
smooth migration to a microservices-based architecture.
In addition, Kasper Stenroos’ thesis, "Microservices in Software Development" [5],
presents the advantages and disadvantages of both monolithic and microservices
architecture models, along with examples of how microservices can be used in
Agile software development. He concluded that microservices architecture is likely
to evolve further and become even more popular, especially among companies that
have so far used a monolithic approach for new projects.
2.3.2 Microservices and Containers
Miina Koskinen’s thesis, "Microservices and Containers: Benefits and Best
Practices" [6], aimed to uncover the genuine advantages of microservices architecture
and explore optimal development practices with microservices and containers.
It revealed several benefits of microservices; however, she noted that these benefits
entail challenges, such as managing diverse technologies and ensuring team compe-
tency.
On the other hand, in Jon Mukaj’s thesis [7], an effective implementation of Kubernetes
and microservices was demonstrated through the deployment of Apache Airflow as
a platform for provisioning data workflows. This involved using Kubernetes with
various tools such as Terraform for Infrastructure as Code. The paper discussed
containerization’s advantages and challenges compared to traditional virtualization,
Docker’s importance in development and deployment, and Kubernetes’ orchestration
capabilities. Containerization best practices, such as image management and CI/CD,
are also outlined.
2.3.3 Deployment using Kubernetes
Lenard Jensen’s research, "Resilient System Deployment Using Kubernetes"
[8], focuses on explaining the different aspects of Kubernetes, the parts and
objects that appear during deployment, and whether they can be used to make
the software more resilient. It introduces the platform to the reader and helps
them understand precisely what Kubernetes does. It also includes an introduction
to DevOps and the motivation for using Kubernetes during the DevOps process.
In addition, Jesus Lopez’s thesis [9] explored and implemented microservices
deployment technologies based on containers, offering insights from the perspective
of an expanding organization new to containerization. It traced the organiza-
tion’s journey from traditional app development to contemporary containerized
approaches, starting with small-scale projects and progressing as requirements
evolved. The focus was on analyzing and evaluating Docker and Kubernetes,
alongside various implementation methods and additional market options.
Miika Moilanen [10] researched containerization technologies using Docker and
Kubernetes. His main objective was to create a Dockerfile forming the base image
for the deployment, build a Docker image of the Huginn application from it, and
deploy that image on Kubernetes.
2.4 Related work on optimizing Spring Boot appli-
cations in Kubernetes
2.4.1 Kubernetes Scalability
Kim Lehtinen [11] and Joshua Steinmann [12] explored Kubernetes scalability in
their respective theses. They investigated methods for scaling containerized applica-
tions and the Kubernetes cluster itself. Additionally, both conducted load testing on
sample applications to assess scalability, with a focus on horizontal scaling for opti-
mal performance. While Kim’s research emphasized the feasibility of both vertical
and horizontal scaling, Joshua proposed an alternative approach to Kubernetes clus-
ter scaling, addressing limitations in the default procedure. Joshua’s method aims
to ensure predictable behavior during cluster size reduction by setting thresholds for
the entire cluster’s resource utilization, introducing the concept of usable resources
for more accurate resource management.
2.4.2 Microservices Monitoring and Testing
Dinesh Rajput’s book "Hands-On Microservices Monitoring and Testing" [13]
focuses on the practical aspects of monitoring and testing microservices archi-
tectures. It provides guidance on implementing effective monitoring strategies
and testing practices to ensure the reliability, performance, and scalability of
microservices-based systems.
Anita Ihuman’s [14] blog post "Microservice tools: The top 10 for monitoring and
testing" discusses how testing and monitoring work in a microservices architecture.
It also highlights the challenges for testing and monitoring as well as the tools used
in these processes.
The article by Muhammad Waseem, Peng Liang, Mojtaba Shahin, Amleto Di Salle, and
Gaston Marquez [15] aims to gain a deep understanding of how microservices
systems are designed, monitored, and tested in industry. A mixed-methods
study was conducted with 106 survey responses and 6 interviews with microservices
practitioners. It showed that the complexity of microservices systems poses challenges
for their design, monitoring, and testing, for which there are no dedicated solutions.
2.4.3 Resource Allocation
The article "Kubernetes-Oriented Microservice Placement With Dynamic Resource
Allocation" [16] proposes an integer nonlinear microservice placement model for
Kubernetes with the goal of cost minimization.
The paper "On the Resource Management of Kubernetes" [17] investigates whether
the resource management of Kubernetes is sufficient to isolate the performance of
containers. KOSMOS [18] was also introduced as a novel auto-scaling solution for
Kubernetes: containers are individually controlled by control-theoretical planners
that manage container resources on the fly (vertical scaling).
Brando Chimonelli [19] explores the proactive auto-scaling of virtual resources based
on traffic demand, aiming to improve on the current reactive approach, the Horizontal
Pod Autoscaler (HPA), which relies on predefined rules and threshold values.
2.4.4 Load-balancing
Alexander Sundberg [20] addresses load balancing algorithms for networked systems
with microservices architecture. He concluded that there is a lack of proposed load
balancing algorithms for microservices, and it is not obvious how to adapt such
algorithms to the architecture under consideration.
Shitole and Abishek Sanjay [21] proposed a technique that uses a service mesh
to inject sidecar proxies into every microservice and dynamically balances the load
among services by applying service-specific routing. The experimental results
proved that the proposed design outperforms the traditional approach by maintain-
ing stability and consistency in response rate while consuming fewer resources.
2.4.5 Caching methodologies
Joel Sandman [22] evaluates caching methodologies for microservice-based architec-
tures in Kubernetes. The three methodologies compared are no caching, full-page
caching of the end response of the system, and fine-grained caching installed
between the services of the system. The results showed that fine-grained caching
achieved a higher network-traffic reduction for all Time to Live values, as well as
lower relative data staleness and lower memory usage.
Chapter 3
Technologies and Tools
In this chapter, we will explore the technologies and tools used in our small-scale
ecosystem. Our exploration begins with an introduction to the Spring Boot frame-
work, followed by a comprehensive examination of Docker and Kubernetes concepts.
We will also cover a range of other important tools and technologies used in our
project. These include Service Discovery and an API Gateway, which help our
different services find and talk to each other easily. We will also discuss Event-Driven
Architecture, a way of organizing our system so that it responds to events happening
in real time, and look into tools that help us make sure our system is resilient and
can handle failures without crashing. Security is also important, so we will explore
how we keep our data safe from attackers and other threats. Finally, we will cover
the Monitoring and Observability tools that help us keep an eye on how our system
is performing. This exploration will give us a better understanding of the technology
we are using to build our project.
3.1 Introduction to the Spring Boot framework
Spring Boot is a powerful framework designed to simplify the development of Java-
based applications. It builds on top of the Spring Framework, providing an approach
to building production-ready applications with minimal effort.
Convention over Configuration: Spring Boot follows the principle of "con-
vention over configuration," minimizing the need for manual setup and boilerplate
code. By relying on sensible defaults and conventions, Spring Boot eliminates much
of the configuration overhead traditionally associated with Spring applications.
RESTful APIs: Spring Boot provides support for building RESTful APIs, mak-
ing it easy to expose endpoints for inter-service communication and integration
with external systems.
Dependency Injection: Spring Boot’s dependency injection mechanism facili-
tates the integration of services and components within a microservice, promoting
loose coupling and modularity.
Inversion Of Control: IoC is a key principle underlying the Spring framework,
including Spring Boot. IoC decouples components within an application, allowing
dependencies to be injected rather than hard-coded.
Auto-Configuration: It automatically configures the Spring application based
on the dependencies present in the classpath. This means you get a fully functional
application with defaults without needing to write separate configuration files.
Annotations-based Development: Spring Boot uses annotations to sim-
plify application development. Annotations like @SpringBootApplication,
@RestController, @Autowired, and others help streamline configuration, depen-
dency injection, and request mapping, reducing the need for XML-based configu-
ration.
Externalized Configuration: Spring Boot allows you to externalize configura-
tion properties using files like application.properties. They provide a centralized
location for configuring various aspects of our application, such as database con-
nections, logging levels, and server port.
Dependency Management with Maven or Gradle: Spring Boot projects
are typically managed using build automation tools like Maven or Gradle. The
pom.xml (Maven) or build.gradle (Gradle) file defines project dependencies and
build configurations, enabling easy dependency management and project setup.
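The dependency injection and IoC principles listed above can be sketched in plain Java (a minimal illustration without Spring; the interface and class names are hypothetical):

```java
// The service depends on an abstraction, not a concrete implementation
interface GreetingRepository {
    String findGreeting();
}

// One concrete implementation; Spring would register this as a bean
class FixedGreetingRepository implements GreetingRepository {
    public String findGreeting() { return "Hello"; }
}

class GreetingService {
    private final GreetingRepository repository;

    // Inversion of control: the dependency is injected via the constructor
    // rather than constructed inside the class with `new`.
    GreetingService(GreetingRepository repository) {
        this.repository = repository;
    }

    String greet(String name) {
        return repository.findGreeting() + ", " + name;
    }
}

public class DiDemo {
    public static void main(String[] args) {
        // Here we wire the objects by hand; Spring's IoC container
        // performs this wiring automatically for annotated beans.
        GreetingService service = new GreetingService(new FixedGreetingRepository());
        System.out.println(service.greet("world"));
    }
}
```

Because GreetingService only knows the interface, a test can inject a mock repository and production code can inject a database-backed one, which is exactly the loose coupling the framework promotes.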
25
3. Technologies and Tools
3.1.1 Spring Core Container
The core container is the basis of the whole Spring ecosystem and comprehends
four components—core, beans, context, and expression language. Core and beans
are responsible for providing the fundamentals of the framework and dependency
injection. These modules are responsible for managing the IoC container, and the
principal functions are the instantiation, configuration, and destruction of the object
residents in the Spring container. [23]
3.1.2 Spring Data Access/Integration
Spring data access and the integration layer is used for data manipulation and other
integration. It covers the following modules: [24]
Transaction: This module helps maintain transactions in a programmatic and
declarative manner. This module supports ORM and JDBC modules.
Object XML mapping (OXM): This module provides abstraction of
Object/XML processing, which can be used by various OXM implementation
such as JAXB, XMLBeans, and so on, to integrate with Spring.
Object Relationship Mapping (ORM): Spring doesn’t provide its own
ORM framework; instead, it facilitates integration with ORM frameworks such
as Hibernate, JPA, JDO, and so on. We will use Hibernate.
Java Database Connectivity (JDBC): This module provides all low-level
boilerplate code to deal with JDBC. You can use it to interact with databases
with standard JDBC API.
Java Messaging Services (JMS): This module supports integration of mes-
saging system in Spring.
3.1.3 Spring Cloud
Spring Cloud is a comprehensive set of tools and frameworks provided by the Spring
team to simplify the development of distributed systems and microservices-based
architectures. For our project we’ll make use of the following Spring Cloud projects:
Eureka, Spring Cloud Gateway, Resilience4j and Zipkin.
26
3. Technologies and Tools
3.1.4 Spring Web MVC
Spring Web MVC module provides a framework for building web applications. It
consists of core collaborating components to handle incoming requests and facilitate
the development of web applications:
Controllers are responsible for handling incoming HTTP requests, processing
them, and returning an appropriate response. They typically contain methods
annotated with @RequestMapping or its variants (@GetMapping, @PostMapping,
etc.) to map URLs to specific handler methods.
Model represents the data or business logic of the application. It encapsulates
the application’s data and state, which can be passed between the controller and
the view. Models are often represented as POJOs (Plain Old Java Objects) or
DTOs (Data Transfer Objects).
Views are responsible for rendering the user interface based on the data provided
by the controller. In Spring MVC, views can be implemented using various tech-
nologies such as JSP (JavaServer Pages), Thymeleaf, FreeMarker, or even plain
HTML.
Service components encapsulate the business logic of the application. They han-
dle interactions between controllers, DTOs, and other components to perform
specific business operations. Services help in keeping the controllers thin by mov-
ing complex logic out of them.
Repositories are a type of DAO specifically used for data access in Spring Data
projects. They provide a higher-level abstraction over data access operations, typ-
ically using Spring Data JPA, Spring Data MongoDB, or other Spring Data mod-
ules.
The following figure shows how Spring Boot follows the MVC architecture:
Figure 3.1: Spring MVC architecture
3.1.5 Maven
Maven is a powerful build automation tool primarily used for Java projects. It sim-
plifies the build process by managing project dependencies, compiling source code,
packaging artifacts, and more. Maven uses a project object model (POM), rep-
resented by a pom.xml file, to define project configurations and dependencies. The
pom.xml file contains information such as project metadata, dependencies, build set-
tings, and plugins. By configuring the pom.xml file, we can easily manage project de-
pendencies, define build goals, and customize the build process according to project
requirements. Overall, Maven and the pom.xml file streamline the development work-
flow and ensure consistent builds across different environments.
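As a rough illustration, a minimal pom.xml has the following shape (the coordinates and the single dependency are placeholders chosen for illustration, not this project's actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Project coordinates (placeholders) -->
    <groupId>com.example</groupId>
    <artifactId>demo-service</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <!-- Declared dependencies are resolved and downloaded by Maven -->
    <dependencies>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.30</version>
        </dependency>
    </dependencies>
</project>
```

Running `mvn package` against such a file resolves the declared dependencies, compiles the sources, and produces the build artifact, which is the workflow described above.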
3.2 Containerization and Orchestration technolo-
gies
3.2.1 Docker Overview
Docker provides an ecosystem for building, deploying, and managing containers.
Containers are lightweight, portable, and self-sufficient units that encapsulate an
application and its dependencies, allowing it to run consistently across different
environments.
Figure 3.2: Docker infrastructure
Docker Engine
The Docker Engine is the heart of the Docker platform. It’s a client-server application
that runs on the host machine and manages the creation, execution, and networking
of containers.
Dockerfile
A Dockerfile is a recipe file that contains instructions (or commands) and arguments
used to define Docker images and containers. In essence, it is a file with a list of
commands that are executed in sequence to build customized images. [25]
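As an illustration, a minimal Dockerfile for a packaged Spring Boot application might look like the following (a sketch with an assumed jar path and base image, not this project's actual file):

```dockerfile
# Sketch: start from a JRE base image with Java 17
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the jar produced by `mvn package` into the image (hypothetical path)
COPY target/app.jar app.jar
# Run the application when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Each instruction creates a layer of the resulting image, which is why ordering instructions from least to most frequently changing improves build caching.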
Docker images
Docker images are read-only templates containing everything needed to run an ap-
plication, including the application code, runtime environment, libraries, and depen-
dencies.
Docker containers
Docker containers are runnable instances of Docker images. Each container runs in
its own isolated environment, with its own filesystem, network, and process space.
Docker registries
Docker registries are repositories for storing and sharing Docker images. Public reg-
istries like Docker Hub and private registries facilitate collaboration and version
control.
Docker networking and volumes
Docker provides networking capabilities for communication between containers and
volumes for persistent data storage. Networking allows containers to communicate
with each other, while volumes enable data to persist beyond the lifecycle of con-
tainers.
3.2.2 Introduction to Kubernetes
Kubernetes offers a solution for automating the management of containerized ap-
plications. It abstracts away the underlying infrastructure and provides a unified
API for deploying and managing containers across clusters of machines. Kubernetes
offers several features and benefits, including:
Container Orchestration: Kubernetes automates the deployment, scaling, and
management of containerized applications, eliminating the need for manual inter-
vention and reducing operational overhead.
Scalability: Kubernetes scales applications horizontally by adding or removing
containers based on workload demand, ensuring optimal resource utilization and
performance.
High Availability: Kubernetes ensures the high availability of applications by
automatically restarting containers that fail and distributing workload across mul-
tiple nodes to prevent single points of failure.
Resource Efficiency: Kubernetes optimizes resource utilization by scheduling
containers based on resource requirements and availability, minimizing wastage
and maximizing efficiency.
Service Discovery and Load Balancing: Kubernetes provides built-in mech-
anisms for service discovery and load balancing, enabling communication between
microservices and efficient distribution of traffic.
Kubernetes Architecture Overview
Kubernetes architecture comprises several components that work together to manage
containerized applications:
Pods, Deployments, Services, and ReplicaSets: Pods are the smallest de-
ployable units in Kubernetes, consisting of one or more containers. Deployments
manage the lifecycle of Pods, while Services provide network access to Pods.
ReplicaSets ensure that a specified number of Pod replicas are running at any
given time.
Master Node and Worker Nodes: Kubernetes clusters consist of one or more
Master nodes and multiple Worker nodes. Master nodes manage the cluster’s
control plane and orchestrate workload scheduling, while Worker nodes execute
the actual containerized workloads.
Kubernetes API Server, etcd, kube-scheduler, kube-controller-
manager, and kubelet: These are core components of the Kubernetes control
plane responsible for managing cluster state, scheduling workloads, and enforcing
desired state configurations.
3.3 Overview of other tools and technologies used
in the project
3.3.1 Service Discovery and API Gateway
Eureka
Netflix Eureka is a REST based service that we use primarily for service discovery
and locating services for the purpose of load balancing and failover of middle-tier
servers. [26]
Spring Cloud Gateway
Spring Cloud Gateway is a lightweight, reactive API Gateway. We are using it to
provide a simple way to route and manage network traffic to our application, giving
us a single entry point to our microservices.
3.3.2 Event-Driven Architecture
Kafka
Kafka serves as the backbone of our Notification Service, facilitating the efficient
and real-time exchange of messages between producers and consumers. With Kafka,
we can ensure that notifications are delivered promptly, even during periods of high
load or system failures.
3.3.3 Resilience and Fault Tolerance
Resilience4j
We will use Resilience4j’s circuit breaker pattern to prevent cascading failures and
handle faults gracefully, aiming to enhance the fault tolerance of our application.
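The circuit breaker idea can be illustrated with a minimal plain-Java sketch (a simplified count-based breaker for illustration only, not Resilience4j's actual API, which additionally supports half-open states, sliding windows, and timeouts):

```java
import java.util.function.Supplier;

public class BreakerDemo {
    // Simplified circuit breaker: after `threshold` consecutive failures it
    // "opens" and short-circuits every further call with the fallback value.
    static class CircuitBreaker {
        private final int threshold;
        private int consecutiveFailures = 0;

        CircuitBreaker(int threshold) { this.threshold = threshold; }

        String call(Supplier<String> action, String fallback) {
            if (consecutiveFailures >= threshold) {
                return fallback;            // open: fail fast, downstream not called
            }
            try {
                String result = action.get();
                consecutiveFailures = 0;    // closed: reset the counter on success
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;      // count the failure
                return fallback;
            }
        }
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        System.out.println(breaker.call(failing, "fallback")); // failure 1
        System.out.println(breaker.call(failing, "fallback")); // failure 2, breaker opens
        System.out.println(breaker.call(() -> "ok", "fallback")); // short-circuited
    }
}
```

The third call returns the fallback even though the action would succeed, because the open breaker never invokes it; this is what prevents a struggling downstream service from being hammered by retries.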
3.3.4 Security
Keycloak
Keycloak is being used with OAuth2 to secure user authentication.
3.3.5 Monitoring and Observability
Zipkin
Used for tracing requests as they flow through our microservices, providing
insights into our system's performance and latency.
Prometheus
Monitoring and alerting toolkit focused on gathering metrics from our different
microservices.
Grafana
A visualization tool used with Prometheus for creating dashboards and monitoring
metrics.
3.3.6 Databases
MongoDB
A NoSQL database used for storing unstructured data. We will use it in our Product
Service.
MySQL
A relational database management system used for structured data storage. Used
in our Order Service and Inventory Service.
3.3.7 Testing
JUnit
A unit testing framework for Java.
Mockito
Mocking framework used in unit testing for creating mock objects. We will
primarily use MockMvc for making requests and testing the responses.
Chapter 4
Development Phase
In this section, we’re getting everything ready for our e-commerce platform. First,
we set up our environment and configure the tools we need. Then, we create
the individual parts of our platform, like the product management and order
processing services. These parts need to talk to each other, so we make sure they
can communicate smoothly.
To manage all the incoming requests, we set up an API Gateway. Security is critical,
so we make sure our system is protected from bad actors. We also make sure our
services can find each other easily using service discovery. To handle any problems
that might come up, like a service crashing, we use a Circuit Breaker. We also keep
track of how everything is running using distributed tracing, which helps us monitor
the interactions between our services. Lastly, we focus on keeping our platform
running smoothly by tuning its performance and making sure it can handle many users.
4.1 Environment Setup and Configuration
We ensure that our development environment is properly set up to support our
project requirements. This includes ensuring the availability of essential tools such
as IntelliJ IDEA, Java Development Kit (JDK), Apache Maven and all the other
tools. IntelliJ provides a comprehensive integrated development environment (IDE)
for Java development, offering powerful features for coding, debugging, and project
management. We ensure that the appropriate version of Java is installed and con-
figured on our system to support our project. Additionally, we verify the presence
of Apache Maven. By confirming the availability of these tools, we ensure that our
environment is ready for development of our project.
4.2 Microservices creation
4.2.1 Product Service
Our first microservice will be the product-service, which handles product creation
and product listing.
Creating the project
To create a new Spring Boot project, we will use Spring Initializr from start.spring.io.
We will add the following dependencies:
Lombok
Spring Web
Spring Data MongoDB
We will also use Maven as the build tool and Java version 17. Here is how our
project setup looks:
Figure 4.1: Spring initializr project overview
We will download the project and open it in our IDE. We will create the following
packages:
controller
dto
model
repository
service
Here is an overview of our project tree:
Figure 4.2: Product project structure in Intellij
The util package will contain any helper classes that we might need during
development.
Now that our project is set up, we can start with the development part!
Configuration
We will configure our project using the application.properties file, where we declare
the server port, database credentials, and other essential configurations. Here is
a sample of our application.properties:
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=product-service
spring.application.name=product-service
server.port=0
Note that we define the server port as 0, so our service will pick up any free port to
run on.
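This "port 0" behavior comes from the operating system: binding a socket to port 0 asks the OS to assign any free ephemeral port, which is what the embedded server does for us. A plain ServerSocket demonstrates the same mechanism:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortZeroDemo {
    public static void main(String[] args) throws IOException {
        // Binding to port 0 lets the OS pick any free ephemeral port,
        // the same mechanism behind server.port=0 in application.properties.
        try (ServerSocket socket = new ServerSocket(0)) {
            int assigned = socket.getLocalPort();
            // The assigned port is a real, positive port number
            System.out.println(assigned > 0);
        }
    }
}
```

This is useful in our setup because several service instances can start on the same host without port collisions; clients find the assigned port through service discovery rather than hard-coded configuration.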
Models & DTOs
The next step will be creating our model as well as the DTO that the client will be
dealing with. Our model will look like the following:
@Document(value = "product")
@AllArgsConstructor
@NoArgsConstructor
@Builder
@Data
public class Product {
    @Id
    private String id;
    private String name;
    private String description;
    private BigDecimal price;
}
Code 4.1: Product.java
We use the @Document annotation to specify this class as an entity for MongoDB,
and Lombok annotations to generate constructors. The builder pattern is used as
well to simplify the process of creating new objects.
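To see what the @Builder annotation saves us, here is a hand-written sketch of roughly what a generated builder looks like (simplified; Lombok's actual generated code differs in details and also covers the @Data accessors):

```java
import java.math.BigDecimal;

public class BuilderDemo {
    // Hand-written equivalent of the Product model with a Lombok-style builder
    static class Product {
        private final String id;
        private final String name;
        private final String description;
        private final BigDecimal price;

        private Product(Builder b) {
            this.id = b.id;
            this.name = b.name;
            this.description = b.description;
            this.price = b.price;
        }

        static Builder builder() { return new Builder(); }

        // Each setter returns the builder, allowing chained calls
        static class Builder {
            private String id, name, description;
            private BigDecimal price;
            Builder id(String v) { id = v; return this; }
            Builder name(String v) { name = v; return this; }
            Builder description(String v) { description = v; return this; }
            Builder price(BigDecimal v) { price = v; return this; }
            Product build() { return new Product(this); }
        }
    }

    public static void main(String[] args) {
        Product p = Product.builder()
                .id("1").name("iPhone").description("phone")
                .price(new BigDecimal("999")).build();
        System.out.println(p.name);
    }
}
```

The chained calls read like named parameters, which is why the builder pattern is convenient for models with several fields of the same type.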
For our DTOs, we will have a ProductRequest DTO and a ProductResponse DTO,
where the only difference is the id property in ProductResponse.
Repository
In the repository package we will create our ProductRepository, which interacts
with the database. The implementation is straightforward and handled mostly by
Spring Boot:
public interface ProductRepository extends
        MongoRepository<Product, String> {
}
Code 4.2: ProductRepository.java
36
4. Development Phase
Controller
In our controller package, we will create a ProductController, which handles the
incoming requests and executes the suitable method. The two functionalities we
expect from the controller are:
Creating a new product
Listing all products
Here’s an overview of our controller:
@RestController
@RequestMapping("/api/product")
@RequiredArgsConstructor
public class ProductController {

    private final ProductService productService;

    @PostMapping
    @ResponseStatus(HttpStatus.CREATED)
    public void createProduct(@RequestBody ProductRequest productRequest) {
        productService.createProduct(productRequest);
    }

    @GetMapping
    @ResponseStatus(HttpStatus.OK)
    public List<ProductResponse> getAllProducts() {
        return productService.getAllProducts();
    }
}
Code 4.3: ProductController.java
It calls the ProductService methods to handle the incoming GET and POST re-
quests. In the next section we will look at the service class, where the actual
logic happens.
37
4. Development Phase
Service
The ProductService class will contain all the methods that get called by the con-
troller. It defines the main logic of the application, using the repository for
database interactions so that it never talks to the database directly. Here is a
snippet showing the getAllProducts() method:
public List<ProductResponse> getAllProducts() {
    List<Product> products = productRepository.findAll();
    return products.stream().map(this::mapToProductResponse).toList();
}

private ProductResponse mapToProductResponse(Product product) {
    return ProductResponse.builder()
            .id(product.getId())
            .name(product.getName())
            .description(product.getDescription())
            .price(product.getPrice())
            .build();
}
Code 4.4: ProductService.java
We can see a helper method for conversion between a model and a DTO.
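The inverse mapping used by createProduct() can be sketched in the same spirit. This is a hypothetical, self-contained outline: the record stand-ins replace the real model and DTO classes, and the repository save call is only indicated in a comment because it needs the Spring context.

```java
class CreateProductSketch {
    // Minimal stand-ins for the request DTO and model (field names assumed).
    record ProductRequest(String name, String description, java.math.BigDecimal price) { }
    record Product(String name, String description, java.math.BigDecimal price) { }

    // Sketch of the mapping done in createProduct(); in the real service the
    // result is passed to productRepository.save(product).
    static Product mapToProduct(ProductRequest request) {
        return new Product(request.name(), request.description(), request.price());
    }

    public static void main(String[] args) {
        Product product = mapToProduct(
                new ProductRequest("iPhone 13", "A smartphone", java.math.BigDecimal.valueOf(999)));
        System.out.println(product.name());
    }
}
```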
Making calls
Now we will run our service and make calls to ensure it's working as expected.
For this we are going to use Postman.
First we will make a request to localhost on the port that was assigned to our
service; here's how it looks:
Figure 4.3: Post request to create a new product
As shown, we received a "201 Created" status, which means our product was created.
Next we will try to make a GET request so we can see that we are able to retrieve
the created product:
Figure 4.4: Get request to get the created product
As we can see above, the API returned our created product successfully. Now let's
go on and implement the rest of our services.
4.2.2 Order Service
This service will be responsible for placing orders. It will call the inventory-service
to check whether the products are available or not and then place the order.
Creating the project & Configuration
We will create this project the same way as the previous one; the only difference
is using the MySQL driver instead of MongoDB. Then we will proceed with creating
our Controller, Models, DTOs, Repository and Service.
Here's an overview of what our project tree looks like:
Figure 4.5: Order project structure in Intellij
Note that there are some packages and classes that we will discuss in the following
sections.
Now that our project is ready and set up, we can start with the development.
Configuration
We will configure our project using the application.properties file, as shown for
the previous service; the only difference is setting up a MySQL database instead
of MongoDB. Here's how it looks:
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/order-service
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.hibernate.ddl-auto=update
Code 4.5: MySQL configuration
Models & DTOs
In this service, we will have the following DTOs:
InventoryResponse: To get the skuCode of the product and whether it's in stock
or not.
OrderLineItems: To get all the data about a product.
OrderRequest: Containing a list of the items in the order.
While we will have only two models: Order and OrderLineItems.
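A hypothetical sketch of these shapes, with the field names assumed for illustration:

```java
import java.math.BigDecimal;
import java.util.List;

class OrderDtoSketch {
    // Assumed shapes based on the descriptions above; the real project
    // uses builder-style classes rather than records.
    record OrderLineItemsDto(String skuCode, BigDecimal price, Integer quantity) { }
    record OrderRequest(List<OrderLineItemsDto> orderLineItemsDtoList) { }
    record InventoryResponse(String skuCode, boolean isInStock) { }

    public static void main(String[] args) {
        OrderRequest request = new OrderRequest(
                List.of(new OrderLineItemsDto("iphone_13", BigDecimal.valueOf(999), 1)));
        System.out.println(request.orderLineItemsDtoList().size());
    }
}
```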
Repository
The repository will follow the same way we created the ProductRepository.
Controller
The OrderController will have only one method for placing an order that it receives
through the request body.
Here’s an overview of our controller placeOrder() method:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public String placeOrder(@RequestBody OrderRequest orderRequest) {
    orderService.placeOrder(orderRequest);
    return "Order Placed Successfully";
}
Code 4.6: OrderController.java
We can see it uses the OrderService to perform the order placing process.
Service
Our OrderService will work in the following way:
Receive the order from the controller
Call the inventory-service to check whether the products are available
Place the order if all products are available
Throw an exception if any product is unavailable
Making calls
We will use Postman as discussed before to place a new order, and we can use any
MySQL database viewer to inspect whether the table was updated.
4.2.3 Inventory Service
The inventory-service will be responsible for checking whether a list of products
is in stock.
Creating and configuring the project
We will create it the same way we created our order-service.
Configuration
We will use the same configuration used for the order-service, as both of them
use a MySQL database.
Models & DTOs
The inventory-service will have an inventory model and an inventory DTO only.
Repository
We will create it the same way as the previous repositories.
Controller
Our controller will have only one endpoint, handling GET requests that check
whether a list of products is in stock. Below is an overview of our isInStock()
method:
@GetMapping
@ResponseStatus(HttpStatus.OK)
public List<InventoryResponse> isInStock(@RequestParam List<String> skuCode) {
    log.info("Received inventory check request for skuCode: {}", skuCode);
    return inventoryService.isInStock(skuCode);
}
Code 4.7: InventoryController.java
Service
The InventoryService will use the InventoryRepository to fetch all products match-
ing the given SKU codes and map them to responses indicating whether they are in
stock. Here's our implementation:
@Transactional(readOnly = true)
@SneakyThrows
public List<InventoryResponse> isInStock(List<String> skuCode) {
    log.info("Checking Inventory");
    return inventoryRepository.findBySkuCodeIn(skuCode).stream()
            .map(inventory ->
                    InventoryResponse.builder()
                            .skuCode(inventory.getSkuCode())
                            .isInStock(inventory.getQuantity() > 0)
                            .build()
            ).toList();
}
Code 4.8: InventoryService.java
Making calls
We can check the behaviour of our service in Postman by passing a list of SKU
codes and checking whether they are reported as in stock.
4.3 Adding all projects together
Now we need to put all our projects into one root project. To do that, we will
create a new Maven project in IntelliJ and define our modules as follows:
<modules>
    <module>order-service</module>
    <module>product-service</module>
    <module>inventory-service</module>
    <module>discovery-server</module>
    <module>api-gateway</module>
    <module>notification-service</module>
</modules>
Code 4.9: Root project pom.xml
Then we will run mvn clean verify and our root project will contain all the defined
services.
4.4 Communication between services
We need to communicate between the order-service and the inventory-service as
the order-service checks whether the items exist or not in the inventory. To achieve
this we are going to use WebClient.
Spring’s WebClient class (unlike RestTemplate) allows you to reactively interact
with reactive RESTful web services. In Spring, WebClient is also the preferred
way to asynchronously interact with non-reactive RESTful web services. [27]
4.4.1 Configuration
First of all, we need to add the Spring WebFlux dependency in our pom.xml file as
WebClient is part of it, below you can find the code for the dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
Code 4.10: WebFlux dependency
Next we will have to configure our WebClient in a configuration file, where we
define a bean of type WebClient.Builder.
@Configuration
public class WebClientConfig {
    @Bean
    @LoadBalanced
    public WebClient.Builder webClientBuilder() {
        return WebClient.builder();
    }
}
Code 4.11: WebClient configuration
Right now our WebClient is ready to make calls, so we will create an instance in the
OrderService class and use it for making calls to the inventory-service.
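The decision logic around that call can be sketched as follows. This is a hedged, self-contained outline: the WebClient request itself (URI and query parameter names are assumptions) is only indicated in a comment, and the stock check is written over a plain array so the snippet compiles on its own.

```java
import java.util.Arrays;

class InventoryCheckSketch {
    // Stand-in for the InventoryResponse DTO (field names assumed).
    record InventoryResponse(String skuCode, boolean isInStock) { }

    // In the real OrderService, this array would come from the WebClient call,
    // roughly: webClientBuilder.build().get()
    //     .uri("http://inventory-service/api/inventory",
    //          uriBuilder -> uriBuilder.queryParam("skuCode", skuCodes).build())
    //     .retrieve().bodyToMono(InventoryResponse[].class).block();
    static boolean allProductsInStock(InventoryResponse[] responses) {
        return Arrays.stream(responses).allMatch(InventoryResponse::isInStock);
    }

    public static void main(String[] args) {
        InventoryResponse[] responses = {
                new InventoryResponse("iphone_13", true),
                new InventoryResponse("pixel_8", false)
        };
        if (allProductsInStock(responses)) {
            System.out.println("All products in stock, order can be placed");
        } else {
            // The real service throws an exception here instead of printing.
            System.out.println("Product is not in stock, please try again later");
        }
    }
}
```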
4.5 Service discovery
A discovery service is probably the most important support function required
to make a landscape of cooperating microservices production-ready. A discovery
service can be used to keep track of existing microservices and their instances. [28]
For our project we are going to use Netflix Eureka service discovery.
4.5.1 Creation & configuration of the project
We need to create a new project for the service discovery, we can do this in the same
way we created our previous projects through Spring Initializr from start.spring.io.
We need to add the following dependency in our service discovery project:
1< d ep e n de n c y >
2< gr o up Id > o rg . s pr i ng f ra m ew ork . c l ou d </ g r ou pI d >
3< a r t i factId >s pring - cl oud -star ter - netf lix - eure ka - se r v er </
a rt if act I d >
4< / de p en de n cy >
Code 4.12: Discovery server dependency
After that we need to create a DiscoveryServerApplication class with the following
implementation:
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}
Code 4.13: Discovery server implementation
Note that the @EnableEurekaServer annotation allows our project to act as a
discovery server that the other services can register with.
The last step in our discovery server project will be adding the configuration in
application.properties file where we will define the server port, application name
and other properties.
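As an illustration, a minimal version of that configuration could look like the following. The values are assumptions consistent with the ports used elsewhere in this chapter; a standalone Eureka server should not register with itself, hence the last two properties:

```properties
server.port=8761
spring.application.name=discovery-server
eureka.instance.hostname=localhost
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
```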
Now we have our discovery server up and running, the next step will be registering
the services to it, so we can have information about every service running.
4.5.2 Registration of the other services
To register our services, we need to add the following dependency in each project:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
Code 4.14: Discovery server client dependency
We can see it’s almost the same as our discovery server dependency; we just
changed "server" to "client".
Next step will be adding the configuration in each service, to do so, we will need to
update each application.properties file with the following properties definition:
server.port=0
eureka.client.serviceUrl.defaultZone=http://eureka:password@localhost:8761/eureka
spring.application.name=order-service
Code 4.15: Discovery server client configuration
We assign the port to 0 as we will be using load balancing, so we can run multiple
instances of our project and the service discovery will choose which one handles
the request. We also provide the discovery server URL, as well as giving a name to
our application so we can recognise it in the discovery server.
4.5.3 Eureka server dashboard
Now we can see all our services running and the status of each service in our discovery
server dashboard through localhost:8761.
Figure 4.6: Eureka discovery service
4.6 API Gateway
An API Gateway is a service that acts as an entry point to the application from
the outside. So instead of calling each service individually in each request, we
have a single entry point which redirects each request to the appropriate service.
The figure below illustrates how an API Gateway works:
Figure 4.7: API Gateway Representation
In addition to providing a single entry point, an API Gateway provides authenti-
cation, security, and load balancing.
4.6.1 Creating the project
We will create a new project for the API Gateway. Similar to the previous projects,
we will use Spring Initializr. We will add the gateway dependency as well as the
Eureka client dependency, so the gateway appears in our discovery server.
4.6.2 Configuration
We will configure our gateway through the application.properties file. We will add
all our routes, predicates and filters there. Below we can see the product and order
routes for illustration:
## Product Service Route
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/api/product

## Order Service Route
spring.cloud.gateway.routes[1].id=order-service
spring.cloud.gateway.routes[1].uri=lb://order-service
spring.cloud.gateway.routes[1].predicates[0]=Path=/api/order
Code 4.16: api-gateway/application.properties
As shown, we defined three properties:
id: Defines the name of our service.
uri: "lb" stands for load balancer, so we don't bind the route to a static port;
if we run multiple instances, the request is redirected to an appropriate one.
predicates: Specifies which endpoints should be redirected to the specified
service.
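Routes can also carry filters. As one sketch of this, exposing the Eureka dashboard through the gateway typically combines a Path predicate with a SetPath filter; the exact paths below are assumptions for illustration, not necessarily this project's configuration:

```properties
## Discovery Server Route (hypothetical sketch)
spring.cloud.gateway.routes[2].id=discovery-server
spring.cloud.gateway.routes[2].uri=http://localhost:8761
spring.cloud.gateway.routes[2].predicates[0]=Path=/eureka/web
spring.cloud.gateway.routes[2].filters[0]=SetPath=/
```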
Now if we start our gateway project and make a request to the port we specified,
the request will be redirected to the appropriate service and handled there, so
we don't need to call each service individually.
4.7 Securing our services
Now our services are up and running and the API Gateway redirects the requests to
the appropriate service, but we need to add a security layer to our project so that
not just anyone can interact with the services. For this we will use Keycloak to
add authorization to our project.
4.7.1 Setting up Keycloak
First of all, we need to run a Keycloak Docker container so we can configure it.
The following command starts a Keycloak instance that we can configure for use
in our project:
docker run -p 8181:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:24.0.2 start-dev
Code 4.17: Running Keycloak container in Docker
Now if we go to localhost:8181 and log in with the credentials specified in the
command, we will see the following dashboard:
Figure 4.8: Keycloak Dashboard
4.7.2 Keycloak realms
The first thing you will want to do is create a realm for your applications and users.
Think of a realm as a tenant. A realm is fully isolated from other realms, it has its
own configuration, and its own set of applications and users. This allows a single
installation of Keycloak to be used for multiple purposes. For example, you may
want to have one realm for internal applications and employees, and another realm
for external applications and customers. [29]
So we will create a new realm, then create a new client and click on OpenID End-
point Configuration. This will give us the URLs we need to get a token.
4.7.3 Configuring the gateway
We will add the Spring Boot OAuth2 resource server and security dependencies to
the gateway pom.xml file. Then in application.properties we will define the issuer-
uri we got from the admin console.
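Assuming the realm created earlier is named spring-boot-microservices-realm (a placeholder name for illustration), the property would look roughly like:

```properties
spring.security.oauth2.resourceserver.jwt.issuer-uri=http://localhost:8181/realms/spring-boot-microservices-realm
```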
Next we will create a security configuration class to configure a bean of type
SecurityWebFilterChain to apply security to all our routes except the discovery
server:
@Bean
public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity serverHttpSecurity) {
    serverHttpSecurity
            .csrf(ServerHttpSecurity.CsrfSpec::disable)
            .authorizeExchange(exchange ->
                    exchange.pathMatchers("/eureka/**")
                            .permitAll()
                            .anyExchange()
                            .authenticated())
            .oauth2ResourceServer(spec -> spec.jwt(Customizer.withDefaults()));
    return serverHttpSecurity.build();
}
Code 4.18: SecurityWebFilterChain configuration
Now if we try to make a call through Postman we will get a 401 Unauthorized
response code. So we will need to provide the token endpoint URL and credentials
in the authorization section to get a token and use it in our requests.
4.8 Implementing Circuit breaker pattern
4.8.1 What is Circuit breaker pattern?
A circuit breaker has three states: Closed, Open, and Half-Open. It works in the following way:
A circuit breaker starts as Closed, allowing requests to be processed.
As long as the requests are processed successfully, it stays in the Closed state.
If failures start to happen, a counter starts to count up.
If a threshold of failures is reached within a specified period of time, the
circuit breaker will trip, that is, go to the Open state, not allowing further
requests to be processed. Both the threshold of failures and the period of time
are configurable.
Instead, a request will Fast Fail, meaning it will return immediately with an
exception.
After a configurable period of time, the circuit breaker will enter a Half-Open
state and allow one request to go through, as a probe, to see whether the failure
has been resolved.
If the probe request fails, the circuit breaker goes back to the Open state.
If the probe request succeeds, the circuit breaker goes to the initial Closed
state, allowing new requests to be processed. [30]
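To make these transitions concrete, here is a small, self-contained illustration of the state machine described above. This is not Resilience4j's implementation: real libraries also track sliding windows and wait durations, which are reduced here to an explicit counter and method calls.

```java
// Illustrative circuit breaker sketch; thresholds only, no time windows.
class SimpleCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failureCount = 0;
    private final int failureThreshold;

    SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    State getState() { return state; }

    boolean requestAllowed() {
        return state != State.OPEN;           // Open means fast-fail
    }

    void recordFailure() {
        failureCount++;
        // A failed probe in Half-Open, or reaching the threshold, trips the breaker.
        if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
            state = State.OPEN;
        }
    }

    void recordSuccess() {
        failureCount = 0;
        state = State.CLOSED;                 // probe succeeded, resume normal traffic
    }

    // In a real implementation a timer triggers this after the wait duration.
    void allowProbe() {
        if (state == State.OPEN) {
            state = State.HALF_OPEN;          // let one probe request through
        }
    }
}

class CircuitBreakerDemo {
    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3);
        for (int i = 0; i < 3; i++) breaker.recordFailure();   // threshold reached
        System.out.println(breaker.requestAllowed());          // false: fast fail
        breaker.allowProbe();                                  // wait period elapsed
        breaker.recordSuccess();                               // probe succeeded
        System.out.println(breaker.getState());                // CLOSED
    }
}
```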
Figure 4.9: Circuit Breaker Pattern
4.8.2 Adding Resilience4j to the project
We will use the Resilience4j circuit breaker for the project. First we need to
add the following dependency to the order-service, since that is where we call
the inventory-service:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>
Code 4.19: Resilience4j dependency
Then we will configure it in our application.properties file to register a health
indicator for the inventory circuit breaker instance:
resilience4j.circuitbreaker.instances.inventory.registerHealthIndicator=true
Code 4.20: Resilience4j configuration
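The thresholds and timings described in the previous section are configured through further properties of the same instance; typical values (assumptions for illustration, not necessarily this project's exact settings) might be:

```properties
resilience4j.circuitbreaker.instances.inventory.slidingWindowSize=5
resilience4j.circuitbreaker.instances.inventory.failureRateThreshold=50
resilience4j.circuitbreaker.instances.inventory.waitDurationInOpenState=5s
resilience4j.circuitbreaker.instances.inventory.permittedNumberOfCallsInHalfOpenState=3
resilience4j.timelimiter.instances.inventory.timeout-duration=3s
resilience4j.retry.instances.inventory.max-attempts=3
resilience4j.retry.instances.inventory.wait-duration=5s
```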
After that we need to tell our OrderController to use this pattern, and define a
fallback method if the service goes down:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
@CircuitBreaker(name = "inventory", fallbackMethod = "fallbackMethod")
@TimeLimiter(name = "inventory")
@Retry(name = "inventory")
public CompletableFuture<String> placeOrder(@RequestBody OrderRequest orderRequest) {
    log.info("Placing Order");
    return CompletableFuture.supplyAsync(() -> orderService.placeOrder(orderRequest));
}

public CompletableFuture<String> fallbackMethod(OrderRequest orderRequest, RuntimeException runtimeException) {
    log.info("Cannot Place Order Executing Fallback logic");
    return CompletableFuture.supplyAsync(() -> "Oops! Something went wrong, please order after some time!");
}
Code 4.21: OrderController.java
4.9 Distributed tracing
Distributed tracing (also called distributed request tracing) is a type of correlated
logging that helps you gain visibility into the operation of a distributed software
system for use cases such as performance profiling, debugging in production,
and root cause analysis of failures or other incidents. It gives you the ability to
understand exactly what a particular individual service is doing as part of the
whole, enabling you to ask and answer questions about the performance of your
services and your distributed system as a whole. [31]
We will use Micrometer Tracing (the successor of Spring Cloud Sleuth) and Zipkin
in our project for distributed tracing.
First of all we will add the following dependencies to all of our projects:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
Code 4.22: Zipkin dependency
We will configure the tracing endpoint for each project in the application.properties
file:
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans
Code 4.23: Zipkin configuration
After that we will run a Zipkin instance in Docker, and if we go to localhost:9411
we will be able to see the dashboard and trace all our services. Here are two traces
for GET and POST requests in the product-service:
Figure 4.10: Zipkin dashboard
4.10 Event driven architecture
We can achieve asynchronous communication through event-driven architecture,
using events as a means of triggering actions. In our project, when we receive an
order, an event is generated and transmitted as a message via Kafka. The
notification-service then consumes this message and acts accordingly. In this
setup, the order-service operates as the producer.
4.10.1 Configuring the order-service
First we need to install Kafka and run it in Docker. Then we will add the following
dependency to our order-service:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Code 4.24: Kafka dependency
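A common way to run Kafka locally is a small Docker Compose definition along the following lines. The image names, versions, and listener settings below are assumptions for illustration, not this project's own Compose file:

```yaml
zookeeper:
  image: confluentinc/cp-zookeeper:7.0.1
  container_name: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181

broker:
  image: confluentinc/cp-kafka:7.0.1
  container_name: broker
  ports:
    - "9092:9092"
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```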
After that we will write our configurations in application.properties file:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.template.default-topic=notificationTopic
Code 4.25: Kafka configuration
These just set up the topic name and the Kafka server port, but we can set more
properties depending on the requirements.
The last step in the order-service will be creating an OrderPlacedEvent class
which will handle the message sending.
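The event class itself can be a plain data holder. Below is a hypothetical sketch: the field name is an assumption, and the Kafka send call is indicated only in a comment because it requires the Spring context.

```java
class OrderPlacedEvent {
    private final String orderNumber;

    OrderPlacedEvent(String orderNumber) {
        this.orderNumber = orderNumber;
    }

    String getOrderNumber() {
        return orderNumber;
    }

    public static void main(String[] args) {
        // In OrderService, after the order is saved, the event would be published
        // via the injected KafkaTemplate, roughly:
        //   kafkaTemplate.send("notificationTopic", new OrderPlacedEvent(order.getOrderNumber()));
        OrderPlacedEvent event = new OrderPlacedEvent("order-42");
        System.out.println(event.getOrderNumber());
    }
}
```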
4.10.2 Notification service
We will create a notification-service project for handling the event of order
placement.
To do this we will use Spring Initializr again with the following dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
Code 4.26: Notification service dependencies
In the application.properties file we will define configurations such as the server
port, the consumer group id, and more properties.
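As an illustration, that file might contain properties along these lines; the values and the event's package name are placeholders, not this project's exact configuration:

```properties
server.port=0
spring.application.name=notification-service
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=notificationId
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.type.mapping=event:com.example.notificationservice.OrderPlacedEvent
```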
After that, we will create a NotificationServiceApplication class with a method
annotated with @KafkaListener that handles the order placement event, so we can
send a confirmation email to the customer or a notification to the store owner.
4.11 Monitoring and Performance Tuning
The last step in our development phase will be monitoring our services. Apart from
service discovery and distributed tracing, which provide some degree of monitoring,
we will use Prometheus and Grafana.
Grafana and Prometheus are probably the most prominent tools in the application
monitoring and analytics space. Prometheus is an open-source monitoring and
alerting platform that collects and stores metrics as time-series data. Grafana is an
open-source analytics and interactive visualization web application. It allows you
to ingest data from a huge number of data sources, query this data, and display it
on beautiful customizable charts for easy analysis. It is also possible to set alerts
so you can quickly and easily be notified of abnormal behavior and lots more. [32]
First we will add the following dependencies to all of our projects:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <scope>runtime</scope>
</dependency>
Code 4.27: Prometheus dependency
Then we will update the application.properties file in all the projects with the
following two lines:
management.endpoints.web.exposure.include=prometheus
logging.pattern.level=%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]
Code 4.28: Prometheus configuration
After that we will create a Prometheus folder in our root directory, with a
prometheus.yml file. This file will list all the services to monitor and describe
how to scrape metrics from these targets.
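A minimal prometheus.yml could look like the following; the job names and targets are assumptions, and when everything runs in Docker the targets are container names rather than localhost:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'product-service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['product-service:8080']
  - job_name: 'order-service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['order-service:8080']
```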
Then we will run Docker containers for Prometheus and Grafana. If we go to
localhost:3000, we will be able to see the Grafana dashboard. We will add
Prometheus as a data source, with the container name as the URL. We will import a
dashboard using a grafana.json file, and we will be able to see all the metrics as follows:
Figure 4.11: Grafana Dashboard
Chapter 5
Containerization and Deployment
Now that we’ve developed our microservices, the next step is containerization.
Containerization offers numerous benefits, including portability, scalability, and effi-
ciency in deployment. In this chapter, we’ll explore containerization techniques using
Docker and Kubernetes to streamline the management of our microservices-based
applications.
5.1 Dockerization of Microservices
For the containerization part we’re going to use Docker. Docker provides a powerful
platform for packaging, distributing, and managing applications within lightweight
containers. In this section, we’ll discuss the Dockerfile approach, which allows us
to define the environment and dependencies for each microservice, and explore the
Jib approach, particularly beneficial for Java-based applications. By containerizing
our microservices, we enhance portability, scalability, and efficiency in deployment,
making application management easier.
5.1.1 Dockerfile Approach
Creating a Dockerfile
The Dockerfile approach is a fundamental method for containerizing microservices.
Dockerfiles are text files that contain instructions for building Docker images, defin-
ing the environment, dependencies, and runtime configurations of an application.
A Dockerfile consists of multiple commands that give instructions on how our Docker
image should be built. Here are some of the most used commands:
ENTRYPOINT Specifies the start of the command to run when the container
starts. If your container acts as a command-line program, you can use
ENTRYPOINT.
FROM A FROM statement defines which base image to download and start from.
It must be the first command in your Dockerfile. A Dockerfile can have multiple
FROM statements, which defines a multi-stage build.
RUN The RUN statement runs a command through the shell at build time, waits
for it to finish, and saves the result as a new layer of the image.
ADD If we need to add some files, the ADD statement is used. It copies new
files, directories, or remote file URLs and adds them to the file system of
the image.
ENV The ENV statement sets environment variables, both during the build and
when running the result. They can be used in the Dockerfile as well as in any
scripts that the Dockerfile calls.
Here’s a simple Dockerfile that we created for the api-gateway service:
FROM openjdk:17

COPY target/*.jar app.jar

ENTRYPOINT ["java", "-jar", "/app.jar"]
Code 5.1: api-gateway Dockerfile
The above file specifies OpenJDK 17 as the base image. It also copies the JAR
file from the local 'target' directory into the Docker image and renames it to
'app.jar'. The last command specifies what should be executed when the Docker
container starts, so the Java application runs from the JAR file we copied
earlier.
Building and running our Docker image
After our file is ready, we can open a command line in the enclosing folder, make
sure we have Docker installed and run the following command:
docker build -t api-gateway .
Code 5.2: Docker build command
This will build our Docker image and give it the name ’api-gateway’. If we open
Docker Desktop we will be able to see the image and we can start a container from
this image with the following command:
docker run -d -p 8080:8080 api-gateway
Code 5.3: Docker run command
After running this command we will have a container running the api-gateway
image. If we run our services, we will be able to make requests through Postman
to our api-gateway service on port 8080 and get a proper response.
5.1.2 Jib Approach
Jib offers developers an integrated experience for packaging their app into a container
image. Jib integrates with the two popular build tools in the Java ecosystem: Maven
and Gradle. If you have an existing Java project with a main class, the only thing
needed to containerize your app is to add the Jib plug-in to your pom.xml file, and
you are ready to build a container. If you add the plug-in, building a container is
as simple as typing mvn compile jib:dockerBuild. The resulting container is built on
top of the Java Distroless container. [33]
Adding the dependency
To be able to use Jib, we need to add the plugin to our pom.xml file. Here's what
we need to add:
<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>3.2.1</version>
    <configuration>
        <from>
            <image>eclipse-temurin:17.0.4.1_1-jre</image>
        </from>
        <to>
            <image>registry.hub.docker.com/zeyadaboushanab/${artifactId}</image>
            <auth>
                <username>Username</username>
                <password>Password</password>
            </auth>
        </to>
    </configuration>
</plugin>
Code 5.4: Jib dependency
We specify our credentials for Docker Hub, where our images should be pushed.
Creating and pushing images
Now we have configured Jib and it's ready to create our Docker images. We can
build our images in two different ways:
Maven lifecycle panel: We can simply click the following operation and it
will build our image and push it to the Docker Hub registry:
Figure 5.1: Building Docker image with Jib
Command line: Our second option will be running the following command
and we will achieve the same result:
mvn clean compile jib:build
Code 5.5: Jib build command
After we have built and pushed our Docker image, we will be able to see it on the
Dockerhub account whose credentials we provided:
Figure 5.2: Dockerhub image
5.2 Orchestration with Docker Compose
Docker Compose is a tool used for defining and running multi-container Docker
applications. It allows you to describe the services comprising your application in
a YAML file, defining their relationships, configurations, and dependencies. Docker
Compose simplifies the process of managing complex applications by providing a
single command to start, stop, and scale your services.
5.2.1 Creating a Docker Compose file
We need to define a Docker Compose file in our root directory. This file will contain
the instructions for pulling the Docker image for each of our services and running
containers from them. For example, we define the image we need for our database
and the port it will run on, and then configure our order-service to use this image
for data persistence. Below is a snippet of our Docker Compose file:
mongo:
  container_name: mongo
  image: mongo:4.4.14-rc0-focal
  restart: always
  ports:
    - "27017:27017"
  expose:
    - "27017"
  volumes:
    - ./mongo-data:/data/db

api-gateway:
  image: zeyadaboushanab/api-gateway:latest
  container_name: api-gateway
  ports:
    - "8181:8080"
  expose:
    - "8181"
  environment:
    - SPRING_PROFILES_ACTIVE=docker
    - LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_SECURITY=TRACE
  depends_on:
    - zipkin
    - discovery-server
    - keycloak
Code 5.6: docker-compose file
In the above example, we define the configuration for the MongoDB image we would
like to use and the port it will run on, so that the services can communicate with it
through this port. We also define the configuration for our api-gateway, setting its
port, logging level, and active profile, so that the application.properties file matching
this profile will be used.
5.2.2 Running the containers
Now that we have declared the configuration for each image and how a container
should be run from it, we can execute the file to bring up all our containers, allowing
the services to communicate with each other and the whole application to be up
and running.
To execute the Docker Compose file, we will run the following command:
docker compose up -d
Code 5.7: Executing docker-compose file
The above command will pull all the necessary images and run containers with the
specified configurations for each service:
Figure 5.3: Docker Compose containers
Now that all our services are up and running, we can make requests through Postman
to our api-gateway, get a token from Keycloak, and expect a response like the one we
got before.
As we can see, Docker Compose provides a convenient and efficient way to manage
Docker-based applications, allowing us to focus on building and testing our applica-
tions rather than spending time on manual environment setup and configuration.
In the following section, we will see a different approach for managing multiple
containers through Kubernetes, which gives us more control over the configuration
and comes with some more features.
5.3 Orchestration with Kubernetes
5.3.1 Introduction to Kubernetes
Kubernetes is an open-source container orchestration platform that automates the
deployment, scaling, and management of containerized applications. Its key features
include:
Container Orchestration: Kubernetes manages the lifecycle of containers,
including scheduling, scaling, and maintaining container health.
Service Discovery and Load Balancing: Kubernetes provides mechanisms
for service discovery and load balancing, ensuring that traffic is routed to the
appropriate containers.
Automatic Scaling: Kubernetes can automatically scale applications based
on resource usage or custom metrics, ensuring optimal performance and re-
source utilization.
Self-healing: Kubernetes monitors the health of containers and can automat-
ically restart or replace containers that fail.
Declarative Configuration: Kubernetes allows us to define our desired state
for applications using YAML or JSON configuration files, simplifying deploy-
ment and management.
Extensibility: Kubernetes is highly extensible, with a rich ecosystem of plu-
gins and extensions for networking, storage, and monitoring.
Kubernetes offers several benefits for managing containerized applications:
Scalability: Kubernetes enables us to scale applications horizontally by
adding or removing containers based on demand, ensuring that our appli-
cations can handle varying workloads efficiently.
Fault Tolerance: Kubernetes ensures high availability and reliability by au-
tomatically rescheduling containers in the event of node failures or container
crashes.
Portability: Kubernetes provides a consistent environment for running ap-
plications across different infrastructure environments, including on-premises
data centers, public clouds, and hybrid clouds.
5.3.2 Kubectl
Kubectl is a command-line tool that we will use for deploying and managing our
applications. It supports a wide range of commands:
create: Creates a resource from a file, such as a deployment or a service.
apply: Applies a given configuration.
get: Displays resources, for example the running pods.
autoscale: Auto-scales deployments.
5.3.3 Deployments
A deployment in Kubernetes provides a layer of functionality around pods. It allows
you to create multiple pods from the same definition and to easily perform updates
to your deployed pods. A deployment also helps with scaling your application, and
potentially even auto-scaling your application. [34]
Writing a deployment configuration
First things first, we need to create a deployment.yaml file which describes the
desired state of our deployment. This includes specifying the container image, the
number of replicas, resource requirements, and any other configuration options. We
can think of it like a snippet of our docker-compose file. Here’s a snippet of the
deployment file for the api-gateway service:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api-gateway
  name: api-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway
  strategy: {}
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - env:
            - name: SPRING_PROFILES_ACTIVE
              value: docker
          image: zeyadaboushanab/api-gateway:latest
          name: api-gateway
          ports:
            - containerPort: 8080
            - containerPort: 8181
          resources: {}
      restartPolicy: Always
status: {}
Code 5.8: api-gateway deployment file
5.3.4 Running our deployment
After we write our deployment we will run the following command:
kubectl apply -f api-gateway-deployment.yaml
Code 5.9: Applying kube deployment file
Now if we run kubectl get all we will see our deployment running:
Figure 5.4: Running a deployment
5.3.5 Creating Kubernetes service
Unlike in Docker Compose, in Kubernetes we define the ports in a separate
resource called a Service. The Service component is written in almost the same
way as the deployment.
We will create a yaml file which holds the configuration for our service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api-gateway
  name: api-gateway
spec:
  type: LoadBalancer
  ports:
    - name: "8181"
      port: 8181
      targetPort: 8080
  selector:
    app: api-gateway
status:
  loadBalancer: {}
Code 5.10: api-gateway service file
As we may have noticed, we have now split our docker-compose file into two parts.
With the yaml file for creating the service in place, we just need to run the
kubectl apply command and specify our file.
We can monitor the running pods with the kubectl get all command, which shows
information about the running deployments and services. We can see the ports and
make requests through Postman as we did before.
5.3.6 Persistent Volume Claim
In Kubernetes, a Persistent Volume Claim (PVC) is a component used by appli-
cations to request storage resources dynamically. It acts as a request for storage
by a user or pod and enables Kubernetes to provision storage based on defined
requirements. It provides a flexible and efficient mechanism for managing storage
resources in Kubernetes environments.
Here’s an example of our MongoDB PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: mongo-claim0
  name: mongo-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
Code 5.11: MongoDB PVC yaml file
5.3.7 Orchestration for the whole application
Now we need to repeat the steps above for each service we defined earlier in the
docker-compose file, including mysql, mongo-db, product-service and so on. Once
each service has its own deployment and service resource, we can use kubectl apply
for each of them, and our whole application will be running inside a Kubernetes
cluster. Below we can see the output of the kubectl get all command:
Figure 5.5: Kubernetes running components
5.4 Comparison between Docker Compose and
Kubernetes
As we have seen, Kubernetes and Docker Compose are powerful tools for container
orchestration, each with its strengths and use cases. Kubernetes excels in managing
large-scale deployments and provides advanced features for scalability, high avail-
ability, and production-grade deployments. On the other hand, Docker Compose
simplifies local development and testing and is suitable for smaller deployments on
a single host.
The following table from James Walker on spacelift.io shows the key differences
between both technologies: [35]
Figure 5.6: Docker Compose vs Kubernetes
Chapter 6
Optimization Techniques
In this chapter, we’ll explore strategies to enhance the performance and efficiency
of our system through resource allocation, load balancing, caching, and dynamic
scaling.
Optimizing your microservices architecture is important for delivering a good user
experience while efficiently utilizing resources.
Firstly, we’ll explore resource allocation strategies. Efficient resource allocation
ensures that each microservice receives the appropriate amount of CPU, memory,
and other resources to meet its demands without over-using or under-utilizing
resources. We’ll discuss techniques to fine-tune resource allocation based on the
specific requirements and workload patterns of our microservices.
Next, we’ll talk about load balancing techniques to evenly distribute incoming
traffic across multiple instances of our microservices. Load balancing not only
improves the responsiveness of our system but also enhances its fault tolerance by
reducing the risk of overloading individual services. We’ll examine different load
balancing strategies and their suitability for various deployment scenarios.
Caching is crucial for making microservices faster by reducing waiting times and
avoiding redundant work. We'll cover straightforward ways to set up caching, such as
Spring Boot annotations, layered Dockerfiles, and Kubernetes caching.
Finally, dynamic scaling enables our microservices to adapt to fluctuating work-
loads by automatically provisioning or de-allocating resources based on demand.
We’ll explore how Kubernetes’ built-in auto-scaling capabilities help us scale our
microservices horizontally or vertically in response to changing traffic patterns,
ensuring optimal performance and cost-effectiveness.
By using these optimization techniques, we can maximize the efficiency, scalability,
and resilience of our Spring Boot microservices deployed in Docker and Kubernetes,
ultimately delivering a stable and responsive application.
6.1 Resource allocation
In Kubernetes, resource allocation strategies are fundamental for maximizing the
efficiency of cluster resources and ensuring optimal performance of applications.
Kubernetes provides several mechanisms for resource allocation:
Pod Resource Requests and Limits: Kubernetes allows users to define
resource requests and limits at the pod level. Resource requests specify the
amount of CPU and memory required by a pod to run, while limits define
the maximum amount of resources a pod can consume. By setting appropriate
resource requests and limits, Kubernetes ensures efficient resource utilization
and prevents resource contention among pods running on the same node.
Horizontal Pod Autoscaling (HPA): HPA is a Kubernetes feature that
automatically adjusts the number of pod replicas based on resource utiliza-
tion metrics such as CPU or memory usage. HPA helps in scaling applications
dynamically in response to changing workload demands, ensuring optimal re-
source utilization and performance.
Resource Quotas: Kubernetes Resource Quotas allow administrators to limit
the total amount of resources (CPU, memory, storage) that can be consumed
by pods and containers within a namespace. Resource Quotas help in prevent-
ing resource exhaustion and ensuring fair resource allocation across different
namespaces within a cluster.
6.1.1 Resource Requests and Limits
To implement resource allocation in our services, we need to modify the deployment
files to limit our resources. Let’s take our product-service as an example. We will
add the following lines in the resources section in the deployment.yml file:
resources:
  requests:
    memory: "256Mi"   # Request 256 MiB of memory
    cpu: "250m"       # Request 250 milliCPU (0.25 CPU)
  limits:
    memory: "512Mi"   # Limit to 512 MiB of memory
    cpu: "500m"       # Limit to 500 milliCPU (0.5 CPU)
Code 6.1: Resource requests and limits
These lines will do the following:
"resources.requests" specify the minimum amount of resources (CPU and
memory) that the container requests from the Kubernetes cluster.
"resources.limits" specify the maximum amount of resources (CPU and
memory) that the container can consume.
6.2 Load balancing mechanisms
As we have seen, we implemented the api-gateway service to provide a single entry
point for our application. In this section we will discuss the different load-balancing
mechanisms used in this project, both through the Spring Boot framework and
through Kubernetes.
6.2.1 Load balancing in Springboot
In this project we have implemented load balancing through our api-gateway service,
which provides a single entry point for our application. We have already discussed
it in the development part, but we will go through a few details regarding the config-
uration.
## Product Service Route
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/api/product
Code 6.2: api-gateway/application.properties
As we can see in the above snippet, we provided the URIs for our services using the
lb:// scheme rather than a static URL.
So, when a request matches the configured route, Spring Cloud Gateway forwards
it to the load-balancing mechanism, which distributes the request to one of the
instances of the product-service. This enables horizontal scalability and fault toler-
ance for the product-service, as requests are distributed among multiple instances
of the service.
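By default, client-side load balancing of this kind cycles through the registered service instances in round-robin order. The behavior can be sketched in plain Java; this is an illustrative sketch only (the RoundRobinChooser class and the instance URLs are hypothetical), not the actual Spring Cloud LoadBalancer implementation:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of round-robin selection over service instances,
// as a client-side load balancer performs it.
class RoundRobinChooser {
    private final List<String> instances;
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinChooser(List<String> instances) {
        this.instances = instances;
    }

    // Each call returns the next instance, wrapping around at the end.
    String next() {
        int index = Math.abs(position.getAndIncrement() % instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinChooser chooser = new RoundRobinChooser(List.of(
                "http://product-service-1:8080",
                "http://product-service-2:8080"));
        System.out.println(chooser.next()); // product-service-1
        System.out.println(chooser.next()); // product-service-2
        System.out.println(chooser.next()); // back to product-service-1
    }
}
```

Because requests are spread evenly, adding a second replica of product-service roughly halves the load on each instance without any change to the gateway configuration.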
6.2.2 Load-balancing in Kubernetes
Load balancing is crucial in Kubernetes for distributing incoming traffic across mul-
tiple pods or nodes to optimize resource utilization and ensure high availability.
Kubernetes offers various load balancing mechanisms:
Kubernetes Ingress Controller: Ingress is a Kubernetes resource that man-
ages external access to services within a cluster. Kubernetes supports multiple
Ingress controllers, such as NGINX Ingress Controller and HAProxy Ingress
Controller, which provide advanced features for routing traffic, performing SSL
termination, and implementing load balancing strategies.
Service Load Balancing: Kubernetes Services abstract away the underlying
pod IP addresses and provide a stable endpoint for accessing applications run-
ning within a cluster. Kubernetes Service resources support built-in load bal-
ancing mechanisms, such as Round Robin and Session Affinity, for distributing
incoming traffic among pod replicas.
In this project we’re going to use the ingress load-balancing mechanism.
Implementing an Ingress resource
We can create an Ingress resource to route external traffic to our order-service as an
example through the following yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress
spec:
  rules:
    - http:
        paths:
          - path: /order
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
Code 6.3: Order service Ingress resource
This Ingress resource defines a simple routing rule where all incoming HTTP requests
whose path begins with /order are directed to the order-service Service on port 80.
6.3 Caching mechanisms
In our project, we have implemented caching mechanisms to improve performance
and reduce the load on back-end systems. It allows us to store frequently accessed
data in memory and retrieve it quickly when needed.
6.3.1 Springboot Caching
In Spring Boot, we enabled the caching support provided by the Spring Framework.
By annotating methods with caching annotations such as @Cacheable, @CachePut,
and @CacheEvict, we define caching behavior for specific methods. These annota-
tions instruct Spring to cache method results, update cache entries, and evict cache
entries as necessary.
Spring Boot easily integrates with various caching providers, including Ehcache,
Redis, and Caffeine. We configure a cache manager bean in our application context
to manage caching operations and customize cache configuration settings according
to our requirements.
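Conceptually, @Cacheable wraps a method call in cache-aside logic: check the cache first, invoke the method only on a miss, and remember the result. The following plain-Java sketch illustrates this behavior; it is not Spring's actual implementation, and the ProductCache class and loader function here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Plain-Java sketch of the cache-aside pattern that @Cacheable applies:
// return a cached value if present, otherwise compute and remember it.
class ProductCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int misses = 0; // counts how often the expensive loader ran

    String getProduct(String id, Function<String, String> loader) {
        return cache.computeIfAbsent(id, key -> {
            misses++;
            return loader.apply(key); // e.g. a database lookup
        });
    }

    int getMisses() {
        return misses;
    }

    public static void main(String[] args) {
        ProductCache cache = new ProductCache();
        Function<String, String> dbLookup = id -> "product-" + id;
        cache.getProduct("42", dbLookup); // miss: loader runs
        cache.getProduct("42", dbLookup); // hit: served from memory
        System.out.println(cache.getMisses()); // prints 1
    }
}
```

Spring's cache abstraction adds key generation, configurable cache stores, and eviction on top of this basic idea, but the miss-then-populate flow is the same.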
6.3.2 Caching in container technologies
In addition to caching mechanisms implemented within our Spring Boot microser-
vices, we can further enhance performance and scalability by implementing caching
at the container and orchestration levels using Docker and Kubernetes.
Docker Caching
Docker provides built-in support for caching intermediate layers during the image
build process. By reusing cached layers, Docker reduces the time and resources
required to build and deploy container images. We can make use of Docker’s caching
mechanism by optimizing our Dockerfiles and ensuring that frequently changing
layers are placed lower in the file to maximize caching efficiency. Here's an example
of how we wrote our layered Dockerfile:
FROM eclipse-temurin:17.0.4.1_1-jre as builder
WORKDIR extracted
ADD target/*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

FROM eclipse-temurin:17.0.4.1_1-jre
WORKDIR application
COPY --from=builder extracted/dependencies/ ./
COPY --from=builder extracted/spring-boot-loader/ ./
COPY --from=builder extracted/snapshot-dependencies/ ./
COPY --from=builder extracted/application/ ./
EXPOSE 8080
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
Code 6.4: Layered Dockerfile
And here’s what an unlayered Dockerfile looks like:
FROM openjdk:17

COPY target/*.jar app.jar

ENTRYPOINT ["java", "-jar", "/app.jar"]
Code 6.5: Unlayered Dockerfile
Kubernetes Caching
In Kubernetes, caching can be implemented at the application layer using in-memory
data stores such as Redis or Memcached. Additionally, Kubernetes itself employs
caching mechanisms to optimize resource allocation and improve performance. For
example, Kubernetes caches information about objects such as Pods, Services, and
Deployments in its internal etcd data store, reducing the need for frequent API calls
and improving responsiveness.
6.4 Dynamic scaling strategies
Dynamic scaling in Kubernetes refers to the ability of the Kubernetes cluster
to automatically adjust the number of running instances (pods) of a particular
application based on changes in demand or workload. This is typically achieved
through Horizontal Pod Autoscaling (HPA), a feature provided by Kubernetes.
With HPA, we can define certain metrics (such as CPU utilization or custom met-
rics) and specify target thresholds for these metrics. When the observed metric
values exceed or fall below the defined thresholds, Kubernetes automatically scales
the number of replicas (pods) of the application up or down to meet the desired
performance or resource utilization targets.
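The scaling decision itself can be approximated by a simple formula: the desired replica count is the current count scaled by the ratio of observed to target utilization, rounded up and clamped to the configured bounds. Below is a small illustrative sketch; the HpaFormula class is hypothetical, and the real controller additionally applies tolerances and stabilization windows:

```java
// Sketch of the scaling formula the HPA controller applies:
// desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
class HpaFormula {
    static int desiredReplicas(int currentReplicas,
                               double currentUtilization,
                               double targetUtilization,
                               int minReplicas, int maxReplicas) {
        int desired = (int) Math.ceil(
                currentReplicas * currentUtilization / targetUtilization);
        // Clamp the result to the configured replica bounds.
        return Math.max(minReplicas, Math.min(maxReplicas, desired));
    }

    public static void main(String[] args) {
        // 2 replicas at 100% CPU against a 50% target -> scale out to 4.
        System.out.println(desiredReplicas(2, 100, 50, 1, 10)); // prints 4
        // 4 replicas at 10% CPU against a 50% target -> scale in to 1.
        System.out.println(desiredReplicas(4, 10, 50, 1, 10)); // prints 1
    }
}
```

This makes the behavior of the HPA resource in the next section easy to predict: with a 50% CPU target, sustained utilization above the target grows the replica count proportionally, up to the configured maximum.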
6.4.1 Horizontal Pod Autoscaling (HPA)
To enable HPA for a deployment, we need to create a HorizontalPodAutoscaler
resource that references the deployment and specifies the auto-scaling behavior based
on CPU utilization or custom metrics. Let’s take our product-service as an example
and create a HPA resource for it:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: product-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product-service
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
Code 6.6: Horizontal Pod Autoscaler resource
This HPA will auto-scale the product-service Deployment based on CPU utilization,
with a target average utilization of 50%. It will scale between 1 and 10 replicas,
adjusting dynamically to meet the defined CPU utilization target.
Chapter 7
Results and Evaluation
In this chapter, we outline the methodology used for performance analysis, the
metrics employed for evaluating system performance, and the experimental setup
and scenarios considered during testing.
To achieve this, we utilized a combination of tools and techniques, including:
Prometheus and Grafana: These tools were employed for real-time moni-
toring and visualization of system metrics. Prometheus collected metrics from
various components of the microservices architecture, while Grafana provided
interactive dashboards for analyzing performance data.
Zipkin Distributed Tracing: Zipkin facilitated the tracing of requests across
multiple microservices, allowing us to identify latency bottlenecks and visualize
the flow of requests through the system.
7.1 Metrics used for evaluating system performance
7.1.1 Latency
Latency measures the time taken for a request to travel from the client to the server
and back. Lower latency indicates faster response times and better overall system
performance.
After the implementation of optimization techniques such as resource allocation,
load-balancing, caching mechanisms, and dynamic scaling, the system demonstrated
significant improvements in various performance metrics. Notably, latency experi-
enced a substantial reduction, with response times decreasing by approximately
40%. This improvement signifies enhanced responsiveness and smoother user expe-
riences across different functionalities of the application. The following figure shows
the trace of an order request which calls an inventory request showing significant
time decrease:
7.1.2 Scalability
Scalability assesses the system’s ability to handle increasing workload and user
demand without significant degradation in performance. It involves measuring how
well the system adapts to changes in load and scales resources accordingly.
With the integration of optimization techniques, the system’s scalability witnessed
remarkable enhancements. It now shows increased stability and resilience in man-
aging varying workloads, ensuring consistent performance even under high load sce-
narios. For instance, the system’s throughput capacity improved by around 50%,
allowing it to handle a higher volume of concurrent requests without compromising
response times or service availability.
7.1.3 Resource utilization
Resource utilization evaluates the efficient use of system resources such as CPU,
memory, and network bandwidth. It helps identify potential bottlenecks and
optimize resource allocation to improve system performance.
Optimization techniques have positively impacted resource utilization, leading to
more efficient allocation and utilization of system resources. For example, CPU
usage decreased by approximately 20%, indicating improved efficiency in processing
tasks and reducing the risk of resource exhaustion. Additionally, memory usage
optimization resulted in a 25% decrease in heap used by the application, allowing
for better memory management and overall system stability.
Figure 7.1: Resources Usage in Grafana
7.1.4 Fault Tolerance
Fault tolerance refers to the system’s ability to continue operating properly in the
event of a failure. Before optimization, our system exhibited limited fault toler-
ance, with longer recovery times and higher susceptibility to failures. However, after
implementing optimization techniques, such as circuit breakers and retry policies,
the system’s fault tolerance significantly improved. It showed more than 75% im-
provement in recovery time after failure. Now, the system can quickly recover from
failures, minimizing downtime and ensuring continuous availability.
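A retry policy of the kind mentioned above can be sketched in a few lines of Java. This is an illustrative sketch only (the RetryPolicy class is hypothetical); production code would typically use a library such as Resilience4j and add backoff between attempts:

```java
import java.util.function.Supplier;

// Minimal sketch of a retry policy: attempt an operation up to
// maxAttempts times before giving up and rethrowing the failure.
class RetryPolicy {
    static <T> T withRetry(Supplier<T> operation, int maxAttempts) {
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) {
                lastFailure = e; // remember the failure and try again
            }
        }
        throw lastFailure;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated flaky call: fails twice, then succeeds.
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient error");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

A circuit breaker builds on the same idea but also stops calling a failing service entirely for a cool-down period, so that transient failures do not cascade through the system.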
7.1.5 Conclusion
In conclusion, the optimization of our microservices-based application built with
Spring Boot, Docker, and Kubernetes has led to substantial improvements in system
performance, scalability, and resource utilization. By analyzing key metrics such as
request response time, CPU usage, memory utilization, error rate, recovery time
after failure, and system stability, we have demonstrated the effectiveness of
optimization techniques in enhancing system efficiency and reliability.
The table below summarizes the metrics before and after optimization:
Metric                        | Before Optimization | After Optimization
------------------------------|---------------------|-------------------
Average Response Time         | 3.7s                | 2.414s
Throughput                    | 144 requests/min    | 309 requests/min
CPU Usage                     | 70%                 | 50%
Memory Usage                  | 5.16%               | 4.1%
Network Latency               | 96ms                | 50ms
Error Rate                    | 5%                  | 2%
Scalability                   | Low                 | High
Recovery Time after Failure   | 2 minutes           | 30 seconds
Table 7.1: Performance Metrics Before and After Optimization
Also, the following graph shows the difference between response times before and
after optimization:
Figure 7.2: Response times for different requests
Overall, optimizing an application not only improves system performance but also
enhances fault tolerance, user satisfaction, and developer productivity. Moving
forward, further research and experimentation can be conducted to explore
additional optimization techniques and address any remaining limitations in
the system.
7.2 Impact of software optimization techniques
Optimization is the backbone of every system that involves decision-making and
optimal strategies. It plays an important role and influences our lives, directly or
indirectly, in ways that cannot be avoided or neglected. [36]
In this section, we analyze the impact of optimization techniques on the performance
of systems, highlighting their effectiveness and importance in real-world scenarios.
7.2.1 Enhancing System Performance
Optimization techniques play a pivotal role in transforming system performance,
leading to smoother operations, faster response times, and improved reliability.
7.2.2 Boosting Revenue Generation
The optimization of system performance directly correlates with revenue generation.
By ensuring faster transaction processing, minimized downtime, and enhanced user
experience, businesses can attract more customers, increase sales, and ultimately
boost revenue.
7.2.3 User Satisfaction
A well-optimized system translates to a superior user experience. Faster load times,
easy transactions, and minimal disruptions result in heightened user satisfaction and
loyalty. Satisfied users are more likely to return to the platform and recommend it
to others, thereby expanding the customer base.
7.2.4 Empowering Developers
Optimization techniques simplify development processes, making it easier for de-
velopers to build, deploy, and maintain applications. Automation of tasks, efficient
resource allocation, and proactive monitoring alleviate developer burden, allowing
them to focus on innovation and adding value to the system.
7.2.5 Facilitating Scalability
Optimized systems are inherently more scalable, capable of handling increased work-
load and user demand without compromising performance. Scalability is essential
for accommodating business growth, adapting to market fluctuations, and ensuring
uninterrupted service delivery even during peak periods.
7.2.6 Driving User Acquisition
A well-optimized system attracts more users with its superior performance, reli-
ability, and user experience. Positive word-of-mouth, enhanced brand reputation,
and increased visibility in the marketplace contribute to user acquisition efforts,
expanding the user base and driving business growth.
7.2.7 Real-world examples
Netflix
Netflix, the popular streaming service, continuously invests in optimizing its software
infrastructure to deliver a satisfying user experience. They use some of the following
techniques in their optimization process:
Caching: They have implemented sophisticated content delivery networks
(CDNs) and caching mechanisms to ensure that users can stream their fa-
vorite movies and TV shows with minimal buffering and load times.
Resource allocation: Netflix also utilizes a cloud-based infrastructure and dy-
namic scaling techniques to adapt to fluctuating demand and traffic patterns.
During peak hours, such as evenings and weekends, Netflix automatically scales
its resources to accommodate the increased streaming activity, ensuring that
users can access content without interruptions.
The following figure shows the growth of their revenue throughout the years,
supported by the resilience of their system:
Figure 7.3: Netflix revenue growth
Spotify
Spotify, a popular music streaming service, optimizes its software to improve user
experience and platform efficiency. The following mechanisms are used to op-
timize their service:
Streaming Performance Optimization: Spotify optimizes its streaming
infrastructure to deliver high-quality audio with minimal buffering and latency.
This involves optimizing network protocols, server configurations, and content
delivery mechanisms to ensure smooth playback even under varying network
conditions.
Personalized Recommendations: Spotify uses machine learning algorithms
to analyze user listening habits and preferences, providing personalized mu-
sic recommendations and playlists. By continuously refining its recommenda-
tion engine, Spotify enhances user satisfaction and engagement, leading to
increased user retention and streaming hours.
Platform Stability and Reliability: Spotify focuses on optimizing its platform's stability and reliability to minimize downtime and service disruptions. This includes proactive monitoring of server infrastructure, rapid incident response protocols, and continuous performance tuning to handle peak load periods and unexpected traffic spikes.
The following figure shows one of the optimization processes that took place at Spotify, cutting build times by more than a factor of four:
Figure 7.4: Spotify build times decrease
Chapter 8
Conclusion
The primary objective of this thesis was to go through the whole process of developing, deploying, and optimizing a microservices-based application using Spring Boot, Docker, and Kubernetes, and we have successfully achieved this objective. The work involved developing individual microservices, ensuring their seamless integration, and optimizing their performance. Subsequently, the focus shifted towards containerization, where Docker was employed to encapsulate these microservices into portable and lightweight images. The next phase covered deploying these Docker images onto Kubernetes.
Finally, the thesis turned to the crucial aspect of optimization techniques, aiming to enhance the efficiency, scalability, and resilience of the deployed microservices architecture. Through the exploration and implementation of various optimization strategies, the thesis demonstrated the impact of these techniques on the overall performance and reliability of the microservices ecosystem.
8.1 Summary of key findings
In this thesis, we explored the design, development, deployment, and optimization
of microservices architecture using Spring Boot, Docker, and Kubernetes. Through
a comprehensive analysis, we investigated the impact of optimization techniques on
system performance and scalability. Key findings include the significant improvements in response times, resource utilization, and overall system efficiency achieved through optimization strategies such as resource allocation, load balancing, caching mechanisms, and dynamic scaling.
8.2 Contributions to the field
Our research contributes to the field of microservices architecture by providing insights into effective optimization techniques and their practical implementation in real-world scenarios. By documenting the process of designing, deploying, and optimizing microservices-based applications, we offer valuable guidance to developers, architects, students, and organizations seeking to use microservices for enhanced performance, scalability, and reliability. In addition, this research can serve as a guide for anyone developing a microservices application, deploying an application with Docker and Kubernetes, or optimizing an existing application.
8.3 Limitations and future work
While our study showcases several benefits of optimization techniques, it is essential to acknowledge certain limitations and identify avenues for future research. Firstly, our experimentation was conducted on a single device, which limits the scope of our findings to a non-distributed system environment. Future research could explore the application of optimization techniques in distributed systems to assess their efficacy in more complex, real-world scenarios.
Additionally, the simplicity of our project design presents an opportunity for
further investigation into scalability and performance optimization on a larger
scale. Scaling the project to accommodate higher loads or applying optimization
techniques to a more practical, real-world project could provide valuable insights
into the scalability limits and performance gains achievable through optimization.
Furthermore, ongoing advancements in cloud-native technologies and best practices require continuous evaluation and refinement of optimization strategies. Future work could involve exploring emerging technologies, experimenting with new optimization approaches, and adapting optimization strategies to evolving industry standards and requirements.
In conclusion, the impact of optimization techniques on system performance is multifaceted and far-reaching. From revenue generation and user satisfaction to developer empowerment and scalability, the benefits of optimization extend across various facets of the business ecosystem. By prioritizing optimization and continuously refining strategies based on user feedback and industry best practices, organizations can unlock new opportunities for growth, innovation, and success in the digital market.
Bibliography
[1] James Lewis and Martin Fowler. "Microservices: a definition of this new architectural term". In: martinFowler.com (2014).
[2] J. Turnbull. The Docker Book: Containerization Is the New Virtualization. James Turnbull, 2014. isbn: 9780988820203. url: https://books.google.hu/books?id=4xQKBAAAQBAJ.
[3] A. Weil. Learn Kubernetes - Container orchestration using Docker. Lulu.com, 2020. isbn: 9780244258023. url: https://books.google.hu/books?id=zVDaDwAAQBAJ.
[4] Tiago Costa Santos. "Adopting Microservices: Migrating a HR tool from a monolithic architecture". MA thesis. Técnico Lisboa, 2018.
[5] Kasper Stenroos. "Microservices in Software Development". Metropolia University of Applied Sciences, 2019.
[6] Miina Koskinen. "Microservices and Containers - Benefits and Best Practices". Turku University of Applied Sciences, 2016.
[7] Jon Mukaj. "Containerization: Revolutionizing software development and deployment through microservices architecture using Docker and Kubernetes". Epoka University, 2023.
[8] Lenard Jensen. "Resilient System deployment using Kubernetes". MA thesis. University of Lübeck, 2020.
[9] Jesús López Pocino. "Analysis And Practice Of Micro-Services Deployment Technologies based on Containers". MA thesis. Technical University of Catalonia, 2021.
[10] Miika Moilanen. "Deploying an application using Docker and Kubernetes". MA thesis. Oulu University of Applied Sciences, 2018.
[11] Kim Lehtinen. "Scaling a Kubernetes Cluster". MA thesis. School of Technology and Innovations, 2022.
[12] Joshua Steinmann. "Optimizing Kubernetes Cluster Down-Scaling for Resource Utilization". MA thesis. Radboud University, 2022.
[13] D. Rajput. Hands-On Microservices Monitoring and Testing: A performance engineer's guide to the continuous testing and monitoring of microservices. Packt Publishing, 2018. isbn: 9781789138405.
[14] Anita Ihuman. "Microservice tools: The top 10 for monitoring and testing". In: architect.io (2022).
[15] Muhammad Waseem et al. "Design, monitoring, and testing of microservices systems: The practitioners' perspective". In: Journal of Systems and Software (2021). issn: 0164-1212.
[16] Zhijun Ding, Song Wang, and Changjun Jiang. "Kubernetes-Oriented Microservice Placement With Dynamic Resource Allocation". In: IEEE Transactions on Cloud Computing (2023).
[17] Eunsook Kim, Kyungwoon Lee, and Chuck Yoo. "On the Resource Management of Kubernetes". In: 2021.
[18] Luciano Baresi et al. "KOSMOS: Vertical and Horizontal Resource Autoscaling for Kubernetes". In: Service-Oriented Computing. Springer International Publishing, 2021. isbn: 978-3-030-91431-8.
[19] Brando Chiminelli. "Optimizing Resource Allocation in Kubernetes: A Hybrid Auto-Scaling Approach". MA thesis. KTH Royal Institute Of Technology, 2023.
[20] Alexander Sundberg. "A study on load balancing within microservices architecture". MA thesis. Halmstad University, 2019.
[21] Abishek Sanjay Shitole. "Dynamic Load Balancing of Microservices in Kubernetes Clusters using Service Mesh". MA thesis. Dublin, National College of Ireland, 2022.
[22] Joel Sandman. "Evaluation of Caching Methodologies for Microservice-Based Architectures in Kubernetes". MA thesis. Elastisys, 2021.
[23] Greg L. Turnquist, Alex Antonov, and Claudio Eduardo de Oliveira. Developing Java Applications with Spring and Spring Boot. Packt Publishing, 2018. isbn: 9781789534757.
[24] N. Patel and K. Patel. Java 9 Dependency Injection: Write loosely coupled code with Spring 5 and Guice. Packt Publishing, 2018. isbn: 9781788296472. url: https://books.google.hu/books?id=D99YDwAAQBAJ.
[25] L. Parziale et al. Getting Started with Docker Enterprise Edition on IBM Z. IBM Redbooks, 2019. isbn: 9780738457505. url: https://books.google.hu/books?id=okyMDwAAQBAJ.
[26] Karthikeyan Ranganathan. "Netflix Shares Cloud Load Balancing And Failover Tool: Eureka!" In: Netflix TechBlog (2012).
[27] A. Sarin and J. Sharma. Getting started with Spring Framework: covers Spring 5 (4th Edition). 2017. url: https://books.google.hu/books?id=rjRjDwAAQBAJ.
[28] M. Larsson. Hands-On Microservices with Spring Boot and Spring Cloud: Build and deploy Java microservices using Spring Cloud, Istio, and Kubernetes. Packt Publishing, 2019. isbn: 9781789613520. url: https://books.google.hu/books?id=QFqxDwAAQBAJ.
[29] S. Thorgersen and P.I. Silva. Keycloak - Identity and Access Management for Modern Applications: Harness the power of Keycloak, OpenID Connect, and OAuth 2.0 protocols to secure applications. Packt Publishing, 2021. isbn: 9781800564701. url: https://books.google.hu/books?id=WBsvEAAAQBAJ.
[30] M. Larsson. Microservices with Spring Boot and Spring Cloud: Build resilient and scalable microservices using Spring Cloud, Istio, and Kubernetes, 2nd Edition. Packt Publishing, 2021. isbn: 9781801079150. url: https://books.google.hu/books?id=dt06EAAAQBAJ.
[31] A. Parker et al. Distributed Tracing in Practice: Instrumenting, Analyzing, and Debugging Microservices. O'Reilly Media, 2020. isbn: 9781492056607. url: https://books.google.hu/books?id=g5bcDwAAQBAJ.
[32] MetricFire Blogger. "How Grafana and Prometheus work together". In: MetricFire (2023).
[33] W. Venema. Building Serverless Applications with Google Cloud Run. O'Reilly Media, 2020. isbn: 9781492057062. url: https://books.google.hu/books?id=P14MEAAAQBAJ.
[34] N. Franssens, S. Gopalakrishnan, and G. Lenz. Hands-on Kubernetes on Azure: Use Azure Kubernetes Service to automate management, scaling, and deployment of containerized applications, 3rd Edition. Packt Publishing, 2021. isbn: 9781801078917. url: https://books.google.hu/books?id=c0cvEAAAQBAJ.
[35] James Walker. "Docker Compose vs Kubernetes Differences Explained". In: spacelift.io (2024).
[36] S. Nayak. Fundamentals of Optimization Techniques with Algorithms. Elsevier Science, 2020. isbn: 9780128224922. url: https://books.google.hu/books?id=qcfrDwAAQBAJ.
List of Figures
2.1 Microservices Architecture . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Virtual machines vs Containers . . . . . . . . . . . . . . . . . . . . . 17
3.1 Spring MVC architecture . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Docker infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.1 Spring initializr project overview . . . . . . . . . . . . . . . . . . . . 34
4.2 Product project structure in Intellij . . . . . . . . . . . . . . . . . . . 35
4.3 Post request to create a new product . . . . . . . . . . . . . . . . . . 38
4.4 Get request to get the created product . . . . . . . . . . . . . . . . . 39
4.5 Order project structure in Intellij . . . . . . . . . . . . . . . . . . . . 39
4.6 Eureka discovery service . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.7 API Gateway Representation . . . . . . . . . . . . . . . . . . . . . . 47
4.8 Keycloak Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.9 Circuit Breaker Pattern . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.10 Zipkin dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.11 Grafana Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.1 Building Docker image with Jib . . . . . . . . . . . . . . . . . . . . . 60
5.2 Docker hub image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Docker Compose containers . . . . . . . . . . . . . . . . . . . . . . . 63
5.4 Running a deployment . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.5 Kubernetes running components . . . . . . . . . . . . . . . . . . . . . 69
5.6 Docker Compose vs Kubernetes . . . . . . . . . . . . . . . . . . . . . 70
7.1 Resources Usage in Grafana . . . . . . . . . . . . . . . . . . . . . . . 81
7.2 Response times for different requests . . . . . . . . . . . . . . . . . . 83
7.3 Netflix revenue growth . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.4 Spotify build times decrease . . . . . . . . . . . . . . . . . . . . . . . 86
List of Codes
4.1 Product.java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2 ProductRepository.java . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.3 ProductController.java . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 ProductService.java . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.5 MySQL configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.6 OrderController.java . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7 InventoryController.java . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.8 InventoryService.java . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.9 Root project pom.xml . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.10 WebFlux dependency . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.11 WebClient configuration . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.12 Discovery server dependency . . . . . . . . . . . . . . . . . . . . . . . 45
4.13 Discovery server implementation . . . . . . . . . . . . . . . . . . . . . 45
4.14 Discovery server client dependency . . . . . . . . . . . . . . . . . . . 46
4.15 Discovery server client configuration . . . . . . . . . . . . . . . . . . . 46
4.16 api-gateway/application.properties . . . . . . . . . . . . . . . . . . . 47
4.17 Running Keycloak container in Docker . . . . . . . . . . . . . . . . . 48
4.18 SecurityWebFilterChain configuration . . . . . . . . . . . . . . . . . . 50
4.19 Resillience4j dependency . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.20 Resillience4j configuration . . . . . . . . . . . . . . . . . . . . . . . . 51
4.21 OrderController.java . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.22 Zipkin dependency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.23 Zipkin configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.24 Kafka dependency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.25 Kafka configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.26 Notification service dependencies . . . . . . . . . . . . . . . . . . . . 55
4.27 Prometheus dependency . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.28 Prometheus configuration . . . . . . . . . . . . . . . . . . . . . . . . 56
5.1 api-gateway Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.2 Docker build command . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3 Docker run command . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.4 Jib dependency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.5 Jib build command . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.6 docker-compose file . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.7 Executing docker-compose file . . . . . . . . . . . . . . . . . . . . . . 63
5.8 api-gateway deployment file . . . . . . . . . . . . . . . . . . . . . . . 66
5.9 Applying kube deployment file . . . . . . . . . . . . . . . . . . . . . . 67
5.10 api-gateway service file . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.11 MongoDB PVC yaml file . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.1 Resource requests and limits . . . . . . . . . . . . . . . . . . . . . . . 73
6.2 api-gateway/application.properties . . . . . . . . . . . . . . . . . . . 73
6.3 Order service Ingress resource . . . . . . . . . . . . . . . . . . . . . . 75
6.4 Layered Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.5 Unlayered Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.6 Horizontal Pod Autoscaler resource . . . . . . . . . . . . . . . . . . . 78
Project links
Dockerhub
You can reach all the Docker images created during the project below:
Click here
GitHub
You can find the codebase for the project here:
Click here