Microservices: Architecting for Continuous Delivery
and DevOps
Lianping Chen
Lianping Chen Limited
Dublin, Ireland
Abstract—Businesses today need to respond to customer
needs at unprecedented speeds. Driven by this need for speed,
many companies are rushing to the DevOps movement and
implementing Continuous Delivery (CD). I have been
implementing DevOps and CD for Paddy Power, a multi-billion-
euro betting and gaming company, for four years. I have found
that software architecture can be a key barrier. To address the
architectural challenges, we tried an emerging architectural style
called Microservices. I have observed increased deployability,
modifiability, and resilience to design erosion. At the same time, I
also observed new challenges associated with the increased
number of services, evolving contracts among services,
technology diversity, and testing. I share the practical strategies
that can be employed to address these new challenges, discuss
situations for which Microservices may not be a good choice, and
outline areas that require further research.
Keywords—Continuous Delivery; Continuous Deployment;
DevOps; Architecture; Microservices; Agile; SOA
I. INTRODUCTION

Businesses today need to respond to customer needs at
unprecedented speeds [1]. Driven by this need for speed, a
movement called DevOps, which aims at establishing a culture
and environment in which software can be built and released
quickly and reliably, has rapidly gained traction.
A key approach used in this DevOps movement is
Continuous Delivery (CD) [2]. CD is a software engineering
approach in which teams produce valuable software in short
cycles and ensure that the software can be reliably released at
any time [3]. Companies that have adopted CD have reported
huge benefits [3], which have motivated more companies to
adopt CD.
However, implementing CD can be very challenging [3, 4].
Based on my experiences in adopting CD over four years at a
multi-billion-euro betting company, I found that software
architecture can be a key barrier. However, discussions of CD
have primarily focused on build, test, and deployment
automation [5]; insufficient attention has been paid to software architecture.
To address the architectural challenges, we have tried an
emerging architectural style called Microservices. In this
article, I report on the main benefits we have achieved from
using Microservices, describe the new challenges that have
arisen from its use, present the practical approaches that can be
employed to address these new challenges, discuss situations
for which Microservices may not be a good choice, and outline
areas that require further research.
II. BACKGROUND

A. DevOps and CD at Paddy Power
Paddy Power was a rapidly growing company with
approximately 5,000 employees and a turnover of €7 billion. It
merged with Betfair in 2016, making it the world's largest
public online betting and gaming company.
Paddy Power offers its services in regulated markets
through betting shops, phones, and the Internet. The company
relies heavily on several hundreds of custom software
applications. These applications are developed using a wide
range of technology stacks, including Java, Ruby, PHP,
JavaScript, .NET, and C++. The diversity of applications and
technology stacks results in diverse application architectures,
which provided me with the opportunity to observe the role of
software architecture in DevOps and CD adoption.
The adoption of DevOps and CD began in 2012. The CIO
championed the cultural change and revamped the
organizational structure to promote cross-functional
collaboration both between and within teams. A team of eight
people (our team) was established to move applications to CD.
Within four years, we had implemented CD for more than
60 different applications and services. For each of these, when
a developer commits changed code, a CD pipeline execution is
automatically triggered. The code change passes through a
series of stages that check the quality of this code change. If
the change fails to pass any of the pipeline stages, the pipeline
process is aborted, and the developers are notified to fix the
problems. If the change passes all these stages, then the change
can be released to production with the click of a button.
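The gating behavior described above can be sketched as follows. This is a minimal illustration of the abort-on-failure logic only; the stage names and checks are hypothetical, not the actual pipeline's.

```python
# Sketch of the CD pipeline gating logic: each stage checks the code
# change, the run aborts on the first failure (and developers are
# notified), and only a fully green run marks the change releasable.

def run_pipeline(change, stages):
    """Run `change` through `stages`; return (releasable, stage results)."""
    results = []
    for name, check in stages:
        passed = check(change)
        results.append((name, passed))
        if not passed:
            # Abort the pipeline; developers fix the problem and recommit.
            return False, results
    # All stages passed: releasable with the click of a button.
    return True, results

# Illustrative stages; a real pipeline would build, test, and provision.
stages = [
    ("build", lambda c: c.get("compiles", False)),
    ("unit tests", lambda c: c.get("tests_pass", False)),
    ("acceptance tests", lambda c: c.get("acceptance_pass", False)),
]
```

A failing stage short-circuits the run, which is what gives developers the prompt feedback discussed later in the paper.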
B. Architectural Challenge
Over this four-year journey, we had investigated many
applications in the company and had attempted to move them
to CD. As I reported in my previous work [6], an application’s
architecture can be a key barrier.
To better understand the characteristics that make an
application amenable to CD, we compared the architectural
characteristics of the applications that were easily moved to
CD with those that were difficult to move to CD [6]. A set of
characteristics, in the form of Architecturally Significant
Requirements (ASRs) [7], was identified [6]. Of those,
deployability and modifiability often shape architectural decisions.

(Accepted for publication by the main track of ICSA 2018; please cite as: Lianping Chen, "Microservices: Architecting for Continuous Delivery and DevOps," in IEEE International Conference on Software Architecture, 2018, Seattle, USA: IEEE.)
1) Deployability: A level of deployability that is
acceptable for a traditional multi-month release model is no
longer acceptable for CD. When releasing twice a year, we
can take the system down for a release, assign a significant
task force to perform the deployment, and be less concerned
with the actual time of the deployment as long as it completes
within some time window.
However, all of the above are unacceptable with the high
frequency of releases in CD. To maintain system availability,
reliable zero-downtime deployments are required. To handle
the large number and high frequency of deployments in both
testing and production environments, deployments must be
automated. A deployment must also be fast to allow the
execution of the CD pipeline to finish within an acceptable
amount of time, so that developers can obtain prompt feedback
regarding the production readiness of a change.
2) Modifiability: Deployability is only one aspect of
increasing the speed of software delivery. The modifiability of
an application itself is also important. Although modifiability
is also a concern when architecting an application in a
traditional context, the level of modifiability required in CD is
often higher. This is because in CD, the delivery process is no
longer a bottleneck, and thus, the speed at which the software
development team can make modifications becomes the factor
that determines the speed of software delivery. To compete
effectively, the business must increase the modifiability of the
application itself to respond more quickly to customer needs
and requests.
Thus, in CD, deployability and modifiability have become
more demanding, and we can no longer trade them off lightly.
C. The Use of Microservices
Driven by the increased priorities of deployability and
modifiability and the concomitant increased degree of
requirements on these quality attributes, we moved from a
monolithic architecture to a Microservices architecture [8].
Rather than building large monolithic applications, we
organize systems into small, self-contained, single-
responsibility units that can be independently developed,
tested, and deployed.
We decided to try Microservices for two practical reasons:
We had tried to build well-modularized monolithic
applications, but they often eventually evolved into “big
balls of mud”. Consequently, it became necessary to try
a new approach.

Microservices looked promising in bringing us the
levels of deployability and modifiability that we needed.
The team size per service ranges from two to five people.
We do not have strict rules on the lines of code per service but
a rule of thumb is that each service must be kept to a size
that a single engineer can comprehend.
III. BENEFITS OBSERVED

After we moved our applications to a Microservices
architecture, we observed increased deployability,
modifiability, and resilience to architecture erosion, as shown
in Fig. 1.
A. Increased Deployability
We have observed the following deployability
improvements, which are essential for achieving CD, as
discussed above.
1) Deployment Independence: Prior to using
Microservices, multiple teams worked on the same code base
of a large monolithic application. When one team made a
change, that team was unable to release it independently.
Instead, they first needed to merge the change into the master
branch. To avoid the chaos caused by multiple simultaneous
merges, they often needed to queue the merge. The merge
could often involve conflicts. Resolving these conflicts
required coordination with other teams. Then, after the merge,
the entire application needed to be re-tested to find any new
errors that might have been introduced. Finally, the decision to
actually make the release was beyond the team’s control:
because the entire application was released as one unit,
many other teams’ changes also needed to be considered when
making the release decision.
After the application has been moved to Microservices,
there is no need to queue for merges; no need to resolve merge
conflicts with other teams; and no need to coordinate with and
wait for other teams’ changes. Finally, each team can deploy
their changes independently.
2) Shorter Deployment Time: Prior to using
Microservices, a deployment needed to deploy all components
of a large monolithic application. Now, the monolith has been
broken down into many smaller services. Each service is
significantly smaller than the corresponding former monolith.
Deploying such a small service takes a shorter amount of time
than that for the corresponding former monolith. For example,
one of the applications has been broken down into 11
microservices. The time for deploying a change has reduced
from 30 minutes to 3 minutes.
This shorter deployment time does make a difference. For
most pipelines, the software change will be deployed to four
different testing environments (i.e., four deployments).
Consequently, the reduction in deployment time helps to
shorten the pipeline execution time. This shortened pipeline
execution time is important for practicing CD. First, the
pipeline duration determines the minimum time with which a
code change can be released to production, which is
particularly critical when we need to release an important
software fix to production. Second, our team leads reported
that the pipeline duration impacts the developers’ code commit
behavior. The shorter that time is, the more frequently the
developers commit code.
Fig. 1. Benefits of Microservices
3) Simpler Deployment Procedures: The procedures to
deploy a large monolithic application are generally more
complex than the procedures to deploy a microservice. A
large monolith often has some peculiarities in its
deployment. Because of these peculiarities and the
complexity, it is often too difficult (or too costly) to build a
common tool chain that can cater to the needs of all large
monolithic applications.
Even worse, in some cases, building a tool dedicated
solely to a particular big monolith could be difficult. For
example, the company had attempted to automate the
deployment of a complex monolith. Although the company
invested more than a year’s effort for a team of four people,
the deployment was still not completely automated.
In contrast, after the application has been moved to
Microservices, each service is small, which makes its
deployment procedures simpler than that of the large
monolith, and consequently, it is easier to develop
deployment automation that completely eliminates all
manual activities.
More importantly, building a common tool chain that can
handle any microservice becomes feasible. After we built
this tool chain, adding a new microservice no longer requires
a new tool. Thus, this is a more scalable approach.
Developing the tool chain for CD is a formidable task, as
reported by other researchers as well [9]. Thus, simplifying
the tool chain requirements (e.g., simplifying the deployment
procedures for the software) has a significant impact on the
cost and feasibility of the tool chain development.
4) Zero Downtime: We were unable to release most of
our large monolithic applications in a zero-downtime
manner. One common barrier is associated with the
database. For example, one of our large monolithic
applications had about 100 database tables with complex
interrelationships. Almost every release included some
database schema changes. Due to the complexity of the
database schema and the fact that each release contained
changes from multiple development teams, the best
available database migration tools could not handle all the
scenarios [10]; often, the teams were confident in performing
database migrations only when the application was shut down.
After the application has been moved to Microservices,
we can choose the most suitable database for each individual
service. It turned out only a small portion of the services
actually required a relational database. Thus, all the other
services use a NoSQL database, which is more amenable to
schema evolution. Although we still need to address database
schema evolution for services that require a relational
database, because the schema for a microservice is
considerably smaller and under the full control of the small
team that owns the microservice, the team can design the
schema and plan the changes in a way that the existing
database migration tools can handle. Thus, it is easier to
achieve zero-downtime releases with Microservices than
with our former large monolithic applications.
B. Increased Modifiability
1) Shorter Cycle Time for Small Incremental Functional
Changes: For most small incremental functional changes,
we have reduced the cycle time from multiple months to two
to five days. Although we cannot attribute all the cycle time
reduction to the use of Microservices, its use is one factor
in achieving such short (two-to-five-day) cycle times.
a) Faster decision making: With our large monolithic
applications, many design decisions needed to be discussed
and approved by many teams working on the application
because, after all, all the code was in the same code base.
Although the time required of each person contributing to
the decision may not be significant, reaching a mutual
decision across these teams often took a long calendar time
(days or weeks).
With Microservices, as long as a team does not change
the contracts among services, a decision can be made within
the small service team of two to five people rather than
among the 50-odd people working on a large monolithic
application. Often, the decision can be made with only a
short chat among the team members.
b) Shorter communication path from customers to the
responsible developers: I also noticed that the
communication path for a piece of requirement to reach the
responsible developers is shorter than that with a big
monolithic application. A person that is responsible for a
system that has been moved to Microservices said that, “for
many things that people used to call me, they no longer call
me anymore, they call the person responsible for the relevant
microservice directly.” This is probably because a
microservice has higher visibility than a module of a
monolith: the service is developed, released, and operated by
an autonomous team. This shorter communication path helps
to shorten the cycle time.
2) Incremental Quality Attribute Changes: We are not
only able to make incremental functional changes but also to
make incremental changes to quality attributes.
For example, we had a monolithic application called
RMP 1. We broke RMP down into more than fifteen microservices.
As the business grew, the company offered betting
opportunities for more and more sports events. The
company had also attracted an increasing number of
business-to-business customers, who buy price information
(called odds) from the company. Moreover, the company
was opening businesses in more locations (e.g., in Italy,
Australia, and the USA). We call each of these business
customers and locations a price distribution destination (or
destination for short).
After running successfully for two years, RMP was no
longer able to handle the increased load introduced by the
new sporting events and new destinations, requiring us to
improve its scalability and performance.
We analyzed the system and identified the bottleneck,
which involved only one service. When the team was
building the service two years ago, they did not realize that
the business could grow this fast. Thus, the service was not
tested for the new, higher traffic level.
The team was able to make the changes in several weeks.
These involved changing the service architecture to a
Reactive Architecture and changing the programming model
from imperative programming to functional programming.
1 A surrogate name is used here for confidentiality reasons.
Although this change took longer than delivering a
functional increment, it was feasible and not too difficult.
Our engineers commented that it would have been more
difficult to make the change if the service were a component
of a previous large monolith because they would need to
consider how to scale the whole monolith. With
Microservices, the change was localized to one service only;
other services were not affected at all. In addition, the small
size of the service also made the change easier [11].
Another interesting (and somewhat surprising)
observation was that many quality attribute changes that
were associated with the growth of the business did not
require system-wide changes. This probably occurred
because, with Microservices, we partition a system based on
business capabilities. Each service is responsible for a small
business domain that has only a single responsibility, so the
factors that trigger the need for improved quality attributes
differ from one service to another.
For example, the service responsible for displaying the
betting results is affected by the number of concurrent users
who view the betting results after an event. The popularity of
the event together with the success of the marketing is the
factor that triggers the need for improved scalability. This
same factor will not directly trigger the need for improved
scalability for a service that is responsible for calculating the
price. Irrespective of how popular the event is, it is only one
event for the price calculation service.
Certainly, this experience is not broad and long enough to
support a definitive conclusion. However, I believe this is an
interesting point to bring up for discussion and further investigation.
3) Easier Technology Changes: Teams can choose
different technologies for implementing their services. I
found that this is particularly useful when adopting a new
technology. An example was the introduction of the Scala
language, which is a better option for developing low-
latency and high-throughput applications. A team was able
to use Scala in their own service without impacting other
services, which used C++. Such a change would not be
technically easy if it were part of a monolithic application.
The speed with which we can experiment and adopt new
technology is important to maintain our products' superiority.
The impact of new technologies on our products can be as
important as the new features that we build into the products.
4) Easier Language and Library Upgrades: Changing
language and library versions often becomes exponentially
more difficult as the code base expands in a monolith. For
example, previously, when one part of a system required an
upgrade of Java to use its new features, the team could not
simply upgrade because the entire system could only use a
single version of Java; upgrading only one part could break
other parts of the system. Such dependencies and the
required coordination with teams working on other parts of
the system made upgrading difficult and painful.
After we moved a system to Microservices, each team
can upgrade independently because each service no longer
shares the same code base, compilation process, and
language runtime (e.g., Java virtual machine) with other parts
of the system.
Similar to technology changes, the speed with which we
can adopt new features in the new library or language
versions allows us to bring new enhancements to our
customers earlier than our competitors.
In addition, language and library developers are also
attempting to adopt CD. The speed at which they produce
new versions is increasing, which enhances the importance
of making language and library upgrades easier for our teams.
C. Resilience to Architectural Erosion
I found that the strong boundaries provided by
Microservices have important practical implications. One
such implication is better protection against architectural
erosion—provided that the teams correctly define the service
boundaries initially.
We had attempted to build a well-modularized monolith,
but it often eventually evolved into a “big ball of mud”. The
concept of modularization is decades old, and our
applications were built only 10 years ago. When we started,
for all of them, the goal was to build well-modularized
monoliths; we never aimed to build “big balls of mud”.
However, I found that the boundaries between modules
in a big monolith did not tolerate the chaos of real software
development projects, where people are often in a hurry. It
was too tempting and too easy for someone in a hurry to
cross a boundary and grab something that she needed over
there and use it directly. Often, the software eventually
evolved into a “big ball of mud”.
Microservices make the boundaries between components
physical. Each service has its own code base, its own team,
and its own set of virtual machines or containers in which it
runs. Therefore, taking shortcuts is considerably more
difficult in Microservices. For example, some engineers
commented that sometimes, under pressure, they had thought
of crossing a boundary but immediately realized that doing
so was no longer a shortcut because of the physical
boundaries between services. Thus, Microservices provide
better protection against the temptation to break boundaries
under pressure for short-term implementation convenience.
Some of the initial microservices had been running for
more than two years. The boundary crossing problems that
used to be common in monolithic applications had not
occurred. Although it may still be too early to draw a
definitive conclusion, early observations are positive.
IV. NEW CHALLENGES

Despite the above benefits, Microservices is not a silver
bullet. Adopting Microservices introduces new complexities
and challenges. Without properly managing them, we can
easily run into another problematic situation.
A. Increased Number of Services
When we break down a monolithic application into many
smaller microservices, we get an increased number of
services, which introduces a new complexity.
Traditionally, developers hand the software over to
operations engineers for deployment to production. The
deployment involves many manual activities. The traditional
approach is not able to handle the significantly increased
number of services with frequent releases. The large number
of deployment requests greatly exceeds what the operations
engineers can handle.
To tackle this complexity, we built the CD platform. The
CD platform provides a CD pipeline for each service. Upon
each code commit, the CD pipeline automatically builds the
service, runs various automated tests, provisions the
environments for testing, and enables the team to release the
change to production with the click of a button. This removes
the operations engineers as the bottleneck. The CD platform
makes handling frequent releases of a large number of
services possible.
B. Evolving Interactions/Contracts among Services
With Microservices, some interactions that were formerly
internal communications within an application became
external communications among services. This led to more
interactions among services and more contracts to manage.
Not only did the number of interactions/contracts increase,
but all the participants in these interactions also undergo
frequent changes. The strategies that can be used to tackle
this challenge are as follows.
1) Keep Interactions Simple: I found that it is important
to eliminate unnecessary complexities. When new
interactions and contracts among services are introduced,
the teams carefully review them to ensure that any addition
is truly necessary. As a heuristic, I observed that when
complicated interactions occur among services, the design is
often incorrect.
2) Robustness Principle: When we write code that
interacts with other services, we try to follow the robustness
principle: Be conservative in what you send, be liberal in
what you accept from others [12].
For example, we had a service S. An interface of S had a
return message with two fields, F1 and F2. This interface
was being used by five other consuming services. All these
services followed the robustness principle. When they
received a message from S, if the message contained extra
fields, they ignored the extra fields rather than failing the call,
as long as the required fields F1 and F2 were present. At one
point, a new service needed to use S. This sixth service
required a new field, F3. Because all consuming services
followed the robustness principle, we were able to modify
the interface by adding the new F3 field without breaking the existing consumers.
By following the robustness principle, we give the
producing service some leeway to make certain contract
changes without breaking compatibility.
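A minimal sketch of the tolerant-reader side of this principle, reusing the F1/F2 fields from the example above (illustrative code, not the actual services):

```python
REQUIRED_FIELDS = ("F1", "F2")  # the only fields this consumer depends on

def read_response(message: dict) -> dict:
    """Tolerant reader: fail only if a required field is missing,
    and silently ignore any extra fields (such as a newly added F3)."""
    missing = [f for f in REQUIRED_FIELDS if f not in message]
    if missing:
        raise ValueError(f"contract violation, missing fields: {missing}")
    # Keep only what we need; extra fields are ignored, not rejected.
    return {f: message[f] for f in REQUIRED_FIELDS}
```

Because every consumer reads responses this way, the producer can add F3 for the sixth consumer without breaking the original five.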
3) Expand and Contract: Even when all services follow
the robustness principle, we sometimes still need to make
interface changes that could break compatibility.
To avoid breaking compatibility, rather than modifying
the interface, we instead first add a new interface, migrate all
consumer services to use the new interface and, finally,
remove the old interface. In this way, we are able to make
contract changes and still maintain service compatibility.
How do we ensure that service consumers migrate to the
new interfaces, so that we can remove the old interfaces in a
timely manner? The following strategies can be used.
a) Grace period: We mark the old interface as
deprecated and set a grace period. After the grace period
expires, if a team still uses the old interface, that team
becomes responsible for any negative consequences.
b) Interface usage monitoring: To evaluate whether a
deprecated interface is still receiving traffic and which
teams are using it, we need a monitoring system to monitor
the usage of each interface. When the service owner can
validate that a deprecated interface has no traffic, the team
can safely remove that interface.
c) Incentivize migration: What happens if, after the
grace period, a team still uses the deprecated interface? This
is a major practical challenge. Some tactics to incentivize
the migration are as follows: 1) Show alerts on that team's
monitoring system when they use the deprecated interface,
2) temporarily disable the interface. The consuming team
often came and asked, "What happened?" The service owner
often replied with "We notified everyone that we were going
to turn it off after the grace period. We will give you another
week to migrate. After that, we will turn it off permanently."
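The grace-period, monitoring, and disabling tactics above can be sketched as follows; this is a hypothetical helper, not the company's actual tooling.

```python
import datetime

class DeprecatedInterface:
    """Tracks which teams still call a deprecated interface and
    rejects calls once the grace period has expired."""

    def __init__(self, name, grace_period_ends):
        self.name = name
        self.grace_period_ends = grace_period_ends
        self.usage = {}  # team -> call count, fed to the monitoring system

    def call(self, team, handler, *args, now=None):
        now = now or datetime.date.today()
        # Interface usage monitoring: record who still uses the interface.
        self.usage[team] = self.usage.get(team, 0) + 1
        if now > self.grace_period_ends:
            # Incentivize migration: disable the interface after the grace period.
            raise RuntimeError(f"{self.name} is deprecated; please migrate")
        return handler(*args)

    def has_traffic(self):
        # The owner can safely remove the interface once no traffic is seen.
        return bool(self.usage)
```

In practice the usage counts would be scoped to a recent time window before declaring the interface traffic-free.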
C. Technology Diversity
In Microservices, the traditional technical constraint that
all the components of a large monolithic application must use
the same technology stack disappears. Technically, each team
can use whatever technology stack they prefer. This is an
important feature that Microservices advocates promote.
However, using diverse technology stacks in an
uncontrolled manner can lead to problems. For example,
each team using a different technology stack creates
knowledge silos that make moving individuals from over-
staffed to under-staffed teams difficult. Moreover, each
introduced technology stack requires operational overhead,
which can outweigh the benefits of introducing a new
technology stack.
To address these problems, we put the use of polyglot
(multiple technology stacks) under proper governance. Every
new technology stack that a team wants to use must undergo
a review process. This practice helps to keep the technology
landscape manageable while retaining the flexibility of
polyglot development when it is really needed.
D. Testing
Microservices introduces testing challenges, which
mainly arise from changes that impact interactions among
services. We reduce these challenges by using the following strategies.
1) Test-first Mindset and Practices: We realized that an
application’s testability becomes essential. To ensure
testability, the most effective strategy we have found so far is
enforcing a test-first mindset and practices. Before
a user story can be moved to development, the acceptance
criteria must be discussed and defined among the testers,
business analysts, and developers. Test code is reviewed as
rigorously as application code. For any code change, if the
test coverage (both line and condition) is below 90%, the
CD pipeline will fail the build.
With these test-first practices, the teams immediately feel
the pain when they make a design decision that makes testing
difficult. This motivates them to come up with designs that
improve testability.
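The coverage gate can be sketched as a simple check in the pipeline. The 90% threshold is from the text; the report format is hypothetical.

```python
COVERAGE_THRESHOLD = 90.0  # percent, applied to both line and condition coverage

def coverage_gate(report: dict) -> bool:
    """Pass only when both line and condition coverage reach the threshold;
    otherwise the CD pipeline fails the build."""
    return (report.get("line", 0.0) >= COVERAGE_THRESHOLD
            and report.get("condition", 0.0) >= COVERAGE_THRESHOLD)
```

A change that lowers either metric below 90% fails its pipeline run, so coverage regressions are caught on every commit.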
2) Consumer-Driven Contract Testing: The developers
of a consuming service write contract tests to ensure that
producing services meet their expectations. These contract
tests are also run in the pipeline of the producing service. In
this way, the developers of the producing service know
whether their changes will break the contracts expected by
their consumers. These tests provide a safety net for contract changes.
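A consumer-driven contract test might look like the following sketch; the response fields reuse the earlier F1/F2 example and are hypothetical. The consuming team writes the test, and it also runs in the producing service's pipeline.

```python
# Contract test written by the developers of a consuming service.
# It encodes the consumer's expectations of the producer's response;
# because the producing service runs it in its own pipeline, a change
# that breaks the contract fails the producer's build before release.

def fake_producer_response():
    # In a real pipeline this would call the producing service's API.
    return {"F1": "home-win", "F2": 1.95, "F3": "extra"}

def test_contract_fields_present():
    response = fake_producer_response()
    # The consumer depends on F1 and F2 only (and tolerates extras).
    assert "F1" in response
    assert "F2" in response
    assert isinstance(response["F2"], float)
```

If the producer renamed or removed F2, this test would fail in the producer's pipeline, alerting its developers before the breaking change reaches consumers.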
3) Always Online System Integration Test Environment:
The CD pipeline includes a stage in which a service is
deployed to a system integration testing environment, which
is kept always online. In this environment, the service is
configured to use its dependent services, which all have a
deployment in this environment. Tests are executed to
ensure that the service works with its dependent services.
When every service does this, we can be confident that the
entire set of services works together.
4) Test in Production (TiP) and Monitoring: TiP was
introduced. We also enhanced logging and monitoring to
quickly spot anomalies in production. Therefore, even when
an issue arises, we can take immediate action, which
minimizes the impact of the issue. Together, these strategies
give us confidence to release changes to production frequently.
V. WHEN MICROSERVICES MAY NOT BE A GOOD CHOICE

Considering the above new complexities and challenges,
does Microservices suit every situation? No, it does not,
according to my experience.
Dealing with these new complexities and challenges can
require significant costs. For example, building the CD
platform is not a trivial task: doing so has taken a team of
eight people four years, amounting to 32 person-years of
effort. Therefore, if your system is simple enough that it can
be comfortably managed as a monolith, it is not “worth the
trouble” to use Microservices. The Microservices approach is
for handling complex systems that require high speed.
When teams do not have sufficient domain knowledge
and experience to get the service boundaries right and expect
that the boundaries will evolve dramatically over time,
Microservices can be a dangerous option because refactoring
service boundaries is difficult, lacks tool support, and no
successful experience with it has been reported yet. In contrast,
refactoring a monolithic application is much easier, tool
support exists, and such refactoring is a common practice. In
this situation, it is probably a better option to start by
building a monolith.
Some technical constraints can also rule out the use of
Microservices. For one of our applications, which requires
sub-microsecond latency, the network latency introduced by
Microservices is unacceptable. Furthermore, when an
application absolutely requires strong consistency, the
difficulties in ensuring strong consistency across all
microservices can outweigh the benefits obtained from Microservices.
Organizational structure and culture are also important
factors. If your organization requires numerous handoffs
between requirements engineers, developers, testers, and
operations engineers, things will become very difficult to
manage with Microservices. To make Microservices work,
we have removed the handoffs and built autonomous teams.
Each team is responsible for building, deploying, and
operating its service, and is empowered to make decisions in
order to react to user feedback.
A. Taming Interactions among Microservices
By using the strategies described above, we have not
encountered major issues caused by interactions among
services. However, I anticipate more challenges in the future.
The number of services is increasing. As the number
continues to rise, manually analyzing interactions among
services will become increasingly difficult. Research is
required to develop techniques to automatically analyze the
interactions among services, identify harmful interactions,
and generate warnings when such harmful interactions are detected.
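As a hint of what such automated analysis could look like, the sketch below builds a directed call graph among (hypothetical) services and flags cyclic dependencies, one simple class of potentially harmful interaction:

```python
# Sketch of automated interaction analysis: detect cyclic call
# dependencies among services via depth-first search.
# Service names are hypothetical.

def find_cycle(calls: dict[str, list[str]]):
    """Return one cyclic call path if present, else None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        for callee in calls.get(node, []):
            if callee in visiting:  # back edge -> cycle found
                return path[path.index(callee):] + [callee]
            if callee not in visited:
                found = dfs(callee, path + [callee])
                if found:
                    return found
        visiting.discard(node)
        visited.add(node)
        return None

    for service in calls:
        if service not in visited:
            found = dfs(service, [service])
            if found:
                return found
    return None

calls = {"bets": ["odds", "accounts"], "odds": ["pricing"],
         "pricing": ["bets"], "accounts": []}
print(find_cycle(calls))  # -> ['bets', 'odds', 'pricing', 'bets']
```

A production-grade analyzer would derive the call graph from tracing or deployment data and look for richer classes of harmful interaction than cycles, but the same graph-based approach applies.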
When we need to change interactions among a large
number of services, how to effectively test these changes to
gain high confidence before releasing to production also
requires researchers’ attention. This is especially critical for
applications in risk-averse regulated environments, where
bugs in production can have significant repercussions.
B. Refactoring Services Boundaries
According to my experience, as a large monolithic
application evolves over time, not only can the interactions
among modules change but the scope and boundaries of
modules can also change to cater to domain evolution,
requirement changes, and to correct the wrong module
separations caused by a lack of domain understanding when
the modules were initially designed (this is particularly
common when a team is new to a domain).
I anticipate that scope and boundaries changes will
happen in Microservices as well; however, in comparison to
the same situation in monolithic applications, where we have
extensive experience and tool support for refactoring module
boundaries, there is not much reported experience
concerning refactoring service boundaries, and there is no
tool support to help in refactoring service boundaries.
Therefore, research is needed to answer questions such
as: How difficult is it to refactor service boundaries? What
factors impact the need to refactor service boundaries? How
can we minimize the need to refactor service boundaries? We
also need researchers to develop approaches and tools that
support boundary refactoring when it is needed.
The problem is difficult because, compared to module
boundaries in a monolith, service boundaries are more physical;
changing them is therefore more difficult and more architecturally
significant in terms of the cost of such changes [7].
C. Finding and Evaluating Alternative Architectural Styles
As discussed above, Microservices do not suit every
situation. Thus, research is also needed to identify and
evaluate alternative architectural styles for architecting
applications for CD in those situations where Microservices
are not suitable.
This work has several limitations. First, all the
experiences and observations stem from one company;
hence, they represent only one data point. Although I believe
that this data point is interesting and important, it may not be
representative; other organizations may have different experiences.
Second, our description is primarily qualitative. I found
that obtaining accurate quantitative data concerning the
observed benefits is not easy in a real-world industrial setting
for a topic like this.
Third, while describing the observed benefits of
Microservices, I also attempted to explain why the observed
benefits can only be obtained (or are easier to obtain) with
Microservices. However, the explanation is largely
experience-based. More rigorous empirical studies are
needed to confirm the correctness of the explanations.
Fourth, the strategies to address the challenges that arose
have not been validated outside of our company; other
organizations may well have faced different challenges, and
different strategies may work better.
Finally, due to company security and public relations
policies, I cannot disclose a full picture of the
application landscape, the exact architecture of an
application, or an exact list of applications that we moved to
Microservices. However, I do not think this renders this
experience sharing useless.
To the best of our knowledge, not much research has
been conducted on the topic of architecting software
applications for DevOps and CD, especially in industrial
settings. This is consistent with the observations of other
researchers, such as Shahin, et al. [13].
My previous work presents ASRs that CD-amenable
applications should meet [6], but it does not explain how to meet
those ASRs. Bellomo, et al. [14] provided an in-depth
description of deployability. However, the description does
not cover modifiability.
There is literature concerning service-oriented
architecture [15], but these studies do not explicitly examine
the effectiveness of Microservices.
Balalaie, et al. [16] reported their experiences when
migrating to Microservices. Bass, et al. [2] presented a
similar case study. Their works focus on the migration
process, whereas this work focuses on the observed benefits
and challenges of this new architectural style and on how to
address the new challenges that arose. In addition, they did
not explicitly discuss situations in which Microservices may
not be a good choice.
Driven by the need for speed [1], in the context of
DevOps and Continuous Delivery (CD), Paddy Power turned
to a new architectural style called Microservices. Increased
deployability, increased modifiability, and increased
resilience to design erosion have been observed.
At the same time, new challenges associated with the
increased number of services, evolving contracts among
services, technology diversity, and testing have been encountered.
Although we were able to address these challenges using
practical strategies and achieve CD’s significant benefits [3],
research in taming interactions among services, refactoring
service boundaries, and finding alternative architectural
styles for situations where Microservices are not suitable
could dramatically advance this field.
I thank my colleagues, Klaas-Jan Stol, and this paper’s
reviewers for their help and thoughtful comments. This paper
is based on my experience at Paddy Power. It represents only
my own views and does not necessarily reflect those of
Paddy Power.
[1] J. Bosch, "Speed, Data, and Ecosystems: The Future of
Software Engineering," IEEE Software, vol. 33, pp.
82-88, 2016.
[2] L. Bass, I. Weber, and L. Zhu, DevOps: A Software
Architect's Perspective. New York: Addison-Wesley,
2015.
[3] L. Chen, "Continuous Delivery: Huge Benefits, but
Challenges Too," IEEE Software, vol. 32, pp. 50-54,
2015.
[4] L. Chen, "Continuous Delivery: Overcoming adoption
challenges," Journal of Systems and Software, vol.
128, pp. 72-86, 2017.
[5] J. Humble and D. Farley, Continuous Delivery:
Reliable Software Releases through Build, Test, and
Deployment Automation. New York: Addison-Wesley,
2010.
[6] L. Chen, "Towards Architecting for Continuous
Delivery," in Software Architecture (WICSA), 2015
12th Working IEEE/IFIP Conference on, 2015, pp.
[7] L. Chen, M. Ali Babar, and B. Nuseibeh,
"Characterizing Architecturally Significant
Requirements," IEEE Software, vol. 30, pp. 38-45,
2013.
[8] S. Newman, Building Microservices. Sebastopol,
California: O'Reilly Media, 2015.
[9] T. Savor, M. Douglas, M. Gentili, L. Williams, K. Beck,
and M. Stumm, "Continuous deployment at Facebook
and OANDA," presented at the Proceedings of the
38th International Conference on Software
Engineering Companion, Austin, Texas, 2016.
[10] M. de Jong and A. van Deursen, "Continuous
deployment and schema evolution in SQL databases,"
presented at the Proceedings of the Third International
Workshop on Release Engineering, Florence, Italy,
2015.
[11] L. Chen and M. A. Babar, "Towards an Evidence-
Based Understanding of Emergence of Architecture
through Continuous Refactoring in Agile Software
Development," in Software Architecture (WICSA),
2014 IEEE/IFIP Conference on, 2014, pp. 195-204.
[12] J. Postel, "Transmission Control Protocol," 1980.
[13] M. Shahin, M. Ali Babar, and L. Zhu, "The
Intersection of Continuous Deployment and
Architecting Process: Practitioners’ Perspectives,"
presented at the ACM/IEEE International Symposium
on Empirical Software Engineering and Measurement
(ESEM), Ciudad Real, Spain, 2016.
[14] S. Bellomo, N. Ernst, R. Nord, and R. Kazman,
"Toward Design Decisions to Enable Deployability:
Empirical Study of Three Projects Reaching for the
Continuous Delivery Holy Grail," in Dependable
Systems and Networks (DSN), 2014 44th Annual
IEEE/IFIP International Conference on, 2014, pp. 702-
[15] M. Razavian and P. Lago, "A systematic literature
review on SOA migration," Journal of Software:
Evolution and Process, vol. 27, pp. 337-372, 2015.
[16] A. Balalaie, A. Heydarnoori, and P. Jamshidi,
"Microservices Architecture Enables DevOps:
Migration to a Cloud-Native Architecture," IEEE
Software, vol. 33, pp. 42-52, 2016.
Lianping Chen is an independent researcher and consultant, currently working as Chief
Continuous Delivery Expert of Products and Solutions (P&S) at Huawei Technologies,
where he is responsible for implementing Continuous Delivery across all business units
within P&S, which has tens of thousands of R&D staff.
His research interests include DevOps, Continuous Delivery, Microservices, and
software product lines. He has published 15+ peer-reviewed papers in leading journals
and conferences on software development. These papers have received about 1,000
citations.