Service Cutter:
A Systematic Approach to Service Decomposition
Michael Gysel1, Lukas Kölbener1, Wolfgang Giersche2, Olaf Zimmermann1
1 University of Applied Sciences of Eastern Switzerland (HSR FHO),
Oberseestrasse 10, 8640 Rapperswil, Switzerland
michael.gysel@lifetime.hsr.ch, lukas.koelbener@lifetime.hsr.ch, ozimmerm@hsr.ch
2 Zühlke Engineering AG,
Wiesenstrasse 10a, 8952 Schlieren, Switzerland
wolfgang.giersche@zuehlke.com
Abstract. Decomposing a software system into smaller parts always has been a
challenge in software engineering. It is particularly important to split distributed
systems into loosely coupled and highly cohesive units. Service-oriented
architectures and their microservices deployments tackle many related problems, but
remain vague on how to cut a system into discrete, autonomous, network-accessible
services. In this paper, we propose a structured, repeatable approach to service
decomposition based on 16 coupling criteria distilled from the literature and industry
experience. These coupling criteria form the base of Service Cutter, our method and
tool framework for service decomposition. In the Service Cutter approach, coupling
information is extracted from software engineering artifacts such as domain models
and use cases and represented as an undirected, weighted graph to find and score
densely connected clusters. The resulting candidate service cuts promise to reduce
coupling between and promote high cohesion within services. In our validation
activities, which included prototyping, action research and case studies, we
successfully decomposed two sample applications with acceptable performance;
most (but not all) test scenarios resulted in appropriate service cuts. These results as
well as early feedback from members of the target audience in industry and
academia suggest that our coupling criteria catalog and tool-supported service
decomposition approach have the potential to assist a service architect’s design
decisions in a viable and practical manner.
Keywords: functional partitioning, loose coupling, knowledge management,
microservices, service interface design guidelines, service granularity, service quality
1 Introduction
In 1972, D. L. Parnas reflected “On the Criteria to Be Used in Decomposing Systems
into Modules” [11]. Since then, functional decomposition has remained an important
topic in software engineering. As software systems grew and became more complex,
software engineers started to distribute modules and procedures over networks, e.g.,
as remote objects, components or Web services [1]. Architectural styles such as
Service-Oriented Architecture (SOA) aim at tackling the many design challenges of
such distributed systems; however, designing service interface boundaries at the right
level of granularity remained an important challenge for SOA practitioners [3,17].
While partial solutions have been found, two of the related Research Problems (RP)
remained open: (RP1) The architecturally significant requirements and stakeholder
concerns to be addressed during service (de-)composition are still not fully
understood and have not been documented consistently and comprehensively yet.
(RP2) A requirements-driven, repeatable, and scalable service decomposition
method, to be supported and partially automated by service design tools, has been
missing until now.
In this paper, we collect architecturally significant requirements for service
decomposition and introduce Service Cutter, our knowledge management method and
supporting tool framework that assist software architects when they make service
design decisions (note that we do not intend to fully automate this decision making
process, but rather support it). The remainder of the paper presents our solutions to
RP1 and RP2 as well as their validation in the following way: Section 2 scopes the
context of our work and the research problems solved, and defines our basic service
decomposition terminology. Section 3 presents our first research contribution, a
coupling criteria catalog for service decomposition; Section 4 then defines a novel
service decomposition process and an extensible tool architecture that integrates
existing graph clustering algorithms to derive candidate service cuts from system
specification artifacts. Section 5 presents an implementation of the tool architecture
and our validation, which includes action research, two case studies, and performance
measurements; Section 6 discusses strengths and weaknesses of Service Cutter and
presents initial industry feedback. Section 7 concludes and highlights future work.
2 Context, Problem and Supporting Definitions
The impact of service boundary design is far-reaching. Loosely coupled, but highly
cohesive services are crucial for the maintainability and scalability of software and
allow architects and developers to choose a suitable technology independently for
each particular business problem and context. Nevertheless, the decomposition of a
monolithic application into services still is not fully understood, even with the rise of
microservices [16], a contemporary incarnation of SOA principles and patterns
combined with modern software engineering practices such as continuous,
independent deployment. For instance, a popular introduction to microservices states that
“deciding how to partition a system into a set of services is very much an art.” [15]
Microservices advocates suggest leveraging Domain-Driven Design (DDD) [5] to
obtain service boundaries: For instance, instances of the DDD pattern aggregate
establish composed services that are aligned to consistency constraints, and services
derived from bounded contexts are aligned to domain model boundaries or team
organization structures. Both DDD strategies are suitable approaches to service
identification (assuming that one knows how to find aggregates and bounded contexts
in the requirements). However, our collective industry experience and a literature
review indicate that many more stakeholder concerns have to be taken into account
during service decomposition, in particular architecturally significant requirements
including software quality attributes [2]. We believe that this process can and should
be approached in a more structured way. This leads to our first hypothesis:
The driving forces for service decomposition can be presented to architects in a
comprehensive and comprehensible coupling criteria catalog.
This criteria catalog, which will be introduced in the next section, assembles 16
decomposition criteria commonly used by architects to frame and guide their
architectural decisions. We distilled it in an iterative and incremental way, leveraging
consecutive project retrospectives, interviews, and a coupling criteria workshop.
A systematic collection of design knowledge can serve as the foundation for partial
automation of analysis and design. This observation leads to our second hypothesis:
Based on the coupling criteria catalog, a system’s specification artifacts can be
processed in a structured and partially automated way to suggest service
decompositions that promote loose coupling between and high cohesion within services.
To investigate whether these two hypotheses hold true, we conceptualized and
developed Service Cutter, a tool framework architecture and prototype to analyze
software engineering artifacts, including use cases and domain models, and to suggest
candidate service decompositions.
Service Cutter and its presentation in this paper use the following terminology:
Definitions. The term service can be defined both on a logical and on a physical
level:
1. A service is the technical authority for a specific business capability [3].
2. A service is accessed remotely through some invocation interface and
communication protocol, either synchronously or asynchronously [6].
In order to provide capabilities, a service requires resources. We identified three
types of resources that serve as the building blocks of services in our approach:
1. Data. A service may have ownership over a subset of a system’s data [16]. It then
is the only authority allowed to change this data, notifying other services on such
changes. The data is often, but not always, stored in a database (then called
application state); data exposed at the service interface constitutes its published language [5].
2. Operations. A service can encapsulate business rules and calculation (processing)
logic. Operations are often, but not always, based on the data owned by the service.
3. Artifacts. An artifact is a snapshot of data or operation results transformed into a
specific format. An example is a business report such as monthly sales figures by
geography, which was assembled using operations and data.
To facilitate a systematic approach to service decomposition, we generalize these
resources with the concept of a nanoentity shown in Figure 1:
Figure 1. Data, operations and artifacts generalized into the nanoentity concept.
Service decomposition then can be defined as the process of identifying a set of
services and assigning all nanoentities to one (and only one) of these services. A
coupling criterion represents a particular driving force for service decomposition;
such criteria capture architecturally significant requirements and arguments why two
nanoentities should or should not be owned and exposed by the same service.
Software System Artifacts (SSAs) represent the analysis and design artifacts that contain
information about coupling criteria; scoring priorities weigh the coupling criteria. A
service cut is the output of a single execution of the service decomposition process.
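To make this terminology concrete, the following minimal Java sketch models the
core concepts. All type and member names are illustrative assumptions of ours, not
the prototype's actual API:

    import java.util.List;
    import java.util.Set;

    // A nanoentity generalizes the three resource types: data, operations, artifacts.
    enum NanoentityType { DATA, OPERATION, ARTIFACT }

    record Nanoentity(String name, NanoentityType type) { }

    // A coupling criterion argues why two nanoentities should (positive score)
    // or should not (negative score) be owned by the same service.
    interface CouplingCriterion {
        String id();                              // e.g., "CC-2"
        double score(Nanoentity a, Nanoentity b); // in the range [-10, +10]
    }

    // A service owns a subset of the system's nanoentities; a service cut
    // assigns every nanoentity to exactly one service.
    record Service(String name, Set<Nanoentity> ownedNanoentities) { }

    record ServiceCut(List<Service> services) { }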
3 Coupling Criteria Catalog
We conducted a literature review, reflected on past projects, and met for a workshop
to assemble our collective, precompiled architecture design experience. We
consolidated the results of these knowledge gathering activities in a coupling criteria
catalog in an iterative and incremental manner. Our coupling criteria catalog aims at
serving as a comprehensive, yet not complete collection of architecturally significant
requirements and decision drivers for service decomposition. Note that we strived for
consensus, clarity, and compactness; hence, not all candidate criteria made it into the
catalog. Figure 2 lists the 16 Coupling Criteria (CC) in the final catalog version:
Figure 2. Coupling Criteria (CC) catalog compiling 16 CC in four categories.
We grouped the CC into four categories in the catalog (to improve readability):
1. Cohesiveness: Criteria describing certain common properties of mutually related
nanoentities that justify why these nanoentities should belong to the same service. An
example of a cohesiveness argument is that all nanoentities involved in the realization
of a use case should belong to a single service to simplify use case execution.
2. Compatibility: Criteria indicating divergent characteristics of nanoentities. A
service should not contain nanoentities with incompatible characteristics. Examples of
such characteristics are “high”, “eventually”, and “weak” for the criterion
Consistency Criticality; these data consistency management options are mutually exclusive.
3. Constraints: Criteria specifying high-impact requirements that enforce that certain
groups of nanoentities a) must jointly constitute a dedicated service or b) must be
distributed amongst different services. The fact that a set of nanoentities has to be
modified jointly and atomically, e.g. in the same database transaction, forms a strong
requirement that justifies its representation as a constraint criterion in the catalog.
4. Communication: Criteria exclusively pertaining to the technical cost of remoting,
e.g., mutability. Immutable resources do not require complex synchronization means.
All 16 CC are recorded in a common card layout inspired by pattern languages and
agile practices. Table 1 introduces this Coupling Criterion Card (C3) template:
Table 1. A template for Coupling Criterion Cards (C3).
[Coupling Criteria Identifier and Name]
Description: [A brief summary of the Coupling Criterion (CC) w.r.t. its impact
on/usage of nanoentities.]
System Specification Artifacts (SSAs): [Requirements engineering input and
software architecture concepts/deliverables pertaining to this coupling criterion.]
Literature: [References to books, articles, and/or blog posts.]
Type: Cohesiveness | Compatibility | Constraint | Communication
Characteristics: [Defines a set of possible values for this CC. Only applies to CC
of type Compatibility. E.g., “critical”, “normal”, “low”.]
The usage of such C3s makes the catalog structure recognizable and the catalog
extensible. Table 2 and Table 3 present two examples of filled-out C3 instances:1
Table 2. The “Identity & Lifecycle Commonality” CC.
CC-1 Identity & Lifecycle Commonality
Description: Nanoentities that belong to the same identity and therefore share a
common lifecycle (create, read, update, delete).
System Specification Artifacts (SSAs): Entity-Relationship Models; Domain-Driven
Design Entity pattern instances.
Literature: Entity definition in Domain-Driven Design [5]: “Some objects are not
defined primarily by their attributes. They represent a thread of identity that runs
through time and often across distinct representations.”
Type: Cohesiveness
Table 3. The “Semantic Proximity” CC.
CC-2 Semantic Proximity
Description: Two nanoentities are semantically proximate when they have a
semantic connection given by the business domain. The strongest indicator for
semantic proximity is coherent (joint) access of/to nanoentities within the same
use case.
System Specification Artifacts (SSAs): Coherent access to or updates of
nanoentities in use cases (or user stories); aggregation or association relationships
in an entity-relationship model.
Literature: Single Responsibility Principle by R. Martin [9]: “Gather together the
things that change for the same reasons. Separate those things that change for
different reasons.” C. Richardson on microservice decomposition [15]: “There are
a number of strategies that can help [to partition a system into a set of services].
One approach is to partition services by verb or use case.”
Type: Cohesiveness

1 All 16 coupling criteria cards are published in full length in the Service Cutter
wiki on GitHub, https://github.com/ServiceCutter/ServiceCutter/wiki/Coupling-Criteria
Eliciting CC instances to reflect the non-functional requirements of a specific
software product is a key aspect of analysis and design. Hence, software architects
can leverage the CC catalog to establish a common terminology for their design
discussions as well as architecture documentation. Moreover, our CC catalog can
serve as the basis of a structured, repeatable way to identify, make, and capture
related decisions [18]; it serves as ubiquitous language [5] for service decomposition.
4 Service Decomposition Concepts and Tool Architecture
To allow architects to leverage the CC catalog and receive service decomposition
advice, we created the Service Cutter tool framework. Service Cutter derives
candidate service cuts from user-prioritized coupling criteria (obtained from SSAs) to
achieve loose coupling between services and high cohesion within services. To do so,
additional design concepts are required, which will be introduced in this section.
Decomposition input. The input to Service Cutter is a machine-readable
representation of selected software engineering artifacts that represent intermediate stages of
analysis and design. To represent these artifacts, we introduce System Specification
Artifacts (SSAs). SSAs serve as data sets from which the Service Cutter can extract
the required coupling criteria information. Examples of SSA types are use cases,
DDD entities/aggregates, and Entity-Relationship Models (ERMs); e.g., information
about CC-2 Semantic Proximity comes from these two SSA types. We designed
additional SSA types to supply information that is not contained in existing ones (e.g.,
shared owner groups, predefined services, separated security zones and security
access groups). The Service Cutter wiki provides detailed explanations and a
reference of these nine types of SSAs (called user representations in the
prototype).2
Figure 3 specifies the dependencies of coupling criteria and SSAs. For instance,
information about CC-16, Security Constraint, can be obtained from the SSA
“separated security zones”. Security zones group nanoentities by their diverging
privacy requirements, e.g. sensitive personal information vs. unclassified, public data.
2 https://github.com/ServiceCutter/ServiceCutter/wiki/User-Representations
Figure 3. Dependencies between System Specification Artifacts (SSAs) and CC.
Decomposition process. Figure 4 specifies the service cutting process in BPMN:
Figure 4. Service decomposition process (human vs. automated/tool-supported tasks).
Service Cutter processes the provided SSA instances and extracts nanoentities as
well as coupling criteria instances from them. Prioritized coupling criteria and SSAs
are transformed into an undirected, weighted graph; nodes represent nanoentities, and
the weights of edges indicate how cohesive and/or coupled two nanoentities are.
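A minimal, self-contained Java sketch of this graph construction step (an
illustration under our own naming assumptions, not the prototype's internal code):

    import java.util.HashMap;
    import java.util.Map;

    // Undirected, weighted graph over nanoentity names. Each coupling criterion
    // instance contributes score * priority to the weight of the edge between
    // the two nanoentities it relates (see the priority scoring paragraph below).
    class CouplingGraph {
        // adjacency map: node -> (neighbor -> accumulated edge weight)
        private final Map<String, Map<String, Double>> adjacency = new HashMap<>();

        void addScore(String a, String b, double score, double priority) {
            addHalfEdge(a, b, score * priority);
            addHalfEdge(b, a, score * priority); // undirected: mirror the edge
        }

        private void addHalfEdge(String from, String to, double delta) {
            adjacency.computeIfAbsent(from, k -> new HashMap<>())
                     .merge(to, delta, Double::sum);
        }

        double weight(String a, String b) {
            return adjacency.getOrDefault(a, Map.of()).getOrDefault(b, 0.0);
        }
    }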
Algorithm integration. We then employ clustering algorithms on this graph to find
candidate service cuts. Our concepts and tool architecture are designed to be general
enough to allow the inclusion of multiple algorithms; e.g., a programming interface is
provided which can be implemented for any clustering algorithm that is based on
undirected, weighted graphs. At present, we included Java implementations of two
algorithms, namely Girvan-Newman [10] and the Epidemic Label Propagation (ELP),
originally defined by Raghavan and later refined by Leung et al. [14]. A comparison
of and rationale for the selection of these two different approaches can be found in
[7]. For instance, the two algorithms differ from each other in their
(non-)deterministic behavior; only one of them requires the number of clusters as an
input parameter.
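The programming interface mentioned above could look roughly as follows. This is
a hedged sketch reusing the types from the earlier listings; the actual interface in
the prototype may differ in names and signatures:

    import java.util.Optional;

    // Plug-in point for clustering algorithms that operate on the undirected,
    // weighted coupling graph (names are illustrative assumptions).
    interface ClusteringAlgorithm {
        // numberOfClusters is only consulted by algorithms that need the cluster
        // count as input (such as Girvan-Newman in our setup); algorithms like
        // Epidemic Label Propagation can ignore it.
        ServiceCut cluster(CouplingGraph graph, Optional<Integer> numberOfClusters);

        // See the discussion of (non-)deterministic behavior below.
        boolean isDeterministic();
    }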
Results of a deterministic algorithm like Girvan-Newman can be reproduced by
running the algorithm repeatedly using the same input data. The impact of different
input data, scoring values and priorities can therefore be analyzed as the algorithm
itself does not include a random element. A non-deterministic algorithm like ELP
(Leung) complicates analysis, as changes in the results do not always result from
input changes. Furthermore, results always need to be safely persisted and reloaded
since they cannot be reproduced reliably. An element of randomness is not necessarily
a disadvantage: Running multiple algorithm cycles presents different solutions and
outlines where the difficult architectural decisions reside.
Providing the number of clusters as a parameter to the algorithm has the advantage
of analyzing the service decomposition with any possible number of services. This
feature can be used to better understand the structure and coupling between parts of
the system when running the algorithm with varying input. Requesting a high number
of services, for instance, may indicate how services can be decomposed further; a
small predefined service number allows systems to gradually emerge from a
monolithic architecture to service orientation. However, algorithms requiring the number of
services as input shift the responsibility to answer this critical question back to the
user; as architects are often prejudiced on the number of services their system should
be composed of, this is not always desirable. Letting Service Cutter suggest not only
the content of each service, but also the number of services (as ELP does) challenges
the user to reassess his/her ideas against the suggested candidate service cuts.
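Continuing the hypothetical interface sketch above, exploring candidate cuts at
several service counts amounts to a simple loop (illustrative usage only, assuming
the algorithm and graph were built as sketched earlier):

    // Explore decompositions at varying granularity to see how the system
    // splits apart; each iteration requests a different number of services.
    void exploreGranularity(ClusteringAlgorithm algorithm, CouplingGraph graph) {
        for (int k = 2; k <= 9; k++) {
            ServiceCut cut = algorithm.cluster(graph, java.util.Optional.of(k));
            System.out.println(k + " requested services: " + cut);
        }
    }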
Priority scoring. The analysis and processing of coupling criteria uses a weighted
graph and scorers. The weight on an edge between two nanoentities is the sum of all
scores per CC multiplied by their priorities. Table 4 illustrates the calculation:
Table 4. An exemplary calculation of the weight of an edge.

Coupling Criterion              Priority   Result (score · priority)
CC-2: Semantic Proximity        1          4 · 1 = 4
CC-7: Availability Criticality  5          2.5 · 5 = 12.5
CC-9: Consistency Constraint    3          8 · 3 = 24
Total weight                               4 + 12.5 + 24 = 40.5
The score is a number from -10 to +10. A score of +10 expresses that these two
nanoentities should definitely reside in the same service according to this coupling
criterion. A score of -10 therefore represents the opposite extreme, i.e., that the
nanoentities should be placed into different services.
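Expressed as a formula (our notation, consistent with Table 4), the weight of the
edge between two nanoentities u and v is the priority-weighted sum of the coupling
criteria scores:

    w(u, v) = \sum_{c \in CC} score_c(u, v) \cdot priority_c

For the example in Table 4: w = 4 \cdot 1 + 2.5 \cdot 5 + 8 \cdot 3 = 40.5.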
Figure 5. Weighted edges representing the coupling connect the nanoentities.
The calculation is performed for every link between nodes with coupling
information; Figure 5 shows an example. The calculation depends on the involved
coupling criteria; the scorers map coupling criteria to actual numbers used to
construct the weighted graph. Table 5 maps CCs to the five types of scorers that differ
in their calculation logic:
Table 5. Coupling criteria and the scorers calculating the weight of the edges.

Coupling Criteria: Identity & Lifecycle Commonality; Shared Owner; Latency;
Security Contextuality; Consistency Constraint
Scorer Type: Cohesive Group Scorer. Nanoentities in a cohesive group should
remain together in one service. All relations between nanoentities in a group are
scored +10.

Coupling Criteria: Semantic Proximity
Scorer Type: Semantic Proximity Scorer. The joint access to a pair of nanoentities
is counted and mapped to an even distribution between 0 and 10.

Coupling Criteria: Structural Volatility; Consistency Criticality; Storage
Similarity; Content Volatility; Availability Criticality; Security Criticality
Scorer Type: Characteristics Scorer. To achieve homogeneous services, this scorer
sets a penalty of -1 to -10 on relations with diverging requirements.

Coupling Criteria: Security Constraint
Scorer Type: Separated Group Scorer. Sets a score of -10 to all nanoentities that
belong to a group other than the current one.

Coupling Criteria: Predefined Service Constraint
Scorer Type: Exclusive Group Scorer. Same as Cohesive Group, but also adds a
penalty of -10 to nanoentities not in the group.

Coupling Criteria: Mutability; Network Traffic Suitability
Scorer Type: Not defined and implemented yet.
A detailed description of the scorers in Service Cutter can be found in [7].
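To illustrate the scorer concept, the following simplified Java sketch shows how a
characteristics scorer might derive its penalty. The mapping from diverging
characteristic values to a concrete penalty is our assumption; the exact logic is
described in [7]:

    import java.util.Map;

    // Simplified characteristics scorer: pairs of nanoentities whose characteristic
    // values diverge (e.g., Consistency Criticality "high" vs. "weak") receive a
    // penalty between -1 and -10, nudging them into different services.
    class CharacteristicsScorer {
        private final Map<String, String> characteristicOf; // nanoentity -> value
        private final Map<String, Integer> penaltyOf;       // "valueA|valueB" -> 1..10

        CharacteristicsScorer(Map<String, String> characteristicOf,
                              Map<String, Integer> penaltyOf) {
            this.characteristicOf = characteristicOf;
            this.penaltyOf = penaltyOf;
        }

        double score(String nanoentityA, String nanoentityB) {
            String a = characteristicOf.get(nanoentityA);
            String b = characteristicOf.get(nanoentityB);
            if (a == null || b == null || a.equals(b)) {
                return 0.0; // equal or unknown characteristics: no penalty
            }
            // Diverging characteristics: look up the penalty (one key ordering
            // shown for brevity); default to a mid-range penalty of -5.
            return -penaltyOf.getOrDefault(a + "|" + b, 5);
        }
    }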
5 Evaluation via Prototyping, Case Studies, Action Research
We validated our research results via implementation, case study, and action research.
Service Cutter’s current implementation supports a basic feature set that realizes the
structured approach of splitting a system into discrete, loosely coupled services:
- 14 out of 16 coupling criteria from Section 3 are implemented (see Table 5).
- All nine System Specification Artifacts (SSAs) that represent user input (see
Figure 3 in Section 4) can be imported in the form of custom JSON files; a sketch
of such an input file follows this list.
- Seven criteria priorities, in the prototype casually defined as “T-Shirt sizes”
(IGNORE, XS, S, M, L, XL, XXL), allow users to characterize the context of
a system by valuating the coupling criteria in relation to each other.
- The suggested candidate service cuts and their dependencies are visualized.
- The published language [5] of a service pair (including the data transferred
to and from the invoked service) is exposed via the involved nanoentities.
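For illustration, an abridged SSA import file for use cases might look as follows.
The field names approximate the schema documented in the Service Cutter wiki and
should be treated as an assumption:

    {
      "useCases": [
        {
          "name": "ViewTracking",
          "nanoentitiesRead": ["Cargo.trackingId", "HandlingEvent.type"],
          "nanoentitiesWritten": []
        }
      ]
    }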
Figure 6 features a candidate service cut for the “cargo tracking” domain model
from [5]. This candidate service cut consists of three services A, B and C (larger
squares), each owning a set of (cohesive) nanoentities represented as small squares:
Figure 6. Screenshot of Service Cutter presenting a candidate service cut.
Arrows between two services (e.g., Service A and Service B) indicate a dependency
between them. The resulting published language, which characterizes the amount
of coupling between these services in terms of the shared understanding about the
nanoentities that are exposed at the service boundary, is also shown.
Release 1.1 of the Service Cutter implementation is available on GitHub3. This
prototype consists of two components implemented in Java and JavaScript (using
Spring Boot, Spring MVC, AngularJS, and JHipster), RESTful HTTP Web services
wrapping the scoring logic, and a Web application for input and output visualization.
3 https://github.com/ServiceCutter/ServiceCutter
Validation approach and results. To further validate the implemented concepts, we
assessed the candidate service cuts of the following two case studies:
1. A fictitious “Trading System” for which we forward-engineered the
requirements, drawing on industry experience with financial services software.
2. The DDD sample application “Cargo Tracking” that accompanies the DDD
book [5]; we reverse engineered the requirements for this scenario from the
existing implementation that is available on SourceForge.4
To objectify the validation and have a comparison baseline, we defined expected
service cuts for both systems according to our experience in service design; to reduce
bias, we developed a service design checklist for this task.5 Next, we defined three
result categories in order to rate the candidate service cuts:
A: Excellent service cut. The cut (i.e., suggested service decomposition) does not
follow the way we expected, but we find reasons why the cut makes sense from an
architect’s perspective. It therefore improves our own view of the analysed system.
B: Expected service cut. The cut meets and therefore validates our expectations.
C: Unreasonable service cut. There is a mismatch between the cut and the expected
one, and we do not find any reasons why this cut would be beneficial.
To be able to assess the quality of the output of Service Cutter, we use a four-level
classification: An excellent output contains zero unreasonable service cuts and at least
one excellent service cut (i.e., a cut in category A). A good output contains zero
unreasonable service cuts (C). An acceptable output contains at most one unreasonable
service cut (C). A bad output contains two or more unreasonable service cuts (C).
Table 6 summarizes the decomposition results for both systems. Both algorithms,
Girvan-Newman and ELP (Leung), were able to produce acceptable or good service
cuts (but not in all cases):
Table 6. Assessment of service cuts for analyzed systems (case studies).

Evaluated Application    Girvan-Newman   ELP (Leung)
Trading System           Good output     Good output (note: in some exceptional
                                         cases, Leung produced acceptable and
                                         excellent output)
Cargo Tracking System    Bad output      Acceptable output
Both test systems contain approximately 20 nanoentities. To analyze Service
Cutter’s performance behavior with more complex systems, we conducted additional
performance tests. These tests are derived from the trading system; all nanoentities
and SSAs were replicated and scaled up 60 times to create larger and more complex
domain models and graphs. These load tests measure the runtime for graph creation
and clustering algorithm and leave out data import and visualization. The tests were
conducted on a Windows 10 developer notebook with an Intel i5 2.2GHz CPU and
8GB RAM as documented in detail online.6 Figure 7 shows the test results.
4 https://sourceforge.net/projects/dddsample/
5 https://github.com/ServiceCutter/ServiceCutter/wiki/Decomposition-Questionnaire
6 https://github.com/ServiceCutter/ServiceCutter/wiki/Runtime-Performance-Tests
Figure 7. Performance test results: service cut calculation (scaled up sample application)
The calculation for systems with up to 600 nanoentities is done in less than five
seconds, which we consider reasonable. Around 75% of the time used is consumed by
graph creation whereas the clustering algorithm only uses around 25% of the time.
Hence, our Java code building the graph based on the imported data could be
analyzed and optimized to improve runtime performance even further.
6 Discussion: User Feedback, Pros and Cons, Related Work
User feedback. We presented the Service Cutter concepts and their implementation
to more than 20 members of the target audience (i.e., software engineers and
architects with experience in designing SOAs), and one of the authors of the paper
applied Service Cutter to a single project case (as a form of technical action research).
The systematic overall approach was appreciated and considered to be promising; it
was pointed out that Service Cutter can not only be used in an SOA context, but also
to split modules without remote interfaces (with adjusted CC priorities).
The template-based coupling criteria cards were generally appreciated, but some of
the current texts were assessed to be too terse (by one provider of feedback); a more
elaborate, but not yet verbose wording was requested. The naming of some coupling
criteria in our catalog also was challenged. An example is “CC-13 Network Traffic
Suitability”, which covers the more common and basic concept of throughput (which
in turn is one facet of the top-level quality attribute performance). Furthermore,
system and process assurance audit compliance [8] was suggested to be added as a
compatibility criterion; further research is required to investigate how to integrate
such a composite and complex, possibly even recursive criterion into Service Cutter.
Finally, our selection of two clustering algorithms was questioned, and it was
suggested to only integrate deterministic algorithms that do not require the number of
clusters as a parameter. This critique pertains to the current tool implementation only;
the Service Cutter concepts from Sections 3 and 4 do not rely on any particular
algorithm. Due to the generality of our concepts and the modular, extensible
architecture of their implementation, we expect the effort to integrate other algorithms into
the Service Cutter framework to be in the range of a few person days per algorithm.
[Figure 7 plot: calculation time in seconds (y-axis, 0–40) versus number of
nanoentities (x-axis, 0–1,400).]
According to the feedback of our industry project partner, who leads an architect
and developer community in professional services, Service Cutter and its underlying
reasoning represent a sound framework to prepare and back architectural decisions.
More specifically, it allows architects to study the impact of weight variations on the
resulting candidate service cuts. Questions like “what if security wasn’t an issue
here?” can be answered easily by changing the respective scoring priority of the criterion
“security criticality”. When used with care, Service Cutter can improve the credibility
of architects involved in critical architecture assessments (evaluations) significantly.
The SSAs and coupling criteria can also be used to educate junior architects or
students on the driving forces of service decomposition.
Benefits. From our internal and external validation activities, we can conclude that
Service Cutter offers a number of advantages to service architects: The coupling
criteria catalog indeed collects relevant architecturally significant requirements and
decision drivers for service decomposition, and it does so in an accessible, reusable,
and extensible way. It therefore contributes to the body of reusable architectural
decision knowledge as envisioned in our previous work [18].
Service Cutter suggests candidate service cuts that are obtained from commonly
used analysis and design artifacts, such as use cases and domain models, via a
nanoentity abstraction and the coupling criteria. By expecting several such analysis
and design artifacts, Service Cutter challenges its users (i.e., service architects) to
reflect on which stakeholder input and non-functional quality characteristics are
relevant for their system (and architecture design process). Hence, service architects
might use these artifacts as a checklist and stimulus for requirements engineering.
The candidate service cuts verify and/or challenge the architect’s expectations
regarding the number of services and their interface definitions. Both green field
scenarios and iterative approaches for migrating a monolith to services are supported.7
Drawbacks and liabilities. The benefits that we could observe during our evaluation
activities come at a price; usage of Service Cutter concepts and their implementation
during these activities has unveiled some (expected) drawbacks and liabilities.
Significant effort is required to enter SSAs (such as use cases and domain models)
in JSON; in future versions, we plan to import them, e.g., from UML modeling tools.
We are aware of the risk of a “pseudo accuracy” effect. It is subject to debate
whether service design work, dealing with rather diverse requirements (some of
which are hard to quantify), can really be delegated to algorithms that look for an
aggregated optimal solution. Architects traditionally apply their tacit knowledge and
“gut feel” when making the related decisions; they are biased. This discussion can be
seen as the SOA variant of the more general discussion on “a rational design process:
how and why to fake it” [12]. However, we believe our approach to be valuable even
when being confronted with a healthy amount of skepticism: relevant design
questions are asked and related criteria listed, and the relation between these concerns
and the user input in SSAs is unveiled. Furthermore, a checklist effect occurs;
discussions among collaborating architects are stimulated.
7 Explained on GitHub: https://github.com/ServiceCutter/ServiceCutter/wiki/Usage-Scenarios
Other drawbacks and liabilities concern framework architecture design and
extensibility. First and foremost, the clustering algorithms that are currently integrated
possibly should be complemented with additional ones due to the only partially
satisfying evaluation results. Algorithmic complexity is a major source of performance
limitations and therefore has to be taken into account in any such future algorithm
selection decisions; fortunately, clustering algorithms with linear complexity exist.
As the Service Cutter framework continues to evolve, additional validation and
evaluation work will be required. For instance, it has to be verified that the
tool performance does not degrade significantly when processing even larger amounts
of user input that go beyond scaled up sample data and case studies (e.g., complex
domain models from enterprise information systems).
Related work. Quality attribute-driven design has been an important research topic in
the software architecture community for many years [2, 11]; the specific requirements
and constraints of service-oriented architectures and microservices have also been
investigated and related methods proposed [4,13,17]. Such methods are
complementary to the approach presented in this paper, providing an overall frame for the
use of Service Cutter, as well as input for coupling criteria, SSAs, and priority scores.
Other research areas in service-oriented computing include service discovery and
runtime topology lookup (e.g., in clouds), dynamic service matchmaking, service
composition into business processes and workflows, quality-of-service awareness,
policies, and agreement, as well as service management. These efforts have different
goals than Service Cutter, which aims at assisting architects in making design decisions;
however, well-crafted service cuts can be seen as a prerequisite for the successful
application of any advanced service-oriented computing concepts and technologies. In
our future work, we therefore consider including additional criteria and SSAs that
represent the concepts from these research efforts as they mature.
7 Summary and Outlook
In this paper, we presented Service Cutter, a systematic approach to system
decomposition, which has been a relevant problem since the very origins of program
modularization and software engineering. Service Cutter advances the state of the art
a) with the concept of coupling criteria cards, b) 16 instances of such cards (harvested
from practical experience and the literature), and c) an extensible service
decomposition tool framework architecture that integrates graph clustering algorithms
and features priority scoring starting from nanoentities and nine types of analysis and
design specifications (including domain models and use cases). This structured and
extensible combination of a criteria-driven method with supporting architectural
knowledge and a design optimization and visualization tool paves the way towards
the desired engineering approach to service interface and service granularity design.
We evaluated Service Cutter via implementation (integrating two existing graph
clustering algorithms), a combination of action research and case study investigations,
and load tests. The validation results and additional user feedback indicate that the
proposed semi-automated approach to service decomposition works as designed and
has the potential to benefit practitioners significantly. While the suggested service
cuts did not always meet all early adopters’ expectations, artifact input and coupling
criteria were regarded as adequate; the proposed decomposition process was appreciated.
While our early experiences with the presented structured, partially automated (i.e.,
tool supported) approach are promising, work remains to be done both on the
conceptual (research) level, as well as on the implementation (engineering) level. For
instance, further enhancements of Service Cutter may include seamless integrations of
the analysis and design tool chain members so that SSAs can be extracted from other
tools automatically. We discussed other directions for future work in Section 6;
related development issues are tracked in the open source release of Service Cutter.
References
1. Alonso, G., Casati, F., Kuno, H.A., Machiraju, V., Web Services: Concepts,
Architectures and Applications. Data-Centric Systems and Applications, Springer 2004.
2. Cervantes, H., Velasco, P., Kazman, R., A Principled Way of Using Frameworks in
Architectural Design, IEEE Software Vol 30 Issue 2, pp 46-53, March - April 2013.
3. Dahan, U., The Known Unknowns of SOA, Blog Post, November 2010,
http://udidahan.com/2010/11/15/the-known-unknowns-of-soa/
4. Erradi A., Anand S., Kulkarni N., SOAF: An Architectural Framework for Service
Definition and Realization. Proceedings of SCC’06, IEEE Computer Society.
5. Evans, E., Domain-Driven Design: Tackling Complexity in the Heart of Software. Pearson
Education, 2003.
6. Fowler, M., Inversion of Control Containers and the Dependency Injection Pattern, Online
Article, January 2014, http://www.martinfowler.com/articles/injection.html
7. Gysel, M., Kölbener, L., Service Cutter: A Structured Way to Service Decomposition.
Bachelor thesis, HSR Hochschule für Technik Rapperswil, 2015, https://eprints.hsr.ch/476/
8. Julisch, K., Suter, C., Woitalla, T., Zimmermann, O., Compliance by Design: Bridging
the Chasm between Auditors and IT Architects. Computers & Security, Elsevier. Volume
30, Issue 6-7, Sep.-Oct. 2011.
9. Martin, R.C., Agile Software Development: Principles, Patterns, and Practices. Prentice
Hall PTR, 2003.
10. Newman M.E., Girvan, M., Finding and evaluating community structure in networks. In:
Phys. Rev. E 69 (2004). arXiv: cond-mat/0308217
11. Parnas, D. L., On the Criteria to Be Used in Decomposing Systems into Modules.
Commun. ACM 15(12): 1053-1058 (1972)
12. Parnas, D. L., Clements, P.C., A Rational Design Process: How and Why to Fake it. IEEE
Trans. Software Eng. 12(2): 251-257 (1986)
13. Papazoglou M., van den Heuvel W. J., Service-Oriented Design and Development
Methodology, International Journal of Web Engineering and Technology (IJWET)
Volume 2 No 4. Inderscience Enterprises, 2006.
14. Raghavan, U.N., Albert, R., Kumara, S., Near linear time algorithm to detect community
structures in large-scale networks. In: Phys. Rev. E 76 (2007). arXiv: 0709.2938
15. Richardson, C., Microservices: Decomposing Applications for Deployability and
Scalability, InfoQ article, May 2014, http://www.infoq.com/articles/microservices-intro
16. Zimmermann, O., Microservices Tenets: Agile Approach to Service Development and
Deployment. Overview and Vision Paper, SummerSoC 2016. Journal of Computer
Science Research and Development (CSRD), Springer (to appear).
17. Zimmermann, O., Krogdahl P., Gee C., Elements of Service-Oriented Analysis and
Design, IBM developerWorks, July 2004.
18. Zimmermann, O., Wegmann, L., Koziolek, H., Goldschmidt, T.: Architectural decision
guidance across projects. In: Proceedings of the 12th Working IEEE/IFIP Conference on
Software Architecture (WICSA), 85-92 (2015). IEEE Computer Society.
NOTICE: This is the author’s version of a work that was
accepted for publication in Springer LNCS. A definitive
version was subsequently published in Marco Aiello, Einar
Broch Johnsen, Schahram Dustdar, Ilche Georgievski
(Editors), Service-Oriented and Cloud Computing, 5th IFIP
WG 2.14 European Conference, ESOCC 2016, Vienna, Austria,
September 5-7, 2016, Proceedings, ISBN: 978-3-319-44481-9
(Print) 978-3-319-44482-6 (Online),
http://link.springer.com/book/10.1007/978-3-319-44482-6
... Therefore, big enterprises such as IBM [6], Amazon [7], and GitHub [8] frequently advocate for partitioning applications by identifying functional boundaries in the code that may be extracted as microservices. This has led to a rapid growth of research in using program analysis to automatically discover these functional boundaries and partitions within the application [9][10][11][12][13][14][15][16][17]. Existing work and limitations. ...
... Therefore, the best strategy to migrate monolithic applications to microservices is to do so incrementally by identifying boundaries in the application such that the functionalities encompassed by them are highly cohesive yet as loosely coupled as possible to other functionalities in the code. The study of identifying these boundaries has seen a lot of interest lately with prominent industry tools such as Mono2Micro 5 and several others in academia [6,[9][10][11][12][13][14][15][16][17]. However, when confronted with enterprise applications, each of these approaches faces some difficulties. ...
... Since Daytrader is the only dataset that uses external databases, we run all partitioning algorithms only on Daytrader. We run each algorithm with = [3,5,7,9,11,13] and 5 random seeds, and measure the average transactional purity across all seeds and values of . Discussion: The transactional purity is shown in Fig. 6. ...
Preprint
Microservices Architecture (MSA) has become a de-facto standard for designing cloud-native enterprise applications due to its efficient infrastructure setup, service availability, elastic scalability, dependability, and better security. Existing (monolithic) systems must be decomposed into microservices to harness these characteristics. Since manual decomposition of large scale applications can be laborious and error-prone, AI-based systems to detect decomposition strategies are gaining popularity. However, the usefulness of these approaches is limited by the expressiveness of the program representation and their inability to model the application's dependency on critical external resources such as databases. Consequently, partitioning recommendations offered by current tools result in architectures that result in (a) distributed monoliths, and/or (b) force the use of (often criticized) distributed transactions. This work attempts to overcome these challenges by introducing CARGO({short for [C]ontext-sensitive l[A]bel p[R]opa[G]ati[O]n})-a novel un-/semi-supervised partition refinement technique that uses a context- and flow-sensitive system dependency graph of the monolithic application to refine and thereby enrich the partitioning quality of the current state-of-the-art algorithms. CARGO was used to augment four state-of-the-art microservice partitioning techniques that were applied on five Java EE applications (including one industrial scale proprietary project). Experiments demostrate that CARGO can improve the partition quality of all modern microservice partitioning techniques. Further, CARGO substantially reduces distributed transactions and a real-world performance evaluation of a benchmark application (deployed under varying loads) shows that CARGO also lowers the overall the latency of the deployed microservice application by 11% and increases throughput by 120% on average.
... The above listed baseline approaches do not perform their analysis on the Cargo Tracking System. So, for this application, we compare our approach with other four well-known baselines for microservice identification: Service Cutter [39], API Interface Analysis [12], DFD Analysis [9] and Business Processes Analysis [16]. ...
... For quantitative evaluation of the Cargo Tracking System, we couldn't find any research paper where above listed metrics are utilized. So, we make use of another four object-oriented design metrics namely i) Number of Incoming Dependencies, ii) Number of Outgoing Dependencies, iii) Instability, and iv) Relational Cohesion 12 as used in other baseline techniques [9], [12], [16], [39]. In general, all the metrics are based on coupling, cohesion and number of interactions between microservices. ...
Article
Full-text available
Microservices architecture is a new paradigm for developing a software system as a collection of independent services that communicate via lightweight protocols. In greenfield development, identifying the microservices is not a trivial task, as there is no legacy code lying around and no old development to start with. Thus, identification of microservices from requirements becomes an important decision during the analysis and design phase. Use cases play a vital role in the requirements analysis modeling phases in a model-driven software engineering process. Use cases capture the high-level user functions and the scope of system. In this paper, we propose GreenMicro, an automatic microservice identification technique that utilizes the use cases model and the database entities. Both features are the artifacts of analysis and design phase that depict complete functionality of an overall system. In essence, a collection of related use cases indicates a bounded context of the system that can be grouped in a suitable way as microservices. Therefore, our approach GreenMicro clusters close-knit use cases to recover meaningful microservices. We investigate and validate our approach on an in-house proprietary web application and three sample benchmark applications. We have mapped our approach to the state-of-the-art software quality assessment attributes and have presented the results. Preliminary results are motivating and the proposed methodology works as anticipated in identifying functionally cohesive and loosely coupled microservice candidate recommendations. Our approach enables the system architects to identify microservice candidates at an early analysis and design phase of development.
... Approaches to the identification of microservices ( [8], [9]) mainly dealt with the issues of clustering the IS code according to the criteria of strong and weak connectivity, while simultaneously clarifying the principles that the identified microservices should comply with. The resulting technologies give unstable good results. ...
... This is in line with the CCP principle. In [8], 16 criteria for the identification of microservices were proposed, concerning, in addition to CCP and SRP, also the issues of determining the development boundaries. Based on a catalog of such criteria, the required system specification artifacts are identified, which can be processed in a structured, semi-automated manner to propose service decomposition that promotes weak communication between services and a high degree of consistency within them. ...
Article
Full-text available
The paper deals with the formation and transformation of stakeholder requirements for the information system throughout the entire life cycle. It is shown how the seamless architecture provides traceability of requirements from the level of the business process, to the functional and logical architectures of systems, to the selection of criteria and identification of microservices. It is shown how maintaining the traceability of requirements in the presence of business, functional and logical architecture models can reduce the cost of planning complex functional and load testing of systems, as well as ensure the interaction of operation, maintenance services and contractors that form the entire system, maintain its integrity during the life cycle.
... 5.4 Deuxième expérimentation : Application de suivi de cargaisons Cargo 5.4.1 Description de l'application de suivi de cargaisons Cette section est dédiée à la présentation de la comparaison des performances de notre approche à celles de certaines approches qui traitent de la même problématique et présentées dans(9),(48), et(60). Ces différentes approches utilisent toutes une étude de cas du suivi des cargaisons de Gysel et al.(48).Pour les besoins de la deuxième expérience, nous avons conçu un modèle de processus métier basé sur le BPMN pour le suivi des cargaison. ...
... 5.4 Deuxième expérimentation : Application de suivi de cargaisons Cargo 5.4.1 Description de l'application de suivi de cargaisons Cette section est dédiée à la présentation de la comparaison des performances de notre approche à celles de certaines approches qui traitent de la même problématique et présentées dans(9),(48), et(60). Ces différentes approches utilisent toutes une étude de cas du suivi des cargaisons de Gysel et al.(48).Pour les besoins de la deuxième expérience, nous avons conçu un modèle de processus métier basé sur le BPMN pour le suivi des cargaison. Ce processus métier est représenté dans le formalisme BPMN comme illustré par la figure 5.7.Le processus métier Cargo démarre lorsque la compagnie maritime expédie les conteneurs d'un client (a′ 1 ) par voie terrestre et maritime. ...
Thesis
Les microservices sont apparus comme une solution alternative à de nombreuses technologies existantes, permettant de décomposer les applications monolithiques en ``petits'' composants/modules de granularité fine, hautement cohésifs et faiblement couplés. Cependant, l'identification des microservices reste un défi pouvant remettre en cause le succès de ce type de migration. Cette thèse propose une approche pour l'identification automatique des microservices à partir d'un ensemble de processus métier (BP). L'approche combine différents modèles indépendants représentant respectivement les dépendances de contrôle, les dépendances de données et les dépendances sémantiques d'un BP. L'approche se base sur un clustering collaboratif afin de regrouper les activités en microservices. Pour illustrer la démarche et démontrer sa faisabilité et ses performances, nous avons adopté deux études de cas, la location de vélos et le suivi de cargaison. En termes de précision, les résultats expérimentaux montrent que les différents types de dépendances entre activités extraites de spécification de BPs comme paramètres d'entrée permettent de générer des microservices de meilleure qualité par rapport aux autres approches proposées dans l'état de l'art.
... Overall, concerning application decomposition strategies, the result of our study partially confirms the finding of several existing studies (e.g., [9,29,102,104] ) and the personal experience of the microservices practitioners (e.g., [130][131][132] ). We also found several application decomposition strategies for microservices (e.g., decomposed by verbs or use cases [133] , data flow-driven approach [134] , interface analysis [135] , service cutter [136] ) in the literature. However, none of the survey and interview participants mentioned these or other strategies for decomposing applications into microservices in their responses. ...
Thesis
Full-text available
This thesis explored software architecture design of microservices systems in the context of DevOps and make the following novel aspects: (1) This thesis proposes a set of taxonomies of the research themes, problems, solutions, description methods, design patterns, quality attributes as well as the challenges of microservices architecture in DevOps that contributes to the software engineering body of knowledge by conducting state of the art (i.e., systematic mapping) and practice (i.e., mixed-method) studies on architecture design of microservices systems. These studies mainly identify, analyze, and classify the challenges and the solutions for microservices architecture in DevOps, design of microservices systems, as well as monitoring and testing of microservices systems. The findings of these studies can help practitioners to improve the design of microservices systems. (2) This thesis proposes a taxonomy of issues occurring in microservices systems by analyzing 1,345 issue discussions extracted from five open source microservices systems. The proposed taxonomy of issues consisting of 17 categories, 46 subcategories, and 138 types. This thesis also identified a comprehensive list of causes and mapped them to the identified issues. The identified causes consist of 7 categories, 26 subcategories, and 109 types. The proposed taxonomy and identified causes can help practitioners to avoid and address various types of issues in the architecture design of microservices systems. (3) This thesis proposes a set of decision models for selecting patterns and strategies in four MSA design areas: application decomposition into microservices, microservices security, microservices communication, and service discovery. The proposed decision models are based on the knowledge gained from the systematic mapping study, mixed-method study, exploratory study, and grey literature. The correctness and usefulness of the proposed decision models have been evaluated through semi-structured interviews with microservices practitioners. The proposed decision models can assist practitioners in selecting appropriate patterns and strategies for addressing the challenges related to the architecture design of microservices systems.
Article
Context Re-architecting monolithic systems with microservice architecture is a common trend. However, determining the "optimal" size of individual services during microservice extraction has been a challenge in software engineering. Common limitations of the literature include not being reasonable enough to be put into practical application; relying too much on human experience; neglection of the impact of hardware environment on the performance. Objective To address these problems, this paper proposes a novel method based on knowledge-graph to support the extraction of microservices during the initial phases of re-architecting existing applications. Method According to the microservice extraction method based on the AKF principle which is a widely practiced microservice design principle in the industry, four kinds of entities and four types of entity-entity relationships are designed and automatically extracted from specification and design artifacts of the monolithic application to build the knowledge graph. A constrained Louvain algorithm is proposed to identify microservice candidates. Results Our approach is tested based on two open-source projects with the other three typical methods: the domain-driven design-based method, the similarity calculation-based method, and the graph clustering-based method . Conducted experiments show that our method performs well concerning all the evaluation metrics.
Preprint
Full-text available
One of the most challenging problems in the migration of a monolith to a microservices architecture is the identification of the microservices' boundaries. Several approaches have recently been proposed for the automatic identification of microservices which, even though they follow the same basic steps, diverge in how data about the monolith system is collected and analysed. In this paper, we compare the decompositions of two monolith systems into sets of candidate microservices when static and dynamic analysis data collection techniques are used. The decompositions are generated using a combination of similarity measures and are evaluated according to a complexity metric to answer the following research question: which type of monolith data collection, static or dynamic analysis, generates better decompositions? As a result of the analysis, we conclude that neither technique, static nor dynamic, outperforms the other, but the dynamic collection of data requires more effort.
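The "combination of similarity measures" mentioned above can be approximated with a small clustering sketch. The access matrix, entity names, and the Jaccard/average-linkage choices below are assumptions made for illustration only; the paper's concrete similarity measures and complexity metric differ.

```python
# Hedged sketch: clustering monolith domain entities by how similarly
# they are accessed by functionalities (rows: entities, columns: 0/1
# accesses). Data and distance/linkage choices are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

entities = ["User", "Session", "Post", "Comment", "Tag"]
access = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1],
])

# Entities touched by the same functionalities end up close together.
dist = pdist(access, metric="jaccard")
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
for entity, label in zip(entities, labels):
    print(f"{entity} -> candidate service {label}")
```

Whether the 0/1 columns come from static call-graph analysis or from dynamically traced executions is exactly the variable the paper compares; the clustering machinery itself stays the same.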
Thesis
DevOps and microservices are regarded as important paradigms for coping with rapidly changing software requirements and delivering software quickly and reliably. Industry reports show that DevOps and microservices have been adopted together by several large IT and Internet companies since their emergence. DevOps breaks down departmental silos by building empathy and cross-functional collaboration between development and operations teams, while microservices decompose a monolithic application into small services so that value can be delivered independently and rapidly. Their adoption and implementation in software organizations profoundly affect software teams, the basic unit of software development. When applying and implementing DevOps and microservices, some enterprises have explored and practiced small teams in software development to address the communication problems arising from the socio-technical nature of software engineering. However, small teams may adversely affect organizational structure and technology, so the conditions under which small teams are suitable in DevOps and microservices settings urgently need to be studied. By applying qualitative and quantitative methodologies, the ABC framework of software engineering research, and empirical software engineering data collection techniques, this thesis reviews research on software teams published over the past five years in three top empirical software engineering journals and conferences and constructs an Iterative Mixed-Method Model for studying Software Teams (IMMMST). Using IMMMST, the thesis studies DevOps- and microservices-oriented software teams in practice in three stages. In the first stage, a case study using document analysis, group interviews, and questionnaires collects the working forms and workflows of "micro squads", a kind of small team, as well as squad members' views on them. This stage investigates the small-team practice from four angles (the types of micro squads in practice, their key activities, their benefits, and their remaining problems) and proposes continuous improvement suggestions in three areas: architectural decoupling, self-organization, and practice guidance. After considering the context of micro squads, the thesis proposes a Decision-Making Framework for Small Teams (DMFST) with three levels (organization, department, and technology) and five factors (product importance, product security sensitivity, department size, development process, and architecture type). In the second stage, an ethnographic study uses participant observation and interviews in three companies to collect how software teams apply and implement DevOps and microservices. This stage distills the current state of DevOps and microservices practice in software teams' daily work and identifies four main problems: incomplete implementation of DevOps across software teams, misuse of microservices within software teams, rigid organizational structures, and the weak connection between DevOps and microservices in daily work. Taking DevSecOps as an example, the thesis also discusses the debates around DevOps-related concepts, and it points out two challenges of practice innovation in industry. Finally, based on the observations of the software teams in the three companies, the DMFST is refined into an Improved Decision-Making Framework for Small Teams (ImDMFST) with three levels and six factors. The ImDMFST is evaluated through an expert survey based on the Analytic Hierarchy Process (AHP); the organization level accounts for nearly 50% of the weight in the ImDMFST, and experts consider organizational characteristics and architecture type to be the two most important factors in a software organization's decision on whether to use small teams in software development.
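The AHP-based expert evaluation mentioned at the end of this abstract can be illustrated with a short sketch. The pairwise comparison matrix below is invented (the thesis's actual expert judgments are not published here); the weights are derived as the normalized principal eigenvector, the standard AHP procedure, and happen to give the organization level roughly half the total weight.

```python
# Minimal AHP sketch. The 3x3 pairwise comparison matrix is hypothetical;
# it compares the three decision levels of the ImDMFST (organization,
# department, technology). Weights are the normalized principal
# eigenvector of the reciprocal comparison matrix.
import numpy as np

A = np.array([
    [1.0, 2.0, 3.0],   # organization vs. department, technology
    [1/2, 1.0, 2.0],   # department
    [1/3, 1/2, 1.0],   # technology
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()  # normalizing also fixes the sign

for level, w in zip(["organization", "department", "technology"], weights):
    print(f"{level}: {w:.3f}")   # organization comes out near 0.5
```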
Article
Nowadays, microservice architecture has become a dominant software development and deployment paradigm. Decomposing a system into loosely coupled, highly cohesive, and fine-grained microservices while meeting various technical constraints and implementing business capabilities is particularly important for microservice system (MS) designers. When an MS has a large number of functionalities and complex interconnections, it is a big challenge to identify microservices solely based on the experience of MS designers. To address this challenge, we propose a structured and automated microservice identification method that decomposes a system into appropriate microservices. We model a system as unified modeling language (UML) class and sequence diagrams. In the identification phase, we take into account not only the traditional coupling-related criteria but also quality expectations and deployment constraints, neither of which has been fully considered in previous studies. Based on these criteria, a microservice identification algorithm using a clustering technique is designed. A case study of elderly care services illustrates the identification process. Experiments are conducted to evaluate and compare the proposed method against state-of-the-art methods. Results indicate that the proposed method significantly outperforms the compared methods from the literature.
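One way to picture how a deployment constraint can shape such an identification step is a "cannot-link" check applied to a clustering result, sketched below. The class names, clusters, and the constraint are invented for illustration, and the paper's actual algorithm is considerably richer.

```python
# Toy sketch: enforcing a deployment "cannot-link" constraint by splitting
# any candidate service that contains both forbidden classes. Names and
# clusters are hypothetical; the paper's identification algorithm is richer.
cannot_link = {("PaymentGateway", "AuditLog")}  # must not share a service

clusters = [
    {"Order", "OrderItem", "PaymentGateway", "AuditLog"},
    {"Customer", "Address"},
]

def split_on_violations(clusters, cannot_link):
    result = []
    for cluster in clusters:
        for a, b in cannot_link:
            if a in cluster and b in cluster:
                cluster = cluster - {b}   # evict one side...
                result.append({b})        # ...into its own candidate
        result.append(cluster)
    return result

print(split_on_violations(clusters, cannot_link))
```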
Article
Full-text available
Some microservices proponents claim that microservices form a new architectural style; in contrast, advocates of service-oriented architecture (SOA) argue that microservices are merely an implementation approach to SOA. This overview and vision paper first reviews popular introductions to microservices to identify microservices tenets. It then compares two microservices definitions and contrasts them with SOA principles and patterns. This analysis confirms that microservices indeed can be seen as a development- and deployment-level variant of SOA; such microservices implementations have the potential to overcome the deficiencies of earlier approaches to SOA realizations by employing modern software engineering paradigms and Web technologies such as domain-driven design, RESTful HTTP, IDEAL cloud application architectures, polyglot persistence, lightweight containers, a continuous DevOps approach to service delivery, and comprehensive but lean fault management. However, these paradigms and technologies also cause a number of additional design choices to be made and create new options for many “distribution classics” types of architectural decisions. As a result, the cognitive load for (micro-)services architects increases, as do the design, testing, and maintenance efforts required to benefit from an adoption of microservices. To initiate and frame the buildup of architectural knowledge supporting microservices projects, this paper compiles related practitioner questions; it also derives research topics from these questions. The paper concludes with a summarizing position statement: microservices constitute one particular implementation approach to SOA (service development and deployment).
Article
Full-text available
In the past decade, researchers have devised many methods to support and codify architecture design. However, what hampers such methods' adoption is that these methods employ abstract concepts such as views, tactics, and patterns, whereas practicing software architects choose technical design primitives from the services offered in commercial frameworks. A holistic and more realistic approach to architecture design addresses this disconnect. This approach uses and systematically links both top-down concepts, such as patterns and tactics, and implementation artifacts, such as frameworks, which are bottom-up concepts. The Web extra at http://youtu.be/kygFOV8TqEw is a video in which Humberto Cervantes from Autonomous Metropolitan University interviews Josué Martìnez Buenrrostro, a software architect at Quarksoft in Mexico City, about the design process discussed in the article "A Principled Way to Use Frameworks in Architecture Design".
Article
Full-text available
Many have sought a software design process that allows a program to be derived systematically from a precise statement of requirements. It is proposed that, although designing a real product in that way will not be successful, it is possible to produce documentation that makes it appear that the software was designed by such a process. The ideal process and the documentation that it requires are described. The authors explain why one should attempt to design according to the ideal process and why one should produce the documentation that would have been produced by that process. The contents of each of the required documents are outlined.
Article
Full-text available
Service-oriented architecture (SOA) is rapidly emerging as the premier integration and architectural approach in contemporary, complex, heterogeneous computing environments. SOA is not simply about deploying software: it also requires that organisations evaluate their business models, come up with service-oriented analysis and design techniques, devise deployment and support plans, and carefully evaluate partner/customer/supplier relationships. Since SOA is based on open standards and is frequently realised using Web Services (WS), developing meaningful WS and business process specifications is an important requirement for SOA applications that leverage WS. Designers and developers cannot be expected to oversee a complex service-oriented development project without relying on a sound design and development methodology. This paper provides an overview of the methods and techniques used in service-oriented design and development. The aim of this paper is to examine a service development methodology from the point of view of both service producers and requesters and to review the range of elements in this methodology that are available to them.
Article
Full-text available
System and process auditors assure – from an information processing perspective – the correctness and integrity of the data that is aggregated in a company’s financial statements. To do so, they assess whether a company’s business processes and information systems process financial data correctly. The audit process is a complex endeavor that in practice has to rely on simplifying assumptions. These simplifying assumptions mainly result from the need to restrict the audit scope and to focus it on the major risks. This article describes a generalized audit process. According to our experience with this process, there is a risk that material deficiencies remain undiscovered when said simplifying assumptions are not satisfied. To address this risk of deficiencies, the article compiles thirteen control patterns, which – according to our experience – are particularly suited to help information systems satisfy the simplifying assumptions. As such, use of these proven control patterns makes information systems easier to audit and IT architects can use them to build systems that meet audit requirements by design. Additionally, the practices and advice offered in this interdisciplinary article help bridge the gap between the architects and auditors of information systems and show either role how to benefit from an understanding of the other role’s terminology, techniques, and general work approach.
Article
From the Book: Leading software designers have recognized domain modeling and design as critical topics for at least twenty years, yet surprisingly little has been written about what needs to be done or how to do it. Although it has never been clearly formulated, a philosophy has developed as an undercurrent in the object community, which I call "domain-driven design". I have spent the past decade focused on developing complex systems in several business and technical domains. I've tried best practices in design and development process as they have emerged from the leaders in the object-oriented development community. Some of my projects were very successful; a few failed. A feature common to the successes was a rich domain model that evolved through iterations of design and became part of the fabric of the project. This book provides a framework for making design decisions and a technical vocabulary for discussing domain design. It is a synthesis of widely accepted best practices along with my own insights and experiences. Projects facing complex domains can use this framework to approach domain-driven design systematically.

Contrasting Three Projects

Three projects stand out in my memory as vivid examples of the dramatic effect domain design practice has on development results. Although all three delivered useful software, only one achieved its ambitious objectives and delivered complex software that continued to evolve to meet the ongoing needs of the organization. I watched one project get out of the gate fast with a useful, simple web-based trading system. Developers were flying by the seat of their pants, but simple software can be written with little attention to design. As a result of this initial success, expectations for future development were sky-high. It was at this point that I was approached to work on the second version. When I took a close look, I saw that they lacked a domain model, or even a common language on the project, and were saddled with an unstructured design. So when the project leaders did not agree with my assessment, I declined the job. A year later, they found themselves bogged down and unable to deliver a second version. Although their use of technology was not exemplary, it was the business logic that overcame them. Their first release had ossified prematurely into a high-maintenance legacy. Lifting this ceiling on complexity calls for a more serious approach to the design of domain logic. Early in my career, I was fortunate to end up on a project that did emphasize domain design. This project, in a domain at least as complex as the one above, also started with a modest initial success, delivering a simple application for institutional traders. But this delivery was followed up with successive accelerations of development. Each successive iteration opened exciting new options for integration and elaboration of functionality. The team was able to respond to the needs of the traders with flexibility and expanding capability. This upward trajectory was directly attributable to an incisive domain model, repeatedly refined and expressed in code. As the team gained new insight into the domain, the model deepened. The quality of communication improved among developers and between developers and domain experts, and the design, far from imposing an ever-heavier maintenance burden, became easier to modify and extend. Unfortunately, not all projects that start with this intention manage to arrive at this virtuous cycle.
One project I joined started with lofty aspirations to build a global enterprise system based on a domain model, but finally had a disappointing result. The team had good tools, a good understanding of the business, and gave serious attention to modeling. But a separation of developer roles led to a disconnect between the model and implementation, so the design did not reflect the deep analysis that was going on. In any case, the design of detailed business objects was not rigorous enough to support combining them in elaborate applications. Repeated iteration produced no improvement in the code, due to an uneven skill level among developers with no clear understanding of the particular kind of rigor needed. As months rolled by, development work became mired in complexity and the team lost its cohesive vision of the system. After years of effort, the project did produce modest, useful software, but had given up its early ambitions along with the model focus. Of course many things can put a project off course: bureaucracy, unclear objectives, and lack of resources, to name a few. But it is the approach to design that largely determines how complex software can become. When complexity gets out of hand, the software can no longer be understood well enough to be easily changed or extended. By contrast, a good design can make opportunities out of those complex features. Some of these design factors are technological, and a great deal of effort has gone into the design of networks, databases, and other technical dimensions of software. Books have been written about how to solve these problems. Developers have cultivated their skills. Yet the most significant complexity of many applications is not technical. It is in the domain itself, the activity or business of the user. When this domain complexity is not dealt with in the design, it won't matter that the infrastructural technology is well-conceived. A successful design must systematically deal with this central aspect of the software. The premise of this book is that, for most software projects, the primary focus should be on the domain and domain logic, and complex domain designs should be based on a model. Domain-driven design is a way of thinking and a set of priorities, aimed at accelerating software projects that have to deal with complicated domains. To accomplish that goal, this book presents an extensive set of design practices, techniques, and principles.

Design vs. Development Process

Design books. Process books. They seldom even reference each other. Each is a complex topic in its own right. This is a design book. But I believe that these two issues are inextricable if design concepts are to be put into successful practice and not dry up into academic discussion. When people learn design techniques, they feel excited by the possibilities, but then the messy realities of a real project descend on them. They don't see how to fit the new design ideas with the technology they must use. Or they don't know when to worry about a particular design aspect and when to let go in the interest of time. While it is possible to talk with other team members about the application of a design principle in the abstract, it is more natural to talk about the things we do together. So, while this is a design book, I'm going to barge right across that artificial boundary when I need to. This will place design in the context of a development process. This book is not specific to a particular methodology, but it is oriented toward the new family of "Agile Development Processes".
Specifically, it assumes a couple of process practices are in place on the project. These two practices are prerequisites for applying the approach in this book. Iterative development: the practice of iterative development has been advocated and practiced for decades, and is a cornerstone of the Agile development methods. There are many good discussions in the literature of Agile development and Extreme Programming, among them Cockburn 1998 and Beck 1999. A close relationship between developers and domain experts: domain-driven design crunches a huge amount of knowledge into a model that reflects deep insight into the domain and a focus on the key concepts. This is a collaboration between those who know the domain and those who know how to build software. Because it is iterative, this collaboration must continue throughout the project's life. Extreme Programming (XP), conceived by Kent Beck, Ward Cunningham, and others (Beck 2000), is the most prominent of the agile processes and the one I have worked with most. To make the discussion concrete, I will use XP throughout the book as the basis for discussion of the interaction of design and process. The principles illustrated are easily adapted to other Agile Processes. In recent years there has been a rebellion against elaborate development methodologies that burden projects with useless, static documents and obsessive upfront planning and design. Instead, the Agile Processes, such as XP, emphasize the ability to cope with change and uncertainty. XP recognizes the importance of design decisions, but strongly resists upfront design. Instead, it puts an admirable effort into increasing communication, and increasing the project's ability to change course rapidly. With that ability to react, developers can use the "simplest thing that could work" at any stage of a project and then continuously refactor, making many small design improvements, ultimately arriving at a design that fits the customer's true needs. This has been a much-needed antidote to some of the excesses of design enthusiasts. Projects have bogged down in cumbersome documents that provided little value. They have suffered "analysis paralysis", so afraid of an imperfect design that they made no progress at all. Something had to change. Unfortunately, some of these new process ideas can be easily misinterpreted. Each person has a different definition of "simplest". And continuous refactoring without design principles to guide the small redesigns can lead developers to produce a code base that is hard to understand or change - the opposite of agility. And, while fear of unanticipated requirements often leads to over-engineering, the attempt to avoid over-engineering can develop into another fear: the fear of any deep design thinking at all. In fact, XP works best for developers with a sharp design sense. The XP process assumes that you can improve a design by refactoring, and that you will do this often and rapidly. But design choices make refactoring itself easier or harder. The XP process attempts to increase team communication. But model and design choices clarify or confuse communication. What is needed is an approach to domain modeling and design that pulls its weight. This book intertwines design and development practice and illustrates how domain-driven design and agile development reinforce each other. A sophisticated approach to domain modeling within the context of an agile development process will accelerate development.
The interrelationship of process with domain development makes this approach more practical than any treatment of "pure" design in a vacuum.

The Structure of This Book

The book is divided into four major sections. Part I: Putting the Domain Model to Work presents the basic goals of domain-driven development that motivate the practices in later sections. Since there are so many approaches to software development, Part I defines terms and gives an overview of the implications of placing the domain model in the role of driving communication and design. Part II: The Building Blocks of Model-Driven Design condenses a core of best practices in object-oriented domain modeling into a set of basic building blocks. The focus of this section is on bridging the gap between models and practical, running software. Sharing these standard patterns brings order to the design and makes it easy for team members to understand each other's work. Using standard patterns also establishes a common language, which all team members can use to discuss model and design decisions. But the main point of this section is the kind of decisions that keep the model and implementation aligned with each other, reinforcing each other's effectiveness. This alignment requires attention to the detail of individual elements. Careful crafting at this small scale gives developers a steady platform from which to apply the modeling approaches of Parts III and IV. Part III: Refactoring Toward Deeper Insight goes beyond the building blocks to the challenge of assembling them into practical models that provide the payoff. Rather than jumping directly into esoteric design principles, this section emphasizes the discovery process. Valuable models do not emerge immediately. They require a deep understanding of the domain. That understanding comes from diving in, implementing an initial design based on a probably naive model, and then transforming it again and again. Each time the team gains insight, the model is transformed to reveal that richer knowledge, and the code is refactored to reflect the deeper model and make its potential available to the application. Then, once in a while, this onion peeling leads to an opportunity to break through to a much deeper model, attended by a rush of profound design changes. Exploration is inherently open-ended, but it does not have to be random. Part III delves into modeling principles that can guide choices along the way, and techniques that help direct the search. Part IV: Strategic Design deals with situations that arise in complex systems, larger organizations, and interactions with external systems and legacy systems. This section explores a triad of principles that apply to the system as a whole: Bounded Context, Distillation, and Large-Scale Structure. Strategic design decisions are made by teams, or even between teams. Strategic design enables the goals of Part I to be realized on a larger scale, for a big system or an application that fits into an enterprise-wide network. Throughout the book, discussions are illustrated with realistic examples drawn from actual projects, rather than oversimplified "toy" problems. Much of the book is written as a set of "patterns". The reader should be able to fully understand the material without concern about this device, but those who are interested in the style and format of the patterns can read Appendix A.

Who This Book Is Written For

This book is primarily written for developers of object-oriented software.
Most members of a software project team can benefit from some parts of it. It will make most sense to people who are on a project, trying to do some of these things as they go, or who already have deep experience to relate it to. Some knowledge of object-oriented modeling is necessary to benefit from this book. The examples include UML diagrams and Java code, so the ability to read those languages at a basic level is important, but it is unnecessary to have mastered the details of either UML or Java. Knowledge of Extreme Programming will add perspective to the discussions of development process, but the discussion should be understandable without background knowledge. For an intermediate software developer, a reader who already knows something of object-oriented design and may have read one or two software design books, this book will fill in gaps and provide perspective on how object modeling fits into real life on a software project. It will help an intermediate developer make the jump to applying sophisticated modeling and design skills to practical problems. An advanced or expert software developer should be interested in the comprehensive framework for dealing with the domain. The systematic approach to design will help them in leading teams down this path. The coherent terminology will help them communicate with peers. Readers of various backgrounds may wish to take different paths through the book, shifting emphasis to different points. I recommend that all readers start with the introduction to Part I and Chapter 1. This book is a narrative, and can be read beginning to end, or from the beginning of any chapter. A skimmer who already has some grasp of a topic should be able to pick up the main points by reading headings and bolded text. A very advanced reader may want to skim Parts I and II, and will probably be most interested in Parts III and IV. In addition to this core readership, the book will be of interest to analysts and to relatively technical project managers. Analysts can draw on the connection between model and design to make more effective contributions in the context of an "Agile" project. Analysts may also use some of the principles of strategic design to better focus and organize their work. Project managers should be interested in the emphasis on making a team more effective and more focused on designing software meaningful to business experts and users. And, since strategic design decisions are interrelated with team organization and work styles, these design decisions necessarily involve the leadership of the project and have a major impact on the project's trajectory. While an individual developer who understands domain-driven design will gain valuable design techniques and perspective, the biggest gains come when a team joins together to apply a domain-driven design approach and moves the domain model to the center of discourse of the project. The team members will share a language that enriches their communication and keeps it connected to the software. They will produce an implementation in step with the model, giving leverage to application development. They will share a map of how the design work of different teams relates, and will systematically focus attention on the features most distinctive and valuable to the organization. A domain-driven design is a difficult technical challenge that can pay off big, opening opportunities just at the stage when most software projects begin to ossify into legacy. Eric Evans, San Francisco, California, March 2003
Article
From the Publisher: Best-selling author and world-renowned software development expert Robert C. Martin shows how to solve the most challenging problems facing software developers, project managers, and software project leaders today. This comprehensive, pragmatic tutorial on Agile Development and eXtreme Programming, written by one of the founding fathers of Agile Development: teaches software developers and project managers how to get projects done on time and on budget using the power of Agile Development; uses real-world case studies to show how to plan, test, refactor, and pair program using eXtreme Programming; contains a wealth of reusable C++ and Java code; and focuses on solving customer-oriented systems problems using UML and design patterns. Robert C. Martin is President of Object Mentor Inc. Martin and his team of software consultants use Object-Oriented Design, Patterns, UML, Agile Methodologies, and eXtreme Programming with worldwide clients. He is the author of the best-selling book Designing Object-Oriented C++ Applications Using the Booch Method (Prentice Hall, 1995), Chief Editor of Pattern Languages of Program Design 3 (Addison-Wesley, 1997), Editor of More C++ Gems (Cambridge, 1999), and co-author of XP in Practice, with James Newkirk (Addison-Wesley, 2001). He was Editor in Chief of the C++ Report from 1996 to 1999 and is a featured speaker at international conferences and trade shows.
Conference Paper
Service-oriented architecture (SOA) is an approach for building distributed systems that deliver application functionality as a set of self-contained, business-aligned services with well-defined and discoverable interfaces. This paper presents a systematic and architecture-centric framework, named the service-oriented architecture framework (SOAF), to ease the definition, design, and realization of SOA in order to achieve better business and IT alignment. The proposed framework is business-process-centric and comprises a set of structured activities grouped into five phases. It incorporates a range of techniques and guidelines for systematically identifying services, deciding service granularity, and modeling services while integrating existing operational/legacy systems. The results from a pilot validation of SOAF for SOA enablement of a realistic securities trading application are presented. Best practices and lessons learned are also discussed.
Book
Like many other incipient technologies, Web services are still surrounded by a substantial level of noise. This noise results from the always dangerous combination of wishful thinking on the part of research and industry and of a lack of clear understanding of how Web services came to be. On the one hand, multiple contradictory interpretations are created by the many attempts to realign existing technology and strategies with Web services. On the other hand, the emphasis on what could be done with Web services in the future often makes us lose track of what can really be done with Web services today and in the short term. These factors make it extremely difficult to get a coherent picture of what Web services are, what they contribute, and where they will be applied. Alonso and his co-authors deliberately take a step back. Based on their academic and industrial experience with middleware and enterprise application integration systems, they describe the fundamental concepts behind the notion of Web services and present them as the natural evolution of conventional middleware, necessary to meet the challenges of the Web and of B2B application integration. Rather than providing a reference guide or a "how to write your first Web service" kind of book, they discuss the main objectives of Web services, the challenges that must be faced to achieve them, and the opportunities that this novel technology provides. Established, as well as recently proposed, standards and techniques (e.g., WSDL, UDDI, SOAP, WS-Coordination, WS-Transactions, and BPEL) are then examined in the context of this discussion in order to emphasize their scope, benefits, and shortcomings. Thus, the book is ideally suited both for professionals considering the development of application integration solutions and for researchers and students interested in understanding and contributing to the evolution of enterprise application technologies.