FOCUS: MICROSERVICES
Using Microservices for Legacy Software Modernization
Holger Knoche and Wilhelm Hasselbring, Kiel University
// Microservices promise high maintainability, making them an interesting option for software modernization. This article presents a migration process for decomposing an application into microservices and reports experiences from applying this process in a legacy modernization project. //
Microservices [1] have gained attention as an architectural style for highly scalable applications running in the cloud. However, our discussions with practitioners show that microservices are also seen as a promising architectural style for applications for which scalability is not (yet) a priority. For business applications, which are typically in service for many years, maintainability is of particular importance. With respect to maintainability, microservices promise an improvement over traditional monoliths due to their smaller code bases, strong component isolation, and organization around business capabilities. Furthermore, the team autonomy fostered by microservices is likely to reduce coordination effort and improve team productivity.
It is therefore not surprising that companies are considering microservice adoption as a viable option for modernizing their existing software assets. Although some companies have succeeded in a complete rewrite of their applications [2], incremental approaches that gradually decompose the existing application into microservices are commonly preferred [3]. Other approaches to modernization, such as restructuring and refactoring of existing legacy applications, are also valid options [4]. However, decomposing a large, complex application is far from trivial. Even seemingly easy questions like "Where should I start?" or "What services do I need?" can be very hard to answer.
In this article, we present a process to modernize a large existing software application using microservice principles, and report on experiences from implementing it in an ongoing industrial modernization project. We particularly focus on the process of actually decomposing the existing application, and point out best practices as well as challenges and pitfalls that practitioners should watch out for.
The Legacy System
The legacy system to be modernized is the customer management application of an insurance company, which provides customer data such as names and postal addresses to the specific insurance applications. It was initially developed in the 1970s and 1980s as a terminal-based application on a mainframe using the Cobol programming language. In the early 2000s, user interfaces based on Java Swing were added, with the underlying logic being implemented in Cobol on Unix servers. In recent years, new applications have been developed solely in Java, and a service infrastructure has been introduced to access the functionality on the mainframe directly from Java.
The customer application is currently used by a total of 35 client applications, ranging across the entire technology stack. These applications access the customer application in different ways, some by using defined interfaces, but many by invoking internal modules or even accessing the underlying database tables directly.
In addition to the technological complexity, the scope of the application has widened over the years. New functionality not native to the customer domain has been added, leading to further growth and complexity. As of today, the customer application consists of about one million lines of code in 1,200 Cobol modules.
Modernization Drivers and Goals
The primary driver for modernizing the customer application is the fact that it has become increasingly difficult to deliver new features on time. Since a large, high-priority project requires fundamental changes to large parts of the application, this lack of evolvability is considered a strategic risk. According to the developers, there are two main reasons for this low evolvability: a deterioration of the application's internal structure and the high number of entry points for client applications. This has made the impact of changes difficult to assess, leading to a high amount of testing and rework. Secondary modernization drivers are the vendor and technology lock-in as well as the fact that many developers are close to retirement and Cobol developers are difficult to obtain.
The modernization goals are therefore as follows:

• establishing well-defined, platform-independent interfaces based on the bounded contexts of the underlying domain;
• improving the evolvability of the application, in particular by reducing the number of entry points and preventing access to internals, moving noncustomer functionality into separate components, and eliminating redundant and obsolete parts of the application; and
• an incremental platform migration from Cobol to Java.
Microservices are a suitable architecture for achieving these goals due to their organization around business capabilities, high evolvability, strong component separation, and focus on cross-platform interaction.
In order to choose a viable modernization process, several options were assessed with respect to their financial, technical, and staff-related feasibility. One of the fundamental questions of this assessment was whether to immediately implement the new services on the Java-based technology stack or to first build Cobol implementations and gradually replace them after migrating the client applications. As the platform migration from Cobol to Java was considered a high technical risk, the latter option was chosen to separate the client migration from the platform migration.
The Modernization Process
The proposed modernization process consists of five steps and is shown schematically in Figure 1. For the sake of conciseness, we show only two client applications, A and B, both of which invoke the customer application's functionality by directly calling modules. In addition, module B1 from B directly accesses one of the customer database tables as part of a query, for instance a join. This initial situation is depicted in Figure 1a. The remaining steps are described next, together with experiences from implementing them in the aforementioned modernization project.
Step 1: Defining an External Service Facade
The first step of the modernization process is concerned with defining an external service facade that captures the functionality required by the client systems in the form of well-defined service operations (see Figure 1b). The implementation of these operations is performed in later steps.
The major challenge of this step is to define domain-oriented services that provide the functionality needed by the clients, but without conserving questionable design decisions from the legacy application. We approached this challenge from two sides. First, we created a target domain model and used it to define service operations from scratch that we expected to be provided by the application. Afterward, we employed static analysis to identify the "entry points" of the application, i.e., programs, methods, or database tables that were accessed from other applications. This was indispensable, as nobody was aware of all the accesses. Once we knew the entry points, we proceeded to analyze the invoked functionality and to formulate it as provisional service operations.
Then, similar or redundant operations were merged, and the provisional operations were semantically matched against the expected operations. Ultimately, 29 service interfaces with about 150 service operations were derived from more than 500 entry points.

FIGURE 1. Overview of the modernization process. (a) The initial situation. (b) Defining an external service facade. (c) Adapting the service facade. (d) Migrating clients to the service facade. (e) Establishing internal service facades. (f) Replacing the service implementations by microservices. Changes in the respective process steps are highlighted in blue.
An important advantage of this approach is that in addition to the actual facade, it also produces information on how to replace the existing entry points with service operations, and provides candidates for adaptation. This information is used extensively in subsequent steps.
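To give an impression of the result, the following sketch shows what one of the derived, platform-independent facade operations might look like on the Java side. All names (CustomerService, PostalAddress, and so on) are hypothetical and only illustrate the shape of operations expressed in terms of the target domain model rather than in terms of legacy modules or tables.

import java.util.List;
import java.util.Optional;

// Hypothetical domain types; in the project these would come from the
// target domain model created in step 1.
record CustomerId(String value) {}
record PostalAddress(String street, String postalCode, String city) {}
record Customer(CustomerId id, String name, List<PostalAddress> addresses) {}

// Hypothetical facade interface: domain-oriented operations that hide the
// legacy entry points behind well-defined, platform-independent signatures.
public interface CustomerService {

    // Returns the customer with the given business key, if present.
    Optional<Customer> findCustomerById(CustomerId id);

    // Returns the postal addresses currently registered for a customer.
    List<PostalAddress> getPostalAddresses(CustomerId id);

    // Replaces a customer's postal address and returns the stored version.
    PostalAddress updatePostalAddress(CustomerId id, PostalAddress newAddress);
}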
Step 2: Adapting the Service Facade
After defining the service operations, implementations must be supplied. While it is possible to immediately provide microservice implementations, we chose to first build an implementation by adapting the existing system, as shown in Figure 1c. Thus, the client migration to the service facade and the platform migration, which both posed considerable risk, were split into separate steps, at the cost of creating a throwaway implementation.
A key challenge of this step is to find appropriate candidates for adaptation. For this task, the results of the entry point analysis from step 1 proved to be particularly helpful. However, due to the refinement of the service operations, several operations had to be implemented from scratch.
For a successful adaptation, it is imperative to ensure sufficient testing. This can be difficult in legacy environments, as such environments may provide little to no facilities for common testing techniques like mocking. The new microservices are usually much easier to test, as technologies like Docker Compose can be used to set up environments in an ad hoc fashion. In our modernization project, the lack of mocking for the legacy database proved to be a particular challenge. All tests were therefore designed so that modifications to the test data were either rolled back or explicitly compensated.
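The compensation approach can be illustrated with a JUnit-style test sketch, reusing the hypothetical types from the facade sketch above. Because the shared legacy database cannot be mocked, every change a test makes is explicitly undone afterward; the wiring and operations shown here are assumptions for illustration only.

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Sketch of a facade test against the shared legacy database. Since the
// database cannot be mocked, every modification is explicitly compensated
// so that the shared test data remains intact for other tests and teams.
class UpdatePostalAddressTest {

    // Hypothetical wiring of the facade implementation under test.
    private final CustomerService service = TestEnvironment.customerService();

    private CustomerId changedCustomer;
    private PostalAddress originalAddress;

    @Test
    void updateReplacesThePostalAddress() {
        changedCustomer = new CustomerId("4711");
        originalAddress = service.getPostalAddresses(changedCustomer).get(0);

        PostalAddress newAddress = new PostalAddress("Main St. 1", "24118", "Kiel");
        service.updatePostalAddress(changedCustomer, newAddress);

        assertEquals(newAddress, service.getPostalAddresses(changedCustomer).get(0));
    }

    @AfterEach
    void compensateChanges() {
        // Explicit compensation: restore the original address after the test.
        if (changedCustomer != null && originalAddress != null) {
            service.updatePostalAddress(changedCustomer, originalAddress);
        }
    }
}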
Step 3: Migrating Clients to the Service Facade
Once the service operations are implemented, client applications can start migrating to the new facade by replacing their existing accesses with service invocations (see Figure 1d). This step poses organizational as well as technical challenges and usually consumes a large part of the overall project time and budget, since large parts of the client applications must be changed and tested.
In order to support the development teams during the migration, we created transition documentation. This documentation contained a textual description of how to replace each of the entry points identified in step 1 with one or more service operations. For each of the new operations, detailed descriptions and code snippets were provided to facilitate the transition as much as possible. This documentation was considered very helpful by the developers.
For the actual migration, many client applications successfully employed client-side adapters that emulated specific parts of the old interfaces using the new service facade. Thus, the changes to the existing modules could be reduced significantly. Following the idea of the Tolerant Reader pattern [5], these adapters relied on only those fields and operations actually required by the respective application, thus preventing interface changes from trickling into the client applications. The idea of creating a shared adaptation facility for all client applications was, however, discarded, as this would have preserved the highly complex interface structure the modernization was aiming to improve.
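A minimal sketch of such a client-side adapter, again based on the hypothetical types introduced above, is shown below. The adapter keeps the flat signature a legacy client module already calls, implements it on top of the new facade, and reads only the two fields this particular client needs, in the spirit of the Tolerant Reader pattern.

// Hypothetical legacy-facing signature that the client module already calls.
interface LegacyCustomerLookup {
    String[] findNameAndCity(String customerNumber);
}

// Client-side adapter: emulates the old entry point on top of the new
// facade. It relies only on the fields this client needs (name and city),
// so unrelated changes to the facade's data structures do not trickle in.
class CustomerLookupAdapter implements LegacyCustomerLookup {

    private final CustomerService customerService;

    CustomerLookupAdapter(CustomerService customerService) {
        this.customerService = customerService;
    }

    @Override
    public String[] findNameAndCity(String customerNumber) {
        return customerService.findCustomerById(new CustomerId(customerNumber))
                .map(customer -> new String[] {
                        customer.name(),
                        customer.addresses().isEmpty()
                                ? ""
                                : customer.addresses().get(0).city()
                })
                .orElse(null); // legacy callers expect null for "not found"
    }
}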
During the migration, a few service methods proved to be too finely grained, leading to performance degradation due to invocation overhead. These methods therefore had to be refined while the migration was under way.
In our project, the client migration took almost two years. To keep track of the overall migration progress, we employed the static-analysis toolset used in step 1, regularly creating a report of modules still using the old entry points. This report was then compared to the migration plans of the client applications to ensure that the migration proceeded as planned.
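A simple way to picture such a progress report is sketched below; the module names are hypothetical, and the project used its static-analysis toolset rather than the plain text matching shown here.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Illustrative progress report: counts how many client source files still
// reference the old entry points. Only a sketch; real entry-point detection
// was done with the static-analysis toolset from step 1.
class MigrationProgressReport {

    public static void main(String[] args) throws IOException {
        List<String> oldEntryPoints = List.of("KNDMOD01", "KNDTAB01"); // hypothetical names
        Path clientSources = Path.of(args.length > 0 ? args[0] : "clients");

        try (Stream<Path> files = Files.walk(clientSources)) {
            List<Path> sources = files.filter(Files::isRegularFile).toList();
            for (String entryPoint : oldEntryPoints) {
                long remaining = sources.stream()
                        .filter(file -> contains(file, entryPoint))
                        .count();
                System.out.printf("%s: still referenced in %d files%n", entryPoint, remaining);
            }
        }
    }

    private static boolean contains(Path file, String needle) {
        try {
            return Files.readString(file).contains(needle);
        } catch (IOException e) {
            return false; // skip unreadable or non-text files in this sketch
        }
    }
}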
Step 4: Establishing Internal Service Facades
The techniques described in steps 1 to 3 can also be used to restructure applications internally (see Figure 1e). Although this restructuring can be done in parallel with establishing the external service facade, we decided to perform this step separately, as performing both restructurings at the same time was considered both too risky and resource-demanding. Since no naming conventions could be exploited, all programs were inspected manually and assigned to the appropriate components. Programs containing functionality from different components were assigned to all the respective components and marked for separation. Potentially obsolete programs were flagged for later deletion.
Step 5: Replacing the Service Implementations with Microservices
Once all desired service facades are established, the process of actually introducing microservices can begin, as the adapted service implementations can now be transparently replaced (see Figure 1f). It is, however, important to note that several modernization goals have already been reached. Although the implementation is still based on the old Cobol code, it is now only accessed using well-defined, platform-independent interfaces, and the application has been internally restructured into the desired components. In particular, the database has been disentangled so that, for instance, schema changes can now be performed without affecting client applications.
Although the platform migration is transparent in theory, there are numerous practical challenges. Two exemplary challenges are described next: transactions and resilience.
Transactions are ubiquitous in business software, and many applications rely on the fact that all changes are automatically rolled back in case of failure. For distributed architectures such as microservices, transactions are notoriously difficult to implement, and transactionless approaches such as explicit compensation are preferred. However, when microservice implementations are introduced that do not provide the same transactional guarantees as the former implementation, the potential inconsistencies must be investigated, and compensation needs to be added to the client application if necessary.
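The following sketch indicates how such compensation might look in a client that updates two services which no longer share a transaction. ContractService and its operation are hypothetical, and real compensation logic would also have to handle failures of the compensating call itself.

// Sketch of explicit compensation in a client application: two updates that
// used to run in one database transaction now span two services. If the
// second call fails, the first is undone by a compensating call.
interface ContractService {
    void recalculateTariffsFor(CustomerId id); // hypothetical operation
}

class MoveCustomerUseCase {

    private final CustomerService customerService;
    private final ContractService contractService;

    MoveCustomerUseCase(CustomerService customerService, ContractService contractService) {
        this.customerService = customerService;
        this.contractService = contractService;
    }

    void moveCustomer(CustomerId id, PostalAddress newAddress) {
        PostalAddress previousAddress = customerService.getPostalAddresses(id).get(0);
        customerService.updatePostalAddress(id, newAddress);
        try {
            contractService.recalculateTariffsFor(id);
        } catch (RuntimeException e) {
            // Compensate: restore the old address instead of relying on rollback.
            customerService.updatePostalAddress(id, previousAddress);
            throw e;
        }
    }
}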
Distributed architectures are furthermore susceptible to partial failure, a problem that does not arise within a monolithic application. Because of this, many existing applications simply rely on the availability of their dependencies. In such situations, the ability to cope with unavailable dependencies needs to be established before the replacement, for instance by inserting circuit breakers [6].
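In practice, a resilience library would typically provide this capability; the hand-rolled sketch below only illustrates the basic idea of a circuit breaker that fails fast once a dependency appears to be unavailable.

import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal, illustrative circuit breaker: after a number of consecutive
// failures, calls are rejected for a cool-down period instead of piling up
// on an unavailable dependency. A production setup would use a library
// such as Resilience4j and add half-open probing, metrics, and timeouts.
class SimpleCircuitBreaker {

    private final int failureThreshold;
    private final Duration openDuration;

    private int consecutiveFailures;
    private Instant openUntil = Instant.MIN;

    SimpleCircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    synchronized <T> T call(Supplier<T> protectedCall, Supplier<T> fallback) {
        if (Instant.now().isBefore(openUntil)) {
            return fallback.get(); // circuit is open: fail fast with a fallback
        }
        try {
            T result = protectedCall.get();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openUntil = Instant.now().plus(openDuration);
            }
            return fallback.get();
        }
    }
}

A client application would then wrap its facade calls, for example breaker.call(() -> service.getPostalAddresses(id), List::of), and decide per call which fallback behavior is acceptable.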
Current Situation, Further Steps, and Limitations
As of now, our modernization project has been running for almost four years. The client migration is almost completed and is running successfully in production. Furthermore, the first new service operations have been implemented, and the first batch of requirements for the strategic project was delivered on time.
In total, new implementations were created for 23 service operations, several of which, though, still have to access the Cobol code to remain compatible with operations that have not yet been migrated. These implementations are not yet true microservices, as several technological and organizational barriers still have to be overcome. In particular, the issue of transactional guarantees is still subject to discussion, which is why mostly read-only operations have been migrated so far. We observe, however, that the first careful steps toward infrastructure automation and DevOps practices [7] are being taken in the wake of the migration, as the new implementations create opportunities for experimenting with these approaches.
As for the time spent on the individual process steps, the definition and adaptation of the service facade took 10 and 15 months, respectively. The client migration took about 20 months, and the creation of the new service implementations has taken about nine months.
It should, however, also be noted that certain parts of the application cannot be modernized using the presented approach. In particular, some user interfaces, which are built on highly proprietary technologies, lack the necessary means for service abstraction. As these user interfaces are embedded by other client applications, they must be preserved for the time being.
In this article, we have presented a process for decomposing an existing software asset into microservices based on our experience from an industrial case study. Obviously, this article can only scratch the surface of this complex matter. Therefore, additional material on the topic has been compiled in the "Further Reading" sidebar.

FURTHER READING

Additional information on migrating to microservices can be found in numerous sources. Patterns and transition strategies are discussed in Building Microservices [8] and Migrating to Cloud-Native Application Architectures [3]. Detailed information on common pitfalls and antipatterns can be found in Microservices AntiPatterns and Pitfalls [9]. Further industrial case studies are presented in "On Monoliths and Microservices" [10] and "Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture" [11]. Information for implementing microservices on specific platforms can be found, for instance, in Microservices, IoT, and Azure: Leveraging DevOps and Microservice Architecture to Deliver SaaS Solutions [12] and Spring Microservices in Action [13].
Even though our modernization is not yet complete, we have already reached essential goals, as we were able to eliminate uncontrolled access to the internals of the application and we met the first batch of new requirements on time. Furthermore, practices like automated testing and code reviews have been established as a by-product of the modernization, which is considered a notable success.
In retrospect, we conclude that despite the additional cost of creating throwaway implementations, separating the client migration from the platform migration was the right thing to do in this particular project, as several technical challenges are still not solved. For projects with a less risky platform migration, however, this effort is not justified, and new service implementations should be immediately provided as microservices. Furthermore, due to the effort involved, this process is viable only for migrating large, complex software systems with a high business value.
References

1. J. Lewis and M. Fowler, "Microservices," 2014; martinfowler.com/articles/microservices.html.
2. W. Hasselbring and G. Steinacker, "Microservice Architectures for Scalability, Agility and Reliability in E-Commerce," Proc. IEEE Int'l Conf. Software Architecture Workshops (ICSAW 17), 2017, pp. 243–246.
3. M. Stine, Migrating to Cloud-Native Application Architectures, O'Reilly, 2015.
4. W. Hasselbring et al., "The Dublo Architecture Pattern for Smooth Migration of Business Information Systems," Proc. 26th Int'l Conf. Software Eng. (ICSE 04), 2004, pp. 117–126.
5. M. Fowler, "Tolerant Reader," 9 May 2011; martinfowler.com/bliki/TolerantReader.html.
6. M. Nygard, Release It! Design and Deploy Production-Ready Software, Pragmatic Bookshelf, 2007.
7. L. Bass, I. Weber, and L. Zhu, DevOps: A Software Architect's Perspective, Addison-Wesley, 2015.
8. S. Newman, Building Microservices, O'Reilly, 2015.
9. M. Richards, Microservices AntiPatterns and Pitfalls, O'Reilly, 2015.
10. G. Steinacker, "On Monoliths and Microservices," 2015; dev.otto.de/2015/09/30/on-monoliths-and-microservices.
11. A. Balalaie, A. Heydarnoori, and P. Jamshidi, "Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture," IEEE Software, vol. 33, no. 3, 2016, pp. 42–52.
12. B. Familiar, Microservices, IoT, and Azure: Leveraging DevOps and Microservice Architecture to Deliver SaaS Solutions, Apress, 2015.
13. J. Carnell, Spring Microservices in Action, Manning, 2017.
ABOUT THE AUTHORS

HOLGER KNOCHE is a senior software architect at b1m Informatik and a PhD student in Kiel University's Software Engineering Group. His research interests include software architecture and software modernization, especially runtime performance and data consistency during the move toward decentralized architectures such as microservices. Knoche received a master's in computer science from FHDW Hannover. He's a member of the German Association for Computer Science. Contact him at hkn@informatik.uni-kiel.de.

WILHELM HASSELBRING is a professor of software engineering at Kiel University. In the Software Systems Engineering competence cluster, he coordinates technology-transfer projects with industry. His research interests include software engineering and distributed systems, particularly software architecture design and evaluation. Hasselbring received a PhD in computer science from the University of Dortmund. He's a member of ACM, the IEEE Computer Society, and the German Association for Computer Science. Contact him at hasselbring@email.uni-kiel.de.