SOCA (2017) 11:233–247
DOI 10.1007/s11761-017-0208-y
ORIGINAL RESEARCH PAPER
Cost comparison of running web applications in the cloud using
monolithic, microservice, and AWS Lambda architectures
Mario Villamizar1 · Oscar Garcés1 · Lina Ochoa1 · Harold Castro1 · Lorena Salamanca2 · Mauricio Verano2 · Rubby Casallas2 · Santiago Gil3 · Carlos Valencia3 · Angee Zambrano3 · Mery Lang3
Received: 13 June 2016 / Revised: 10 April 2017 / Accepted: 12 April 2017 / Published online: 27 April 2017
© Springer-Verlag London 2017
Abstract Large Internet companies like Amazon, Netflix,
and LinkedIn are using the microservice architecture pat-
tern to deploy large applications in the cloud as a set of
small services that can be independently developed, tested,
deployed, scaled, operated, and upgraded. However, aside
from gaining agility, independent development, and scala-
bility, how microservices affect the infrastructure costs is a
major evaluation topic for companies adopting this pattern.
This work is the result of a research project partially funded by
Colciencias under contract 0569-2013 “Construcción de una línea de
productos de aplicaciones de crédito/cartera que serán ofrecidas a
través de un Marketplace en forma SaaS para el sector no bancarizado
(PYMES) de Colombia” COD. 1204-562-37152.
✉ Mario Villamizar
mj.villamizar24@uniandes.edu.co
Oscar Garcés
ok.garces10@uniandes.edu.co
Lina Ochoa
lm.ochoa750@uniandes.edu.co
Harold Castro
hcastro@uniandes.edu.co
Lorena Salamanca
l.salamanca10@uniandes.edu.co
Mauricio Verano
m.verano239@uniandes.edu.co
Rubby Casallas
rcasalla@uniandes.edu.co
Santiago Gil
sgil@heinsohn.com.co
Carlos Valencia
cvalencia@heinsohn.com.co
Angee Zambrano
azambrano@heinsohn.com.co
Mery Lang
mlang@heinsohn.com.co
This paper presents a cost comparison of a web application
developed and deployed using the same scalable scenarios
with three different approaches: 1) a monolithic architec-
ture, 2) a microservice architecture operated by the cloud
customer, and 3) a microservice architecture operated by
the cloud provider. Test results show that microservices can
help reduce infrastructure costs in comparison with stan-
dard monolithic architectures. Moreover, the use of services
specifically designed to deploy and scale microservices, such
as AWS Lambda, reduces infrastructure costs by 70% or
more, and unlike microservices operated by cloud customers,
these specialized services help to guarantee the same perfor-
mance and response times as the number of users increases.
Lastly, we also describe the challenges we faced while
implementing and deploying microservice applications, and
include a discussion on how to replicate the results on other
cloud providers.
Keywords Cloud computing · Microservices · Service-oriented architectures · Scalable applications · Software engineering · Software architecture · Microservice architecture · Serverless architectures · AWS Lambda · Amazon Web Services
1COMIT Research Group, Systems and Computing
Engineering Department, Universidad de los Andes, Bogotá
D.C., Colombia
2TICSw Research Group, Systems and Computing
Engineering Department, Universidad de los Andes, Bogotá
D.C., Colombia
3Project Management Department, Mapeo, Bogotá D.C.,
Colombia
1 Introduction
Cloud computing [1] is a model that allows companies to
deploy enterprise applications that, if properly designed, can
scale their computing resources on demand. Companies can
either deploy their own applications on Infrastructure as a
Service (IaaS) [2] or Platform as a Service (PaaS) [3] solu-
tions, or they can buy ready-to-use applications that use the
Software as a Service (SaaS) [4] model. When companies deploy their own applications on IaaS or PaaS solutions in order to take advantage of cloud computing capabilities such as auto scaling, continuous delivery, hot deployments, high availability, and dynamic monitoring, they face various time- and cost-consuming challenges. Furthermore, to avoid the hassle and the development costs of migrating to the cloud, most companies start by deploying traditional, monolithic applications on IaaS/PaaS solutions.
In this context, we adopt Martin Fowler's definition of a monolithic architecture [5]: an application with a single codebase/repository that exposes tens or hundreds of different services to external systems or consumers through different interfaces such as HTML pages, Web services, and/or REST services. The application is developed by a single development group, and changes made by any developer can affect the whole set of services because all changes are made to the same codebase. The codebase can be deployed on single-server or multi-server (behind a load balancer) environments.
Scaling a monolithic application is a challenge because of
the irregular consumption behaviour of the different services
deployed on the same application. Thus, when the demand
for highly consumed services increases, additional infras-
tructure is required for the whole application, regardless of
the consumption pattern of the services. Therefore, given that
infrastructure is shared among services, server resources are
wasted in the execution of unused services, thus increasing
related costs.
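The waste described above can be made concrete with a small sizing sketch. The numbers below are illustrative assumptions, not measurements from the paper: a monolithic cluster must be sized for its most demanding service, and every instance replicates capacity for every other service.

```javascript
// Hypothetical sizing model (illustrative numbers, not from the paper).
// A monolithic cluster is scaled as a whole: every instance runs the full
// application, so cluster size is driven by the most demanding service.
function monolithInstances(peaks, perInstance) {
  // peaks: requests/s each service must sustain at its peak
  // perInstance: requests/s a single full-stack instance can sustain per service
  var max = 0;
  peaks.forEach(function (p, i) {
    max = Math.max(max, Math.ceil(p / perInstance[i]));
  });
  return max;
}

// With microservices, each service scales independently.
function microserviceInstances(peaks, perInstance) {
  return peaks.map(function (p, i) { return Math.ceil(p / perInstance[i]); });
}

// Example: S1 (CPU-intensive) needs far more capacity than S2.
var peaks = [400, 50]; // requests/s at peak for S1 and S2
var per = [20, 50];    // requests/s one instance handles for each service
var mono = monolithInstances(peaks, per);      // 20 full-stack instances
var micro = microserviceInstances(peaks, per); // [20, 1]
```

Under these assumptions the monolith runs 20 full-stack instances, giving S2 capacity for 1,000 requests/s when it only needs 50; the microservice deployment provisions 20 instances for S1 and a single instance for S2, so the idle S2 capacity is never paid for.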
Microservice architectures propose a solution to effi-
ciently scale computing resources and help solve many other
issues that arise in monolithic architectures [5]. The allocated
infrastructure can be better tailored to the microservices’
needs due to the independent scaling of each one of them,
which eventually diminishes the infrastructure costs needed
to run the applications.
However, microservice architectures face additional chal-
lenges such as the effort required to deploy each microser-
vice, and to scale and operate them in cloud infrastructures.
To address these concerns, services like AWS Lambda
[6] have been launched by leading providers like Amazon
Web Services (AWS). AWS Lambda allows implement-
ing microservice architectures without the need to manage
servers. Thus, it facilitates the creation of functions (i.e.
microservices) that can be easily deployed and automatically
scaled, and it also helps reduce infrastructure and operation
costs.
To understand how microservice architectures and AWS
Lambda affect the infrastructure costs of an application, we
developed and tested a case study with a real application
implemented and deployed in the cloud using three different
approaches: a monolithic architecture, a microservice archi-
tecture managed and scaled by the cloud consumer, and a
microservice architecture automatically managed and scaled
by the cloud provider (i.e. AWS Lambda). In this case study,
we identified and compared the infrastructure costs required
for each approach and also detected some of the efforts and
challenges of using each approach while the application was
developed, tested, deployed, scaled, operated, and upgraded.
The remainder of this paper is organized as follows: Section 2 presents different efforts, strategies, and methodologies used in applications developed with the service-oriented architecture (SOA) approach and the microservice architecture pattern. The description of the case study that was developed and tested is presented in Sect. 3. The implementation of the different architectures is described in Sect. 4. Section 5 describes the application deployments on the AWS infrastructure. Section 6 shows the results of performance tests executed for each architecture in order to compare their infrastructure costs, and presents the concerns and trade-offs that must be considered by companies trying to implement microservices. Section 7 presents how the results may be used to test other serverless services, such as those launched recently by Google, Microsoft, and IBM. Section 8 concludes and presents several research lines that can be addressed.
2 Background
Applications that need to scale to thousands or millions of users are very common today due to factors such as the large number of Internet users, the increased use of mobile app stores, the creation of SaaS products, the launch of mass-market products by startups, the shift of many business models from Business to Business (B2B) to Business to Consumer (B2C), and government initiatives to provide more online services to citizens. Many of these applications are deployed on IaaS/PaaS solutions to support their rapid growth and unpredictable peak periods.
Suppose that, at an enterprise level, an application A starts with a monolithic approach as a single codebase and offers a set of services S (S1, S2, ..., Sx). The codebase is shared among a set of developers D (D1, D2, ..., Dy), and the production environment is operated by a set of operators O (O1, O2, ..., Oz). When the application begins to increase
its demand, more services or developers are added, thus increasing the complexity and the time required to launch new features or improvements. The problem of the complexity of large business applications has been addressed using different SOA [7] approaches, where an application is divided into a set of business applications A (A1, A2, ..., Ax), and each one offers services to the others through different protocols (mainly SOAP). Routing mechanisms/systems, such as the Enterprise Service Bus (ESB) [8], are used to route messages among applications. A SOA strategy allows each application to be developed by a set of developer teams T (T1, T2, ..., Ty) (usually grouped by business function) and operated by a team of operators O.
Although SOA implementations can be a solution for the
requirements of some companies, such implementations are
expensive, time-consuming, and complex [9]. Therefore, the
challenges of implementing SOA strategies have been widely
studied by businesses and academia [10]. Additionally, ESB products were designed to support the workloads of enterprise applications with hundreds or thousands of users; when ESBs are used in Internet-scale applications with hundreds of thousands or millions of users, they become a bottleneck, generating high latencies and creating a single point of failure. ESBs were also not designed for cloud environments, because they make it difficult to add or remove servers on demand. Regarding agility, adding new end-user requirements in SOA implementations demands many complex configurations in the ESB, which is a time-consuming task.
To avoid the problems of monolithic applications and
take advantage of some of the SOA architecture bene-
fits, the microservice architecture pattern has emerged as a
lightweight subset of the SOA architecture pattern. This pat-
tern is being used by companies like Amazon [11], Netflix
[12], Gilt [13], LinkedIn [14], and SoundCloud [15] to sup-
port and scale their applications and products. There has been
a lot of discussion [16–18] between industrial practitioners
and researchers about whether the microservice architec-
ture pattern is a new software architecture style or the same
reference architecture proposed by SOA. That discussion
converges on the idea that microservices adopted SOA con-
cepts used during the last decade, but are still an architecture
style focused on achieving agility [19] and simplicity at busi-
ness and technical levels, while avoiding the complexity of
centralized ESBs and allowing development teams to quickly
and continuously scale and deploy applications to millions
of users.
The microservice pattern [20] proposes dividing an application A into a set of small business services µS (µS1, µS2, ..., µSn), each of them offering a subset of the services S (S1, S2, ..., Sx) provided by application A. Every microservice is developed independently by a development team µTi using the technological stack —including the presentation, business, and persistence layers— that is most appropriate for the services offered by the microservice. Each microservice is developed using an independent codebase, and the team µTi is also in charge of deploying, scaling, and operating the microservice in a cloud computing IaaS/PaaS solution. In the presentation layer, the services are published using the REST (Representational State Transfer) [21] architectural style due to its simplicity and its adoption by large Internet companies.
In front of the microservices sits a set of application gateways G (G1, G2, ..., Gm), each one offering services to specific types of end-users such as web users, iOS users, Android users, and public API users. Each gateway exposes its services using different interfaces and protocols like webpages/HTTP, SOAP/HTTP, and REST/HTTP. Gateways receive requests from end-users, consume one or several microservices, and send the results back to the requesters. Each gateway is also independently developed, tested, deployed, scaled, operated, and upgraded by a team GTj. Microservices commonly expose their services to gateways instead of end-users, and gateways typically do not have persistence layers.
Because microservices and gateways are developed and maintained as self-managed applications by independent teams, the number of developers can grow in a more scalable way. Moreover, a large and complex monolithic application can be regarded as a set of small and simple applications. Each microservice/gateway can be developed using different programming languages (e.g. Java, .NET, PHP, Ruby, Python, Scala) and persistence technologies (e.g. SQL, NoSQL). In the cloud, each microservice/gateway can be scaled independently, using the server types (e.g. high CPU, high memory, high I/O) and auto scaling rules that are most appropriate. The microservice pattern also avoids single points of failure and allows the use of continuous delivery strategies, because each new deployment affects only the microservice/gateway being updated, while the other microservices/gateways continue operating without disruption.
One of the concerns when implementing microservices is the effort required to deploy and scale each microservice/gateway in the cloud. Although companies implementing microservices can use DevOps [22] automation tools such as Docker, Chef, Puppet, and Auto Scaling (Amazon), implementing such tools consumes time and resources. To address this concern, cloud providers like AWS have recently launched services such as AWS Lambda, which allows the deployment of microservices without the need to manage servers. This service offers a per-request cost structure, so developers only need to write individual functions implementing each microservice/gateway and then deploy them on AWS Lambda. Once deployed, those functions are scaled automatically; AWS charges for each function execution, while hiding the deployment, operation, and monitoring of load balancers and web servers. This per-request model helps reduce infrastructure costs because each function is executed in a computing environment adjusted to its requirements, and the customer pays only for each function execution, thus avoiding infrastructure payments when there is no microservice/gateway consumption (no base infrastructure is required).
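The difference between the two billing models can be sketched with some back-of-the-envelope arithmetic. The prices below are illustrative assumptions (roughly in line with AWS's published Lambda pricing at the time, but not figures taken from the paper):

```javascript
// Back-of-the-envelope comparison of the two billing models.
// Assumed prices (illustrative): ~$0.20 per million Lambda requests plus
// ~$0.00001667 per GB-second of execution; servers bill a fixed hourly
// price whether or not any requests arrive.
function lambdaMonthlyCost(requests, avgSeconds, memoryGB) {
  var requestCost = (requests / 1e6) * 0.20;
  var computeCost = requests * avgSeconds * memoryGB * 0.00001667;
  return requestCost + computeCost;
}

function serverMonthlyCost(instances, hourlyPrice) {
  return instances * hourlyPrice * 24 * 30; // always-on cluster, 30-day month
}

// 2M requests/month, 200 ms average duration, 512 MB functions
var lambda = lambdaMonthlyCost(2e6, 0.2, 0.5); // ≈ $3.73
var servers = serverMonthlyCost(2, 0.05);      // ≈ $72 for two small instances
```

With these assumptions the per-request model costs a few dollars per month, while even a minimal always-on two-instance cluster costs an order of magnitude more. At very high sustained traffic the comparison can change, which is why real workloads need to be measured, as the paper does in Sect. 6.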
Microservice architectures are being implemented by large companies to scale their applications in the cloud efficiently, to reduce complexity, to expand development teams easily, and to achieve agility. However, when companies want to start adopting microservices while developing new applications, they generally have some of the following questions: 1) how can microservices help reduce infrastructure costs; 2) how are the development process and business culture changed when microservices are implemented; and 3) how can emerging cloud services such as AWS Lambda —designed to allow the automatic deployment and scaling of microservices— help reduce infrastructure costs.
To answer the questions above, we developed a small
enterprise application using the monolithic architecture, the
microservice architecture operated by the cloud customer,
and the microservice architecture operated by the cloud
provider. We deployed the three applications on AWS in
order to get and compare the infrastructure costs required to
execute them, and we identified areas in the software devel-
opment process, and in the operations and scalability of the
applications, that are affected when microservice architec-
tures are used.
3 Case study
In order to evaluate the implications of using microservices
operated by the cloud customer and the cloud provider in
a real scenario, versus using a monolithic architecture, we
worked together with a software company in the development
of an application that uses the three aforementioned architec-
tures. The application was designed to support the business process of generating and querying payment plans for loans granted by an institution to its customers. The application was offered to different tenants (institutions), each one using the application under the SaaS model. Each tenant had an admin user that managed the tenant account.
The admin user provided access to tenant employees, allow-
ing them to generate and query payment plans when tenant
customers visited their offices. In the database layer, the
application used the shared database multi-tenant model,
where the information of all tenants is stored in the same
database schema. At an application level, all tenants were
supported using the same set of applications/web servers.
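In a shared-database multi-tenant model, every query must be scoped by a tenant identifier so that one tenant can never read another tenant's rows. The paper does not show the application's data access code; the sketch below is a hypothetical illustration of that scoping (the table and column names are invented):

```javascript
// Shared-database multi-tenancy: all tenants live in one schema, so every
// query is filtered by a tenant identifier. Table/column names below are
// hypothetical, not taken from the study's application.
function scopedQuery(table, tenantId, filters) {
  var clauses = ["tenant_id = $1"];
  var params = [tenantId];
  Object.keys(filters || {}).forEach(function (column, i) {
    clauses.push(column + " = $" + (i + 2));
    params.push(filters[column]);
  });
  return {
    text: "SELECT * FROM " + table + " WHERE " + clauses.join(" AND "),
    values: params
  };
}

var q = scopedQuery("payment_plans", 42, { plan_id: 7 });
// q.text: "SELECT * FROM payment_plans WHERE tenant_id = $1 AND plan_id = $2"
```

Using parameter placeholders rather than string concatenation of values keeps the tenant scoping injection-safe, which matters when all tenants share one schema.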
The company (i.e. the SaaS provider) expected to add many tenants with thousands of users during the first years of
operation. In addition, they planned to deploy this application
in an IaaS solution that would enable them to scale their
infrastructure on demand. In order to provide a simple case
study, we considered only two services from the complete
set of services offered by the original application.
The first service, called S1, was in charge of generating a payment plan comprising a set of payments (from 1 to 180 months). This service implemented CPU-intensive algorithms to generate payment plans, because it used different variables related to the customer acquiring the credit (age, salary, location, expenses, etc.), the type of credit (mortgage, vehicle, etc.), and other factors defined by the SaaS customers. It did not store any information in the database, and its typical response time was around 3000 milliseconds. It received some parameters (principal, number of payments, credit type, interest rate, etc.) and returned the payment plan based on those parameters.
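The paper does not publish S1's actual algorithm. As a stand-in, the sketch below uses a standard fixed-installment amortization formula to show the general shape of such a computation; the formula choice and variable names are assumptions, not the study's implementation:

```javascript
// Simplified stand-in for S1: a fixed-installment amortization schedule.
// The real service used many more variables (customer profile, credit
// type, etc.); this sketch only illustrates the shape of the computation.
function paymentPlan(principal, annualRate, months) {
  var r = annualRate / 12; // monthly interest rate
  var installment = r === 0
    ? principal / months
    : principal * r / (1 - Math.pow(1 + r, -months));
  var balance = principal;
  var payments = [];
  for (var m = 1; m <= months; m++) {
    var interest = balance * r;      // interest accrued this month
    var capital = installment - interest; // portion repaying principal
    balance -= capital;
    payments.push({ month: m, installment: installment,
                    interest: interest, capital: capital,
                    balance: Math.max(balance, 0) });
  }
  return payments;
}

var plan = paymentPlan(10000, 0.12, 12); // 12 monthly payments at 12% annual
```

The real S1 folded in many more inputs per plan, which is what pushed its typical response time to roughly 3 s per request.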
The second service, called S2, was responsible for returning an existing payment plan and its corresponding set of payments. This service received the unique ID of a payment plan stored in a relational database and returned the complete information of the payment plan and its set of payments (the service used to store payment plans was not considered). The typical response time of S2 was around 300 milliseconds, and it made intensive use of database queries.
Below, we describe the three architectures that were
defined to develop the application:
3.1 Monolithic architecture
In order to develop the application using a typical mono-
lithic approach, we kept in mind that it should have a single
codebase and that it should be developed using an MVC
web application framework such as JEE, .NET, Symfony,
Rails, Grails, Play, among others. Such frameworks enable
the development of three-tier applications, and provide dif-
ferent tools and libraries to develop the presentation, business
and persistence layers. The architecture of a monolithic appli-
cation is illustrated in Fig. 1.
At the presentation tier, the application servers generally
send the required static assets (HTML, CSS, and JavaScript)
Fig. 1 Monolithic architecture
Fig. 2 Deployment of the monolithic architecture
and dynamic data to the browsers; however, the application
may implement a front-end MVC framework in the browser,
such as Angular.js [24] or Backbone.js. In this case, most
static assets are downloaded by browsers in the first request,
and subsequent requests to web servers are performed by
invoking REST services using JavaScript Object Notation
(JSON), a lightweight data-interchange format. Given that the latter approach offloads work from the web servers, because HTML, CSS, and JavaScript are executed inside the browser, we decided to use it in the monolithic application.
In this architecture, the web application publishes the two
services as REST services over the Internet, and they are
consumed by the MVC front-end application executed in the
browser. This approach can be deployed in a single-server
environment, where the scalability is limited to the com-
puting resources of one server, or it can be deployed in a
multi-server environment. As proposed by Bass et al. [23],
in multi-server environments, a load balancer is required
to distribute the load among multiple applications servers.
Additionally, several web servers are deployed with the appli-
cation codebase and a relational database is used to store
information. The deployment of the monolithic architecture in a multi-server environment is shown in Fig. 2; here, scalability is limited to the computing resources of the server cluster, in which several instances of the same type are used, allowing the application to scale linearly (1X, 2X, 3X, etc.).
Strategies to scale monolithic applications at more granular levels (1.1X, 1.2X, 1.3X, etc.) could be achieved using different instance types; however, these strategies are difficult to manage and scale because, for example, an AWS Auto Scaling group only allows automatically adding or removing cluster instances of the same type. For this reason, such scaling strategies were not considered in this paper.
3.2 Microservice architecture operated by the cloud customer
In order to use a microservice architecture, the first task is to decide how many microservices will be implemented. For simplicity, in this case study we selected two microservices (µS1 and µS2), one for each service of the monolithic application. Each microservice may be developed as an independent three-tier application using a different technological stack. The microservice architecture proposed for the application is illustrated in Fig. 3. At the presentation layer, both microservices expose their main service (S1 and S2) using REST over a private network. The service exposed by each microservice is consumed by the gateway. Given that µS1 only generates new payment plans without storing information, it does not need a persistence layer. In contrast, µS2 returns the complete information of a payment plan saved in the relational database and therefore requires persistence.
Fig. 3 Microservice architecture
The gateway was developed as a lightweight web application that receives requests from end-users (browsers) through the Internet, consumes the private services offered by the microservices (µS1 and µS2) through REST, gets the results from the microservices, and returns them to the end-users. In this architecture, the gateway publishes the two services as REST services over the Internet, and they are consumed by the MVC front-end application executed in the browser. The message interchange format used between browsers and the gateway, and between the gateway and each microservice, is JSON. The gateway does not store any information, so it does not need a persistence layer.
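The gateway's pass-through role can be sketched as follows. The HTTP client is injected as a parameter so the routing logic can be shown (and exercised) without a network; the service names and endpoint paths are hypothetical, and the study's actual gateway was a Play/Java application rather than Node.js:

```javascript
// Minimal gateway routing sketch. The HTTP client is injected so the
// logic can be exercised without a network; endpoint paths, service
// names, and parameters are hypothetical.
function createGateway(callService) {
  return {
    // Public endpoint: generate a payment plan via the private µS1 service.
    generatePlan: function (params) {
      return callService("uS1", "/payment-plans", params);
    },
    // Public endpoint: fetch a stored plan by ID via the private µS2 service.
    getPlan: function (id) {
      return callService("uS2", "/payment-plans/" + id, null);
    }
  };
}
```

Injecting the client also mirrors how a gateway stays free of business logic: it only maps public endpoints onto private microservice calls and relays the JSON results.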
The microservice architecture can be deployed in a cloud solution using the deployment illustrated in Fig. 4. In this architecture, the gateway and each microservice can be scaled independently: the gateway, µS1, and µS2 are each deployed behind their own load balancer with several web servers, and µS2 additionally uses a relational database.
3.3 Microservice architecture operated by AWS Lambda
For the microservice architecture operated by the cloud provider, we selected AWS and its AWS Lambda service, because it was the first cloud service specifically designed to run microservices. As mentioned in Sect. 7, other cloud providers have since started to offer similar services.
Fig. 4 Deployment of the microservice architecture
Functions are the unit of execution/development in AWS Lambda, and they can be developed in Java 8, Python, or Node.js. In the context of a web application, a function is a REST/JSON service. Accordingly, each microservice (µS1 and µS2) can be implemented as a function that exposes a REST service, which is consumed by gateway functions. In contrast with the microservice architecture, where both gateway services were implemented in the same web application, in AWS Lambda the two gateway services must be implemented as two independent functions that receive requests from end-users (browsers) through the Internet, consume microservice functions through REST, get the results from the microservice functions, and return them to the end-users. Each gateway function publishes a REST service over the Internet, which is consumed by the MVC front-end application. Although we separate the functions into gateway and microservice functions according to their role, to facilitate the comparison with the microservice architecture operated by the cloud customer, in AWS Lambda every function is a microservice, so gateway functions must also be considered microservices.
The AWS Lambda architecture is shown in Fig. 5. The message interchange format used between browsers and gateway functions, and between gateway functions and microservice functions, is JSON. The gateway functions and the µS1 function do not store any information, so they do not need a persistence layer, while the µS2 function does require persistence. Given that the MVC front-end application must be retrieved by browsers when users access the application, it is stored in a blob storage system capable of serving HTTP/HTTPS requests. When a user accesses the application, the static assets (HTML, CSS, and JavaScript) are downloaded to the web browser from the blob storage system in the first request, and subsequent requests are sent to the gateway functions through REST. The blob storage system was not required by the monolithic architecture or by the microservice architecture operated by the cloud customer, because in those architectures the MVC front-end application is retrieved from the web servers and the gateway, respectively.
Fig. 5 Deployment of the AWS Lambda architecture
The three architectures were developed by the same development team. In a real scenario, the monolithic application would be developed by two teams: one developing the web application and another the front-end application. The microservice architecture operated by the cloud customer and the microservice architecture operated by AWS Lambda would each be developed by four small teams: one developing the gateway, another the µS1 microservice, another the µS2 microservice, and a last one the front-end application.
4 Implementation
We implemented the monolithic architecture with two technological stacks in order to compare them and obtain a baseline performance. The selected stacks were the Play web framework [24] with Java (Play applications can be developed using Java or Scala) and Jax-RS [25]. These frameworks were chosen because they provide a lightweight, stateless, and cloud-friendly architecture. Play and Jax-RS applications are executed in embedded servers (Netty and Jetty, respectively), which are designed to start within seconds and to consume few computing resources.
It is important to highlight that the monolithic and microservice architectures may be implemented using other back-end frameworks, such as JEE, .NET, Symfony, Rails, or Grails, or front-end frameworks such as Backbone.js or Ember.js; nonetheless, the goal of this work is to compare how infrastructure costs are affected by each architecture. Implementing microservice architectures with other frameworks may change some technical details; however, one of the benefits of microservices is the ability to use multiple technological stacks, so the architectures, deployments, and results presented in this paper can be used as a reference to implement microservices or gateways in other frameworks.
Given that gateways in microservice architectures mainly send REST requests to microservices, it is important that they use non-blocking I/O REST libraries [26], so that their threads do not block while waiting for responses from the microservices. The Play web framework implements non-blocking mechanisms in its service stack (unlike Jax-RS); this is why the microservice architecture operated by the cloud customer was implemented using Play. The AWS Lambda architecture was implemented by developing functions in Node.js [27], one of the programming languages supported by AWS Lambda.
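The benefit of non-blocking I/O in a gateway can be demonstrated with a small simulation; simulated delays stand in for microservice calls, so this illustrates the concept rather than the Play implementation:

```javascript
// Why non-blocking I/O matters in a gateway: two upstream calls made
// sequentially take the sum of their latencies, while calls started
// together take roughly the maximum. Delays simulate microservice latency.
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

// Blocking style: wait for the first response before starting the second.
function sequential(callA, callB) {
  return callA().then(function (a) {
    return callB().then(function (b) { return [a, b]; });
  });
}

// Non-blocking style: both requests are in flight at once.
function parallel(callA, callB) {
  return Promise.all([callA(), callB()]);
}
```

With two 50 ms upstream calls, the parallel version finishes in roughly the latency of the slowest call, while the sequential version pays the sum; a blocked gateway thread is also unavailable to serve other requests, which is the main scalability cost.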
4.1 Monolithic architecture development
The monolithic architecture in Play/Java was implemented as two independent applications:
- Web application. Developed using Play 2.2.2, Scala 2.10.2, and Java 1.7.0. The relational database was PostgreSQL 9.3.6, and the object-relational mapping (ORM) framework was Ebeans.
- Front-end application. Developed using Angular.js 1.3.14 (HTML5, CSS, and jQuery), chosen due to its support by Google.
The monolithic architecture in Jax-RS/Java was implemented as two independent applications:
- Web application. Developed using JAX-RS (Jersey 1.8) and Java 1.7.0. The relational database was PostgreSQL 9.3.6, and the ORM was JPA 2.0/Hibernate.
- Front-end application. Developed using Angular.js 1.3.14 (HTML5, CSS, and jQuery).
4.2 Microservice architecture development
The microservice architecture operated by the cloud customer was implemented as four independent applications:
- Microservice µS1 application. This application was developed using Play 2.2.2, Scala 2.10.2, and Java 1.7.0.
- Microservice µS2 application. This application was developed using Play 2.2.2, Scala 2.10.2, and Java 1.7.0. The relational database used was PostgreSQL 9.3.6, and the ORM layer was Ebeans.
- Gateway application. This application was developed using Play 2.2.2, Scala 2.10.2, and Java 1.7.0.
- Front-end application. This application was developed using Angular.js 1.3.14 (HTML5, CSS, and jQuery).
4.3 AWS Lambda architecture development
The microservice architecture operated by AWS Lambda was implemented as four independent functions of the microservice-http-endpoint type and an MVC front-end application:
- Microservice µS1 function. This function was developed using Node.js 0.10.25.
- Microservice µS2 function. This function was developed using Node.js 0.10.25. The relational database used was PostgreSQL 9.3.6, and access to the database was implemented with the pg npm module.
- Gateway S1 function. This function was developed using Node.js 0.10.25.
- Gateway S2 function. This function was developed using Node.js 0.10.25.
- Front-end application. This application was developed using Angular.js 1.3.14 (HTML5, CSS, and jQuery).
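To make the structure of these functions concrete, the sketch below shows what a microservice-http-endpoint style handler of that era could look like in Node.js; the event fields, response body, and handler name are illustrative assumptions, not code from the paper.

```javascript
// Hypothetical sketch of a microservice-http-endpoint handler, using the
// callback-style programming model of the Node.js 0.10 Lambda runtime.
// The event shape and response body are illustrative only.
var handler = function (event, context) {
  // A µS2-style function would query PostgreSQL here (e.g. via the pg
  // module); this sketch simply echoes the requested operation.
  var response = {
    service: 'S1',
    operation: event.operation || 'default',
    status: 'ok'
  };
  context.succeed(response); // return the JSON body to the HTTP endpoint
};

exports.handler = handler;
```

Because each function is packaged and deployed independently, the same skeleton is repeated for the µS1, µS2, and gateway functions, with only the body changing.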
5 Deployment in a cloud computing infrastructure
In order to compare the infrastructure costs of running each architecture, the three architecture implementations were deployed in AWS. We started the comparison by defining a baseline architecture for the monolithic architecture. Then, we executed the performance tests described in Sect. 6, in order to calculate the number of requests per minute that the monolithic architecture supported on both stacks (Play and Jax-RS). Based on that performance, we defined a similar infrastructure for the deployment of the microservice architecture, with the goal of supporting a similar number of requests per minute. Finally, we executed individual tests to identify the best configuration for executing each function in AWS Lambda. We describe the deployment and infrastructure services used for each architecture below.
5.1 Monolithic architecture deployment
The monolithic architecture in Play and Jax-RS was deployed
as shown in Fig. 6. The two applications—in Play and Jax-
RS—were deployed as follows:
- Web application. Each web application, in Play and in Jax-RS, was deployed on four Elastic Compute Cloud (EC2) instances of the c4.large type (2 vCPUs, 8 ECUs, and 3.75 GB RAM), using the Netty web server 3.7.0 and the Jetty web server 7.6.0, respectively. For the PostgreSQL database, we used the AWS Relational Database Service (RDS) with Single-AZ on a db.m3.medium instance type (1 vCPU and 3.75 GB RAM). Load balancing among the multiple web servers was configured with the Elastic Load Balancer (ELB), a service provided by AWS. Other instance types could have been used; however, performance comparisons across instance types are beyond the scope of this paper. The services of the web application were exposed through the Internet.
- Front-end application. The static files of the Angular.js application (views, models, and controllers) were stored on the Netty and Jetty web servers. When a user accesses the web application, the Angular.js assets are downloaded to the browser from the web server in the first request; the REST services exposed by the web server are then consumed from the Angular.js application.

Fig. 6 Deployment of the monolithic architecture on Amazon Web Services
5.2 Microservice architecture deployment
The microservice architecture operated by the cloud cus-
tomer was deployed as illustrated in Fig. 7. The four
applications were deployed as follows:
- Microservice µS1 application. This Play web application was deployed on three EC2 instances of the c4.large type (2 vCPUs, 8 ECUs, and 3.75 GB RAM), using the Netty web server 3.7.0. An ELB was used to balance the load among the multiple web servers. The REST services exposed by the µS1 microservice were configured to be accessible only by the gateway.
- Microservice µS2 application. This Play web application was deployed on one EC2 instance of the t2.small type (1 vCPU, variable ECUs, and 2.0 GB RAM), using the Netty web server 3.7.0. The PostgreSQL database was deployed in RDS on a db.m3.medium instance type (1 vCPU and 3.75 GB RAM). The REST service exposed by the µS2 microservice was also configured to be accessible only by the gateway.
- Gateway application. This Play web application was deployed on an EC2 instance of the m3.medium type (1 vCPU, 3 ECUs, and 3.75 GB RAM), using the Netty web server 3.7.0. The REST services of the gateway were exposed through the Internet.
- Front-end application. As in the monolithic application, the static files of the Angular.js application were stored on the gateway. When a user accesses the web application, the Angular.js assets are downloaded to the browser from the gateway in the first request; the REST services exposed by the gateway are then consumed from the Angular.js application.

Fig. 7 Deployment of the microservice architecture on Amazon Web Services
5.3 Deployment of the AWS Lambda architecture

The microservice architecture operated by AWS Lambda was deployed as shown in Fig. 8. The four independent microservice-http-endpoint functions and the MVC front-end application were deployed as follows:
- Microservice µS1 function. This function was deployed with 512 MB of RAM and was accessible only from the gateway functions.
- Microservice µS2 function. This function was deployed with 320 MB of RAM and was accessible only from the gateway functions. The PostgreSQL database was deployed in RDS on a db.m3.medium instance type (1 vCPU and 3.75 GB RAM).
- Gateway S1 function. This function was deployed with 512 MB of RAM and was exposed through the Internet.
- Gateway S2 function. This function was deployed with 320 MB of RAM and was exposed through the Internet.
- Front-end application. The static files of the Angular.js application were stored in a Simple Storage Service (S3) bucket provided by AWS. When a user accesses the web application, the Angular.js assets are downloaded to the browser from S3 in the first request; the REST services exposed by the gateway functions are then consumed from the Angular.js application.

Fig. 8 Deployment of the Amazon Web Services Lambda architecture
6 Tests and results
The development of the case study allowed us to compare the infrastructure costs and performance of each architecture by executing different stress tests. The corresponding results are described in this section.
6.1 Performance tests
In order to test and compare the performance and infrastructure costs of the three architectures, we defined three business scenarios. In the first scenario, 20% of the requests made to the web application consumed service S1 and 80% consumed service S2; we call this the 20/80 scenario. In the second scenario, each service received 50% of the requests; we therefore call it the 50/50 scenario. In the third scenario, called 80/20, 80% of the requests consumed service S1 and 20% consumed service S2.
To execute the stress tests for each architecture, we configured JMeter [28] 2.13 on an AWS c4.large EC2 instance, and we defined the maximum response times of S1 and S2 as 20,000 and 3,000 ms, respectively (their typical response times are 3,000 and 300 ms, respectively). In addition, JMeter was configured to execute a predefined number of requests per minute against services S1 and S2, simulating a constant workload according to the tested scenario over a 10-min run.
During the performance tests of the monolithic architecture and the microservice architecture operated by the cloud customer (Figs. 6, 7), the application servers (Netty/Jetty) were configured with a thread pool to handle several requests concurrently, sized according to the computing capabilities of the instances. In the performance tests of AWS Lambda (Fig. 8), this was not necessary because each request was executed in an isolated environment.
In order to test the performance of the monolithic architecture, we defined the base infrastructure shown in Fig. 6, whose monthly infrastructure cost of $403.20 USD is shown in Table 1. Costs associated with bandwidth, storage, and backups were not taken into account, as they were the same in the three architectures. All costs were calculated using the pricing list available in September 2015 for the AWS US East (N. Virginia) Region.
The stress tests were executed with the goal of identifying the maximum number of requests supported by the monolithic architecture in Play and Jax-RS. This number was calculated by increasing the number of requests for each scenario until the application began to generate errors or the response times defined for S1 and S2 were not met. The results of the performance tests executed for the monolithic architecture are shown in Fig. 9. These results show that, for the monolithic architecture, Jax-RS provides better performance than Play.
Based on the number of requests per minute supported by the monolithic architecture, we executed performance tests for the microservice architecture operated by the cloud customer, in order to identify the minimum infrastructure needed
Table 1 Infrastructure costs of the monolithic architecture on AWS

Service | Cost per hour (USD) | Quantity per month | Cost per month (USD)
Web application: EC2 instance c4.large | 0.110 | 720*4 | 316.80
Web application: RDS instance db.m3.medium with Single-AZ | 0.095 | 720*1 | 68.40
Web application: ELB instance | 0.025 | 720*1 | 18.00
Monthly infrastructure costs | | | 403.20
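The line items in Table 1 follow directly from the hourly prices over a 720-hour month; a minimal sketch of that arithmetic (the helper name is ours):

```javascript
// Monthly cost of each Table 1 line item: hourly price × 720 h × count.
var HOURS_PER_MONTH = 720;

function monthlyCost(pricePerHour, instanceCount) {
  return pricePerHour * HOURS_PER_MONTH * instanceCount;
}

var ec2 = monthlyCost(0.110, 4); // four c4.large web servers: $316.80
var rds = monthlyCost(0.095, 1); // db.m3.medium with Single-AZ: $68.40
var elb = monthlyCost(0.025, 1); // Elastic Load Balancer: $18.00
var total = ec2 + rds + elb;     // $403.20 per month
```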
Fig. 9 Performance results of the monolithic architecture in Play and Jax-RS on AWS
for supporting the same number of requests. The execution of these tests allowed us to identify the most suitable instance types for the µS1 microservice, the µS2 microservice, and the gateway. The identified infrastructure is illustrated in Fig. 7, and its monthly cost of $390.96 USD is given in Table 2.
The results of the performance tests for the microservice architecture are illustrated in Fig. 10. These results show that, in our case study, microservices can provide even better performance than Jax-RS does in the monolithic architecture, at a lower cost. The monthly cost of the microservice architecture was $390.96 USD, in contrast to the $403.20 USD of the monolithic architecture; furthermore, the former supported more requests per minute in the three defined scenarios.
Fig. 10 Performance results of the microservice architecture in Play on AWS
Similarly, the costs in AWS Lambda are based on the number of requests and the configuration of each executed microservice/gateway function. Therefore, the performance tests and cost estimation performed for the deployment shown in Fig. 8 were defined to support the same number of requests per minute as the microservice architecture. The costs involved during the performance tests in AWS Lambda to support that load are summarized in Table 3. These results show that AWS Lambda supports the same number of requests per minute as the microservice architecture, at a lower cost in the three defined scenarios, with monthly costs of $193.81, $173.26, and $168.08,
Table 2 Infrastructure costs of the microservice architecture on AWS

Service | Cost per hour (USD) | Quantity per month | Cost per month (USD)
Microservice µS1: EC2 instance c4.large | 0.110 | 720*3 | 237.60
Microservice µS1: ELB instance | 0.025 | 720*1 | 18.00
Microservice µS2: EC2 instance t2.small | 0.026 | 720*1 | 18.72
Microservice µS2: RDS instance db.m3.medium with Single-AZ | 0.095 | 720*1 | 68.40
Gateway: EC2 instance m3.medium | 0.067 | 720*1 | 48.24
Monthly infrastructure costs | | | 390.96
Table 3 Infrastructure costs of the AWS Lambda architecture

Description | Scenario 20/80 | Scenario 50/50 | Scenario 80/20
Total number of requests per minute | 450 | 180 | 112
µS1 and Gateway S1 functions: requests per minute | 90 | 90 | 90
µS1 and Gateway S1 functions: memory configuration (MB) | 512 | 512 | 512
µS1 and Gateway S1 functions: requests per month | 3,888,000 | 3,888,000 | 3,888,000
µS1 and Gateway S1 functions: response time per request (ms) | 3000 | 3000 | 3000
µS1 and Gateway S1 functions: total computing seconds | 11,664,000 | 11,664,000 | 11,664,000
µS1 and Gateway S1 functions: total computing (GB-s) | 5,832,000 | 5,832,000 | 5,832,000
µS2 and Gateway S2 functions: requests per minute | 360 | 90 | 22
µS2 and Gateway S2 functions: memory configuration (MB) | 320 | 320 | 320
µS2 and Gateway S2 functions: requests per month | 15,552,000 | 3,888,000 | 950,400
µS2 and Gateway S2 functions: response time per request (ms) | 300 | 300 | 300
µS2 and Gateway S2 functions: total computing seconds | 4,665,600 | 1,166,400 | 285,120
µS2 and Gateway S2 functions: total computing (GB-s) | 1,458,000 | 364,500 | 89,100
µS1 and Gateway S1 functions: cost per number of requests | $0.78 | $0.78 | $0.78
µS1 and Gateway S1 functions: cost per computing duration | $97.22 | $97.22 | $97.22
µS2 and Gateway S2 functions: cost per number of requests | $3.11 | $0.78 | $0.19
µS2 and Gateway S2 functions: cost per computing duration | $24.30 | $6.08 | $1.49
µS2 microservice: RDS instance db.m3.medium with Single-AZ | $68.40 | $68.40 | $68.40
Monthly costs (USD) | $193.81 | $173.26 | $168.08
respectively, while the monthly cost of the microservice architecture is $390.96.
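The request and compute charges in Table 3 can be reproduced from the September 2015 AWS Lambda rates of $0.20 per million requests and $0.00001667 per GB-second; the sketch below assumes those two rates and the paper's 43,200-minute month.

```javascript
// Reproduce Table 3's arithmetic for one function pair. Assumed rates
// (September 2015 AWS Lambda pricing): $0.20 per million requests and
// $0.00001667 per GB-second of compute.
var REQUEST_PRICE_PER_MILLION = 0.20;
var PRICE_PER_GB_SECOND = 0.00001667;
var MINUTES_PER_MONTH = 43200; // 60 * 24 * 30

function lambdaMonthlyCost(requestsPerMinute, memoryMB, durationSeconds) {
  var requestsPerMonth = requestsPerMinute * MINUTES_PER_MONTH;
  var gbSeconds = requestsPerMonth * durationSeconds * (memoryMB / 1024);
  return {
    requestsPerMonth: requestsPerMonth,
    gbSeconds: gbSeconds,
    requestCost: (requestsPerMonth / 1e6) * REQUEST_PRICE_PER_MILLION,
    computeCost: gbSeconds * PRICE_PER_GB_SECOND
  };
}

// µS1/Gateway S1 in every scenario: 90 req/min at 512 MB for 3 s
var s1 = lambdaMonthlyCost(90, 512, 3);
// µS2/Gateway S2 in the 20/80 scenario: 360 req/min at 320 MB for 0.3 s
var s2 = lambdaMonthlyCost(360, 320, 0.3);
```

Rounded to cents, s1 gives the $0.78 and $97.22 charges reported for the S1 functions, and s2 the $3.11 and $24.30 charges for the S2 functions in the 20/80 scenario; adding the $68.40 RDS instance yields the $193.81 monthly total.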
6.2 Cost comparison

Given that each architecture was deployed on a different infrastructure, we defined and calculated the metric Cost per Million of Requests (CMR) for each architecture in the three scenarios, in order to easily compare their execution costs. For each scenario and architecture, this metric was calculated by dividing the monthly infrastructure costs by the number of requests supported per month, obtained by multiplying the number of requests supported per minute by 43,200, the number of minutes in a month (60*24*30). We assumed a constant throughput per minute throughout the month. The CMR metric for each architecture is shown in Fig. 11.
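The CMR computation can be sketched as follows; the 400 requests-per-minute throughput is an illustrative figure, not one of the measured values from Fig. 9.

```javascript
// CMR: monthly infrastructure cost divided by requests served per month,
// scaled to one million requests.
var MINUTES_PER_MONTH = 43200; // 60 * 24 * 30

function costPerMillionRequests(monthlyCost, requestsPerMinute) {
  var requestsPerMonth = requestsPerMinute * MINUTES_PER_MONTH;
  return (monthlyCost / requestsPerMonth) * 1e6;
}

// Example: the monolithic deployment ($403.20/month) sustaining an
// assumed 400 requests per minute costs about $23.33 per million requests.
var cmr = costPerMillionRequests(403.20, 400);
```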
These results show that the microservice architecture implemented with Play can reduce costs by up to 9.50, 13.81, and 13.42% per scenario in contrast to the monolithic architecture implemented with Jax-RS. However, the use of services exclusively designed to implement microservices, such as the third architecture implemented on AWS Lambda, can reduce costs by up to 50.43, 55.69, and 57.01% per scenario in contrast to the microservice architecture; by up to 55.14, 61.81, and 62.78% per scenario in contrast to the monolithic architecture implemented in Jax-RS; and by up to 62.61, 77.08, and 70.23% per scenario in contrast to the Play monolithic architecture.

Fig. 11 Cost comparison of the three architectures per million of requests
6.3 Response time
To identify how each architecture affects the response time
to requests during peak periods, we measured the average
response time (ART) during the performance tests. The ART
for S1 and S2 are shown in Figs. 12 and 13, respectively.
The ART for S1 and S2 in the monolithic architectures with Play and Jax-RS is similar; however, in the microservice architecture, the ART for both services increases because each request must pass between the gateway and each microservice. For the microservice architecture operated by AWS Lambda, the ART remains very similar for both services in the three scenarios, even though it varied between scenarios for the other architectures. This result was obtained because each function/request in AWS Lambda is executed in an isolated computing environment dedicated to a single request, whereas in the other architectures each web server executes several concurrent requests.

Fig. 12 Average response time for S1 during peak periods

Fig. 13 Average response time for S2 during peak periods
6.4 Development methodology and efforts
With the development of the case study, we could validate that microservice architectures (including AWS Lambda) require a change in the way that most companies and software vendors have traditionally developed monolithic applications. Microservice architectures allow small teams to work on small applications (microservices) without worrying about how other microservices or teams work. Every team can use different technologies to implement microservices/gateways according to business and technical requirements, which makes the definition of company-level guidelines mandatory to avoid the use of technologies that can be difficult to manage. Consequently, the documentation of the REST services exposed by microservices/gateways gains importance, so that those services can be used by multiple teams.
During the development process, we also confirmed that when microservices are implemented, the development and deployment of several independent applications make the development and testing processes more complex; development and operations teams need to identify and resolve problems related to distributed systems that can affect several independent and decentralized services (e.g. failures, timeouts, distributed transactions, data federation, responsibility assignments, service versioning). Problems that are solved by application containers (e.g. JBoss, GlassFish, WebLogic, IIS) in monolithic applications, or by ESBs in SOA implementations, must be managed at the application level when considering microservices.
One of the problems identified while using AWS Lambda is that each function must be independently developed and deployed, which makes the per-function development and deployment processes difficult to manage. Projects such as Jaws [29] are trying to create tools that facilitate these processes. Additionally, services such as AWS Lambda are by nature tied to a particular vendor, so companies must evaluate the lock-in problems that they may face when using them in the short and long term.
6.5 Deployment, scaling, and continuous delivery
The deployment of a microservice architecture in AWS required the deployment of several independent applications (microservices and gateways), each one requiring specific configurations in the cloud infrastructure. When new gateway or microservice versions are published, it is easy to break external services, which highlights the importance of maintaining service versioning, planning among multiple teams, and defining the upgrade process so as to avoid breaking changes.
The use of ready-to-use services specifically designed for microservice deployment, such as AWS Lambda, helps save time and money on infrastructure management tasks, because part of the implementation of continuous delivery and DevOps (Development + Operations) strategies is assumed by the cloud provider. However, if microservices are implemented without this type of service, applying those strategies can be time-consuming, due to the repetitive execution of manual tasks in each deployment. In those cases, the use of automation tools is mandatory in order to save time and gain agility. To deploy and scale microservices and gateways, the use of lightweight/embedded servers, such as Netty or Jetty, which can be started in seconds, is recommended. These servers do not share state and can be added to or removed from clusters and load balancers at any time.
One of the concerns identified during the performance tests of the microservice architectures is the inability to monitor the flow of a request made by an end user through the gateway and multiple microservices; this continues to be a challenge. On the other hand, the greatest benefit of using microservices is the ability to scale each microservice/gateway independently using different policies. At a business level, this may represent large savings in IT infrastructure costs and a more efficient way to take advantage of the pay-per-use benefits of the cloud model.
6.6 Adoption, business culture, and guidelines
Based on the case study, microservice architectures should be employed in businesses that need to scale their applications to hundreds of thousands or millions of users. However, their implementation requires additional abilities that are not present in many companies. The adoption of microservice architectures requires a new culture of development and innovation that must be complemented by a set of guidelines and good practices at the company level. Accordingly, the adoption of microservices should be treated as a long-term business strategy and not as a single project, because it requires efforts and abilities that must be incrementally developed.
7 Discussion
This paper focuses on comparing the cost of developing and deploying the same application using three different architectures and deployment models, with the goal of identifying how different architectures and deployments affect the infrastructure costs of running and scaling an application in the cloud. In order to obtain valid results, we implemented three versions of the application and deployed each one using the appropriate services on AWS.
After developing, deploying, and testing the different architectures, we could determine and analyse the technical and cultural challenges involved in using microservice architectures. To estimate the costs of running each architecture, we experimented with different performance tests. We also defined the Cost per Million of Requests (CMR) metric to be able to compare the costs of the different architectures. During the performance tests, the ART metric allowed us to conclude that cloud services specifically designed to deploy microservices, such as AWS Lambda, can maintain the same response time even as the number of users increases.
7.1 Tests on other cloud providers
Although the efforts of this paper were focused on comparing costs on AWS, the process, architectures, and deployment models used can be replicated on other cloud providers. The architectures, experiments, metrics, and challenges could be used as a guide to test the same architectures elsewhere. For example, the three architectures presented in this paper, and their corresponding performance tests and cost comparison, could be implemented on other providers such as Windows Azure, Google Cloud, or IBM. The results of such tests could facilitate the cost comparison of running each architecture across different cloud providers, as well as help to identify the pros and cons that must be taken into account to deploy and scale each architecture on other cloud providers.
7.2 Emerging microservice/serverless cloud services
Although AWS Lambda was the first cloud service specifically designed to run microservices, the popularity of such services stems from their ability to scale applications without cloud customers having to face the challenges of running and scaling application servers, which has led other cloud providers to launch similar services.

Google Cloud launched the Alpha release of the Google Cloud Functions service [30], defined as a lightweight compute solution for developers to create single-purpose, stand-alone functions that respond to Cloud events without the need to manage a server or runtime environment. Windows Azure launched the Preview release of the Azure Functions service [31], defined as a serverless, event-driven experience that extends the existing Azure App Service platform; these nano-services can scale based on demand, and users pay only for the resources they consume. Finally, IBM launched the IBM Bluemix OpenWhisk service [32], described as a service to execute code on demand in a highly scalable environment without servers.
A comparison of the cost of running applications on those services would be quite interesting. Such a comparison requires the third application presented in this paper to be adapted to the requirements, development framework, and restrictions of each service.
7.3 Implementing microservices using serverless frameworks

It is very important to consider that applications developed and deployed on cloud services such as AWS Lambda are highly coupled to the solution of each provider; therefore, the lock-in when these solutions are used is very high. Some efforts, such as the Serverless Framework [33], are directed towards creating frameworks that facilitate the creation of microservices or serverless functions (another name given to this type of service), which can be easily deployed on services such as AWS Lambda, Google Cloud Functions, Azure Functions, or IBM Bluemix OpenWhisk.

The third application developed in this paper, specifically for AWS Lambda, could be redeveloped using the Serverless Framework to facilitate its deployment on the serverless services of other cloud providers. The same performance tests, cost comparisons, metrics, and results could be executed, identified, and analysed for each cloud provider. Tests on other cloud providers would help the research community and companies to easily compare the costs of running microservice applications across providers, and to identify other topics of interest that must be considered to deploy scalable applications on emerging serverless services. Tests on other cloud providers, as well as the use of the Serverless Framework, are proposed as future work.
8 Conclusion and future work
Based on the performance tests executed for a monolithic architecture, a microservice architecture operated by the cloud customer, and a microservice architecture operated by AWS Lambda, we can conclude that the use of emerging cloud services such as AWS Lambda, exclusively designed to deploy microservices at a more granular level (per HTTP request/function), allows companies to reduce their infrastructure costs by up to 77.08%. Microservices also enable large applications to be developed as a set of small applications that can be independently implemented and operated, thus allowing large codebases to be managed with a more practical methodology, where incremental improvements are executed by small teams on independent codebases. Agility, cost reduction, and granular scalability must be balanced against the development efforts, technical challenges, and costs incurred by companies, since microservices require the adoption of new practices, processes, and methodologies.
Furthermore, the case study allows us to conclude that, for applications with a small number of users (hundreds or thousands), the monolithic approach may be a more practical and faster way to start. In the reviewed practical cases, most applications using microservice architectures started as monolithic applications and were incrementally modified to implement microservices due to scaling problems at the infrastructure and team management levels.
As future work, we will evaluate the architectures with a greater number of services, with the goal of testing the impact of implementing other types of services, such as memory-, I/O-, and network-intensive services. We will also evaluate the costs of running microservices on serverless services from providers such as Google, Windows Azure, and IBM, as well as the costs incurred by companies during the process of implementing microservices.
The process of defining the number of microservices to implement, and the tools required to automate the deployment of microservices on general-purpose IaaS solutions and on services such as AWS Lambda, will also be evaluated. Tests evaluating other technical concerns regarding microservices, such as performance analyses across the different components involved, failure tolerance, distributed transactions, data distribution, service versioning, and microservice granularity, are also interesting research areas.
A cost and performance comparison for migrating legacy stateful monolithic applications to stateless cloud services such as AWS Lambda is another future work area, where topics such as the efforts and challenges of redesigning, rearchitecting, reimplementing, and redeploying the applications are very interesting to analyse.
References
1. Buyya R (2010) Cloud computing: the next revolution in informa-
tion technology. In: 2010 1st international conference on parallel
distributed and grid computing (PDGC), pp 2–3
2. Vosshall P (2008) Web scale computing: the power of infrastruc-
ture as a service. In: Bouguettaya A, Krueger I, Margaria T (eds)
Service-oriented computing ICSOC 2008. Lecture notes in com-
puter science, vol 5364. Springer, Heidelberg, pp 1–1
3. Beimborn D, Miletzki T, Wenzel S (2011) Platform as a service
(PaaS). Bus Inf Syst Eng 3(6):381–384
4. Schtz S, Kude T, Popp K (2013) The impact of software-as-a-
service on software ecosystems. In: Herzwurm G, Margaria T (eds)
Software business. From physical products to software services
and solutions. Lecture notes in business information processing,
vol 150. Springer, Berlin, pp 130–140
5. Lewis J, Fowler M (2014) Microservices. http://martinfowler.com/
articles/microservices.html. Accessed 23 Apr 2017
6. Amazon Web Services (2015) AWS Lambda. https://aws.amazon.
com/lambda/. Accessed 23 Apr 2017
7. McGovern J, Sims O, Jain A, Little M (2006) Understanding
service-oriented architecture. In: Enterprise service oriented archi-
tectures. Springer Netherlands, pp 1–48. doi:10.1007/1-4020-
3705-8_1
8. La H, Bae J, Chang S, Kim S (2007) Practical methods for adapting
services using enterprise service bus. In: Baresi L, Fraternali P,
Houben G-J (eds) Web engineering. Lecture notes in computer
science, vol 4607. Springer, Berlin, pp 53–58
9. Papazoglou M, Traverso P, Dustdar S, Leymann F (2007) Service-
oriented computing: state of the art and research challenges.
Computer 40:38–45
10. Hutchinson J, Kotonya G, Walkerdine J, Sawyer P, Dobson G,
Onditi V (2007) Evolving existing systems to service-oriented
architectures: perspective and challenges. In: IEEE international
conference on web services 2007, ICWS 2007, pp 896–903
11. GIGAOM (2011) The biggest thing Amazon got right: the
platform. https://gigaom.com/2011/10/12/419-the- biggest-thing-
amazon-got-right-the- platform/. Accessed 23 Apr 2017
12. Nginx (2015) Adopting microservices at Netflix: lessons for archi-
tectural design. http://nginx.com/blog/ microservices-at- netflix-
architectural-best-practices/. Accessed 23 Apr 2017
13. InfoQ (2014) Scaling Gilt: from monolithic ruby application to dis-
tributed scala micro-services architecture. http://www.infoq.com/
presentations/scale-gilt. Accessed 23 Apr 2017
14. InfoQ (2015) From a monolith to microservices + REST: the evo-
lution of LinkedIn’s service architecture. http://www.infoq.com/
presentations/linkedin-microservices- urn. Accessed 23 Apr 2017
15. SoundCloud (2014) Building products at SoundCloud—Part I:
Dealing with the monolith. https://developers.soundcloud.com/
blog/building-products- at-soundcloud-part- 1-dealing-with-the-
monolith. Accessed 23 Apr 2017
16. Thones J (2015) Microservices. IEEE Softw 32:116
17. InfoQ (2014) Microservices and SOA. http://www.infoq.com/
news/2014/03/microservices-soa. Accessed 23 Apr 2017
18. Oracle (2015) Microservices and SOA. http://www.oracle.com/
technetwork/issue-archive/2015/15-mar/o25architect-2458702.
html. Accessed 23 Apr 2017
123
SOCA (2017) 11:233–247 247
19. TechTarget (2015) How microservices bring agility to SOA. http://searchcloudapplications.techtarget.com/feature/How-microservices-bring-agility-to-SOA. Accessed 23 Apr 2017
20. Microservices (2014) Pattern: microservices architecture. http://microservices.io/patterns/microservices.html. Accessed 23 Apr 2017
21. Vinoski S (2007) REST eye for the SOA guy. IEEE Internet Computing 11:82–84
22. Nemeth F, Steinert R, Kreuger P, Skoldstrom P (2015) Roles of DevOps tools in an automated, dynamic service creation architecture. In: 2015 IFIP/IEEE international symposium on integrated network management (IM), pp 1153–1154
23. Bass L, Clements P, Kazman R (2012) Software architecture in practice, 3rd edn. Addison-Wesley Professional, Boston
24. Hunt J (ed) (2014) Play framework. In: A beginner’s guide to Scala, object orientation and functional programming. Springer, Berlin, pp 413–428
25. Juneau J (ed) (2013) Building RESTful web services. In: Introducing Java EE 7. Apress, New York, pp 113–130
26. Venkatesan V, Chaarawi M, Gabriel E, Hoefler T (2011) Design and evaluation of nonblocking collective I/O operations. In: Cotronis Y, Danalis A, Nikolopoulos D, Dongarra J (eds) Recent advances in the message passing interface. Lecture notes in computer science, vol 6960. Springer, Berlin, pp 90–98
27. Doglio F (ed) (2015) Node.js and REST. In: Pro REST API development with Node.js. Apress, New York, pp 47–63
28. Rahmel D (ed) (2013) Testing a site with ApacheBench, JMeter, and Selenium. In: Advanced Joomla!. Apress, New York, pp 211–247
29. InfoWorld (2015) Jaws takes a bite out of AWS Lambda app deployment. http://www.infoworld.com/article/2990795/cloud-computing/jaws-takes-a-bite-out-of-aws-lambda-app-deployment.html. Accessed 23 Apr 2017
30. Google (2016) Google Cloud Functions. https://cloud.google.com/
functions/. Accessed 23 Apr 2017
31. Microsoft (2016) Azure Functions. https://azure.microsoft.com/
en-us/services/functions/. Accessed 23 Apr 2017
32. IBM (2016) IBM Bluemix OpenWhisk. http://www.ibm.com/
cloud-computing/bluemix/openwhisk. Accessed 23 Apr 2017
33. Serverless (2016) Serverless Framework. https://serverless.com/.
Accessed 23 Apr 2017