
Cloud Function Lifecycle Considerations for Portability in Function as a Service

Robin Hartauer, Johannes Manner and Guido Wirtz
Distributed Systems Group, University of Bamberg, Germany
{johannes.manner, guido.wirtz}
Keywords: Function as a Service, Serverless Computing, Portability, Function Lifecycle
Abstract: Portability is an important property to assess the quality of software. In cloud environments, where functions
and other services are hosted by providers with proprietary interfaces, vendor lock-in is a typical problem.
In this paper, we investigated the portability situation for public Function as a Service (FaaS) offerings based on
six metrics. We designed our research to address the software lifecycle of a cloud function during implementation, packaging and deployment. For a small use case, we derived a portability-optimized implementation.
Via this empirical investigation and a prototypical migration from AWS Lambda to Azure Function and from
AWS Lambda to Google Cloud Function respectively, we were able to reduce writing source code in the latter
case by a factor of 17 measured on a Lines of Code (LOC) basis. We found that the default zip packaging
option is still the favored approach at Function as a Service (FaaS) platforms. For deploying our functions to
the cloud service provider, we used Infrastructure as Code (IaC) tools. For cloud function only deployments
the Serverless Framework is the best option whereas Terraform supports developers for mixed deployments
where cloud functions and dependent services like databases are deployed at once.
1 Introduction

In 1976 already, BOEHM and others stated that portability of an application is one of eleven properties for describing the quality of software (Boehm
et al., 1976). An ISO/IEC/IEEE standard for systems
and software engineering vocabulary defined porta-
bility as “the ease with which a system or compo-
nent can be transferred from one hardware or soft-
ware environment [...] with little or no modifica-
tion” (ISO/IEC/IEEE, 2010, p. 261). This notion of
portability is the same for applications hosted in the
cloud. It is especially important that users are able
to switch between vendors based on their quality of
services and their performance (Gonidis et al., 2013).
Efforts to enable portability are already present in
the community. One example are cloud function trig-
gers. These triggers are events from different sources
like databases. When a new entry is created, an event
is created which triggers the execution of the cloud
function. The structure of these events are propri-
etary and specified by the service provider, in our
case a public cloud service provider. CloudEvents1,
a Cloud Native Computing Foundation (CNCF) incu-
bator project, tries to solve the interoperability2issue.
This standard tries to build a foundation for communi-
cation across vendors when invoking cloud functions.
We will not address the interoperability issue in this
work since we agree with KOLB (Kolb, 2019) that interoperability and portability are not synonyms despite their interchangeable usage in research and industry. FISCHER and others (Fischer et al., 2013) support this assessment. They see interoperability from a communication perspective compared to a deployment perspective on portability.
Since portability for Platform as a Service (PaaS)
has already been investigated in detail (Kolb, 2019),
we want to suggest first steps towards portability in
FaaS as well as mitigation strategies to avoid a ven-
dor lock-in. However, in practice, a single function
is rarely used on its own. Dependent services like databases, gateways and messaging systems are needed to build cloud applications as well. These services increase the risk for an ecosystem lock-in at the selected vendor. This was already identified as a risk for building cloud (Armbrust et al., 2010) and especially serverless applications (Baldini et al., 2017) by having the fast changing “API jungle” (van Eyk et al., 2018) in mind.

(Interoperability: “the capability to communicate, execute programs, and transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units” (ISO/IEC/IEEE, 2010, p. 186).)

- - - PREPRINT - - -
Additionally, we want to extend our scope to consider the software lifecycle rather than
the development of a single function only. Initial de-
velopment, evolution, service deployment and opera-
tion, closedown and phaseout are the five steps of the
software lifecycle as defined by RAJLICH and BENNETT (Rajlich and Bennett, 2000). Since portability
considerations only influence the first three phases,
we will exclude the latter two. We combine the first
two phases in implementation aspects and split servic-
ing into packaging and deployment. Each of our re-
search questions is related to one of these three steps:
RQ1: (1) How can the portability of a serverless
application be assessed based on code properties
and metrics? (2) How can the challenges of porta-
bility be solved when considering dependencies
on other services like in-function database calls?
RQ2: (1) Which packaging options exist for
cloud function provisioning? (2) Does the chosen
packaging option increase or decrease portability
for a cloud function?
RQ3: (1) Which deployment options exist for
cloud functions? (2) Does the chosen deployment
option increase or decrease portability for a cloud
function? (3) How can Infrastructure as Code so-
lutions support developers to provide cloud func-
tions with and without dependent services on dif-
ferent platforms?
To answer our research questions, we state related
work in Section 2. For deployment, we chose the
three public cloud providers Amazon Web Services
(AWS), Azure Cloud (AZC) and Google Cloud Plat-
form (GCP). As a baseline, we use AWS and migrate
our functions and the dependent services to AZC and
GCP as suggested by (Yussupov et al., 2019). Un-
like their approach, we already had the migration in
mind and looked at potential portability weaknesses
during experiment design. We discuss our experimen-
tal setup in Section 3, where we list the technologies
and tools used. In Sections 4, 5 and 6, we discuss
the research questions in order. At the beginning of
each of these Sections, we describe the situation and
the challenges we faced. We then propose different
implementation solutions and define dimensions on
which to compare the solutions. In Section 7, we dis-
cuss our results and point out some threats to validity
introduced by our methodology and experiment. Fu-
ture work concludes the paper.
2 Related Work

Some authors identified the need to have a uniform interface to finally solve portability issues in FaaS (Jonas et al., 2019). They argue that an abstraction introduced by Knative via containerization and Kubernetes could be a solution which is also
pushed by Google’s cloud platform. Another interesting work was done by YUSSUPOV and others (Yussupov et al., 2019), who
implemented four typical FaaS use cases on AWS.
Then they migrated these applications to AZC and
IBM cloud where they faced different types of lock-
ins. To categorize these lock-ins, they used a scheme
according to HOHPE. The assessment was based on
a binary (yes/no) decision to determine whether some
aspects need to be changed. These include, for exam-
ple, the programming language if the target platform
of the migration does not offer the initial language.
Furthermore, they included a mapping of typical
application components at different providers which
helps to find the corresponding service at the target
platform. This experience report on the challenges
of an “unplanned migration” (Yussupov et al., 2019)
is an interesting contribution, but lacks details like
the Lines of Code (LOC) metric, and the compari-
son between the providers. In addition, YUSSUPOV
and others (Yussupov et al., 2020) asked the question
whether and to which extent an application is portable
to another platform. To investigate this, they trans-
formed a serverless application to a provider agnostic
model using established frameworks like the Cloud-
Application-Management Framework (CAMF) (An-
toniades et al., 2015) or Topology and Orchestration
Specification for Cloud Applications (TOSCA) (Binz
et al., 2013).
In edge use cases, portability is even more impor-
tant since the hardware used is more heterogeneous.
Early works deal with these issues by suggesting a
FaaS platform based on WebAssembly (Hall and Ramachandran, 2019) or using Linux containers, i.e. cgroups and namespaces capabilities, to allow for a
common interface for edge and cloud elements to en-
able also shifting of components (Wolski et al., 2019).
3 Experimental Setup

To demonstrate the challenges of migrating source code of a cloud function from one platform to another, we created a small prototype. We implemented a
function of a typical user management service con-
sisting of two parts: First, the function can create
users and store them in a database. Secondly, it can
retrieve the user by checking the credentials inserted
via a login. We keep our function simple since the fo-
cus of this work lies on looking at interfaces and the
lifecycle, but nevertheless this single, simple function
already reveals the most important challenges. During
implementation we focused on the portability chal-
lenges described above. For a better comparison of
the lifecycle tasks, we investigated each task in iso-
lation. We developed the implementations for each
platform in parallel and used the gained knowledge
to understand similarities and differences. This en-
abled us to quantify code changes and find best prac-
tices for every task. For the implementation we used
NodeJS 14, TypeScript, ExpressJS, the Serverless Framework and Terraform.
4 Implementation

4.1 Challenges, Properties and Metrics
During the implementation phase of our function, we
found several aspects to consider when tackling porta-
bility problems. We later relate these properties, see
Table 1, to metrics, see Table 2, and give an assess-
ment of the portability. The first property (P1) is the
handler implementation, which is different for each
platform and language. Listing 1 shows examples for
JavaScript handler interfaces for the three providers.
There, we see a difference in the order of parame-
ters and the data they contain. Google Cloud Func-
tions (GCF) uses the ExpressJS abstraction. AWS
Lambda and Azure Functions (AZF) have custom for-
mats for their handler abstraction. Additionally, the
return values of the different platforms are handled
differently. At AWS, we have to configure an addi-
tional gateway (Amazon API Gateway) to handle the
incoming and outgoing traffic. This gateway also has
the option to add custom transformation rules, han-
dle security related features etc. We are aware that,
as YU SSUPOV and others (Yussupov et al., 2019) al-
ready stated, such a use of proprietary features hin-
ders the migration. Therefore, we did not use these
additional request and response capabilities at AWS.
Comparing the logging capabilities shows that the
providers use different logging frameworks. This
is the second property we want to consider (P2).
AWS Lambda and GCF use the JavaScript default
console.log() statement which writes the results
to the corresponding monitoring/logging service like
AWS CloudWatch. The logging data is stored on a
function and request basis. The same behavior can be
seen on Azure, when using context.log().
Listing 1: Handler Interfaces for the included Providers in JavaScript.

// GCF handler
exports.helloHttp = (req, res) => {
  console.log('Hello CLOSER');
  res.send('Hello CLOSER');
};

// AWS Lambda handler
exports.handler = async function (event, context) {
  console.log('Hello CLOSER');
  return context.logStreamName;
};

// AZF handler
module.exports = async function (context, req) {
  context.log('Hello CLOSER');
  return { body: 'Hello CLOSER' };
};
The third and last property (P3) is the access to
other services in the provider’s ecosystem. For every
investigated platform, custom SDKs and APIs com-
plicate portability. The three properties and their man-
ifestations at the three providers are summarized in
Table 1.
Table 1: Implementation Aspects assessed at the investigated Providers.

     AWS Lambda      AZF             GCF
P1   custom format   custom format   ExpressJS
P2   console.log     context.log     console.log
P3   native SDKs and APIs (all providers)
When migrating the function from one provider to
another, the number of source code changes are one
metric (M1) to consider as suggested by (Lenhard and
Table 2: Metrics for a Portability Assessment of the Cloud Function Lifecycle at different FaaS Providers.
Metric Description Section
M1 Source code changes based on LOC metric 4.3
M2 Number of different locations where source code has to be changed 4.3
M3 Number of steps in the packaging process 5
M4 Portability assessment of the deployment configuration 6.2
M5 Number of platform dependent configuration parameters 6.2
M6 Portability assessment of the deployment process and involved tooling 6.2
Wirtz, 2013). We measure M1 based on the LOC met-
ric which is easily quantifiable. In contrast, soft facts
like experience of developers, tool support etc. are
hard to quantify. Therefore, we include only the LOC
metric for an unbiased comparison. The number of
different locations where code needs to be altered is
another metric (M2) for assessing how error-prone the
migration might be.
The properties (P1-P3) and the metrics (M1 and
M2) give an answer to question RQ1.1. In the follow-
ing, we propose implementation improvements for
the three properties, which also positively influence
the metrics M1 and M2 as shown in the evaluation at
the end of this Section.
4.2 Prototypical Implementation
As stated before, we first implemented our cloud
function on AWS Lambda with portability in mind as
a baseline for our experiment. To harmonize the dif-
ferent function handler interfaces (P1), we adjusted
the AWS Lambda and AZF handler interface and
added another layer to conform to the request and re-
sponse handling based on the ExpressJS12 framework
as already done by GCF. We used a transformation
package to transform the incoming request to meet
the request/response interface. After this transforma-
tion we were able to use the generic ExpressJS request
handling. This enables a separation of business logic
and provider dependent logic through our middleware
layer. In addition, it allows us to test the business
functionality independently from the provider since
we can start the standalone ExpressJS application.
To handle the different logging-mechanisms (P2),
we used a logging interceptor. Because of this, ev-
ery console.log() statement will be translated to
the corresponding provider statement. This intercep-
tor was only used for AZF since the other platforms
already used the JavaScript standard.
To solve the last problem (P3) with different
vendor-specific services - in our case databases - we
used the Factory-Pattern (Ellis et al., 2007). This
pattern returns a provider specific object for database
access when a user requests it for the corresponding
ecosystem. This improves our metric M2 since the in-
ternal interface used by the business logic is not af-
fected by the migration and therefore changes are cen-
tralized. Only the factory method has to be extended
to work with the new database service when migrat-
ing to a new platform. For the database services,
we used Amazon DynamoDb, Azure Cosmos Db and
Google Firestore since they are all document oriented
databases. The corresponding database interface im-
plementation is selected based on an environment
variable at runtime. These implementation optimiza-
tions answer RQ1.2.
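The factory-based selection can be sketched like this; the class and function names are illustrative, and the real implementations would wrap the respective providers' SDK calls:

```javascript
// Illustrative sketch of the database factory: the business logic talks
// to one createUser interface; an environment variable selects the
// provider-specific implementation at runtime.
class DynamoDbUsers {
  async createUser(user) { /* AWS SDK call would go here */ return { provider: 'aws', user }; }
}
class CosmosDbUsers {
  async createUser(user) { /* Azure SDK call would go here */ return { provider: 'azure', user }; }
}
class FirestoreUsers {
  async createUser(user) { /* GCP SDK call would go here */ return { provider: 'gcp', user }; }
}

// Only this factory has to be extended when migrating to a new platform.
function userStoreFactory(provider = process.env.CLOUD_PROVIDER) {
  switch (provider) {
    case 'aws': return new DynamoDbUsers();
    case 'azure': return new CosmosDbUsers();
    case 'gcp': return new FirestoreUsers();
    default: throw new Error(`unknown provider: ${provider}`);
  }
}
```

The business code only ever calls userStoreFactory().createUser(...), so a migration touches the factory and the new implementation, but not the call sites.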
4.3 Implementation Evaluation
To evaluate the improvements suggested in the previ-
ous Section, we used the LOC metric. We evaluate
M1 in Table 3 by looking at the LOC measure for the migration from AWS to AZC as well as from AWS to GCP.

Table 3: Source Code Changes by Migrating the Function of our Use Case.

           Unoptimized    Optimized
AWS-AZC    +86 / -71      +80 / -9
AWS-GCP    +106 / -72     +62 / -10
In the unoptimized case none of the aforemen-
tioned improvements (handler wrapper, log intercep-
tor, database factory) are implemented. Both un-
optimized migrations show a lot of code changes,
added as well as removed LOC. This high number
of changes is due to platform dependent code. For the
optimized case, we see a lot of LOC additions since
it features all the changes described in Section 4.2.
When we assume that these middleware components
are in place already, the code additions for AWS to
AZC would be only +20/-6 LOC and +6/-8 for the
migration of AWS to GCP respectively.
M2 is hard to assess quantitatively for the unopti-
mized case since some platform specific features can-
not be easily compared with one another. The need to
have resource groups on AZC or releasing the API
on GCP showcases this problem. Furthermore, us-
ing native database accesses, which are spread in the code, leads to a situation where we have to adjust every database related call during migration. It is important to reduce the number of code locations to change, as achieved by our improvements and shown by the results in Table 3. When assessing M2 in the
optimized case, we only have three code locations to
change namely the abstract factory, the handler har-
monization and the new implementation for the cor-
responding database service. The fewer locations to
change, the less error-prone is the migration.
The presented results might be slightly different,
when starting with GCF as baseline. However, this
does not affect our findings about portability of func-
tions using our approach.
5 Packaging

After the implementation of our cloud function, the next step in the lifecycle is packaging. In our second research question (RQ2.1), we ask: Which options are offered for packaging an application? The
FaaS providers started their offerings by accepting zip
archives with the source code and the dependencies
needed to run the function.
One packaging alternative is an OCI compliant
image. AWS Lambda and AZF provide this option,
whereas GCF is currently not capable of running functions based on a custom image. This constrains the
portability of a containerized solution to GCF. How-
ever, GCP offers another service called Google Cloud
Run (GCR) where a user can run arbitrary images in
a serverless fashion. For comparison, we use this
equivalent offering for discussing the packaging op-
tion for our function at GCP.
Containerization is popular especially due to
Docker and frameworks like Kubernetes. The claim
“Build, Ship, Run, Any App Anywhere” and its
practical implications are the reason for the wide
adoption of this technology. Since containers allow
an abstraction from the platform and operating sys-
tem they are running on, their declarative descrip-
tion (in the Docker universe the Dockerfiles) faces
the same problems as function handlers. Due to the
design of the platforms, e.g. when using optimized
virtual machine monitors like Firecracker, the plat-
form providers restrict the number of potential base
images and publish a set of valid ones. The base im-
age problem at AWS Lambda and AZF is different
to the offering of GCR. GCR accepts all images by
running the functions on a Knative hosting as already
mentioned in related work (Jonas et al., 2019).
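For illustration, an image-packaged Lambda function built on one of the published base images might look like the following sketch; the file names are placeholders:

```dockerfile
# Sketch of an image-packaged function using an AWS-published base image.
# app.js and package.json are placeholder names for the function sources.
FROM public.ecr.aws/lambda/nodejs:14
COPY app.js package.json ${LAMBDA_TASK_ROOT}/
RUN npm install --production
CMD ["app.handler"]
```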
Table 4: Assessment for M3, from process portable to not portable, and (-) for not available.

        AWS Lambda   AZF        GCF        GCR
zip     portable     portable   portable   -
image   medium       low        -          portable
To assess the packaging of a cloud function, we
focus on the metric M3 which tries to quantify the
number of steps in the packaging process. Table 4
shows an assessment for M3 and gives an answer to
RQ2.2 which addresses the question, if the chosen
packaging option increases or decreases portability.
The zip packaging option enables easier portability
compared to the image option when the implemen-
tation challenges are addressed, as discussed in the
previous Section. Our metric M3 is zero in this case
since only a custom zip tool has to be used by the de-
veloper. There are differences for deployment, when
choosing zip or image packaging, but we only con-
sider the packaging process here.
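As an illustration, the zip packaging process for a NodeJS/TypeScript function can be as short as the following sketch; the build script and archive names are placeholders:

```shell
# Typical zip packaging for a NodeJS function: bundle the transpiled
# sources and production dependencies into one archive.
npm install --production
npm run build                 # assuming a TypeScript build script
zip -r function.zip dist/ node_modules/ package.json
```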
Image packaging, on the other hand, is not as
portable. AWS Lambda has a set of base images con-
figured which support the same runtimes available for
zip packaging. For AWS Lambda, M3 is only affected
by exchanging the base image to a platform compliant
one. For AZF, the user additionally has to configure
some environment parameters, at least AzureWebJobsScriptRoot and whether logging is enabled or not.
For this reason, we assess the portability worse com-
pared to AWS Lambda. In the GCP case, there is no
image support for GCF. The corresponding alterna-
tive can use any image for starting containers. There-
fore, portability is not hindered by choosing GCR.
Due to these aspects and in order to use only FaaS
offerings from the corresponding providers, we use
the zip packaging option for deployment in the next Section.

6 Deployment

6.1 Deployment Options
Before comparing different deployment solutions, we
want to describe the current suitable deployment op-
tions for our use case, see RQ3.1. However, this list
of options is not necessarily comprehensive. First
of all, there are provider related deployment tools as
listed in Table 5. These tools have their own syn-
tax and semantics for deploying functions and de-
pendent services like our database to the correspond-
ing ecosystem. Differences in the used configura-
tion syntax further complicate the migration from one
provider to another. Currently, the Google DeploymentManager does not offer the deployment of cloud functions, but it does offer database deployments. There-
fore, we had to use the gcloud CLI feature and the
DeploymentManager to deploy the function and the
corresponding database.
Table 5: Selection of Provider Deployment Tools and agnostic Solutions.

Tool                             Platform      Config
CloudFormation                   AWS           JSON, YAML
ResourceManager                  AZC           JSON
DeploymentManager + gcloud CLI   GCP           YAML
Serverless Framework             independent   YAML
Terraform                        independent   HCL, JSON
Besides these three tools (AWS CloudFormation,
Azure ResourceManager and Google Deployment-
Manager), there are provider independent tools. The
Serverless Framework with more than 40,000 GitHub
stars is a widely used option for deploying cloud func-
tions. It specifies an abstraction for the configuration
of the functions and transforms this provider indepen-
dent format for deployment in the provider dependent
formats. Since it is specialized for cloud functions,
dependent services like our database cannot be de-
ployed via the Serverless Framework. Furthermore,
it addresses only a single provider at a time. For
multi-cloud deployments, multiple independent de-
ployments need to be started.
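For illustration, a minimal serverless.yml describing the AWS variant of our function could look like the following sketch; service, handler and path names are placeholders:

```yaml
# Sketch of a Serverless Framework configuration for AWS; only the
# function itself is described here, the database would need the
# provider's native IaC tool.
service: user-service
provider:
  name: aws
  runtime: nodejs14.x
functions:
  login:
    handler: src/handler.login
    memorySize: 256
    events:
      - http:
          path: login
          method: post
```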
In our use case scenario, the most generic Infras-
tructure as Code (IaC) solution is Terraform. Despite
using a custom HCL syntax for describing compo-
nents, the tool is capable of deploying cloud functions
and dependent services as well as components to dif-
ferent providers within the same deployment process.
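As an illustrative sketch, a Terraform resource for the AWS variant of the function could look as follows; bucket, key and role names are placeholders:

```hcl
# Sketch of a Terraform resource for an AWS Lambda function; the same
# HCL syntax would also describe the database and the AZC/GCP variants,
# but with provider-specific attribute names.
resource "aws_lambda_function" "login" {
  function_name = "login"
  runtime       = "nodejs14.x"
  handler       = "handler.login"
  s3_bucket     = "my-deployment-bucket" # placeholder bucket
  s3_key        = "function.zip"
  memory_size   = 256
  role          = aws_iam_role.lambda_exec.arn
}
```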
6.2 Deployment Metrics
For the deployment assessment we use some metrics
introduced by (Kolb et al., 2015) for their use case of a
cloud to cloud deployment. We do not state the num-
ber of LOC changes here, since the different config-
uration syntaxes are more or less verbose and, there-
fore, hardly comparable. Instead we rate the metrics
on a scale from not portable to portable as
already done in Table 4. The first metric we want
to consider is whether the configuration of the de-
ployment is easily transferable based on the used syn-
tax (M4). It works on a more general level and assesses
whether changes occur in the tooling and its syntax
and semantics. Such changes hinder portability since
another tool increases the complexity and forces the
developer to learn a new syntax. The next metric,
M5, is the number of platform dependent configura-
tion parameters which need to be adapted from one
provider to another. For example a resource setting,
i.e. the memory setting, is one of the most important
options to choose. At AWS Lambda, the property is
called MemorySize, at AZF a specification is not pos-
sible due to the dynamic allocation of resources and
at GCF, the property is named memory. The last met-
ric (M6) assesses the number of different steps during
deployment when looking at the difference between
the source and target platform.
Table 6: Portability Assessment of Selected Tools, from not portable to portable.

                       M4       M5       M6
Provider IaC tools     low      low      low
Serverless Framework   medium   medium   medium
Terraform              high     low      high
Table 6 contains the information we need to an-
swer RQ3.2 and gives first insights into the portability
assessment of our selected IaC tools. In summary, the
providers’ native IaC tools lead to a vendor lock-in
and a low level of portability for all metrics, which is
not surprising due to the challenges identified in this
work. The situation is different for the provider inde-
pendent IaC tools. The first metric investigated for the
Serverless Framework is medium portable, since the
cloud functions can be defined in one configuration
format across platforms, but for dependent services
like the database, we need the provider’s native IaC
tool which leads to a mixed evaluation. Terraform al-
lows full portability with regard to this metric since
the HCL syntax is used for functions and dependent services.

Since the Serverless Framework’s scope is on
cloud function deployment at different providers, they
harmonize the configuration settings where possi-
ble. For example, they use memorySize as a key in
their deployment YAML for AWS Lambda and GCF,
whereas there is no corresponding entry for AZF.
We assess the portability of the configuration param-
eters as medium since there are some special settings
present for different platforms, which are part of the
framework adapters but cannot be ported that easily
to other platforms. In these situations, workarounds
need to be developed which hinder migrations. Ter-
raform applies a different concept for the platform de-
pendent parameters. Similar settings across different
providers are not harmonized and stay therefore plat-
form dependent, e.g. the storage location for the function is named s3_bucket for AWS Lambda, whereas GCF uses the two properties source_archive_bucket and source_archive_object. Therefore, the Terraform HCL is not reusable and has to be rewritten for every migration.

For the last metric M6 we look at starting the
deployment process and the commands and tools
needed. For the Serverless Framework, the situa-
tion is similar to M4, when dependent services are
deployed as well. For a cloud function only sce-
nario, the commands and the tools needed are iden-
tical, hence the solution is fully portable in this case.
For other scenarios with provider specific services, we
need the provider tools as well and, therefore, only
the Serverless Framework part is reusable for starting
the deployment. As already discussed, Terraform is
a provider agnostic tool and the command to deploy
and the steps to execute are the same for all providers.
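The provider agnostic workflow can be illustrated by the standard Terraform commands, which stay the same regardless of the targeted provider:

```shell
# The same Terraform workflow applies to every provider configuration:
terraform init      # download the required provider plugins
terraform plan      # preview the deployment
terraform apply     # deploy function, database and other resources
terraform destroy   # consistent undeployment
```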
7 Discussion

7.1 Discussion of the Results
Our research on cloud function lifecycle considerations for portability in FaaS revealed that it is im-
portant to also consider other aspects besides imple-
mentation. Since cloud functions are seldom used in
isolation, dependent services have to be considered at
the same time.
We first concentrated on the source code to de-
couple the logic and the provider specific interfaces
in Section 4. We especially focused on three as-
pects. We investigated the Cloud Function Handler
interfaces where we suggested the use of a transfor-
mation package to transform the incoming request
to meet a generic interface, in our case study the
ExpressJS-Framework interface, as well as using a
generic log interceptor. To solve the problem with
different vendor-specific database services, we used
the Factory-Pattern. We found that we are able to re-
duce the code changes to a minimum as stated in Ta-
ble 3 for metric M1. We reduced writing new code
measured on a LOC basis by a factor of 17 when
migrating from AWS to GCP, comparing the unoptimized case to the optimized case with the middleware components already in place.
For the packaging and deployment of cloud functions, we are, to the best of our knowledge, the first to consider these steps in an empirical evaluation. We used the default zip packaging option as well as built images from our functions. Due to some limitations, e.g. GCF not supporting containers and proprietary images, we recommend zip packaging (see Table 4 with
metric M3).
For deployment, we used the native provider IaC
framework like AWS CloudFormation and provider
independent frameworks like Terraform. We already
answered the research questions on the pros and cons
of the deployment tools in Table 6. Only RQ3.3 is
unanswered where we ask how IaC solutions support
developers to provide cloud functions with and with-
out dependent services. The answer is two-fold. For
situations where only functions should be deployed,
the Serverless Framework is favoured whereas for
mixed deployments with other services from the same
or a different provider’s ecosystem, Terraform offers
the better features. It is capable of combining the
packaging and the deployment step. Due to its holis-
tic approach, a consistent deployment and undeploy-
ment process can be guaranteed. The only downside when using platform independent tools is that provider-side improvements might not be available in the independent tools immediately, compared to native solutions.
7.2 Threats to Validity
The introduced properties and metrics used for our
comparison are not complete in a sense that all facets
of portability issues are addressed already. Neverthe-
less, since the body of knowledge in this area is small,
only a few publications in the FaaS area address porta-
bility at all (Yussupov et al., 2019; Yussupov et al.,
2020), this work empirically contributes to it. We are
aware of some threats to validity which might com-
promise the results when reproducing the research:
Selected Providers for Migration - We only used
a subset of public cloud provider offerings available.
The LOC metrics will be different when using other
providers or starting with another provider like AZF
or GCF. The challenges remain the same but addi-
tional challenges might arise. Furthermore, includ-
ing open source offerings using a Kubernetes abstraction might mitigate some of the identified drawbacks, like the handler interface.
Selected Programming Language - In our exper-
iment, we only used JavaScript for implementing our
function. The main reason was that JavaScript is well
supported for all providers used and does not intro-
duce inconsistency in the language dimension. There-
fore, as for the provider case, exact measures might
differ when repeating the experiment with another
language but the challenges like the native SDKs and
handler interfaces are the same.
8 Future Work

Our ideas for future work are threefold. First, the
identified challenges are similar to already known
integration problems, where different data formats
and interfaces need to be harmonized. Interesting
ideas are described in an abstract way by HOHPE and WOOLF (Hohpe and Woolf, 2003). When using
these abstract building blocks, the enterprise integra-
tion patterns, migrations would be a lot easier. Devel-
opers using these patterns understand their meaning
and how to implement them. Furthermore, a shared
open source tool box where these patterns are already
implemented - like in our factory example for the
databases - reduce migration efforts and finally the
portability issue.
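The factory idea can be sketched as follows (class and function names are illustrative, not our exact implementation): the provider-specific database client is created in exactly one place, so a migration only touches this single location.

```javascript
// Each adapter would wrap one provider's native SDK behind a common
// interface; the SDK calls are stubbed here for illustration.
class DynamoDbAdapter {
  save(item) {
    // A real implementation would call the AWS SDK here.
    return { provider: 'aws', item };
  }
}

class FirestoreAdapter {
  save(item) {
    // A real implementation would call the Google Cloud client library here.
    return { provider: 'gcp', item };
  }
}

// The single location that has to change when migrating between providers.
function createDatabase(provider) {
  switch (provider) {
    case 'aws': return new DynamoDbAdapter();
    case 'gcp': return new FirestoreAdapter();
    default: throw new Error(`unsupported provider: ${provider}`);
  }
}
```

Function code then depends only on `createDatabase` and the common `save` interface, which is the decoupling the enterprise integration patterns describe in a more general way.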
Based on the previous idea, established frameworks like CAMF or TOSCA could be used to describe applications and to annotate their components with metadata. Portable functions could then be obtained by selecting the previously mentioned building blocks based on the metadata specified at design time.
Lastly, open source platforms were not considered at all when talking about portability. Since many open source platforms like OpenFaaS or Knative use some sort of Kubernetes abstraction, the question would be whether this already solves the portability issues to some extent.
REFERENCES

Antoniades, D. et al. (2015). Enabling cloud application portability. In Proc. of UCC, pages 354–360.
Armbrust, M. et al. (2010). A View of Cloud Computing. Communications of the ACM, 53(4):50–58.
Baldini, I. et al. (2017). Serverless Computing: Current Trends and Open Problems. In Research Advances in Cloud Computing, pages 1–20. Springer Singapore.
Binz, T. et al. (2013). TOSCA: Portable automated deployment and management of cloud applications. In Advanced Web Services, pages 527–549. Springer New York.
Boehm, B. W. et al. (1976). Quantitative evaluation of software quality. In Proc. of ICSE.
Ellis, B. et al. (2007). The factory pattern in API design: A usability evaluation. In Proc. of ICSE.
Fischer, R. et al. (2013). Eine Bestandsaufnahme von Standardisierungspotentialen und -lücken im Cloud Computing. In Proc. of WI.
Gonidis, F. et al. (2013). Cloud application portability: An initial view. In Proc. of BCI.
Hall, A. and Ramachandran, U. (2019). An execution model for serverless functions at the edge. In Proc. of IoTDI.
Hohpe, G. and Woolf, B. (2003). Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley Professional.
ISO/IEC/IEEE (2010). 24765-2010 - ISO/IEC/IEEE International Standard - Systems and software engineering – Vocabulary.
Jonas, E. et al. (2019). Cloud Programming Simplified: A Berkeley View on Serverless Computing. Technical Report UCB/EECS-2019-3.
Kolb, S. (2019). On the Portability of Applications in Platform as a Service. Bamberg University Press.
Kolb, S. et al. (2015). Application migration effort in the cloud - the case of cloud platforms. In Proc. of CLOUD.
Lenhard, J. and Wirtz, G. (2013). Measuring the portability of executable service-oriented processes. In Proc. of EDOC.
Rajlich, V. and Bennett, K. (2000). A staged model for the software life cycle. Computer, 33(7):66–71.
van Eyk, E. et al. (2018). Serverless is More: From PaaS to Present Cloud Computing. IEEE Internet Computing, 22(5):8–17.
Wolski, R. et al. (2019). Cspot: Portable, multi-scale functions-as-a-service for iot. In Proc. of SEC.
Yussupov, V. et al. (2019). Facing the unplanned migration of serverless applications. In Proc. of UCC.
Yussupov, V. et al. (2020). SEAPORT: Assessing the portability of serverless applications. In Proc. of CLOSER.
- - - PREPRINT - - -