Towards Performance and Cost Simulation in Function as a Service
Johannes Manner
Distributed Systems Group, University of Bamberg, Germany
johannes.manner@uni-bamberg.de
Abstract. Function as a Service (FaaS) promises a more cost-efficient deployment and operation of cloud functions compared to related cloud technologies, like Platform as a Service (PaaS) and Container as a Service (CaaS). Scaling, cold starts, function configurations, dependent services, network latency etc. influence the two conflicting goals of cost and performance. Since so many factors have an impact on these two dimensions, users need a tool to simulate a function in an early development stage and resolve these conflicting goals. Therefore, a simulation framework is proposed in this paper.
Keywords: Serverless Computing, Function as a Service, FaaS, Benchmarking,
Load Pattern, Cold Start, Pricing, Simulation Framework
1 Introduction
Function as a Service (FaaS) [3] is a new, event-driven computing model in the cloud, where single functions are executed in containers on a per-request basis. It promises faster, easier and more cost-efficient development, deployment and operation due to the abstraction of operational tasks by the FaaS provider. Scaling to zero, one of the game changers, avoids running instances of cloud functions unnecessarily. Moreover, the most granular pay-per-use model in the overall cloud stack leads to serious cost reductions for suited use cases. It is therefore not surprising that early cost studies [1, 12] compared this new paradigm with established ones, like monolithic architectures or microservices deployed on virtual machines and container infrastructure.
The position paper is structured as follows. Sect. 2 puts the work in relation to already conducted studies and approaches which are important for a simulation framework for FaaS. Based on these insights and the lack of a reproducible and structured approach, Sect. 3 describes the main objectives of the dissertation plan and concludes with a short discussion.
2 Related Work
One of the first cost comparisons between monolithic, microservice and cloud function architectures was done by Villamizar et al. [12]. They state that cloud functions saved more than 70% of the cost compared to the other implementations in their use case scenario. Another case study [1] reduced costs by up to 95%. However, there are also cases where cloud functions are more costly than traditional Virtual Machine (VM) based solutions. An example is the large distributed data processing application by Lee et al. [6], which is ten times more expensive.
The overall price is calculated by multiplying the execution time by the price for the chosen function configuration. Performance, i.e. execution time, of a cloud function therefore directly influences the price calculation. Due to this billing model, many publications focus on performance. McGrath and Brenner [11] implemented a performance-oriented FaaS platform on top of a cloud provider to gain control over the infrastructure and improve throughput and scaling behavior. Lloyd et al. [8] identified a performance variation of 1500% w.r.t. the cold and warm infrastructure components of a FaaS platform. They also assessed the throughput of concurrently accessed cloud functions on different FaaS platforms, especially commercial ones like Amazon Web Services (AWS) Lambda, Google Cloud Functions, Azure Functions and IBM OpenWhisk. A similar multi-provider study proposed a CPU-intensive benchmark [9] to compare the different FaaS offerings and support customers in selecting the right platform for their needs.
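To make the billing model concrete, consider a hedged, illustrative calculation in the style of AWS Lambda's billing at the time of writing (a per-request fee p_req, a GB-second fee p_GBs, and durations rounded up to 100 ms increments; the concrete prices are publicly listed rates and serve only as an example):

\[ \mathrm{cost} = n \cdot p_{\mathrm{req}} + n \cdot \left\lceil \frac{t}{100\,\mathrm{ms}} \right\rceil \cdot 0.1\,\mathrm{s} \cdot \frac{m}{1024\,\mathrm{MB}} \cdot p_{\mathrm{GBs}} \]

For n = 10^6 invocations of a t = 230 ms function at m = 512 MB, each call is billed as 300 ms, i.e. 0.15 GB-seconds; with p_GBs = $0.00001667 and p_req = $0.0000002 this amounts to roughly $2.70 in total.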
Pricing and performance are "difficult to decipher", as Back and Andrikopoulos [2] stated. They also implemented a first minimal wrapper to harmonize the different function handler interfaces of the major FaaS platforms. Applications with bursty workloads are the focus of their work, since such applications could especially profit from the characteristics of FaaS.
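As an illustration of the wrapper idea (a minimal sketch under simplified assumptions, not the implementation from [2]; the Azure signature in particular is abbreviated):

```python
# Minimal sketch of a handler wrapper harmonizing provider interfaces.
# The business logic is written once against a plain dict; thin adapters
# map each provider's handler signature onto it.

def business_logic(payload: dict) -> dict:
    """Provider-agnostic cloud function body."""
    return {"greeting": "Hello " + payload.get("name", "world")}

def aws_handler(event, context):
    # AWS Lambda invokes handler(event, context); for JSON payloads the
    # event is already a dict.
    return business_logic(event)

def azure_handler(req):
    # Simplified Azure Functions HTTP trigger: req.get_json() returns the
    # parsed JSON body (assumption: a JSON payload).
    return business_logic(req.get_json())
```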
3 Simulation Framework
Since the conducted research reveals a mixed picture so far, there is a need to combine all these directions in a structured way. Up to now, related work investigates aspects in isolation, and the preconditions of benchmarks and other experiments are often unclear to readers. Besides the abstraction of operational tasks in FaaS, cost and performance are important considerations when deciding to build or migrate an application which includes cloud function building blocks. To shed light on the cost and performance perspective of FaaS, this paper proposes a simulation framework.
This framework is of a conceptual nature to assess single cloud functions. The overall idea is to simulate a single cloud function in isolation under various circumstances in an early development phase. Chaining or orchestration of functions is out of scope. Influential factors, like the cold start, are investigated by conducting benchmarks [10] on the different FaaS platforms. Results of these experiments are aggregated to mean values and their deviations to cover the best, worst and average case. These values serve as input for the simulation framework and enable local testing of a cloud function on a developer's machine. The framework also shows the variation in price and performance w.r.t. the function characteristics, e.g. CPU-bound or IO-bound functions.
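As a minimal sketch of this aggregation step (the function name and the one-sigma mapping to best and worst case are illustrative assumptions; the paper does not fix an implementation):

```python
import statistics

def aggregate(samples_ms: list[float]) -> dict:
    """Reduce benchmark measurements to simulation inputs.

    Best and worst case are modeled here as mean -/+ one standard
    deviation; this one-sigma choice is an illustrative assumption.
    """
    mean = statistics.mean(samples_ms)
    stdev = statistics.stdev(samples_ms) if len(samples_ms) > 1 else 0.0
    return {
        "best_ms": max(mean - stdev, 0.0),
        "average_ms": mean,
        "worst_ms": mean + stdev,
    }

# Example: warm execution times of a function, measured on one platform.
print(aggregate([212.0, 230.0, 198.0, 251.0, 224.0]))
```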
Therefore, the following research objectives are of particular interest. They are preparatory work to lay a solid foundation for the overall goal of the dissertation.
Load Patterns - Benchmarking applications is a problematic field, as Huppler [4] noted. Repeatability, verifiability and economical considerations are some of his requirements for a good benchmark. Since FaaS introduces a completely different scaling notion than related paradigms, such as PaaS or CaaS, the requirements for a suited FaaS benchmark differ as well. There is a lack of standardized load patterns for the cloud and especially for the event-triggered execution of cloud functions. The idea is to extract standardized load patterns via a literature study or from real-world use cases, group them and define template patterns. The catalog contains a few generic, parameterized load patterns, such as linear or bursty workloads (a sketch follows this list of objectives), and could also serve as a reference for other benchmarks in the cloud area. Based on such a load pattern catalog, experiments are controllable and comparable.
Each function is executed in a lightweight container environment. If the load pattern leads to a lot of up- and downscaling, the execution time is directly influenced by the cold start overhead and the communication setup to dependent services. To understand the impact of these application load patterns on cloud function price and performance compared to IaaS, PaaS or CaaS, their characteristics [2] need deeper investigation, and the proposed load pattern research is the first step towards this understanding.
Cold Start - Cold starts are an inherent problem of every virtualization technology. As discussed in the previous aspect, the scaling property of FaaS results in a lot of cold starts. Based on our previous work [10] and similarly conducted results [5], there is a cold start overhead ranging from 300 ms up to seconds for a single function. This execution time overhead has a direct performance and cost impact.
Pricing - The pay-per-use billing model leads to seemingly simple pricing for cloud functions at first glance. Only execution time, memory setting and the number of invocations are necessary. But a single cloud function is rarely an application in the sense of serving business value to the customer. To get FaaS into production, a lot of additional services are mandatory, like databases, API gateways etc. Cost models like the CostHat model [7] could be adapted from the microservice to the nanoservice scope to include such mandatory services.
Portability - Portability is another important aspect. Only when portability is ensured is a simulation useful to test whether the function shows better performance on another platform. Therefore, the transformation effort and the estimated savings have to be considered w.r.t. cost or performance. Wrapper utilities [2] are a first step to enable portable functions and to allow a comparison of custom functionality. Since cloud functions profit from rich provider ecosystems and are tightly integrated with other services, like databases, messaging systems etc., the main problem of portability is the custom interfaces of these services.
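The sketch referenced in the Load Patterns objective could look as follows (a minimal sketch; both generators and their parameters are illustrative assumptions, not a finished catalog):

```python
def linear_pattern(duration_s: int, start_rps: float, end_rps: float) -> list[float]:
    """Linearly ramp the request rate from start_rps to end_rps."""
    step = (end_rps - start_rps) / max(duration_s - 1, 1)
    return [start_rps + i * step for i in range(duration_s)]

def bursty_pattern(duration_s: int, base_rps: float, burst_rps: float,
                   burst_every_s: int, burst_len_s: int) -> list[float]:
    """Constant base load with periodic bursts."""
    return [burst_rps if (i % burst_every_s) < burst_len_s else base_rps
            for i in range(duration_s)]

# Example: a 60 s linear ramp and a 60 s bursty workload, in requests per second.
ramp = linear_pattern(60, 1.0, 50.0)
bursts = bursty_pattern(60, 2.0, 40.0, burst_every_s=20, burst_len_s=5)
```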
A simulation framework for cloud functions to solve the conflicting goals of cost and performance is only realizable if the items above are assessed in detail. The presented research objectives are a starting point and cover, to the best of the author's understanding, the most important objectives relevant to the proposed framework. The author is aware that there might be missing dimensions and aspects, which will also be considered when they arise. Similar to the problem of dev-prod parity in software engineering, a proof of concept is necessary. This validation step consists of, first, a simulation, second, conducting experiments and third, comparing the simulated values to the metered ones in a real life experiment (sim-prod parity).
Typically, a function with fewer resources allocated is slower but cheaper, and vice versa. The catalog of load patterns, the configurations of cloud functions, the cold start values for different languages, providers and other dimensions, and the price structure are, besides the cloud function code, the input for simulating the cost. Additional metadata is needed in the form of a model which depicts the interaction with other services in the provider's ecosystem. A small-sized benchmark is the processing step of our simulation framework. The simulation is reproducible since the parameters are constant and only the local execution on a client's machine could influence the outcome slightly. A report serves as the output, where users can see the best configuration and provider for their use case, depending on their leading dimension.
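A minimal end-to-end sketch of this input-processing-report idea (every name, price and parameter below is an illustrative assumption, not the framework's implementation; the request and GB-second rates mirror AWS Lambda's publicly listed prices at the time):

```python
import math

# Illustrative prices and platform parameters (assumptions, see above).
PRICE_PER_GBS = 0.00001667          # USD per GB-second
PRICE_PER_REQUEST = 0.0000002       # USD per invocation
SERVICE_COST_PER_CALL = 0.0000004   # assumed dependent service, e.g. an API gateway
IDLE_TIMEOUT_S = 300                # assumed keep-alive before scaling to zero

# Assumed benchmark aggregates per memory setting:
# memory MB -> (warm execution ms, cold start overhead ms).
CONFIGS = {512: (400.0, 800.0), 1024: (210.0, 500.0), 2048: (120.0, 350.0)}

def simulate(pattern_rps: list[float], memory_mb: int) -> dict:
    """Replay a load pattern against one configuration, report cost and latency."""
    warm_ms, cold_ms = CONFIGS[memory_mb]
    warm_containers, idle_seconds = 0, 0
    requests, cold_starts, total_ms = 0.0, 0, 0.0
    for rps in pattern_rps:                         # one loop iteration per second
        needed = math.ceil(rps * warm_ms / 1000)    # naive concurrency estimate
        if needed > warm_containers:
            cold_starts += needed - warm_containers  # scale-up causes cold starts
            total_ms += (needed - warm_containers) * cold_ms
            warm_containers = needed
        idle_seconds = idle_seconds + 1 if rps == 0 else 0
        if idle_seconds >= IDLE_TIMEOUT_S:
            warm_containers = 0                     # scale to zero
        requests += rps
        total_ms += rps * warm_ms
    gb_seconds = (total_ms / 1000) * (memory_mb / 1024)
    cost = (requests * (PRICE_PER_REQUEST + SERVICE_COST_PER_CALL)
            + gb_seconds * PRICE_PER_GBS)
    return {"memory_mb": memory_mb, "cold_starts": cold_starts,
            "avg_ms": round(total_ms / max(requests, 1.0), 1),
            "cost_usd": round(cost, 6)}

# Report for a bursty pattern: base load with one short burst.
pattern = [2.0] * 20 + [40.0] * 5 + [2.0] * 35
for memory in sorted(CONFIGS):
    print(simulate(pattern, memory))
```

The printed report lets a user compare cold starts, average latency and cost per configuration and pick the best setting for their leading dimension.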
References
1. Adzic, G., Chatley, R.: Serverless Computing: Economic and Architectural Impact. In: Proc. ESEC/FSE (2017)
2. Back, T., Andrikopoulos, V.: Using a Microbenchmark to Compare Function as a Service Solutions. In: Service-Oriented and Cloud Computing. Springer International Publishing (2018)
3. van Eyk, E., et al.: The SPEC Cloud Group's Research Vision on FaaS and Serverless Architectures. In: Proc. WoSC (2017)
4. Huppler, K.: The art of building a good benchmark. In: Performance Evaluation and Benchmarking (2009)
5. Jackson, D., Clynch, G.: An Investigation of the Impact of Language Runtime on the Performance and Cost of Serverless Functions. In: Proc. WoSC (2018)
6. Lee, H., Satyam, K., Fox, G.: Evaluation of Production Serverless Computing Environments. In: Proc. CLOUD (2018)
7. Leitner, P., Cito, J., Stöckli, E.: Modelling and Managing Deployment Costs of Microservice-Based Cloud Applications. In: Proc. UCC (2016)
8. Lloyd, W., et al.: Serverless Computing: An Investigation of Factors Influencing Microservice Performance. In: Proc. IC2E (2018)
9. Malawski, M., et al.: Benchmarking Heterogeneous Cloud Functions. In: Euro-Par 2017: Parallel Processing Workshops (2018)
10. Manner, J., et al.: Cold Start Influencing Factors in Function as a Service. In: Proc. WoSC (2018)
11. McGrath, G., Brenner, P.R.: Serverless Computing: Design, Implementation, and Performance. In: Proc. ICDCSW (2017)
12. Villamizar, M., et al.: Infrastructure Cost Comparison of Running Web Applications in the Cloud Using AWS Lambda and Monolithic and Microservice Architectures. In: Proc. CCGrid (2016)