Thesis

Description and evaluation of elasticity strategies for business processes in the Cloud


Abstract

Elasticity is the ability of a system to adjust to workload changes by allocating and releasing as many resources as needed while ensuring the agreed QoS, and it has played a pivotal role in many research works on QoS assurance. Elasticity management is therefore receiving a lot of attention from the IT community as a pivotal issue in finding the right tradeoffs between QoS levels and operational costs, through the development of novel methods and mechanisms. However, controlling business process elasticity and defining non-trivial elasticity strategies remain challenging issues. Elasticity strategies are policies used to manage elasticity by deciding when, where and how to use elasticity mechanisms (e.g., adding or removing resources). Many strategies can be defined to ensure application elasticity, and this abundance of possible strategies requires their evaluation and validation in order to guarantee their effectiveness before using them in real Cloud environments. Our thesis work aims to overcome the limitations of existing approaches to elasticity strategy management. It consists of developing a configurable domain-specific language to describe different types of elasticity strategies in a unified way. We define a formal model that captures a set of QoS metrics and defines elasticity operations; this model will also be used to define and verify elasticity strategies. We will also work on aligning Service Level Agreements with the elasticity strategies.
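The abstract's notion of a strategy deciding when, where and how to apply elasticity mechanisms can be pictured with a small sketch (all names, metrics and thresholds below are invented for illustration; the thesis defines a dedicated domain-specific language, not Python):

```python
# Hypothetical sketch of a rule-based elasticity strategy: each rule states
# WHEN to act (a predicate over monitored metrics), WHERE (the target
# component), and HOW (the elasticity mechanism to apply).
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class ElasticityRule:
    when: Callable[[Dict[str, float]], bool]  # condition over QoS metrics
    where: str                                # target process component
    how: str                                  # mechanism, e.g. "add_instance"

def decide(rules: List[ElasticityRule],
           metrics: Dict[str, float]) -> List[Tuple[str, str]]:
    """Return the (where, how) actions of all rules whose condition holds."""
    return [(r.where, r.how) for r in rules if r.when(metrics)]

# An invented two-rule strategy for an invented "payment-service" component.
strategy = [
    ElasticityRule(lambda m: m["response_time"] > 2.0,
                   "payment-service", "add_instance"),
    ElasticityRule(lambda m: m["cpu_util"] < 0.2,
                   "payment-service", "remove_instance"),
]
```

For example, `decide(strategy, {"response_time": 3.1, "cpu_util": 0.7})` triggers only the scale-out rule. Evaluating and comparing many such strategies before deployment is precisely what the thesis's formal model is meant to support.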


Article
Web-facing applications are expected to provide certain performance guarantees despite dynamic and continuous workload changes. As a result, application owners are turning to cloud computing, which offers the ability to dynamically provision computing resources (e.g., memory, CPU) in response to changes in workload demands so as to meet performance targets, and which eliminates upfront costs. Horizontal scaling, vertical scaling, and their combination are the dimensions in which a cloud application can be scaled in terms of allocated resources. In vertical elasticity, the focus of this work, the size of virtual machines (VMs) can be adjusted in terms of allocated computing resources according to the runtime workload. A commonly used vertical resource elasticity approach decides based on resource utilization and is called capacity-based, while a newer trend is to use application performance as the decision-making criterion, an approach called performance-based. This paper discusses these two approaches and proposes a novel hybrid elasticity approach that takes into account both application performance and resource utilization to leverage the benefits of both. The proposed approach is used to realize vertical elasticity of memory (vertical memory elasticity), where the allocated memory of the VM is auto-scaled at runtime. To this aim, we use control theory to synthesize a feedback controller that meets the application performance constraints by auto-scaling the allocated memory, i.e., applying vertical memory elasticity. In contrast to existing vertical resource elasticity approaches, the novelty of our work lies in utilizing both memory utilization and application response time as decision-making criteria.
To verify the resource efficiency and the ability of the controller in handling unexpected workloads, we have implemented the controller on top of the Xen hypervisor and performed a series of experiments using the RUBBoS interactive benchmark application, under synthetic and real workloads including Wikipedia and FIFA. The results reveal that the hybrid controller meets the application performance target with better performance stability (i.e., lower standard deviation of response time), while achieving a high memory utilization (close to 83%), and allocating less memory compared to all other baseline controllers.
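As a rough, hypothetical sketch of the hybrid idea (the paper synthesizes its controller with control theory; the gain, bounds and the 83%-utilization setpoint below are illustrative only), one control step might combine a response-time error with a memory-utilization setpoint:

```python
# Hypothetical hybrid vertical-memory-elasticity step: grow the allocation
# when the response time misses its target (performance-based), otherwise
# trim toward a utilization setpoint (capacity-based). All parameters are
# invented for illustration.
def hybrid_memory_step(alloc_mb, used_mb, resp_time_s, target_resp_s,
                       util_setpoint=0.83, gain=0.5,
                       min_mb=512, max_mb=8192):
    perf_error = (resp_time_s - target_resp_s) / target_resp_s
    util = used_mb / alloc_mb
    if perf_error > 0:
        delta = gain * perf_error * alloc_mb       # performance-driven growth
    else:
        delta = (util - util_setpoint) * alloc_mb  # utilization-driven trim
    return int(min(max_mb, max(min_mb, alloc_mb + delta)))
```

With a 2048 MB allocation and a response time twice the target, the step grows the VM to 3072 MB; when the target is comfortably met at 50% utilization, it trims the allocation toward the setpoint instead.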
Article
Hypervisors and Operating Systems support vertical elasticity techniques such as memory ballooning to dynamically assign the memory of Virtual Machines (VMs). However, current Cloud Management Platforms (CMPs), such as OpenNebula or OpenStack, do not support dynamic vertical elasticity. This paper describes a system that integrates with the CMP to provide automatic vertical elasticity, adapting the memory size of the VMs to their current memory consumption and featuring live migration to prevent overload scenarios, without downtime for the VMs. This enables an enhanced VM-per-host consolidation ratio while maintaining the Quality of Service for VMs, since their memory is dynamically increased as necessary. The feasibility of the development is assessed via two case studies based on OpenNebula featuring (i) horizontally and vertically elastic virtual clusters on a production Grid infrastructure and (ii) elastic multi-tenant VMs that run Docker containers coupled with live migration techniques. The results show that memory oversubscription can be integrated into CMPs to deliver automatic memory management without severely impacting the performance of the VMs. This results in a memory management framework for on-premises Clouds that features live migration to safely enable transient oversubscription of physical resources in a CMP.
Conference Paper
We elaborate on the ingredients of a model-driven approach for the dynamic provisioning of cloud resources in an autonomic manner. Our solution has been experimentally evaluated using a NoSQL database cluster running on a cloud infrastructure. In contrast to other techniques, which work on a best-effort basis, we can provide probabilistic guarantees for the provision of sufficient resources. Our approach is based on the probabilistic model checking of Markov Decision Processes (MDPs) at runtime. We present: (i) the specification of an appropriate MDP model for the provisioning of cloud resources, (ii) the generation of a parametric model with system-specific parameters, (iii) the dynamic instantiation of MDPs at runtime based on logged and current measurements and (iv) their verification using the PRISM model checker for the provisioning/deprovisioning of cloud resources to meet the set goals.
Conference Paper
Contemporary cloud providers offer out-of-the-box auto-scaling solutions. However, defining a non-trivial scaling behavior that goes beyond the feature set provided by existing solutions is still challenging. In this paper we present SPEEDL, a declarative and extensible domain-specific language that simplifies the creation of elastic scaling behavior on top of IaaS clouds. SPEEDL simplifies the creation of event-driven policies for resource management (How many resources, and what resource types, are needed?), as well as task mapping (Which tasks should be handled by which resources?). Based on a dataset of real-life scaling policies, we demonstrate that SPEEDL can cover most scaling behaviors real-life developers want to express, and that the resulting SPEEDL policies are at the same time substantially more compact, easier to read, and less error-prone than the same behavior expressed via a general-purpose programming language.
Conference Paper
The focus of this work is the on-demand resource provisioning in cloud computing, which is commonly referred to as cloud elasticity. Although a lot of effort has been invested in developing systems and mechanisms that enable elasticity, the elasticity decision policies tend to be designed without quantifying or guaranteeing the quality of their operation. We present an approach towards the development of more formalized and dependable elasticity policies. We make two distinct contributions. First, we propose an extensible approach to enforcing elasticity through the dynamic instantiation and online quantitative verification of Markov Decision Processes (MDP) using probabilistic model checking. Second, various concrete elasticity models and elasticity policies are studied. We evaluate the decision policies using traces from a real NoSQL database cluster under constantly evolving external load. We reason about the behaviour of different modeling and elasticity policy options and we show that our proposal can improve upon the state-of-the-art in significantly decreasing under-provisioning while avoiding over-provisioning.
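The flavor of such MDP-based reasoning can be illustrated with a toy value-iteration sketch (this stands in for, and is far simpler than, the paper's PRISM-based runtime verification; all costs, demand levels and probabilities are invented):

```python
# Toy MDP for elasticity decisions: states are VM counts (1..4), actions
# add/remove/keep a VM, and the per-step cost trades resource cost against
# an under-provisioning penalty under a two-level stochastic demand.
P_HIGH, DEMAND_HIGH, DEMAND_LOW = 0.4, 3, 1
VM_COST, UNDER_PENALTY, GAMMA = 1.0, 10.0, 0.9
STATES = range(1, 5)
ACTIONS = {"remove": -1, "keep": 0, "add": 1}

def clamp(s):
    return max(1, min(4, s))

def expected_cost(vms):
    """Expected one-step cost of running `vms` VMs under stochastic demand."""
    return vms * VM_COST + UNDER_PENALTY * (
        P_HIGH * max(0, DEMAND_HIGH - vms) +
        (1 - P_HIGH) * max(0, DEMAND_LOW - vms))

def value_iteration(iters=100):
    """Return the cost-minimizing action for each VM-count state."""
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: min(expected_cost(clamp(s + d)) + GAMMA * v[clamp(s + d)]
                    for d in ACTIONS.values())
             for s in STATES}
    return {s: min(ACTIONS,
                   key=lambda a: expected_cost(clamp(s + ACTIONS[a])) +
                                 GAMMA * v[clamp(s + ACTIONS[a])])
            for s in STATES}
```

Under these invented parameters the policy converges on three VMs: smaller clusters scale out, larger ones scale in, which mirrors the under-/over-provisioning tradeoff the paper verifies formally.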
Conference Paper
The benefits of cloud computing have led to a proliferation of infrastructures and platforms covering the provisioning and deployment requirements of many cloud-based applications. However, the requirements of an application may change during its life cycle. Therefore, its provisioning and deployment should be adapted so that the application can deliver its target quality of service throughout its entire life cycle. Existing solutions typically support only simple adaptation scenarios, whereby scalability rules map conditions on fixed metrics to a single scaling action targeting a single cloud environment (e.g., scale out an application component). However, these solutions fail to support complex adaptation scenarios, whereby scalability rules could map conditions on custom metrics to multiple scaling actions targeting multi-cloud environments. In this paper, we propose the Scalability Rule Language (SRL), a language for specifying scalability rules that support such complex adaptation scenarios of multi-cloud applications. SRL provides Eclipse-based tool support, thus allowing modellers not only to specify scalability rules but also to syntactically and semantically validate them. Moreover, SRL is well integrated with the Cloud Modelling Language (CloudML), thus allowing modellers to associate their scalability rules with the components and virtual machines of provisioning and deployment models.
Conference Paper
Resource allocation and scheduling has been recognised as an important topic for business process execution. However, despite the proven benefits of using the Cloud to run business processes, users lack guidance for choosing between multiple offerings while taking into account several objectives which are often conflicting. Moreover, when running business processes it is difficult to automate all tasks. Elastic computing, such as Amazon EC2, allows users to allocate and release compute resources (virtual machines) on demand and pay only for what they use. It is therefore reasonable to assume that the number of virtual machines is infinite while the number of human resources is finite; this feature of Clouds has been called the "illusion of infinite resources". In this paper, we design an allocation strategy for Cloud computing platforms taking the above characteristics into account. More precisely, we propose three complementary bi-criteria approaches for scheduling business processes on distributed Cloud resources.
Conference Paper
Many IaaS providers allow cloud consumers to define elasticity (or auto-scaling) rules that carry out provisioning or de-provisioning actions in response to monitored variables crossing user-defined thresholds. Defining elasticity rules, however, remains a key challenge for cloud consumers, as it requires choosing appropriate threshold values to satisfy performance and cost requirements. In this paper, we propose novel analytical models that enable the study of application performance under different elasticity rules. Based on these, we develop algorithms for performing scale-in and scale-out operations. We simulate our models and algorithms using different thresholds, and validate the results against empirical data obtained using the same rules with the TPC-W benchmark on the Amazon cloud.
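A minimal sketch of the kind of threshold rule the paper analyzes (the threshold values, cooldown and VM bounds below are invented here, not taken from the paper):

```python
# Hypothetical threshold-based auto-scaling: scale out when utilization
# crosses an upper threshold, scale in below a lower one, with a cooldown
# period that damps oscillations around the thresholds.
def autoscale(samples, n_start=2, upper=0.8, lower=0.3, cooldown=3,
              n_min=1, n_max=10):
    """Replay utilization samples and return the VM count after each one."""
    n, last_action, history = n_start, -cooldown, []
    for t, util in enumerate(samples):
        if t - last_action >= cooldown:       # respect the cooldown window
            if util > upper and n < n_max:
                n, last_action = n + 1, t     # scale-out
            elif util < lower and n > n_min:
                n, last_action = n - 1, t     # scale-in
        history.append(n)
    return history
```

Replaying a sustained 90% load shows the cooldown pacing successive scale-outs, which is exactly the interaction between thresholds and performance that the paper's analytical models aim to predict.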
Conference Paper
Although there are a few efficient algorithms in the literature for scientific workflow task allocation and scheduling on heterogeneous resources, such as those proposed in the grid computing context, they usually require a bounded number of compute resources, an assumption that does not hold in Cloud computing environments. Indeed, unlike grids, elastic computing offerings such as Amazon's EC2 allow users to allocate and release compute resources on demand and pay only for what they use. It is therefore reasonable to assume that the number of resources is infinite; this feature of Clouds has been called the "illusion of infinite resources". However, despite the proven benefits of using the Cloud to run scientific workflows, users lack guidance for choosing between multiple offerings while taking into account several objectives which are often conflicting. On the other hand, workflow task allocation and scheduling have been shown to be NP-complete problems, so it is convenient to use heuristic rather than deterministic algorithms. The objective of this paper is to design an allocation strategy for Cloud computing platforms. More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.
Conference Paper
Cloud environments are being increasingly used for deploying and executing business processes and particularly Service-based Business Processes (SBPs). One of the expected features of Cloud environments is elasticity at different levels. It is obvious that provisioning of elastic platforms is not sufficient to provide elasticity of the deployed business process. Therefore, SBPs should be provided with elasticity so that they are able to adapt to workload changes while ensuring the desired functional and non-functional properties. In this paper, we propose a formal model for stateful SBP elasticity that features duplication/consolidation mechanisms and a generic controller to define and evaluate elasticity strategies.
Article
Elasticity (on-demand scaling) of applications is one of the most important features of cloud computing. This elasticity is the ability to adaptively scale resources up and down in order to meet varying application demands. To date, most existing scaling techniques can maintain applications’ Quality of Service (QoS) but do not adequately address issues relating to minimizing the costs of using the service. In this paper, we propose an elastic scaling approach that makes use of cost-aware criteria to detect and analyse the bottlenecks within multi-tier cloud-based applications. We present an adaptive scaling algorithm that reduces the costs incurred by users of cloud infrastructure services, allowing them to scale their applications only at bottleneck tiers, and present the design of an intelligent platform that automates the scaling process. Our approach is generic for a wide class of multi-tier applications, and we demonstrate its effectiveness against other approaches by studying the behaviour of an example e-commerce application using a standard workload benchmark.
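The bottleneck-tier idea can be sketched as follows (the tier names, utilizations and threshold are hypothetical; the paper's actual detection criteria are cost-aware and considerably more elaborate):

```python
# Hypothetical sketch of scaling only the bottleneck tier of a multi-tier
# application, instead of scaling every tier and paying for idle capacity.
def bottleneck_tier(utilization):
    """Return the tier with the highest utilization: the scaling candidate."""
    return max(utilization, key=utilization.get)

def scale_bottleneck(instances, utilization, threshold=0.75):
    """Add one instance to the bottleneck tier, but only if it is saturated."""
    tier = bottleneck_tier(utilization)
    if utilization[tier] > threshold:
        instances = dict(instances, **{tier: instances[tier] + 1})
    return instances

# Invented three-tier e-commerce deployment and a load snapshot.
tiers = {"web": 2, "app": 2, "db": 1}
load = {"web": 0.40, "app": 0.55, "db": 0.92}
```

Here only the saturated database tier gains an instance; the web and app tiers are left alone, which is the cost-saving behavior the paper's adaptive algorithm generalizes.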
Conference Paper
Elasticity in cloud computing is a complex problem, regarding not only resource elasticity but also quality and cost elasticity, and most importantly, the relations among the three. Therefore, existing support for controlling elasticity in complex applications, focusing solely on resource scaling, is not adequate. In this paper we present SYBL - a novel language for controlling elasticity in cloud applications - and its runtime system. SYBL allows specifying in detail elasticity monitoring, constraints, and strategies at different levels of cloud applications, including the whole application, application component, and within application component code. Based on simple SYBL elasticity directives, our runtime system will perform complex elasticity controls for the client, by leveraging underlying cloud monitoring and resource management APIs. We also present a prototype implementation and experiments illustrating how SYBL can be used in real-world scenarios.
Conference Paper
Cloud elasticity is the ability of the cloud infrastructure to rapidly change the amount of resources allocated to a service in order to meet the actual varying demands on the service while enforcing SLAs. In this paper, we focus on horizontal elasticity, the ability of the infrastructure to add or remove virtual machines allocated to a service deployed in the cloud. We model a cloud service using queuing theory. Using that model we build two adaptive proactive controllers that estimate the future load on a service. We explore the different possible scenarios for deploying a proactive elasticity controller coupled with a reactive elasticity controller in the cloud. Using simulation with workload traces from the FIFA world-cup web servers, we show that a hybrid controller that incorporates a reactive controller for scale up coupled with our proactive controllers for scale down decisions reduces SLA violations by a factor of 2 to 10 compared to a regression based controller or a completely reactive controller.
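A toy version of such a hybrid scheme, assuming an invented per-VM capacity and a simple moving-average forecast in place of the paper's queuing-theoretic estimators:

```python
# Hypothetical hybrid horizontal-elasticity controller: react immediately to
# load spikes when scaling up, but scale down only when a moving-average
# forecast predicts sustained spare capacity. All parameters are invented.
from collections import deque

class HybridController:
    def __init__(self, vms=2, per_vm_capacity=100, window=3):
        self.vms, self.cap = vms, per_vm_capacity
        self.history = deque(maxlen=window)   # recent load samples

    def step(self, load):
        self.history.append(load)
        needed_now = -(-load // self.cap)     # ceil division: reactive demand
        if needed_now > self.vms:
            self.vms = needed_now             # reactive scale-up
        else:
            forecast = sum(self.history) / len(self.history)
            needed_soon = max(1, -(-int(forecast) // self.cap))
            if needed_soon < self.vms:
                self.vms = needed_soon        # proactive scale-down
        return self.vms
```

A spike to 250 requests is absorbed in one step, while the drop back to 50 releases VMs only gradually as the forecast window drains, roughly the asymmetry that reduces SLA violations in the paper's experiments.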
Conference Paper
Elasticity is a key feature in the cloud computing context, and perhaps what distinguishes this computing paradigm from other ones, such as cluster and grid computing. Considering the importance of elasticity in the cloud computing context, the objective of this paper is to present a comprehensive study of the elasticity mechanisms available today. First, we propose a classification of elasticity mechanisms based on the main features found in the analysed commercial and academic solutions. Second, diverse related works are reviewed in order to define the state of the art of elasticity in clouds. We also discuss some of the challenges and open issues associated with the use of elasticity features in cloud computing.
Article
Web services provide the basic technical platform required for application interoperability. They do not, however, provide higher-level control, such as which web services need to be invoked, which operations should be called and in what sequence. Nor do they provide ways to describe the semantics of interfaces, the workflows, or e-business processes. BPEL is the missing link to assemble and integrate web services into a real business process. BPEL4WS standardizes process automation between web services. This applies both within the enterprise, where BPEL4WS is used to integrate previously isolated systems, and between enterprises, where BPEL4WS enables easier and more effective integration with business partners. In providing a standard descriptive structure, BPEL4WS enables enterprises to define their business processes during the design phase. Wider business benefits can flow from this through business process optimization, reengineering, and the selection of the most appropriate processes. Supported by major vendors - including BEA, Hewlett-Packard, IBM, Microsoft, Novell, Oracle, SAP, Sun, and others - BPEL4WS is becoming the accepted standard for business process management. This book provides detailed coverage of BPEL4WS, its syntax, and where, and how, it is used. It begins with an overview of web services, showing both the foundation of, and need for, BPEL. The web services orchestration stack is explained, including standards such as WS-Security, WS-Coordination, WS-Transaction, WS-Addressing, and others. The BPEL language itself is explained in detail, with code snippets and complete examples illustrating both its syntax and typical construction. Having covered BPEL itself, the book then goes on to show how BPEL is used in context, by providing an overview of major BPEL4WS servers. It covers the Oracle BPEL Process Manager and Microsoft BizTalk Server 2004 in detail, and shows how to write BPEL4WS solutions using these servers.
Conference Paper
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
Article
Cloud computing's success has made on-demand computing with a pay-as-you-go pricing model popular. However, cloud computing's focus on resources and costs limits progress in realizing more flexible, adaptive processes. The authors introduce elastic processes, which are based on explicitly modeling resources, cost, and quality, and show how they improve on the state of the art.
Article
In recent years, growing attention has been paid to the concept of Cloud Computing as a new computing paradigm for executing and handling operations/processes in an efficient and cost-effective way. Cloud Computing's elasticity and its flexibility in service delivery have been the most important features behind this attention, encouraging companies to migrate their operations/processes to the cloud in order to ensure the required QoS while reducing resource usage and expenses. Elasticity management has been considered a pivotal issue by the IT community, which works on finding the right tradeoffs between QoS levels and operational costs by developing novel methods and mechanisms. However, controlling process elasticity and defining non-trivial elasticity strategies are challenging issues. Moreover, despite the growing attention paid to the cloud and its elasticity property in particular, there is still a lack of solutions that support the evaluation of the elasticity strategies used to ensure the elasticity of processes at the service level. In this paper, we present a framework for describing and evaluating elasticity strategies for Service-based Business Processes (SBPs), called STRATFram. It is composed of a set of domain-specific languages designed to generalize the use of the framework and to facilitate the description of the evaluation elements needed to evaluate elasticity strategies before using them in real cloud environments. Using STRATFram, SBP holders can define: (i) an elasticity model with specific elasticity capabilities on which they want to define and evaluate their elasticity strategies, (ii) an SBP model for which the elasticity strategies will be defined and evaluated, (iii) a set of elasticity strategies based on the elasticity capabilities of the defined elasticity model and for the provided SBP model, and (iv) a simulation configuration which identifies simulation properties/elements.
The evaluation of elasticity strategies consists in producing a set of plots that allow the analysis and comparison of strategies. Our contributions and developments provide Cloud tenants with facilities to choose elasticity strategies that fit their business processes and usage behaviors.
Article
Auto-scaling, a key property of cloud computing, allows application owners to acquire and release resources on demand. However, the shared environment, along with the exponentially large configuration space of available parameters, makes configuration of auto-scaling policies a challenging task. In particular, it is difficult to quantify, a priori, the impact of a policy on Quality of Service (QoS) provision. To address this problem, we propose a novel approach based on performance modelling and formal verification to produce performance guarantees on particular rule-based auto-scaling policies. We demonstrate the usefulness and efficiency of our techniques through a detailed validation process on two public cloud providers, Amazon EC2 and Microsoft Azure, targeting two cloud computing models, Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), respectively. Our experimental results show that the modelling process along with the model itself can be very effective in providing the necessary formal reasoning to cloud application owners with respect to the configuration of their auto-scaling policies, and consequently helping them to specify an auto-scaling policy which could minimise QoS violations.
Conference Paper
Nowadays, Cloud Computing is receiving more and more attention from IT companies as a new computing paradigm for executing and handling their business processes in an efficient and cost-effective way. One of the most important features behind this attention is Cloud Computing's elasticity, which has become the focus of many research works. Its management has been considered a pivotal issue by the IT community, which works on finding the right tradeoffs between QoS levels and operational costs by developing novel methods and mechanisms. Elasticity controllers have been used in many research works to automate the provisioning of cloud resources and control cloud application elasticity. However, most previous works are based on a specific elasticity model for either vertical or horizontal elasticity. In this paper, we propose an elasticity model description language for Service-based Business Processes (SBPs), called StratModel. It allows business process holders to define different elasticity models with different elasticity capabilities, by providing their elasticity mechanisms through a set of examples, and to automatically generate the associated elasticity controllers. The generated elasticity controllers are used for evaluating elasticity strategies before using them in real cloud environments. Based on StratModel, we present our elasticity strategy evaluation framework, which facilitates the description and evaluation of elasticity strategies for SBPs according to a customized elasticity model. Our contributions and developments provide Cloud tenants with facilities to choose, using a customized elasticity controller, elasticity strategies that fit their business processes and usage behaviors.
Article
Advancements in the areas of Cloud Computing, Internet of Things (IoT), and hybrid Human-Computer systems have made feasible the creation of a highly integrated human-machine world. The concept of elasticity plays a crucial role in fulfilling this vision, enabling systems to address various requirements reflecting performance, security, and business concerns. However, elastic systems are still in their inception, and numerous challenges need to be addressed in their development and management. In this article we present an overview of our experience on elastic systems, with a focus on elastic cloud systems. In the quest for designing and managing elastic systems, several challenges need to be addressed, such as: (i) enabling the systems to fulfill different requirements from multiple involved stakeholders, (ii) designing elastic systems considering various degrees of elasticity capabilities provided by different technologies and environments, (iii) understanding behavioral relationships in elastic systems, and their effects on stakeholder requirements, (iv) monitoring costs and analyzing cost efficiency of elastic systems, (v) controlling the elasticity of such systems at runtime in order to fulfill stakeholders' requirements, and (vi) supporting system elasticity through operations management. We present the techniques we have adopted in order to tackle the above challenges. We introduce our solution for creating elastic systems, following their complete lifecycle, from design-time to operations management.
Article
The Air Force announced today that it has a machine that can receive instructions in English, figure out how to make whatever is wanted, and teach other machines how to make it. An Air Force general said it will enable the United States to “build a war machine that nobody would want to tackle.”
Article
In a cloud environment, consumers can deploy and run their software applications on a sophisticated infrastructure that is owned and managed by a cloud provider (e.g., Amazon Web Services, Microsoft Azure, and Google Cloud Platform). Cloud users can acquire resources for their applications on demand, and they have to pay only for the consumed resources. In order to take advantage of cloud computing, it is vital for a consumer to determine whether the cloud infrastructure can rapidly change the type and quantity of resources allocated to an application in the cloud according to the application's demand. This property of the cloud is known as elasticity. Ideally, a cloud platform should be perfectly elastic; i.e., the resources allocated to an application exactly match the demand. This allocation should occur as the load on the application increases, with no degradation of the application's response time, and a consumer should pay only for the resources used by the application. However, in reality, clouds are not perfectly elastic. One reason is that it is difficult to predict the elasticity requirements of a given application and its workload in advance, and to optimally match resources with the application's needs. In this chapter, we investigate the elasticity problem in the cloud. We explain why it is still a challenging problem to solve and consider what services current cloud service providers offer to maintain elasticity in the cloud. Finally, we discuss existing research that can be used to improve elasticity in the cloud.
Article
In today's IT industry, resource-intensive tasks are playing an increasing role in business processes. With the emergence of Cloud computing, it is nowadays possible to deploy such tasks onto computing resources leased on demand from Cloud providers. This has enabled the realization of so-called Elastic Processes (EPs), which are able to dynamically adjust the resources they use in order to meet varying workloads. Until now, traditional Business Process Management Systems (BPMSs) have not considered the needs of Elastic Processes, such as monitoring the current system load and reasoning about optimal resource utilization, in order to ensure given Quality of Service constraints while executing required actions such as starting or stopping servers or moving services from one server to another. This paper focuses on our current work on ViePEP, a research BPMS for the Cloud capable of handling the aforementioned requirements of EPs.
Article
Cloud computing is being increasingly used for deploying and executing business processes, particularly Service-based Business Processes (SBPs). Among other properties, Cloud environments provide elasticity at different scopes. The principle of elasticity is to provision the necessary and sufficient resources so that a Cloud service keeps running smoothly even as its utilization scales up or down, thereby avoiding both under-utilization and over-utilization of resources. Provisioning elastic infrastructures and/or platforms is, however, not sufficient to make the deployed business processes themselves elastic; elasticity must also be considered at the application scope, so that deployed applications can adapt during execution to variations in demand. Business processes should therefore be equipped with elasticity mechanisms that allow them to adapt to workload changes while preserving the desired functional and non-functional properties. In our work, we were interested in providing a holistic approach for modeling, evaluating, and provisioning elastic SBPs in the Cloud. We started by proposing a formal model for SBP elasticity: we modeled SBPs using Petri nets and defined two elasticity operations (duplication and consolidation). In addition, we coupled these operations with an elasticity controller that monitors SBP execution, analyzes the monitoring information, and executes the appropriate operation (duplication or consolidation) in order to enforce SBP elasticity. Having defined a model and mechanisms for SBP elasticity, we turned to evaluating elasticity before implementing it in real environments. To this end, we proposed to use our elasticity controller as a framework for validating and evaluating elasticity using verification and simulation techniques.
Finally, we were interested in the provisioning of elasticity mechanisms for SBPs in real Cloud environments. For this aim, we proposed two approaches. The first packages non-elastic SBPs in micro-containers, extended with our elasticity mechanisms, before deploying them on Cloud infrastructures. The second integrates our elasticity controller into an autonomic infrastructure to dynamically add elasticity facilities to SBPs deployed on Cloud platforms.
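The monitor-analyze-execute loop with duplication and consolidation described in this abstract can be sketched as a simple threshold-based controller. This is a hypothetical illustration, not the paper's actual controller; the thresholds, per-instance capacity figure, and class names are invented for the example.

```python
# Illustrative sketch of an elasticity controller deciding between
# duplication and consolidation based on utilisation thresholds.
# All numbers here are assumptions, not taken from the paper.

class ElasticityController:
    def __init__(self, high=0.8, low=0.2, capacity_per_instance=10):
        self.high = high          # utilisation above which we duplicate
        self.low = low            # utilisation below which we consolidate
        self.capacity = capacity_per_instance
        self.instances = 1

    def utilisation(self, pending_requests):
        return pending_requests / (self.instances * self.capacity)

    def control(self, pending_requests):
        """One monitor-analyze-execute step; returns the chosen operation."""
        u = self.utilisation(pending_requests)
        if u > self.high:
            self.instances += 1   # duplication: add a service copy
            return "duplicate"
        if u < self.low and self.instances > 1:
            self.instances -= 1   # consolidation: remove a redundant copy
            return "consolidate"
        return "hold"

ctrl = ElasticityController()
print(ctrl.control(9))   # 9/10 = 0.9 utilisation -> duplicate
print(ctrl.control(3))   # 3/20 = 0.15 -> consolidate
print(ctrl.control(5))   # 5/10 = 0.5 -> hold
```

In the paper these decisions are taken on a Petri net model of the SBP rather than on a scalar utilisation, but the control structure is analogous.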
Article
To optimize the cost and performance of complex cloud services under dynamic requirements, workflows, and diverse cloud offerings, we rely on different elasticity control processes. An elasticity control process, when enforced, produces effects in different parts of the cloud service. These effects normally evolve over time and depend on workload characteristics and on the actions within the enforced elasticity control process. Understanding these effects on the behavior of the cloud service is therefore of utmost importance for the runtime decision-making process when controlling cloud service elasticity. In this paper, we present a novel methodology and a framework for estimating and evaluating cloud service elasticity behaviors. To estimate the elasticity behavior, we collect information concerning service structure, deployment, service runtime, control processes, and cloud infrastructure. Based on this information, we use clustering techniques to identify cloud service elasticity behavior over time and for different parts of the service. Knowledge about such behavior is used within a cloud service elasticity controller to substantially improve the selection and execution of elasticity control processes and the quality of runtime decisions. We evaluate our framework with three real-world cloud services in different application domains. Experiments show that we are able to estimate the behavior in 89.5% of the cases. Moreover, we have observed improvements in our elasticity controller, which takes better control decisions and does not exhibit control oscillations.
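The clustering step can be illustrated with a tiny k-means over service runtime snapshots. This is a hedged sketch of the general idea only, not the paper's framework; the feature choice (CPU utilisation, response time) and the data points are invented for the example.

```python
# Minimal k-means grouping service snapshots into behaviour classes.
# Deterministic initialisation (spread across sorted points) keeps the
# sketch reproducible without external libraries.

def assign(p, centroids):
    """Index of the centroid nearest to point p (squared distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))

def kmeans(points, k=2, iters=10):
    pts = sorted(points)
    centroids = [pts[i * (len(pts) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[assign(p, centroids)].append(p)
        centroids = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Snapshots: (CPU utilisation, response time in ms) -- illustrative data.
snapshots = [(0.2, 30), (0.25, 35), (0.9, 200), (0.85, 190), (0.3, 40), (0.95, 210)]
cents = kmeans(snapshots)
labels = [assign(p, cents) for p in snapshots]
```

Here the two clusters separate a "calm" behaviour (low load, fast responses) from a "stressed" one; a controller could then attach a different control process to each identified behaviour class.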
Conference Paper
SNAKES (the Net Algebra Kit for Editors and Simulators) is a general-purpose Petri net library, primarily for the Python programming language but portable to other languages. It defines a very general variant of Python-coloured Petri nets that can be created and manipulated through the library, as well as executed to explore state spaces. Thanks to a variety of plugins, SNAKES can handle extensions of Petri nets, in particular algebras of Petri nets [4,26]. SNAKES ships with a compiler for the ABCD language, which is precisely such an algebra. Finally, one can use the companion tool Neco [14], which compiles a Petri net into an optimised library allowing its state space to be computed efficiently or LTL model checking to be performed via the SPOT library [8,13]. This paper describes SNAKES' structure and features.
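The kind of place/transition net that such a library manipulates can be illustrated with a minimal token game in plain Python. This sketch deliberately does not use the SNAKES API; the class and method names below are invented for the illustration.

```python
# Minimal place/transition Petri net with a token game.
# Markings map place names to token counts; transitions consume and
# produce tokens according to their input/output arcs.

class PetriNet:
    def __init__(self):
        self.marking = {}        # place name -> token count
        self.transitions = {}    # transition name -> (inputs, outputs)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A producer/consumer net with a two-slot buffer.
net = PetriNet()
net.add_place("free", 2)
net.add_place("buffer", 0)
net.add_transition("produce", {"free": 1}, {"buffer": 1})
net.add_transition("consume", {"buffer": 1}, {"free": 1})
net.fire("produce")
net.fire("produce")
print(net.marking)             # {'free': 0, 'buffer': 2}
print(net.enabled("produce"))  # False: the buffer is full
```

SNAKES generalises this picture to coloured nets where tokens carry Python values and arcs carry expressions, and adds state-space exploration on top.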
Article
Evaluation of the service quality of public hospitals has been largely ignored by health care providers, although it is vital for human life. To improve service quality, the Turkish Ministry of Health is introducing regulations: one requires hospitals to set up quality teams, and another classifies hospitals into five groups based on their service quality, with the government granting promotion to hospitals according to this classification. However, the current classification procedure is not based on an analytical method. In this paper, a service quality index is therefore developed to provide a scientific basis for classifying hospitals, using multiple criteria decision making tools with respect to parameters established in the literature. Service quality parameters are determined, and their importance degrees are obtained using the analytic hierarchy process from the perspectives of both service providers and patients. A formulation is then provided to compute a service quality index (SQI) for hospitals. A case study on public hospitals is presented to test the applicability of the proposed method.
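The weighting-plus-index construction can be sketched as follows: row geometric means of an AHP pairwise comparison matrix yield criterion weights, and the SQI is then a weighted sum of normalised scores. The criteria, pairwise judgments, and hospital scores below are made up for illustration; the paper's actual parameters differ.

```python
# Sketch: AHP criterion weights via row geometric means, then a
# weighted-sum service quality index. All inputs are illustrative.
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector from a pairwise comparison matrix."""
    gms = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical criteria: staff competence, waiting time, hygiene
# (pairwise judgments on the usual 1-9 Saaty scale).
pairwise = [
    [1,     3,   2],
    [1 / 3, 1,   1 / 2],
    [1 / 2, 2,   1],
]
weights = ahp_weights(pairwise)

def sqi(scores, weights):
    """Scores are already normalised to [0, 1]; returns the quality index."""
    return sum(w * s for w, s in zip(weights, scores))

hospital_a = [0.9, 0.6, 0.8]   # invented normalised scores
index = sqi(hospital_a, weights)
```

The geometric-mean approximation is a standard shortcut for the AHP principal eigenvector; for a 3x3 matrix the two generally agree closely.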
Article
Big data analytics is one of the major technologies underpinning business operations in today's competitive market: it gives organizations a powerful tool to analyze large volumes of unstructured data and make useful decisions. Result quality, time, and the price associated with big data analytics are all important aspects of its success, and selecting the appropriate cloud infrastructure at both coarse-grained and fine-grained levels helps ensure better results. In this paper, a global architecture is proposed for QoS-based scheduling of big data applications to distributed cloud datacenters at two levels, coarse grained and fine grained. At the coarse-grained level, an appropriate local datacenter is selected based on the network distance between user and datacenter, network throughput, and total available resources, using an adaptive k-nearest-neighbor algorithm. At the fine-grained level, a probability triplet (C, I, M) is predicted using a naïve Bayes algorithm, giving the probability that a new application is compute intensive (C), input/output intensive (I), or memory intensive (M). Each datacenter is transformed into a pool of virtual clusters capable of executing specific categories of jobs with specific (C, I, M) requirements, using self-organizing maps. The novelty of the study lies in representing all datacenter resources in a predefined topological ordering and executing new incoming jobs in their respective predefined virtual clusters according to their QoS requirements. The proposed architecture is tested on three different Amazon EMR datacenters for resource utilization, waiting time, availability, response time, and estimated time to complete the job. Results indicate better QoS achievement and a 33.15% cost gain of the proposed architecture over traditional Amazon methods.
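The coarse-grained selection step can be sketched as a k-nearest-neighbour vote over past observations. This is an illustrative approximation (plain KNN rather than the paper's adaptive variant), and the features, labels, and data points are invented for the example.

```python
# Sketch: pick a datacenter for a request by majority vote among the k
# historical observations whose (distance, throughput, resources)
# features are closest to the current request. Data is illustrative.
from collections import Counter

def knn_select(query, history, k=3):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(query, rec[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# (network distance ms, throughput Mbps, free vCPUs) -> datacenter that
# served similar requests best (hypothetical labels).
history = [
    ((10, 900, 40), "dc-east"),
    ((12, 850, 35), "dc-east"),
    ((80, 300, 60), "dc-west"),
    ((85, 280, 70), "dc-west"),
    ((15, 800, 30), "dc-east"),
]
print(knn_select((11, 880, 38), history))  # "dc-east"
```

A production version would normalise the feature scales before computing distances; here the raw values already separate the two labels cleanly.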
Article
The article is available here: http://authors.elsevier.com/a/1Qa4O_,OQCKOpe
With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications can be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
Article
Docker promises the ability to package applications and their dependencies into lightweight containers that move easily between different distros, start up quickly and are isolated from each other.
Article
With the advent of Cloud Computing, Big Data management has become a fundamental challenge in the deployment and operation of distributed, highly available, and fault-tolerant storage systems such as the HBase extensible record store. These systems can provide support for geo-replication, which raises the issue of data consistency among distributed sites. To offer a best-in-class service to applications, one wants to maximise performance while minimising latency; in terms of data replication, that means incurring as little latency as possible when moving data between distant data centres. Traditional consistency models introduce a significant problem for systems architects, especially in cases where large amounts of data need to be replicated across wide-area networks. In such scenarios it may be suitable to use eventual consistency: although not always convenient, latency can then be partly traded for consistency guarantees so that data transfers do not impact performance. In contrast, this work proposes a broader range of data semantics for consistency that prioritises critical data at the cost of a minimum latency overhead on the remaining non-critical updates. Finally, we show how these semantics can help in finding an optimal data replication strategy that achieves just the required level of data consistency with low latency and more efficient network bandwidth utilisation.
Conference Paper
The development of Smart Grids requires supplementary ICT architectures in order to exploit the system's full potential. However, a green-field approach is not applicable when developing these architectures: the existing power distribution and transmission infrastructure stipulates numerous requirements. The field of requirements engineering, as a significant part of software engineering, is thus of high importance for the development process. In this contribution, requirements engineering is investigated in the context of Smart Grids. Different approaches are introduced and applied to software and architecture development in Smart Grids. Finally, a set of requirements and corresponding use cases for developing a Smart Grid ICT architecture is presented, derived using the established methodologies and specifications discussed in this contribution.
Article
Due to the continuous growth in the application of networks in manufacturing, quality of service (QoS) has become an important issue. In this paper, the concept of QoS for manufacturing networks is discussed. To provide overall performance assurance for manufacturing networks, a service framework integrating the QoS mechanisms of the networked resource service management function and the communication networks is proposed. The novel framework maps an application to resource services and then to communication networks, adopts an intelligent optimisation algorithm for QoS management of resource services, and provides QoS schemes for data transfer across communication networks. A prototype implementation has been realised and a set of simulation experiments conducted to evaluate the validity of the framework. The results obtained demonstrate the ability of the framework to satisfy the various performance requirements posed by such applications and provide efficient overall performance assurance for manufacturing networks.
Article
Cloud computing is revolutionizing the IT industry by enabling providers to offer access to their infrastructure and application services on a subscription basis. As a result, several enterprises, including IBM, Microsoft, Google, and Amazon, have started to offer different Cloud services to their customers. Due to the vast diversity of available Cloud services, it has become difficult from the customer's point of view to decide whose services to use and on what basis to select them. Currently, there is no framework that allows customers to evaluate Cloud offerings and rank them by their ability to meet the user's Quality of Service (QoS) requirements. In this work, we propose a framework and a mechanism that measure quality and prioritize Cloud services. Such a framework can make a significant impact and will create healthy competition among Cloud providers to satisfy their Service Level Agreements (SLAs) and improve their QoS. We show the applicability of the ranking framework using a case study.
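A QoS-based ranking of the kind this abstract proposes can be sketched as min-max normalisation per metric (inverting metrics where lower is better) followed by a weighted score. The metrics, weights, and provider figures below are invented for illustration and are not taken from the paper's framework.

```python
# Sketch: rank cloud providers by a weighted sum of min-max-normalised
# QoS metrics. Lower-is-better metrics (latency, cost) are inverted so
# that a higher score is always better. All numbers are illustrative.

def rank(providers, weights, lower_is_better=()):
    metrics = list(weights)
    lo = {m: min(p[m] for p in providers.values()) for m in metrics}
    hi = {m: max(p[m] for p in providers.values()) for m in metrics}

    def norm(m, v):
        if hi[m] == lo[m]:
            return 1.0
        x = (v - lo[m]) / (hi[m] - lo[m])
        return 1 - x if m in lower_is_better else x

    scores = {name: sum(weights[m] * norm(m, p[m]) for m in metrics)
              for name, p in providers.items()}
    return sorted(scores, key=scores.get, reverse=True)

providers = {
    "A": {"availability": 99.9, "latency": 120, "cost": 0.10},
    "B": {"availability": 99.5, "latency": 80,  "cost": 0.08},
    "C": {"availability": 99.0, "latency": 200, "cost": 0.05},
}
weights = {"availability": 0.5, "latency": 0.3, "cost": 0.2}
order = rank(providers, weights, lower_is_better=("latency", "cost"))
```

With these weights, provider A's high availability outweighs its higher latency and cost, so the ranking is A, B, C; shifting weight toward cost would favour C.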
Conference Paper
Recent studies of evolution at the molecular level address two important issues: reconstructing the evolutionary relationships between species and investigating the forces of the evolutionary process. Both issues have experienced explosive growth in the last two decades due to the massive generation of genomic data and to novel statistical and computational approaches for processing and analyzing this large volume of data. Most experiments in molecular evolution are based on computing-intensive simulations preceded by other computation tools and post-processed by validators. All these tools can be modeled as scientific workflows to improve experiment management while capturing provenance data. However, these evolutionary analysis experiments are very complex and may execute for weeks, so the workflows need to be executed in parallel in High Performance Computing (HPC) environments such as clouds. Clouds are becoming adopted for bioinformatics experiments due to characteristics such as elasticity and availability, and are evolving into HPC environments. In this paper, we introduce SciEvol, a bioinformatics scientific workflow for molecular evolution reconstruction that aims at inferring evolutionary relationships (i.e., detecting positive Darwinian selection) from genomic data. SciEvol is designed and implemented to execute in parallel over clouds using the SciCumulus workflow engine. Our experiments show that SciEvol can help scientists by enabling the reconstruction of evolutionary relationships in the cloud environment. Results present performance improvements of up to 94.64% in execution time compared to sequential execution, which drops from around 10 days to 12 hours.
Book
An introduction to the modeling of business information systems, with processes formally modeled using Petri nets. This comprehensive introduction to modeling business-information systems focuses on business processes. It describes and demonstrates the formal modeling of processes in terms of Petri nets, using a well-established theory for capturing and analyzing models with concurrency. The precise semantics of this formal method offers a distinct advantage for modeling processes over the industrial modeling languages found in other books on the subject. Moreover, the simplicity and expressiveness of the Petri nets concept make it an ideal language for explaining foundational concepts and constructing exercises. After an overview of business information systems, the book introduces the modeling of processes in terms of classical Petri nets. This is then extended with data, time, and hierarchy to model all aspects of a process. Finally, the book explores analysis of Petri net models to detect design flaws and errors in the design process. The text, accessible to a broad audience of professionals and students, keeps technicalities to a minimum and offers numerous examples to illustrate the concepts covered. Exercises at different levels of difficulty make the book ideal for independent study or classroom use.
Conference Paper
The popularity of networks in manufacturing is continuously growing, and quality of service (QoS) has become one of the most intriguing aspects for both the theory and practice of manufacturing networks. The concept of QoS for manufacturing networks is discussed, together with the classification and modeling of QoS issues. This paper then presents a QoS-aware service framework integrating the QoS mechanisms of both the MGrid technology and communication networks, thereby providing an integrated QoS guarantee for manufacturing systems in network environments. To evaluate the efficacy of the framework, a prototype implementation has been built and analyzed. The results demonstrate that the framework, QASF-MNet, is able to effectively satisfy the various performance requirements posed by the applications of manufacturing networks.
Conference Paper
Cloud computing emerges as a new computing paradigm which aims to provide reliable, customized, and QoS-guaranteed dynamic computing environments for end users. This paper reviews recent advances in Cloud computing, identifies the concepts and characteristics of scientific Clouds, and finally presents an example of a scientific Cloud for data centers.
Article
Business process redesign (BPR) has become one of the most seductive management ideas of the 1990s. It has been associated with information technology and accordingly information systems (IS) academics are expected to have a view on it, and IS practitioners are expected to be knowledgeable about it. This article seeks to decompose BPR into its principal elements, suggesting what is novel and what is not so new. Questions for research are suggested, frameworks for analysis proposed and implications for IS raised. Evidence is drawn from recent research studies and the experiences of some large US and UK companies.