Autonomic virtual resource management for service hosting platforms

Proceedings of the Workshop on Software Engineering Challenges in Cloud Computing 01/2009; DOI: 10.1109/CLOUD.2009.5071526
Source: OAI

ABSTRACT: Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provides great flexibility, with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine's capacity, and to migrate virtual machines across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both the high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
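The global utility function described above, a degree of SLA fulfillment minus operating cost maximized under a capacity constraint, can be illustrated with a toy model. The sketch below uses made-up application demands, a linear SLA utility, and exhaustive enumeration in place of a real Constraint Programming solver; it is an illustrative assumption, not the paper's formulation:

```python
from itertools import product

# Toy sketch of the idea, not the paper's model: maximize a global
# utility (SLA fulfilment minus operating cost) subject to a capacity
# constraint. All names and values below are illustrative assumptions.
CAPACITY = 8                                  # CPU units in the shared pool
UNIT_COST = 0.1                               # operating cost per allocated unit
DEMANDS = {"app1": 4, "app2": 3, "app3": 5}   # units needed for full SLA

def sla_utility(alloc, demand):
    """Degree of SLA fulfilment in [0, 1], linear up to the demand."""
    return min(alloc, demand) / demand

def global_utility(allocation):
    """SLA fulfilment summed over applications, minus operating cost."""
    sla = sum(sla_utility(a, DEMANDS[app]) for app, a in allocation.items())
    return sla - UNIT_COST * sum(allocation.values())

# Exhaustive search over all feasible integer allocations.
best_utility, best_alloc = float("-inf"), None
for units in product(range(CAPACITY + 1), repeat=len(DEMANDS)):
    if sum(units) > CAPACITY:                 # capacity constraint
        continue
    candidate = dict(zip(DEMANDS, units))
    u = global_utility(candidate)
    if u > best_utility:
        best_utility, best_alloc = u, candidate
```

Under these toy numbers the search fully satisfies the two applications whose per-unit SLA gain exceeds the unit cost the most, then gives the leftover capacity to the third; a real CP solver would prune the same search space via constraint propagation rather than enumerating it.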

  • ABSTRACT: Cloud computing is a newly emerged computing infrastructure that builds on the latest achievements of diverse research areas, such as Grid computing, service-oriented computing, business process management, and virtualization. An important characteristic of Cloud-based services is the provision of non-functional guarantees in the form of Service Level Agreements (SLAs), such as guarantees on execution time or price. However, due to system malfunctions, changing workload conditions, and hardware and software failures, established SLAs can be violated. In order to avoid costly SLA violations, flexible and adaptive SLA attainment strategies are needed. In this paper we present a self-manageable architecture for SLA-based service virtualization that eases interoperable service execution in a diverse, heterogeneous, distributed, and virtualized world of services. We demonstrate that the combination of negotiation, brokering, and deployment, using SLA-aware extensions and autonomic computing principles, is required for achieving reliable and efficient service operation in distributed environments.
    Future Generation Computer Systems 03/2014; 32:54-68. · 2.64 Impact Factor
  • ABSTRACT: Resource provisioning is critical for cloud computing because it manages virtual machines (VMs) and their allocated resources. Traditional resource provisioning schemes make decisions based on centralized configuration and a global calculation of resource allocation. However, such frameworks scale poorly in large deployments, leading to Service Level Objective (SLO) violations, and they are also exposed to a single point of failure. The proposed provisioning scheme is based on a Peer-to-Peer (P2P) architecture with no single decision maker. Each node in the data center makes its own provisioning decisions regarding VM allocation and migration, which are resolved with Multi-Attribute Utility Theory (MAUT) methods. Simulation experiments demonstrate that, in comparison with centralized schemes, the proposed scheme generates 60.27% fewer SLO violations and 83.58% fewer VM migrations on average. This outcome shows that the system manages its resources better than a centralized scheme.
    Third International Innovative Computing Technology (INTECH), London; 08/2013
  • ABSTRACT: With immense success and rapid growth within the last few years, cloud computing has established itself as the dominant paradigm of the IT industry. In order to meet the increasing demand for computing and storage resources, infrastructure cloud providers are deploying planet-scale data centers across the world, consisting of hundreds of thousands, even millions, of servers. These data centers incur very high investment and operating costs for the compute and network devices as well as for energy consumption. Moreover, because of their huge energy usage, such data centers leave large carbon footprints and thus have adverse effects on the environment. As a result, efficient computing resource utilization and reduced energy consumption are becoming crucial to the success of cloud computing. Intelligent workload placement and relocation is one of the primary means to address these issues. This chapter presents an overview of infrastructure resource management systems and technologies, and a detailed description of the proposed solution approaches for efficient cloud resource utilization and minimization of power consumption and resource wastage. Different types of server consolidation mechanisms are presented along with the solution approaches proposed by researchers in both academia and industry. Various aspects of workload reconfiguration mechanisms and existing work on workload relocation techniques are described.
    11/2014: pages 33;
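The MAUT-based resolution described in the P2P provisioning abstract above can be sketched in miniature: a node scores each candidate host for a VM with an additive multi-attribute utility and picks the best one. The weights, attributes, and values below are illustrative assumptions, not the paper's model:

```python
# Hedged sketch of a MAUT-style scoring step. Attributes are assumed to
# be pre-normalized to [0, 1]; weights sum to 1. All names and numbers
# here are made up for illustration.
WEIGHTS = {"cpu_free": 0.5, "mem_free": 0.3, "low_migration_cost": 0.2}

def maut_score(candidate):
    """Additive multi-attribute utility: weighted sum of attribute values."""
    return sum(w * candidate[attr] for attr, w in WEIGHTS.items())

candidates = {
    "hostA": {"cpu_free": 0.7, "mem_free": 0.4, "low_migration_cost": 0.9},
    "hostB": {"cpu_free": 0.5, "mem_free": 0.8, "low_migration_cost": 0.6},
}
best_host = max(candidates, key=lambda h: maut_score(candidates[h]))
```

The additive form is the simplest MAUT aggregation; a deployed scheme would also negotiate the decision with peers rather than score locally in isolation.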
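Server consolidation, as surveyed in the book chapter abstract above, is commonly reduced to bin packing: place VM loads on as few powered-on servers as possible. A minimal sketch of the classic first-fit-decreasing heuristic follows; the loads are made up and this is not any specific surveyed system:

```python
# First-fit decreasing (FFD) consolidation sketch: sort VM loads in
# descending order, place each on the first server with enough remaining
# capacity, and power on a new server only when none fits.
def consolidate_ffd(vm_loads, server_capacity):
    """Return (number of active servers, [(load, server_index), ...])."""
    servers = []     # remaining capacity of each active server
    placement = []   # which server each load landed on
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                placement.append((load, i))
                break
        else:                                   # no server fits
            servers.append(server_capacity - load)
            placement.append((load, len(servers) - 1))
    return len(servers), placement

# Example: five VM loads on servers of normalized capacity 1.0.
n_servers, placement = consolidate_ffd([0.5, 0.7, 0.3, 0.2, 0.4],
                                       server_capacity=1.0)
```

FFD is a well-known approximation for bin packing; consolidation managers in the literature typically extend it with multiple resource dimensions (CPU, memory, network) and migration-cost awareness.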

