Autonomic Virtual Resource Management for Service Hosting Platforms

Proceedings of the Workshop on Software Engineering Challenges in Cloud Computing 05/2009; DOI: 10.1109/CLOUD.2009.5071526
Source: OAI

ABSTRACT Cloud platforms host several independent applications on a shared resource pool with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provides great flexibility, with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine's capacity, and to migrate virtual machines across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both the high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
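The global utility function described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, the linear cost model, and the weighting factor `epsilon` are assumptions for the example, not the paper's actual formulation.

```python
def global_utility(apps, active_servers, cost_per_server=1.0, epsilon=0.5):
    """Global utility: aggregate SLA fulfillment minus operating cost.

    apps: list of dicts with 'allocated' and 'demand' capacity (illustrative).
    epsilon: assumed weighting factor trading service quality against cost.
    """
    # Degree of SLA fulfillment: fraction of demanded capacity actually
    # allocated, averaged over all hosted applications (capped at 1.0).
    sla = sum(min(a["allocated"] / a["demand"], 1.0) for a in apps) / len(apps)
    # Operating cost modeled as proportional to powered-on physical servers.
    cost = cost_per_server * active_servers
    return sla - epsilon * cost
```

A manager maximizing this quantity will consolidate VMs onto fewer servers until the marginal loss in SLA fulfillment outweighs the power savings.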

  • Source
    • "Their algorithms take both optimisation and fairness into account and provide a relatively good compromise resource allocation. Van et al. (2009b) proposed an autonomic resource manager to control the virtualised environment, which decouples the provisioning of resources from the dynamic placement of virtual machines. The manager aims to optimise a global utility function which integrates both the degree of SLA fulfilment and the operating costs. "
    International Journal of Web and Grid Services 01/2015; 11(2):193. DOI:10.1504/IJWGS.2015.068899 · 1.58 Impact Factor
  • Source
    • "As cloud management is a specialization of the management of distributed computing systems, it inherits many techniques from traditional computer network management. However, as cloud computing environments are considerably more complex than those of legacy distributed computing [1], new management methods and tools need to be implemented. The introduction of multiple cloud platforms (such as VMware, Hyper-V, OpenStack and others) and the monitoring of crucial aspects from a centralized point are a challenging task. "
  • Source
    • "In addition, various optimization techniques have been implemented in virtualized cloud environments to minimize server sprawl and SLO violations. Constraint Satisfaction Problem formulations and various NP-hard optimization techniques such as Vector Bin Packing are utilized to host a fixed number of VMs on a minimum number of PMs, as well as to allocate VMs in such a manner that fewer migration requests are issued [3], [12]. Those systems are similar to the previously discussed system because of the presence of a Global Resource Arbiter. "
    ABSTRACT: Resource provisioning is critical for cloud computing because it manages the virtual machines (VMs) and allocated resources. Traditional resource provisioning schemes make decisions based on centralized configuration and global calculation of resource allocation. However, these provisioning frameworks become a bottleneck in large deployments, leading to Service Level Objective (SLO) violations, and they are exposed to a single point of failure as well. The proposed provisioning scheme is based on a Peer-to-Peer (P2P) architecture with no single decision maker. Each node in the data center makes its own provisioning decisions regarding VM allocation and migration, which are resolved with Multi-Attribute Utility Theory (MAUT) methods. Simulation experiments demonstrate that, in comparison with centralized schemes, the proposed scheme generates 60.27% fewer SLO violations and 83.58% fewer VM migrations on average. This outcome shows that the system can manage its resources better than the centralized scheme.
    Third International Innovative Computing Technology (INTECH), London; 08/2013
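The vector bin packing formulation mentioned in the excerpts above can be illustrated with a first-fit-decreasing heuristic for the one-dimensional case. This is a classical approximation sketch, not the exact algorithm of any cited paper; the function name, scalar demands, and uniform PM capacity are assumptions for the example.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Pack VM resource demands onto as few physical machines (PMs) as possible.

    vm_demands: list of scalar resource demands (illustrative, one dimension).
    pm_capacity: uniform capacity assumed for every PM.
    Returns a list of PMs, each a list of the demands placed on it.
    """
    pms = []  # each entry: [free_capacity, [placed demands]]
    for demand in sorted(vm_demands, reverse=True):
        for pm in pms:
            if pm[0] >= demand:
                # Place the VM on the first PM with enough free capacity.
                pm[0] -= demand
                pm[1].append(demand)
                break
        else:
            # No existing PM fits: power on a new one.
            pms.append([pm_capacity - demand, [demand]])
    return [placed for _, placed in pms]
```

Sorting demands in decreasing order before placing them is what keeps the number of powered-on PMs (and hence operating cost) close to the optimum; real placement engines extend this to multiple resource dimensions (CPU, memory) and add migration-cost terms.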