Evaluation of gang scheduling performance and cost in a cloud computing system

The Journal of Supercomputing (Impact Factor: 0.92). 01/2010; 59(2):975-992. DOI: 10.1007/s11227-010-0481-4
Source: DBLP

ABSTRACT: Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage
to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm
for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed
Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture that implements a Gang Scheduling scheme.
Our model utilizes the concept of Virtual Machines (or VMs) which act as the computational units of the system. Initially,
the system includes no VMs, but depending on the computational needs of the jobs being serviced new VMs can be leased and
later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance
and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied
in a Cloud Computing environment both performance-wise and cost-wise.

Keywords: Cloud computing – Gang scheduling – HPC – Virtual machines
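The dynamic leasing described in the abstract (the system starts with no VMs and leases or releases them as the jobs being serviced demand) can be sketched roughly as follows; `Job`, `GangScheduler`, and the flat per-VM lease cost are illustrative assumptions, not the paper's actual simulation model:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    vms_needed: int   # gang size: every task needs its own VM, all at once
    runtime: float

@dataclass
class GangScheduler:
    """Illustrative sketch of the leasing idea (names and the flat
    per-VM lease cost are assumptions, not the paper's model)."""
    idle_vms: int = 0
    lease_cost_per_vm: float = 1.0
    total_cost: float = 0.0

    def schedule(self, job: Job) -> int:
        # Gang constraint: all of the job's tasks must start simultaneously,
        # so lease whatever VMs are missing from the idle pool.
        shortfall = max(0, job.vms_needed - self.idle_vms)
        self.total_cost += shortfall * self.lease_cost_per_vm
        self.idle_vms += shortfall - job.vms_needed  # leased, then occupied
        return shortfall

    def finish(self, job: Job) -> None:
        # On completion the job's VMs return to the idle pool, where they
        # can be reused by later jobs or released back to the provider.
        self.idle_vms += job.vms_needed

sched = GangScheduler()
print(sched.schedule(Job(1, vms_needed=4, runtime=10.0)))  # leases 4 VMs
sched.finish(Job(1, 4, 10.0))
print(sched.schedule(Job(2, vms_needed=3, runtime=5.0)))   # reuses idle VMs: 0 leased
```

The cost/performance trade-off the paper evaluates lives in the release decision: keeping idle VMs leased shortens later jobs' waits but keeps the meter running.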

  •
    ABSTRACT: Cloud computing provides a massive pool of resources under a pay-as-you-use policy, delivered on demand over the network under varying load conditions. Since users are charged by usage, effective resource utilization is a major challenge. Meeting it requires a service request scheduling algorithm that reduces the waiting time of tasks in the scheduler and maximizes Quality of Service (QoS). Our proposed algorithm, the Effective Resource Utilization Algorithm (ERUA), is based on a 3-tier cloud architecture (Consumer, Service Provider, and Resource Provider) and benefits both the user (QoS) and the service provider (cost) through schedule reallocation based on utilization ratio, leading to better resource utilization. Performance analysis against existing scheduling techniques shows that our algorithm produces a more optimized schedule and a higher efficiency rate.
    I INTRODUCTION. Cloud computing is an emerging, enabling technology that has made us think beyond what was previously possible. Recognizing the services and amenities the cloud provides, many organizations have decided to move into the cloud to reduce infrastructure cost and energy consumption. The cloud lets them run their business with a different range and style of services, and it has changed the traditional way of using resource infrastructure. Service request scheduling is the most crucial area with respect to the service provider's profit and the user's QoS. Cloud computing services are offered through a 3-tier architecture, which, with respect to service request scheduling, comprises the resource provider, the service providers, and the consumers. To service a consumer's request, the service provider must either procure new hardware resources or rent them from the resource provider; renting incurs less cost than buying. The service provider hires resources from the resource provider and dynamically creates Virtual Machine (VM) instances to serve the consumers, while the resource provider takes on the responsibility of dispatching the VMs to physical servers. Charges for a running instance are based on a flat rate per time unit. Users submit requests to process an application consisting of one or more services; these services, along with their time and cost parameters, are sent to the service provider. In general, the actual processing time of a request is much longer than its estimated time because some delay is incurred at the service provider's site. Since the cloud is a "pay-as-you-use" utility, the service provider needs to reduce response time and delay, and service request scheduling becomes essential to maximize the service provider's profit and to improve the QoS offered to the user. Earlier research contributions toward service request scheduling include server consolidation [1], an optimized service scheduling algorithm [2], a scheduling policy based on priority and admission control [3], integration of VMs for sorting tasks by profit [4], a multiple pheromone algorithm [5], gang scheduling on VMs [6], a utility model balancing profit between the user and the service provider [7], dynamic service request resource allocation through First In First Out (FIFO) [8], Service Level Agreement (SLA) creation, management, and usage in utility computing [9], scheduling of dynamic user requests to maximize the service provider's profit [10], Ant Colony Optimization (ACO) [11], Particle Swarm Optimization (PSO) [12], decentralized dynamic distribution of user requests among application services [13], a genetic-algorithm-based scheduling algorithm that reduces waiting time [14], task consolidation heuristics with respect to idle and active energy consumption [15], a processor-sharing pricing model using max_profit and max_utility [16], optimized service request to resource mapping using a genetic algorithm [17], and a dynamic priority scheduling algorithm [18]. Our algorithm, ERUA, schedules task units based on the utilization ratio of the queue, always ensuring that the utilization ratio falls within 1, leading to better resource utilization.
    Ramkumar N et al. / International Journal of Engineering and Technology (IJET)
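The utilization-ratio idea can be illustrated with a toy rebalancer; the greedy move rule, the per-queue scheduling window, and all names below are assumptions made for illustration, since the abstract does not spell out the actual ERUA procedure:

```python
def utilization_ratio(queue, window):
    """Ratio of a queue's total demanded service time to its scheduling
    window; a ratio above 1 means the queue is over-committed."""
    return sum(queue) / window

def rebalance(queues, window):
    """Toy reallocation: repeatedly move the smallest task off an
    over-utilized queue onto the least-loaded queue, but only when the
    move keeps the target's utilization ratio within 1."""
    moved = True
    while moved:
        moved = False
        for q in queues:
            if q and utilization_ratio(q, window) > 1:
                target = min(queues, key=sum)
                task = min(q)
                if target is not q and (sum(target) + task) / window <= 1:
                    q.remove(task)
                    target.append(task)
                    moved = True
    return queues

queues = [[0.5, 0.4, 0.3], [0.1]]   # first queue is over-committed (ratio 1.2)
rebalance(queues, window=1.0)
print(queues)   # the 0.3 task migrates to the lightly loaded queue
```

The invariant this sketch preserves matches the abstract's claim: after reallocation no queue's utilization ratio exceeds 1, so no VM's schedule is over-committed.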
  •
    ABSTRACT: Cloud Computing can be viewed as a dynamically scalable pool of resources, with virtualization as one of the key technologies enabling Cloud Computing functionality. Virtual machine (VM) scheduling and allocation are essential in a Cloud Computing environment. In this paper, two dynamic VM scheduling and allocation schemes are presented and compared: one allocates VMs dynamically on demand, while the other deploys an optimal threshold to control the scheduling and allocation of VMs. The aim is to allocate virtual resources dynamically among Cloud Computing applications based on their load changes, improving resource utilization and reducing user cost. The schemes are implemented using SimPy, and the simulation results show that the proposed adaptive scheme with one threshold can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise.
    International Journal of Contents. 01/2012; 8(4).
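A minimal version of such a threshold scheme might look like the class below; the single-threshold rule on the waiting count is an assumption for illustration only (the paper's scheme is defined over its SimPy model, which the abstract does not detail):

```python
class ThresholdAllocator:
    """Sketch of a threshold-controlled VM pool: lease a VM only when the
    number of waiting requests exceeds the threshold, and release an idle
    VM when a service completes with nobody waiting."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.vms = 0        # VMs currently leased (and busy)
        self.waiting = 0    # requests not yet assigned a VM

    def arrive(self) -> None:
        self.waiting += 1
        if self.waiting > self.threshold:
            self.vms += 1       # lease one more VM on demand...
            self.waiting -= 1   # ...and hand it the oldest waiting request

    def depart(self) -> None:
        # A VM finished its request: reuse it if work is queued, else
        # release it so the user stops paying for it.
        if self.waiting > 0:
            self.waiting -= 1
        elif self.vms > 0:
            self.vms -= 1

pool = ThresholdAllocator(threshold=2)
for _ in range(3):
    pool.arrive()
print(pool.vms, pool.waiting)   # one VM leased once the queue exceeds 2
```

A lower threshold leases VMs sooner (shorter waits, higher cost); a higher threshold does the reverse — this is precisely the performance/cost trade-off the simulation measures.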
  •
    ABSTRACT: Recent studies have found cloud environments increasingly appealing for executing HPC applications, including tightly coupled parallel simulations. While public clouds offer elastic, on-demand resource provisioning and pay-as-you-go pricing, individual users setting up their own on-demand virtual clusters may not be able to take full advantage of common cost-saving opportunities such as reserved instances. In this paper, we propose a Semi-Elastic Cluster (SEC) computing model for organizations to reserve and dynamically resize a virtual cloud-based cluster. We present a set of integrated batch scheduling plus resource scaling strategies uniquely enabled by SEC, as well as an online reserved instance provisioning algorithm based on job history. Our trace-driven simulation results show that such a model achieves a 61.0% cost saving compared to individual users acquiring and managing cloud resources, without increasing average job wait time. Meanwhile, the overhead of acquiring/maintaining shared cloud instances is shown to take only a few seconds.
    Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis; 11/2013
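The reserved-versus-on-demand trade-off behind SEC's history-based provisioning can be illustrated with a hypothetical exhaustive search over a recorded demand trace; the rates and the search itself are assumptions for illustration, not the paper's online algorithm:

```python
def provision_reserved(hourly_demand, ondemand_rate, reserved_rate, reserved_upfront):
    """Pick the number of reserved instances minimizing total cost over a
    demand trace (VMs needed in each hour). Reserved instances pay an
    upfront fee plus a discounted hourly rate for the whole horizon;
    demand above the reserved count is covered by on-demand instances.
    The pricing model here is deliberately simplified."""
    horizon = len(hourly_demand)
    best_n, best_cost = 0, float("inf")
    for n in range(max(hourly_demand) + 1):
        cost = n * (reserved_upfront + reserved_rate * horizon)
        cost += sum(max(0, d - n) * ondemand_rate for d in hourly_demand)
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n, best_cost

# Steady demand favours reserving everything; bursty demand favours a
# small reserved base topped up with on-demand instances.
print(provision_reserved([4, 4, 4, 4], 1.0, 0.3, 0.5))   # reserves 4
print(provision_reserved([1, 1, 1, 8], 1.0, 0.3, 0.5))   # reserves 1
```

An organization pooling many users' jobs sees a smoother aggregate trace than any individual user, which is why shared reserved capacity can yield the kind of saving the paper reports.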