Article

Evaluation of gang scheduling performance and cost in a cloud computing system

The Journal of Supercomputing (Impact Factor: 0.84). 02/2010; 59(2):975-992. DOI: 10.1007/s11227-010-0481-4
Source: DBLP

ABSTRACT: Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture, that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (VMs), which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced, new VMs can be leased and later released dynamically. A simulation of this model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment, both performance-wise and cost-wise.

Keywords: Cloud computing, Gang scheduling, HPC, Virtual machines
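
The abstract describes the mechanism only at a high level. As a rough, self-contained sketch (not the paper's actual simulator), the Python snippet below mimics a gang-scheduled workload on dynamically leased VMs: a gang starts only when enough VMs are available for all of its tasks, extra VMs are leased on demand, and idle VMs are released immediately. Gang sizes, runtimes, the per-VM-hour price, the VM cap, and the first-come-first-served queue order are assumptions made purely for illustration.

    # Minimal sketch (not the paper's simulator): gang-scheduled jobs on
    # dynamically leased VMs. Gang sizes, runtimes, the lease price, and the
    # first-come-first-served order are illustrative assumptions.
    import heapq
    import random

    LEASE_COST_PER_VM_HOUR = 0.10   # assumed flat, EC2-like hourly price

    class Job:
        def __init__(self, arrival, gang_size, runtime):
            self.arrival = arrival      # hours
            self.gang_size = gang_size  # all tasks of the gang must start together
            self.runtime = runtime      # hours

    def simulate(jobs, max_vms=64, step=0.1):
        """Step-based simulation; VMs are leased on demand and released when idle.
        Returns (average waiting time, total lease cost)."""
        queue = sorted(jobs, key=lambda j: j.arrival)   # FCFS-style gang queue
        running = []        # min-heap of (finish_time, vms_in_use)
        leased = idle = 0
        t, waits, vm_hours = 0.0, [], 0.0

        while queue or running:
            # Reclaim VMs from gangs that have finished by time t.
            while running and running[0][0] <= t:
                _, freed = heapq.heappop(running)
                idle += freed
            # Start queued gangs whose full VM requirement can be satisfied now.
            while queue and queue[0].arrival <= t:
                job = queue[0]
                shortfall = max(0, job.gang_size - idle)
                if leased + shortfall > max_vms:
                    break               # cannot lease enough VMs for this gang yet
                leased += shortfall     # lease just enough extra VMs
                idle += shortfall
                idle -= job.gang_size
                heapq.heappush(running, (t + job.runtime, job.gang_size))
                waits.append(t - job.arrival)
                vm_hours += job.gang_size * job.runtime
                queue.pop(0)
            # Simplest lease policy: release any VM the moment it goes idle.
            leased -= idle
            idle = 0
            t += step

        return sum(waits) / len(waits), vm_hours * LEASE_COST_PER_VM_HOUR

    if __name__ == "__main__":
        random.seed(1)
        workload = [Job(i * 0.2, random.randint(1, 8), random.uniform(0.5, 2.0))
                    for i in range(50)]
        avg_wait, cost = simulate(workload)
        print(f"average wait: {avg_wait:.2f} h, total lease cost: ${cost:.2f}")

Comparing such a run under different queue orderings (for example, largest-gang-first instead of first-come-first-served) is the kind of performance/cost trade-off the paper evaluates.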

  • ABSTRACT: Gang scheduling combines time-sharing with space-sharing to ensure a short response time for interactive tasks and high overall system throughput. It has been widely studied in different areas, including the Grid. Gang scheduling assigns the tasks belonging to one job to different Grid nodes. Task assignment has three targets: (1) keep the Grid at high resource utilization, (2) keep jobs at a low average waiting and execution time, and (3) keep the system fair between jobs. To meet these targets, we propose a new model based on the waiting time of the jobs, and then a new scheduling method, ZERO-ONE scheduling with multiple targets (ZEROONEMT), to solve Gang scheduling in the Grid. We have conducted extensive evaluations comparing our method with existing methods in a simulation environment and on a real log from a Grid. In the experiments, different baselines, including adapted first come first served and largest job first served, are used to test the performance of our method. Experimental results illustrate that the proposed ZEROONEMT reduces the average waiting time, the average response time, and the standard deviation of the waiting time of all jobs.
    Journal of Network and Systems Management 01/2014; DOI:10.1007/s10922-014-9312-x · 0.44 Impact Factor
  • ABSTRACT: Task scheduling is the problem of allocating, over time, various tasks to different resources. In this paper, we consider gang task scheduling on a heterogeneous multi-cluster system. Two types of jobs are considered: parallel and sequential. To reduce the fragmentation caused by gang scheduling, migration mechanisms were implemented. Moreover, global and local dispatchers distribute jobs in order to minimize delays in the task queues as well as in response time. Performance metrics were applied to compare the schedulers across different scenarios.
    ICNS 2014: The Tenth International Conference on Networking and Services; 04/2014
  • ABSTRACT: The development of computing and communication systems has gone through a spiral cycle of centralization and decentralization paradigms. The earliest computer systems were centralized mainframe computers. The paradigm moved toward decentralization as networked stations became more dependable, extensible, and cost-effective. Decentralized systems have their own limitations and inconveniences. The virtualization and cloud computing paradigm creates a system that appears to users to be centralized, where computing and communication resources are not in the client computers but in an integrated infrastructure that is accessible anywhere and anytime. Nevertheless, the implementation of the centralized infrastructure is equipped with decentralized and redundant resources, which makes the system more dependable, as component failures can be tolerated internally. The Internet of Things extends the cloud computing concept beyond computing and communication to include everything, particularly physical devices. This paper discusses the architectures, interfaces, and behaviors of intelligent devices connected to the cloud computing environment. Robot as a Service is the case study, which has all the key features of the Internet of Intelligent Things: it is autonomous, mobile, sensing, and action-taking. The goal is to further extend the centralized cloud computing environment into a decentralized system and complete another cycle of the spiral development. The goal is approached through autonomous, intelligent, mobile physical services, or robots as services, which form local pools of intelligent devices that can make local decisions without communicating with the cloud.
    Simulation Modelling Practice and Theory 05/2013; 34:159-171. DOI:10.1016/j.simpat.2012.03.006 · 1.05 Impact Factor
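
The first citing abstract above lists three placement targets (resource utilization, low waiting and execution time, and fairness between jobs) but does not detail how they are combined. Purely as a hedged illustration of balancing such targets, and not as the cited paper's ZEROONEMT formulation, a weighted node-scoring function might look like the following; the weights, node attributes, and value ranges are invented for the example.

    # Illustrative only -- NOT the ZEROONEMT method from the cited paper.
    # A toy weighted score for placing one gang task on a candidate Grid node,
    # covering the three stated targets: utilization, waiting time, fairness.
    def placement_score(node_utilization, expected_wait, job_waited_so_far,
                        w_util=0.4, w_wait=0.4, w_fair=0.2):
        """Higher is better. Weights and attribute ranges are assumptions."""
        utilization_gain = node_utilization                    # target 1: pack busy nodes
        wait_penalty = 1.0 / (1.0 + expected_wait)             # target 2: short queueing delay
        fairness_boost = min(job_waited_so_far, 10.0) / 10.0   # target 3: favor long waiters
        return w_util * utilization_gain + w_wait * wait_penalty + w_fair * fairness_boost

    # Example: choose among three hypothetical nodes for a job that has already
    # waited two hours; each node is (current utilization, expected wait in hours).
    candidates = {"node-a": (0.7, 0.5), "node-b": (0.2, 0.0), "node-c": (0.9, 3.0)}
    best = max(candidates, key=lambda n: placement_score(*candidates[n], 2.0))
    print("chosen node:", best)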
