Deadline-Aware Virtual Machine Scheduler for Grid and Cloud Computing

Advanced Information Networking and Applications Workshops, International Conference on 01/2010; DOI: 10.1109/WAINA.2010.107
Source: DBLP


Virtualization technology decouples applications from the underlying hardware, providing portability, better control over the execution environment, and isolation, and it has been widely adopted in scientific grids and commercial clouds. Despite these benefits, virtualization incurs a performance penalty that can be significant for High Performance Computing (HPC) applications, where jobs have tight deadlines and may depend on other jobs before they can run. The major obstacle lies in bridging the gap between the performance a job requires and the performance virtualization offers when jobs are executed in virtual machines. In this paper, we present a novel approach to meeting job deadlines in virtual machines: a deadline-aware algorithm that responds to job execution delays in real time and dynamically optimizes jobs to meet their deadline obligations. Our approaches borrow concepts from both signal processing and statistical techniques; their comparative performance results are presented later in the paper, including the impact on the utilization rate of hardware resources.
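The abstract's core idea (monitor job progress, smooth the noisy measurements with a signal-processing-style filter, and flag jobs whose projected finish would miss the deadline) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the `Job` fields, the EWMA filter, and the `safety_margin` parameter are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float        # seconds from now until the deadline
    remaining_work: float  # abstract work units left to process
    rate: float            # smoothed progress rate (work units/second)

def smooth_rate(prev_rate, observed_rate, alpha=0.3):
    """Exponentially weighted moving average of observed progress --
    a simple filter (in the spirit of the signal-processing techniques
    the paper mentions) that damps noise in per-interval measurements."""
    return alpha * observed_rate + (1 - alpha) * prev_rate

def needs_boost(job, safety_margin=1.1):
    """Project the completion time from the smoothed rate and flag jobs
    whose projected finish (plus a safety margin) would overrun the
    deadline, so the scheduler can give their VM more resources."""
    if job.rate <= 0:
        return True  # no measurable progress: definitely at risk
    projected = job.remaining_work / job.rate
    return projected * safety_margin > job.deadline

# Example: a job whose observed rate has dropped gets flagged.
j = Job("render", deadline=100.0, remaining_work=900.0,
        rate=smooth_rate(10.0, 6.0))  # smoothed rate = 8.8 units/s
print(needs_boost(j))  # projected 900/8.8 ~= 102.3 s > 100 s deadline
```

A real scheduler would then act on the flag, e.g. by raising the VM's CPU share or migrating it to a less loaded host; the EWMA keeps a single slow interval from triggering spurious reallocations.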

    • "[11], a deadline-aware scheduling algorithm is derived to deal with subdivided jobs that rely on each other and are constrained by a deadline. These problems are closely related to the machine sequencing problem of combinatorial optimization."
    ABSTRACT: With the advent of cloud computing, organizations tend to buy services from the data centers of major cloud vendors. In contrast, community cloud computing offers an alternative way to reduce costs, or even obtain free resources, by sharing among communities. Because the utilization of computing resources in an organization is not constantly 100%, other members of the community can exploit the excess. In this paper, we design an algorithm for admission control and resource allocation that deals with unreliable excess computing resources. Furthermore, we introduce a social price to manage the allocation more efficiently, both in terms of social relations and of revenue.
    Preview · Conference Paper · Jul 2011
  •
    ABSTRACT: Computational grids, generally used for scientific applications, are fully utilized only at certain times. During those periods, a shortage of grid resources occurs and delays the execution of jobs; in such situations, grid jobs can be migrated to a private cloud for execution. Conversely, when the private cloud reaches its peak load, it can utilize idle grid resources and thereby gain more resources dynamically. We propose an architecture that integrates the grid and the private cloud using an integrator component along with a storage cluster, which is responsible for managing resources and executing jobs over the grid and the private cloud whenever either lacks resources. The architecture supports dynamic clustering over the virtual resources and the formation of a Virtual Organization in the integrated environment.
    Keywords: Grid Cloud integration – Virtual Machine on grid – Grid Storage cluster – Grid Resource expansion – Cloud burst
    No preview · Chapter · Dec 2010
  •
    ABSTRACT: High Performance Computing (HPC) leverages clusters of computing nodes, exploiting their computational capacity to handle varying job submissions for scientific computing work. Inappropriate capacity planning and management mechanisms leave an HPC cluster with a large number of pending jobs, a critical factor affecting the system's throughput, and result in inefficiency and wasted capacity cost. This paper therefore proposes an autonomic capacity management approach to overcome these issues. First, we survey recent related research in depth and find that it lacks consideration of a computing node's personality, which is crucial to serving job submissions: when there are insufficient computing nodes associated with a given personality, submitted jobs are likely to become pending. We then present measures for autonomic capacity management that take advantage of cloud insight to provision capacity dynamically on demand. These measures self-adaptively adjust cluster capacity to form different personalities for varying job submissions, repurposing capacity from an idle personality with few running jobs to another with higher demand from pending jobs. Finally, we verify through simulations that our proposals significantly optimize throughput by cost-efficiently reducing the number of pending jobs.
    No preview · Article · May 2012