Article

Deadline Aware Virtual Machine Scheduler for Grid and Cloud Computing

Advanced Information Networking and Applications Workshops, International Conference on 01/2010; DOI: 10.1109/WAINA.2010.107
Source: DBLP

ABSTRACT Virtualization technology enables applications to be decoupled from the underlying hardware, providing portability, better control over the execution environment, and isolation. It has been widely adopted in scientific grids and commercial clouds. Despite these benefits, virtualization incurs a performance penalty that can be significant for High Performance Computing (HPC) applications, where jobs have tight deadlines and may depend on the completion of other jobs before they can run. The major obstacle lies in bridging the gap between a job's performance requirements and the performance offered by the virtualization technology when jobs are executed in virtual machines. In this paper, we present a novel approach to meeting job deadlines in virtual machines: a deadline-aware algorithm that responds to job execution delays in real time and dynamically adjusts jobs so that they meet their deadline obligations. Our approaches borrow concepts from both signal processing and statistical techniques; their comparative performance results are presented later in the paper, including the impact on hardware resource utilization.
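The abstract mentions a deadline-aware algorithm that reacts to execution delays in real time using signal-processing and statistical ideas, but does not spell it out here. Below is a minimal Python sketch of one way such monitoring could look, assuming an exponentially weighted moving average (EWMA) of the observed slowdown; the Job and DeadlineAwareMonitor classes, the smoothing factor, and the reaction taken on a projected miss are all illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass
import time

@dataclass
class Job:
    name: str
    deadline: float          # absolute time by which the job must finish
    remaining_work: float    # estimated seconds of work left at nominal speed

class DeadlineAwareMonitor:
    """Illustrative only: tracks the smoothed slowdown of a job running in a VM
    and flags the job when its projected finish time misses the deadline."""

    def __init__(self, job: Job, alpha: float = 0.3):
        self.job = job
        self.alpha = alpha      # EWMA smoothing factor (signal-processing flavour)
        self.slowdown = 1.0     # ratio of expected to observed progress, smoothed

    def observe(self, expected_progress: float, actual_progress: float) -> None:
        # Update the exponentially weighted estimate of how slowly the VM is running.
        if actual_progress > 0:
            sample = expected_progress / actual_progress
            self.slowdown = self.alpha * sample + (1 - self.alpha) * self.slowdown

    def projected_finish(self, now: float) -> float:
        return now + self.job.remaining_work * self.slowdown

    def needs_action(self, now: float) -> bool:
        # True when the smoothed projection says the deadline will be missed.
        return self.projected_finish(now) > self.job.deadline

# Hypothetical usage: poll periodically and react to a projected miss.
job = Job(name="hpc-task-1", deadline=time.time() + 500, remaining_work=400)
monitor = DeadlineAwareMonitor(job)
monitor.observe(expected_progress=10.0, actual_progress=4.0)  # VM running well below expected speed
if monitor.needs_action(time.time()):
    print("projected deadline miss: request more CPU for the VM or reschedule the job")
```

The EWMA acts as a simple low-pass filter, so a single noisy progress sample does not by itself trigger an unnecessary rescheduling decision.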

Related publications:

  • ABSTRACT: Cloud computing delivers IT resources as a service over the Internet, and its adoption for application deployment increasingly depends on the Quality of Service the provider can offer. Virtualization provides the foundation for this: its advantages include portability, efficient hardware utilization, ease of maintenance, and cost benefits, while its main drawback is performance degradation when virtualized resources are not allocated uniformly. In this paper, user requests are split according to their deadline type (unlimited or tight). To meet tight-deadline requirements we create a heterogeneous cloud computing environment in which virtual machines with different configurations are dynamically created and destroyed. The CloudSim toolkit is used to simulate this heterogeneous environment on a node, driven by the deadlines of incoming user requests. We show that our approach improves the request rejection ratio, deadline miss rate, response time, and resource utilization, and thereby the overall Quality of Service (QoS). [An illustrative sketch of this deadline-based placement idea appears after this list.]
    Advances in Computing, Communications and Informatics (ICACCI), 2013 International Conference on; 01/2013
  • ABSTRACT: With the advent of cloud computing, organizations tend to buy services from the data centers of major cloud vendors. Community cloud computing, as described in earlier work, offers an alternative way to reduce costs, or even to obtain resources for free, by sharing within a community: because an organization's computing resources are not utilized at 100% all the time, other members of the community can exploit the excess. In this paper, we design an algorithm for admission control and resource allocation that copes with this unreliable excess capacity. Furthermore, we introduce a social price in order to manage the allocation more efficiently, both in terms of social relations and of revenue. [A minimal admission-control sketch appears after this list.]
    Communications (ICC), 2011 IEEE International Conference on; 07/2011
  • ABSTRACT: High Performance Computing (HPC) clusters combine sets of computing nodes whose computational capacity is used to handle varying job submissions for scientific computing. Inappropriate capacity planning and management can leave an HPC cluster with a large number of pending jobs, a critical factor that limits system throughput and leads to inefficiency and wasted capacity cost. This paper therefore proposes an autonomic capacity-management approach to overcome these issues. We first survey recent related research in depth and find that it overlooks a computing node's "personality", which is crucial to serving job submissions: when too few nodes carry the personality a job requires, the job stays pending. We then present measures for autonomic capacity management that use cloud techniques to provision capacity dynamically on demand. These measures self-adaptively adjust cluster capacity to form different personalities for varying job submissions, repurposing capacity from an idle personality with few running jobs to one with a high demand of pending jobs. Finally, we verify through simulations that the proposal achieves optimized throughput cost-efficiently by reducing the number of pending jobs. [A brief node-repurposing sketch appears below.]
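As a companion to the first publication listed above, the following sketch shows one way requests could be bifurcated by deadline type and mapped onto a heterogeneous VM pool. The Request and Vm types, the sizing rule, and the pool names are hypothetical; the paper's actual CloudSim-based implementation is not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    cpu_demand: int              # requested cores
    deadline_s: Optional[float]  # None means an unlimited deadline

@dataclass
class Vm:
    cores: int
    pool: str

def place_request(req: Request, free_cores: int) -> Optional[Vm]:
    """Hypothetical bifurcation: tight-deadline requests get a dedicated,
    dynamically created VM; unlimited-deadline requests share a best-effort pool."""
    if req.deadline_s is None:
        # Best-effort: any small VM in the shared pool will do.
        return Vm(cores=1, pool="best-effort")
    if req.cpu_demand <= free_cores:
        # Tight deadline: create a VM sized to the demand so the deadline can be met.
        return Vm(cores=req.cpu_demand, pool="tight-deadline")
    return None  # reject rather than accept a request that would miss its deadline

# Example: a tight-deadline request gets its own right-sized VM.
print(place_request(Request(cpu_demand=4, deadline_s=120.0), free_cores=8))
```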
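For the second publication, the admission-control idea (accept a request only if the community's unreliable excess capacity can cover it and the offered price clears a social price) might be captured by a rule of the following shape. Every name and threshold in this sketch is an assumption for illustration only.

```python
def admit(requested_units: float,
          advertised_excess: float,
          reliability: float,      # 0..1, how dependable the excess capacity is
          offered_price: float,
          social_price: float) -> bool:
    """Hypothetical admission check for community-cloud resource sharing."""
    usable_excess = advertised_excess * reliability   # discount unreliable capacity
    return requested_units <= usable_excess and offered_price >= social_price

# Example: 10 units requested against 20 advertised units that are 60% reliable.
print(admit(10, 20, reliability=0.6, offered_price=1.2, social_price=1.0))  # True
```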
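For the third publication, the repurposing of idle nodes from one personality pool to another with pending jobs can be pictured with the toy rebalancing function below; the dictionary-based pool representation and the one-node-at-a-time policy are assumptions made for illustration.

```python
from typing import Dict

def rebalance(pools: Dict[str, dict]) -> Dict[str, dict]:
    """Toy sketch: move one idle node from the least-loaded personality pool
    to the pool with the most pending jobs."""
    donor = min(pools, key=lambda p: pools[p]["pending"])
    needy = max(pools, key=lambda p: pools[p]["pending"])
    if donor != needy and pools[donor]["idle_nodes"] > 0 and pools[needy]["pending"] > 0:
        pools[donor]["idle_nodes"] -= 1
        pools[needy]["nodes"] = pools[needy].get("nodes", 0) + 1  # repurposed node joins
    return pools

# Example: the 'mpi' personality is idle while 'batch' has a backlog.
pools = {
    "mpi":   {"idle_nodes": 3, "pending": 0},
    "batch": {"idle_nodes": 0, "pending": 12},
}
print(rebalance(pools))
```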