Article

Abstract

Cloud computing (CC) is fast-growing and frequently adopted in information technology (IT) environments because of the benefits it offers. Task scheduling and load balancing are among the hot topics in the realm of CC. To overcome the shortcomings of existing task scheduling and load balancing approaches, we propose a novel approach that uses dominant sequence clustering (DSC) for task scheduling and a weighted least connection (WLC) algorithm for load balancing. First, users’ tasks are clustered using the DSC algorithm, which represents user tasks as a graph of one or more clusters. After task clustering, each task is ranked using a Modified Heterogeneous Earliest Finish Time (MHEFT) algorithm, where the highest-priority task is scheduled first. Afterwards, virtual machines (VMs) are clustered using a mean shift clustering (MSC) algorithm with kernel functions. Load balancing is subsequently performed using the WLC algorithm, which distributes the load based on server weight and capacity as well as client connectivity to the server. A highly weighted or least connected server is selected for task allocation, which in turn improves the response time. Finally, we evaluate the proposed architecture using metrics such as response time, makespan, resource utilization, and service reliability.
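
For illustration, here is a minimal Python sketch of the weighted least connection (WLC) selection step described above; the server names, weights, and connection counts are hypothetical, and this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): weighted least connection (WLC).
# Each server is scored by its active connections relative to its weight
# (derived from capacity); the task goes to the lowest-scoring server.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    weight: float          # capacity-derived weight (assumed > 0)
    connections: int = 0   # current number of client connections

def select_server_wlc(servers):
    """Return the server with the smallest connections-to-weight ratio."""
    return min(servers, key=lambda s: s.connections / s.weight)

if __name__ == "__main__":
    pool = [Server("vm-a", weight=4.0, connections=8),
            Server("vm-b", weight=2.0, connections=2),
            Server("vm-c", weight=1.0, connections=3)]
    chosen = select_server_wlc(pool)
    chosen.connections += 1   # allocate the task to the chosen server
    print(chosen.name)        # vm-b: ratio 1.0 beats 2.0 and 3.0
```
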




Full-text available
Article
Cloud computing offers dynamic allocation of resources on demand, a feature that makes it stand apart by providing high performance, scalability, cost efficiency, and low maintenance, making it an apt choice. Task scheduling is the essential factor in the dynamic allocation of resources in the cloud environment, where it is key to increasing performance and decreasing cost. In this work, a solution is proposed that takes makespan and cost as the important constraints of the optimization problem. We have merged two algorithms, namely the cuckoo search algorithm (CSA) and oppositional-based learning (OBL), into a new hybrid algorithm called the oppositional cuckoo search algorithm (OCSA) to address this issue. The proposed OCSA algorithm shows noticeable improvement over other task scheduling algorithms. The proposed work is simulated in the CloudSim programming environment, and the simulation results show its effectiveness in minimizing the cost and makespan parameters. The obtained results are better than those of existing algorithms such as particle swarm optimization (PSO), the improved differential evolution algorithm (IDEA), and the genetic algorithm (GA).
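
As a rough sketch of the opposition-based learning idea mentioned above, the snippet below applies OBL to the initial population of a cuckoo-search-style scheduler: each candidate task-to-VM assignment is mirrored within the VM index range, and the fitter of the pair (lower makespan here) is kept. The fitness model and parameters are my own simplifications, not the paper's.

```python
# Minimal OBL-initialization sketch under assumed task lengths and VM speeds.
import random

def makespan(assignment, task_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def opposite(assignment, num_vms):
    # mirror each gene within [0, num_vms - 1]
    return [num_vms - 1 - vm for vm in assignment]

def obl_init(pop_size, num_tasks, num_vms, task_len, vm_speed):
    population = []
    for _ in range(pop_size):
        cand = [random.randrange(num_vms) for _ in range(num_tasks)]
        opp = opposite(cand, num_vms)
        # keep whichever of the candidate / opposite pair has lower makespan
        best = min(cand, opp, key=lambda a: makespan(a, task_len, vm_speed))
        population.append(best)
    return population

if __name__ == "__main__":
    random.seed(1)
    lens, speeds = [4, 9, 3, 7, 5], [1.0, 2.0]
    pop = obl_init(6, len(lens), len(speeds), lens, speeds)
    print(sorted(makespan(p, lens, speeds) for p in pop))
```
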
Full-text available
Article
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving the response time of user requests in cloud computing is a critical issue in combating bottlenecks, and bandwidth to and from cloud service providers is one such bottleneck. With the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, this paper proposes a Throttled Modified Algorithm (TMA) for improving the response time of VMs in cloud computing in order to improve performance for end users. We simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm improved the response and processing times of the cloud data center. © 2018, International Journal of Computer Networks & Communications (IJCNC).
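
For context, a throttled-style allocator typically keeps an index table of VM availability and queues requests when no VM is free. The sketch below shows that base behavior only; the paper's specific TMA modifications are not reproduced, and the class and identifiers are my own.

```python
# Hedged sketch of a throttled-style VM allocator (base scheme, not TMA itself).
from collections import deque

class ThrottledAllocator:
    def __init__(self, vm_ids):
        self.available = {vm: True for vm in vm_ids}  # index table of VM states
        self.waiting = deque()                        # requests with no free VM

    def allocate(self, request_id):
        for vm, free in self.available.items():
            if free:
                self.available[vm] = False
                return vm                # request served by this VM
        self.waiting.append(request_id)  # throttle: queue until a VM frees up
        return None

    def release(self, vm):
        if self.waiting:
            request_id = self.waiting.popleft()
            return vm, request_id        # hand the freed VM to a queued request
        self.available[vm] = True
        return vm, None

if __name__ == "__main__":
    alloc = ThrottledAllocator(["vm1", "vm2"])
    print(alloc.allocate("r1"), alloc.allocate("r2"), alloc.allocate("r3"))
    print(alloc.release("vm1"))          # vm1 is handed to the queued request r3
```
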
Full-text available
Article
Task scheduling is one of the most challenging aspects of improving the overall performance of cloud computing and optimizing cloud utilization and Quality of Service (QoS). This paper focuses on task scheduling optimization using a novel approach based on dynamic dispatch queues (TSDQ) and hybrid meta-heuristic algorithms. We propose two hybrid meta-heuristic algorithms, the first using fuzzy logic with particle swarm optimization (TSDQ-FLPSO) and the second using simulated annealing with particle swarm optimization (TSDQ-SAPSO). Several experiments have been carried out in an open-source simulator (CloudSim) using synthetic and real data sets from real systems. The experimental results demonstrate the effectiveness of the proposed approach, with the optimal results provided by TSDQ-FLPSO compared to TSDQ-SAPSO and other existing scheduling algorithms, especially in high-dimensional problems. The TSDQ-FLPSO algorithm shows a great advantage in terms of waiting time, queue length, makespan, cost, resource utilization, degree of imbalance, and load balancing.
Full-text available
Article
Cloud computing is a collection of physical and virtualized resources provided to clients on demand, on a pay-per-use basis, via the Internet. In task scheduling and resource allocation, two features are mainly considered: cost and makespan. To achieve better performance, resource allocation and task scheduling must be precisely organized and jointly optimized. Several works have been published in the literature on scheduling in the cloud. In this paper, to enhance the scheduling process, the cuckoo search (CS) and harmony search (HS) algorithms are hybridized as CHSA to improve the optimization. These two algorithms are effectively combined for intelligent process scheduling. Accordingly, a new multi-objective function is proposed by combining cost, energy consumption, memory usage, credit, and penalty. Finally, the performance of the CHSA algorithm is compared with different algorithms, such as an existing hybrid cuckoo gravitational search algorithm and the individual CS and HS algorithms, on various multi-objective parameters. The analysis of the results shows that the proposed CHSA algorithm attains minimum cost, memory usage, energy consumption, and penalty and maximum credit compared to existing techniques.
Full-text available
Article
Cloud computing delivers computing resources such as software and hardware as a service to users through a network. The main idea of cloud computing is to share the tremendous power of storage, computation, and information with scientific applications. In cloud computing, user tasks are organized and executed with suitable resources to deliver services effectively. There are plenty of task allocation techniques used to accomplish task scheduling. To enhance task scheduling, an efficient task scheduling algorithm is proposed in this paper. Optimization techniques are very popular for solving NP-hard problems. In the proposed technique, user tasks are stored in a queue manager. For a repeated task, the priority is calculated and suitable resources are allocated. New tasks are analyzed and stored in an on-demand queue. The output of the on-demand queue is given to the hybrid genetic-particle swarm optimization (HGPSO) algorithm, which combines the genetic algorithm and particle swarm optimization. The HGPSO algorithm evaluates suitable resources for the user tasks in the on-demand queue.
Full-text available
Article
Cloud computing is required by modern technology, and task scheduling and resource allocation are important aspects of it. This paper proposes a heuristic approach that combines the modified analytic hierarchy process (MAHP), bandwidth-aware divisible scheduling (BATS) + BAR optimization, longest expected processing time preemption (LEPT), and divide-and-conquer methods to perform task scheduling and resource allocation. In this approach, each task is processed before its actual allocation to cloud resources using the MAHP process. The resources are allocated using the combined BATS + BAR optimization method, which takes the bandwidth and load of the cloud resources as constraints. In addition, the proposed system preempts resource-intensive tasks using LEPT preemption. The divide-and-conquer approach further improves the proposed system, as is proven experimentally through comparison with the existing BATS and improved differential evolution algorithm (IDEA) frameworks when turnaround time and response time are used as performance metrics.
Full-text available
Article
Cloud computing is the latest distributed computing technology. The delivery mechanism between the service provider and users depends on the Service Level Agreement (SLA). The SLA contains Quality of Service (QoS) constraints, such as deadlines, to achieve user satisfaction. In this article, the authors propose a Deadline-Aware Priority Scheduling (DAPS) model to minimize the average makespan and maximize resource utilization under deadline constraints. In the proposed model, tasks are sorted by length in ascending priority order, the VMs whose state achieves the deadline constraint are labeled as successful, and each task is then mapped to the suitable VM with the minimum processing time. The authors compared their proposed model to the existing algorithms GA, Min-Min, SJF, and Round Robin. The proposed model outperforms the other algorithms by reducing the average makespan, mean total average response time, number of violations, violation ratio, and failure ratio, while increasing resource utilization and the guarantee ratio of tasks that meet the deadline constraint.
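
The scheduling rule described above is concrete enough to sketch: tasks are taken in ascending order of length, and each is mapped to the VM with minimum processing time among those that can still meet its deadline. The sketch below is my own reading of that rule, with invented task and VM parameters.

```python
# Hedged sketch of a deadline-aware, length-ordered dispatch (not the paper's code).
def daps_schedule(tasks, vms):
    """tasks: list of (task_id, length, deadline); vms: dict vm_id -> MIPS.
    Returns task_id -> vm_id (None if no VM can meet the deadline)."""
    finish = {vm: 0.0 for vm in vms}          # current finish time per VM
    plan = {}
    for task_id, length, deadline in sorted(tasks, key=lambda t: t[1]):
        feasible = [(finish[vm] + length / mips, vm)
                    for vm, mips in vms.items()
                    if finish[vm] + length / mips <= deadline]
        if not feasible:
            plan[task_id] = None              # deadline would be violated
            continue
        end, vm = min(feasible)               # VM with minimum processing time
        finish[vm] = end
        plan[task_id] = vm
    return plan

if __name__ == "__main__":
    tasks = [("t1", 400, 3.0), ("t2", 100, 1.0), ("t3", 900, 5.0)]
    vms = {"vm1": 250.0, "vm2": 500.0}        # assumed MIPS ratings
    print(daps_schedule(tasks, vms))
```
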
Full-text available
Article
Since cloud computing provides computing resources on a pay-per-use basis, the task scheduling algorithm directly affects users' costs. In this paper, we propose a novel cloud task scheduling algorithm based on ant colony optimization that efficiently allocates the tasks of cloud users to virtual machines in cloud computing environments. To enhance the performance of the task scheduler in such environments, we adopt diversification and reinforcement strategies with slave ants. The proposed algorithm solves the global optimization problem with slave ants by avoiding long paths whose pheromones are wrongly accumulated by leading ants.
Full-text available
Article
Load balancing is a significant task in cloud computing because cloud servers must store a vast amount of information, which increases the load on the servers. The objective of a load balancing technique is to maintain a trade-off across servers by distributing the load equally with less power. Accordingly, this paper presents a load balancing technique based on a constraint measure. Initially, the capacity and load of each virtual machine are calculated. If the load of a virtual machine is greater than the balanced threshold value, the load balancing algorithm is used to allocate the tasks. The load balancing algorithm calculates the deciding factor of each virtual machine and checks its load, then calculates the selection factor of each task; the task with the better selection factor is allocated to the virtual machine. The performance of the proposed load balancing method is evaluated against existing load balancing methods, such as HBB-LB, DLB, and HDLB, using load and capacity as evaluation metrics. The experimental results show that the proposed method migrates only three tasks, whereas the existing HDLB method migrates seven tasks.
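
The abstract does not give the paper's formulas for the deciding and selection factors, so the sketch below only illustrates the overall flow with placeholder choices of my own: a load-to-capacity threshold check on the source VM and a tightest-fit reassignment of its tasks.

```python
# Loose illustration only; factor definitions are placeholders, not the paper's.
def is_overloaded(load, capacity, threshold=0.8):
    return load / capacity > threshold

def rebalance(tasks_to_move, vms, threshold=0.8):
    """tasks_to_move: list of (task_id, size); vms: dict vm_id -> [load, capacity].
    Reassign tasks from an overloaded VM to the tightest-fitting destination."""
    moves = []
    for task_id, size in sorted(tasks_to_move, key=lambda t: -t[1]):
        fits = [vm for vm, (load, cap) in vms.items()
                if (load + size) / cap <= threshold]
        if not fits:
            continue                                   # task stays where it is
        best = min(fits, key=lambda vm: vms[vm][1] - vms[vm][0] - size)
        vms[best][0] += size
        moves.append((task_id, best))
    return moves

if __name__ == "__main__":
    print(is_overloaded(9.0, 10.0))                    # True: triggers rebalancing
    vms = {"vm2": [3.0, 10.0], "vm3": [6.0, 8.0]}
    print(rebalance([("t7", 2.0), ("t9", 4.0)], vms))  # [('t9', 'vm2')]
```
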
Full-text available
Conference Paper
Utilizing dynamic resource allocation for load balancing is an important optimization process in task scheduling for cloud computing. A poor scheduling policy may overload certain virtual machines while the remaining virtual machines are idle. Accordingly, this paper proposes a hybrid load balancing algorithm combining Teaching-Learning-Based Optimization (TLBO) and the Grey Wolf Optimization (GWO) algorithm, which contributes to maximizing throughput through a well-balanced load across virtual machines and overcomes the problem of becoming trapped in a local optimum. The hybrid algorithm is benchmarked on eleven test functions, and a comparative study is conducted to verify the results against particle swarm optimization (PSO), biogeography-based optimization (BBO), and GWO. To evaluate the performance of the proposed algorithm for load balancing, the hybrid algorithm is simulated and the experimental results are presented.
Full-text available
Article
Cloud computing is a ubiquitous network access model to a shared pool of configurable computing resources, where available resources must be checked and scheduled by an efficient task scheduler before being assigned to clients. Most existing task schedulers do not achieve the required standards and requirements: some concentrate only on reducing waiting time or response time, or both, while neglecting starved processes altogether. In this paper, we propose a novel hybrid task scheduling algorithm named SRDQ, combining Shortest Job First (SJF) and Round Robin (RR) schedulers with a dynamic variable task quantum. The proposed algorithm relies on two basic keys: the first is a dynamic task quantum to balance waiting time between short and long tasks, while the second is splitting the ready queue into two sub-queues, Q1 for short tasks and Q2 for long ones. Tasks are assigned to resources from Q1 and Q2 alternately: two tasks from Q1, then one task from Q2. For evaluation purposes, three different datasets were utilized in the algorithm simulation, conducted in the CloudSim environment toolkit 3.0.3 against three different scheduling algorithms: SJF, RR, and Time Slice Priority Based RR (TSPBRR). The experimental results and tests indicated the superiority of the proposed algorithm over the state of the art in reducing waiting time, response time, and, partially, the starvation of long tasks.
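
The two-queue dispatch pattern described above can be sketched as follows; since the abstract does not give the exact quantum-update formula, a simple median of the task bursts is used as a stand-in threshold, and the identifiers are mine.

```python
# Hedged sketch of SJF + RR dispatch over two sub-queues with a 2:1 ratio.
from collections import deque
from statistics import median

def srdq_order(tasks):
    """tasks: list of (task_id, burst). Returns a dispatch order that takes
    two short tasks from Q1 for every long task from Q2."""
    quantum = median(burst for _, burst in tasks)      # dynamic-quantum stand-in
    q1 = deque(sorted((t for t in tasks if t[1] <= quantum), key=lambda t: t[1]))
    q2 = deque(sorted((t for t in tasks if t[1] > quantum), key=lambda t: t[1]))
    order = []
    while q1 or q2:
        for _ in range(2):                             # two short tasks...
            if q1:
                order.append(q1.popleft()[0])
        if q2:                                         # ...then one long task
            order.append(q2.popleft()[0])
    return order

if __name__ == "__main__":
    print(srdq_order([("a", 2), ("b", 14), ("c", 3), ("d", 9), ("e", 1)]))
    # -> ['e', 'a', 'd', 'c', 'b']
```
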
Full-text available
Article
Load balancing is a method of distributing workload across various computers or data centres to maximize throughput and minimize the load on resources. Performing load balancing in cloud computing environments involves challenges such as data security and proper distribution that require serious attention. The most important challenge posed by cloud applications is the provision of Quality of Service (QoS), which raises the problem of allocating resources to the application so as to guarantee a service level along dimensions such as performance, availability, and reliability. A centralized hierarchical Cloud-based Multimedia System (CMS) consisting of a resource manager, cluster heads, and server clusters is considered, in which the resource manager assigns clients’ requests to server clusters for multimedia service tasks based on the job features, after which each job is assigned to a server within its server cluster by the cluster head. Designing an effective load balancing algorithm for a CMS is, however, a complicated and challenging task: it must spread the multimedia service job load over servers at minimal cost for transmitting multimedia data between server clusters and clients, without exceeding the maximal load limit of each server cluster. In the present work, a Multiple Kernel Learning with Support Vector Machine (MKL-SVM) approach is proposed to quantify the disturbance in the utilization of multiple resources on the resource manager at the client side and then verify it at the server side in each cluster. In addition, the Fuzzy Simple Additive Weighting (FSAW) method is introduced for QoS provision to improve system performance. The proposed model, CMSdynMLB, serves as a multiservice load balancer formulated as an integer linear programming problem with an unevenness measurement. To solve the dynamic load balancing problem, Hybrid Particle Swarm Optimization (HPSO) is proposed, as it works well for dynamic problems. The simulation results show that the proposed MKL-SVM algorithm can efficiently manage dynamic multiservice load balancing.
Full-text available
Article
Data centers provide solutions to consumers and organizations by storing and processing their data. When a scheduling operation carries more resource requirements than a single server can hold, a load balancing strategy distributes workloads across multiple servers to optimize performance. However, resource allocation and load balancing remain a challenging problem for cloud service providers in terms of the Quality of Service delivered to consumers. The proposed hybrid bacterial swarm optimization algorithm achieves global search over the entire search space through PSO, while local search is achieved by the BFO algorithm. This paper proposes a novel idea of how to tackle the scheduling problem using hybrid load balancing techniques. The experimental results demonstrate that the proposed algorithm considerably outperforms the existing SA, PSO, and dynamic ADS algorithms by minimizing operational cost and makespan and maximizing resource utilization.
Full-text available
Conference Paper
Cloud computing is a model for delivering information technology services wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The ability to provision and release cloud computing resources with minimal management effort or service provider interaction has led to the rapid increase in the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing the three important stages through which a request is processed: data center selection, virtual machine scheduling, and task scheduling at the selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment, along with a brief analysis of the scheduling parameters considered in these methods. The results show that the best power-saving percentage can be achieved by using both DVFS and DNS.
Full-text available
Article
Cloud computing is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers. With the increasing demand for and benefits of cloud computing infrastructure, many kinds of computation can be performed in a cloud environment. One of the fundamental issues in this environment is task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. In this paper, a cloud task scheduling policy based on the ant colony optimization algorithm for load balancing, compared with different scheduling algorithms, is proposed. Ant colony optimization (ACO) is a random optimization search approach used here to allocate incoming jobs to the virtual machines. The main contribution of this work is to balance the system load while trying to minimize the makespan of a given task set. A load balancing factor, related to the job finishing rate, is proposed to make the job finishing rate at different resources similar and thereby improve the load balancing capability. The proposed scheduling strategy was simulated using the CloudSim toolkit package. Experimental results showed that the proposed algorithm outperformed scheduling algorithms based on the basic ACO or modified ant colony optimization (MACO).
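
A simplified illustration of ACO-style task-to-VM allocation with a load-balancing bias is shown below; the pheromone update and heuristic are textbook choices of mine and not the paper's exact formulation.

```python
# Minimal ACO sketch: assignment probability combines pheromone with a
# heuristic that favors VMs where the task would finish earliest.
import random

def aco_assign(tasks, vm_speed, iters=50, alpha=1.0, beta=2.0, rho=0.1):
    num_vms = len(vm_speed)
    pheromone = [[1.0] * num_vms for _ in tasks]
    best, best_makespan = None, float("inf")
    for _ in range(iters):
        load = [0.0] * num_vms
        assignment = []
        for t, length in enumerate(tasks):
            weights = []
            for vm in range(num_vms):
                eta = 1.0 / (load[vm] + length / vm_speed[vm])   # finish-early heuristic
                weights.append((pheromone[t][vm] ** alpha) * (eta ** beta))
            vm = random.choices(range(num_vms), weights=weights)[0]
            load[vm] += length / vm_speed[vm]
            assignment.append(vm)
        ms = max(load)
        if ms < best_makespan:
            best, best_makespan = assignment, ms
        for t, vm in enumerate(assignment):                       # evaporate + deposit
            for v in range(num_vms):
                pheromone[t][v] *= (1.0 - rho)
            pheromone[t][vm] += 1.0 / ms
    return best, best_makespan

if __name__ == "__main__":
    random.seed(0)
    print(aco_assign([8, 3, 6, 2, 9, 4], [1.0, 2.0, 2.0]))
```
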
Article
Cloud computing is a fascinating and profitable area in modern distributed computing. Aside from providing millions of users the means to use offered services through their own computers, terminals, and mobile devices, cloud computing presents an environment with low cost, a simple user interface, and low power consumption by employing server virtualisation in its offered services (e.g., Infrastructure as a Service). The pool of virtual machines in a cloud computing data centre (DC) must be managed by an efficient task scheduling algorithm to achieve resource utilisation and good quality of service, thus ensuring the positive effect of low energy consumption in the cloud computing environment. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage and frequency scaling technique. The proposed scheduling algorithm can efficiently reduce the energy consumed in executing jobs by increasing resource utilisation. The GreenCloud simulator is used to simulate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server utilisation, reduce energy consumption, and reduce execution time.
Article
Data and computational centres consume a large amount of energy and are limited by power density and computational capacity. Compared with traditional distributed and homogeneous systems, a heterogeneous system can provide improved performance and dynamic provisioning, which can reduce energy consumption and map dynamic requests to heterogeneous resources. The problem of resource utilization in heterogeneous computing systems has been studied in several variations. This paper discusses the scheduling of independent, non-communicating, variable-length tasks with respect to CPU utilization, low energy consumption, and makespan, using a dynamic heterogeneous shortest job first (DHSJF) model. Tasks are scheduled so as to minimize the actual CPU time and the overall system execution time, or makespan, and the load is balanced dynamically during execution. Dynamic heterogeneity achieves a reduced makespan, which increases resource utilization. Some existing methods are not designed for fully heterogeneous systems; the proposed method considers both the dynamic heterogeneity of the workload and the dynamic heterogeneity of the resources, and it provides better results than existing algorithms. The proposed algorithm has been simulated on CloudSim.
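
A minimal sketch of a heterogeneous shortest-job-first dispatch loop is given below, under my own simplifying assumptions (the paper's dynamic provisioning details are not reproduced): at each step the shortest waiting task is assigned to the VM that would finish it earliest.

```python
# Hedged DHSJF-style sketch with assumed task lengths and VM speeds.
import heapq

def dhsjf(tasks, vm_speed):
    """tasks: list of (task_id, length); vm_speed: list of MIPS per VM.
    Returns (schedule, makespan)."""
    ready = [(length, task_id) for task_id, length in tasks]
    heapq.heapify(ready)                               # shortest job first
    finish = [0.0] * len(vm_speed)
    schedule = []
    while ready:
        length, task_id = heapq.heappop(ready)
        vm = min(range(len(vm_speed)),                 # earliest-finish heterogeneous VM
                 key=lambda v: finish[v] + length / vm_speed[v])
        finish[vm] += length / vm_speed[vm]
        schedule.append((task_id, vm))
    return schedule, max(finish)

if __name__ == "__main__":
    print(dhsjf([("t1", 300), ("t2", 120), ("t3", 800)], [100.0, 250.0]))
```
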
Article
Every day, numerous VMs are migrated inside a datacenter to balance the load, save energy, or prepare production servers for maintenance. Although VM placement problems have been carefully studied, the underlying migration scheduler relies on vague ad hoc models. This leads to unnecessarily long and energy-intensive migrations.
Article
Cloud computing has attracted increasing interest in the scientific computing area, and more and more enterprises and research institutes have migrated their applications to clouds. Due to the structural and behavioral complexity of cloud computing systems, designing a fault-tolerant cloud computing system is a challenging problem. This paper investigates the modeling and analysis of a fault-tolerant strategy for deadline-constrained task scheduling in cloud computing. First, a formal description language is defined to accurately model the different components of a cloud application and to characterize its operational mechanisms and fault behaviors. Second, we propose a fault-tolerant strategy, comprising a scheduling mechanism, a synchronization mechanism, and an exception mechanism, to dynamically compute the execution mode and the required virtual machine for tasks, thus ensuring the reliability and real-time requirements of the cloud application. An enforcement algorithm is also designed to realize the proposed strategy. Third, Petri net techniques are used to analyze and validate the correctness of the proposed method. Finally, several experiments illustrate that the reliability of the cloud application is improved and its deadline is met.
Article
To provide robust infrastructure as a service (IaaS), clouds currently perform load balancing by migrating virtual machines (VMs) from heavily loaded physical machines (PMs) to lightly loaded PMs. The unique features of clouds pose formidable challenges to achieving effective and efficient load balancing. First, VMs in clouds use different resources (e.g., CPU, bandwidth, memory) to serve a variety of services (e.g., high performance computing, web services, file services), resulting in different overutilized resources in different PMs. Also, the overutilized resources in a PM may vary over time due to the time-varying heterogeneous service requests. Second, there is intensive network communication between VMs. However, previous load balancing methods statically assign equal or predefined weights to different resources, which leads to degraded performance in terms of speed and cost to achieve load balance. Also, they do not strive to minimize the VM communications between PMs. We propose a Resource Intensity Aware Load balancing method (RIAL). For each PM, RIAL dynamically assigns different weights to different resources according to their usage intensity in the PM, which significantly reduces the time and cost to achieve load balance and avoids future load imbalance. It also tries to keep frequently communicating VMs in the same PM to reduce bandwidth cost, and migrates VMs to PMs with minimum VM performance degradation. We also propose an extended version of RIAL with three additional algorithms. First, it optimally determines the weights for considering communication cost and performance degradation due to VM migrations. Second, it has a stricter migration triggering algorithm to avoid unnecessary migrations while still satisfying Service Level Objectives (SLOs). Third, it conducts destination PM selection in a decentralized manner to improve scalability. Our extensive trace-driven simulation results and real-world experimental results show the superior performance of RIAL compared to other load balancing methods.
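
The resource-intensity-aware weighting idea can be illustrated roughly as follows; this is my own simplification (not the RIAL implementation): each resource on a PM gets a weight proportional to how intensively it is used, and the VM whose removal brings the largest weighted relief to an overloaded PM is chosen for migration.

```python
# Rough sketch of intensity-proportional weighting and VM selection.
def resource_weights(pm_usage):
    """pm_usage: dict resource -> utilization in [0, 1]."""
    total = sum(pm_usage.values()) or 1.0
    return {r: u / total for r, u in pm_usage.items()}

def pick_vm_to_migrate(pm_usage, vm_usages):
    """vm_usages: dict vm_id -> dict resource -> utilization contribution."""
    w = resource_weights(pm_usage)
    def relief(vm_id):
        return sum(w[r] * vm_usages[vm_id].get(r, 0.0) for r in w)
    return max(vm_usages, key=relief)

if __name__ == "__main__":
    pm = {"cpu": 0.9, "mem": 0.5, "bw": 0.2}           # CPU is the hot resource
    vms = {"vm1": {"cpu": 0.1, "mem": 0.3},
           "vm2": {"cpu": 0.5, "mem": 0.1},
           "vm3": {"cpu": 0.2, "bw": 0.2}}
    print(pick_vm_to_migrate(pm, vms))                 # vm2: biggest weighted relief
```
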
Article
With the popularity of cloud computing and high-performance computing, the size and number of datacenters are growing rapidly, which poses serious challenges for energy consumption. Dynamic voltage and frequency scaling (DVFS) is an effective technique for energy saving, and many previous works have addressed energy-efficient task scheduling based on DVFS. However, these works need to know the total workload (execution time) of tasks, which is difficult for some real-time task requests. In this paper, we propose a new task model that describes the QoS requirements of tasks in terms of a minimum frequency. In addition, we define the Energy Consumption Ratio (ECR) to evaluate the efficiency of the different frequencies at which a task can be executed. The energy-efficient task scheduling problem can thus be converted into minimizing the total ECR. By transforming the problem into variable-size bin packing, we prove that minimizing the ECR is NP-hard. Because of the difficulty of this problem, we propose task allocation and scheduling methods based on its structure. The proposed methods dispatch incoming tasks to the active servers using as few servers as possible and adjust the execution frequencies of the relevant cores to save energy. When a task finishes, a processor-level migration algorithm reschedules the remaining tasks among processors on an individual server to dynamically balance the workloads and lower the total ECR on that server. Experiments on a real test-bed system and in simulation show that our strategy outperforms others, verifying its good performance in energy saving.
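
An energy-consumption-ratio calculation can be illustrated as below; the cubic dynamic-power model and the definition of ECR as energy per unit of work are my own modeling assumptions, not necessarily the paper's exact formulation.

```python
# Illustrative ECR per frequency, assuming power = static + k * f^3.
def ecr(freq, work, static_power=10.0, k=2.0):
    """freq in GHz, work in giga-cycles; returns joules per giga-cycle."""
    exec_time = work / freq                       # seconds
    energy = (static_power + k * freq ** 3) * exec_time
    return energy / work

if __name__ == "__main__":
    freqs = [0.8, 1.2, 1.6, 2.0, 2.4]
    for f in freqs:
        print(f"{f:.1f} GHz -> ECR {ecr(f, 100.0):.2f}")
    # The minimum is interior (around 1.2 GHz here): running too slowly wastes
    # static energy, running too fast wastes dynamic energy.
    print("frequency with lowest ECR:", min(freqs, key=lambda f: ecr(f, 100.0)))
```
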
Article
In this globalized world, with the advancement of technology, the use of computation and simulation gradually increases. To fulfill the increased user demand, the cloud network provides its ubiquitous services on a rental basis. The increased demand for cloud services raises the load on virtual machines and results in load imbalance in the cloud system. There are many challenges associated with the cloud system, and load balancing is one of them. Proper resource utilization and minimization of makespan are the basic motives of load balancing. This paper describes a multi-datacenter load adjustment technique, called the Multi-Rumen Anti-Grazing algorithm, for assigning tasks to virtual machines in different datacenters. The proposed mechanism is a static load balancing strategy that focuses on minimizing makespan, and it gives better results than existing approaches. The simulation is carried out with different randomly generated datasets, and the results are compared with the static Min–Min and ELBMM algorithms. In each case, the proposed multi-datacenter method gives better performance and makespan than the traditional intra-datacenter Min–Min and ELBMM techniques.
Article
To maximize task scheduling performance and minimize unreasonable task allocation in clouds, this paper proposes a method based on a two-stage strategy. At the first stage, a job classifier motivated by the design principle of a Bayes classifier is used to classify tasks based on historical scheduling data, and a certain number of virtual machines (VMs) of different types are created accordingly. This saves the time of creating VMs during task scheduling. At the second stage, tasks are matched with concrete VMs dynamically, and corresponding dynamic task scheduling algorithms are proposed. Experimental results show that, compared with existing methods, they effectively improve the cloud's scheduling performance and achieve load balancing of cloud resources.
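
As a toy sketch of the first stage described above, a naive Bayes classifier over historical scheduling records can predict which VM type a task should get; the features, labels, and smoothing used here are invented for illustration and are not the paper's.

```python
# Hand-rolled categorical naive Bayes on hypothetical scheduling history.
from collections import Counter, defaultdict

def train_nb(records):
    """records: list of (features_dict, vm_type)."""
    label_counts = Counter(label for _, label in records)
    feat_counts = defaultdict(Counter)        # (label, feature) -> Counter(value)
    for feats, label in records:
        for f, v in feats.items():
            feat_counts[(label, f)][v] += 1
    return label_counts, feat_counts, len(records)

def predict_nb(model, feats):
    label_counts, feat_counts, n = model
    def score(label):
        p = label_counts[label] / n
        for f, v in feats.items():
            counts = feat_counts[(label, f)]
            # add-one style smoothing so unseen values do not zero the score
            p *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
        return p
    return max(label_counts, key=score)

if __name__ == "__main__":
    history = [({"size": "small", "io": "low"}, "tiny-vm"),
               ({"size": "small", "io": "high"}, "io-vm"),
               ({"size": "large", "io": "low"}, "compute-vm"),
               ({"size": "large", "io": "low"}, "compute-vm")]
    model = train_nb(history)
    print(predict_nb(model, {"size": "large", "io": "low"}))   # compute-vm
```
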
Article
The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper considers the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both show temporal diversity. The paper then proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm-optimization algorithm. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA efficiently increases the throughput and reduces the cost of the private CDC while meeting the delay bounds of all tasks.
Article
Load-balanced flow scheduling for big data centers in clouds, in which a large amount of data needs to be transferred frequently among thousands of interconnected servers, is a key and challenging issue. OpenFlow is a promising solution for balancing data flows in a data center network through its programmable traffic controller. Existing OpenFlow-based scheduling schemes, however, statically set up routes only at the initialization stage of data transmission, which suffers under dynamic flow distributions and changing network states in data centers and often results in poor system performance. In this paper, we propose a novel dynamic load-balanced scheduling (DLBS) approach for maximizing network throughput while balancing the workload dynamically. We first formulate the DLBS problem and then develop a set of efficient heuristic scheduling algorithms for two typical OpenFlow network models, which balance data flows time slot by time slot. Experimental results demonstrate that our DLBS approach significantly outperforms the representative load-balanced scheduling algorithms Round Robin and LOBUS, and the higher the imbalance degree the data flows in a data center exhibit, the more improvement our DLBS approach brings.
Article
Cloud computing offers a cost-effective and elastic computing paradigm that facilitates large-scale data storage and analytics. By deploying virtualization technologies in the datacenter, the cloud enables efficient resource management and isolation for various big data applications. Since hotspots (i.e., overloaded machines) can degrade the performance of these applications, virtual machine migration has been utilized to perform load balancing in datacenters to eliminate hotspots and guarantee Service Level Agreements (SLAs). However, previous load balancing schemes make migration decisions based on deterministic resource demand estimation and workload characterization, without considering their stochastic properties. By studying real-world traces, we show that the resource demand and workload of virtual machines are highly dynamic and bursty, which can cause these schemes to make inefficient migrations for load balancing. To address this problem, in this paper we propose a stochastic load balancing scheme that aims to provide a probabilistic guarantee against resource overloading with virtual machine migration, while minimizing the total migration overhead. Our scheme effectively addresses the prediction of the distribution of resource demand and the multidimensional resource requirements with stochastic characterization. Moreover, as opposed to previous works that measure the migration cost without considering the network topology, our scheme explicitly takes into account the distance between the source physical machine and the destination physical machine for a virtual machine migration. Trace-driven experiments show that our scheme outperforms the previous schemes in terms of SLA violation and migration cost.
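
The probabilistic overload test implied above can be sketched as follows, assuming (as an approximation of my own) that a PM's aggregate demand is normally distributed with known mean and standard deviation.

```python
# Small sketch of a probabilistic overload check under a normal approximation.
import math

def overload_probability(mean_demand, std_demand, capacity):
    """P(demand > capacity) for demand ~ Normal(mean_demand, std_demand)."""
    if std_demand == 0:
        return float(mean_demand > capacity)
    z = (capacity - mean_demand) / std_demand
    return 0.5 * math.erfc(z / math.sqrt(2))

def violates_slo(mean_demand, std_demand, capacity, epsilon=0.05):
    """Trigger a migration only if the overload probability exceeds epsilon."""
    return overload_probability(mean_demand, std_demand, capacity) > epsilon

if __name__ == "__main__":
    print(round(overload_probability(70.0, 10.0, 85.0), 3))  # ~0.067
    print(violates_slo(70.0, 10.0, 85.0))                    # True at epsilon = 0.05
```
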
Rani, E.; Kaur, H. Efficient Load Balancing Task Scheduling in Cloud Computing using Raven Roosting Optimization Algorithm. Int. J. Adv. Res. Comput. Sci. 2017, 8, 2419-2424.