© 2015 Saleh Atiewi, Salman Yussof and Mohd Ezanee. This open access article is distributed under a Creative Commons
Attribution (CC-BY) 3.0 license.
Journal of Computer Science
Original Research Paper
A Comparative Analysis of Task Scheduling Algorithms of
Virtual Machines in Cloud Environment
Saleh Atiewi, Salman Yussof and Mohd Ezanee
College of Information Technology, Tenaga National University, Kajang, Malaysia
Article history
Received: 25-07-2015
Revised: 11-08-2015
Accepted: 15-09-2015
Corresponding Author:
Saleh Atiewi
College of Information
Technology, Tenaga National
University, Kajang, Malaysia
Email: s.atiewi@gmail.com
Abstract: Cloud computing is an interesting and beneficial area in modern
distributed computing. It enables millions of users to use the offered
services through their own devices or terminals. Cloud computing offers an
environment with low cost, ease of use and low power consumption by
utilizing server virtualization in its offered services (e.g., Infrastructure as a
Service). The pool of Virtual Machines (VMs) in a cloud computing Data
Center (DC) needs to be managed through an efficient task scheduling
algorithm to maintain quality of service and resource utilization and thus
ensure a positive impact on energy consumption in the cloud computing
environment. In this work, an experimental comparative study is carried
out among three task scheduling algorithms in cloud computing, namely,
random resource selection, round robin and green scheduler. Based on the
analysis of the simulation result, we can conclude which algorithm is the
best for scheduling in terms of energy and performance of VMs. The
evaluation of these algorithms is based on three metrics: Total power
consumption, DC load and VM load. A number of experiments with
various aims are completed in this empirical comparative study. The results
showed that there is no algorithm that is superior to the others. Each has its
own pros and cons. Based on the simulation performed, the green scheduler
gives the best performance with respect to energy consumption. On the
other hand, the random scheduler showed the best performance with respect
to both VM and DC load. The round robin scheduler gives better VM and
DC load than the green scheduler but has higher energy consumption than
both the random and green schedulers. However, since the RR scheduler
distributes the tasks fairly, the network traffic is balanced and neither the
server nor the network node will get overloaded or congested.
Keywords: Cloud Computing, Scheduling, Energy Efficient, GreenCloud,
Virtual Machine
Introduction
In cloud computing, the term “cloud” is a metaphor
for the Internet (Maggiani, 2009). A cloud shape is
used in network diagrams to conceal the Internet’s
flexible topology and abstract its underlying
infrastructure (Jin et al., 2010). Cloud computing
utilizes the Internet to deliver different computing
services, including software, hardware and programming
environments, while keeping users unaware of the
underlying infrastructure and security. Various experts
have defined cloud computing from different perspectives.
The most relevant cloud computing definition for this study
is from Vaquero et al. (2008), who defined clouds as “a
large pool of easily usable and accessible virtualized
resources (such as hardware, development platforms
and/or services). These resources can be dynamically re-
configured to adjust to a variable load (scale), allowing for
optimum resource utilization. This pool of resources is
typically exploited by a pay-per-use model in which
guarantees are offered by the infrastructure provider by
means of customized Service Level Agreements (SLAs).”
From the above definition, cloud computing can be
depicted as a set of Data Centers (DCs) that connect to the
Internet to offer their services. These DCs are based on
the virtualization of their infrastructure, with the Virtual
Machine (VM) as the basic unit of computation. In
general, DCs offer hardware services (i.e., VM for
computations) or software services, which are
provided by mutual agreements through an SLA
contract and are charged based on a per-use pricing
method (Vaquero et al., 2008). The above definitions
indicate the need for a scheduling algorithm in the DC
that finds the VM with the ability to meet client
requirements. The VM must have sufficient resources,
such as CPU, RAM and storage, to handle user tasks.
Most scheduling algorithms pursue two goals: To enhance
service quality by carrying out the tasks and delivering the
expected output on time, and to maintain efficiency and
fairness across all assigned tasks (Mohialdeen, 2013).
Fig. 1 shows the proposed cloud framework with its four
tiers: Cloud users, cloud DC, network infrastructure and
connected hosts.
As shown in “Fig. 1”, cloud users send their tasks to
a DC where the tasks are queued in the main DC queue.
The DC controller maps the submitted tasks to the host
that suits the requirements (VM load or host queue size).
All tasks must then pass through the second layer (the DC
network layer), which consists of a set of routers and links.
Each layer has its own energy monitor that tracks its
energy consumption (Mehdi et al., 2011).
The objective of this work is to analyze and
investigate three task scheduling algorithms: Round
Robin (RR), random resource selection and the green
scheduler. The evaluation is based on how well they
deliver quality of service for the tasks and on their total
energy consumption in the cloud computing
environment. Furthermore, the study aims to observe the
behavior of these scheduling algorithms and determine
the most appropriate scheduling algorithm in a cloud
environment.
Task Scheduling in a Cloud Computing DC
Task scheduling plays one of the most important roles in
a cloud computing environment (Foster et al., 2008).
Scheduling primarily aims to maximize resource utilization
and minimize the processing time of tasks. All tasks
should be balanced by a task scheduler to maintain quality
of service, efficiency and fairness (Mohialdeen, 2013).
Efficient task scheduling aims to reduce response time
so that submitted tasks complete within the stipulated
time. This also encourages cloud users to submit
additional tasks, which ultimately improves the business
performance and resource utilization of the cloud system
(Vaquero et al., 2008; Bilgaiyan et al., 2015).
Fig. 1. Proposed DC framework (inspired by Mehdi et al., 2011)
VM Scheduling
In scheduling, a schedule is the set of processes or
tasks arranged according to the particular requirements
and the algorithm in use. VM scheduling algorithms
schedule VM requests onto the Physical Machines (PMs)
of a specific DC according to whether the requested
resources (i.e., RAM, memory, bandwidth, etc.) can be
fulfilled (Prajapati, 2013).
A scheduling algorithm generally works in three levels:
In the first level, the appropriate PM is identified for the set
of VMs; in the second level, the proper provisioning
scheme is determined for the VMs; and in the third level,
the tasks are scheduled on the VMs (Frincu et al., 2013).
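As an illustrative sketch only, these three levels can be expressed as a simple pipeline, assuming tasks and VMs are characterized solely by MIPS demand and capacity; the structures and names below are hypothetical and are not taken from any particular scheduler.

```python
# Minimal sketch of the three scheduling levels; all names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VM:
    vm_id: str
    capacity_mips: float
    load_mips: float = 0.0

@dataclass
class PM:
    pm_id: str
    free_mips: float
    hosted: List[str] = field(default_factory=list)

def schedule(vms: List[VM], pms: List[PM], tasks: List[float]) -> Dict[int, str]:
    # Level 1: place each VM on a PM with enough spare capacity (first fit);
    # assumes a feasible placement exists.
    for vm in vms:
        pm = next(p for p in pms if p.free_mips >= vm.capacity_mips)
        pm.hosted.append(vm.vm_id)
        pm.free_mips -= vm.capacity_mips  # Level 2: full static reservation
    # Level 3: schedule each task (MIPS demand) on the least-loaded VM
    assignment = {}
    for i, demand in enumerate(tasks):
        vm = min(vms, key=lambda v: v.load_mips / v.capacity_mips)
        vm.load_mips += demand
        assignment[i] = vm.vm_id
    return assignment
```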
VM Scheduling Algorithms
This section examines the VM scheduling algorithms
in more detail, in particular those that optimize
different aspects such as time, cost, energy and security.
Algorithms that give VMs awareness of the security of
neighboring VMs or nodes are scarce.
Random VM Scheduling Algorithm
The random resource selection algorithm assigns
each incoming task to an available VM chosen at
random. The status of the VM is ignored, regardless of
whether it carries a heavy or light load. This can
overload a VM, so that a task may face a long waiting
time before being served; in some cases, the task may
even fail because its deadline has passed. The algorithm
has low complexity because it requires no overhead or
pre-processing. Fig. 2 illustrates the process of assigning
tasks to any available VM (Liu et al., 2013).
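As a minimal illustrative sketch (not the simulator's code), the policy can be written as follows; the function and VM names are hypothetical. Each task is assigned to a uniformly chosen VM and the VM's current load is never consulted.

```python
import random

def random_schedule(tasks, vm_ids, seed=None):
    """Assign each task to a uniformly random VM, ignoring VM state."""
    rng = random.Random(seed)
    return {task: rng.choice(vm_ids) for task in tasks}

# Example: 10 tasks over 4 VMs; by chance, some VMs may end up overloaded.
print(random_schedule(range(10), ["vm0", "vm1", "vm2", "vm3"], seed=1))
```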
Fig. 2. Random resource selection algorithm for selecting VM
Fig. 3. RR Algorithm for selecting VM
RR VM Scheduling Algorithm
The RR task scheduling algorithm considered in this
study assigns the selected tasks to the available VMs in
round-robin order, so that each task is treated equally.
Figure 3 shows the mechanism of the RR task
scheduling algorithm. The algorithm requires no
pre-processing, overhead, or scanning of the VMs to
pick the task executor (Agarwal and Jain, 2014). Since
the RR scheduling algorithm distributes tasks fairly
among all servers, load balancing is achieved while
congestion and delay are largely avoided. Furthermore,
the possibility of failed tasks is minimized
(Mathew et al., 2014).
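A minimal sketch of this policy (illustrative code, not the simulator's implementation) shows that no load information is needed: tasks are simply dealt to the VMs in a fixed cyclic order.

```python
from itertools import cycle

def round_robin_schedule(tasks, vm_ids):
    """Assign tasks to VMs in cyclic order; per-VM counts differ by at most one."""
    vm_cycle = cycle(vm_ids)
    return {task: next(vm_cycle) for task in tasks}

# Example: 10 tasks over 4 VMs.
print(round_robin_schedule(range(10), ["vm0", "vm1", "vm2", "vm3"]))
```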
Green VM Scheduling Algorithm
The workloads arriving at the DC are scheduled by the
energy-aware "green" scheduler, which consolidates the
workloads onto the minimum number of computing
servers. To accommodate high-performance computing
workloads, the scheduler continuously tracks the buffer
occupancy of the network switches along the path.
Whenever congestion occurs, the scheduler avoids the
congested routes even if they lead to servers that can
meet the computational requirements of the workloads.
Idle servers are put into sleep mode (the dynamic
shutdown, DNS, scheme), whereas the supply voltage of
under-loaded servers is reduced through Dynamic
Voltage and Frequency Scaling (the DVFS scheme)
(Kliazovich et al., 2012; Lin et al., 2015).
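The consolidation idea can be sketched as follows. This is a simplification that omits the network-congestion check and uses an illustrative DVFS threshold; it is not GreenCloud's actual code. Tasks are packed onto the smallest number of active servers, fully idle servers become candidates for sleep mode (DNS) and under-loaded servers become candidates for DVFS.

```python
def green_schedule(task_loads, servers, capacity=1.0, dvfs_threshold=0.5):
    """Pack tasks onto as few servers as possible; report DNS/DVFS candidates."""
    load = {s: 0.0 for s in servers}
    assignment = {}
    for i, t in enumerate(task_loads):
        # Prefer an already-active server with enough headroom;
        # wake an idle server only when none of the active ones fit
        # (assumes total demand fits in the server pool).
        target = next((s for s in servers if 0 < load[s] and load[s] + t <= capacity), None)
        if target is None:
            target = next(s for s in servers if load[s] == 0.0)
        load[target] += t
        assignment[i] = target
    sleep_candidates = [s for s in servers if load[s] == 0.0]               # DNS
    dvfs_candidates = [s for s in servers if 0 < load[s] < dvfs_threshold]  # DVFS
    return assignment, sleep_candidates, dvfs_candidates

# Example: six small tasks fit on two of four servers; the other two can sleep.
print(green_schedule([0.3, 0.3, 0.3, 0.2, 0.2, 0.2], ["s0", "s1", "s2", "s3"]))
```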
Empirical Study
In this part of the work, we present a case study that
simulates an energy-aware DC with a three-tier
architecture. Simulation is the process of emulating an
actual system. Because testing the proposed system on a
real deployment is difficult, the performance evaluation
was carried out using the GreenCloud simulator
(Kliazovich et al., 2012). GreenCloud extends the
network simulator NS2 (Issariyakul and Hossain, 2011)
for the study of cloud computing environments. It
provides comprehensive, fine-grained modeling of the
energy consumed by the elements of the DC, such as
servers, switches and links. Moreover, GreenCloud
performs a detailed investigation of the workload
distribution (Atiewi and Yussof, 2014).
Server farms in current DCs contain more than
100,000 hosts, and about 70% of the communication
is performed within the DC (Audzevich et al., 2012).
The most frequently used DC architecture is the
three-tier architecture. Figure 4 shows the three layers
of this architecture: The core network, the aggregation
network and the access network (Baliga et al., 2011).
Fig. 4. Three-tier DC architecture
Table 1. Simulation setup parameters
Parameter: Value
DC type: Three-tier topology
# of core switches: 1
# of aggregation switches: 2
# of access switches: 3
# of servers: 144
Access links: 1 Gb/s
Aggregation links: 1 Gb/s
Core links: 10 Gb/s
DC load: 0.1, 0.2, 0.3, ..., 0.9, 1.0
Simulation time: 60 min
Power management in servers: DVFS and DNS
Task size: 100,000 bits
DC computing capacity: 144,014,400 MIPS
Task generation rate: 2456/60 = 40.93 tasks per minute
The three-tier DC topology chosen for the
simulations contains 144 servers arranged in three racks
(48 servers per rack), linked together using one core
switch, two aggregation switches and three access
switches. The network links that connect the core and
aggregation switches have a bit rate of 10 Gb/s. The
network links that connect the aggregation and access
switches, as well as the access links connecting the
computing servers to the top-of-rack switches, have a
bit rate of 1 Gb/s. The propagation delay of all links is
fixed at 3.3 ms. Table 1 summarizes the simulation
setup parameters.
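For readability, the Table 1 setup can also be expressed as a simulation configuration along the following lines; the parameter names below are illustrative and are not GreenCloud's actual variable names.

```python
# Illustrative configuration mirroring Table 1; names are not GreenCloud's own.
SIM_CONFIG = {
    "topology": "three-tier",
    "core_switches": 1,
    "aggregation_switches": 2,
    "access_switches": 3,
    "servers": 144,                                  # 3 racks x 48 servers
    "core_link_gbps": 10,                            # core <-> aggregation
    "aggregation_link_gbps": 1,                      # aggregation <-> access
    "access_link_gbps": 1,                           # server <-> top-of-rack
    "link_propagation_delay_ms": 3.3,
    "dc_loads": [round(0.1 * i, 1) for i in range(1, 11)],  # 0.1 .. 1.0
    "simulation_time_min": 60,
    "server_power_management": ("DVFS", "DNS"),
    "task_size_bits": 100_000,
    "task_generation_rate_per_min": 2456 / 60,       # ~40.93 tasks per minute
}
```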
The experiment was conducted to compare the power
consumption of the three scheduling algorithms when
executing hard-deadline tasks and to find which scheduler
can execute a set of tasks with minimum power
consumption while maintaining the SLA.
Simulation Results
Three experiments were carried out in this study.
The GreenCloud simulator was used in all the
experiments to analyze several performance metrics.
The first performance metric is the DC load, which
represents the percentage of computing resources
allocated to incoming tasks with respect to the data
center capacity. The load should be between 0 and
100%; a load close to 0 represents an idle data center,
while a load of 100% would saturate the data center
(Kliazovich et al., 2012). The second metric is the VM
load, which is equal to the ratio of the current VM load
to its maximum computing capability
(Kliazovich et al., 2013). The third metric is the total
energy consumption in the DC, which represents the
sum of the energy consumed by both servers and switches
(Kliazovich et al., 2012).
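Stated as formulas, the three metrics can be computed as in the hedged sketch below; this is an illustrative restatement of the definitions above, not the simulator's exact accounting.

```python
def dc_load(allocated_mips: float, dc_capacity_mips: float) -> float:
    """Fraction of DC computing capacity allocated to incoming tasks (0..1)."""
    return allocated_mips / dc_capacity_mips

def vm_load(current_vm_mips: float, vm_max_mips: float) -> float:
    """Ratio of a VM's current load to its maximum computing capability."""
    return current_vm_mips / vm_max_mips

def total_dc_energy(server_energy: list, switch_energy: list) -> float:
    """Total DC energy consumption: sum over servers plus sum over switches."""
    return sum(server_energy) + sum(switch_energy)

# Example with the DC capacity from Table 1 (144,014,400 MIPS):
print(dc_load(allocated_mips=72_007_200, dc_capacity_mips=144_014_400))  # 0.5
```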
Figure 5 shows the distribution of 2456 tasks over
the 144 servers in the DC. In this figure, the green
scheduling algorithm sends more tasks to a smaller
number of servers. The RR scheduling algorithm, in
contrast, spreads the tasks across all servers. Meanwhile,
the random resource selection algorithm constantly varies
the number of tasks assigned to each server.
Figure 6 shows the amount of power required to
execute the set of tasks under the three different
algorithms. RR has the worst power consumption
because it distributes the load to all servers, which
keeps more servers active and consumes more power.
The random resource selection algorithm consumes less
energy than the RR algorithm, whereas the green
scheduling algorithm consumes less power than both
because of the way it distributes tasks over the servers,
as shown in Fig. 5.
Figure 7 depicts the DC load under different
simulation load scenarios, starting from 10% load and
ending with 100% load. In the figure, all the
algorithms maintain the same load from 10 to 30%. At
40% load, both the RR and green algorithms begin to
impose more load than the random resource selection
algorithm, which is attributed to the higher complexity
of these two algorithms compared with the random algorithm.
Figure 8 shows the average VM load at a
variety of input loads (10-100%). Owing to the nature
of the green scheduler, which tends to consolidate the
workloads onto the smallest possible number of
computing servers, Fig. 8 shows that almost half of the
VMs have loads ranging from 90% down to 50%, while
the other half have loads below 50%, down to 0% for
idle servers (servers in sleep mode). This is due to the
small total number of tasks. In contrast, the RR
scheduler keeps the load of all VMs at approximately
50%, because the tasks are equally distributed among
all the VMs. Meanwhile, the random algorithm produces
a fluctuating load between 30 and 55%, because it varies
the number of tasks assigned to each server randomly.
The RR scheduler distributes computing
and communication loads equally among servers and
switches; thus, the network traffic is balanced and no
server is loaded more than it should be. Nevertheless,
one flaw is that no server or network switch is left idle
for powering down, which makes the round-robin
scheduler the least energy-efficient.
Fig. 5. Task distribution over 144 servers
Fig. 6. Amount of consumed energy at different DC loads
Fig. 7. DC load under different simulation load scenarios
Fig. 8. VM average load at various input loads (10-100%)
Conclusion
In this study, the behavior of three task scheduling
algorithms, namely RR, random resource selection and
the green scheduler, was investigated in a cloud
computing environment using the GreenCloud
simulator. An extensive evaluation of these task
scheduling algorithms was conducted by focusing on
energy consumption, DC load and VM load. The
simulation results revealed that each scheduling
algorithm has its own pros and cons. The green
scheduling algorithm consumed less energy than the RR
and random scheduling algorithms. The experiments
showed that spreading the load over multiple servers can
increase power consumption more than expected;
therefore, the RR scheduling algorithm consumed more
energy than the green and random scheduling
algorithms. The results also showed that the complexity
of an algorithm can increase the DC load; therefore, the
green and RR scheduling algorithms produce a higher
DC load than the random scheduling algorithm. The
experiments also showed that the random algorithm has
a lower VM load than both the green scheduler and RR.
However, with respect to load balancing, the RR
scheduler performed the best among the algorithms
because it distributes the tasks fairly to all VMs. Finally,
the random algorithm performed the worst with regard
to load balancing, because it randomly assigns the
selected tasks to the available VMs without taking into
consideration whether a VM is under a high or low load.
On the basis of these results, no single scheduling
algorithm can provide superior performance with
respect to all types of quality of service.
Acknowledgement
We thank Tenaga National University and the Faculty of
Information and Communication Technology for their
support.
Funding Information
The authors have no support or funding to report.
Author’s Contributions
Saleh Atiewi: Designed the model, participated in all
experiments, coordinated the data analysis and
contributed to the writing of the manuscript.
Salman Yussof: Designed the research plan and
organized the study.
Mohd Ezanee: Contributed to drafting the article and
reviewing it critically for significant intellectual content.
Ethics
This article is original and contains unpublished
material. The corresponding author confirms that all of
the other authors have read and approved the manuscript
and that no ethical issues are involved.
References
Agarwal, D. and S. Jain, 2014. Efficient optimal
algorithm of task scheduling in cloud computing
environment. Int. J. Comput. Trends Technol., 9:
344-349.
Atiewi, S. and S. Yussof, 2014. Comparison between
cloud SIM and green cloud in measuring energy
consumption in a cloud environment. Proceedings of
the 3rd International Conference on Advanced
Computer Science Applications and Technologies,
Dec. 29-30, IEEE Xplore Press, Amman, pp: 9-14.
DOI: 10.1109/ACSAT.2014.9
Audzevich, Y., A. Moore, A. Rice, R. Sohan and S.
Timotheou et al., 2012. Intelligent Energy Aware
Networks. In: Handbook of Energy-Aware and
Green Computing, Ahmad, I. and S. Ranka (Eds.),
Chapman and Hall/CRC, Boca Raton,
ISBN-10: 1439850402, pp: 239-282.
Baliga, J., R.W.A. Ayre, K. Hinton and R.S. Tucker,
2011. Green cloud computing: Balancing energy in
processing, storage and transport. Proc. IEEE, 99:
149-167. DOI: 10.1109/JPROC.2010.2060451
Bilgaiyan, S., S. Sagnika, S. Mishra and M. Das, 2015.
Study of task scheduling in cloud computing
environment using soft computing algorithms. Int. J.
Modern Educ. Comput. Sci., 7: 32-38.
DOI: 10.5815/ijmecs.2015.03.05
Foster, I., Y. Zhao, I. Raicu and S. Lu, 2008. Cloud
computing and grid computing 360-degree
compared. Proceedings of the Grid Computing
Environments Workshop, Nov. 12-16, IEEE Xplore
Press, Austin, pp: 1-10.
DOI: 10.1109/GCE.2008.4738445
Frincu, M.E., S. Genaud and J. Gossa, 2013. Comparing
provisioning and scheduling strategies for workflows
on clouds. Proceedings of the IEEE 27th International
Parallel and Distributed Processing Symposium
Workshops and PhD Forum, May 20-24, IEEE Xplore
Press, Cambridge, MA., pp: 2101-2110.
DOI: 10.1109/IPDPSW.2013.55
Issariyakul, T. and E. Hossain, 2011. Introduction to
Network Simulator NS2. 2nd Edn., Springer, U.S.,
ISBN-10: 978-1-4614-1406-3, pp: 512.
Jin, H., S. Ibrahim, T. Bell, W. Gao and D. Huang et al.,
2010. Cloud Types and Services. In: Handbook of
Cloud Computing, Borko, F. and A. Escalante
(Eds.), Springer, US., ISBN-10: 978-1-4419-6524-0,
pp: 335-355.
Kliazovich, D., P. Bouvry and S.U. Khan, 2012.
GreenCloud: A packet-level simulator of energy-
aware cloud computing data centers. J.
Supercomput., 62: 1263-1283.
DOI: 10.1007/s11227-010-0504-1
Kliazovich, D., P. Bouvry and S.U. Khan, 2013.
Simulation and Performance Analysis of Data
Intensive and Workload Intensive Cloud Computing
Data Centers. In: Optical Interconnects for Future
Data Center Networks, Kachris, C., K. Bergman and
I. Tomkos (Eds.), Springer-Verlag New York,
ISBN-10: 978-1-4614-4630-9, pp: 47-63.
Lin, X., Y. Wang, Q. Xie and M. Pedram, 2015. Task
scheduling with dynamic voltage and frequency
scaling for energy minimization in the mobile cloud
computing environment. IEEE Trans. Services
Comput., 8: 175-186.
DOI: 10.1109/TSC.2014.2381227
Liu, N., Z. Dong and R. Rojas-Cessa, 2013. Task
scheduling and server provisioning for energy-efficient
cloud-computing data centers. Proceedings of the IEEE
33rd International Conference on Distributed
Computing Systems Workshops, Jul. 8-11, IEEE
Xplore Press, Philadelphia, PA., pp: 226-231.
DOI: 10.1109/ICDCSW.2013.68
Maggiani, R., 2009. Cloud computing is changing how
we communicate. Proceedings of the IEEE
International Professional Communication
Conference, Jul. 19-22, IEEE Xplore Press, Waikiki,
HI., pp: 1-4. DOI: 10.1109/IPCC.2009.5208703
Mathew, T., K.C. Sekaran and J. Jose, 2014. Study and
analysis of various task scheduling algorithms in the
cloud computing environment. Proceedings of the
International Conference on Advances in
Computing, Communications and Informatics, Sept.
24-27, IEEE Xplore Press, New Delhi, pp: 658-664.
DOI: 10.1109/ICACCI.2014.6968517
Mehdi, N.A., A. Mamat, A. Amer and Z.T. Abdul-
Mehdi, 2011. Minimum completion time for power-
aware scheduling in cloud computing. Proceedings
of the Developments in E-systems Engineering,
Dec. 6-8, IEEE Xplore Press, Dubai, pp: 484-489.
DOI: 10.1109/DeSE.2011.30
Mohialdeen, I.A., 2013. Comparative study of
scheduling algorithms in cloud computing. J.
Comput. Sci., 9: 252-263.
DOI: 10.3844/jcssp.2013.252.263
Prajapati, K.D., 2013. Comparison of virtual machine
scheduling algorithms in cloud computing. Int. J.
Comput. Appli., 38: 12-14.
DOI: 10.5120/14523-2914
Vaquero, L.M., L. Rodero-Merino, J. Caceres and M.
Lindner, 2008. A break in the clouds: Towards a
cloud definition. ACM SIGCOMM Comput.
Commun. Rev., 39: 50-55.
DOI: 10.1145/1496091.1496100