Impact of Virtualization on Cloud Computing Energy
Consumption: Empirical Study
Saleh Atiewi
Department of Computer Science
Al Hussein Bin Talal University
Ma'an, Jordan
saleh@ahu.edu.jo
Abdullah Abuhussein
Department of Information Systems
St. Cloud State University
St. Cloud, MN
aabuhussein@stcloudstate.edu
Mohammad Abu Saleh
Department of Computer Science
Al Hussein Bin Talal University
Ma'an, Jordan
Mohammad.a.abusaleh@ahu.edu.jo
ABSTRACT
Global warming, which is currently one of the greatest
environmental challenges, is caused by carbon emissions. A report
from the Energy Information Administration indicates that
approximately 98% of CO2 emissions can be attributed to energy
consumption. The trade-off between efficient and ecologically
sound operation represents a major challenge faced by many
organizations at present. In addition, numerous companies are
currently compelled to pay a carbon tax for the resources they use
and the environmental impact of their products and services.
Therefore, an energy consumption system can generate actual
financial payback. Green information technology involves various
approaches, including power management, recycling,
telecommunications, and virtualization. This paper focuses on
comparing and evaluating techniques used for reducing energy
consumption in virtualized environments. We first highlight the
impact of virtualization techniques on minimizing energy
consumption in cloud computing. Then we present an experimental
comparative study between two common energy-efficient task
scheduling algorithms in cloud computing (i.e., the green
scheduler and the power saver scheduler). These algorithms are
discussed briefly and analyzed. The three metrics used to evaluate
the task scheduling algorithms are (1) total power consumption, (2)
data center load, and (3) virtual machine load. This work aims to
gauge and subsequently improve energy consumption efficiency in
virtualized environments.
CCS Concepts
C.4 [Computer Systems Organization]: Performance of Systems;
Keywords
Cloud Computing; Virtualization; Energy; Green Cloud; Green
Computing; Simulation; Cloud Economics.
1. INTRODUCTION
Organizations are currently focused on attaining a sustainable
information and communications technology (ICT) strategy for
their business processes. The major motivations are to decrease
their carbon footprint and environmental impact and to reduce
their operational costs. In this context, cloud
computing offers a useful means to achieve these goals. Cloud
computing is a promising technology that is becoming increasingly
prevalent because it facilitates access to computing resources, such
as programs, storage, expert services, video games, films, and
music, whenever necessary. These resources are provided such that
cloud clients do not have to be aware of how or from where they
are obtaining these materials. Instead, clients only need to be
concerned with acquiring broadband connectivity to the cloud.
Data centers possess powerful computing and storage capabilities.
Important domains, such as particle physics, scientific computing
and simulation, Earth observation, and oil prospecting, are
supported by data centers. Hundreds to thousands of densely
packed blade servers are utilized by data centers to maximize
management efficiency and space utilization. The energy
consumed by data centers increases remarkably as the quantity and
scale of servers grow; the amount of such energy is directly related
to the number of hosted servers and their respective workloads [1].
Numerous scholars have devoted their efforts to improve energy
efficiency in cloud environments. In such environments, simple
techniques offer basic energy management for servers. These
techniques include placing servers in sleep mode, turning servers
on and off, and adopting dynamic voltage/frequency scaling
(DVFS) to adjust the power states of servers. CPU power (and thus,
performance level) is regulated by DVFS according to the
workload. However, DVFS optimization is limited to CPUs. Using
virtualization techniques that can improve resource isolation and
decrease infrastructure energy consumption via resource
consolidation and live migration is another approach to enhance
energy efficiency [2]. A number of energy-aware scheduling
algorithms and resource allocation policies have also been
developed to optimize the total energy consumption in cloud
environments through virtualization methods [3]. Nevertheless, the
different system resource configurations, allocation strategies,
workloads, and types of tasks running in the cloud cause the energy
consumption and system performance of data centers to vary
considerably [4].
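The DVFS technique described above can be sketched with a toy power model. A common approximation (an assumption here, not a figure from this paper) is that dynamic CPU power grows roughly with the cube of the operating frequency, so halving the clock cuts the dynamic term by a factor of eight; all constants below are hypothetical.

```python
# Toy DVFS power model: dynamic CPU power is assumed to scale roughly with
# the cube of the operating frequency; the static part does not scale with
# frequency. Constants are illustrative, not measurements.

def dvfs_power(f_ghz, c=10.0, p_static=40.0):
    """Total server power (W) at frequency f_ghz: static part + dynamic ~f^3 part."""
    return p_static + c * f_ghz ** 3

full_speed = dvfs_power(2.0)    # 40 + 10*2^3 = 120.0 W
half_speed = dvfs_power(1.0)    # 40 + 10*1^3 = 50.0 W
print(full_speed, half_speed)   # the dynamic term drops 8x at half frequency
```

This illustrates why DVFS helps only when the workload tolerates slower execution, and why, as noted above, its optimization is limited to the CPU.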
In this paper, we explore the effect of virtualization techniques on
improving energy consumption and comparatively assess the
efficiency (i.e., in terms of energy saving) of task scheduling
algorithms (i.e., the green scheduler [5] and the power saver
scheduler [6]) using the following metrics: (1) total power
consumption, (2) data center load, and (3) virtual machine load.
Section 2 briefly reviews virtualization, with a focus on server
virtualization. Sections 3 and 4 describe the experimental system
and the simulator components and workflow. Section 5 defines the
data center model used to conduct the experiment. Section 6
presents the criteria used for the evaluation. We present the
experimental results and conclude in Sections 7 and 8, respectively.
2. VIRTUALIZATION
Most available computer hardware is designed and architected to
host a single operating system (OS) and application. The primary
solution for this problem is virtualization. The term “virtualization”
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.
CPSIOT’18, September 21–23, 2018, Stockholm, Sweden.
Copyright 2018 ACM
has been used since the 1960s to refer to mainframes. At that time,
virtualization was a logical method for allocating mainframe
resources to different applications. Since then, the meaning of the
term has evolved. At present, virtualization refers to the act of
creating a virtual version of something, including (but not limited
to) a virtual computer hardware platform, OS, storage device, or
network resources. That is, virtualization is the ability of a system
to host multiple virtual computers while running on a single
hardware platform.
As an essential aspect of cloud computing, virtualization provides
stability in terms of cost and energy efficiency. By definition,
virtualization substantially minimizes the number of physical
computers in operation by consolidating their workloads into a
single physical computer through software, thereby reducing
carbon emissions and energy costs [7].
Virtualization is one of the most efficient methods for achieving
energy efficiency. It is applicable to both conventional and cloud data
centers. In the former, virtualization is applied depending on the
extant policy and the need for such a method. In the latter,
virtualization plays an important role in energy efficiency and is thus
highly recommended. Every component of information
technology (IT), including servers, desktops, applications,
management probes, input/output, local area networks, switches
and routers, storage systems, wide area network optimization
controllers, application delivery controllers, and firewalls, can be
virtualized. The five main forms of virtualization involve the
server, desktop, appliances, storage, and network. Considering the
association among these forms, server virtualization has been
selected as the focal topic of this paper because it is the most
significant among these forms [8]. Virtualization is a technique for running multiple
independent virtual OSs on a single physical computer [9], thereby
maximizing the return on investment for a computer. The term was
coined in the 1960s in reference to a virtual machine (VM;
occasionally called a pseudo-machine). The creation and
management of VMs is frequently referred to as platform
virtualization. Platform virtualization is performed on a computer
(hardware platform) using software called a control program. This
program creates a simulated environment or a virtual computer.
The virtual computer enables the device to use a hosted software
specific to the virtual environment (occasionally called guest
software).
Virtualization manages workloads by making traditional computing
highly scalable, efficient, and economical. A wide range of system
layers, such as OSs, hardware, and servers, can benefit from the
application of virtualization [10].
Virtualization technology offers numerous advantages in cloud
computing environments [10], including the items listed as follows.
Server consolidation: Through this concept, 10 server
applications that previously required as many physical
computers can run on a single machine, with each application
retaining its own OS and technical specification environment
for operating various applications.
Energy consumption: With server virtualization, the server
can support multiple VMs and will probably have more
memory, CPUs, and other hardware that will require minimal
or no additional power and will occupy the same physical
space, thereby reducing utility costs and power consumption.
Redundancy: This concept essentially refers to the repetition
of data that is mainly encountered when systems do not share
a common storage and different memory storage units are
created. Given the sizeable number of data centers, fault
tolerance is extremely high, which decreases redundancy.
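The energy argument for server consolidation above can be illustrated with a back-of-envelope sketch; the idle and per-load wattages below are assumptions chosen for illustration, not measurements from this study.

```python
# Hypothetical sketch: consolidating lightly loaded servers onto fewer
# virtualized hosts avoids paying each machine's idle (static) power.
# Both wattage constants are illustrative assumptions.

IDLE_W = 150.0      # assumed static power per powered-on server (W)
PER_LOAD_W = 100.0  # assumed additional power at 100% load (W)

def fleet_power(num_servers, total_load):
    """Power (W) for `total_load` (in server-equivalents) spread evenly over num_servers."""
    per_server_load = total_load / num_servers
    return num_servers * (IDLE_W + PER_LOAD_W * per_server_load)

# Ten servers each 10% loaded vs. the same work consolidated onto one host:
spread = fleet_power(10, 1.0)   # 10 * (150 + 10) = 1600.0 W
packed = fleet_power(1, 1.0)    # 150 + 100 = 250.0 W
print(spread - packed)          # 1350.0 W saved by consolidation
```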
2.1 Server Virtualization
A virtual server enables consolidation: numerous VMs run on
a shared physical server rather than each machine
having its own server, thereby reducing cost. The decrease in
expenditure occurs in the aspects of hardware, administration for
site infrastructure amenities, and space. The provision of VMs
immediately addresses the needs of clients for additional resources,
while VM migration ensures the accessibility of services [8].
Server virtualization is currently under constant scrutiny from the
media and major organizations as a contributor to green IT; this
concept was first introduced by the IBM Corporation in the 1960s
as a method for the simultaneous timesharing of mainframe
computers [11]. Server virtualization was further developed to
incorporate a hardware abstraction layer known as a virtual
machine monitor (VMM) that enables interaction between the
hardware and software layers [12]. However, [13] indicated that the
concept was only transformed from being strictly applied to
mainframes to being used with industry-standard x86 hardware
in 1999, when virtualization was adopted by VMware.
Consequently, a standard x86 server obtained the capability of
being partitioned into several VMs that use virtualized components.
This characteristic allowed for an independent and concurrent
processing of different OSs and software applications. Although
[14] claimed that the ability to run multiple VMs on a single server
could reduce hardware costs and IT department overhead, [15]
argued that this feature potentially creates a single point of failure
because these VMs solely depend on the physical server to function
properly. In [16], VMMs were classified into type I hypervisors,
which run directly on the bare hardware, and type II hypervisors,
which run on top of a host OS.
3. SYSTEM MODEL
The system model is developed based on [17,18,19]. Figure 1
depicts this model, which mainly consists of the user, task, VM
manager, scheduling algorithm, servers, and energy meter.
1. User: represents the cloud user who will send tasks to the cloud
computing data center (DC).
2. Task: refers to the task sent by cloud users to the cloud
computing DC. Each task has the following elements: size,
maximum completion time, and ID number.
3. VM manager: handles the received tasks after accepting the VM
status and decision from the power saver scheduling algorithm
(PSSA).
4. PSSA: schedules the tasks of cloud users according to the
information sent from the VM manager.
Figure 1. System Model
5. VMs and servers: execute the users' tasks and return the results.
6. Energy meter: computes the energy consumed in DC.
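The components above can be sketched as a minimal, illustrative program; the class and method names are our own placeholders, and the least-loaded placement rule merely stands in for the actual PSSA logic.

```python
# Minimal sketch of the system model in Figure 1, under assumed names and a
# trivial scheduling rule (least-loaded VM); the real PSSA is more involved.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    size: float          # e.g., instructions
    deadline: float      # maximum completion time

@dataclass
class VM:
    vm_id: int
    load: float = 0.0

@dataclass
class VMManager:
    vms: list = field(default_factory=list)

    def schedule(self, task):
        """Stand-in scheduler: place the task on the least-loaded VM."""
        target = min(self.vms, key=lambda vm: vm.load)
        target.load += task.size
        return target.vm_id

mgr = VMManager(vms=[VM(0), VM(1), VM(2)])
placed = [mgr.schedule(Task(i, size=10.0, deadline=20.0)) for i in range(3)]
print(placed)  # tasks spread across the three initially idle VMs: [0, 1, 2]
```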
4. SIMULATION TOOL
The architecture and main features of the cloud computing
simulator used in this study are explained in this section. Social
networking, content delivery, web hosting, and real-time
instrumented data processing are examples of traditional and
emerging cloud-based applications. These types of applications
possess different compositions, configurations, and deployment
requirements. Quantifying the performance of scheduling and
allocation policies in real cloud environments under different
conditions and various applications and service models is
extremely difficult due to the following: (1) users have
heterogeneous and conflicting quality of service requirements, and
(2) clouds have varying demands, supply patterns, and system
sizes. When real infrastructures, such as Amazon EC2, are adopted,
experiments are limited to the infrastructure scale, and reproducing
the results becomes challenging. This situation arises because the
conditions prevailing in an Internet-based environment cannot be
controlled by resource allocation developers and application
scheduling algorithms [20]. Therefore, we used the GreenCloud
simulator, which can be applied to develop novel solutions for
monitoring, resource allocation, workload scheduling, and
optimization of communication protocols and network
infrastructure. The GreenCloud simulator is an extension of the
well-known NS2 network simulator and was released under the
General Public License Agreement [5].
4.1 Simulator Architecture
Figure 2 shows the structure of the GreenCloud simulator using
three-tier data center architecture.
The main components of this simulator are listed as follows [5]:
1. Servers, which form the data center in the cloud, are used to run tasks.
2. Switches and links constitute the network topology and the resulting
connections by providing different cabling solutions.
3. Workloads are considered objects that model various cloud user
services, such as instant messaging and social networks.
4.2 Simulator Implementation
In this experiment, GreenCloud was used to test, evaluate, and
compare the adopted and proposed algorithms. Implementation was
realized by modifying the original source code of the simulator. The
original source code was written in C++ and the Tool Command
Language and was based on the NS2 network simulator. Eclipse
Standard version 4.4 editor was used for the modification.
Figure 3 provides a general view of the simulation steps. The
GreenCloud simulator is set up and installed during the pre-
simulation phase, and the simulator configurations are read from
the files. In the next step, the data center is created, and the cloud
network is developed. This step requires the simulation
configuration settings that represent the network and the servers’
specifications. Notably, each server may have its own
specifications for forming a heterogeneous paradigm.
Subsequently, the simulator initiates an event for the arrival of each
task to the system. After the events are triggered, the simulator
begins to execute the scheduling algorithm to map the tasks onto
appropriate VMs. Then, the simulator begins monitoring the
execution of tasks and recording the ending time of execution and
the consumed energy in special tracing files. When all the tasks
have passed through the GreenCloud simulator, simulation stops
and the post-simulation phase begins. This phase involves reading
the tracing files and sending the results to an Excel sheet for
analysis.
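The simulation workflow can be condensed into a hypothetical runnable driver; every name below is a placeholder of our own and does not reflect GreenCloud's actual C++/Tcl code.

```python
# Condensed, illustrative driver for the phases above: tasks arrive as events,
# are mapped to a VM, and completion time plus consumed energy are traced.
from dataclasses import dataclass

@dataclass
class SimTask:
    task_id: int
    arrival: float    # seconds
    runtime: float    # seconds

@dataclass
class SimVM:
    clock: float = 0.0
    POWER_W: float = 200.0   # assumed constant power while executing

    def execute(self, task):
        start = max(self.clock, task.arrival)
        self.clock = start + task.runtime
        energy_wh = self.POWER_W * task.runtime / 3600.0
        return self.clock, energy_wh

def run_simulation(tasks, vm):
    trace = []
    for task in sorted(tasks, key=lambda t: t.arrival):  # arrival events
        end, energy = vm.execute(task)                   # map, execute, monitor
        trace.append((task.task_id, end, energy))        # tracing-file analogue
    return trace                                         # post-simulation analysis

trace = run_simulation([SimTask(1, 0.0, 36.0), SimTask(0, 5.0, 36.0)], SimVM())
print(trace)  # [(1, 36.0, 2.0), (0, 72.0, 2.0)]
```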
The main configuration for the GreenCloud simulator used in this
work is provided in Table 1. The table lists the components of the
proposed system and their specifications.
Table 1. GreenCloud Configuration

Parameter                      Value
DC type                        Three-tier topology
No. of core switches           2
No. of aggregation switches    4
No. of access switches         8
No. of servers                 1,440
Access links                   1 Gb/s
Aggregation links              1 Gb/s
Core links                     10 Gb/s
DC load                        0.1, 0.2, 0.3, ..., 0.9, 1.0
Simulation time                60 min
Power management in server     DVFS and DNS
Task size                      8,500 bits
Task deadline                  20 s
Task type                      High-performance computing
5. DATA CENTER DESIGN MODEL
Commonly adopted network architectures in data centers include
multi-tier architectures, i.e., two-tier (2T), three-tier (3T), and 3T
high-speed (3Ths) architectures [21]. 3T is the most popular
architecture in large-scale data centers. In such architecture, the
core layer connects the data center to the Internet backbone, the
aggregation layer provides diverse functions (such as content
switching, Secure Sockets Layer, and firewalls), and the access
layer connects the internal data servers that are arranged in a rack
blade assembly. Multiple links are present from one tier to another.
These links, along with multiple internal servers, ensure availability
and fault tolerance in the data center, but at the cost of generating
redundancy.
Server farms in current DCs include over 100,000 hosts, in which
70% of all communication activities are internally performed [22].
The most frequently applied DC architecture is the 3T architecture.
The three layers of the DC architecture, namely, the core,
aggregation, and access networks, are presented in Figure 2 [23].
The 3T DC topology selected for the simulations includes 1,440
servers, which are set into 16 racks (i.e., 90 servers per rack). The
racks are linked using 2 cores, 4 aggregations, and 8 access
switches. The network links that connect the aggregation switches
to the core have a data rate of 10 Gb/s. The links that connect the
aggregation and access switches, along with the access links that
connect computing servers to the top-of-rack switches, have a data
rate of 1 Gb/s. The propagation delay of all the links is fixed at 3.3
µs. Table 1 summarizes the simulation setup parameters [24].
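A quick arithmetic check of the topology described above (16 racks of 90 servers, with 2 core, 4 aggregation, and 8 access switches); the even split of racks across access switches is our inference, not stated explicitly in the text.

```python
# Consistency check of the simulated 3T topology (Table 1 and the text above).
racks = 16
servers_per_rack = 90
core, aggregation, access = 2, 4, 8        # switch counts per tier

total_servers = racks * servers_per_rack
print(total_servers)                       # 1440, matching Table 1
print(racks // access)                     # 2 racks per access switch (assumed even split)
print(total_servers // access)             # 180 servers behind each access switch
```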
6. SYSTEM PARAMETERS
The criterion for evaluating the virtualized environment is
introduced in this section. Two types of parameters are used: input
and output. Input parameters configure the system, whereas output
parameters measure system performance.
6.1 Input Parameters
The following input parameters are fed to the simulator before it
starts.
Number of DCs: Given that we are focusing on a VM in DC
and the consumed energy in DC, only one DC is assumed to
be present.
Number of VMs in DC and their specifications: The number of
VMs in DC that are dedicated to finishing all the submitted
tasks, along with the specifications of these VMs.
Number of tasks submitted and their specifications: A set
of tasks is generated and submitted to DC, and each task has a
deadline and size. The scheduler should handle tasks
according to their specifications.
Scheduling algorithm: The manner in which tasks are
mapped onto VMs affects the simulation results. In each
experiment, the algorithm that maps the tasks onto VMs is
presented as an input parameter. In this research, we adopt two
task scheduling algorithms: the green scheduler algorithm and
PSSA.
6.2 Output Parameters
Several performance metrics are used to test and evaluate the
proposed models. These parameters determine system efficiency
according to the input parameters. The output parameters are
described as follows:
Makespan: The maximum completion time of all the received
tasks per unit time. This parameter indicates the quality of job
assignment to resources in terms of execution time. This
parameter can be written formally as in Equation 1.

makespan = \max_{t_i \in T} CT(t_i)   (1)

where CT(t_i) denotes the completion time of task t_i, which belongs to the task list T.
Throughput: The number of executed tasks is calculated to
study the efficiency of meeting task deadlines. This parameter
is calculated using Equation 2.

Throughput = \sum_{i=1}^{n} x_i   (2)

where n is the number of submitted tasks and x_i is the decision variable of Equation 3, which indicates whether task t_i was completed by its deadline d_i:

x_i = 1 if CT(t_i) \le d_i, and x_i = 0 otherwise.   (3)
Figure 2. GreenCloud Simulator: A 3-Tier Architecture [5]
Figure 3. Simulation Steps
Task failure: The number of task failures indicates the
number of tasks that fail to meet their deadlines, as shown in
Equation 4.

F = n - \sum_{i=1}^{n} x_i   (4)

where F is the number of failed tasks, n is the number of submitted tasks, and x_i is the decision variable that indicates whether the task met its deadline, as provided in Equation 3.
DC and server loads: The DC load represents the percentage
of computing resources that are allocated for incoming tasks
with respect to data center capacity. This load should be
between 0 and 100%. A load close to 0 indicates an idle data
center, whereas a load equal to 100% denotes a saturated data
center [5]. To calculate DC and server loads, let S = {s_1, s_2, ..., s_m} be the set
of servers in DC. Each server s_i has a nominal capability N_i in
million instructions per second (MIPS), which denotes the maximum
computing capability of the server at its maximum frequency, and a
current load C_i in MIPS. Equation 5 gives the load l_i of each
server s_i as the ratio of the current server load to the maximum
computing capability.

l_i = C_i / N_i   (5)

The DC load, computed using Equation 6, is equal to the
average load of all its hosts.

L_DC = (1/m) \sum_{i=1}^{m} l_i   (6)
DC energy consumption: The total energy consumption in
DC represents the sum of the energy consumed by the servers
and switches [5]. In this research, we focus on the energy of
servers and network switches. Hence, the power consumption
of an average server can be expressed using Equation 7.

P_server = P_fixed + P_f   (7)

where P_fixed accounts for the portion of the consumed power that
does not scale with the operating frequency f, and P_f is the
frequency-dependent CPU power consumption.

The power consumption of a switch can be expressed using
Equation 8.

P_switch = P_chassis + n_linecard * P_linecard + \sum_r n_port(r) * P_port(r)   (8)

Figure 4 shows the detailed components of switch energy
consumption, where P_switch is the total power consumed by the
switch, P_chassis is the power consumed by the switch's chassis
(hardware), n_linecard is the number of line cards in the switch,
P_linecard is the power consumed by any active line card, and
P_port(r) represents the power consumed by a port (transceiver)
that runs at bit rate r.
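The output parameters above (Equations 1 through 8) can be exercised on toy values; every number below is illustrative, not a measurement from the experiments, and the switch constants are assumptions.

```python
# Illustrative computation of the output parameters (Equations 1-8).

# --- Task metrics: each task has a completion time CT and a deadline d ---
tasks = [(12.0, 20.0), (18.0, 20.0), (25.0, 20.0)]      # third misses its deadline

x = [1 if ct <= d else 0 for ct, d in tasks]            # decision variable x_i (Eq. 3)
makespan = max(ct for (ct, d), xi in zip(tasks, x) if xi)  # Eq. 1, deadline-meeting tasks
throughput = sum(x)                                     # Eq. 2
failures = len(tasks) - throughput                      # Eq. 4

# --- Server and DC load: (current MIPS, nominal MIPS) per server ---
servers = [(500.0, 1000.0), (250.0, 1000.0), (0.0, 1000.0)]
loads = [c / n for c, n in servers]                     # l_i (Eq. 5)
dc_load = sum(loads) / len(loads)                       # average over hosts (Eq. 6)

# --- Power models ---
def server_power(p_fixed, p_freq):
    """Eq. 7: frequency-independent part plus frequency-dependent CPU part."""
    return p_fixed + p_freq

def switch_power(p_chassis, n_linecards, p_linecard, ports):
    """Eq. 8: chassis + active line cards + per-port power summed over bit rates.
    `ports` maps a bit-rate label to (number_of_ports, power_per_port)."""
    return p_chassis + n_linecards * p_linecard + sum(n * p for n, p in ports.values())

print(makespan, throughput, failures)     # 18.0 2 1
print(loads, dc_load)                     # [0.5, 0.25, 0.0] 0.25
print(server_power(130.0, 70.0))          # 200.0
print(switch_power(1500.0, 2, 35.0, {"1G": (48, 0.9), "10G": (4, 5.0)}))
```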
7. RESULTS
To demonstrate the impact of server virtualization on the server’s
energy consumption, two experiments were conducted to measure
the following parameters: server’s energy consumption, DC load,
and makespan. That is, PSSA under the GreenCloud simulator is
used to compare two simulation scenarios (with and without server
virtualization).
7.1 Makespan
Figure 5 depicts the makespan for a set of 20 experiments
conducted on 10 groups of data center loads using the proposed
DC with virtualized servers, such that all tasks were sent to the VM.
By contrast, the second scenario required running the DC without
server virtualization, such that all tasks were sent directly to the
physical machine (PM). The value of this parameter reflects the
ending time only for tasks that meet their deadlines.
In this experiment, the non-virtualized environment performed
better in terms of makespan. This outcome is attributed to the
scheduler sending the tasks directly to the PM instead of the VM,
which requires additional time for each task to be created at each
PM.
7.2 Energy Consumption of Server
Figure 6 presents the energy consumed by the servers at different
DC loads for PSSA. The experiment was conducted 20 times. The
first 10 experiments were for non-virtualized servers with different
Figure 5. Makespan at Different Loads for Virtualized and Non-virtualized DC (x-axis: DC load, 10%–100%; y-axis: time in minutes; series: Power Saver Algorithm with and without server virtualization)

Figure 6. Total Server Energy Consumption for Virtualized and Non-virtualized DCs (x-axis: DC load, 10%–100%; y-axis: energy consumed by servers in W·h; series: Power Saver Algorithm with and without server virtualization)
Figure 4. Detailed Components of Switch Energy
Consumption
DC loads ranging from 10% to 100%, whereas the last 10
experiments were for virtualized servers with different DC loads
ranging from 10% to 100%. When DC load was increased, the total
energy consumed by the servers increased under both scenarios.
However, the virtualized servers consume less energy than their
non-virtualized counterparts.
In this experiment, the virtualized environment achieved higher
energy saving than the non-virtualized environment. The total
energy saving is approximately 645 Wh (approximately 52,389
kWh annually) because all the tasks were sent to the VMs instead of
the PMs, thereby placing all other PMs in Dynamic Shutdown
(DNS) mode and making them consume less energy.
7.3 DC Load
Figure 7 depicts the DC load for a set of 20 experiments conducted
on 10 groups of data center loads using the proposed algorithm. The
first 10 experiments were for non-virtualized servers with different
DC loads ranging from 10% to 100%, whereas the last 10
experiments were for virtualized servers with different DC loads
ranging from 10% to 100%. The value of this parameter indicates
the current DC load for all submitted tasks and DC hosts.
Furthermore, Figure 7 illustrates a continuous load difference
between the two sets of experiments. Notably, the power saver
scheduler with virtualized servers demonstrated a noticeable
improvement under loads ranging from 30% to 100% compared
with the power saver scheduler with non-virtualized servers.
The definition of DC load in Section 6.2 implies that the more
servers are running, the higher the DC load. The PSSA creates
VMs on top of the PMs, thereby decreasing the number of
running servers and reducing the DC load.
7.4 Server Load
Figure 8 shows the server loads for all 1,440 servers after
distributing 41,436 tasks, whereas Figure 9 presents the number of
running servers for virtualized and non-virtualized DCs. Figures 8
and 9 indicate that the PSSA with server virtualization finished
41,436 tasks earlier by operating 666 servers at high loads, thereby
minimizing energy consumption. By contrast, the DC with non-virtualized
servers used 689 servers to run the same number of tasks
at low loads and was therefore less efficient than its virtualized
counterpart in reducing the amount of energy required to finish the tasks.
8. CONCLUSION
The PSSA and the green scheduler algorithm were adopted to demonstrate the effect
of server virtualization on DC energy consumption. The
experiment was conducted using the GreenCloud simulator. Three
parameters were used in the experiments: makespan, DC load, and
server’s energy consumption. The results indicated that the
virtualized DC environment exhibited better performance than the
non-virtualized environment in terms of the DC load and server
energy consumption parameters. Nevertheless, the main drawback
of the virtualized environment is the makespan parameter.
The pool of VMs in a cloud computing data center must be
managed using an efficient task scheduling algorithm to maintain
quality of service and resource utilization. Evidently, VM failure
decreases total system throughput. This issue can be resolved via a
VM recovery process that allows the cloning of VMs to another
host. VM migration should be considered when allocating
resources for urgent jobs because transferring a huge amount of
data belonging to a VM may decrease total performance.
Future work should focus on power consumption techniques
relevant to network switches and storage area networks. Future
research must also emphasize the development of different
workload consolidation and traffic aggregation techniques.
9. REFERENCES
[1] Ye, K., Huang, D., Jiang, X., Chen, H., & Wu, S. (2010).
Virtual Machine Based Energy-Efficient Data Center
Architecture for Cloud Computing: A Performance
Perspective. 2010 IEEE/ACM Int’l Conference on Green
Computing and Communications & Int'l Conference on
Figure 8. Server Loads for Virtualized and Non-virtualized DCs (x-axis: server number, 1–1,440; y-axis: server load, 0%–80%; series: Power Saver Algorithm with and without server virtualization)

Figure 9. Number of Running Servers for Virtualized and Non-virtualized DCs (y-axis: number of servers, 650–695, for the Power Saver Algorithm with and without server virtualization)

Figure 7. Total DC Workload for Virtualized and Non-virtualized DCs (x-axis: simulation load, 10%–100%; y-axis: DC load, 0%–80%; series: Power Saver Algorithm with and without server virtualization)
Cyber, Physical and Social Computing, 171–178.
doi:10.1109/GreenCom-CPSCom.2010.108
[2] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C.
Limpach, I. Pratt, and A. Warfield, "Live migration of virtual
machines," in the 2nd Symposium on Networked Systems
Design and Implementation (NSDI 2005), Boston,
Massachusetts, USA, 2005, pp. 273-286.
[3] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, and
X. Zhu, "No "power" struggles: coordinated multi-level
power management for the data center," in the 13th
International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS
2008), Seattle, WA, USA, 2008, pp. 48-59
[4] Z. Zhang and S. Fu, "Characterizing power and energy usage
in cloud computing systems," in the 3rd IEEE International
Conference on Cloud Computing Technology and Science
(CloudCom 2011), Athens, Greece, 2011, pp. 146-153.
[5] Kliazovich, D., Bouvry, P., & Khan, S. U. (2012).
GreenCloud: A packet-level simulator of energy-aware cloud
computing data centers. Journal of Supercomputing, 62(3),
1263–1283. http://doi.org/10.1007/s11227-010-0504-1
[6] Atiewi, S., Yussof, S., & Ezanee, M. (in press). A power
saver scheduling algorithm using DVFS and DNS techniques
in cloud computing datacentres. International Journal of
Grid and Utility Computing.
[7] Pike Research, "Cloud Computing Energy Efficiency," Pike
Research, Boulder, 2010.
[8] D. Talbot, "Greener Computing in the cloud," Technology
Review, 24 May 2018. [Online]. Available:
http://www.technologyreview.com/business/23520/page1/.
[9] Ou, G. (2006). Introduction to server virtualization.
TechRepubliccom, 5, 1. Retrieved from
http://articles.techrepublic.com.com/5100-10878_11-
6074941.html
[10] Malhotra, L., Agarwal, D., & Jaiswal, A. (2014).
Virtualization in cloud computing. J Inform Tech Softw Eng,
4(136), 2.
[11] Creasy R.J. The Origin of the VM 370 Time-Sharing
System. IBM J. Res. Dev. 1981;25:483490.
[12] Rao K.T., Kiran P.S., Reddy L.S.S. Energy Efficiency in
Datacenters through Virtualization: A Case Study. Comput.
Sci. Technol. 2010;10:26.
[13] Szubert D. The Register; 2007. Virtualisation Gets Trendy.
Available
online:http://www.theregister.co.uk/2007/06/06/virtualisation
_gets_trendy
[14] Panek W., Wentworth T. Mastering Microsoft Windows 7
Administration. John Wiley and Sons; Hoboken, NJ, USA:
2010.
[15] Kappel J.A., Velte A.T., Velte T.J. Microsoft Virtualization
with Hyper-V. McGraw Hill Professional; New York, NY,
USA: 2009.
[16] Goldberg R.P. Architecture of Virtual Machines.
Proceedings of the Workshop of Virtual Computer Systems;
New York, NY, USA. 30 April 1973; pp. 74112.
[17] Kliazovich, D., Bouvry, P., & Khan, S. U. (2010). DENS:
Data Center Energy-Efficient Network-Aware Scheduling.
2010 IEEE/ACM Int’l Conference on Green Computing and
Communications & Int'l Conference on Cyber, Physical and
Social Computing, 16(1), 6975.
http://doi.org/10.1109/GreenCom-CPSCom.2010.31
[18] Buyya, R., Ranjan, R., & Calheiros, R. N. (2010).
InterCloud : Utility-Oriented Federation of Cloud Computing
Environments for Scaling of. In International Conference on
Algorithms and Architectures for Parallel Processing (pp.
1331). http://doi.org/10.1007/978-3-642-13119-6_2
[19] Wu, C. M., Chang, R. S., & Chan, H. Y. (2014). A green
energy-efficient scheduling algorithm using the DVFS
technique for cloud datacenters. Future Generation Computer
Systems, 37, 141147.
http://doi.org/10.1016/j.future.2013.06.009
[20] Buyya, R., Ranjan, R., & Calheiros, R. N. (2009). Modeling
and Simulation of Scalable Cloud Computing Environments
and the CloudSim Toolkit : Challenges and Opportunities, 1–
11.
[21] Cisco. (2007). Cisco Data Center Infrastructure 2.5 Design
Guide. Cisco, (6387).
[22] Mahadevan, P., Sharma, P., Banerjee, S., & Ranganathan, P.
(2009). A power benchmarking framework for network
devices. In Lecture Notes in Computer Science (including
subseries Lecture Notes in Artificial Intelligence and Lecture
Notes in Bioinformatics) (Vol. 5550 LNCS, pp. 795808).
http://doi.org/10.1007/978-3-642-01399-7_62
[23] Baliga, J., Ayre, R. W. A., Hinton, K., & Tucker, R. S.
(2011). Green Cloud Computing: Balancing Energy in
Processing, Storage, and Transport. Proceedings of the IEEE,
99(1), 149167.
[24] Cisco. (2009). Cisco Data Center Interconnect Design and
Implementation Guide. System. Retrieved from
http://www.cisco.com/en/US/solutions/collateral/ns340/ns51
7/ns224/ns949/ns304/ns975/data_center_interconnect_design
_guide.pdf
... It allows a single physical resource or application to be shared among many clients and organizations. Virtualization is described as software that lets a single physical computing device be automatically partitioned into one or more virtual devices [21], [22]. In turn, each of these virtual devices can be used and managed easily; this reduces cost by increasing infrastructure utilization and provides the agility required to speed up IT processes [1]. ...
... When multiple users demand resources, reacting to those demands requires substantial investment in physical infrastructure; cloud infrastructure providers therefore manage this situation by offering VM services based on user requests, at higher quality and lower cost [5], [25]. There are different forms of virtualization, such as storage, server, appliance, desktop, and network virtualization [21], [26]. ...
... Atiewi et al. [21] studied the impact of virtualization on energy consumption in the cloud. They presented an experiment using a power saver scheduler algorithm (PSSA) under a green cloud simulator in order to compare two scenarios (a virtualized server and a non-virtualized server). ...
Article
Cloud computing is a new technology managed by a third party, the "cloud provider," to provide clients with services anywhere, at any time, and under various circumstances. To provide clients with cloud resources and satisfy their needs, cloud computing employs virtualization and resource-provisioning techniques. Providing clients with shared virtualized resources (hardware, software, and platform) is a major challenge for the cloud provider because of over-provisioning and under-provisioning problems. Therefore, this paper highlights proposed approaches and scheduling algorithms for resource allocation in cloud computing through virtualization in the datacenter. The paper also aims to explore the role of virtualization in providing resources effectively based on clients' requirements. The results show that each proposed approach and scheduling algorithm plays a clear role in utilizing the shared resources of the cloud data center. The paper also finds that the virtualization technique has a significant impact on enhancing network performance, saving cost by reducing the number of physical machines (PMs) in the datacenter, balancing the load, conserving server energy, and allocating resources actively to satisfy clients' requirements. Based on this review, the availability of virtual machine (VM) resources and the execution time of requests are the key factors to consider in any optimal resource-allocation approach.
... Cloud service providers benefit from expanded resource utilization, so permitting customers to rent resources of restricted availability ensures full use of resources and hence profit maximization [34,66]. Regarding energy consumption, the fourth evaluation metric, CPU and resource utilization directly affect the power consumed by tasks: energy consumption increases when a CPU is not kept busy and when resources are exposed to heavy demand [67][68][69]. The last evaluation metric is the degree of imbalance (DI), which calculates the discrepancy among VMs [70] and is computed with the following equation: ...
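The equation itself is elided in the excerpt above. A commonly used definition of the degree of imbalance in the cloud task-scheduling literature, consistent with the description here (an assumption, since the citing paper's exact formula is cut off), is

$$DI = \frac{T_{\max} - T_{\min}}{T_{\mathrm{avg}}}$$

where $T_{\max}$, $T_{\min}$, and $T_{\mathrm{avg}}$ are the maximum, minimum, and average execution times across the VMs; a smaller $DI$ means the load is spread more evenly.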
Article
In the context of cloud computing, one problem that is frequently encountered is task scheduling. This problem has two primary implications: the planning of tasks on virtual machines and the attenuation of performance. Addressing task scheduling in cloud computing requires nontraditional optimization approaches to reach the optima of the problem, so the present paper puts forth a hybrid multiple-objective approach called hybrid grey wolf and whale optimization (HGWWO), which integrates two algorithms, the grey wolf optimizer (GWO) and the whale optimization algorithm (WOA), with the purpose of combining the advantages of each for minimizing cost, energy consumption, and the total execution time needed for task implementation, while also improving the use of resources. The aims of the proposed approach are assessed with the help of the CloudSim tool. As the experimental results indicate, the proposed approach performs at a superior level compared to the original GWO and WOA algorithms on their own with regard to cost, energy consumption, makespan, use of resources, and degree of imbalance.
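The abstract does not give the authors' hybridization rule, so the following is only a minimal sketch of how GWO and WOA operators might be combined, assuming the standard GWO encircling update and the WOA spiral update; the function name `hgwwo_step` and the 50/50 operator switch are illustrative assumptions, not the paper's method.

```python
import math
import random

def hgwwo_step(agents, fitness, a):
    """One illustrative hybrid step: rank agents by fitness (lower is
    better), then move each agent either with a GWO update guided by the
    three best wolves (alpha, beta, delta) or with a WOA spiral move
    toward the best agent."""
    ranked = sorted(agents, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    dim = len(alpha)
    new_agents = []
    for x in agents:
        if random.random() < 0.5:
            # GWO encircling update: average the pulls of the three leaders
            pos = []
            for i in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    pulls.append(leader[i] - A * abs(C * leader[i] - x[i]))
                pos.append(sum(pulls) / 3.0)
        else:
            # WOA spiral update: logarithmic-spiral path toward the best agent
            l = random.uniform(-1, 1)
            pos = [abs(alpha[i] - x[i]) * math.exp(l) * math.cos(2 * math.pi * l)
                   + alpha[i] for i in range(dim)]
        new_agents.append(pos)
    return new_agents
```

In a full optimizer this step would be iterated while shrinking the control parameter `a` from 2 to 0, as in standard GWO, so exploration gives way to exploitation over the run.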
... Virtualization technology offers on-demand harnessing of a pool of resources in a pay-as-you-use (PaU) fashion [4,24]. The advancement of virtual machine (VM) and network technologies has initiated the growth of commercial cloud service providers. ...
Article
Resource provisioning is a key issue in large-scale distributed systems such as cloud computing systems. Several resource-provider systems utilize preemptive resource-allocation techniques to maintain a high quality-of-service level. When resources are lacking for high-priority requests, leases/jobs with higher priority can run by suspending or canceling leases/jobs with lower priority to release the required resources. State-of-the-art preemptive resource-allocation methods fall into two classes, namely, (1) heuristic and (2) brute force. Heuristic-based methods are fast but cannot maintain system performance, while brute-force-based methods are the opposite. In this work, we propose a new multi-objective preemptive resource-allocation policy that benefits from both classes: a new heuristic called Best K-First-Fit (Best-KFF). Best-KFF searches for the first k preemption choices at each physical machine (PM) and then sorts the preemption choices obtained from the PMs with respect to several objectives (e.g., resource utilization). Best-KFF then selects the best choice that maintains cloud computing system performance. Thus, the Best-KFF algorithm is a compromise between the heuristic and brute-force classes; the higher the value of k, the larger the search space. The Best-KFF method maximizes the resource utilization of the physical machines and minimizes the average waiting time of advance-reservation requests, the number of lease preemptions, the preemption time, and energy consumption. The proposed method was thoroughly examined and compared against state-of-the-art methods. Experimental results on various cloud computing systems demonstrate that the proposed preemption policy outperforms the state-of-the-art methods.
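The abstract describes the Best-KFF search procedure but not its data layout, so the following Python sketch is only illustrative: the tuple shapes, the enumeration of victim sets as contiguous runs of preemptible leases, and the `score` callback are assumptions, not the paper's definitions.

```python
def best_kff(pms, request, k, score):
    """Illustrative sketch of Best-KFF: collect up to k feasible
    preemption choices per physical machine (PM), then return the choice
    that scores best under a multi-objective function (lower is better).

    pms     -- list of (free_capacity, leases) pairs; each lease is a
               (lease_id, priority, capacity) tuple, with a larger
               priority number meaning lower priority
    request -- (priority, needed_capacity) of the incoming lease
    """
    prio, need = request
    choices = []
    for pm_idx, (free, leases) in enumerate(pms):
        preemptible = [l for l in leases if l[1] > prio]
        found = 0
        # Enumerate candidate victim sets as runs of preemptible leases
        # starting at each index, stopping after k feasible choices.
        for start in range(len(preemptible) + 1):
            if found >= k:
                break
            freed, victims = free, []
            for lease in preemptible[start:]:
                if freed >= need:
                    break
                victims.append(lease)
                freed += lease[2]
            if freed >= need:
                choices.append((pm_idx, victims))
                found += 1
    return min(choices, key=score) if choices else None
```

With a score that counts preemptions, a PM that already has enough free capacity wins with an empty victim set, matching the policy's goal of minimizing lease preemptions.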
... Many resources are needed to build the architecture of cloud computing data servers (Ataie et al., 2019), so costs can be minimized by focusing on acquiring broadband connectivity to the cloud (Atiewi, Abuhussein, & Saleh, 2018). Efficient allocation in cloud computing also governs the usage of CPU cores, memory, and disk storage based on cloud providers (Alshamrani, 2018). ...
Conference Paper
Cloud computing is needed in the educational sector, especially in universities, to ease administration and give everyone access to learning. The development of cloud computing infrastructure therefore needs to serve the aims of the university. Infrastructure as a Service (IaaS) in cloud computing faces problems of resources, security, and finance. This study follows the Kitchenham protocol to systematically explore the literature on the challenges of cloud computing as an infrastructure service in the education field and then reviews techniques that can be used by universities across Indonesia. The study recommends that the management of IaaS be considered carefully so that cloud computing can be developed optimally.
Article
Mobile devices play a necessary role in our daily life and are increasingly being used in different fields. Because of the high usage rate of mobile devices, users need safe technology for connecting to big-data exchange. This is provided by Mobile Cloud Computing (MCC), which integrates mobile and cloud computing. This study shows the role of MCC and big-data investigation in education, which holds extraordinary promise for the future thanks to benefits such as lower cost, easy access, huge storage, sharing, collaborative interaction, resource availability, and flexibility. It also faces challenges regarding mobile network conditions, control of applications, security, and privacy. We reviewed several studies related to mobile education and its potentials and challenges. We also studied the effect of MCC in boosting teaching and learning activities through 250 questionnaires for 150 teachers and 100 students, and conducted a statistical study of using MCC for employees' online courses over a year at El-Kenouz Training Centre in Qena, Egypt.
Chapter
Environmental issues such as pollution, climate change, and global warming have become among the most important problems of today's world. As a result of recent technological developments, the increasing use of technological devices has a significant impact on these environmental problems. The processes spanning the production, use, and disposal of information technologies lead to serious carbon emissions and high energy consumption. Accordingly, the concept of "Green Information Technologies" has emerged to reduce the negative environmental effects of information technologies and has gained global momentum. This chapter of the book addresses, in order: the negative effects of information technologies on the environment, the conceptual framework of green information technologies, the benefits of using green information technologies, practices that can make information technologies green, and the challenges of green information technologies.
Article
A cloud application is software whose processing logic runs in the cloud. Its data is stored across two systems: client-side and server-side. Some processing also takes place on end-users' local hardware and on remote servers. However, most data storage resides on a remote server, which is one of the major perks of using a cloud application; in some cases, a local device with no storage space is built around a cloud application. A cloud application interacts with its users through a web browser, a capability that encourages organizations to switch their infrastructure to the cloud to gain the benefits of digital transformation. Cloud applications make it easier for clients to move or manage their data safely and provide the flexibility emerging organizations require to survive in the digital market. As cloud applications have grown in sophistication, many papers have been written on their branches. This research paper emphasizes the evolution and long-term trends of cloud applications. Its findings enable enterprises uncertain about adoption to decide whether to move to the cloud.
Thesis
In the last decade, a great movement of virtualization and softwarization has promoted the widespread adoption of the cloud computing paradigm, reducing operational expenses and making the digital-services market more accessible and competitive. This revolution presents new opportunities but also new management challenges that demand autonomic management. Service Function Chaining (SFC) promotes studies of how to automate and optimize the use of technologies such as Network Function Virtualization (NFV) and Software-Defined Networks (SDN). In this scenario, identifying overloaded functions, i.e., bottlenecks, is essential. Traditionally, this identification is carried out using hardware metrics such as CPU and memory or through information provided by the applications themselves. Besides the influence of the virtualization environment and the quality of the algorithm's implementation, which compromise the reliability of these metrics, observing the quality of the offered service requires end-user feedback and traffic sniffing. In this work, we present the Network Queuing Assessment (NQA) technique for detecting bottlenecks. NQA identifies the bottleneck without end-user feedback or any interference in the network traffic, regardless of the SFC's composition. We present a wide experimental evaluation to support our conclusions, exposing the mistakes of hardware-metric evaluation in the NFV environment and the better correlation between queue behavior and quality of experience (QoE). The NQA technique guaranteed a QoE above 90% in seven scenarios and at least 79.7% in two scenarios.
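The thesis's actual NQA criterion is not given in the abstract. As a hedged illustration of the underlying idea, that sustained queue buildup signals a bottlenecked function, one could flag a function like this; the threshold, window, and function name are invented for the example.

```python
def queue_bottleneck(occupancy_samples, threshold=0.8, window=5):
    """Flag a bottleneck when network-queue occupancy (0.0-1.0) stays at
    or above `threshold` for `window` consecutive samples."""
    run = 0  # length of the current run of high-occupancy samples
    for occ in occupancy_samples:
        run = run + 1 if occ >= threshold else 0
        if run >= window:
            return True
    return False
```

Requiring a consecutive run, rather than a single spike, avoids flagging transient bursts that the queue drains on its own.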
Article
Cloud computing is a fascinating and profitable area in modern distributed computing. Aside from providing millions of users the means to use offered services through their own computers, terminals, and mobile devices, cloud computing presents an environment with low cost, simple user interface, and low power consumption by employing server virtualisation in its offered services (e.g., Infrastructure as a Service). The pool of virtual machines found in a cloud computing data centre (DC) must run through an efficient task scheduling algorithm to achieve resource utilisation and good quality of service, thus ensuring the positive effect of low energy consumption in the cloud computing environment. In this paper, we present an energy-efficient scheduling algorithm for a cloud computing DC using the dynamic voltage frequency scaling technique. The proposed scheduling algorithm can efficiently reduce the energy consumption for executing jobs by increasing resource utilisation. GreenCloud simulator is used to simulate our algorithm. Experimental results show that, compared with other algorithms, our algorithm can increase server utilisation, reduce energy consumption, and reduce execution time. Reference to this paper should be made as follows: Atiewi, S., Yussof, S., Bin Rusli, M.E. and Zalloum, M. (2018) 'A power saver scheduling algorithm using DVFS and DNS techniques in cloud computing data centres', Int.
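The abstract describes DVFS-based scheduling without giving its power model. The sketch below uses a common cubic dynamic-power approximation with invented wattages and frequencies (not the paper's figures) to show the trade-off DVFS exploits: running a job longer at a lower frequency can still cost less energy.

```python
def dvfs_energy(cycles, f, p_idle=100.0, p_max=250.0, f_max=3.0e9):
    """Energy in joules to execute `cycles` CPU cycles at frequency f
    (Hz), under an illustrative cubic dynamic-power model."""
    t = cycles / f                                     # execution time (s)
    p = p_idle + (p_max - p_idle) * (f / f_max) ** 3   # power draw (W)
    return p * t

# Full speed: 1 s at 250 W = 250 J; half speed: 2 s at 118.75 W = 237.5 J
full = dvfs_energy(3.0e9, 3.0e9)
half = dvfs_energy(3.0e9, 1.5e9)
```

This is why a scheduler with deadline slack can save energy by scaling frequency down, at the cost of a longer execution time.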
Article
Information and communication technology (ICT) has a profound impact on environment because of its large amount of CO2 emissions. In the past years, the research field of “green” and low power consumption networking infrastructures is of great importance for both service/network providers and equipment manufacturers. An emerging technology called Cloud computing can increase the utilization and efficiency of hardware equipment. The job scheduler is needed by a cloud datacenter to arrange resources for executing jobs. In this paper, we propose a scheduling algorithm for the cloud datacenter with a dynamic voltage frequency scaling technique. Our scheduling algorithm can efficiently increase resource utilization; hence, it can decrease the energy consumption for executing jobs. Experimental results show that our scheme can reduce more energy consumption than other schemes do. The performance of executing jobs is not sacrificed in our scheme. We provide a green energy-efficient scheduling algorithm using the DVFS technique for Cloud computing datacenters.