International Journal on Advances in Intelligent Systems, vol. 7, no. 3 & 4, 2014, http://www.iariajournals.org/intelligent_systems/
Legacy Network Infrastructure Management Model for Green Cloud Validated
Through Simulations
Sergio Roberto Villarreal, María Elena Villarreal, Carlos Becker Westphall, and Carla Merkle Westphall
Network and Management Laboratory – Post-Graduate Program in Computer Science
Federal University of Santa Catarina
Florianopolis, SC, Brazil
sergio@lrg.ufsc.br, maria@lrg.ufsc.br, westphal@lrg.ufsc.br, carla@lrg.ufsc.br
Abstract — The concepts proposed by Green IT have changed the priorities in the design of information systems and infrastructure, adding the need for energy efficiency to the traditional performance and cost requirements. The Green Cloud Computing approach builds on the concepts of Green IT and Cloud Computing to provide a flexible and efficient computing environment, but its strategies have paid little attention to the energy cost of network equipment. While Green Networking has proposed principles and techniques that are being standardized and implemented in new networking equipment, data centers still hold a large amount of legacy equipment without these features. In this paper, the basic principles pointed out in related work for power management in legacy network equipment are presented, and a model that applies them to optimize the green cloud approach is proposed. The paper also presents NetPowerCloudSim, an extension to the open-source framework CloudSim, developed to validate the aforementioned model; it adds to the simulator the capability of representing and managing network equipment according to the state changes of servers. Experiments performed to validate the model showed that its application can significantly increase data center efficiency. The major contributions of this paper are the proposed network infrastructure management model and the simulator extension.
Keywords - Green IT; Cloud Computing; Network Management; Data Center; CloudSim.
I. INTRODUCTION
This paper extends [1], which proposes a management model for a data center's network equipment to optimize the green cloud approach, by presenting an extension to the CloudSim simulator and the experiments performed to validate the aforementioned model.
Traditionally, computer systems have been developed with a focus on performance and cost, without much concern for their energy efficiency. However, with the advent of mobile devices, energy efficiency has become a priority because of the need to extend battery life.
Recently, the large concentration of equipment in data centers brought to light the costs of inefficient energy management in IT infrastructure, in both economic and environmental terms, which led to the adaptation and application of technologies and concepts developed for mobile computing to all IT equipment.
The term Green IT was coined to refer to this concern
about the sustainability of IT and includes efforts to reduce
its environmental impact during manufacturing, use and
final disposal.
Cloud computing appears as an alternative to improve the efficiency of business processes since, from the point of view of the user, it decreases energy costs through resource sharing and through efficient, flexible sizing of systems. Nevertheless, from the standpoint of the service provider, the current cloud approach needs to be seen from the perspective of Green IT, in order to reduce data center energy consumption without affecting system performance. This approach is known as Green Cloud Computing [2].
Considering only IT equipment, the main cause of inefficiency in the data center is the low average utilization rate of resources, usually less than 50%, caused mainly by the variability of the workload, which forces the infrastructure to be built to handle load peaks that rarely happen but that would degrade the quality of service if the application were running on a fully occupied server [3].
The strategy used to deal with this situation is workload consolidation, which consists of allocating the entire workload to the minimum possible number of physical resources, keeping them at the highest possible occupancy, and putting the unused physical resources into a state of low energy consumption.
The challenge is how to handle unanticipated load peaks
and the cost of activation of inactive resources.
Virtualization, widely used in the Cloud approach, and the
ability to migrate virtual machines have helped to
implement this strategy with greater efficiency.
To validate green cloud management algorithms and strategies, simulators are used, since performing tests in real environments is not feasible due to the cost, the physical rigidity of the structure and the difficulty of reproducing experiments under controlled conditions.
Calheiros et al. [4] developed CloudSim, an open-source framework for modeling and simulating cloud computing environments, which allows simulating the operation of large-scale data centers on a conventional computer. With
this simulator, it is possible to conduct experiments to validate workload consolidation algorithms, measure power consumption and calculate violations of the contracted service levels. However, its use demands effort to interpret the code and extend it.
Strategies to improve efficiency in data centers have been based mainly on the servers, cooling systems and power supply systems, while the interconnection network, which accounts for an important share of energy consumption, has not received much attention; moreover, the algorithms proposed for server load consolidation usually disregard the consolidation of network traffic [5][6].
According to Bianzino et al. [7], traditionally the
networking system design has followed two principles
diametrically opposed to the aims of Green Networking:
oversizing to support demand peaks and redundancy for the
single purpose of assuming the task when other equipment
fails.
The concepts of Green IT, albeit late, have also reached the design and configuration of network equipment, leading to Green Networking, whose primary objective is to introduce energy-aware design in networks without compromising performance or reliability. It has to deal with a central problem: the energy consumption of traditional network equipment is virtually independent of the traffic workload [8].
The main strategies of Green Networking are proportional computing, which adjusts both the equipment processing speed and the link speed to the workload, and traffic consolidation, which is implemented by considering traffic patterns and turning off components that are not needed.
While the techniques of Green Networking are beginning to be standardized and implemented in new network equipment, a large amount of legacy equipment forms the infrastructure of current data centers. The works presented in the next section show that it is possible to manage these devices properly so as to make network consumption roughly proportional to the workload.
Since the more efficient the management of virtual machines and physical servers becomes, the greater the network's share of the data center's total consumption, the need to include network equipment in the green cloud model is reinforced.
Thereby, there is both the need and the possibility to add to green cloud management systems a means of interaction with the data center network management system, in order to synchronize workload consolidation and server shutdown with the needs of network traffic consolidation.
In this article, the principles suggested in recent papers by several authors for power management in legacy network equipment are presented, and their application to optimize the green cloud approach is proposed. An extended version of CloudSim called NetPowerCloudSim and the results of the experiments performed to validate the model are also presented.
The remainder of this paper is organized as follows: Section II describes the related work on which our proposal is based; the proposal itself is presented in Section III, along with an analytic case study that shows the possible results of applying the model. Section IV presents NetPowerCloudSim, the experiments performed with this extended simulator to validate the model, and the results obtained. Finally, in Section V, concluding remarks and proposals for future work are stated.
II. RELATED WORK
Mahadevan et al. [9] present the results of extensive research conducted to determine the consumption of a wide variety of network equipment under different conditions. The study was performed by measuring the consumption of equipment in production networks, which made it possible to characterize the energy expenditure as a function of the configuration and use of the equipment, and to determine a mathematical expression that allows calculating it with an accuracy of 2%. This expression establishes that total consumption has a fixed component, which is the consumption with all ports off, and a variable component, which depends on the number of active ports and the speed of each port.
The research determined that the power consumed by the equipment is relatively independent of the traffic workload and of the size of the packets transmitted, and depends on the number of active ports and their speed. The energy saved is greater when the port speed is reduced from 1 Gbps to 100 Mbps than from 100 Mbps to 10 Mbps.
This research also presents a table with the average time needed to reach the operational state after boot for each equipment category, and demonstrates that the behavior of current equipment is not proportional, as would be expected according to the proposals of Green Networking; therefore, the application of traffic consolidation techniques has the potential to produce significant energy savings.
Mahadevan et al. [10], continuing the work described above, argue that switch consumption should ideally be proportional to the traffic load; since in legacy devices the reality is quite different, they propose techniques to bring network consumption closer to proportional behavior through the application of configurations available in all devices.
The results are illustrated in Figure 1, which shows the ideal behavior, labeled "Energy Proportional", which corresponds to a network with fully energy-aware equipment; the actual curve of most of today's networks, where consumption is virtually independent of load, labeled "Current"; and finally the consumption curve obtained by applying the techniques they proposed, labeled "Mahadevan's techniques".
The recommended configurations are: slow down the
ports with low use, turn off unused ports, turn off line cards
that have all their ports off and turn off unused switches.
The authors, through field measurements, have shown that it
is possible to obtain savings of 35% in the consumption of a data center network with the application of these settings. Also, using simulations, they demonstrated that under ideal conditions savings of 74% are possible by combining server load consolidation and network traffic consolidation.
Werner [11] proposes a solution for the integrated control of servers and support systems for the green cloud called OTM (Organization Theory Model). This approach, based on Organization Theory, defines a model for the allocation and distribution of virtual machines that was validated through simulations and achieved up to 40% energy savings compared to the traditional cloud model.
The proposed model determines when to turn off, resize
or migrate virtual machines, and when to turn on or off
physical machines based on the workload and the SLA
(Service Level Agreement) requirements. The solution also
envisages the shutdown of support systems. Figure 2 shows
the architecture of the management system proposed, which
is based on norms, roles, rules and beliefs.
Calheiros et al. [4], from the University of Melbourne,
present the CloudSim simulator, an open-source framework
which supports large scale cloud environment modeling and
simulation in a conventional computer with low consumption
of computational resources. This tool was designed
specifically for modeling cloud computing infrastructures,
and offers support to virtualized environments simulation
and to modeling data centers with large amounts of servers.
This version of the simulator has a class called NetworkTopology, which provides information about communication latency between entities, but does not allow representing network equipment or its energy consumption.
Figure 2. Green Cloud management system based on OTM [11].
Figure 1. Consumption in computer networks as a function of the workload [10].
Freitas [12] made extensions to the CloudSim simulator,
creating the needed classes to support and validate the OTM,
which allowed calculating energy savings and SLA
violations in various scenarios. Neither the model nor the
extensions consider the network equipment energy
consumption.
Garg and Buyya [13] present NetworkCloudSim, an extension to CloudSim that incorporates resources for modeling applications and data center network behaviors. This simulator has the necessary classes to represent network equipment and traffic; however, it does not allow representing or calculating the energy consumption of data center equipment.
Beloglazov [14] presented a new version of the simulator, CloudSim 2.0, which allows representing the energy consumption of data center components, a capability not contemplated by the framework core, and also incorporates applications with dynamic workloads. This version does not support representing network equipment or its energy consumption.
Based on the findings of the works described above, in
the next section, a proposal to include the management of
legacy and current network devices in OTM is presented.
The rules and equations required to include this extension in
CloudSim simulations are also presented and validated
through a case study.
III. PROPOSAL FOR DATA CENTER NETWORK MANAGEMENT IN GREEN CLOUD APPROACH
The proposal considers the network topology of a typical
data center shown in Figure 3, where the switches are
arranged in a hierarchy of three layers: core layer,
aggregation layer and access or edge layer. In this
configuration, there is redundancy in the connections
between layers so that the failure of a device does not affect
the connectivity.
In traditional facilities, the implementation and management of this redundancy is done by the Spanning Tree Protocol and, in more recent configurations, by MC-LAG (Multichassis Link Aggregation Group), which allows redundant links to be used simultaneously, expanding their capacity, as described in [15].
The racks are the basic unit of this configuration, and each rack accommodates a certain number of servers and two access layer switches. The servers have two NICs (Network Interface Cards), each connected to a different access switch.
A. Extensions To The Organization Theory Model
To include the management of legacy network equipment
in the model proposed by Werner et al. [16], such that the
network consumption becomes relatively proportional to the
traffic workload and the energy savings contribute to the
overall efficiency of the system, it is proposed to add the
following elements to its architecture:
1) Management Roles
Add to the "System Operations" components the
“Network Equipment Management" role, which acts as an
interface between the model and the network equipment
being responsible for actions taken on these devices such as:
enabling and disabling ports or equipment or change MC-
LAG protocol settings.
The "Monitoring Management" role, responsible for
collectin g structure information and its understanding,
should be augmented with elements for interaction with the
network management system to provide data, from wh ich
decisions can be made about the port speed configuration, or
turning on or off components and ports. These decisions
will be guided by the rules and beliefs.
2) Planning Rules
These rules are used when decisions must be taken, and
therefore, rules to configure the network equipment in
accordance with the activation, deactivation and utilization
of physical machines should be added.
To implement the settings pointed out in [9], already
presented, the following rules are proposed:
- If a PM (Physical Machine) is switched off, the corresponding ports of the access layer switches must be turned off.
- If the occupation of a PM is smaller than a preset value, its network interfaces and the corresponding access switch ports must be slowed down.
- If the aggregate bandwidth of the downlink ports of an access layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an access layer switch has all its ports off, it must be turned off.
- If an access layer switch is turned off, the corresponding ports of the aggregation layer switch must be turned off.
- If the aggregate bandwidth of the downlink ports of an aggregation layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an aggregation layer switch has all its ports off, it must be turned off.
- If an aggregation layer switch is turned off, the corresponding port of the core layer switch must be turned off.
Figure 3. Typical network topology of a data center [9].
- If a module of a core layer switch has all its ports off, it must be turned off.
- If a core layer switch has all its ports off, it must be turned off.
The reverse of each of these rules must also be included.
The application of these rules does not affect the reliability of the network, since ports and devices are only turned off when servers are turned off. System performance will only be affected if the network equipment activation cost is greater than the server activation cost.
For greater efficiency in traffic consolidation, the model should take the racks into account in virtual machine allocation and migration strategies, and rules that consolidate active physical machines into as few racks as possible are necessary. A sketch of how the shutdown rules cascade through the layers is shown below.
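The cascade defined by these rules lends itself to a compact recursive formulation. The following Java sketch is purely illustrative: the class and member names are ours, not part of the OTM or of NetPowerCloudSim, and it assumes each switch knows the upstream port that leads to it.

```java
import java.util.ArrayList;
import java.util.List;

/** Port of a switch; 'owner' is the switch the port belongs to. */
class Port {
    boolean on = true;
    final Switch owner;
    Port(Switch owner) { this.owner = owner; }
}

/** A switch at any layer; 'upstreamPort' is null at the top layer. */
class Switch {
    final List<Port> ports = new ArrayList<>();
    Port upstreamPort;
    boolean on = true;

    boolean allPortsOff() {
        return ports.stream().noneMatch(p -> p.on);
    }
}

public class RuleCascade {

    /** Rule: a PM was switched off -> turn off its access port(s). */
    static void onServerOff(List<Port> serverPorts) {
        for (Port p : serverPorts) {
            p.on = false;
            propagateUp(p.owner);
        }
    }

    /** If a switch has all ports off, turn it off, turn off the
     *  corresponding upstream port and repeat one layer up. */
    static void propagateUp(Switch sw) {
        if (sw == null || !sw.allPortsOff()) return;
        sw.on = false;
        if (sw.upstreamPort != null) {
            sw.upstreamPort.on = false;
            propagateUp(sw.upstreamPort.owner);
        }
    }
}
```

The reverse rules follow the same pattern, with the boot-time costs taken into account before reactivation.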
3) Beliefs
They are a set of empirical knowledge used to improve
decisions, and are linked to the used resources
characteristics and to the type of services implemented in
each specific case.
For each of the rules listed in the previous paragraph, a
belief related to energy consumption should be stated. If we
consider Christensen et al. [17], examples include:
- Disconnecting a port of an access layer switch generates a saving of 500 mWh.
- Decreasing the speed of a port from 10 Gbps to 1 Gbps generates a saving of 4.5 Wh.
It will also be necessary to include beliefs about the time required for a deactivated port or device to become operational again after boot. These beliefs will be used to make decisions that must consider performance requirements. One possible encoding of such beliefs is sketched below.
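As an illustration, the beliefs can be encoded as simple lookup tables consulted by the planning rules. The structure and names below are our assumptions; the two savings values come from the text above (500 mWh per hour of operation is equivalent to 0.5 W, consistent with the per-port Gigabit value used later in (1)), and the boot times are placeholders to be filled from measurements such as those reported in [9].

```java
import java.util.Map;

/** Illustrative encoding of beliefs as lookup tables. */
public class Beliefs {

    // Expected saving per action, in watts.
    static final Map<String, Double> SAVING_W = Map.of(
            "TURN_OFF_ACCESS_PORT", 0.5,   // 500 mWh per hour
            "SLOW_PORT_10G_TO_1G", 4.5);   // 4.5 Wh per hour

    // Placeholder boot times, in seconds, until a device is
    // operational again; to be calibrated with real measurements.
    static final Map<String, Double> BOOT_TIME_S = Map.of(
            "ACCESS_SWITCH", 180.0,
            "AGGREGATION_SWITCH", 240.0);
}
```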
B. Simulation Model
The typical data center network topology, rules and
beliefs proposed form the basis for building a simulation
model to validate different strategies and rules in specific
settings and with different workloads.
For the simulator implementation, it was considered that
each rack accommodates forty 1U servers and two access
layer switches. Each of these switches has 48 Gigabit
Ethernet ports and two 10 Gigabit Ethernet uplink ports.
Each server has two Gigabit Ethernet NICs (Network Interface Cards), each connected to a different access switch.
It was also considered that, if there is only one rack, aggregation layer switches are not required, and that up to 12 racks can be served by two aggregation layer switches with twenty-four 10 Gigabit Ethernet ports and two 10 Gigabit Ethernet or 40 Gigabit Ethernet uplinks, with no need for core switches.
Finally, it was assumed that, with more than 12 racks, two core switches with a 24-port module for every 144 racks will be required. The module's port speed may be 10 Gigabit Ethernet or 40 Gigabit Ethernet, matching the aggregation switch uplinks.
In the next subsections the central aspects of the
simulation model are presented.
1) Network Topology Definition
The simulator must create the network topology based on
the amount of physical servers using the following rules:
- If the number of servers is smaller than 40, the topology will have only two access layer switches interconnected by their uplink ports. Turn off unused ports.
- If the number of servers is greater than 40 and smaller than 480 (12 racks), place two access layer switches for every 40 servers or fraction thereof and two aggregation layer switches interconnected by their uplink ports. Turn off unused ports on the switches of both layers.
- If the number of servers is greater than 480, apply the previous rule to each group of 480 servers or fraction thereof, add two core layer switches and place on each core switch a 24-port module for every 5,760 servers (144 racks) or fraction thereof. Turn off unused ports.
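The sizing implied by these rules reduces to a few ceiling divisions. The following Java sketch is illustrative (method and constant names are ours, not NetPowerCloudSim's actual code) and reproduces the case study of Section III.C for 200 servers:

```java
/** Illustrative sizing of the three-layer topology from the rules above. */
public class TopologySizing {

    static final int SERVERS_PER_RACK = 40;
    static final int RACKS_PER_AGG_PAIR = 12;      // 480 servers
    static final int RACKS_PER_CORE_MODULE = 144;  // 5,760 servers

    static int ceilDiv(int a, int b) { return (a + b - 1) / b; }

    public static void main(String[] args) {
        int servers = 200; // value used in the case study of Section III.C

        int racks = ceilDiv(servers, SERVERS_PER_RACK);
        int accessSwitches = 2 * racks;             // two per rack

        // An aggregation pair serves up to 12 racks; none for one rack.
        int aggSwitches = (racks > 1) ? 2 * ceilDiv(racks, RACKS_PER_AGG_PAIR) : 0;

        // Core switches appear beyond 12 racks, with one 24-port
        // module per 144 racks (or fraction) on each core switch.
        int coreSwitches = (racks > RACKS_PER_AGG_PAIR) ? 2 : 0;
        int modulesPerCore = (coreSwitches > 0) ? ceilDiv(racks, RACKS_PER_CORE_MODULE) : 0;

        System.out.printf("racks=%d access=%d agg=%d core=%d modules/core=%d%n",
                racks, accessSwitches, aggSwitches, coreSwitches, modulesPerCore);
        // Prints: racks=5 access=10 agg=2 core=0 modules/core=0
    }
}
```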
2) Network Energy Consumption Calculation
The total consumption of the network is given by the sum of the consumption of all its switches and, based on the findings of Mahadevan et al. [9], the equation to calculate the consumption of switches and modules is:
Power (W) = BP + no.P40Giga x 10 + no.P10Giga x 5 + no.PGiga x 0.5 + no.PFast x 0.3    (1)
In this expression, the power in watts is calculated by summing the base power (BP), which is a fixed value specific to each device, and the consumption of every active port at each speed, which is the variable component. The consumption of each type of port is specific to each device, but the values proposed here are averages according to the works already cited.
The simulator must allow setting the consumption of each kind of port and the BP, in order to represent different scenarios and to calibrate the model.
In (1), if the switch is modular, the base power of the
chassis must be added.
At the end of each simulation frame, the simulator must update the network's total consumption by summing each switch's consumption during the frame.
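Equation (1) translates directly into code. The sketch below is illustrative (the field names are ours, and the per-port wattages are the average values proposed above, overridable for calibration); the main method checks it against the access switch of the case study in Section III.C:

```java
/** Illustrative implementation of the switch power equation (1). */
public class SwitchPower {

    double basePowerW;                            // BP: consumption with all ports off
    int ports40G, ports10G, portsGiga, portsFast; // active ports per speed

    // Average per-port consumption in watts, as proposed in the text.
    double w40G = 10.0, w10G = 5.0, wGiga = 0.5, wFast = 0.3;

    double powerWatts() {
        return basePowerW
                + ports40G * w40G
                + ports10G * w10G
                + portsGiga * wGiga
                + portsFast * wFast;
    }

    public static void main(String[] args) {
        // Case-study access switch: BP = 60 W, two 10G uplinks and
        // 48 active Gigabit ports -> 60 + 2x5 + 48x0.5 = 94 W.
        SwitchPower access = new SwitchPower();
        access.basePowerW = 60;
        access.ports10G = 2;
        access.portsGiga = 48;
        System.out.println(access.powerWatts()); // 94.0
    }
}
```

For a modular switch, the base power of the chassis is added on top, as noted above.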
3) Interconnection Calculation
Since the network topology is a hierarchy, it is possible to establish a mathematical relationship between interconnected devices if they are identified by numbers. Thus, it is not necessary to include information about these interconnections in the state vector.
When the simulator needs to determine the switch port
number that corresponds to a specific server network
interface, or to a specific uplink port of a switch, it is
possible to calculate it using a mathematical expression
applied to the server or switch identifier.
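The paper does not spell these expressions out, so the following Java sketch shows one possible mapping under stated assumptions: zero-based identifiers, 40 servers per rack and two access switches per rack, with NIC 0 and NIC 1 of each server going to the rack's first and second access switch. All names and formulas here are illustrative.

```java
/** One possible id-to-port mapping for the hierarchical topology. */
public class Interconnect {

    static final int SERVERS_PER_RACK = 40;

    /** Rack that hosts a given server. */
    static int rackOf(int serverId) {
        return serverId / SERVERS_PER_RACK;
    }

    /** Access switch reached by the server's NIC (nic is 0 or 1). */
    static int accessSwitchOf(int serverId, int nic) {
        return 2 * rackOf(serverId) + nic;
    }

    /** Port used by the server on that access switch. */
    static int accessPortOf(int serverId) {
        return serverId % SERVERS_PER_RACK;
    }

    public static void main(String[] args) {
        // Server 87 sits in rack 2; its NIC 1 reaches switch 5, port 7.
        System.out.println(rackOf(87));            // 2
        System.out.println(accessSwitchOf(87, 1)); // 5
        System.out.println(accessPortOf(87));      // 7
    }
}
```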
4) Network Management
During the simulation, when servers are connected or disconnected, the simulator must apply the network management rules, turning the corresponding ports on or off or configuring their speed.
The sequence of the application of the rules according to
the state changes of servers is represented by the activity
diagram in Figure 4. This diagram considers, besides the
events of turning servers on and off, events based on the
utilization rate of the server.
Figure 4. Activity Diagram of the application of the model rules.
C. Case Study
To validate the model and the potential of the proposal, it
was applied to a hypothetical case of a cloud with 200
physical servers, creating the topology, calculating its initial
consumption without network equipment management and
illustrating two possible situations in the operation of the
system. It was considered for this scenario that the base
power is 60 W for access layer switches and 140 W for
aggregation layer switches.
Applying the rule to calculate the topology, it is determined that the data center comprises 5 racks housing a cluster of 40 servers each; therefore, there will be ten access layer switches, each with forty active Gigabit Ethernet ports and two active 10 Gigabit Ethernet uplink ports, and two aggregation layer switches, each with ten 10 Gigabit Ethernet ports connected to the access layer switches and two 40 Gigabit Ethernet ports for the uplink interconnection between them.
1) Scenario 1: All network equipment with all its ports connected
Access layer switches = 10 x (60 + 2x5 + 48x0.5) = 940 W
Aggregation layer switches = 2 x (140 + 2x10 + 24x5) = 560 W
Total network consumption = 1,500 W
2) Scenario 2: Initial configuration with unused ports off
Access layer switches = 10 x (60 + 2x5 + 40x0.5) = 900 W
Aggregation layer switches = 2 x (140 + 2x10 + 10x5) = 420 W
Total network consumption = 1,320 W
In this scenario, it can be observed that, merely through the proper initial configuration of the network, it is possible to obtain a power saving of approximately 12%.
3) Scenario 3: 90 active servers, workload consolidated in the first three racks and network configuration rules applied
In this situation, according to the rules, there are four access layer switches operating in the initial conditions (2); two access layer switches operating with twelve Gigabit Ethernet ports, ten for servers and two uplink ports with their speed reduced (3); and two aggregation layer switches with four 10 Gigabit Ethernet and two Gigabit Ethernet downlink ports and two 40 Gigabit Ethernet uplinks (4). The network consumption will be:
Access layer switches 1 = 4 x (60 + 2x5 + 40x0.5) = 360 W    (2)
Access layer switches 2 = 2 x (60 + 12x0.5) = 132 W    (3)
Aggregation switches = 2 x (140 + 2x10 + 4x5 + 2x0.5) = 362 W    (4)
Total network consumption = 854 W
In this scenario, there is a power saving of
approximately 43% in network consumption.
IV. NETPOWERCLOUDSIM
To validate the network management model proposed in
the previous section, extensions to the CloudSim were
developed and the extended simulator was called
NetPowerCloudSim.
A. Extensions Development
To represent the network and manage it according to the
rules of the model, the PowerSwitch, NetTopology,
NetworkManager and NetPowerDatacenter classes were
developed as presented in the simplified class diagram in
Figure 5.
Figure 5. NetPowerCloudSim simplified class diagram.
The PowerSwitch class represents network equipment and is extended by other classes that represent the switches of each specific layer (access, aggregation and core). The switches have attributes such as status (on/off), current consumption, number of ports and each port's speed, and methods to turn them on and off, to set the speed of a specific port and to calculate their consumption at a given time, as well as the energy consumed during a simulation frame, which is computed through linear interpolation.
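The frame-energy calculation mentioned above amounts to averaging the power at the frame boundaries and multiplying by the elapsed time. The sketch below illustrates the idea; the names are ours, not the actual PowerSwitch code:

```java
/** Illustrative frame-energy calculation by linear interpolation. */
public class FrameEnergy {

    /**
     * @param powerStartW power at the beginning of the frame, in watts
     * @param powerEndW   power at the end of the frame, in watts
     * @param seconds     frame duration, in seconds
     * @return energy consumed during the frame, in watt-seconds (W*s)
     */
    static double frameEnergyWs(double powerStartW, double powerEndW, double seconds) {
        return (powerStartW + powerEndW) / 2.0 * seconds;
    }

    public static void main(String[] args) {
        // A switch going from 94 W to 90 W over a 300 s frame consumed
        // about 27,600 W*s (roughly 7.67 Wh) during that frame.
        System.out.println(frameEnergyWs(94, 90, 300)); // 27600.0
    }
}
```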
The NetTopology class represents the network topology and is responsible for calculating the quantity of each kind of switch and their interconnections. Before the simulation starts, based on the number of physical machines in the data center, this class calculates the number of racks needed to accommodate them. The quantities of access, aggregation and core switches are then calculated from the number of racks.
The NetworkManager class contains the attributes and the logic required to turn network equipment and ports on and off, and to set port speeds when the state of servers changes, based on the rules and beliefs of the management model. Before the simulation starts, helped by the NetTopology class, it determines which ports are not connected to any equipment and turns them off. Then, it verifies whether the aggregate bandwidth of the switches that had ports turned off is under a predefined threshold, to determine whether their uplink port speed should be reduced.
Finally, the PowerDatacenter class was extended by the NetPowerDatacenter class to integrate the network model into CloudSim, allowing interaction with the events generated by the other entities of the simulator. This class represents a data center with a network comprising physical machines and access, aggregation and core switches, and computes its consumption during a simulation. In each simulation frame, this class calculates the network power consumption, adds it to the data center's total power consumption and informs the network manager of the state changes of servers so it can reconfigure the network equipment according to the rules.
To perform the experiments, a main class, responsible
for creating the scenario, starting the simulation and
retrieving results, was implemented. It allows setting the
characteristics of all the needed objects and the simulation
parameters.
In order to facilitate simulations, a graphical interface was developed that allows setting up the scenario and repeating simulations with different parameters without modifying the source code. This interface is shown in Figure 6.
Figure 6. NetPowerCloudSim graphical user interface.
The version used for developing the extensions was CloudSim 3.0.3, and the implementation was made in the object-oriented programming language Java, using the NetBeans IDE 7.3.1 development environment along with the Swing graphics library. To verify the correctness of each class, unit tests were conducted using the JUnit framework. The integration tests were performed with the aid of a class developed for this purpose and through simple simulations that allowed thoroughly analyzing the logs and comparing the obtained results with those expected. Lastly, to help in understanding the code and the operation of the algorithms, as well as to facilitate future extensions, all the created classes were extensively commented and documentation was generated with the Javadoc tool, provided by Sun Microsystems.
B. Experiments and Results
In order to validate the model and the extensions, three experiments with different scenarios were conducted. The experiments were simulations executed on a microcomputer with the following characteristics: Intel Core i5-3230M 2.6 GHz processor; 8 GB of DDR3 RAM; and 64-bit Windows 8 operating system.
The scenarios created for the experiments had the following common parameters: 1 data center; physical machines with eight 2000 MIPS processing cores, 8 GB of RAM, 1 TB of storage capacity and 1 Gbps of bandwidth; virtual machines with two 1000 MIPS processing cores, 1 GB of RAM and 100 Mbps of bandwidth.
The experiments performed and the results obtained are
described and discussed next.
1) Experiment 1
In this experiment, one hour of operation of a data center
with only 2 physical machines, 4 virtual machines and 4
applications was simulated, in order to verify the correct
operation of the extensions and their interaction with
CloudSim. The simulation was repeated four times and the results are presented in Table I: the first run was performed with the original version of CloudSim (R1); the second, with NetPowerCloudSim without representing the network (R2); the third, with network representation but without managing it (R3); and the last, managing the network (R4). The results and logs of each simulation were compared with each other and with the expected results, and the necessary adjustments were made to ensure the correctness and coherence of the model.
This experiment allowed evaluating all the functionalities of the developed extensions since, despite the simplicity of the scenario, VM migrations, PM shutdowns, and reductions and increases in PM utilization rates occur and, consequently, network equipment speed reconfigurations and port shutdowns are performed.
In Table I, it can be observed that, as expected, the data center server consumption was constant over the four simulation repetitions and that the network consumption was greater when the network was not managed. It can also be observed that the network consumption is very close to the data center server consumption; however, this happens because a rack was set up with two access switches for only two physical machines, which would not occur in a real infrastructure.
2) Experiment 2
In this experiment, six hours of operation of a data center with 500 PMs was simulated, so that the network was composed of the three layers of the topology. In order to have a considerable number of VM migrations and PM shutdowns, 2,000 VMs and 2,000 applications were executed. The simulation was repeated three times and the results obtained are shown in Table II: the first one, without managing the network (R1); the second one, also without managing the network, but turning off unused ports in the initial configuration (R2); and the last one, managing the network (R3).
It can be observed that the network consumption without management was 32.21 kWh; that, by turning off unused ports, there was a saving of 4.40 kWh (13.66%); and that, by managing the network equipment, the saving increased to 6.85 kWh (21.25%). Considering the data center's total energy consumption (401.62 kWh), there was a saving of 1.09% with unused ports off and of 1.70% when managing the network.
TABLE I. EXPERIMENT 1 RESULTS.

Simulation repetition                    R1       R2       R3       R4
Execution time (ms)                      69       67       77       109
Network initial consumption (W*s)        -        -        188      124
Data center servers consumption (kWh)    0.1304   0.1304   0.1304   0.1304
Network consumption (kWh)                -        -        0.1723   0.1076
Total consumption (kWh)                  0.1304   0.1304   0.3027   0.2380
VM migrations                            2        2        2        2
PM shutdowns                             1        1        1        1
TABLE II. EXPERIMENT 2 RESULTS.

Simulation repetition                    R1          R2          R3
Execution time (min)                     02:32.565   02:33.359   02:37.299
Network initial consumption (W*s)        5,444.00    4,700.00    4,700.00
Data center servers consumption (kWh)    369.4057    369.4057    369.4057
Network consumption (kWh)                32.2104     27.8084     25.3643
Total consumption (kWh)                  401.6161    397.2141    394.7700
VM migrations                            1,268       1,268       1,268
PM shutdowns                             250         250         250
The chart in Figure 7 shows the evolution of the network
energy consumption during the data center’s 6-hour
operation, representing the accumulated consumption in
kWh at the end of each simulation frame.
3) Experiment 3
In this experiment, six hours of operation of a data center with 5,760 PMs, 10,000 VMs and 10,000 applications was simulated, with the purpose of testing the simulator on a large-scale data center and verifying the code's efficiency. As in Experiment 2, the simulation was repeated three times and the results are presented in Table III: the first one, without managing the network (R1); the second one, without managing the network, but turning off unused ports in the initial configuration (R2); and the last one, managing the network equipment (R3).
From the results of this experiment, it can be observed that the network energy consumption without management was 211.06 kWh; that, by turning off unused ports in the initial configuration, there was a saving of 6.82 kWh (3.23%); and that, by managing the network equipment, the saving increased to 97.92 kWh (46.39%). Considering the data center's total consumption (2,071.77 kWh), there was a saving of 0.33% with unused ports off and of 4.73% when managing the network. The chart in Figure 8 shows the evolution of the network energy consumption during the data center's operation.
V. CONCLUSION AND FUTURE WORK
In this paper, basic concepts related to Green IT were first presented, namely Green Cloud and Green Networking, demonstrating the need to consider network equipment in strategies designed to make data centers more efficient, since the network represents a significant percentage of total consumption, and this share will become more significant as the other components become more efficient.
Afterwards, in the related work section, a green cloud management model called OTM was presented, as well as network equipment management principles that, when properly applied, make the total consumption of the network behave approximately proportionally to the traffic load, even when legacy energy-agnostic equipment is used. The proposal was to extend the OTM to manage network traffic consolidation according to these management principles.
Then, the elements that must be added to the architecture of the OTM were described, including the rules and beliefs required for the correct network configuration according to the state changes of servers during the load consolidation process.
A model to simulate and validate the extensions to the OTM was also proposed. This model determines the data center network topology based on the number of physical servers, the rules to manage and configure the network devices according to the state changes of servers, and the equations to calculate the consumption of the switches and the total network consumption.
Figure 7. Experiment 2 network consumption evolution.
Figure 8. Experiment 3 network consumption evolution.
TABLE III. EXPERIMENT 3 RESULTS.

Simulation repetition                    R1          R2          R3
Execution time (min)                     24:49.597   25:28.382   27:09.126
Network initial consumption (W*s)        35,672.00   34,520.00   34,520.00
Data center servers consumption (kWh)    1,860.712   1,860.711   1,860.712
Network consumption (kWh)                211.060     204.244     113.141
Total consumption (kWh)                  2,071.772   2,064.956   1,973.853
VM migrations                            12,177      12,177      12,177
PM shutdowns                             4,510       4,510       4,510
The simulation model was validated by applying it in a case study, which allowed verifying that the equations and rules are correct and sufficient to create the topology and to calculate the consumption of the network in each step of the simulation, as well as highlighting the possible effects of the application of the proposal. This model was the basis for creating a simulator and performing simulations.
The simulator was created by extending CloudSim and was called NetPowerCloudSim. New classes were created to represent the network equipment and the network topology, as well as a network manager that applies the rules during the simulation. A graphical interface was also developed to allow creating scenarios and performing simulations without modifying the application source code.
Finally, experiments to validate the extensions and the model were performed, demonstrating that it is possible to obtain significant energy savings in the data center consumption by applying the model. The possibility and desirability of extending the green cloud management model as proposed were thus demonstrated.
Although the actual results in each situation will depend
on the data center configuration, the kind of network
equipment and the workloads, it was demonstrated through
the presented experiments that it is possible to obtain
savings of nearly 50% in the network consumption and 5%
in the data center total consumption in feasible conditions.
It is important to consider that the impact of applying the model is greatest with legacy energy-agnostic equipment and will become smaller as equipment grows more energy-aware through the adoption of Green Networking features; even then, its application will still be worthwhile.
As future research, it is proposed to continue this work by performing experiments to determine the actual contribution of the model in scenarios with real configurations and workloads, as well as to determine the most effective rules and virtual machine allocation policies. It is also proposed to compare these results with those obtained in real systems in order to calibrate the model.
Finally, the implementation of the model is proposed as future work and, since system performance may be affected if the network device activation cost is greater than the server activation cost, it is also suggested to study the proper network configurations and technologies to avoid this situation, with special consideration for protocols that manage link redundancy and aggregation, such as the Spanning Tree Protocol, MC-LAG and other new networking standards for data centers.
REFERENCES
[1] S. R. Villarreal, C. B. Westphall, and C. M. Westphall,
“Optimizing green clouds through legacy network
infrastructure management,” Proc. Thirteenth International
Conference on Networks (ICN – 2014), IARIA XPS Press,
2014, pp. 142-147.
[2] C. B. Westphall and S. R. Villarreal, “Principles and trends in
Green Cloud Computing,” Revista Eletrônica de Sistemas de
Informação, vol. 12, n. 1, pp. 1-19, January 2013, doi:
10.5329/RESI.2013.1201007.
[3] A. Beloglazov, R. Buyya, Y. C. Lee, and A. Zomaya, “A
taxonomy and survey of energy-efficient datacenters and
Cloud Computing,” Advances in Computers, vol. 82, pp. 47-
111, Elsevier, November 2011, doi: 10.1016/B978-0-12-
385512-1.00003-7.
[4] R. Calheiros, R. Ranjan, A. Beloglazov, C. De Rose, and R.
Buyya, “CloudSim: a toolkit for modeling and simulation of
cloud computing environments and evaluation of resources
provisioning algorithms,” SPE Wiley Press, vol. 41, January
2011, pp. 23-50, doi: 10.1002/spe.995.
[5] A. Abdullah, “Green Cloud Computing: the need of the hour,”
International Journal of Research in Advent Technology, vol. 2, n. 1, January 2014, pp. 316-321.
[6] C. B. Westphall, C. M. Westphall, S. R. Villarreal, G. A. Geronimo, and J. Werner, "Green Clouds through Servers, Virtual Machines and Network Infrastructure Management," in Courses / 32nd Brazilian Symposium on Computer Networks and Distributed Systems, Chapter 6, SBC – SBRC 2014, vol. 1, pp. 244-289.
[7] A. Bianzino, C. Chaudet, D. Rossi, and J. Rougier, “A
survey of green networking research,” IEEE Communications
Surveys and Tutorials, vol. 14, pp. 3-20, February 2012, doi:
10.1109/SURV.2011.113010.00106.
[8] S. Jing et al., "State-of-the-art research study for Green Cloud
Computing,” The Journal of Supercomputing, vol. 65, n. 1,
July 2013, pp. 445-468.
[9] P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan,
“A power benchmarking framework for network devices,”
Proc. 8th International IFIP-TC 6 Networking
Conference, Springer Berlin Heidelberg, November 2009, pp.
795-808, doi: 10.1007/978-3-642-01399-7_62.
[10] P. Mahadevan, S. Banerjee, P. Sharma, A. Shah, and P. Ranganathan, "On energy efficiency for enterprise and data center networks," IEEE Communications Magazine, vol. 49, pp. 94-100, August 2011, doi: 10.1109/MCOM.2011.5978421.
[11] J. Werner, "A virtual machines allocation approach in green cloud computing environments," Dissertation, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[12] R. Freitas, "Efficient energy use for cloud computing through simulations," Monograph, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[13] S. Garg and R. Buyya, "NetworkCloudSim: modelling parallel applications in cloud simulations," Fourth IEEE International Conference on Utility and Cloud Computing, IEEE, December 2011, pp. 105-113, doi: 10.1109/UCC.2011.24.
[14] A. Beloglazov, “Energy-efficient management of virtual
machines in data centers for Cloud Computing”. PhD Thesis,
University of Melbourne, Australia, 2013.
[15] C. Sher De Cusatis, A. Carranza, and C. Decusatis, "Communication within clouds: open standards and proprietary protocols for data center networking," IEEE Communications Magazine, vol. 50, pp. 26-33, September 2012, doi: 10.1109/MCOM.2012.6295708.
[16] J. Werner, G. Geronimo, C. B. Westphall, F. Koch, and R.
Freitas, “Simulator improvements to validate the green cloud
computing approach,” LANOMS, October 2011, pp. 1-8, doi:
10.1109/LANOMS.2011.6102263.
[17] K. Christensen, P. Reviriego, B. Nordman, M. Mostowfi, and J. Maestro, "IEEE 802.3az: The road to energy efficient Ethernet," IEEE Communications Magazine, vol. 48, pp. 50-56, November 2010, doi: 10.1109/MCOM.2010.5621967.