Optimizing Green Clouds through Legacy Network Infrastructure Management
Sergio Roberto Villarreal, Carlos Becker Westphall, Carla Merkle Westphall
Network and Management Laboratory - Post-Graduate Program in Computer Science
Federal University of Santa Catarina
Florianopolis, SC, Brazil
sergio@inf.ufsc.br, westphal@inf.ufsc.br, carlamw@inf.ufsc.br
Abstract - The concepts proposed by Green IT have changed the priorities in the design of information systems and infrastructure, adding the need for energy efficiency to traditional performance and cost requirements. The Green Cloud Computing approach builds on the concepts of Green IT and Cloud Computing in order to provide a flexible and efficient computing environment, but its strategies have not given much attention to the energy cost of network equipment. While Green Networking has proposed principles and techniques that are being standardized and implemented in new networking equipment, data centers still contain a large amount of legacy equipment without these features. In this paper, the basic principles pointed out in recent works for power management in legacy network equipment are presented, and a model for their use to optimize the green cloud approach is proposed.
Keywords - Green IT; Green Networking; Green Cloud
Computing
I. INTRODUCTION
Traditionally, computer systems have been developed with a focus on performance and cost, without much concern for their energy efficiency. However, with the advent of mobile devices, energy efficiency has become a priority because of the need to extend battery autonomy.
Recently, the large concentration of equipment in data centers brought to light the costs of inefficient energy management in IT infrastructure, both in economic and environmental terms, which led to the adaptation and application of technologies and concepts developed for mobile computing to all IT equipment.
The term Green IT was coined to refer to this concern
about the sustainability of IT and includes efforts to reduce
its environmental impact during manufacturing, use and
final disposal.
Cloud computing has emerged as an alternative to improve the efficiency of business processes since, from the user's point of view, it decreases energy costs through resource sharing and efficient, flexible sizing of systems. Nevertheless, from the standpoint of the service provider, the current cloud approach needs to be seen from the perspective of Green IT, in order to reduce the energy consumption of the data center without affecting system performance. This approach is known as Green Cloud Computing [1].
Considering only IT equipment, the main cause of inefficiency in the data center is the low average utilization rate of resources, usually less than 50%, mainly caused by the variability of the workload, which forces the infrastructure to be built to handle load peaks that rarely happen but that would degrade the quality of service if the application were running on a fully occupied server [2].
The strategy used to deal with this situation is workload consolidation, which consists of allocating the entire workload to the minimum possible number of physical resources, keeping them at the highest possible occupancy, and putting the unused physical resources into a state of low energy consumption. The challenge is how to handle unanticipated load peaks and the cost of activating inactive resources. Virtualization, widely used in the cloud approach, and the ability to migrate virtual machines have helped to implement this strategy with greater efficiency.
Strategies to improve efficiency in data centers have focused mainly on servers, cooling systems and power supply systems, while the interconnection network, which represents a significant proportion of consumption, has not received much attention, and the algorithms proposed for server load consolidation usually disregard the consolidation of network traffic.
The concepts of Green IT, albeit belatedly, have also reached the design and configuration of network equipment, leading to Green Networking, which has to deal with a central problem: the energy consumption of traditional network equipment is virtually independent of the traffic workload. The main strategies of Green Networking are proportional computing, which adjusts both the equipment processing speed and the link speeds to the workload, and traffic consolidation, which is implemented by considering traffic patterns and turning off components that are not needed. According to Bianzino et al. [3], network system design has traditionally followed two principles diametrically opposed to the aims of Green Networking: over-sizing to support demand peaks, and redundancy for the single purpose of taking over when other equipment fails. This makes Green Networking technically challenging, with the primary objective of introducing the concept of energy-aware design into networks without compromising performance or reliability.
While Green Networking techniques are beginning to be standardized and implemented in new network equipment, a large amount of legacy equipment forms the
infrastructure of current data centers. The works presented in the next section show that it is possible to manage these devices properly so as to make the network consumption roughly proportional to the workload.
Thus, there is both the need and the possibility to add to green cloud management systems means of interaction with the data center network management system, in order to synchronize workload consolidation and server shutdown with the needs of network traffic consolidation.
Considering that the more efficient the management of virtual machines and physical servers becomes, the greater the network's share in the total consumption of the data center, the need to include network equipment in the green cloud model is reinforced.
In this article, the principles suggested in recent papers by several authors for power management in legacy network equipment are presented, and their application to optimize our green cloud approach is proposed.
After this introduction, Section II presents the related work on which our proposal is based; the proposal itself is presented in Section III. Section IV presents possible results of applying the model through a case study and, finally, Section V states concluding remarks and proposals for future work.
II. RELATED WORK
Mahadevan et al. [4] present the results of an extensive study conducted to determine the consumption of a wide variety of network equipment under different conditions. The study was performed by measuring the consumption of equipment in production networks, which made it possible to characterize energy expenditure as a function of the configuration and use of the equipment, and to derive a mathematical expression that allows it to be calculated with an accuracy of 2%. This expression establishes that total consumption has a fixed component, which is the consumption with all ports off, and a variable component that depends on the number of active ports and the speed of each port.
The research determined that the power consumed by the equipment is relatively independent of the traffic workload and the size of the packets transmitted, and depends on the number of active ports and their speed. The energy saved is greater when a port's speed is reduced from 1 Gbps to 100 Mbps than when it is reduced from 100 Mbps to 10 Mbps.
The study also presents a table with the average time each equipment category needs to reach the operational state after booting, and demonstrates that the behavior of current equipment is not proportional, as advocated by Green Networking, and that the application of traffic consolidation techniques therefore has the potential to produce significant energy savings.
Mahadevan et al. [5], continuing the work presented above, argue that switch consumption should ideally be proportional to the traffic load but that, in legacy devices, the reality is quite different; they therefore propose techniques to bring network consumption closer to proportional behavior by applying configurations available in all devices.
Figure 1 - Consumption in computer networks as a function of the workload [5].
The results are illustrated in Figure 1, which shows the ideal behavior, identified as "Energy Proportional", corresponding to a network with fully energy-aware equipment; the actual curve of most of today's networks, labeled "Current", where consumption is virtually independent of load; and finally the consumption curve obtained by applying the proposed techniques, labeled "Mahadevan's techniques".
The recommended configurations are: slowing down ports with low utilization, turning off unused ports, turning off line cards that have all their ports off, and turning off unused switches. Through field measurements, the authors have shown that applying these settings can save 35% of a data center network's consumption. They have also demonstrated, through simulations, that under ideal conditions savings of 74% are possible by combining server load consolidation and network traffic consolidation.
Werner [6] proposes a solution for the integrated control of servers and support systems in the green cloud model, based on organization theory (Organization Theory Model - OTM). This approach defines a model for the allocation and distribution of virtual machines that was validated through simulations and achieved up to 40% energy savings compared to the traditional cloud model.
The proposed model determines when to turn off, resize or migrate virtual machines, and when to turn physical machines on or off, based on the workload and the Service Level Agreement (SLA) requirements. The solution also envisages the shutdown of support systems. Figure 2 shows the architecture of the proposed management system, which is based on norms, roles, rules and beliefs.
Freitas [7] extended the CloudSim simulator by Calheiros et al. [8], developed at the University of Melbourne, creating the necessary classes to support the Organization Theory Model presented in the previous paragraphs, which made it possible to calculate the energy savings and SLA violations in various scenarios.
Figure 2 - Green Cloud management system based on OTM [6].
The next section presents a proposal to include the management of legacy network devices in the Organization Theory Model, together with the rules and beliefs required for the proper functioning of the model, based on the findings of the works described above. The rules and equations required to include this extension in CloudSim simulations are also presented and validated through a case study.
III. PROPOSAL FOR DATA CENTER NETWORK MANAGEMENT IN GREEN CLOUD APPROACH
The proposal considers the network topology of a typical
datacenter shown in Figure 3, where the switches are
arranged in a hierarchy of three layers: core layer,
aggregation layer and access or edge layer. In this
configuration, there is redundancy in the connections
between layers so that the failure of a device does not affect
the connectivity.
Figure 3 - Typical network topology of a datacenter [5].
Accordingly, we consider in our model that each rack accommodates forty 1U servers and two access layer switches. Each of these switches has 48 Gigabit Ethernet ports and two 10 Gigabit Ethernet uplink ports, and each server has two Gigabit Ethernet NICs, each connected to a different access switch.
We also consider that if there is only one rack, aggregation layer switches are not required, and that up to 12 racks can be served by two aggregation layer switches with twenty-four 10 Gigabit Ethernet ports and two 10 Gigabit Ethernet or 40 Gigabit Ethernet uplinks, with no need for core switches.
Finally, the model assumes that, with more than 12 racks, two core switches will be required, with a 24-port module for every 144 racks. The module's port speed may be 10 Gigabit Ethernet or 40 Gigabit Ethernet, matching the aggregation switches' uplinks.
In traditional facilities, this redundancy is implemented and managed by the Spanning Tree Protocol and, in more recent configurations, by Multi-Chassis Link Aggregation (MC-LAG), which allows redundant links to be used simultaneously, expanding their capacity, as described in [9].
A. Extensions To The Organization Theory Model
To include the management of legacy network equipment in the model proposed by Werner et al. [10], such that network consumption becomes relatively proportional to the traffic workload and the energy savings contribute to the overall efficiency of the system, it is proposed to add the following elements to its architecture:
1) Management Roles
Add to the "System Operations" components the
“Network Equipment Management" role, which acts as an
144Copyright (c) IARIA, 2014. ISBN: 978-1-61208-318-6
ICN 2014 : The Thirteenth International Conference on Networks
interface between the model and the network equipment
being responsible for actions taken on these devices such as:
enabling and disabling ports or equipment or change MC-
LAG protocol settings.
The "Monitoring Management" role, responsible for
collecting structure information and its understanding,
should be augmented with elements for interaction with the
network management system to provide data, from which
decisions can be made about the port speed configuration, or
turning on or off components and ports. These decisions
will be guided by the rules and beliefs.
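To make this role concrete, the following Python sketch outlines a hypothetical interface for the "Network Equipment Management" role; the method names and the underlying transport mechanism (SNMP sets, CLI scripts or vendor APIs) are illustrative assumptions and not part of the OTM specification.

from abc import ABC, abstractmethod

class NetworkEquipmentManagement(ABC):
    """Hypothetical interface for the proposed role: the OTM calls these
    methods; a concrete implementation would translate them into SNMP sets,
    CLI scripts or vendor-specific APIs."""

    @abstractmethod
    def set_port_state(self, switch_id: str, port_id: int, enabled: bool) -> None:
        """Enable or disable a single switch port."""

    @abstractmethod
    def set_port_speed(self, switch_id: str, port_id: int, speed_mbps: int) -> None:
        """Reduce or restore the negotiated speed of a port."""

    @abstractmethod
    def set_device_power(self, switch_id: str, on: bool) -> None:
        """Power a switch, line card or module on or off."""

    @abstractmethod
    def set_mclag_settings(self, switch_id: str, settings: dict) -> None:
        """Change MC-LAG protocol settings when links are added or removed."""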
2) Planning Rules
These rules are used when decisions must be taken; therefore, rules to configure the network equipment in accordance with the activation, deactivation and utilization of physical machines should be added.
To implement the settings pointed out in [5], already presented, the following rules are proposed (a sketch of their cascading application is given after the list):
- If a physical machine (PM) is switched off, the corresponding ports of the access layer switches must be turned off.
- If the occupation of a PM is smaller than a preset value, its network interfaces and the corresponding access switch ports must be slowed down.
- If the aggregate bandwidth of the downlink ports of an access layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an access layer switch has all its ports off, it must be turned off.
- If an access layer switch is turned off, the corresponding ports of the aggregation layer switches must be turned off.
- If the aggregate bandwidth of the downlink ports of an aggregation layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an aggregation layer switch has all its ports off, it must be turned off.
- If an aggregation layer switch is turned off, the corresponding port of the core layer switch must be turned off.
- If a module of a core layer switch has all its ports off, it must be turned off.
- If a core layer switch has all its ports off, it must be turned off.
The reverse of each rule must also be included.
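A minimal Python sketch of how these cascading rules could be applied is shown below; the Port and Switch classes are illustrative stand-ins for the data model the OTM would actually hold, and only the shutdown direction is shown (the reverse rules would mirror it).

class Port:
    def __init__(self, switch):
        self.switch = switch   # the switch this port belongs to
        self.on = True

class Switch:
    def __init__(self, name):
        self.name = name
        self.on = True
        self.ports = []        # downlink ports (toward servers or lower layer)
        self.uplinks = []      # the corresponding ports on upper-layer switches

def turn_off_port(port):
    """Turn a port off; if its switch is left with no active downlink ports,
    turn the switch off and cascade to the corresponding upper-layer ports."""
    port.on = False
    sw = port.switch
    if sw.on and all(not p.on for p in sw.ports):
        sw.on = False
        for up in sw.uplinks:
            turn_off_port(up)

# Example: one access switch with two server-facing ports, connected to one
# aggregation layer port. Switching both servers off propagates upward.
agg = Switch("aggregation-1")
agg_port = Port(agg); agg.ports.append(agg_port)
acc = Switch("access-1")
acc.ports = [Port(acc), Port(acc)]
acc.uplinks = [agg_port]
for p in acc.ports:
    turn_off_port(p)
print(acc.on, agg_port.on, agg.on)   # -> False False False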
The application of these rules does not affect the reliability of the network, since ports and devices are only turned off when servers are turned off. System performance will only be affected if the network equipment activation cost is greater than the server activation cost.
For greater efficiency in traffic consolidation, the model should take racks into account in its virtual machine allocation and migration strategies, and rules that consolidate active physical machines into as few racks as possible are necessary.
3) Beliefs
Beliefs are a body of empirical knowledge used to improve decisions, and are linked to the characteristics of the resources used and to the type of services implemented in each specific case.
For each of the rules listed in the previous paragraph, a belief related to energy consumption should be stated. Considering Christensen et al. [11], examples include:
- Disconnecting a port on an access layer switch saves 500 mWh.
- Decreasing the speed of a port from 10 Gbps to 1 Gbps saves 4.5 Wh.
It will also be necessary to include beliefs about the time required for a deactivated port or device to become operational after being switched back on. These beliefs will be used to make decisions that must take performance requirements into account.
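As an illustration only, such beliefs could be stored as a simple lookup table consulted by the planning rules; the two energy values below are the examples cited from Christensen et al. [11], while the restore times are placeholders to be filled in with measurements such as those reported in [4].

# Illustrative beliefs table (savings expressed per hour of operation);
# None marks knowledge still to be obtained for the specific equipment.
BELIEFS = {
    "access_port_off":   {"saving_wh_per_hour": 0.5, "restore_time_s": None},
    "port_10g_to_1g":    {"saving_wh_per_hour": 4.5, "restore_time_s": None},
    "access_switch_off": {"saving_wh_per_hour": None, "restore_time_s": None},
}

def expected_hourly_saving(planned_actions):
    """Sum the expected hourly savings of a plan, ignoring unknown beliefs."""
    return sum(BELIEFS[a]["saving_wh_per_hour"] or 0.0 for a in planned_actions)

print(expected_hourly_saving(["access_port_off", "port_10g_to_1g"]))  # -> 5.0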
B. Simulation Model
The typical data center network topology, rules and beliefs proposed above form the basis for building a simulation model to validate different strategies and rules in specific settings and with different workloads. As already done in previous work by Werner [6] and Freitas [7], it is possible to extend CloudSim [8] or to build on one of its extensions, such as TeachCloud [12].
The simulator must create the network topology and calculate its initial consumption based on the number of physical servers, using the following rules (a sizing sketch is given after the list):
- If the number of servers is smaller than 40, the topology will have only two access layer switches interconnected by their uplink ports. Turn off unused ports.
- If the number of servers is greater than 40 and smaller than 480 (12 racks), put two access layer switches for every 40 servers or fraction thereof, and two aggregation layer switches interconnected by their uplink ports. Turn off unused ports on both layers of switches.
- If the number of servers is greater than 480, apply the previous rule to each group of 480 servers or fraction thereof, add two core layer switches, and put on each core switch a 24-port module for every 5,760 servers (144 racks) or fraction thereof. Turn off unused ports.
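A Python sketch of these sizing rules follows; the function name, the rack size of 40 servers and the treatment of the boundary cases (exactly 40 or 480 servers) are assumptions made for illustration.

from math import ceil

def size_topology(num_servers):
    """Return the number of switches per layer for a given server count,
    following the three sizing rules above (sketch; boundaries assumed)."""
    racks = ceil(num_servers / 40)           # 40 servers per rack
    access = 2 * max(racks, 1)               # two access switches per rack
    if num_servers <= 40:
        return {"access": access, "aggregation": 0, "core": 0, "core_modules": 0}
    if num_servers <= 480:                   # up to 12 racks
        return {"access": access, "aggregation": 2, "core": 0, "core_modules": 0}
    groups = ceil(num_servers / 480)         # one aggregation pair per group
    modules = ceil(racks / 144)              # one 24-port module per 144 racks
    return {"access": access, "aggregation": 2 * groups,
            "core": 2, "core_modules": modules}

print(size_topology(200))   # case study: 5 racks -> 10 access, 2 aggregation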
The equation to calculate the consumption of the switches and modules is:
Power (W) = BP + (no. of 10 Gigabit ports x 5) + (no. of Gigabit ports x 0.5) + (no. of Fast Ethernet ports x 0.3) (1)
In this expression, the power in watts is calculated by summing the base power (BP), a fixed value specific to each device, and the consumption of every active port at each speed, which is the variable component. The consumption of each type of port is device-specific, but the values proposed here are averages taken from the works already cited.
In (1), if the switch is modular, the base power of the
chassis must be added.
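Equation (1) can be sketched as a small Python helper; the per-port values are the averages proposed above, and the optional chassis term covers the modular case.

PORT_POWER_W = {"10G": 5.0, "1G": 0.5, "100M": 0.3}   # average per-port power

def switch_power(base_power_w, ports_10g=0, ports_1g=0, ports_100m=0,
                 chassis_power_w=0.0):
    """Power draw of a switch or module according to Eq. (1); for modular
    switches, the chassis base power is added on top of the module's BP."""
    return (chassis_power_w + base_power_w
            + ports_10g * PORT_POWER_W["10G"]
            + ports_1g * PORT_POWER_W["1G"]
            + ports_100m * PORT_POWER_W["100M"])

# Example: an access switch with BP = 60 W, two active 10 Gigabit uplinks and
# forty active Gigabit ports draws 60 + 2x5 + 40x0.5 = 90 W.
print(switch_power(60, ports_10g=2, ports_1g=40))   # -> 90.0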
During the simulation, when servers are switched on or off, the simulator must apply the network management rules by turning the corresponding ports on or off or configuring their speed, and update the calculation of the total network consumption.
In order to analyze system performance and SLA violations, the model must know the time needed to bring each type of equipment into operation and, at the moment of a server's activation, compare the server's startup time with that of the network equipment and use the greater of the two.
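For the SLA analysis, this effective activation time can be computed as sketched below; the boot-time figures used are placeholders, not measured values.

def effective_activation_time(server_boot_s, network_boot_times_s):
    """Time until a newly activated server is usable: the greatest of its own
    boot time and the boot times of the network equipment it depends on."""
    return max([server_boot_s, *network_boot_times_s])

# Placeholder figures: a server booting in 120 s behind an access switch that
# needs 180 s to become operational is effectively available after 180 s.
print(effective_activation_time(120, [180, 5]))   # -> 180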
IV. CASE STUDY
To validate the model and the potential of the proposal, it was applied to a hypothetical case of a cloud with 200 physical servers, creating the topology, calculating its initial consumption without network equipment management, and illustrating two possible situations in the operation of the system. For this scenario, a base power of 60 W for access layer switches and 140 W for aggregation layer switches was considered.
Applying the topology rules, it is determined that the cloud comprises 5 racks housing 40 servers each; therefore, there will be 10 access layer switches, each with 40 connected Gigabit Ethernet ports and two connected 10 Gigabit Ethernet uplink ports, and two aggregation layer switches with 12 connected ports each: 10 ports for the access layer switches and two ports for the uplink interconnection between them.
A. Scenario 1: All network equipment with all its ports
connected
The consumption of the network will be:
Access layer switches = 10 x (60 + 2x5 + 48x0.5) = 940 W (2)
Aggregation layer switches = 2 x (140 + 24x5) = 520 W (3)
Total network consumption = 1,460 W (4)
B. Scenario 2: Initial configuration with unused ports off
The consumption of the network will be:
Access layer switches = 10 x (60 + 2x5 + 40x0.5) = 900 W (5)
Aggregation layer switches = 2 x (140 + 12x5) = 400 W (6)
Total network consumption = 1,300 W (7)
In this scenario, it can be seen that proper initial configuration of the network alone yields a power saving of approximately 11%.
C. Scenario 3: 90 active servers, workload consolidated in
the first three racks and network configuration rules
applied.
In this situation, according to the rules, there are four access layer switches working in the initial conditions (8), two access layer switches working with twelve 1 Gbps ports, 10 for servers and 2 uplink ports with their speed reduced (9), and two aggregation layer switches, each with four 10 Gbps ports and two 1 Gbps ports (10), so the network consumption will be:
Access layer switches 1 = 4 x (60 + 2x5 + 40x0.5) = 360 W (8)
Access layer switches 2 = 2 x (60 + 12x0.5) = 132 W (9)
Aggregation switches = 2 x (140 + 4x5 + 2x0.5) = 322 W (10)
Total network consumption = 814 W (11)
In this scenario, there is a power saving of
approximately 45% in network consumption.
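The three scenarios can be checked numerically with Eq. (1); the short Python script below reproduces the figures above (Fast Ethernet ports do not occur in this case study and are omitted).

def switch_power(bp, p10g=0, p1g=0):
    # Eq. (1) restricted to the port speeds used in this case study
    return bp + 5.0 * p10g + 0.5 * p1g

scenario1 = 10 * switch_power(60, p10g=2, p1g=48) + 2 * switch_power(140, p10g=24)
scenario2 = 10 * switch_power(60, p10g=2, p1g=40) + 2 * switch_power(140, p10g=12)
scenario3 = (4 * switch_power(60, p10g=2, p1g=40)      # full racks 1 and 2
             + 2 * switch_power(60, p1g=12)             # rack 3, slowed down
             + 2 * switch_power(140, p10g=4, p1g=2))    # aggregation layer

print(scenario1, scenario2, scenario3)          # -> 1460.0 1300.0 814.0
print(1 - scenario2 / scenario1)                # -> ~0.11 (11% saving)
print(1 - scenario3 / scenario1)                # -> ~0.44 (roughly 45% saving)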
V. CONCLUSIONS
In this paper, basic concepts related to Green IT, namely Green Cloud and Green Networking, were first presented, demonstrating the need to consider network equipment in strategies designed to make data centers more efficient, since the network represents a significant percentage of total consumption and this share will become even more significant as the other components become more efficient.
Afterwards, in the related work section, a green cloud management model called the Organization Theory Model (OTM) was presented, as well as network equipment management principles that, when properly applied, make the total consumption of the network approximately proportional to the traffic load, even when legacy energy-agnostic equipment is used. The proposal was to extend the OTM to manage network traffic consolidation according to these management principles.
Then, the elements that must be added to the architecture
of the OTM were described, including the rules and beliefs
required for the correct network configuration according to
the load consolidation on servers.
A model was also proposed to determine the data center network topology based on the number of physical servers, together with the rules to manage and configure the network devices according to the servers' state changes and the equations to calculate switch consumption and total network consumption.
This model is the basis for creating a simulator and performing simulations to test the viability and the impact of applying the proposal in different configurations, with different performance requirements and with different rules and beliefs.
The model was validated by applying it to a case study, which made it possible to verify that the equations and rules are correct and sufficient to create the topology and calculate the network consumption at each step of the simulation, as well as to highlight the possible effects of applying the proposal.
It was also demonstrated that, in the described scenario, a power saving of approximately 11% can be obtained simply by proper initial configuration of the network, without any performance penalty, and that in a hypothetical low-utilization situation such as Scenario 3, a power saving of approximately 45% is possible through proper workload consolidation. This demonstrates both the possibility and the desirability of extending the green cloud management model as proposed.
It is important to note that the impact of applying the model is greatest with legacy energy-agnostic equipment and will decrease as equipment becomes more energy-aware through the adoption of Green Networking features as described in [13], but its application will still be worthwhile.
As future research, it is proposed to continue this work by developing the necessary extensions to CloudSim to implement the model, and to perform experiments to determine the most effective rules and virtual machine allocation policies, as well as the actual contribution of the model in scenarios with different configurations and real workloads, taking possible SLA violations into account.
To evaluate the applicability of the model, it is also proposed to determine, through simulation, how many times a day a port or a device is turned on and off in real scenarios, and the possible impact of this on the equipment failure rate.
Finally, since system performance may be affected if the network device activation cost is greater than the server activation cost, it is also suggested to study suitable network configurations and technologies to avoid this situation, with special consideration of protocols that manage link redundancy and aggregation, such as the Spanning Tree Protocol, MC-LAG, and other new networking standards for data centers.
REFERENCES
[1] C. Westphall and S. Villarreal, "Principles and trends in Green Cloud Computing", Revista Eletrônica de Sistemas de Informação, vol. 12, no. 1, pp. 1-19, January 2013, doi: 10.5329/RESI.2013.1201007.
[2] A. Beloglazov, R. Buyya, Y.C. Lee, and A. Zomaya, “A
taxonomy and Survey of Energy-efficient Datacenters and
Cloud Computing”. Advances in Computers, vol 82, pp. 47-
111, Elsevier, November 2011, doi: 10.1016/B978-0-12-
385512-1.00003-7.
[3] A. Bianzino, C. Chaudet, D. Rossi, and J. Rougier, "A survey of Green Networking research", IEEE Communications Surveys and Tutorials, vol. 14, pp. 3-20, February 2012, doi: 10.1109/SURV.2011.113010.00106.
[4] P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan, "A Power Benchmarking Framework for Network Devices", Proc. 8th International IFIP-TC 6 Networking Conference, Springer Berlin Heidelberg, November 2009, pp. 795-808, doi: 10.1007/978-3-642-01399-7_62.
[5] P. Mahadevan, S. Banerjee, P. Sharma, A. Shah, and P. Ranganathan, "On energy efficiency for enterprise and data center networks", IEEE Communications Magazine, vol. 49, pp. 94-100, August 2011, doi: 10.1109/MCOM.2011.5978421.
[6] J. Werner, "A virtual machines allocation approach in green cloud computing environments", dissertation, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[7] R. Freitas, "Efficient energy use for cloud computing through simulations", monograph, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[8] R. Calheiros, R. Ranjan, A. Beloglazov, C. De Rose, and R. Buyya, "CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms", Software: Practice and Experience (SPE), Wiley Press, vol. 41, pp. 23-50, January 2011.
[9] C. Sher De Cusatis, A. Carranza, and C. DeCusatis, "Communication within clouds: open standards and proprietary protocols for data center networking", IEEE Communications Magazine, vol. 50, pp. 26-33, September 2012, doi: 10.1109/MCOM.2012.6295708.
[10] J. Werner, G. Geronimo, C. Westphall, F. Koch, and R. Freitas, "Simulator improvements to validate the green cloud computing approach", LANOMS, October 2011, pp. 1-8, doi: 10.1109/LANOMS.2011.6102263.
[11] K. Christensen, P. Reviriego, B. Nordman, M. Mostowfi, and J. Maestro, "IEEE 802.3az: The road to Energy Efficient Ethernet", IEEE Communications Magazine, vol. 48, pp. 50-56, November 2010, doi: 10.1109/MCOM.2010.5621967.
[12] Y. Jararweh, M. Kharbutli, and M. Alsaleh, “TeachCloud: A
Cloud Computing Educational Toolkit”. International Journal
of Cloud Computing (IJCC), Vol. 2, No. 2/3, February 2013,
pp. 237-257, doi:10.1504/IJCC.2013.055269.
[13] D-LINK. Green Technologies. Taipei: D-LINK, 2011,
available at: http://www.dlinkgreen.com/energyefficiency.asp.
Accessed on 13 June 2013.