Optimizing Green Clouds through Legacy Network Infrastructure Management
Sergio Roberto Villarreal, Carlos Becker Westphall, Carla Merkle Westphall
Network and Management Laboratory - Post-Graduate Program in Computer Science
Federal University of Santa Catarina
Florianopolis, SC, Brazil
sergio@inf.ufsc.br, westphal@inf.ufsc.br, carlamw@inf.ufsc.br
Abstract - The concepts proposed by Green IT have changed
the priorities in the design of information systems and
infrastructure, adding to traditional performance and cost
requirements, the need for efficiency in energy consumption.
The approach of Green Cloud Computing builds on the
concepts of Green IT and Cloud in order to provide a flexible
and efficient computing environment, but their strategies have
not given much attention to the energy cost of the network
equipment. While Green Networking has proposed principles
and techniques that are being standardized and implemented
in new networking equipment, there is a large amount of
legacy equipment without these features in datacenters. In this
paper, the basic principles pointed out in recent works for power management in legacy network equipment are presented, and a model for their use to optimize the green cloud approach is proposed.
Keywords - Green IT; Green Networking; Green Cloud
Computing
I. INTRODUCTION
Traditionally, computer systems have been developed with a focus on performance and cost, without much concern for energy efficiency. However, with the advent of mobile devices, energy efficiency has become a priority because of the need to extend battery life.
Recently, the large concentration of equipment in data
centers brought to light the costs of inefficient energy
management in IT infrastructure, both in economic and
environmental terms, which led to the adaptation and
application of technologies and concepts developed for
mobile computing in all IT equipment.
The term Green IT was coined to refer to this concern
about the sustainability of IT and includes efforts to reduce
its environmental impact during manufacturing, use and
final disposal.
Cloud computing has emerged as an alternative to improve the efficiency of business processes: from the point of view of the user, it decreases energy costs through resource sharing and the efficient, flexible sizing of systems. Nevertheless, from the standpoint of the service provider, the current cloud approach needs to be seen from the perspective of Green IT, in order to reduce the energy consumption of the data center without affecting system performance. This approach is known as Green Cloud Computing [1].
Considering only IT equipment, the main cause of inefficiency in the data center is the low average utilization rate of resources, usually less than 50%, caused mainly by the variability of the workload: the infrastructure must be built to handle work peaks that rarely happen, but that would degrade the quality of service if the application were running on a fully occupied server [2].
The strategy used to deal with this situation is workload consolidation, which consists of allocating the entire workload to the minimum possible number of physical resources, keeping them at the highest possible occupancy, and putting the unused physical resources into a state of low energy consumption. The challenge is how to handle unanticipated load peaks and the cost of activating inactive resources. Virtualization, widely used in the cloud approach, and the ability to migrate virtual machines have helped to implement this strategy with greater efficiency.
Strategies to improve efficiency in data centers have focused mainly on servers, cooling systems and power supply systems, while the interconnection network, which represents an important share of consumption, has not received much attention; the algorithms proposed for server load consolidation usually disregard the consolidation of network traffic.
The concepts of Green IT, albeit late, have also reached the design and configuration of network equipment, leading to Green Networking, which has to deal with a central problem: the energy consumption of traditional network equipment is virtually independent of the traffic workload. The main strategies of Green Networking are proportional computing, which adjusts both the equipment's processing speed and the link speeds to the workload, and traffic consolidation, which is implemented by considering traffic patterns and turning off components that are not needed. According to Bianzino et al. [3], networking system design has traditionally followed two principles diametrically opposed to the aims of Green Networking: over-sizing to support demand peaks, and redundancy for the single purpose of taking over when other equipment fails. This makes Green Networking technically challenging, with the primary objective of introducing the concept of energy-aware design into networks without compromising performance or reliability.
While the techniques of Green Networking begin to be
standardized and implemented in the new network
equipment, a large amount of legacy equipment forms the
142Copyright (c) IARIA, 2014. ISBN: 978-1-61208-318-6
ICN 2014 : The Thirteenth International Conference on Networks
infrastructure of current data centers. The works presented in the next section show that it is possible to manage these devices properly so as to make the network consumption roughly proportional to the workload.
Thereby, there is both the need and the possibility to add, to Green Cloud management systems, means of interaction with the data center network management system, in order to synchronize workload consolidation and server shutdown with the needs of network traffic consolidation.
Taking into account that the more efficient the management of virtual machines and physical servers becomes, the greater the network's share of the data center's total consumption, the need to include network equipment in the green cloud model is reinforced.
In this article, the principles suggested in recent papers by several authors for power management in legacy network equipment are presented, and their application to optimize our green cloud approach is proposed.
After this introduction, Section 2 presents the related work on which our proposal is based; the proposal itself is presented in Section 3. Section 4 presents possible results of applying the model and, finally, Section 5 states concluding remarks and proposals for future work.
II. RELATED WORK
Mahadevan et al. [4] present the results of an extensive
research conducted to determine the consumption of a wide
variety of network equipment in different conditions. The
study was performed by measuring the consumption of
equipment in production networks, which made it possible
to characterize the energy expenditure depending on the
configuration and use of the equipment, and determine a
mathematical expression that allows calculating it with an
accuracy of 2%. This expression determines that total
consumption has a fixed component, which is the
consumption with all ports off, and a variable component
which depends on the number of active ports and the speed
of each port.
The study determined that the power consumed by the equipment is relatively independent of the traffic workload and of the size of the packets transmitted, and depends on the number of active ports and their speeds. The energy saved is greater when the port speed is reduced from 1 Gbps to 100 Mbps than when it is reduced from 100 Mbps to 10 Mbps.
This research also presents a table with the average time needed for each equipment category to reach the operational state after boot, and demonstrates that the behavior of current equipment is not proportional, as expected according to the proposals of Green Networking; therefore, the application of traffic consolidation techniques has the potential to produce significant energy savings.
Mahadevan et al. [5], continuing the work presented in the preceding paragraphs, put forward the idea that switch consumption should ideally be proportional to the traffic load; since in legacy devices the reality is quite different, they propose techniques to bring network consumption closer to proportional behavior through the application of configurations available in all devices.
Figure 1 - Consumption in computer networks as a function of the workload [5].
The results are illustrated in Figure 1, which shows the ideal behavior, identified as "Energy Proportional", corresponding to a network with fully "Energy Aware" equipment; the actual curve of most of today's networks, where consumption is virtually independent of load, labeled "Current"; and, finally, the consumption curve obtained by applying the techniques they propose, labeled "Mahadevan's techniques".
The recommended configurations are: slow down ports with low use, turn off unused ports, turn off line cards that have all their ports off, and turn off unused switches. Through field measurements, the authors have shown that it is possible to obtain savings of 35% in the consumption of a data center network by applying these settings. Using simulations, they have also demonstrated that, under ideal conditions, savings of 74% are possible by combining server load consolidation and network traffic consolidation.
Werner [6] proposes a solution for the integrated control of servers and support systems for the green cloud model, based on Organization Theory (Organization Theory Model - OTM). This approach defines a model for the allocation and distribution of virtual machines that was validated through simulations and showed up to 40% energy savings compared to the traditional cloud model.
The proposed model determines when to turn off, resize
or migrate virtual machines, and when to turn on or off
physical machines based on the workload and the Service
Level Agreement (SLA) requirements. The solution also
envisages the shutdown of support systems. Figure 2 shows
the architecture of the management system proposed, which
is based on norms, roles, rules and beliefs.
Figure 2 - Green Cloud management system based on OTM [6].
Freitas [7] made extensions to the CloudSim simulator by Calheiros et al. [8], developed at the University of Melbourne, creating the classes necessary to support the Organization Theory Model, presented in the previous paragraphs, which allowed calculating the energy savings and SLA violations in various scenarios.
In the next section, a proposal to include the management of legacy network devices in the Organization Theory Model is presented, together with the rules and beliefs required for the proper functioning of the model, based on the findings of the works described above. The rules and equations required to include this extension in CloudSim simulations are also presented and validated through a case study.
III. PROPOSAL FOR DATA CENTER NETWORK MANAGEMENT IN GREEN CLOUD APPROACH
The proposal considers the network topology of a typical
datacenter shown in Figure 3, where the switches are
arranged in a hierarchy of three layers: core layer,
aggregation layer and access or edge layer. In this
configuration, there is redundancy in the connections
between layers so that the failure of a device does not affect
the connectivity.
Figure 3 - Typical network topology of a datacenter [5].
Consequently, we consider, in our model, that each rack
accommodates forty 1U servers and two access layer
switches. Each of these switches has 48 Gigabit Ethernet
ports and two 10 Gigabit Ethernet uplink ports, and each
server has two Gigabit Ethernet NICs each one connected to
a different access switch.
We also consider that, if there is only one rack, aggregation layer switches are not required, and that up to 12 racks can be served by 2 aggregation layer switches with twenty-four 10 Gigabit Ethernet ports and two 10 Gigabit Ethernet or 40 Gigabit Ethernet uplinks, with no need for core switches.
Finally, the model assumes that, with more than 12 racks, two core switches with a 24-port module for every 144 racks will be required. The module port speed may be 10 Gigabit Ethernet or 40 Gigabit Ethernet, according to the aggregation switch uplinks.
In traditional facilities, this redundancy is implemented and managed by the Spanning Tree Protocol and, in more recent configurations, by the Multichassis Link Aggregation Protocol (MC-LAG), which allows redundant links to be used simultaneously, expanding their capacity, as described in [9].
A. Extensions To The Organization Theory Model
To include the management of legacy network equipment in the model proposed by Werner et al. [10], such that network consumption becomes approximately proportional to the traffic workload and the energy savings contribute to the overall efficiency of the system, it is proposed to add the following elements to its architecture:
1) Management Roles
Add to the "System Operations" components the "Network Equipment Management" role, which acts as an interface between the model and the network equipment, being responsible for the actions taken on these devices, such as enabling and disabling ports or equipment, or changing MC-LAG protocol settings.
The "Monitoring Management" role, responsible for collecting structure information and interpreting it, should be augmented with elements for interaction with the network management system, providing the data from which decisions can be made about port speed configuration, or about turning components and ports on or off. These decisions will be guided by the rules and beliefs.
2) Planning Rules
These rules are used when decisions must be taken, and
therefore, rules to configure the network equipment in
accordance with the activation, deactivation and utilization
of physical machines should be added.
To implement the settings pointed out in [5], already
presented, the following rules are proposed:
- If a physical machine (PM) is switched off, the corresponding ports of the access layer switches must be turned off.
- If the occupation of a PM is smaller than a preset value, its network interfaces and the corresponding access switch ports must be slowed down.
- If the aggregate bandwidth of the downlink ports of an access layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an access layer switch has all its ports off, it must be turned off.
- If an access layer switch is turned off, the corresponding ports of the aggregation layer switch must be turned off.
- If the aggregate bandwidth of the downlink ports of an aggregation layer switch is smaller than a preset value, its uplink ports must have their speed reduced.
- If an aggregation layer switch has all its ports off, it must be turned off.
- If an aggregation layer switch is turned off, the corresponding port of the core layer switch must be turned off.
- If a module of a core layer switch has all its ports off, it must be turned off.
- If a core layer switch has all its ports off, it must be turned off.
The reverse of each rule must also be included.
The application of these rules does not affect the reliability of the network, since ports and devices are only turned off when servers are turned off. System performance will only be affected if the network equipment activation cost is greater than the server activation cost.
For more efficient traffic consolidation, the model should take the racks into account in virtual machine allocation and migration strategies, and rules that consolidate active physical machines into as few racks as possible are necessary.
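As an illustration, the port-level planning rules can be condensed into a small decision function. This is a minimal sketch: the function names and the utilization threshold are hypothetical, since the model leaves the preset values open.

```python
# Hypothetical sketch of the access-layer planning rules; the threshold
# is an assumption, as the text only speaks of "a preset value".
LOW_UTILIZATION = 0.2

def access_port_action(pm_on, pm_utilization, low_threshold=LOW_UTILIZATION):
    """Decide the state of the access-switch port serving one physical machine (PM)."""
    if not pm_on:
        return "port_off"          # rule: PM switched off -> corresponding port off
    if pm_utilization < low_threshold:
        return "port_slow"         # rule: low PM occupation -> reduce port speed
    return "port_full_speed"

def switch_action(port_actions):
    """Rule: a switch (or line card, or module) with all its ports off is turned off."""
    return "switch_off" if all(a == "port_off" for a in port_actions) else "switch_on"
```

The same pattern cascades upward: the result for the access layer becomes the input for the aggregation and core layers, and the reverse rules reactivate ports when servers are turned back on.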
3) Beliefs
Beliefs are a set of empirical knowledge used to improve decisions, linked to the characteristics of the resources used and to the type of services implemented in each specific case.
For each of the rules listed in the previous paragraph, a belief related to energy consumption should be stated. Considering Christensen et al. [11], examples include:
- Disconnecting a port on an access layer switch generates a saving of 500 mWh.
- Decreasing the speed of a port from 10 Gbps to 1 Gbps generates a saving of 4.5 Wh.
It will also be necessary to include beliefs about the time
required for a deactivated port or device to become
operational after the boot. These beliefs will be used to make
decisions that must consider performance requirements.
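A belief base of this kind can be represented as a simple lookup table. In this sketch the two savings figures are the ones quoted above from [11], read here as average power in watts; the reactivation times are placeholder assumptions, since the text does not fix their values.

```python
# Belief base: believed saving per action (W, interpreting the quoted
# 500 mWh and 4.5 Wh per hour as average power) and hypothetical
# reactivation times (s) used when performance requirements are checked.
BELIEFS = {
    "access_port_off": {"saving_w": 0.5, "reactivation_s": 2.0},
    "port_10g_to_1g":  {"saving_w": 4.5, "reactivation_s": 2.0},
}

def expected_saving(actions):
    """Sum the believed savings of a list of planned actions."""
    return sum(BELIEFS[a]["saving_w"] for a in actions)
```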
B. Simulation Model
The typical datacenter network topology, rules and
beliefs proposed form the basis for building a simulation
model to validate different strategies and rules in specific
settings and with different workloads. As already done in previous works by Werner [6] and Freitas [7], it is possible to extend CloudSim [8] or build on one of its extensions, such as TeachCloud [12].
The simulator must create the network topology and
calculate their initial consumption based on the amount of
physical servers using the following rules:
- If the number of servers is smaller than 40, the topology will have only two access layer switches interconnected by their uplink ports. Turn off unused ports.
- If the number of servers is greater than 40 and smaller than 480 (12 racks), place two access layer switches for every 40 servers or fraction thereof, and two aggregation layer switches interconnected by their uplink ports. Turn off the unused ports of the switches of both layers.
- If the number of servers is greater than 480, apply the previous rule to each group of 480 servers or fraction thereof, add two core layer switches, and place in each switch a 24-port module for every 5,760 servers (144 racks) or fraction thereof. Turn off unused ports.
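The sizing rules above can be sketched as a counting function; the function name and the returned dictionary are illustrative assumptions, not part of the model.

```python
import math

def topology(n_servers):
    """Count the switches required by the sizing rules: two access switches
    per rack of 40 servers, one aggregation pair per group of 12 racks,
    and one 24-port core module per 144 racks when core switches are needed."""
    racks = math.ceil(n_servers / 40)
    if n_servers <= 40:  # a single rack needs no aggregation layer
        return {"access": 2, "aggregation": 0, "core": 0, "core_modules": 0}
    if n_servers <= 480:  # up to 12 racks: one aggregation pair, no core
        return {"access": 2 * racks, "aggregation": 2, "core": 0, "core_modules": 0}
    groups = math.ceil(n_servers / 480)    # one aggregation pair per 480 servers
    modules = math.ceil(n_servers / 5760)  # one 24-port module per 144 racks
    return {"access": 2 * racks, "aggregation": 2 * groups,
            "core": 2, "core_modules": 2 * modules}
```

For the 200-server case of Section IV this yields 10 access and 2 aggregation switches, matching the text.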
The equation to calculate the consumption of the switches and modules is:
Power (W) = BP + no. P 10Giga x 5 + no. P Giga x 0.5 + no. P Fast x 0.3 (1)
In this expression, the power in Watts is calculated by
summing the base power (BP), which is a fixed value
specific to each device, and the consumption of every active
port at each speed, which is the variable component. The
consumption of each type of port is specific to each device,
but the proposed values are the average values according to
the works already cited.
In (1), if the switch is modular, the base power of the
chassis must be added.
During the simulation, when servers are connected or
disconnected, the simulator must apply the network
management rules by turning on or off the corresponding
ports or configuring its speed, and update the calculation of
the total consumption of the network.
In order to analyze the system performance and SLA
violations, the model must know the time needed to put into
operation each type of equipment, and at the moment of the
servers activation, compare the uptime of the server with
the uptime of the network equipment and use the greatest.
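That comparison amounts to taking the maximum of the boot times involved; a minimal sketch with hypothetical names:

```python
def activation_delay(server_boot_s, network_boot_times_s):
    """Effective delay when activating a server: the server boot time or the
    slowest network equipment that must also come up, whichever is greater."""
    return max([server_boot_s, *network_boot_times_s])
```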
IV. CASE STUDY
To validate the model and the potential of the proposal, it
was applied to a hypothetical case of a cloud with 200
physical servers, creating the topology, calculating its initial
consumption without network equipment management and
illustrating two possible situations in the operation of the
system. It was considered for this scenario that the base
power is 60 W for access layer switches and 140 W for
aggregation layer switches.
Applying the topology rule, it is determined that the cloud comprises 5 racks, each housing a cluster of 40 servers; therefore, there will be 10 access layer switches with 40 Gigabit Ethernet ports and two powered 10 Gigabit Ethernet ports each, and two aggregation layer switches with 12 connected ports each: 10 ports for the access layer switches and two ports for the uplink interconnection between them.
A. Scenario 1: All network equipment with all its ports
connected
The consumption of the network will be:
Access layer switches = 10 x (60 + 2x5 + 48x0.5) = 940 W (2)
Aggregation layer switches = 2 x (140 + 24x5) = 520 W (3)
Total network consumption = 1,460 W (4)
B. Scenario 2: Initial configuration with unused ports off
The consumption of the network will be:
Access layer switches = 10 x (60 + 2x5 + 40x0.5) = 900 W (5)
Aggregation layer switches = 2 x (140 + 12x5) = 400 W (6)
Total network consumption = 1,300 W (7)
In this scenario, it can be observed that the proper initial configuration of the network alone makes it possible to obtain a power saving of approximately 11%.
C. Scenario 3: 90 active servers, workload consolidated in
the first three racks and network configuration rules
applied.
In this situation, according to the rules, there are 4 access layer switches working in initial conditions (8); two access layer switches working with twelve 1 Gbps ports each, 10 for servers and 2 uplink ports with reduced speed (9); and 2 aggregation layer switches with four 1 Gbps ports and two 10 Gbps ports each (10). The network consumption will be:
Access layer switches 1 = 4 x (60 + 2x5 + 40x0.5) = 360 W (8)
Access layer switches 2 = 2 x (60 + 12x0.5) = 132 W (9)
Aggregation switches = 2 x (140 + 4x5 + 2x0.5) = 322 W (10)
Total network consumption = 814 W (11)
In this scenario, there is a power saving of
approximately 45% in network consumption.
V. CONCLUSIONS
In this paper, basic concepts related to Green IT were
first presented, i.e., Green Cloud and Green Networking,
demonstrating the need of considering the network
equipment in strategies designed to make data centers more
efficient, since the network represents a significant
percentage of total consumption, and this participation will
be more expressive when the other components become
more efficient.
Afterwards, in the related work section, a green cloud management model called the Organization Theory Model (OTM) was presented, as well as network equipment management principles that, when properly applied, make the total consumption of the network approximately proportional to the traffic load, even when legacy energy-agnostic equipment is used. The proposal was to extend the OTM to manage network traffic consolidation according to these management principles.
Then, the elements that must be added to the architecture
of the OTM were described, including the rules and beliefs
required for the correct network configuration according to
the load consolidation on servers.
A model was also proposed to determine the data center network topology based on the number of physical servers, along with the rules to manage and configure the network devices according to server state changes, and the equations to calculate the switch consumption and the total network consumption. This model is the basis for creating a simulator and performing simulations to test the viability and impact of applying the proposal in different configurations, with different performance requirements and with different rules and beliefs.
The model was validated by its application in a case study, which allowed verifying that the equations and rules are correct and sufficient to create the topology and to calculate the consumption of the network at each step of the simulation, as well as to highlight the possible effects of applying the proposal.
It was also demonstrated that, in the described scenario, it is possible to obtain a power saving of approximately 11% solely through the proper initial configuration of the network, without any performance penalty. In a hypothetical situation of low utilization, as described in Scenario 3, a power saving of approximately 45% through proper workload consolidation is possible. This demonstrates the possibility and desirability of extending the green cloud management model as proposed.
It is important to consider that the impact of applying the model is greatest with legacy energy-agnostic equipment, and will decrease as equipment becomes more energy-aware through the adoption of Green Networking features, as described in [13]; even then, its application will still be worthwhile.
As future research, it is proposed to continue this work
by developing the necessary extensions to CloudSim to
implement the model, and perform experiments to determine
the most effective rules and virtual machine allocation
policies, and the actual contribution of the model in scenarios
with different configurations, real workloads and taking into
account possible violations to the SLA.
To evaluate the applicability of the model, it is also
proposed to determine, through simulation, how many times
a day a port or a device is turned on and off in real scenarios,
and its possible impact in equipment failure rate.
Finally, since system performance may be affected if the network device activation cost is greater than the server activation cost, it is also suggested to study the proper network configuration and technologies to avoid this situation, with special consideration for the protocols that manage link redundancy and aggregation, such as the Spanning Tree Protocol, MC-LAG, and other new networking standards for data centers.
REFERENCES
[1] C. Westphall and S. Villarreal, “Principles and trends in Green Cloud Computing”, Revista Eletrônica de Sistemas de Informação, v. 12, n. 1, pp. 1-19, January 2013, doi: 10.5329/RESI.2013.1201007.
[2] A. Beloglazov, R. Buyya, Y. C. Lee, and A. Zomaya, “A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing”, Advances in Computers, vol. 82, pp. 47-111, Elsevier, November 2011, doi: 10.1016/B978-0-12-385512-1.00003-7.
[3] A. Bianzino, C. Chaudet, D. Rossi, and J. Rougier, “A Survey of Green Networking Research”, IEEE Communications Surveys and Tutorials, vol. 14, pp. 3-20, February 2012, doi: 10.1109/SURV.2011.113010.00106.
[4] P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan, “A Power Benchmarking Framework for Network Devices”, Proc. 8th International IFIP-TC 6 Networking Conference, Springer Berlin Heidelberg, November 2009, pp. 795-808, doi: 10.1007/978-3-642-01399-7_62.
[5] P. Mahadevan, S. Banerjee, P. Sharma, A. Shah, and P. Ranganathan, “On Energy Efficiency for Enterprise and Data Center Networks”, IEEE Communications Magazine, vol. 49, pp. 94-100, August 2011, doi: 10.1109/MCOM.2011.5978421.
[6] J. Werner, A Virtual Machines Allocation Approach in Green Cloud Computing Environments, dissertation, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[7] R. Freitas, Efficient Energy Use for Cloud Computing through Simulations, monograph, Post-Graduate Program in Computer Science, Federal University of Santa Catarina, 2011.
[8] R. Calheiros, R. Ranjan, A. Beloglazov, C. De Rose, and R. Buyya, “CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms”, Software: Practice and Experience, Wiley Press, vol. 41, pp. 23-50, January 2011.
[9] C. Sher DeCusatis, A. Carranza, and C. DeCusatis, “Communication within Clouds: Open Standards and Proprietary Protocols for Data Center Networking”, IEEE Communications Magazine, vol. 50, pp. 26-33, September 2012, doi: 10.1109/MCOM.2012.6295708.
[10] J. Werner, G. Geronimo, C. Westphall, F. Koch, and R. Freitas, “Simulator Improvements to Validate the Green Cloud Computing Approach”, LANOMS, October 2011, pp. 1-8, doi: 10.1109/LANOMS.2011.6102263.
[11] K. Christensen, P. Reviriego, B. Nordman, M. Mostowfi, and J. Maestro, “IEEE 802.3az: The Road to Energy Efficient Ethernet”, IEEE Communications Magazine, vol. 48, pp. 50-56, November 2010, doi: 10.1109/MCOM.2010.5621967.
[12] Y. Jararweh, M. Kharbutli, and M. Alsaleh, “TeachCloud: A Cloud Computing Educational Toolkit”, International Journal of Cloud Computing (IJCC), vol. 2, no. 2/3, February 2013, pp. 237-257, doi: 10.1504/IJCC.2013.055269.
[13] D-Link, Green Technologies, Taipei: D-Link, 2011, available at: http://www.dlinkgreen.com/energyefficiency.asp, accessed on 13 June 2013.
... This paper extends [1], which proposes a data center's network equipment management model to optimize the green cloud approach, presenting an extension to the CloudSim simulator and the experiments performed to validate the aforementioned model. ...
... P Giga x 0.5 + no. P Fast x 0.3 (1) In this expression, the power in Watts is calculated by summing the base power (BP), which is a fixed value specific to each device, and the consumption of every active port at each speed, which is a variable component. The consumption of each type of port is specific to each device, but the proposed values are the average values according to the works already cited. ...
Article
Full-text available
The concepts proposed by Green IT have changed the priorities in the design of information systems and infrastructure, adding to traditional performance and cost requirements, the need for efficiency in energy consumption. The approach of Green Cloud Computing builds on the concepts of Green IT and Cloud in order to provide a flexible and efficient computing environment, but their strategies have not given much attention to the energy cost of the network equipment. While Green Networking has proposed principles and techniques that are being standardized and implemented in new networking equipment, there is a large amount of legacy equipment without these features in data centers. In this paper, the basic principles pointed out in related work for power management in legacy network equipment are presented, and a model for its use to optimize green cloud approach is proposed. It is also presented NetPowerCloudSim, an extension to the open-source framework CloudSim, which was developed to validate the aforementioned model and adds to the simulator the capability of representing and managing network equipment according to the state changes of servers. Experiments performed to validate the model showed that it is possible to significantly increase the data center efficiency through its application. The major contributions of this paper are the proposed network infrastructure management model and the simulator extension.
... Section 2 presents the motivations to propose an integrated management model [Werner et al. 2012], strategies for allocation and provisioning of physical machines and virtual machines [Geronimo et al. 2014], and power management in legacy network equipments [Villarreal et al. 2014]. ...
Chapter
Full-text available
The aim of green cloud computing is to achieve a balance between resource consumption and quality of service. This work introduces the distributed system management model, analyses the system’s behavior, describes the operation principles, and presents case study scenarios and some results. We extended CloudSim to simulate the organization model approach and implemented the migration and reallocation policies using this improved version to validate our management solution. In this context, we proposed strategies to optimize the use of the cloud computing resources, introducing two hybrid strategies, describing the base strategies, validating and analyzing them, and presenting the results. The basic principles pointed out in recent works for power management in legacy network equipment are presented, and a model for its use to optimize green cloud approach is proposed.
Book
Full-text available
The main problems associated with the implementation and use of network and service management arise from the large number of proposals, standards, and different products offered on the market, which considerably complicates the decision on which network and service management approach is most suitable. In addition, new trends in network and service management are being researched, currently including: management of wireless networks, sensor networks, optical networks, the future Internet, the Internet of Things, the space Internet...; the functional areas of security, configuration, performance, accounting...; management of multimedia services, data centers, grid, cloud, fog, edge, virtualization...; and centralized, autonomic, distributed, policy-based, and self-management approaches... These new trends have been researched at the Network and Management Laboratory (LRG) at UFSC, and through this project they can be improved by the following activities: A - Improvements in Autonomic Management for Fog and IoT; B - Improvements in Quality of Service for Real-Time Applications in IoT and Fog; C - Improvements in Security for Fog and IoT; D - Improvements in the Autonomic Intrusion Response System in Cloud and IoT; E - Improvements in Privacy in Identity Management for Dynamic Federations in Cloud and IoT; and F - Improvements in Risk-Based Dynamic Access Control for a Cloud and IoT Federation.
Article
Full-text available
This paper presents some scope, context, proposals and solutions related with the following topics: Decision- Theoretic Planning for Cloud Computing; An Architecture for Risk Analysis in Cloud; Risk-based Dynamic Access Control for a Highly Scalable Cloud Federation; Challenges of Operationalizing PACS on Cloud Over Wireless Networks; Environment, Services and Network Management for Green Clouds; Provisioning and Resource Allocation for Green Clouds; and Optimizing Green Clouds through Legacy Network Infrastructure Management.
Article
Full-text available
Ethernet is the dominant wireline communications technology for LANs with over 1 billion interfaces installed in the U.S. and over 3 billion worldwide. In 2006 the IEEE 802.3 Working Group started an effort to improve the energy efficiency of Ethernet. This effort became IEEE P802.3az Energy Efficient Ethernet (EEE) resulting in IEEE Std 802.3az-2010, which was approved September 30, 2010. EEE uses a Low Power Idle mode to reduce the energy consumption of a link when no packets are being sent. In this article, we describe the development of the EEE standard and how energy savings resulting from the adoption of EEE may exceed $400 million per year in the U.S. alone (and over $1 billion worldwide). We also present results from a simulation-based performance evaluation showing how packet coalescing can be used to improve the energy efficiency of EEE. Our results show that packet coalescing can significantly improve energy efficiency while keeping absolute packet delays to tolerable bounds. We are aware that coalescing may cause packet loss in downstream buffers, especially when using TCP/IP. We explore the effects of coalescing on TCP/IP flows with an ns-2 simulation, note that coalescing is already used to reduce packet processing load on the system CPU, and suggest open questions for future work. This article will help clarify what can be expected when EEE is deployed.
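The energy benefit of packet coalescing described above can be approximated with a back-of-the-envelope model: the link sleeps in Low Power Idle while a burst accumulates, then pays a fixed wake/sleep overhead once per burst. All parameter values below are assumptions for the sketch, not the article's measurements:

```python
# Rough model of EEE packet coalescing (illustrative only). A link that
# wakes once per coalesced burst spends t_wake + burst + t_sleep active
# out of every burst-accumulation cycle; larger bursts amortize the
# fixed wake/sleep overhead. Parameter values are assumptions.
def eee_active_fraction(load, t_wake_us, t_sleep_us, burst_us):
    """Approximate fraction of time the link is out of Low Power Idle.
    `load` is offered utilization (0..1); `burst_us` is the transmission
    time of one coalesced burst."""
    cycle_us = burst_us / load               # time to accumulate one burst
    active_us = t_wake_us + burst_us + t_sleep_us
    return min(1.0, active_us / cycle_us)

# At 10% load, a 10x larger coalescing burst cuts the overhead share:
small = eee_active_fraction(load=0.1, t_wake_us=4.5, t_sleep_us=2.9, burst_us=10)
large = eee_active_fraction(load=0.1, t_wake_us=4.5, t_sleep_us=2.9, burst_us=100)
print(small, large)  # 0.174 vs 0.1074: coalescing approaches the 0.1 floor
```

The trade-off noted in the article appears directly in this model: growing `burst_us` pushes the active fraction toward the offered load, but every microsecond of accumulation is added packet delay.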
Conference Paper
Full-text available
Green Cloud Computing aims at a processing infrastructure that combines flexibility, quality of service, and reduced energy utilization. In order to achieve this objective, the management solution must regulate the internal settings to address the pressing issue of data center over-provisioning related to the need to match peak demand. In this context, we propose an integrated solution for resource management based on organization models of autonomous agent components. This work introduces the system management model, analyzes the system’s behavior, describes the operation principles, and presents a use case scenario. To simulate the organization approach, migration and reallocation policies were implemented as improvements to the code of the CloudSim framework.
Article
Full-text available
Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economical benefits and of its expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, in the devices and in the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface proxying, (iii) Energy-aware infrastructures and (iv) Energy-aware applications. In this work, we do not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for research.
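The Adaptive Link Rate branch mentioned above can be made concrete with a small policy sketch: run each link at the lowest standard Ethernet rate whose capacity covers the offered load with some headroom. The headroom value is an assumption chosen for illustration:

```python
# Illustrative Adaptive Link Rate (ALR) policy: choose the lowest standard
# Ethernet rate whose usable capacity covers the offered load. The 70%
# headroom threshold is an assumption, not a standardized value.
RATES_MBPS = [10, 100, 1000, 10000]

def pick_rate(offered_mbps, headroom=0.7):
    """Lowest rate whose usable capacity (rate * headroom) covers the load;
    falls back to the top rate when even that is exceeded."""
    for rate in RATES_MBPS:
        if offered_mbps <= rate * headroom:
            return rate
    return RATES_MBPS[-1]

print(pick_rate(30))   # 100: 30 Mb/s fits within 70% of a 100 Mb/s link
print(pick_rate(900))  # 10000: exceeds 70% of 1 Gb/s, so step up
```

A deployed policy would also add hysteresis (separate up/down thresholds or a hold timer) so that bursty traffic does not cause constant rate switching, since each transition briefly interrupts the link.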
Conference Paper
Cloud computing is an evolving and fast-spreading computing paradigm that has gained great interest from both industry and academia. Consequently, universities are actively integrating Cloud computing into their IT curricula. One major challenge facing Cloud computing instructors is the lack of a teaching tool to experiment with. To address this, we introduce TeachCloud, a modeling and simulation environment for cloud computing. Students can use TeachCloud to experiment with different cloud components such as: processing elements, data centers, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). TeachCloud is an extension of CloudSim, a research-oriented simulator used for development and validation in cloud computing.
Article
Cloud computing is an evolving and fast-growing computing paradigm that has gained great interest from both industry and academia. Consequently, universities are actively integrating cloud computing into their IT curricula. One major challenge facing cloud computing instructors is the lack of a teaching tool to experiment with. This paper introduces TeachCloud, a modeling and simulation environment for cloud computing. TeachCloud can be used to experiment with different cloud components such as: processing elements, data centers, storage, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). Also, TeachCloud introduces MapReduce processing model in order to handle embarrassingly parallel data processing problems. TeachCloud is an extension of CloudSim, a research-oriented simulator used for the development and validation in cloud computing.
Article
Cloud computing and other highly virtualized data center applications have placed many new and unique requirements on the data center network infrastructure. Conventional network protocols and architectures such as Spanning Tree Protocol and multichassis link aggregation can limit the scale, latency, throughput, and virtual machine mobility for large cloud networks. This has led to a multitude of new networking protocols and architectures. We present a tutorial on some of the key requirements for cloud computing networks and the various approaches that have been proposed to implement them. These include industry standards (e.g., TRILL, SPB, software-defined networking, and OpenFlow), best practices for standards-based data center networking (e.g., the open datacenter interoperable network), as well as vendor proprietary approaches (e.g., FabricPath, VCS, and Qfabric).
Conference Paper
Energy efficiency is becoming increasingly important in the operation of networking infrastructure, especially in enterprise and data center networks. Researchers have proposed several strategies for energy management of networking devices. However, we need a comprehensive characterization of power consumption by a variety of switches and routers to accurately quantify the savings from the various power savings schemes. In this paper, we first describe the hurdles in network power instrumentation and present a power measurement study of a variety of networking gear such as hubs, edge switches, core switches, routers and wireless access points in both stand-alone mode and a production data center. We build and describe a benchmarking suite that will allow users to measure and compare the power consumed for a large set of common configurations at any switch or router of their choice. We also propose a network energy proportionality index, which is an easily measurable metric, to compare power consumption behaviors of multiple devices.
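The energy proportionality index mentioned above is easy to compute from two measurements per device. The exact formula used by the authors may differ; the sketch below follows the common definition, where 100 means fully proportional (zero idle power) and 0 means idle power equals peak power:

```python
# Energy proportionality index in the spirit described above. Assumed
# definition: EPI = (P_peak - P_idle) / P_peak * 100. The sample power
# figures are illustrative, not measurements from the paper.
def epi(p_idle_w, p_peak_w):
    """Percentage of peak power that scales with load (100 = proportional)."""
    return (p_peak_w - p_idle_w) / p_peak_w * 100.0

print(round(epi(150.0, 180.0), 1))  # 16.7: legacy switches draw almost
                                    # as much idle as at full load
print(epi(20.0, 200.0))             # 90.0: a nearly proportional device
```

A low index is exactly why the shutdown-oriented techniques for legacy equipment matter: when idle power is close to peak power, the only large saving available is turning the device off.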
Article
Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end-users under a usage-based payment model. It can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations, such as HP Labs in U.S.A., are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in the hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns.
Article
In recent years, there has been intense focus on increasing the energy efficiency of IT infrastructure. We advocate the need to consider energy consumption holistically over the entire lifetime of these devices. Life cycle energy considerations include a number of factors, of which operational energy consumption is just one. Of all the IT components, networks have received relatively little attention when it comes to energy efficient operation, and even that has been too focused on operational power consumption. In this article, we describe the challenges relating to life cycle energy management of network devices, present a sustainability analysis of these devices, and develop techniques to significantly reduce network operational power. A distinguishing feature of our work is that it is applicable to legacy devices, and as such can be deployed now, as opposed to waiting for new standards and products to be developed.
Article
Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific and business domains. However, the ever increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements it is essential to synthesize and classify the research on power and energy-efficient design conducted to date. In this work we discuss causes and problems of high power / energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization and data center levels. We survey various key works in the area and map them to our taxonomy to guide future design and development efforts. This chapter is concluded with a discussion of advancements identified in energy-efficient computing and our vision on future research directions.