Making the Cloud Energy Efficient
An Approach to Make the Data Centers Greener
Mainul Kabir Aion
College of Engineering and
University of Derby
M Nabil Bhuiyan
Department of Mechanical and
Islamic University Of Technology
Abstract— Today's fast-growing IT industry has created better opportunities for everyone, but one of the burning questions of the modern world is the growth of energy demand and prices. Beyond that, the impact on the environment and the depletion of fossil fuels have brought a crisis in energy-related issues. Today's information technology is built on data centers, and thus on cloud computing. Data centers around the world require a great amount of energy every day, which affects both the energy supply and environmental conditions; this is why the continuity of energy supply in the future is in question. This paper presents a clear study of the energy consumption of data centers, shows how it can be minimized, and prepares for the quest of global energy saving and making ICT greener.
Index Terms—Green ICT, Data Center, Cloud Computing,
Energy Efficient Computing, Cloud Energy
To keep every big data center secure and running, most servers and data servers that maintain the cloud must be operated with electrical efficiency in mind. The cost of powering and cooling the hardware adds extra overhead to server operation and energy consumption. To reduce the cost per center, energy-efficient, low-consumption network and storage equipment should be used. Heat generation needs to be reduced first to make energy consumption as efficient as possible and to lower the computer hardware density. Different pathways can be followed to cut total power consumption, and a strategic measurement routine that analyses Key Performance Indicators (KPIs) of changes in energy usage can make the result measurable.
II. ENERGY EFFICIENT PATHWAYS
To build a reliable power infrastructure for cloud hardware, the use of UPS units is an indispensable prerequisite for controlled power consumption. Hardware efficiency, such as throughput per watt for servers and semiconductors, roughly doubles each year, which creates an intrinsic opportunity for energy efficiency. At the same time, IT and other software-driven electrical load must be reduced. Server virtualization to consolidate the server environment, efficient cooling through water-side economizers, and transient subsequent cooling control can together make a huge difference to the total energy consumption of any large data center.
III. ENERGY EFFICIENCY METRICS
To improve power and energy usage, the Power Usage Effectiveness (PUE) metric was introduced in 2007 by several IT professionals working in The Green Grid. Alongside PUE they introduced Data Center infrastructure Efficiency (DCiE). Both metrics are derived from the total power used in the data center and the total IT equipment power:

PUE = Total Power used in the Data Center / IT equipment power
DCiE = IT equipment power / Total Power used in the Data Center

When PUE equals 1, all of the power used in the data center goes to the IT equipment; any value above 1 reflects the energy consumed by everything that supports the equipment load.
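As a quick illustration of the two metrics, the following sketch computes both from the same measurements. The 1000 kW and 500 kW figures are illustrative assumptions, not values from the paper:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE."""
    return it_equipment_kw / total_facility_kw

# Example: a facility drawing 1000 kW in total, 500 kW of it for IT equipment.
print(pue(1000, 500))   # 2.0 -- half the power goes to supporting infrastructure
print(dcie(1000, 500))  # 0.5
```

A PUE of exactly 1.0 would mean every watt entering the facility reaches the IT equipment.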
Energy-efficient servers can also be used in clusters. Proper scheduling of the servers can bring total CPU utilization down toward its optimum. Figure 1 shows a typical cloud cluster structure, indicating the gateways through which the cluster components consume energy. To reduce the total energy cost, the architecture of the cluster of cloud components should also be considered.
Figure 1: A typical Cloud Cluster Structure .
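The scheduling idea can be sketched as a simple consolidation calculation: pack the cluster's load onto the fewest servers that stay below a target utilization, so the rest can be powered down. The capacity figures and the linear load model here are illustrative assumptions, not from the paper:

```python
import math

def servers_needed(total_load: float, per_server_capacity: float,
                   target_utilization: float = 0.8) -> int:
    """Fewest servers that carry the load at or below the target utilization."""
    usable = per_server_capacity * target_utilization
    return math.ceil(total_load / usable)

# Illustrative cluster of 10 servers with 40 load units of capacity each,
# carrying 120 load units in total.
active = servers_needed(120, 40)  # 4 servers suffice at <= 80% utilization
print(active, "active;", 10 - active, "servers can be powered down")
```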
IV. ENERGY EFFICIENCY MODEL FOR COMPUTING PLATFORMS
When a new server deployment is planned, energy conservation should be planned at the same time, using server-by-server migration rather than the existing power migration technologies. Several models can be used to manage server-by-server migration of energy consumption in cloud data centers.
A. Energy efficient system design
A prerequisite of planning a power-saving data center is a right-sized, standardized, virtually maintainable system, so that the individual parts of the system sum to a lower overall consumption:
• Power distribution hardware should be operated below its full-load capacity.
• Continuous re-humidifying and de-humidifying should not be left running against each other simultaneously.
• Cooling pumps can reduce efficiency and should be right-sized.
B. Cluster based energy efficiency model
To reduce cluster-based energy usage, the Cluster-based Energy Conservation (CEC) protocol can be adopted for network redundancy and better accuracy. It ultimately reduces energy use by being independent of location information and directly adapting to network connectivity. Certain rules are also used to identify redundant gateway nodes, which are powered off after the cluster is selected while accessing the cloud. Thus the cluster and the CEC protocol together affect energy conservation. Figure 2 shows the details.
Figure 2: Cluster formation of CEC protocol for Cloud.
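As a highly simplified sketch of the cluster-formation step, the greedy routine below picks cluster heads and powers off nodes whose coverage is redundant. The node layout, range test, and greedy selection are hypothetical simplifications; the actual CEC protocol maintains more state than this:

```python
def form_clusters(nodes, in_range):
    """Greedily pick cluster heads; nodes covered by a head are powered off."""
    heads, asleep = [], set()
    for n in nodes:
        if n in asleep:
            continue
        heads.append(n)  # n becomes a cluster head
        for m in nodes:
            if m != n and m not in asleep and m not in heads and in_range(n, m):
                asleep.add(m)  # redundant node: power it off
    return heads, asleep

# Hypothetical 1-D node positions; nodes within distance 2 hear each other.
pos = {"a": 0, "b": 1, "c": 2, "d": 5, "e": 6}
heads, asleep = form_clusters(list(pos), lambda x, y: abs(pos[x] - pos[y]) <= 2)
print(heads, asleep)  # heads 'a' and 'd' cover the network; b, c, e sleep
```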
Using efficient data center physical infrastructure (DCPI) devices, the magnitude of a data center's electrical consumption can be reduced if a suitable design is adopted for the devices in use. Effective strategies can also be taken for system redundancy. This paper summarizes some of these strategies in the table below:
Table 1: Practical strategies for reducing electrical power consumption for data centers, indicating the range of achievable electrical savings.
• 10–30%: Use of modular power and cooling equipment that is scalable in capacity. For new data centers.
• 10–40%: Also frees up power capacity; to achieve these savings in an existing data center, the installed equipment needs to be changed.
• 7–15%: High-efficiency cooling equipment; shorter air paths require less fan power. For new data centers.
• 4–15%: The floor layout has a strong effect on the efficiency of air distribution; arranging both hot and cold aisles to match the air conditioner locations helps. For new data centers; difficult to retrofit.
• 5–12%: Economizer mode is available on different cooling units but is sometimes kept disabled. For new data centers; difficult to retrofit.
• 1–3%: CRAC return air temperature can be raised to save energy; snap-in blanking panels (such as those from Schneider Electric) are cheap and easy to install. For any data center, old or new.
The strategies in the table are among the most powerful that can be practically implemented in modern, highly capable data centers for better redundancy of data and for energy saving.
C. Optimized UPS solution
The use of a comprehensive, energy-efficient UPS in modern data centers, such as one based on the Chloride Trinergy design, is a revolutionary energy saver in the constantly analysed electrical environment of the data center. The International Electrotechnical Commission (IEC) certifies the use of this UPS under fully compatible installation guidelines for data center usage. Maximum Power Control using double-conversion mode achieves more than 95% efficiency within the energy tolerance level, as shown for a system like the one in Figure 3.
Figure 3: Maximum Power Control Model for Data Center.
When the power is at its maximum rate, adopting Maximum Power Savings (VFD mode) can bypass the line with a stable 99% energy efficiency, as in Figure 4.
Figure 4: Maximum Energy Saving Model for Data Center.
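The difference between the two operating modes is easier to see in energy-cost terms. A back-of-the-envelope sketch, where the 500 kW load and $0.10/kWh tariff are assumptions chosen for illustration:

```python
def annual_loss_cost(load_kw, efficiency, price_per_kwh=0.10, hours=8760):
    """Yearly cost of the energy the UPS itself dissipates at a given efficiency."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours * price_per_kwh

double_conversion = annual_loss_cost(500, 0.95)  # Maximum Power Control mode
eco_bypass = annual_loss_cost(500, 0.99)         # Maximum Power Savings (VFD) mode
print(round(double_conversion), round(eco_bypass))  # roughly 23053 vs 4424 dollars
```

Even a few points of efficiency translate into tens of thousands of dollars per year at this load.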
The Chloride Trinergy monitors the network intuitively before making any decision to bypass the power supply, and it manages efficient power distribution for the cloud data center. This decision-making ability of such a hybrid UPS gives the cloud a 4–7% greater energy-saving capability. It can differentiate the network input conditions from the energy required for the best-case output, which makes it more intelligent to adopt the hybrid UPS and install the system in the data center.
V. IMPLEMENTATION OF THE MODELS IN DATA CENTER
Based on the models discussed above, the paper presents a result and analysis of PUE for a hypothetical virtual data center. Data center managers are often pushed to reduce the PUE value significantly to match power usage on the stability scale. Supposing a virtual data center with a number of servers, the analysis proceeds by the following calculation.

Let the data center draw a total of 1000 kW, of which 50 kW is used to power the IT equipment installed in the system architecture. This yields a PUE of 20.0:

PUE = 1000 / 50 = 20.0
Again, if virtualization of the servers reduces the IT equipment power by 25 kW, and the overall supplied power drops by the same amount, the PUE after virtualization becomes:

PUE (after virtualization) = 975 / 25 = 39.0

This increase in PUE may seem counterintuitive: reducing the IT load without an equivalent reduction in the infrastructure load actually results in a higher PUE.
To illustrate this, let us calculate the annual energy usage and cost both before and after virtualization.

Before: Annual Energy Use = 1000 kW * 8,760 hrs/yr = 8,760,000 kWh
Annual Electric Cost = 8,760,000 kWh * $0.10/kWh = $876,000

After: Annual Energy Use = 975 kW * 8,760 hrs/yr = 8,541,000 kWh
Annual Electric Cost = 8,541,000 kWh * $0.10/kWh = $854,100
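The worked example can be reproduced in a few lines, using the same tariff and hours as the text:

```python
def pue(total_kw, it_kw):
    """Power Usage Effectiveness."""
    return total_kw / it_kw

def annual_cost(total_kw, price_per_kwh=0.10, hours_per_year=8760):
    """Annual electricity cost at a flat tariff."""
    return total_kw * hours_per_year * price_per_kwh

# Before virtualization: 1000 kW total, 50 kW of IT load.
print(pue(1000, 50), round(annual_cost(1000)))  # 20.0  876000
# After virtualization: IT load falls to 25 kW, total to 975 kW.
print(pue(975, 25), round(annual_cost(975)))    # 39.0  854100
```

The PUE rises even though the facility's bill falls, which is exactly why PUE should be tracked together with the absolute load figures.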
In addition to tracking PUE, any changes in the infrastructure or IT load must also be tracked, so that those changes can be correlated with the PUE value, as in Chart 1.
Chart 1: PUE tracking for infrastructure load.
Power supply load versus supply efficiency can also be plotted for a better understanding and analysis of how the operation performs as efficiency increases; the highest efficiency is attained at a load of about half the average. The relationship is illustrated in the power efficiency chart in Chart 2.
Chart 2: Power efficiency chart.
Similar curves can also be found for the computer room air conditioner and other cooling systems in any data center where an energy-efficient, power-saving UPS is adopted. Besides PUE, several other KPIs also need to be measured to calculate the total energy consumption, savings and efficiency considered in this research.
A. Total IT load
The smaller the IT load in serving user request channels, the less energy is used. To reduce the IT load, and hence the PUE, certain initiatives can be taken: consolidating the multi-layered servers, replacing energy-inefficient servers, enabling the power management mode so that power tracks data-access demand, and decommissioning servers that are no longer used. Table 2 shows typical CPU utilization with Demand Based Switching enabled, where significant power savings can be realized in the data center.
Table 2: Annual cost after enabling power management.
Typical CPU Utilization
15% 30% 45%
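To see why demand-based switching pays off at the utilization levels in Table 2, a toy linear power model can be used. The idle and peak draws, the tariff, and the assumption that DBS halves the idle draw are all illustrative, not measurements from the paper:

```python
def server_power_kw(utilization, idle_kw=0.2, peak_kw=0.4):
    """Linear model: idle draw plus a utilization-proportional component."""
    return idle_kw + (peak_kw - idle_kw) * utilization

def annual_cost(utilization, dbs=False, price=0.10, hours=8760):
    """Yearly electricity cost per server; DBS is assumed to halve idle draw."""
    idle = 0.1 if dbs else 0.2
    return server_power_kw(utilization, idle_kw=idle) * hours * price

for u in (0.15, 0.30, 0.45):  # the utilization levels from Table 2
    print(f"{u:.0%}: ${annual_cost(u):.2f} -> ${annual_cost(u, dbs=True):.2f}")
```

Under this model the absolute savings are largest at low utilization, which is where idle servers spend most of their time.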
A multiple, layered methodology should be used to construct the energy-consuming cluster so that data access is properly filtered. This ultimately reduces the energy used by each channel request before it proceeds. The latest energy-saving processing hardware should also be adopted.
Depending on the architecture models of the cloud in use, the energy-efficiency measures described here differ from one another. The possible ways of making the consumption as low as possible have been studied thoroughly. Future levels of implementation and the continuation of this research are therefore addressed in the following section of the paper.
This paper includes several pathways of energy-efficiency measures that can reduce the total energy usage of any large or mid-size data center. However, the paper does not contain exact research conducted on the actual, real-time use of these pathways with an evaluation of the KPIs. Further study is therefore required, and can be conducted as a qualitative assessment of the pathways together with findings from performance benchmarks.
[1] C. L. Belady, "In the Data Center, Power and Cooling Costs More Than the Equipment it Supports", Electronics Cooling, vol. 13, no. 1, February 2007.
[2] C. Belady, A. Rawson, J. Pfleuger and T. Cader, The Green Grid, February 20, 2007. Retrieved from www.thegreengrid.org
[3] M. Blackburn, D. Azevedo, Z. Ortiz, R. Tipley and S. Van Den Berghe, "The Green Grid Data Center Compute Efficiency Metric: DCcE", The Green Grid, 2010.
[4] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica and M. Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing", Tech. rep., UC Berkeley Reliable Adaptive Distributed Systems Laboratory, February 2009.
[5] L. Luo, W. Wu, D. Di, F. Zhang, Y. Yan and Y. Mao, "A Resource Scheduling Algorithm of Cloud Computing based on Energy Efficient Optimization Methods", IEEE International Green Computing Conference (IGCC), San Jose, CA, June 4-8, 2012, pp. 1-6.
[6] G. Kats, A. Rosenfeld and S. McGaraghan, "Energy Efficiency as a Commodity: The Emergence of a Secondary Market for Efficiency Savings in Commercial Buildings", 1997.
[7] B. Brown and M. Kozlowski, "Power System Event Reconstruction for Modern Data Centers", 2006, available at www.criticalpowernow.com
[8] Emerson Network Power, "Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems", 2009.