Green Technology, Cloud Computing and Data Centers: the Need for Integrated Energy Efficiency Framework and Effective Metric
Nader Nada
Fatih University
Istanbul, Turkey
Abusfian Elgelany
Sudan University
Khartoum, Sudan
Abstract: Energy efficiency (EE), energy consumption cost and environmental impact are pressing challenges for cloud computing and data centers. Reducing energy consumption and carbon dioxide (CO2) emissions in data centers remains an open problem and a driving force for future research on green data centers. Our literature review reveals that there are currently several energy efficiency frameworks for data centers, each combining a green IT architecture with specific activities and procedures intended to reduce environmental impact and CO2 emissions. Because the available frameworks each have their own strengths and limitations, there is an urgent need for an integrated set of criteria for selecting and adopting an energy efficiency framework for data centers. The required criteria should also consider social network applications, which are both a major driver of rising energy consumption and a significant opportunity for better energy efficiency in data centers. In addition, this paper highlights the importance of identifying an efficient and effective energy efficiency metric that can be used to measure data center efficiency and performance, combined with a sound, empirically validated, integrated EE framework.
Keywords: Cloud Computing; Green Cloud; Datacenter; Energy Efficiency
I. INTRODUCTION TO GREEN TECHNOLOGY IN CLOUD COMPUTING
Cloud computing is a promising area of distributed computing, and data centers are its main component. The energy consumption cost and environmental impact of data centers are ongoing challenges for cloud computing. In addition, the growing use of social applications and the expansion of e-business require an increasing number of data centers, while global warming and an unstable climate make the cost of energy a major challenge for the sustainability of e-business [1]. The data center is a cornerstone of the cloud computing infrastructure on which a variety of information technology (IT) services are built; it extends the capabilities of a centralized repository for computing, hosting, storage, management, monitoring, networking and data deployment.
With the rapid increase in the capacity and size of data centers, the demand for energy is continuously increasing [2]. Besides their high energy consumption, data centers also produce carbon dioxide emissions that are aggravated by IT inefficiencies. The annual report of the International Data Corporation (IDC) found that cloud computing revenue reached $42bn in 2012 and grew to $150bn in 2013 [3].
The environmental impact of information technology (IT), under the banner of "Green IT", has been discussed by academia, the media and governments since 2007, when the Environmental Protection Agency (EPA) submitted a report to the US Congress [5] on the expected energy consumption of data centers. Since then, Green IT has received growing attention. Its overall objective is to increase energy efficiency and reduce CO2 emissions [6]; Figure 1 shows the projected market value of green data centers [7]. There are two ways to make a data center greener: first, improve its energy efficiency; second, use a clean energy supply. Cloud computing offers several techniques for addressing the energy efficiency problem by minimizing its impact on the environment. These techniques target energy consumption at different levels, such as virtualization, hardware, operating systems and the data center itself. They also raise new concerns, such as energy performance and execution time; the central challenge is managing the trade-off between energy consumption and performance.
II. LITERATURE REVIEW ON ENERGY EFFICIENCY FRAMEWORKS FOR CLOUD COMPUTING
Our literature review below is based on previous studies that investigated energy efficiency in cloud computing, with a focus on data center technology.
Asghar Sabbaghi et al. [9] reviewed previous research and introduced an energy efficiency framework for information technology that enables green supply chain management. They proposed a conceptual taxonomy of information technology for sustainability and identified the relationship between green supply chain management information flow, IT governance and green infrastructure components.
Fig. 1. Green Data Center Market Value [7]
Zhiming Wang et al. [10] proposed a mechanism that maximizes resource utilization by accounting for both active and idle energy consumption while minimizing finish time. The mechanism reduces power consumption by allowing spare servers to remain in an idle state, while taking the QoS of the cloud data center into account.
Rajkumar Buyya et al. [11] proposed a novel approach along three lines: (a) architectural principles for energy-efficient management of clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider QoS and device power usage characteristics; and (c) a novel software technology for energy-efficient management of clouds.
Anton Beloglazov et al. [12] developed a mechanism that supports dynamic consolidation of virtual machines (VMs) based on adaptive utilization thresholds while taking Service Level Agreements (SLAs) into account.
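To make the idea of threshold-driven consolidation concrete, the following Python sketch shows one possible decision rule. It is our own simplified illustration rather than the algorithm of [12]; the host names, the fixed thresholds and the migration heuristic are hypothetical (an adaptive scheme would derive the thresholds from the observed utilization history).

```python
# Illustrative threshold-based VM consolidation (simplified; not the algorithm of [12]).
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity_mips: float
    vms: dict = field(default_factory=dict)  # vm_id -> CPU demand in MIPS

    @property
    def utilization(self) -> float:
        return sum(self.vms.values()) / self.capacity_mips

def consolidation_actions(hosts, lower=0.3, upper=0.8):
    """Suggest actions from fixed utilization thresholds.

    An adaptive scheme, as in [12], would tune `lower` and `upper` from the
    observed utilization history instead of keeping them constant.
    """
    actions = []
    for host in hosts:
        u = host.utilization
        if u > upper:
            # Overloaded: move the smallest VM away (simple heuristic) to protect the SLA.
            vm_id = min(host.vms, key=host.vms.get)
            actions.append(f"migrate {vm_id} off {host.name} (utilization {u:.0%})")
        elif 0 < u < lower:
            # Underloaded: empty the host so it can be switched to a low-power state.
            actions.append(f"drain {host.name} and put it to sleep (utilization {u:.0%})")
    return actions

hosts = [
    Host("host-1", 10000, {"vm-a": 6000, "vm-b": 3500}),  # 95% utilized
    Host("host-2", 10000, {"vm-c": 1500}),                # 15% utilized
]
for action in consolidation_actions(hosts):
    print(action)
```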
Nguyen Quang Hung et al. [13] proposed a server selection policy and four algorithms for solving the lease scheduling problem. Their approach reduces energy consumption by 7.24% and 7.42% compared with the existing greedy mapping algorithm.
Uddin et al. [14] introduced a framework to improve the performance and energy efficiency of data centers. They developed a classification mechanism that divides data center components into different resource pools according to parameters such as energy consumption, resource utilization and workload. The framework highlights the importance of implementing green metrics such as Power Usage Effectiveness (PUE) to measure data center efficiency in terms of energy utilization and carbon dioxide (CO2) emissions. It relies on virtualization and cloud computing to increase the resource utilization of already installed servers from 10% to more than 50%.
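As a rough sketch of what classifying servers into resource pools might look like in practice, consider the short Python example below. The pool names and utilization bands are hypothetical and chosen only for illustration; the framework in [14] classifies components using several parameters (energy consumption, utilization, workload), not utilization alone.

```python
# Hypothetical grouping of servers into resource pools by measured CPU utilization.
def classify_servers(utilization_by_server, low=0.10, high=0.50):
    """Bucket servers into illustrative pools using two utilization cut-offs."""
    pools = {"underutilized": [], "normal": [], "heavily_loaded": []}
    for server, u in utilization_by_server.items():
        if u < low:
            pools["underutilized"].append(server)   # candidates for consolidation
        elif u <= high:
            pools["normal"].append(server)
        else:
            pools["heavily_loaded"].append(server)  # may need extra capacity
    return pools

measured = {"srv-01": 0.07, "srv-02": 0.42, "srv-03": 0.68, "srv-04": 0.05}
print(classify_servers(measured))
# {'underutilized': ['srv-01', 'srv-04'], 'normal': ['srv-02'], 'heavily_loaded': ['srv-03']}
```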
Meenakshi Sharma et al. [15] followed a two-step approach: first, they analyzed different virtual machine (VM) load balancing algorithms; second, they introduced a new VM load balancing algorithm, implemented in a cloud computing VM environment, that achieves better response time and cost.
S. Kontogiannis et al. [16] developed a mechanism called the Adaptive Workload Balancing algorithm (AWLB) for cloud data center based web systems, which uses agents operating on two levels: the web data center and the web servers. The AWLB algorithm also defines a protocol specification for signaling between the web switch and the data center nodes, and it utilizes other protocols, such as SNMP and ICMP, in its balancing process. Tests of AWLB against the well-known Least Connections (LC) and Least Loaded (LL) balancing algorithms show performance gains. Table 1 summarizes our literature review of cloud computing energy efficiency frameworks and techniques.
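For context on the baselines mentioned above, the sketch below shows a plain Least Connections selection rule in Python. It illustrates only the LC baseline, not AWLB itself; the server names and connection counts are hypothetical.

```python
# Baseline Least Connections (LC): send a new request to the server that currently
# holds the fewest active connections. AWLB [16] instead weighs multidimensional
# load information gathered through its own signaling plus SNMP/ICMP probing.
def least_connections(active_connections: dict) -> str:
    """Return the name of the server with the fewest active connections."""
    return min(active_connections, key=active_connections.get)

active = {"web-1": 37, "web-2": 12, "web-3": 25}
print(least_connections(active))  # -> web-2
```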
III. URGENT NEED FOR AN INTEGRATED ENERGY EFFICIENCY FRAMEWORK FOR CLOUD COMPUTING AND DATA CENTERS
Reducing energy consumption and emissions of carbon dioxide (CO2) in data centers remains an open challenge and a driving force for future research on green data centers. Our literature review reveals that there is an urgent need for an integrated energy efficiency framework for data centers that combines a green IT architecture with specific activities and procedures leading to minimal environmental impact and lower CO2 emissions. The required energy efficiency framework should also consider social network applications, which are both a major driver of rising energy consumption and a significant opportunity for better energy efficiency.
TABLE I. DATA CENTERS ENERGY EFFICIENCY TECHNIQUES

No | Author | Strengths | Limitation
1 | Asghar Sabbaghi | Supply chain management | Focuses on infrastructure only
2 | Zhiming Wang | Takes QoS into account | Jobs lose time in sleep / wake-up / ready transitions
3 | Rajkumar Buyya | Quality of service | No parameter to indicate CO2 emissions
4 | Anton Beloglazov | Meets Service Level Agreements (SLA) | No parameter to show the energy efficiency level
5 | Meenakshi Sharma | Reduces energy, pricing and time | Heavy calculation requires more time to make decisions
6 | Mueen Uddin | Increases the utilization ratio | High utilization leads to higher CO2 emissions
7 | S. Kontogiannis | Balances the workload across multidimensional resources | Increases web traffic
IV. GREEN METRICS TO MEASURE AND ASSESS ENERGY EFFICIENCY OF DATA CENTERS
Globally, the energy consumption of data centers is continuously increasing [17]. Energy operating costs are expected to double every five years between 2005 and 2025 [18]. This increase leads to higher CO2 emissions, which contribute to global warming and harm environmental health.
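To give a sense of scale, the stated doubling rate implies the following cumulative growth over the 2005-2025 period (our own arithmetic, not a figure reported in [18]):

$$2^{(2025-2005)/5} = 2^{4} = 16, \qquad 2^{1/5} \approx 1.15$$

that is, roughly a sixteen-fold increase in energy operating costs over the twenty years, equivalent to about 15% growth per year.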
Measuring the energy consumption of data centers has become a significant concern for all data center stakeholders in meeting end-user agreements [19]. An energy efficiency metric is a tool used to measure energy efficiency in data centers [20]. The most important challenge facing the data center industry is the lack of effective, standard energy efficiency metrics that support improving energy efficiency [21, 22].
For an effective energy efficiency assessment of data centers and their components, we need to evaluate the effectiveness of the metrics used to measure data center energy efficiency. To determine whether these metrics are effective, we need to assess them against their intended goals and under a range of common use cases, evaluating their effectiveness in terms of reporting, targets, education, analysis and decision support [23].
Our literature review of the energy efficiency metrics currently in common use by data centers reveals that none of them meet all of the aforementioned criteria. Therefore, our research not only presents a comparative review of the most commonly used metrics and their features (criteria) but also recommends a better metric for assessing data center energy efficiency.
In the last few years, operators have adopted PUE as the measure of energy efficiency for the mechanical and electrical infrastructure of the data center. This assessment has provided a focused and comparable measure of performance, which has enabled data center operators to make substantial improvements. However, there is still no consensus on IT or software energy efficiency, and most energy efficiency measurements stop at the IT power cord. In this paper we propose the Fixed to Variable Energy Ratio (FVER) metric as a measure of data center energy efficiency in place of PUE. The reason behind our choice of FVER is that it meets all the criteria needed for a better energy efficiency assessment in data centers, listed in Table 2, including the usage of IT and software applications in data centers [24]. Figure 2 depicts the difference between FVER and PUE, and Table 2 presents the goals of the energy efficiency metrics PUE, DCiE, FVER and DCeP, where:
Power Usage Effectiveness: PUE = Total Facility Power / IT Equipment Power (1)
Data Center infrastructure Efficiency: DCiE = 1 / PUE (2)
Fixed to Variable Energy Ratio: FVER = 1 + Fixed Energy / Variable Energy (3)
Data Center Energy Productivity: DCeP = Useful Work Produced / Total Data Center Energy Consumed over time (4)
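To illustrate how these four metrics are computed and how they relate, the short Python sketch below evaluates them on hypothetical numbers (the facility power draws, energy split and work counts are invented for illustration and are not measurements from any real data center):

```python
# Straightforward implementations of equations (1)-(4) on hypothetical inputs.
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power (always >= 1)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center infrastructure Efficiency: the reciprocal of PUE."""
    return it_equipment_kw / total_facility_kw

def fver(fixed_energy, variable_energy):
    """Fixed to Variable Energy Ratio: 1 + fixed energy over variable energy."""
    return 1 + fixed_energy / variable_energy

def dcep(useful_work, total_energy_kwh):
    """Data Center Energy Productivity: useful work per unit of energy consumed."""
    return useful_work / total_energy_kwh

# Hypothetical facility: 1,800 kW total draw, of which 1,200 kW reaches IT equipment.
print(f"PUE  = {pue(1800, 1200):.2f}")   # 1.50
print(f"DCiE = {dcie(1800, 1200):.0%}")  # 67%
# Hypothetical monthly split: 400 MWh of fixed overhead, 800 MWh varying with load.
print(f"FVER = {fver(400, 800):.2f}")    # 1.50
# Hypothetical productivity: 5,000,000 transactions served using 1,200,000 kWh.
print(f"DCeP = {dcep(5_000_000, 1_200_000):.2f} transactions per kWh")  # 4.17
```

A lower PUE (equivalently, a higher DCiE) means less facility overhead per unit of IT power, while FVER and DCeP additionally reflect how much of the consumed energy actually varies with, or produces, useful work.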
V. CONCLUSION AND CONTRIBUTION
The first contribution of this paper is our literature review of current energy efficiency frameworks. The study reveals that there are currently several energy efficiency frameworks for data centers, each combining a green IT architecture with specific activities and procedures that reduce environmental impact and CO2 emissions. Since the available frameworks have their own strengths and limitations (see Table 1), there is an urgent need for an integrated energy efficiency framework for data centers and cloud computing. Such a framework should rest on a common, integrated set of criteria, and its selection and adoption should match the data center's area of application and its surrounding environment.
The second contribution is our literature review of the energy efficiency metrics currently used to assess energy efficiency in data centers (depicted in Table 2 and Figure 2). This part of the study provides a comparative review of the most commonly used metrics and their features (criteria). In addition, we recommended the use of FVER instead of PUE as a better metric for assessing data center energy efficiency, based on the required criteria, including the usage of IT and software
applications in data centers. Our future work will focus on the
development and empirical validation of an integrated energy
efficiency framework for cloud computing and data centers.
Fig. 2. FVER Vs PUE [25]
REFERENCES
[1] Mell, P. and T. Grance. The NIST Definition of Cloud Computing,
2009.
[2] Mueen Uddin, framework for energy efficient data centers using
virtualization, 2012.
[3] IDC - Press Release, 2013.
[4] Ian Foster, Cloud Computing and Grid Computing 360-Degree
Compared, 2008.
[5] James W. Smith, Green Cloud: A literature review of Energy-Aware Computing, 2011.
[6] Asghar Sabbaghi, Green Information Technology and Sustainability,
2012.
[7] Ariel Schwartz, Green Data Center Market to More than Triple Over Next Five Years, 2010.
[8] Eric Woods, Data Center Electricity Consumption 2005-2010: The Good
and Bad News, 2011.
[9] Asghar Sabbaghi, Green information technology and sustainability: A
Conceptual taxonomy, volume 13, Issue 2, pp. 26-32, 2012.
[10] Zhiming Wang, Energy-aware and revenue-enhancing Combinatorial Scheduling in Virtualized of Cloud Datacenter, Volume 7, Number 1, January 2012.
[11] Rajkumar Buyya, Energy-Efficient Management of Data Center
Resources for Cloud Computing: A Vision, Architectural Elements, and
Open Challenges, 2010.
[12] Anton Beloglazov, A Survey on Power Management Solutions for Individual Systems and Cloud, 2010.
[13] Nguyen Quang Hung, Performance constraint and power-aware
allocation for user requests in virtual computing, 2011.
TABLE II. GOALS OF ENERGY EFFICIENCY METRICS [24]

Goals against which the PUE, DCiE, FVER and DCeP metrics are assessed in [24] (the table in [24] marks, for each goal, which of the four metrics satisfies it):
1. Provide a clear, preferably intuitive understanding of the measure.
2. Provide a clear, preferably intuitive direction of improvement.
3. Describe a clearly defined part of the energy-to-useful-work function of the IT services.
4. Be persistent, i.e. the metric should be designed to be stable and extensible as the scope of efficiency measurement increases, rather than confusing the market with rapid replacement.
5. Demonstrate the improvements available in a modern facility design.
6. Demonstrate the improvements available through upgrading existing facilities with more efficient M&E systems.
7. Provide a clear, preferably intuitive understanding of the impacts of changes.
8. Be reversible, i.e. it should be possible to determine the energy use at the electrical input to the data center for any specified device or group of devices within the data center.
9. Be capable of supporting 'what if' analysis for IT and data center operators in determining the energy improvement and ROI for improvements and changes to either the facility or the IT equipment it houses.
As discussed in Section IV, FVER meets all of these goals.
[14] Mueen Uddin, Green Information Technology (IT) framework for
energy efficient data centers using virtualization, 2012.
[15] Meenakshi Sharma, Performance Evaluation of Adaptive Virtual
Machine Load Balancing Algorithm, 2012.
[16] S. Kontogiannis, A probing algorithm with Adaptive workload load balancing capabilities for heterogeneous clusters, Journal of Computing, volume 3, issue 7, July 2011.
[17] Lacity, Mary C and Khan, Shaji A and Willcocks, Leslie P, A review of
the IT outsourcing literature: Insights for practice, The Journal of
Strategic Information Systems, Elsevier, 18, 130-146 (2009).
[18] Laura Sisó, Ramon B. Fornós, Assunta Napolitano & Jaume, Energy- and Heat-aware metrics for computing modules, 2012.
[19] Tung, Teresa, Data Center Energy Forecast, Silicon Valley Leadership
Group, San Jose, CA, (2008).
[20] Wang, Lizhe and Khan, Samee U, Review of performance metrics for
green data centers: a taxonomy study, The Journal of Supercomputing,
Springer, 1-18 (2013).
[21] Belady, Christian L and Malone, Christopher G, Metrics and an infrastructure model to evaluate data center efficiency, Proceedings of the Pacific Rim/ASME International Electronic Packaging Technical Conference and Exhibition (IPACK), ASME, (2007).
[22] Rivoire, Suzanne and Shah, Mehul A and Ranganathan, Parthasarathy and Kozyrakis, Christos, JouleSort: a balanced energy-efficiency benchmark, Proceedings of the 2007 ACM SIGMOD international conference on Management of data, ACM, 365-376 (2007).
[23] Liam Newcombe, Data center energy efficiency metrics existing and
proposed metrics to provide effective understanding and reporting of
data center energy, 2013.
[24] Liam Newcombe, Data center Fixed to Variable Energy Ratio metric DC-FVER, 2012.
[25] Peter Hopton, Move Over PUE, 2012.