UNIVERSITY OF SCIENCE AND TECHNOLOGY OF HANOI
HDL ENGINEERING TRADING JOINT STOCK COMPANY
BACHELOR THESIS
Data Center in Space domain application: basic design concepts of infrastructure
By
NGUYEN HUU THANH
Internship period: 10 May to 10 August 2020
Department of Space and Applications
University of Science and Technology of Hanoi
Supervisors: Dr. NGUYEN XUAN TRUONG, PLMCC, USTH, nguyen-xuan.truong@usth.edu.vn
Mr. TRINH MINH TOAN, Solution Department, HDL JSC, toantm@hdl.com.vn
Hanoi, August 2020
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ............................................................................. II
LIST OF ABBREVIATIONS ......................................................................... III
LIST OF TABLES .......................................................................................... IV
ABSTRACT .................................................................................................... VI
I. INTRODUCTION ................................................................................. 7
1.1 Context and Motivation ........................................................................................... 7
1.2 Objectives of the thesis ............................................................................................ 8
1.3 Structure of the thesis .............................................................................................. 9
II. BACKGROUND AND RELATED WORKS ................................... 10
2.1 WHAT IS A DATA CENTER? ....................................................................... 10
2.1.1 Data Center basic concepts ...................................................................... 10
2.1.2 Data Center in daily application ................................................................. 12
2.2 DATA CENTER CLASSIFICATION AND INFRASTRUCTURE COMPONENTS ........ 13
2.2.1 Data Center classification ........................................................................... 13
2.2.2 Data Center Infrastructure: main components ............................................ 15
2.3 DATA CENTER APPLICATION IN SPACE DOMAIN .................................... 22
2.4 OVERVIEW OF DATA CENTER IN VIETNAM .............................................. 25
III. BASIC CONCEPT DESIGN OF A DATA CENTER IN USTH ... 27
3.1 TECHNICAL STANDARDS REFERENCES FOR DESIGN CONCEPT .......... 27
3.1.1 Technical standards for design of physical facilities in Data Center ......... 27
3.1.2 Facts of the IT infrastructure in the USTH ................................................. 31
3.2 SIMPLE CONCEPT OF DATA CENTER DESIGN IN UNIVERSITY ............. 33
3.2.1 Site installation and main components consideration ................................. 33
3.2.2 Proposal design concept of the Data Center in USTH ............................... 34
IV. CONCLUSIONS ................................................................................. 39
REFERENCES .............................................................................................. 40
ACKNOWLEDGEMENTS
I would like to express my deepest appreciation to those who gave me the
possibility to complete this thesis during my time at the PLMCC (University of
Science and Technology of Hanoi - USTH) and HDL JSC. Foremost, I would like to
send my honest thanks to my supervisor Dr. Nguyen Xuan Truong, whose constant
guidance and advice played an essential role in my research, and to Mr. Trinh
Minh Toan (Solution engineer, HDL Engineering and Trading Company), whose
guidance, suggestions and encouragement provided me the necessary insight into
the research problem. My special gratitude goes to my teammates (Hiếu, Hoàng,
Nhật) for their meticulous support. I have no hesitation in saying that, without
their constant support, I would have failed to complete my work on time.
I would also like to thank the Space and Applications department at USTH,
including Assoc. Prof. Ngo Duc Thanh and Dr. Tong Sy Son, for their help not only
during the internship but also during my undergraduate years.
Getting through my dissertation required more than academic support, and I have
many people to thank for listening to and, at times, having to tolerate me over
these three years at USTH. I cannot begin to express my gratitude and appreciation
for their friendship. My classmates have been unwavering in their personal and
professional support during the time I spent at the University.
Most importantly, none of this could have happened without my family. To my
parents: it would be an understatement to say that, as a family, we have experienced
some ups and downs in these three years. This dissertation stands as a testament to
your unconditional love and encouragement.
LIST OF ABBREVIATIONS
ANSI: American National Standards Institute
ASHRAE: American Society of Heating, Refrigerating and Air-Conditioning Engineers
CRAC: Computer Room Air Conditioner
CB
CC
CPU: Central Processing Unit
CDC
DC: Data Center
ENIAC: Electronic Numerical Integrator and Computer
HVAC: Heating, Ventilation and Air Conditioning
IDC: Internet Data Center
IT: Information Technology
ICT: Information and Communication Technology
ITI
ITU: International Telecommunication Union
MNO: Mobile Network Operator
MPS: Ministry of Public Security
M&E: Mechanical and Electrical
NOC: Network Operations Center
PDU: Power Distribution Unit
TIA: Telecommunications Industry Association
TCS
UI: Uptime Institute
UPS: Uninterruptible Power Supply
USTH: University of Science and Technology of Hanoi
LIST OF TABLES
TABLE 1: TOP 5 BIGGEST DATA CENTERS IN THE WORLD, 2019 [11] .......... 12
TABLE 2: DATA CENTER SIZE CLASSIFICATIONS BY RACK NUMBER AND LOCATION SPACE [12] .......... 13
TABLE 3: DATA CENTER INFRASTRUCTURE CLASSIFICATION TIERS .......... 14
TABLE 4: DATA CENTER INFRASTRUCTURE MAIN COMPONENTS .......... 15
TABLE 5: RACK CABINET DESIGN CONSIDERATION WITH POWER DENSITY [16, 17] .......... 17
TABLE 6: 17 COLOCATION DATA CENTERS IN VIETNAM .......... 26
TABLE 7: TOPOLOGY STANDARD (UPTIME AND TIA-942) .......... 29
TABLE 8: POWER CONSUMPTION CALCULATION OF THE IT EQUIPMENT IN THE DATA CENTER (CRITICAL EQUIPMENT SUPPLIED BY UPS) .......... 35
TABLE 9: POWER SIZING UPS .......... 36
TABLE 10: THERMAL SIZING CRAC PRECISION COOLING SYSTEM IN IT ROOM .......... 37
TABLE 11: POWER DEMAND CALCULATION FOR ALL EQUIPMENT IN DATA CENTER .......... 37
LIST OF FIGURES
Figure 1: Typical data center .......... 10
Figure 2: Typical layout of a Data Center arranged into three main areas: server room, power room, NOC [9] .......... 11
Figure 3: Two main groups: core components (IT systems) and physical infrastructures .......... 11
Figure 4: Critical Building Systems of a Data Center .......... 15
Figure 5: Rack cabinet row arrangement in a Data Center and IT equipment arrangement inside a rack cabinet .......... 17
Figure 6: An overview of the power supply system in the Data Center [18, 19] .......... 18
Figure 7: Example of a hot aisle/cold aisle configuration recommended by ASHRAE TC 9.9, 2015 4th edition [22] .......... 20
Figure 8: ASHRAE TC 9.9 4th edition 2015 Thermal Guidelines [23] .......... 21
Figure 9: Raised floor in a Data Center [24] .......... 22
Figure 10: Transponder of satellite internet .......... 23
Figure 11: A Sentinel image of Northeast Vietnam from Sentinel 1 .......... 24
Figure 12: Mini data center at the University of Science and Technology of Hanoi .......... 25
Figure 13: Some Data Centers in Vietnam certified by the Uptime Institute (updated 2020), https://uptimeinstitute.com/uptime-institute-awards/ .......... 26
Figure 14: Topologies of different Tier systems [19, 30, 31] .......... 28
Figure 15: Typical cooling technologies: row-based (InRow) and room-based (InRoom) [32] .......... 30
Figure 16: Data Center at the University of Science and Technology of Hanoi: 5th floor of the 9-floor building, IT room area of 60 m2 with 03 rack cabinets, 8 CPUs and other storage .......... 31
Figure 17: Actual Data Center at the University of Science and Technology of Hanoi, 5th floor (USTH building), with comfort air conditioning, no UPS, no backup generator, no access control and no raised floor .......... 31
Figure 18: Layout arrangement of the Data Center at the University of Science and Technology of Hanoi, 5th floor: 03 rack cabinets, 8 CPUs and other storage .......... 32
Figure 19: Layout arrangement of the new Data Center at the University of Science and Technology of Hanoi, 5th floor: 20 rack cabinets .......... 34
Figure 20: Topology for the Data Center at USTH according to the Uptime Institute Tier 3 for M&E infrastructure [19, 30, 31] .......... 36
Figure 21: Layout arrangement of the Data Center at USTH according to the Uptime Institute Tier 3 .......... 38
ABSTRACT
The 21st century is a booming era of digital technology; it is also the age of cloud
computing, where internet-based data is handled from remote places. Data is entered,
stored, processed, deposited and backed up at central information infrastructure
located in dedicated buildings that we call Information Technology rooms or data
centers. A data center is a place where all the information technology servers, storage
and network facilities are gathered in compliance with the state of the art, and where
the equipment requires 24/7 continuous operation. Data centers are becoming as
important a part of business operations as office, retail and industrial assets. Big Data,
the Internet of Things, the Industry 4.0 revolution and smart cities are all trends that
will increase demand for computing power, and thus for data centers, on an exponential
rather than linear scale. However, knowledge about data center architecture, especially
infrastructure design for a typical location, remains limited. The first part of the report
covers the application of data centers in the Space domain, in which the large amount
of data collected daily from satellites is stored, computed and processed for specific
applications, such as weather forecasting, that require high-performance computing
and storage facilities. This thesis presents a comprehensive literature review that
accounts for the definition, basic concept design requirements, applications, and
infrastructure topology of data centers. Based on these observations, an important part
of the report aims at the basic design of the infrastructure of a small data center at
USTH. The primary design is based on best practices and international reference
standards for data center infrastructure design, such as the Tier-3 level recommended
by the Uptime Institute. The design scale is geared towards the current use of IT
equipment as well as the ability to expand over the next 5 years at USTH. We conclude
the design by describing key components and challenges for future research on
constructing effective and accurate data center infrastructures.
Keywords: Data center, Cloud computing, Server, Infrastructure topology, Tier level.
I. INTRODUCTION
1.1 CONTEXT AND MOTIVATION
Data centres, or data centers, are facilities that house the hardware and software
that support the information technology systems of companies, government
organizations, banks and universities. The mission-critical nature of computers in the
modern business world means that data centers have additional requirements that
differentiate them from a typical storage unit. These include enhanced power, cooling,
connectivity and security features. Data centres are becoming as important a part of
business operations as office, retail and industrial assets. This trend is driven by several
factors, including the increasingly digital world, Information Technology (IT)
development and the importance enterprise IT strategy plays in business delivery. The
evolution of cloud computing, which provides on-demand provisioning of elastic
resources with a pay-as-you-go model, has transformed the Information and
Communication Technology industry. Over the last few years, large enterprises and
government organizations have migrated their data and mission-critical workloads into
the cloud. As we move towards the fifth generation of cellular communication systems
(5G), Mobile Network Operators need to address the increasing demand for more
bandwidth and latency-critical applications. Thus, they leverage the capabilities of
cloud computing and run their network elements on distributed cloud resources. The
adoption of cloud computing by many industries has resulted in the establishment of
enormous data centers around the world containing thousands of servers and network
equipment. Data centers are large-scale physical infrastructures that provide computing
resources, network and storage facilities. Cloud computing is expanding across
different industries, and along with it the footprint of the data center facilities which
host the infrastructure and run the services is growing. In 2015 there were 259
hyperscale data centers around the globe, and by 2020 this number was projected to
grow to 485 [1]. These types of data centers will roughly accommodate 50% of the
servers installed in all the distributed data centers worldwide. Data centers are promoted
as a key enabler for the fast-growing IT industry, resulting in a global market size of
152 billion US dollars by 2016 [2].
In Vietnam [3], the new Cybersecurity Law is among the factors that will drive
more demand for data centers. The law, which came into effect earlier this year, requires
technology businesses to store Vietnamese users' data within the country and provide it
to the Ministry of Public Security on request. In the International Telecommunication
Union's 2018 survey, Vietnam ranked 50th out of 175 countries in cybersecurity. The
country's emerging and tech-savvy population is another factor: it has 64 million
Internet users. The fast-growing trend in the Asia-Pacific of colocating data centers is
underpinned by the rapid pace of digitization and a surge in demand for cloud-based
services across the region. In Vietnam there are currently 17 colocation data centers in
3 areas (Hanoi, Ho Chi Minh City and Da Nang). The large number of people using
Internet services and smartphones is creating a trend of big technology companies
moving factories and transferring servers to Vietnam. The country is also developing its
5G information technology infrastructure, aiming toward smart cities. The data center
market in Vietnam is therefore considered to have strong potential. However, a full
understanding of the infrastructure and operation of data centers is still limited,
concentrated mainly within a few large service providers such as Viettel IDC and CMC
Telecom. Currently, there are no statistical reports or aggregate assessments of the
number of data centers, nor guidelines related to the design of data center infrastructure
and operations in Vietnam.
1.2 OBJECTIVES OF THE THESIS
As mentioned in section 1.1, the number and size of data centers in Vietnam will
increase greatly in the coming years, making a full understanding of the infrastructure
and its components, both mechanical and electrical (power, cooling, etc.) and
information technology (cabling, internet connectivity, etc.), a necessity.
In this thesis, we give an overview of the data center's physical infrastructure,
including key components (electrical systems, air-conditioning systems, rack systems)
and the international standards that serve as references for data center design.
As an important part of the thesis, we cover the design of a small-scale data center
(20 rack cabinets) for the University of Science and Technology of Hanoi, used for data
storage, data processing, computation and simulation. This first design will serve as the
basis for more detailed designs of data centers in the future.
1.3 STRUCTURE OF THE THESIS
The thesis is structured into three main parts, starting with the introduction of the
topic. The rest of the work is organized as follows:
Part II introduces data center concepts and briefly presents data center
architecture. It explains the importance of power and cooling air supply and describes
how they are used to ensure the reliable 24/7 operation of the IT load (servers, network,
storage). It also covers the data center's important applications for businesses and
organizations in different fields, such as Space science. Finally, it presents the
classification of data centers according to power capacity and number of rack cabinets,
and following the Uptime Institute classification into Tiers 1-4. An important part
related to infrastructure design standards, namely Uptime TIER, ANSI/TIA-942 and
the recent ASHRAE TC 9.9 thermal guidelines (2015), is delivered. We also give a
short overview of the state of data centers in Vietnam to date.
Part III presents the proposed design for a typical Data Center at the University
of Science and Technology of Hanoi. We propose a Data Center infrastructure design
(focused on power and cooling systems) according to the TIER-3 standard. This design
aims to allow expansion in the future, as the number of students at the university may
increase up to 5000 students/year; racks will be used by different departments of the
university (Space and Applications, Energy, Information and Communication
Technology, Water Environment Oceanography, etc.), prioritizing the storage of all the
university's databases such as student records, faculty records, and other important
USTH databases.
Part IV summarizes the work, recalling the purpose of constructing the concept
design for the Data Center. It states a few observations about the design standards that
were used and the first concept obtained.
II. BACKGROUND AND RELATED WORKS
2.1 WHAT IS A DATA CENTER?
2.1.1 Data Center basic concepts
Data Centers (DC), sometimes spelled Data Centres, are a fundamental part of IT
operations and provide computing facilities to large entities such as online social
networks, cloud computing services, online businesses, hospitals, and universities [4].
According to Cisco [5], a data center is a physical facility that organizations use to
house their critical applications and data. A data center's design is based on a network of
computing and storage resources that enable the delivery of shared applications and data.
The key components of a data center design include routers, switches, firewalls, storage
systems, servers, and application-delivery controllers. According to [6], data centers are
computer warehouses that store large amounts of data to meet the daily transaction
processing needs of different businesses. They contain servers for the collection of data
and network infrastructure for the utilization and storage of the data. Data centres usually
run 24/7 all year round [7], and they are very energy intensive, with power densities of
538 to 2,153 W/m2 that can sometimes reach up to 10 kW/m2 [8].
Figure 1: Typical data center
Figure 1 presents an example of a data center, in which the physical
infrastructure consists mainly of IT equipment (servers, network, storage, etc.) installed
inside rack cabinets (see next section). Figure 2 shows the physical layout of a data
center, which is generally arranged into spaces: the server room, the power room, the
Network Operations Center (NOC), and other auxiliary rooms (warehouse, etc.).
Figure 2: Typical layout of a Data Center arranged into three main areas: server room,
power room, NOC [9]
To ensure stable and continuous 24/7 operation of IT devices, two main concerns
arise in the early design stage and during the operation phase of a data center (Figure 3):
(i) what are the core components of a data center? (ii) what is in a data center facility?
The first concern covers routers, switches, firewalls, storage systems, servers, and
application delivery controllers. These components store and manage business-critical
data and applications. Network infrastructure connects servers (physical and
virtualized), data center services, storage, and external connectivity to end-user
locations; storage infrastructure holds data, the data center's most valuable commodity;
computing resources provide the processing, memory, local storage, and network
connectivity through servers. The second concern covers the significant physical
infrastructure that data center facilities require to support the center's hardware and
software: power subsystems, uninterruptible power supplies (UPS), ventilation, cooling
systems, fire suppression, backup generators, and connections to external networks.
Figure 3: Two main groups: core components (IT systems) and physical infrastructures
2.1.2 Data Center in daily application
There are many applications of the data center; the criticality of data centers has
been fueled mainly by two aspects. First, the ever-increasing demand for data
computing, processing and storage by a variety of large-scale cloud services such as
Google, Facebook and Amazon, by telecommunication operators, by banks and by
government organizations has resulted in the proliferation of large data centers with
thousands or even millions of servers and CPUs. Second, the requirement to support a
vast variety of applications, ranging from those that run for a few seconds to those that
run persistently on shared hardware platforms, has promoted the building of large-scale
computing infrastructures. As a result, data centers have been touted as one of the key
enabling technologies for the fast-growing IT industry, and at the same time the global
data center market is expected to reach revenues of around $174 billion by 2023 [10].
Table 1: Top 5 biggest data centers in the world, 2019 [11]

#   Data Center Company                     Facility Location   Area
1   Range International Information Group   Langfang, China     6,300,000 sq ft
2   Switch SuperNAP                         Nevada, USA         3,500,000 sq ft
3   DuPont Fabros Technology                Virginia, USA       1,600,000 sq ft
4   Utah Data Centre                        Utah, USA           1,500,000 sq ft
5   Microsoft Data Centre                   Iowa, USA           1,200,000 sq ft
Data centers are designed to support business applications and research activities,
including: (i) email and file sharing, which need long-term storage; (ii) customer
relationship management, i.e. managing all of a company's relationships and
interactions with current and potential customers, where customer data is uploaded and
stored in a DC so that companies can access it anytime, anywhere; (iii) big data,
artificial intelligence, and machine learning, which require storing and processing large
amounts of data in a short time; (iv) virtual desktops, communication, and collaboration
services.
2.2 DATA CENTER CLASSIFICATION AND INFRASTRUCTURE
COMPONENTS
2.2.1 Data Center classification
Data centers can generally be classified in the following ways: by owner and
service provision purpose; by the number of rack cabinets (or the amount of IT
equipment); and by the Uptime Institute rating standard.
Firstly, the two broad categories of data center ownership are enterprise and
colocation. Enterprise data centers are built and owned by large technology companies
such as Amazon, Facebook, Google, Microsoft and Yahoo, as well as government
agencies, financial institutions, insurance companies, retailers, and other companies
across all industries. Enterprise data centers support web-related services for their
organizations, partners, and customers. Colocation data centers are typically built,
owned, and managed by data center service providers such as Coresite, CyrusOne,
Digital Realty Trust and DuPont Fabros. These service providers do not use the services
themselves but rather lease the space to one or multiple tenants [12].
The Data Center Institute classifies data centers into six size groups (Table 2),
measured by space or number of racks [12, 13]. The Uptime Institute created a standard
Tier Classification System (Table 3) with four tiers to consistently evaluate the
infrastructure performance, or uptime, of data centers [14].
Table 2: Data center size classifications by rack number and location space [12]

Size                  Number of racks    Location area of rack room (m2)
Mini Data center      1-10               2.6-26
Small Data center     11-200             28.6-520
Medium Data center    201-800            522.6-2,080
Large Data center     801-3,000          2,082.6-7,800
Massive Data center   3,001-9,000        7,802.6-23,400
Mega Data center      More than 9,000    More than 23,400
Table 3: Data center infrastructure classification tiers

Tier I (Basic Capacity), uptime 99.671%, downtime 28.8 hours per year: Data centers
provide dedicated site infrastructure to support IT beyond an office setting, including
a dedicated space for IT systems, an uninterruptible power supply, dedicated cooling
equipment that does not shut down at the end of normal office hours, and an engine
generator to protect IT functions from extended power outages.

Tier II (Redundant Capacity Components), uptime 99.749%, downtime 22 hours per
year: Data centers include redundant critical power and cooling components to provide
select maintenance opportunities and an increased margin of safety against IT process
disruptions that would result from site infrastructure equipment failures. The redundant
components include power and cooling equipment.

Tier III (Concurrently Maintainable), uptime 99.982%, downtime 1.6 hours per year:
Data centers have no shutdowns for equipment replacement and maintenance. A
redundant delivery path for power and cooling is added to the redundant critical
components of Tier II, so that each component needed to support the IT processing
environment can be shut down and maintained without impacting the IT operation.

Tier IV (Fault Tolerance), uptime 99.995%, downtime 26.3 minutes per year: Site
infrastructure builds on Tier III, adding the concept of Fault Tolerance to the site
infrastructure topology. Fault Tolerance means that when individual equipment failures
or distribution path interruptions occur, the effects of the events are stopped short of
the IT operations.
2.2.2 Data Center Infrastructure: main components
As described in Figure 2 (previous section), in order to ensure the stable, safe and
continuous 24/7 operation of the data center, the data center infrastructure generally
consists of the major equipment described in Figure 4. The main components,
summarized in Table 4, can be divided into hardware and software. In particular,
hardware is divided into information technology infrastructure and physical
infrastructure (mechanical and electrical equipment) [15]. The mechanical and
electrical (M&E) systems can make up more than 60% of the total development cost of
a new data center and are thus a major cost component. M&E systems include electrical
substations, chillers, backup generators, uninterruptible power supplies (UPS) and
computer room air-conditioning (CRAC) units.
Figure 4: Critical Building Systems of a Data Center
Table 4: Data center infrastructure main components

Hardware
- IT facilities: server racks, network racks, storage racks, other racks, and cabling
  systems
- Energy facilities:
  - Electrical network: to supply the power
  - Generator (GEN): to guarantee a reliable power supply
  - Uninterruptible power supply (UPS) and battery storage: to guarantee a reliable
    power supply
  - Main distribution and power distribution units (PDU) at rack and server levels: to
    supply power to racks and equipment
- Mechanical facilities: heating, ventilation and air conditioning (HVAC); chiller
  system
- Other facilities: raised floor, fire protection, security system
Software
- Data center infrastructure management system
a) Rack Cabinet
A data center rack is a type of physical steel and electronic enclosure designed to
house servers, networking devices, cables and other data center computing equipment.
This physical structure provides equipment placement and orchestration within a data
center facility. Each rack is generally prefabricated with slots for connecting electrical,
networking and Internet cables. Data center racks follow a systematic design and are
classified based on their capacity, i.e. the amount of IT equipment they can hold. Two
types of power ratings can be defined for a rack in a data center [16, 17]. The maximum
power of a rack, denoted Rack Max Power, is determined by the rack power supply and
generally adopts its rating power (kVA or kW). When the total power of all devices
stacked inside a rack reaches or exceeds its maximum power, the power supply is cut
off to protect those devices. Meanwhile, the maximum power of a server, denoted
Server Max Power, generally adopts the server's rating power; in practice, a server's
actual power consumption is far lower than its rating power, even when it runs at full
capacity [16]. When deploying servers, it must be ensured that the total power demand
of all servers in a rack does not exceed the maximum power of the rack, thus avoiding
disruption to the workloads running on the rack due to a power cut-off. In practice, the
total power demand of servers in a rack is usually calculated from all servers' maximum
power. New generations of high-density servers and networking equipment have
increased rack densities and overall facility power requirements: while power density
per rack averaged 6 kW in 2006, it climbed to about 8 kW by 2012 and 12 kW per rack
by 2014. A data center's capacity is normally rated by its power density (Table 5).
Figure 5: Rack cabinet rows arrangement in a Data Center and IT equipment
arrangement inside of a Rack cabinet
Table 5: Rack cabinet design consideration with power density [16, 17]

Classification   Power density (kW/cabinet)
Low DC           Up to 5 kW
Medium DC        6 kW to 10 kW
High DC          11 kW to 17 kW
Extreme DC       Over 17 kW
b) Power System
IT equipment and the supporting infrastructure inside the data center consume a
lot of energy. Data centers that deliver critical services for businesses have always been
concerned with costly downtime. In order to guarantee a reliable power supply, data
centers employ a variety of energy resources, such as uninterruptible power supply
systems and backup generators, to take over when the grid is unavailable [18, 19]. In
addition, all critical loads are dual powered according to the guidelines for data center
infrastructures by the Uptime Institute (1). IT equipment requires 24/7 continuous
operation, so it is prioritized for power from the UPS via the power distribution unit
(PDU) to the rack cabinets. To prevent power outages, a data center needs an
uninterruptible power supply: during a power interruption, the UPS switches the current
load over to a set of internal or external batteries (Figure 6).
Figure 6: An overview of power supply system in the Data Center [18, 19]
(1) https://uptimeinstitute.com/tier-certification/design
c) Cooling System
Every kilowatt (kW) of electrical power used by IT devices is eventually released
as heat [20, 21]. This heat must be drawn away from the device, the rack cabinet and
the data center area so that operating temperatures are kept constant. Air conditioning
systems, which operate in a variety of ways and have different levels of performance,
are used to draw away this heat. Providing air conditioning to IT systems is crucial for
their availability and security. When servers operate, they heat up; if they reach a
critical point, the server components will not work properly, or the processor may burn
out. Humidity is also important: if the data center is too humid, condensation forms,
and condensation on hard drives or in connecting sockets quickly leads to damage,
corrosion, and eventually equipment failure. When data center air is too dry, electric
sparks are easily produced, and the high voltages from static discharge can damage
data center components. Temperature and humidity are kept in ideal condition by using
Computer Room Air Conditioners (CRAC). IT hardware produces an unusual,
concentrated heat load and, at the same time, is very sensitive to changes in temperature
or humidity. In fact, there are two ways to remove the heat load in a room: standard
comfort air conditioners and precision air conditioners. Standard comfort air
conditioning is not designed to handle the heat load concentration and heat load profile
of technology rooms, nor is it designed to provide the precise temperature and humidity
set points required for these applications. Precision air systems are designed for close
temperature and humidity control. They provide high reliability for year-round
operation, with the ease of service, system flexibility and redundancy necessary to keep
the technology room up and running 24 hours a day [20].
As recommended by the American Society of Heating, Refrigerating and Air-
Conditioning Engineers (ASHRAE (2)) [21, 22] and the ANSI/TIA-942 data center
standard, the first step to gaining control of excess heat is to reconfigure the cabinet
layout into a hot aisle/cold aisle arrangement (Figure 7). The ASHRAE allowable
thermal envelopes, as defined in the Thermal Guidelines for Data Processing
Environments, represent the conditions under which IT manufacturers test equipment
to ensure functionality. According to the ASHRAE TC 9.9 standards, to ensure an
optimal working environment for servers, IT and telecommunications equipment,
cooling systems need to maintain the IT environment as follows: ASHRAE TC 9.9
2015 recommends a temperature of 22°C ± 1°C and a humidity of 50% ± 5% RH.
ASHRAE issued its first thermal guidelines for data centers in 2004. The original
ASHRAE recommended air temperature envelope for data centers was 20-25°C and
40-55% RH (2004); reliability and uptime were the primary concerns and energy costs
were secondary. Since then, ASHRAE has issued a recommended range of 18-27°C
and 35-60% RH (2008) and, in 2011 [23], published classes A3 and A4 that allow a
temperature range of 5-45°C and 20-80% RH. The A3 and A4 classes were created to
support new energy-saving technologies such as economization. A summary of the
ASHRAE recommended range and classes is given in Figure 8. At the rack intake (cold
aisle) the air temperature must be between 18°C and 27°C, and at the exhaust (hot aisle)
between 30°C and 38°C, with relative humidity (RH) from 40% to 55% depending on
the active thermal load (Figure 7).

(2) https://www.electronics-cooling.com/2019/09/ashrae-technical-committee-9-9-mission-critical-facilities-data-centers-technology-spaces-and-electronic-equipment/
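To make the envelope concrete, here is a small Python sketch, our own illustration
rather than anything published by ASHRAE, that checks a sensor reading against the
2008 recommended range quoted above; the function name and alerting behaviour are
assumptions for the example.

```python
# Minimal sketch: check a (temperature, humidity) reading against the
# ASHRAE 2008 recommended envelope quoted in the text (18-27 C, 35-60% RH).
# Thresholds come from the paragraph above; everything else is illustrative.

RECOMMENDED_TEMP_C = (18.0, 27.0)
RECOMMENDED_RH_PCT = (35.0, 60.0)

def in_envelope(temp_c: float, rh_pct: float) -> bool:
    """True if the reading lies inside the recommended envelope."""
    t_lo, t_hi = RECOMMENDED_TEMP_C
    h_lo, h_hi = RECOMMENDED_RH_PCT
    return t_lo <= temp_c <= t_hi and h_lo <= rh_pct <= h_hi

# The cold-aisle target from the text (22 C, 50% RH) passes;
# an overheated aisle at 30 C fails.
print(in_envelope(22.0, 50.0))  # True
print(in_envelope(30.0, 45.0))  # False
```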
Figure 7: Example of a hot aisle/cold aisle configuration recommended by ASHRAE TC
9.9 in 2015 4th edition [22]
Figure 8: ASHRAE TC 9.9 - 4th edition 2015 Thermal Guidelines [23]
d) Raised floor
A raised floor provides high load support, easy access for maintenance of
underfloor equipment, cleaning, and safety, and offers a flexible, modular space for
cooling air distribution and cabling. According to [24], the raised floor was developed
and implemented as a system intended to provide the following functions:
- a cold air distribution system for cooling IT equipment;
- tracks, conduits, or supports for data cabling;
- a location for power cabling;
- a copper ground grid for grounding of equipment;
- a location to run chilled water or other utility piping.
Figure 9: Raised floor in a Data Center [24]
2.3 DATA CENTER APPLICATION IN SPACE DOMAIN
As of 31 March 2020, there were 2,666 operating satellites in orbit, owned by the
United States (1,327), Russia (169), China (363), and other countries (807). These
satellites comprise 1,918 in low Earth orbit (LEO), 135 in medium Earth orbit (MEO),
59 in elliptical orbit, and 554 in geostationary orbit (GEO). Space technology has
advanced rapidly in recent years, and satellites play an important role in daily life [25].
Four important satellite applications are communication, navigation, weather and
climate, and Earth and planetary observation. A communications satellite relays and
amplifies radio telecommunications signals via a transponder; it creates a
communication channel between a source transmitter and a receiver at different
locations on Earth. Communications satellites are used for television, telephone, radio,
internet, and military applications. There are about 2,000 communication satellites in
Earth's orbit, used for private, public, academic, business, and government purposes.
The frequency at which the signal is sent into space is called the uplink frequency,
while the frequency at which the transponder retransmits it is the downlink frequency
(Figure 10).
Figure 10: Transponder of satellite internet
Satellite internet speeds range from 12 Mbps to 100 Mbps [8]. This means about
1 TB of data per day, or 30 TB per month, can be transmitted between a satellite and a
ground station. If many satellites are used to transmit data, hundreds of TB of data per
month must be collected, stored, processed, and distributed. Therefore, a data center is
built to manage this large amount of data.
Another satellite application is Earth and planetary observation. The European
Space Agency (ESA) operates five Sentinel programmes (Sentinel 1, 2, 3, 4, 5P) [26]
that include radar and super-spectral imaging for land, ocean, and atmosphere
monitoring. For instance, soil moisture can be extracted and studied from Sentinel 1
satellite data (Figure 11); each such image contains about 1.5 GB of data, and on
average one image is captured every 30 s. This means about (24 x 3600 / 30) x 1.5 GB
= 4.32 TB of data is acquired per day. The five Sentinel programmes together will
produce hundreds of TB of data each month. Therefore, a data center is designed to
manage this very large amount of data.
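The back-of-the-envelope arithmetic above can be reproduced directly; the sketch below
uses the image size and capture interval stated in the text, with 1 TB taken as 1000 GB.

```python
# Reproduce the data-volume estimate from the text: one ~1.5 GB image
# every 30 seconds, accumulated over a day and a month.

IMAGE_SIZE_GB = 1.5          # size of one Sentinel 1 image (from the text)
CAPTURE_INTERVAL_S = 30      # one image every 30 s (from the text)

images_per_day = 24 * 3600 // CAPTURE_INTERVAL_S    # 2,880 images
daily_volume_tb = images_per_day * IMAGE_SIZE_GB / 1000
monthly_volume_tb = daily_volume_tb * 30

print(f"{images_per_day} images/day -> {daily_volume_tb:.2f} TB/day")
print(f"~{monthly_volume_tb:.0f} TB/month")   # ~130 TB/month for one stream
```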
Figure 11: A Sentinel image of Northeast Vietnam from Sentinel 1
The Indian Space Science Programme has the primary goal of promoting and
establishing space science and technology programmes. The Indian Space Science Data
Centre (ISSDC) is the primary data center for the data retrieved from Indian space
science missions. This center is responsible for the collection of payload data and
related ancillary data for space science missions such as Chandrayaan, Astrosat and
Youthsat. The payload data sets can include a range of information such as satellite
images, X-ray spectrometer readings, and other space observations [27]. The Southeast
Asia Regional Climate Downscaling (SEACLID) project was established in November
2013 with the objectives of downscaling multiple climate change scenarios for
Southeast Asia, building capacity in regional climate simulation, and establishing a
data centre for data dissemination [28].
A mini data center of the Department of Space and Applications at USTH
(Figure 12) is built to store and process the databases for scientific research activities.
This research includes climate modeling and remote sensing projects, which demand
high-performance computing. Climate modeling is based on mathematical equations
representing the basic laws of physics, chemistry, and biology that govern the behavior
of the atmosphere, ocean, land surface, ice, the other parts of the climate system, and
the interactions among them. Therefore, climate modeling requires massive computing
capacity and capability, with computer performance of about 2 Tflops and storage of
about 200 TB of data.
Figure 12: Mini data center at the University of Science and Technology of Hanoi
2.4 OVERVIEW OF DATA CENTER IN VIETNAM
In Vietnam there are in fact many data centers, operated by banks, agencies, state
organizations and enterprises. However, there are no official statistical reports or full
evaluation analyses. The development of information technology is making Vietnam
one of the fastest growing countries in the data center field. Currently, there are 17
colocation data centers across three areas in Vietnam (Da Nang 1, Hanoi 10, and Ho Chi
Minh City 6) [29]. Popular data centers are owned by Viettel IDC, Telehouse VN, and
VNTT. Viettel IDC obtained the ANSI/TIA-942-B:2017 Rated-3 Constructed Facilities
certification (24 December 2019), becoming the first data center service provider in
Vietnam to meet this rigorous international standard. FPT Telecom is constructing what
will be Vietnam's largest data center (10,000 square meters of computer space) at the
Hi-Tech Park in Ho Chi Minh City. Some data centers have been certified by the
Uptime Institute (3).

(3) https://uptimeinstitute.com/uptime-institute-awards/
Figure 13: Some Data Center in Vietnam certified by Uptime Institute (updated 2020)
https://uptimeinstitute.com/uptime-institute-awards/
Table 6: 17 colocation data centers in Vietnam

Hanoi (10):
1. Telehouse Vietnam Hanoi
2. Viettel IDC Phap Van
3. CMC Hanoi Data center
4. VNPT Data Nam Thang Long
5. Viettel IDC Binh Duong industrial zone
6. GDS Hanoi Thang Long
7. FPT Data Center (Cau Giay)
8. Hanel CSF Data Center
9. Hanoi Thang Long Data Center
10. FPT FORNIX, FPT Telecom International, Hanoi

Ho Chi Minh City (6):
1. QTSC Data Center
2. Viettel IDC Hoang Hoa Tham DC
3. eDatacenter VNTT
4. Viettel IDC HHT DC
5. CMC Telecom DC Ho Chi Minh
6. DTS Data Center

Da Nang (1):
1. Viettel IDC Danang
III. BASIC CONCEPT DESIGN OF A DATA CENTER IN USTH
3.1 TECHNICAL STANDARDS REFERENCES FOR DESIGN CONCEPT
3.1.1 Technical standards for design of physical facilities in Data Center
Designing a data center is a huge task that requires a lot of time, effort, and expense.
When done properly, a data center facility can house servers and other IT equipment
for decades. Whether planning a modest facility for a specific company or a massive,
million-plus square foot facility for cloud technologies, doing everything properly is
critical. As general best practice, we should consider the following points when looking
at the needs for a data center design [30]:
- Floorspace: How many square feet of floorspace do you need today? Do you
  expect this to grow over time? It is much less expensive to build what you need now
  than to try to perform a renovation in a few years.
- Power requirements: The electrical needs of a data center can be quite massive.
  Take time to plan out the needs you have today and the potential requirements you
  will have in the future.
- Cooling requirements: As you add more and more hardware into a data center,
  the heat produced will need to be eliminated. New cooling units are extremely
  costly, so investing in the right ones up front is essential.
- Server space: Choosing the right server racks now will allow you to house your
  equipment properly while leaving space for growth as well. Many new data centers
  have rows of empty racks that help to facilitate proper airflow until they are filled.
Two international standards are referenced for the design of data center
infrastructure: Uptime and ANSI/TIA-942 [30, 31]. These consist of facility
requirements for power, cooling and backup that measure a data center's potential
uptime. The first is the Uptime standard of the Uptime Institute, an advisory
organization focused on improving the performance, efficiency, and reliability of data
centers. The Uptime Institute covers the electrical part, the mechanical part and
ancillary components (engine generator, fuel system, make-up water system, building
automation system), and defines 4 Tiers of system topologies for describing the
availability of systems, as shown in Figure 14. The second is the international standard
ANSI/TIA-942, issued by TIA, a non-profit organization accredited by ANSI. The
standard is publicly available, leading to great transparency, and covers all aspects of
the physical data center, including site location, architecture, security, safety, fire
suppression, electrical, mechanical and telecommunication systems. A summary of the
requirements for data center design, divided into 4 levels, is given in Table 7.
Figure 14: Topologies of different Tier systems [19, 30, 31]
A continuous and smooth power supply is the most basic requirement of the data
center. The cost of a power interruption can be very high, especially for e-commerce
related businesses, for example. In order to provide continuous power during grid
outages, data centers are provided with backup generators (GEN) and uninterruptible
power supply systems. The UPS plays an important role in maintaining data center
uptime by providing continuous power until the backup generators start in the event of
grid failure. Energy storage is the most critical component and requires regular
maintenance. As one storage technique, UPS manufacturers use batteries as energy
storage for a backup time of at least 10 minutes, which gives enough time for data to
be backed up and for the system to be powered from the generator during a grid outage.
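As a rough illustration of this 10-minute rule, the following sketch estimates the battery
energy a UPS must hold. The load figure anticipates the 97 kW UPS-fed load computed
in Table 8 below; a real design would add margins for inverter losses and battery aging,
which are deliberately ignored here.

```python
# Rough battery energy sizing for the "at least 10 minutes" backup rule.
# E = P x t, ignoring inverter losses and battery aging margins that a
# real UPS design would include.

def battery_energy_kwh(it_load_kw: float, backup_minutes: float = 10.0) -> float:
    """Energy (kWh) the batteries must deliver to ride through an outage."""
    return it_load_kw * backup_minutes / 60.0

# Example with the 97 kW UPS-fed load computed later in Table 8.
print(battery_energy_kwh(97.0))  # ~16.2 kWh for 10 minutes of autonomy
```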
Table 7: Topology standard (Uptime and TIA-942)

Requirement                                         Level 1   Level 2   Level 3                                  Level 4
Active capacity components to support the IT load   N         N+1       N+1                                      N+N
Distribution paths                                  1         1         1 active & 1 alternate                   2 (both active)
Concurrently maintainable                           No        No        Yes                                      Yes
Fault tolerance                                     No        No        No                                       Yes
Compartmentalization                                No        No        Yes                                      Yes
Continuous cooling                                  No        No        No if average < 5 kW; Yes if > 5 kW      Yes
As can be seen in Figure 14, the difference between Tier I and Tier II is the
number of generators and UPS units: in Tier II, additional generators and UPS provide
backup for the most critical components. The significant difference between Tier II and
Tier III is the number of delivery paths. In Tier III, alternative power from a second
utility provides parallel power support for the critical IT load in case of a power failure
on the primary path. However, there is no requirement to install a UPS in the passive
path; therefore, a Tier III system is vulnerable to utility conditions. Tier IV provides a
completely redundant system with two active power delivery paths, enabling dual
systems to run actively in parallel; each power path contains N+1 UPS and generator
sets. The comparison of performance of the different Tier systems is shown in Table 7:
the higher the Tier level, the greater the system availability.
The data center cooling system is an important system that removes the heat
discharged by IT equipment during operation. The calculation of the thermal load is
based on the power consumption of the IT equipment. Using a precision air-
conditioning system is recommended according to the standards described in the
previous section. The ASHRAE allowable thermal envelopes, as defined in the
"Thermal Guidelines for Data Processing Environments", represent the conditions
under which IT manufacturers test equipment to ensure functionality. In practice,
several cooling technologies are currently used in data centers: room-, row- and rack-
based cooling systems. Choosing the right cooling technology is based on experience
and on the requirements of the data center design. Figure 15 introduces two data center
server cooling technologies: room-based and row-based cooling. The closer the cooling
system is to the heat source, the more efficiently it operates. Room-based cooling may
consist of one or more air conditioners supplying cool air completely unrestricted by
ducts, dampers or vents, or the supply and/or return may be partially constrained by a
raised floor system or an overhead return plenum. In a row-based configuration, the
CRAC units are associated with a rack row and are assumed to be dedicated to that row
for design purposes. Row-based cooling has a number of side benefits beyond cooling
performance: the reduction in the airflow path length reduces the CRAC fan power
required, increasing cooling efficiency.
Figure 15: Typical cooling technologies: row-based (InRow) and room-based
(InRoom) [32]
3.1.2 Facts of the IT infrastructure in the USTH
Figure 16: Data Center at the University of Science and Technology of Hanoi: 5th
floor of the 9-floor building, IT room area of 60 m2 with 03 rack cabinets, 8 CPUs and
other storage

Figure 17: Actual Data Center at the University of Science and Technology of
Hanoi, 5th floor (USTH building), with comfort air conditioning, no UPS, no backup
generator, no access control and no raised floor systems
Figure 18: Layout arrangement of the Data Center at the University of Science and
Technology of Hanoi, 5th floor: 03 rack cabinets, 8 CPUs and other storage
Comments:
- The data center area is depth x width = 8500 x 6500 mm, on the 5th floor of a 9-
floor building. The room floor-to-ceiling height is 2700 mm (Figures 16-18).
- The room includes 03 rack cabinets of uneven height, width x depth = 600 mm x
1070 mm, of which 01 rack (1991 mm high, 600 mm wide, 1200 mm deep)
contains operating server equipment. The remaining racks contain other
equipment. The entire USTH database is stored and processed on 07 CPUs.
- Fire protection system: only suitable for fire prevention and fighting in an office
area; it does not comply with data center standards.
- The USTH data center is not equipped with UPS and generator systems, so data
loss can occur when the building's power supply is interrupted. The stability and
reliability of IT equipment operation are therefore not guaranteed to comply with
data center standards.
- Furthermore, the air conditioners currently in use are not the precision air cooling
systems recommended for data centers. They are standard comfort air
conditioners, without humidification and dehumidification systems, so it is
impossible to control the exact temperature and humidity across the entire IT
equipment room when the rack system operates, greatly affecting the stability
and performance of the IT equipment.
- Another point to consider is that the open data center space causes a large loss of
cooling air, and thus wasted power consumption and reduced ability to cool IT
equipment, when the data center operates during summer days with very high
outside ambient temperatures in Hanoi.
3.2 SIMPLE CONCEPT OF DATA CENTER DESIGN IN UNIVERSITY
3.2.1 Site installation and main components consideration
Based on the data center design standards presented in Part II, the main
infrastructure design consists of two main parts: the electrical infrastructure and the air
conditioning infrastructure.
In practice, the design is based on the needs of the end user (i.e. the power density
of the racks), the location where the data center is installed, and the deployment
experience of other data centers in Vietnam. In this work, we recommend installing the
data center on the 5th floor of the USTH building, complying with the TIER-3 standard.
We divide the area into rooms (see Figure 19): the IT room, for the racks containing
servers, storage and network equipment; the power room, containing the electrical
supply equipment (UPS, electrical distribution cabinets); and the NOC room, for the IT
staff managing and operating the USTH Data Center.
Figure 19: Layout arrangement of the new Data Center at the University of Science
and Technology of Hanoi, 5th floor: 20 rack cabinets
3.2.2 Proposal design concept of the Data Center in USTH
a) Rating of Data Center energy demand
In this report, we only address the calculation and design of the power supply for
all data center equipment and the cooling system for the IT equipment. IT equipment
in the IT room is supplied with priority from the UPS. The energy calculation is based
on the standards described in Chapter II and on best-practice experience in
implementing data centers at the company HDL JSC (4). Details of the energy supply
calculations are shown in Tables 8-10. The power and cooling topology for the IT room
is shown in Figure 20.

(4) http://hdl.com.vn/
Table 8: Power consumption calculation of the IT equipment in the Data Center
(critical equipment supplied by UPS)

I. POWER RATING FOR IT LOAD (critical equipment in the Data Center)
   Number of server racks (1): 14 racks
   Power density per server rack (2): 5.0 kW
   Power rating of server racks (3) = (1) x (2): 70.0 kW
   Number of network racks (4): 4 racks
   Power density per network rack (5): 3.5 kW
   Power rating of network racks (6) = (4) x (5): 14.0 kW
   Number of camera racks (7): 2 racks
   Power density per camera rack (8): 3.5 kW
   Power rating of camera racks (9) = (7) x (8): 7.0 kW
   TOTAL POWER RATING FOR IT LOAD (10) = (3) + (6) + (9): 91.0 kW

II. POWER RATING FOR OTHER LOADS
   Power reserved for other loads (CCTV, access control, fire protection) (11): 3.0 kW
   PC workstations for IT staff and other equipment in the NOC room (12): 3.0 kW
   TOTAL POWER RATING FOR OTHER LOADS (13) = (11) + (12): 6.0 kW

III. POWER RATING FOR LOAD SUPPLIED BY UPS
   UPS-supplied load demand (14) = (10) + (13): 97.0 kW
   Power rating of one UPS, 115 kVA / 115 kW (15): 115.0 kW
   Number of UPS units of 115 kVA / 115 kW (16): 1
   Number of redundant UPS units (N+1) (17): 1
   Total number of UPS units (18) = (16) + (17): 2
Table 9: Power sizing of the UPS

| No. | Description | Formula | Unit | Value |
|-----|-------------|---------|------|-------|
| I | POWER RATING FOR UPS | | | |
| - | Power supplied to the IT equipment load by the UPS | (1) | kW | 97.0 |
| - | UPS efficiency | (2) | % | 95 |
| - | UPS losses | (3) = (1) × (1 - (2)) | kW | 4.85 |
| - | Battery-charging allowance | (4) | % | 10 |
| - | Power demand for charging the battery backup | (5) = (1) × (4) | kW | 9.7 |
| | TOTAL POWER RATING FOR UPS | (6) = (1) + (3) + (5) | kW | 111.55 |
| II | Selected UPS rating: 115 kVA / 115 kW | | | |
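To make the UPS sizing in Table 9 reproducible, the sketch below (our illustration, with an assumed catalogue of standard module sizes) applies the same steps: add conversion losses at 95% efficiency, add a 10% battery-charging allowance, then select the smallest standard module that covers the demand plus one redundant unit for N+1.

```python
import math

load_kw = 97.0               # UPS-supplied load from Table 8
efficiency = 0.95            # UPS efficiency assumed in Table 9
battery_charge_frac = 0.10   # charging allowance, as a fraction of load

losses_kw = load_kw * (1 - efficiency)            # 4.85 kW
charge_kw = load_kw * battery_charge_frac         # 9.70 kW
ups_demand_kw = load_kw + losses_kw + charge_kw   # 111.55 kW

# Hypothetical catalogue of standard module ratings (kW); the thesis
# selects the 115 kVA / 115 kW unit.
standard_sizes_kw = [60, 80, 100, 115, 160, 200]
module_kw = next(s for s in standard_sizes_kw if s >= ups_demand_kw)
n_modules = math.ceil(ups_demand_kw / module_kw) + 1  # N + 1 redundancy

print(f"UPS demand: {ups_demand_kw:.2f} kW -> {n_modules} x {module_kw} kW modules")
```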
Figure 20: Proposed topology for the Data Center at USTH, following the Uptime Institute Tier 3 recommendations for M&E infrastructure [19, 30, 31]
b) Thermal rating of the Data Center cooling system
Table 10: Thermal sizing of the CRAC precision cooling system in the IT room

| No. | Description | Formula | Unit | Value |
|-----|-------------|---------|------|-------|
| I | THERMAL RATING FOR THE IT ROOM | (1) = (2) + (3) | kW | 96.0 |
| 1 | IT load demand | (2) | kW | 91.0 |
| 2 | Thermal losses in the IT room | (3) | kW | 5.0 |
| 3 | Number of 35 kW CRAC units (duty) | (4) | unit | 3 |
| 4 | Redundant CRAC units, N+1 (row-based precision cooling) | (5) | unit | 1 |
| | Total number of CRAC units | (6) = (4) + (5) | unit | 4 |
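The CRAC count in Table 10 follows from a ceiling division of the thermal load by the unit capacity, plus one redundant unit. A short sketch of ours, using the values assumed in the table:

```python
import math

it_load_kw = 91.0        # heat dissipated by the IT equipment (Table 10)
room_losses_kw = 5.0     # additional thermal losses in the IT room
crac_capacity_kw = 35.0  # cooling capacity of one row-based CRAC unit

thermal_load_kw = it_load_kw + room_losses_kw           # 96 kW
n_duty = math.ceil(thermal_load_kw / crac_capacity_kw)  # 3 duty units
n_total = n_duty + 1                                    # N+1 -> 4 units

print(f"Thermal load {thermal_load_kw} kW -> {n_duty} duty + 1 redundant CRAC")
```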
Table 11: Power demand calculation for all equipment in the Data Center

| No. | Description | Formula | Unit | Value | Total |
|-----|-------------|---------|------|-------|-------|
| I | TOTAL POWER RATING FOR UPS (from Table 9) | (6) | kW | | 111.55 |
| II | POWER DEMAND FOR HVAC SYSTEM | | | | |
| - | CRAC 35 kW (InRow) units | (7) | kW | 30 | |
| - | Other air-conditioning systems (NOC and power rooms) | (8) | kW | 5 | |
| | Power demand for the HVAC system | (9) = (7) + (8) | kW | | 35 |
| III | LIGHTING AND OTHER LOADS IN THE NOC & POWER ROOMS | | | | |
| - | Lighting systems | (10) | kW | 3 | |
| - | Office equipment | (11) | kW | 3 | |
| | Total | (12) = (10) + (11) | kW | | 6 |
| IV | POWER DEMAND FOR THE DATA CENTER | (13) = (6) + (9) + (12) | kW | | 152.55 |
| - | Power factor | (14) | - | 0.85 | |
| | Power rating in kVA (main electrical distribution board) | (15) = (13) / (14) | kVA | | 179.50 |
| | POWER SUPPLY FROM TRANSFORMER | | kVA | | 180 |
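Finally, the transformer sizing in Table 11 converts the total active-power demand to apparent power using the assumed power factor. As a quick check (our sketch; values from the table):

```python
ups_input_kw = 111.55     # total power rating for the UPS (Table 9)
hvac_kw = 35.0            # CRAC electrical draw + comfort air conditioning
lighting_other_kw = 6.0   # lighting and office equipment
power_factor = 0.85       # assumed at the main distribution board

total_kw = ups_input_kw + hvac_kw + lighting_other_kw  # 152.55 kW
total_kva = total_kw / power_factor                    # ~179.5 kVA

print(f"{total_kw:.2f} kW / {power_factor} = {total_kva:.1f} kVA -> 180 kVA supply")
```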
Figure 21: Layout arrangement of the Data Center at USTH according to the Uptime Institute Tier 3
In Figure 21, we propose a Data Center design for the University of Science and Technology of Hanoi that follows the Tier 3 standard. The design allows for future expansion as the school's enrolment grows to 5,000 students per year; the racks are allocated to the school's departments (Faculty of Space, EN, ICT, ...), prioritizing the storage of student records, faculty records, and other important USTH databases.
IV. CONCLUSIONS
This thesis presented a comprehensive review of previous research works and future trends in data center energy infrastructure. The infrastructure design of a data center is complicated and must follow the related international design standards, covering both the electrical infrastructure and the cold-air supply system used for IT equipment cooling. In practice, the IT equipment (housed inside rack cabinets) requires a 24/7 constant power supply from the UPS system and an environment of 22°C and 50% relative humidity maintained by a precision air-conditioning system. Failing to ensure these recommended operating conditions creates a risk of data loss and security compromise, degrades the performance of the IT equipment, and can thereby become a major source of economic loss for the business. On that basis, we have proposed a new Data Center design for USTH based on the Tier 3 infrastructure standard, which is applied in most major data centers in Vietnam. However, due to time limitations and the lack of an in-depth understanding of all data center components, this report does not cover the other internal structures of a data center.
REFERENCES
[1] Cisco, “Cisco Global Cloud Index: Forecast and Methodology, 2015–2020”,
White Paper, 2016
[2] M. Dayarathna, Y. Wen, R. Fan, “Data Center Energy Consumption Modeling: A
Survey”, IEEE Communications Surveys & Tutorials, Vol. 18, No. 1, p. 732-794,
2016
[3] VNEXPRESS: “Vietnam among least competitive data center markets in Asia
Pacific: report”, https://e.vnexpress.net/news/news/vietnam-among-least-
competitive-data-center-markets-in-asia-pacific-report-3970435.html , August 22,
2019
[4] Gemma A. Brady, Nikil Kapur, Jonathan L. Summers, Harvey M. Thompson, “A
case study and critical assessment in calculating power usage effectiveness for a
data centre”, Energy Conversion and Management, Volume 76, 2013, Pages 155-
161, ISSN 0196-8904,
[5] Cisco Systems, Inc: “What Is a Data Center”,
https://www.cisco.com/c/en/us/solutions/data-center-virtualization/what-is-a-data-
center.html#~distributed-network
[6] Rong, H.; Zhang, H.; Xiao, S.; Li, C.; Hu, C., "Optimizing energy consumption for
data centres", Renew. Sustain. Energy Rev. 2016, 58, 674-691
[7] Oró, E.; Depoorter, V.; Garcia, A.; Salom, J., "Energy efficiency and renewable
energy integration in data centres. Strategies and modelling review", Renew.
Sustain. Energy Rev. 2015, 42, 429-445.
[8] Beaty, D.L., "Internal IT load profile variability", ASHRAE J. 2013, 55, 72-74
[9] Patrick Donovan, “Data Center Projects: Advantages of Using a Reference
Design”, White paper 147. Schneider Electric.
[10] Arizton: "Data Center Market - Global Outlook and Forecast 2018-2023",
https://www.arizton.com/market-reports/global-data-center-market/snapshots
[11] ICT Price: "Top 10 biggest data centres from around the world", http://ict-
price.com/top-10-biggest-data-centres-from-around-the-world/
[12] Tim Day and Nam D. Pham, “Data Centers: Jobs and Opportunities in
Communities Nationwide”, 2017 U.S. Chamber of Commerce Technology
Engagement Center.
[13] Andrea, Mike. 2014. “Data Center Standards: Size and Density,” The Strategic
Directions Group Pty Ltd.
[14] Stansberry, Matt. 2014. “Explaining the Uptime Institute’s Tier Classification
System.” Uptime Institute.
[15] J. Mitchell Jackson, J.G. Koomey, B. Nordman and M. Blazek, “Data center
power requirements: measurements from Silicon Valley”, July 2001: University
of California, Berkeley.
[16] Tom Weber, “Evaluating data center cabinet power densities”, www.align.com
[17] Kevin Brown, Wendy Torell, Victor Avelar, “Choosing the Optimal Data Center
Power Density”, White paper n°156, Schneider Electric.
[18] Xibo Jin, Fa Zhang, Athanasios V. Vasilakos, Zhiyong Liu, “Green Data
Centers: A Survey, Perspectives, and Future Directions”, 2016.
https://arxiv.org/abs/1608.00687v1
[19] S. Chalise et al., "Data center energy systems: Current technology and future
direction," 2015 IEEE Power & Energy Society General Meeting, Denver, CO,
2015, pp. 1-5, doi: 10.1109/PESGM.2015.7286420.
[20] APC: “Why Do I Need Precision Air Conditioning?”, 2001 American Power
Conversion (Schneider Electric).
[21] John Bruschi, Peter Rumsey, Robin Anliker, Larry Chu, and Stuart Gregson,
“FEMP Best Practices Guide for Energy-Efficient Data Center Design”, NREL
report/project number: nrel/br-7a40-47201, 2011.
[22] ASHRAE TC 9.9, Standard 90.4, “Thermal Guidelines for Data Processing”,
https://www.electronics-cooling.com/2019/09/ashrae-technical-committee-9-9-
mission-critical-facilities-data-centers-technology-spaces-and-electronic-
equipment/
[23] ASHRAE, 2015, “Thermal guidelines for Data Processing Environments”, 4th
edition. Atlanta: ASHRAE.
[24] Neil Rasmussen, “Raised Floors vs Hard Floors for Data Center Applications”,
White Paper 19. 2014, Schneider Electric.
[25] The Union of Concerned Scientists: "UCS Satellite Database", published Dec 8,
2005, updated Apr 1, 2020. https://www.ucsusa.org/resources/satellite-database
[26] Copernicus Program, https://en.wikipedia.org/wiki/Copernicus_Programme
[27] Indian Space Science Data Center project,
https://www.re3data.org/repository/r3d100010988
[28] Project N° ARCP2015-04CMY-Tangang, “The Southeast Asia Regional Climate
Downscaling”, 2015
[29] Colocation Vietnam (data accessed on June 2020):
https://www.datacentermap.com/vietnam/
[30] Anixter: "Data Center Infrastructure Resource Guide",
https://www.anixter.com/content/dam/Anixter/Guide/12H0013X00-Data-Center-
Resource-Guide-EN-US.pdf
[31] Victor Avelar, “Guidelines for Specifying Data Center Criticality / Tier Levels”,
White Paper 122, by APC (Schneider Electric).
[32] Kevin Dunlap and Neil Rasmussen, "Choosing Between Room, Row, and
Rack-based Cooling for Data Centers", White Paper 130 by APC (Schneider
Electric).