
Numerical Investigation of Data Center Raised-Floor Plenum



Copyright © 2015 by ASME
ASME 2015 International Mechanical Engineering Congress & Exposition
November 13-19, 2015, Houston, Texas, USA
Abdlmonem H. Beitelmal
Qatar Environment and Energy Research Institute
Qatar Foundation
Doha, Qatar
Data center raised-floor plenum effectiveness is numerically investigated using a computational fluid dynamics (CFD) package to determine the most appropriate data center raised-floor plenum height (size). The current study considers raised-floor plenum heights between 30.5 cm and 152.4 cm (12-60 inch) in the standard 15.2 cm (6-inch) increments while maintaining a constant supply airflow rate. Three factors are considered for the optimum plenum size: the individual airflow rate from each perforated tile, the level of airflow uniformity between different perforated tiles and the top rack inlet air temperature at 2.13 m (7-ft) above the raised floor. The results show that raised-floor plenums with a height of 76.2 cm (30-inch) or higher have the most uniform airflow rates through the perforated tiles and the most uniform temperature across the top of the racks. The findings also indicate that the uniformity of the static pressure inside the plenum increases with plenum height. The current results show that data center plenums with a height range of 76.2-91.4 cm (30-36 inch) are the best option based on the current thermo-fluid considerations; however, thermo-economic analysis should be addressed in future work.
Energy consumption is a critical factor in the design and operation of data centers. The U.S. Environmental Protection Agency (EPA) reported that data centers consume at least 1.5% of the total electricity consumption of the United States of America [1], and this value is expected to increase over time. This increase is motivated in part by
the technological advancement and increased demand
for data centers with more capacity and high
performance compute systems in addition to compaction
or consolidation of hardware to make better use of
physical floor space. Blade servers are great examples of
such compaction, and represent one of the fastest
growing enterprise server markets. The thermal
requirement of Information Technology (IT) equipment
is usually specified by temperature and airflow rate. This
requirement is met by delivering the proper amount of
airflow at the pre-specified temperature to the inlet side
of the IT equipment. Some studies [2-5] have discussed the lack of energy efficiency in state-of-the-art data centers and showed that for every one watt of power used by the IT equipment, an additional 0.5-1.0 watt of power is required by the cooling resources to maintain proper inlet conditions for the same equipment, i.e., an additional 50%-100% is used to operate the cooling resources. The over-provisioning of the cooling systems
design can add millions of dollars to the annual thermal
management cost for large data centers. The capital cost
required to deploy cooling systems at this scale is also
considerable. Hence, efficient cooling of data centers is key to minimizing power consumption, reducing harmful emissions to the environment and providing a prospect for cost savings.
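The 0.5-1.0 watt-per-IT-watt cooling overhead cited above can be turned into a quick back-of-the-envelope cost estimate. The sketch below is illustrative only: the 1 MW IT load and the electricity price are assumed values, not figures from this paper.

```python
def cooling_overhead_kw(it_load_kw, ratio):
    """Cooling power for a given IT load, using the cited range of
    0.5-1.0 W of cooling per 1.0 W of IT power (ratio = 0.5 to 1.0)."""
    return it_load_kw * ratio

def annual_cooling_cost(it_load_kw, ratio, price_per_kwh=0.10):
    # 8760 hours per year; the electricity price is an assumed figure.
    return cooling_overhead_kw(it_load_kw, ratio) * 8760 * price_per_kwh

# A hypothetical 1 MW IT load:
low = annual_cooling_cost(1000, 0.5)   # 50% cooling overhead
high = annual_cooling_cost(1000, 1.0)  # 100% cooling overhead
print(f"Annual cooling cost: ${low:,.0f} - ${high:,.0f}")
```

Even at assumed commodity electricity prices, the cooling overhead alone reaches hundreds of thousands of dollars per year per megawatt, which is consistent with the "millions of dollars" figure for large data centers.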
Typical high density data centers are designed based
on hot/cold aisles layout. The conditioned air is drawn at
specific temperature and humidity into the IT systems by
the integrated fans or blowers. Hot air rejected from the IT equipment is exhausted into the hot aisles and returned to the computer room air handler (CRAH) units, either through the room (known as room return) or through a ceiling plenum (known as ceiling return). Ideally, the hot return air streams should return to the CRAH units without mixing with the
cool air from the supply. However, most data centers
have an open air return design (no ceiling plenum)
which results in mixing the servers air exhaust with
some of the conditioned air streams and reducing the
energy efficiency of the data center. The complicated air
flow distribution in the open environment makes thermal
management of such data centers challenging. The lack of a proper thermal environment inside a data center is a serious threat not only to system reliability and performance, but also to the warranties and service agreements of systems manufacturers.
The design treatment of the heat rejected by the
computational, networking and storage resources into
the data center is specified simply by matching the
installed cooling capacity to the total data center design
power rating. However, this approach does not guarantee that sufficient cooled air reaches every rack, and localized hot spots may be formed. Hence, a thermal requirement specified by a combination of a temperature range and a minimum airflow rate is required for reliable data center operation. This specific requirement can be met by delivering a sufficient amount of cooled air to the inlet of each rack at a temperature within the acceptable temperature range. The most common data centers are designed based on raised-floor architecture, and hence the focus of this study.
Fig. 1 Typical raised-floor data center configuration
Figure 1 depicts a simple data center configuration
with one CRAH pressurizing under-floor plenum
enclosure. The floor plenum should be free from any
significant blockages that can disturb airflow and the
static pressure inside the plenum. Plenum blockages may
be caused by network cable trays, power cables and
unused legacy cables from past installations. Overhead rack cable-tray designs may be a good solution to potential plenum blockages.
A conventional raised-floor data center is typically
equipped with perforated tiles with fixed percent open
area and CRAH units with constant flow rates.
Perforated tiles are critical elements of the raised-floor
data centers as they are the channels by which the cooled
airflow is delivered to the data center IT systems. The
CRAH units pressurize the under-floor plenum and
maintain a positive static pressure differential between
the plenum pressure and above the raised-floor pressure.
The pressure differential is the driving force for the
cooled airflow across the perforated tiles with the
volumetric airflow being proportional to the percent
open area. Most CRAH unit manufacturers attempt to thermally manage the data center environment based on the CRAH unit return air temperature, although there has been a move to use the supply air temperature instead.
In either case, the set point temperature of each CRAH
unit is selected manually and is based on a qualitative
assessment of the data center environmental conditions
which makes provisioning data center cooling capacity
even more challenging.
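The pressure-differential-driven tile flow described above is commonly approximated with an orifice-type relation, Q = Cd · A_open · sqrt(2Δp/ρ). The sketch below illustrates this relation; the discharge coefficient and the plenum pressure differential are assumed values, not parameters from this paper's model.

```python
import math

def tile_flow_m3s(dp_pa, open_fraction, tile_area_m2=0.3048 ** 2 * 4,
                  cd=0.61, rho=1.2):
    """Volumetric flow through a perforated tile, modeled as an orifice:
    Q = Cd * A_open * sqrt(2 * dp / rho).
    dp_pa:        static pressure differential across the tile (Pa)
    open_fraction: percent open area as a fraction (e.g. 0.47 for 47%)
    tile_area_m2: standard 2 ft x 2 ft tile by default
    cd:           assumed discharge coefficient
    rho:          air density (kg/m^3)"""
    a_open = tile_area_m2 * open_fraction
    return cd * a_open * math.sqrt(2.0 * dp_pa / rho)

# A 47%-open tile under an assumed 12 Pa plenum pressure differential:
q = tile_flow_m3s(12.0, 0.47)
print(f"{q:.3f} m^3/s ({q * 2118.88:.0f} CFM)")
```

The square-root dependence on Δp explains why a non-uniform plenum static pressure translates directly into non-uniform tile flows, and the linear dependence on A_open matches the paper's statement that volumetric airflow is proportional to the percent open area.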
CRAH units and perforated tiles are the two knobs available to manage and fine-tune the thermal environment inside a data center and eliminate hot spots. Data center thermal management is achieved by selecting the proper CRAH units with individualized set point temperatures and individualized airflow rates. Fine-tuning the localized airflow is achieved by selecting the proper number, size and layout of the perforated tiles.
Selecting the number and size of the CRAH units is typically performed quantitatively by matching them to the total heat load generated inside the data center, while the layout of the CRAH units and the number and layout of the perforated tiles are determined qualitatively based on the IT equipment heat load distribution. Caution should be taken since the IT system
utilization and consequently the heat load is
continuously fluctuating and as a result the cooling
capacity requirement is also fluctuating.
Off-the-shelf data center perforated tiles are available with a fixed percent open area, typically ranging between 25% and 56%, although perforated tiles with a higher percent open area are also available in the market. Facility engineers and operators are required
to physically move, relocate and replace perforated tiles
for local airflow adjustments and to match new cooling
requirements or prevent localized hot spots that may
result from insufficient cooled air circulation. The
relocation of perforated tiles is not a recommended long
term strategy since it introduces changes to the plenum
static pressure distribution, to the available airflow from
each individual perforated tile and may even disrupt the
overall data center thermal environment in addition to
being impractical and labor intensive. This issue becomes particularly pronounced when the data center plenum height is less than 61 cm (2 ft) [6, 7].
Perforated tiles with automated dampers have been
introduced as a solution to the challenges that data
centers operators face in tuning the localized airflow to
match the continuous change in the heat load
distribution. These controllable perforated tiles tune the
airflow by adjusting the percent-open area between 0%
and 50% [6]. The adjustable dampers provide flexibility
to an existing standard perforated tile [7]. Monitoring
and controlling these channels will provide local thermal
management that could lead to energy savings [8, 9].
The increased fluctuation in the plenum static pressure
negatively affects the airflow uniformity through the
perforated tiles. This results in over provisioning of
cooling capacity to compensate for the airflow loss in
the affected areas and reduces the overall data center
energy efficiency. Published reports and studies are
available that highlight energy efficiency challenges and
opportunities in data centers [10-12]. New energy efficiency metrics have been proposed to standardize data center energy performance assessment based on first principles and a thermodynamic approach [13].
This paper presents the results of a numerical study
on the effect of a plenum height/size on the perforated
tile individual airflow rate and rack inlet temperature
distribution at 7-ft above the raised floor. This study
utilizes FloVent software, a commercially available
computational fluid dynamics (CFD) package. The
method takes into account the data center size, the
plenum dimensions, the CRAH set point temperature,
location and air flow rate, the perforated tiles number,
size and locations. The results are presented in terms of
the temperature, static pressure and volumetric flow rate
distribution. This paper provides insight into optimum
data center plenum sizing underscoring the power of the
numerical design approach. The importance of predicting the performance of a given data center prior to construction and of facilitating changes to the layout of the IT and cooling resources is evident.
The partial differential equations describing the complex interactions between the dynamics of fluid flow and heat transfer are highly coupled and non-linear, making them difficult to solve analytically. Resolving
these equations numerically requires the use of various
approximation techniques such as finite volume, finite
element and finite difference. The partial differential
equations are initially discretized and transformed into
algebraic form to be solved iteratively until a pre-
specified convergence level is reached. The convergence
level is determined based on the variables residual
monitored relative to the problem scale at hand.
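The discretize-then-iterate-to-convergence procedure described above can be illustrated on a toy problem: 1-D steady heat conduction solved by Jacobi iteration, stopping when the residual, scaled by the temperature range of the problem, falls below a preset tolerance. This is a minimal sketch of the idea, not the scheme used by the CFD package in this study.

```python
def solve_1d_conduction(n=50, t_left=20.0, t_right=36.0, tol=1e-6,
                        max_iter=100000):
    """Jacobi iteration for d2T/dx2 = 0 on n interior nodes.
    Iterates until the maximum nodal change (the residual), scaled by
    the temperature range of the problem, drops below tol -- the
    'pre-specified convergence level'."""
    t = [t_left] + [0.0] * n + [t_right]
    scale = abs(t_right - t_left)  # problem scale for the residual check
    for _ in range(max_iter):
        new = t[:]
        residual = 0.0
        for i in range(1, n + 1):
            new[i] = 0.5 * (t[i - 1] + t[i + 1])  # discretized Laplace eq.
            residual = max(residual, abs(new[i] - t[i]))
        t = new
        if residual / scale < tol:
            break
    return t

t = solve_1d_conduction()
# The converged solution approaches a linear profile between the
# two boundary temperatures.
print(t[0], round(t[len(t) // 2], 3), t[-1])
```

A production CFD solver uses far more sophisticated discretizations and solvers, but the structure is the same: algebraic update sweeps repeated until the monitored residuals fall below the chosen convergence level.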
The governing equations describing the airflow are
continuity, momentum and energy equations.
      (1)
Equation (1) is the continuity equation also known
as the conservation of mass equation. The variables
   are the density, velocity and time, respectively.
This equation states that the mass entering a control
volume is equal to the mass leaving the control volume
plus the change of mass within the same control volume.
          (2)
Equation (2) is the conservation of Momentum. The
variables (  ) are pressure, gravity and the viscous
shear tensor, respectively. This equation is derived from
      the net
force is equal to mass times acceleration. The forces
exerted on the fluid volume in this case are pressure,
viscous and body forces. This equation is a vector
equation and therefore has a magnitude and a direction.
 - (3)
Equation (3) is the conservation of Energy. The
variable ( ) is the total internal energy and the thermal
conductivity, respectively. This equation is derived from
the first law of thermodynamics and states that the net
exchange of energy across the control surfaces is equal
to the change between the initial and final state of the
The numerical model of the data center requires
information on the actual and maximum heat load of the
IT equipment and the maximum available air flow and
capacity of the CRAH units. Caution should be exercised to make sure that each CRAH unit's capacity in the numerical model does not exceed its actual physical (rated) capacity at any given time. This can be done by coding the CRAH unit's maximum allowable capacity into the numerical model and verifying that the limit has not been exceeded by checking the final results of the numerical model. The model, once converged, solves for the temperature profile and airflow distribution among other variables, and enables optimization of the installed cooling capacity in the data center.
The principal features of the mathematical model
presented in this paper are defined as a steady-state
three-dimensional model with negligible radiation effect.
The motion of the fluid layers within the space of the
data center is affected by fluctuating currents caused by
mixing, entrainment and circulation, and is therefore best represented using the k-epsilon (k-ε) turbulence model, which calculates the turbulent kinetic energy and its dissipation rate. The k-ε model solves for two variables: k, the turbulent kinetic energy, and ε, the rate of dissipation of kinetic energy. The k-epsilon model has
good convergence rate and relatively low memory
requirements. The numerical model is built taking into
account all of the data center details that affect air flow
and temperature distribution, including the plenum size, the CRAH unit location and airflow rate, the perforated tiles' number, size and locations, and the IT equipment heat load. The data center has a total area of 29 m² (312 ft²)
with one row of ten industry standard server racks and
one perforated tile strategically located for each server
rack to provide the necessary cooling capacity. The
server racks are modeled as enclosures with open ends
based on the standard industry racks dimensions. The
model defines the servers as heat sources with their
corresponding airflow rates and each perforated tile as a
planar resistance to reflect the percent open area of 47%.
The selection of the 47% open perforated tiles is based on experience and what is typically available as an off-the-shelf product. The maximum load of each rack
selected for this study is 10kW with a maximum heat
load of 100 kW for the overall data center. One standard
CRAH unit supplies the necessary cooling capacity for
the overall data center and is modeled to supply the
cooled air at a given temperature. In this model, the CRAH unit supply temperature is set at 20 °C.
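The model parameters above (ten 10 kW racks for a 100 kW total load, one CRAH unit supplying air at 20 °C) imply a minimum airflow needed to carry the heat away, which the energy balance Q = ρ·V̇·cp·ΔT gives as a quick sanity check. The allowed air temperature rise below is an assumed design value, not one stated in the paper.

```python
def required_airflow_m3s(heat_load_w, delta_t_k, rho=1.2, cp=1006.0):
    """Minimum volumetric airflow to absorb heat_load_w (W) with a
    delta_t_k (K) air temperature rise: Q = rho * V_dot * cp * dT.
    rho is air density (kg/m^3), cp is specific heat (J/kg-K)."""
    return heat_load_w / (rho * cp * delta_t_k)

# 100 kW total load, assuming an 11 K rise (20 C supply to ~31 C exhaust):
v = required_airflow_m3s(100_000, 11.0)
print(f"{v:.2f} m^3/s  ({v * 2118.88:.0f} CFM)")
```

With these assumptions the result is roughly 7.5 m³/s (about 16,000 CFM), of the same order as the 17,000 CFM maximum CRAH flow rate cited later in the paper.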
This study presents a numerical analysis to predict
the impact of the changes in the plenum size on the
plenum static pressure, the airflow through each
perforated tile and temperature distribution above the
raised floor. The objective of the analysis is to show that the availability of adequate cooling resources to match the heat load is not sufficient by itself to prevent hot spots, which may lead to server failures if and when the non-uniformity of the airflow distribution reaches a critical level
[4]. The numerical model is built based on the assumption that there is no obstruction to the airflow inside the plenum and that the plenum walls are adiabatic, i.e., the temperature at the exit of the perforated tiles is equal to the air temperature supplied by the CRAH unit.
Figure 2 shows sample temperature distribution plots for various plenum sizes. The temperature shown on the plots is the spatial temperature distribution at 213.4 cm (7-ft) above the raised floor. This location is selected because it is typically the worst-case scenario for the inlet temperatures. The plots show hot spots forming for plenum sizes of 61 cm (24-inch) and smaller. A hot spot is defined as an inlet air temperature at or above 30 °C. The inlet
air temperature distribution becomes more uniform as
the plenum size increases and the uniformity in
temperature becomes more pronounced for plenum sizes
91.4 cm (36-inch) and larger.
Fig. 2: Sample temperature plots at 213.4 cm (7-ft) above the raised floor for plenum sizes of 30.5 cm (12"), 45.7 cm (18"), 61.0 cm (24"), 76.2 cm (30"), 91.4 cm (36") and 152.4 cm (60").
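Given the 30 °C hot-spot definition above, flagging hot spots in a field of simulated rack-inlet temperatures reduces to a simple threshold scan. The temperature values in the sketch below are made up for illustration; they are not results from the paper's model.

```python
HOT_SPOT_C = 30.0  # paper's definition: inlet air at or above 30 C

def find_hot_spots(inlet_temps_c, threshold=HOT_SPOT_C):
    """Return (rack_index, temperature) for every rack whose top-of-rack
    inlet temperature meets or exceeds the hot-spot threshold."""
    return [(i, t) for i, t in enumerate(inlet_temps_c) if t >= threshold]

# Hypothetical top-of-rack inlet temperatures for ten racks:
temps = [24.1, 25.0, 27.2, 31.5, 33.0, 32.8, 30.0, 26.4, 24.9, 23.7]
print(find_hot_spots(temps))  # racks 3-6 exceed the threshold
```

In practice the same scan would run over the CFD temperature field at the 213.4 cm (7-ft) plane rather than a hand-written list.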
The volumetric airflow rate from each perforated tile is plotted in Fig. 3. Although the current results are based on the model at hand and on the given assumptions, they are still valuable in showing how the distribution of volumetric airflow rates among the perforated tiles varies as the plenum size changes. There exists a plenum size range where the volumetric airflow is more uniform, namely when the plenum size is between 76.2 and 91.4 cm (30-36 inch).
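One way to quantify the tile-flow uniformity discussed above is the coefficient of variation (standard deviation over mean) of the per-tile flow rates; the max-to-min spread used later in the paper is another. The flow values below are illustrative only, not the model's results.

```python
import statistics

def flow_uniformity(flows):
    """Return (coefficient of variation, max-min spread as a fraction
    of the maximum) for a list of per-tile volumetric flow rates."""
    cv = statistics.pstdev(flows) / statistics.fmean(flows)
    spread = (max(flows) - min(flows)) / max(flows)
    return cv, spread

# Hypothetical per-tile flows: peaked in the middle for a shallow plenum,
# nearly flat for a deep one.
shallow = [2.0, 3.5, 5.0, 5.2, 5.0, 3.4, 2.1]   # e.g. a 30.5 cm plenum
deep = [4.0, 4.1, 4.2, 4.2, 4.1, 4.0, 4.1]       # e.g. a 91.4 cm plenum
for name, flows in (("shallow", shallow), ("deep", deep)):
    cv, spread = flow_uniformity(flows)
    print(f"{name}: CV={cv:.2f}, spread={spread:.0%}")
```

Either metric collapses a whole tile row into one number, which makes it easy to plot uniformity against plenum height and locate the optimum range.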
Figure 4 displays the inlet temperature taken at
213.4 cm (7-ft) above the raised floor and at the inlet of
the top server in the rack. The plot shows that the inlet
temperature to some of the servers for 30.5 cm (12-inch)
and 45.7 cm (18-inch) plenum sizes approaches the
maximum operating threshold temperature set by the
manufacturer. Each system has a built-in thermal threshold that may vary from one type of system to another but is typically set to 36 °C, at which point a graceful system shutdown starts. Data center owners and operators work hard to avoid this issue and prevent inlet temperatures from reaching the critical thermal threshold range; it therefore makes very good sense to evaluate the data center environment numerically before the actual systems are in place. It is very difficult to mitigate this issue once the data center is already in operation, when the only solution is to over-provision the cooling resources and manually redistribute/reselect the perforated tiles around the affected systems. The current model results show temperatures slightly above 33 °C, approaching the critical thermal threshold level.
Fig. 3: Individual perforated tiles volumetric flow rate
for various plenum sizes
Fig. 4: Inlet temperature at 213.4 cm (7-ft) above the raised floor
The maximum room and inlet rack temperatures at 213.4
cm (7-ft) above the raised-floor as a function of the
plenum height are shown in Fig. 5. The plot shows that both temperatures follow the same trend, which indicates a linear relationship between the inlet and the exhaust temperatures. The plot also shows that plenum heights of 61 cm (24-inch) or smaller promote the formation of hot spots, which may lead to over-provisioning and reduce the energy efficiency of the data center.
Fig. 5: Maximum room temperature and Inlet
temperature at 213.4 cm (7-ft) above the raised floor
versus plenum height
Tables 1 and 2 display sample numerical results obtained from the CFD model. The results display the perforated tile airflow rates and the rack inlet temperatures at 213.4 cm (7-ft) above the raised floor as the raised-floor plenum size is varied.
Table 1: Airflow rate and temperature values for plenum heights of 45.7 cm (18"), 61.0 cm (24"), 76.2 cm (30") and 91.4 cm (36") at 213.4 cm (7-ft) above the raised floor
Table 2: Airflow rate and temperature values for plenum heights of 106.7 cm (42"), 121.9 cm (48"), 137.2 cm (54") and 152.4 cm (60") at 213.4 cm (7-ft) above the raised floor
It is clear from the tables that the bulk of the airflow leaves the plenum through the middle perforated tiles for plenum heights smaller than 76.2 cm (30-inch) and tapers down towards both ends of the perforated tile row. The volumetric flow rate from each of the two end perforated tiles measured less than 50% of the average airflow rate through any of the other individual perforated tiles. The variation between the perforated tiles' volumetric airflow rates ranges between 3-5% at the low end and 12-13% at the high end of the total maximum volumetric flow rate of 481.4 m³/min (17,000 CFM) specified for the CRAH unit by the manufacturer. The results also show that the maximum variation between any two perforated tiles is as high as 75% for the 45.7 cm (18-inch) plenum size. This variation between the low-end and the high-end volumetric flow rates decreases to 20% for the 91.4 cm (36-inch) plenum size, to about 4% for the 106.7 cm (42-inch) plenum size and to about 2.3% for the 121.9 cm (48-inch) plenum size. Interestingly, this variation goes slightly up to about 3% for the 137.2 cm (54-inch) plenum size and to close to 5% for the 152.4 cm (60-inch) plenum size. The temperature distribution at 213.4 cm (7-ft) above the raised floor moves towards more uniformity for plenum heights of 76.2 cm (30-inch) and larger. The results indicate that an optimum raised-floor plenum size range exists for the constant airflow inlet condition specified in this study.
A numerical model of a hypothetical high-density data center is built to investigate the effect of plenum height on the level of uniformity of the static pressure within the plenum just below the perforated tiles. The static pressure just below the perforated tiles directly affects the perforated tiles' airflow distribution and the inlet airflow temperature to the server racks. The steady-state model is created and the effect of the plenum height/size on the airflow distribution and inlet server temperature is analyzed. The steady-state model shows that the rack inlet temperature increases to an unacceptable level when the plenum height is 61.0 cm (24-inch) or smaller.
This issue can be resolved by proper redistribution/addition of the perforated tiles around the affected servers. However, this approach may affect other areas negatively and disrupt the overall airflow distribution and cooling capacity of the data center.
Numerical modeling and pre-analysis of the data center environment is the best approach to predict and resolve such issues before they happen. Furthermore, economic
assessment should be conducted when selecting the
plenum size for a new data center as larger plenums
require larger capital investment; the cost of the plenum
is not included in the current analysis. The key points of this study are summarized as follows:
The uniformity of the static pressure inside the plenum increases with plenum height. This uniformity becomes more pronounced as the plenum height increases beyond 61 cm (24 inch).
Data center plenums with a height range of 76.2-91.4 cm (30-36 inch) are the best option based on the current thermo-fluid considerations and for the specified geometry; however, thermo-economic analysis should be addressed in future work.
High-density data center architecture and design approaches must be augmented with detailed computational thermo-fluids analyses. Providing sufficient cooling resources based on the total compute heat load in the data center is not sufficient by itself, as localized high power density will lead to localized hot spots.
Modeling the thermo-fluids behavior in the data center helps assess uptime, create policies and procedures to avoid data center failure, and enable reliable data center operation.
Modifying the number or layout of the perforated tiles, or adding or removing plenum obstructions such as wires and cables, will affect the static pressure uniformity inside the plenum and may disrupt the airstream paths; this behavior will be more pronounced for smaller plenum heights. It is highly recommended that all power and network cables and wires used inside the data center be routed through rack-top cable trays.
One way of maintaining control over the static
pressure uniformity inside a plenum is to create dividers
(compartmentalization) and allow each CRAH unit to
feed air into a specific compartment within the plenum.
This approach provides granular control of the static pressure uniformity inside the plenum and allows for tighter control of the volumetric airflow through the perforated tiles. This method may take away the CRAH units' redundancy unless at least one side divider is designed to be removable, either automatically or manually, when the adjacent plenum section's static pressure goes below a certain threshold value.
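The removable-divider safeguard described above amounts to a simple threshold rule: monitor each compartment's static pressure and open a divider to a neighbor when the pressure collapses, restoring a measure of CRAH redundancy. The sketch below is a hypothetical control rule; the pressure values and threshold are assumed, not taken from this paper.

```python
def divider_actions(pressures_pa, threshold_pa=10.0):
    """Given static pressures for a row of adjacent plenum compartments,
    return the indices of the dividers that should open. Divider i
    separates compartments i and i+1; it opens when either neighbor
    drops below threshold_pa (e.g. because that compartment's CRAH
    unit has failed)."""
    open_dividers = []
    for i in range(len(pressures_pa) - 1):
        if pressures_pa[i] < threshold_pa or pressures_pa[i + 1] < threshold_pa:
            open_dividers.append(i)
    return open_dividers

# Compartment 2's CRAH has failed and its pressure has collapsed:
print(divider_actions([12.5, 12.1, 3.0, 11.8]))  # open dividers 1 and 2
```

A real implementation would add hysteresis around the threshold and close the dividers again once the failed CRAH unit is restored, so the compartments regain their individual pressure control.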
Future work will include more detailed evaluation and analysis of an actual-size data center. The current study should be extended to investigate the effect of multiple CRAH units and the variation of their zones of influence as the plenum size changes. The effect of direct versus indirect chilled-air delivery into the plenum on the static pressure, and consequently on the perforated tile airflow rates, should also be investigated.
[1] U.S. Environmental Protection Agency, "Report to Congress on Server and Data Center Energy Efficiency," Public Law 109-431, ENERGY STAR Program, 2007.
[2] "… Efficiency Optimization and Greening: Case Study …," Energy and Buildings Journal, Elsevier, 2014.
[3] Patel, C.D., Bash, C.E., Sharma, R., Beitelmal, A.H., "… data center enabled by advanced flexible cooling …," Semiconductor Thermal Measurement and Management Symposium, IEEE, 2005.
[4] "… thermo-fluids …," J. of Distributed and Parallel Databases, special issue on high-density data centers, Vol. 21, No. 2-3, pp. 227-238.
[5] Beitelmal, A.H., Patel, C.D., "A Steady-State Model for the Design and Optimization of a Centralized Cooling System," Int. J. of Energy Research, Vol. 34, No. 14, 2010.
[6] "…," United States Patent No. 7,347,058, 2008.
[7] Beitelmal, A.H., McReynolds, A.A., Bash, C.E., "… Manipulating …," United States Patent No. 8,639,651, 2014.
[8] "… controller for …," United States Patent No. 7,902,966, 2011.
[9] Beitelmal, A.H., Wang, Z., Felix, C., Bash, C., "…," Proceedings of the InterPACK2009 Conference, San Francisco, California, July 2009.
[10] Kaplan, J., Forrest, W., Kindler, N., "Revolutionizing Data Center Efficiency," McKinsey and Company data center efficiency report, 2008.
[11] Ebbers, M., Galea, A., Schaefer, M., Khiem, M., "…," IBM, 2008.
[12] "… distribution system's airflow performance for cooling energy savings in high-density data centers …"
[13] "…," Energy and Buildings Journal, Elsevier, 2014.
... A change in the plenum length only affects the frictional resistance, and a change in the plenum height modifies both frictional resistance and pressure [264]. Therefore, increasing the plenum height generally homogenizes the tile flow rate and rack inlet temperature distribution [82], [96], [265]. However, the uniformity of air flow may be undermined if the tile porosity is too high (over 50%) [266]. ...
... The underfloor plenum can be further divided into smaller compartments by installing partitions, such that each CRAH only feeds air to a specific compartment [265]. This approach is more flexible in terms of the local air provision and under floor pressure control. ...
Recently, the rapid growth in both data center power density and scale poses great challenges to the cooling system. On one hand, data center operators try to over provision cooling resources for fear of server failures induced by accumulated heat. On the other hand, they also want to reduce the energy cost as the cooling system takes up a significant portion of overall energy consumption. Among all available cooling solutions, air cooling dominates the data center industry due to its simpleness. However, its cooling efficiency has been questioned due to the low air density and specific heat. In this paper, we provide an overview for current endeavours to improve the air cooling efficiency. We group existing researches according to the locations where they can be applied from the perspective of air flow cycle. We also discuss the thermal measurement issues. We hope this paper can help researchers and engineers to design and control their data center air cooling systems.
... As a factor of improving and optimizing the system, we decided to adopt 70 cm of height, with an increase of 75% in the floor elevation, taking into account the statement by IBM [49] that "the highest raised floor allows for a better balance of air conditioning in space". Based on a study by Beitelmal [50], this height value brought us closer to the 76.2-91.4 cm height range, which is considered the best option for raised data center floors. ...
Full-text available
Data centers are widely recognized as demanding many energy resources. The greater the computational demand, the greater the use of resources operating together; consequently, the greater the heat, the greater the need for cooling power, and the greater the energy consumption. In this context, this article reports an industrial experience of achieving energy efficiency in a data center through a new layout proposal, reuse of previously existing resources, and air conditioning. Our primary measures were cold-aisle containment, an increase in the raised floor's height, and better direction of the cold airflow toward the server intakes. We reused the three legacy refrigeration machines from the old data center, and no new ones were purchased. In addition to 346 existing devices, 80 new pieces of equipment (servers and network assets) were added as a load to be cooled. Even with the increase in the amount of equipment, the implementations contributed to energy efficiency compared with the old data center, reducing the temperature by approximately 41% and, consequently, saving energy.
... In order to diminish the heterogeneity of airflow and inlet temperature, some new techniques have been widely used in data centers [49]. Two solutions are used to improve the distribution of airflow and temperature: the closed cold aisle and the closed hot aisle. ...
The energy consumption of fast-growing data centers is drawing attention not only from energy organizations and institutions all over the world, but also from charity groups such as Greenpeace, and research shows that the power consumption of air conditioning makes up a large proportion of the electricity cost in data centers. Therefore, more detailed investigations of air conditioning power consumption are warranted. Three types of airflow distributions with different aisle layouts (the open aisle, the closed cold aisle, and the closed hot aisle) were investigated with Computational Fluid Dynamics (CFD) methods in a typical data center of four rows of racks in this study. To evaluate the thermal and bypass phenomena, the temperature increase index (β) and the energy utilization index (ηr) were used. The simulations show that both the closed cold aisle and the closed hot aisle exhibit better trends in the β and ηr indices than the open aisle. Especially at high airflow rates, the β index decreases and the ηr index increases considerably. Moreover, the results show that the closed aisles (both closed cold aisle and closed hot aisle) can not only significantly improve the airflow distribution, but also reduce the mixing of cold and hot flows, and therefore improve energy efficiency. In addition, the design of the closed aisles can meet the increasing density of installations, and our simulation method can easily evaluate the cooling capacity.
... Zhang et al. [19] used a combination of measurement, CFD, and different optimization approaches to simulate and optimize the operational conditions of a small-scale data center. The CFD simulations by Beitelmal [20] indicated that the height of the air plenum affects the uniformity of airflow across tiles and that taller plenums result in more uniform rack temperatures. Song et al. proposed an optimized control strategy with an efficient CFD approach for adjusting the outlet temperatures of CRACs to improve cooling efficiency while preventing equipment overheating [21]. ...
The energy consumption for cooling electronic equipment in data centers using central systems is significant and will continue to rise. The present study is motivated by the need to determine optimization strategies to improve the thermal efficiency of data centers using a simulation-based approach. Here, simulation is used to model and optimize a proposed research data center for use as an environment to test equipment and investigate best practices and strategies such as containment and hybrid cooling. The optimization technique used in this study finds the optimal operating conditions and containment strategies of the data center while still meeting certain thermal conformance constraints. More specifically, optimum cooling unit supply flow rates and supply temperature setpoints are sought under different containment configurations, including both hot-aisle and cold-aisle containment strategies in both full and partial setups. The optimization process developed in the present study can serve as a generalized procedure for the optimization of any new air-cooled data center.
... Early simulations of data centers focused on modeling a raised floor and perforated tiles to accurately predict the boundary conditions [46][47][48][49][50][51][52]. We believe that heterogeneous airflow in the cold aisle (hot air recirculation and cold air bypass) depends not only on the airflow distribution from the tiles; the airflow outlet angle also influences the distribution of cold air. ...
Data centers have become ubiquitous in the last few years in an attempt to keep pace with the processing and storage needs of the Internet and cloud computing. The steady growth in the heat densities of IT servers leads to a rise in the energy needed to cool them, which constitutes approximately 40% of the power consumed by data centers. However, many data centers feature redundant air conditioning systems that contribute to inefficient air distribution, which significantly increases energy consumption. This remains an insufficiently explored problem. In this paper, a typical small data center with a raised-floor tile air supply system is used. We use Fluent, a computational fluid dynamics (CFD) package, to simulate thermal distribution and airflow, and investigate the optimal conditions of air distribution to save energy. The effects of the airflow outlet angle along the tile, the cooling temperature, and the airflow rate on the beta index as well as the energy utilization index are discussed, and the optimal conditions are obtained. The reasonable airflow distribution achieved using 3D CFD calculations and the parameter settings provided in this paper can help reduce the energy consumption of data centers by improving the efficiency of the air conditioning.
Airflow is crucial for air-cooled data centers. Its flow path and distribution influence the thermal environment and energy efficiency of raised-floor data centers. This paper provides a review of the topic, including airflow factors, numerical studies, airflow performance metrics, and thermal optimization. Based on the multi-scale characteristics of the data center, the thermal environment is categorized into room-level, rack-level, and server-level environments. For the room-level thermal environment, the main factors include layout, raised floor plenum and ceiling height, and perforated tiles. For the rack level, the effects of the porosity ratio of the rack door, airflow rate/temperature, server population, server arrangement, and power density are considered. For the server level, airflow rate and server fan speed are investigated. Moreover, numerical studies have been widely employed to understand the thermal environment of data centers. The selection of the simulation tool and the methods for simplifying and validating the models are key to correctly predicting the data center's thermal behavior. In addition, airflow performance metrics and multi-scale thermal optimization are summarized and discussed. This review aims to emphasize the importance of airflow in data centers and thus serve as a guiding reference for airflow design and energy efficiency. Some recommended topics for future research are also provided.
In recent years, with the rapid development of data centers, their huge energy consumption has become increasingly prominent. In particular, with the trend toward large-scale data centers, the load rate of racks is relatively low, which can easily lead to an oversupply of cooling capacity and thus to energy waste. Hence, how to reasonably arrange servers so as to effectively utilize cooling capacity and improve energy efficiency has become an urgent problem. In this paper, a field test was conducted to investigate the thermal environment. The test results indicated that the actual load rate of the data room is between 19.34% and 42.55%, and SHI values for the test rows were between 0.128 and 0.594 owing to the oversupply of CRACs, a result of serious mixing of cold air and hot air. To reduce energy consumption, the cooling capacity of the CRACs was matched to the IT equipment heat dissipation using a simulation tool, by changing the load rate, the servers' arrangement, and the air distribution strategy under partial load. The results showed that servers uniformly distributed across all racks presented inlet air temperatures that fell within the acceptable range specified by ASHRAE. The results also suggest that, for low load rates, servers should be mounted uniformly from the middle of the rack and then the upper part. When the load rate exceeds 60%, blanking plates need to be installed.
A system and method for manipulating environmental conditions in an infrastructure containing a fluid moving device are disclosed that include identifying correlations between operational settings of the fluid moving device and environmental conditions resulting from changes to the operational settings. In addition, an environmental condition detected at a location proximate to or within the plenum following supply of fluid into the plenum by the fluid moving device is received and errors between the received environmental condition and a reference environmental condition are identified. Operational settings for the fluid moving device to achieve the reference environmental condition are determined based upon the identified correlations and errors.
Local airflow distribution in data center environments has historically been accomplished through ventilation tiles distributed over a raised floor air distribution plenum. The tiles are initially configured upon the commissioning of the facility and, as IT equipment configuration changes with time, the tiles are adjusted accordingly. However, tile adjustment is a manual process that is error-prone and often non-intuitive. Tile flow rates are a strong function of under floor plenum pressure distribution which is subject to change as tile layouts are reconfigured. Thermal models are often developed to assist with layout changes, but these models can be time-consuming to generate and require skilled users to achieve accurate results. This paper presents an adaptive vent tile (AVT) for use in raised floor data centers that can adapt to the needs of nearby IT equipment. We present a multi-input-multi-output (MIMO) AVT controller that automatically and dynamically adjusts a multiplicity of AVT openings in coordination such that thermal management requirements are met with minimum use of airflow. We describe the development of dynamic models and algorithm design of the MIMO controller. The controller was evaluated with a set of AVT units in a production data center environment. Results show that the controller can optimize local airflow distribution, provide fine-grained rack intake temperature control and respond to disturbances in a manner that is not achievable through static distribution of tiles.
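The paper's MIMO controller itself is not reproduced here, but the core feedback idea (adjusting a tile opening from a rack-inlet temperature error) can be illustrated with a toy single-tile proportional law. All names, the gain value, and the temperatures below are illustrative assumptions, not the authors' algorithm:

```python
def avt_step(opening, t_inlet, t_setpoint, gain=0.05):
    """One control step for a single adaptive vent tile (toy proportional law):
    open the tile further when the rack inlet runs hot, close it when cool.
    `opening` is the fraction open, clamped to [0, 1]."""
    opening += gain * (t_inlet - t_setpoint)
    return min(1.0, max(0.0, opening))

# Rack inlet 3 degrees above setpoint: tile opens from 50% toward 65%.
print(round(avt_step(0.5, t_inlet=28.0, t_setpoint=25.0), 3))  # 0.65
```

A real controller would coordinate many tiles at once and account for the shared plenum pressure, which is precisely why the paper uses a multi-input-multi-output formulation rather than independent single-tile loops.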
Consolidation and dense aggregation of slim compute, storage and networking hardware has resulted in high power density data centers. The high power density resulting from current and future generations of servers necessitates detailed thermo-fluids analysis to provision the cooling resources in a given data center for reliable operation. The analysis must also predict the impact on the thermo-fluid distribution due to changes in hardware configuration and building infrastructure such as a sudden failure in data center cooling resources. The objective of the analysis is to assure availability of adequate cooling resources to match the heat load, which is typically non-uniformly distributed and characterized by high-localized power density. This study presents an analysis of an example modern data center with a view of the magnitude of temperature variation and impact of a failure. Initially, static provisioning for a given distribution of heat loads and cooling resources is achieved to produce a reference state. A perturbation in reference state is introduced to simulate a very plausible scenario—failure of a computer room air conditioning (CRAC) unit. The transient model shows the “redlining” of inlet temperature of systems in the area that is most influenced by the failed CRAC. In this example high-density data center, the time to reach unacceptable inlet temperature is less than 80 seconds based on an example temperature set point limit of 40C (most of today's servers would require an inlet temperature below 35C to operate). An effective approach to resolve this issue, if there is adequate capacity, is to migrate the compute workload to other available systems within the data center to reduce the inlet temperature to the servers to an acceptable level.
The management of energy as a key resource will be a requirement from an economic and sustainability standpoint for the future computing utility. In addition to billions of computing devices, the miniaturization of semiconductor technologies will push the current power density of the microprocessor core over 200 W/cm², resulting in the use of active heat removal techniques. In order to facilitate thermal management of such high power density sources, and to enable energy efficiency, measured application of active cooling resources will be required. State of the art application of heat removal technologies, applied based on maximum heat load and managed with a lack of knowledge of the overall system requirements, will not suffice. Balanced use of energy to actively remove heat from the source, together with management of heat dissipated from the source, will be necessary to reduce the total cost of ownership of information technology equipment and services. Indeed, based on the current trajectory in chip design, future chips will have the flexibility to scale power, albeit at some performance penalty. This variability in heat generation must be utilized to enable balanced chip performance based on the most efficient provisioning of cooling resources. To enable "right" provisioning of cooling resources, flexibility must be devised at all levels of the heat removal stack: chip, system and data center. The ability to change the temperature and coolant mass flow is the required high level abstraction in this heat removal stack. With these underlying flexibilities in heat generation and heat removal, one can overlay a low-cost sensing network and create a control system that can modulate the cooling resources and work "hand in hand" with a power scheduling mechanism to create an energy aware global computing utility.
The goal of an air management system in high-density data centers is to minimize hot-air recirculation and cold-air bypass. If successfully implemented, both goals will result in energy savings and enhanced thermal conditions. Air management helps reduce operating costs by enhancing economizer utilization and improving cooling system efficiency. Metrics play a significant role in providing performance indices for air management systems, including the Supply/Return Heat Index (SHI/RHI), the rack cooling index (RCI), and the return temperature index (RTI). CFD simulation estimates temperature and airflow in the data center and provides a wealth of information for evaluating airflow performance. This study defines four performance metrics for analyzing air management systems. These metrics were computed with CFD modeling of 46 different design alternatives to evaluate the thermal environment in a typical data center module. An advanced, comprehensive evaluation index of cooling efficiency was also derived, as well-crafted metrics will undoubtedly play an important role.
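As a rough illustration of the recirculation metrics mentioned above, the SHI/RHI pair can be computed from three aisle temperatures. This sketch assumes the common temperature-based formulation (SHI as the fraction of the rack's temperature rise already present at the inlet); the study's exact definitions may differ, and the sample temperatures are invented:

```python
def shi(t_rack_in, t_rack_out, t_supply):
    """Supply Heat Index: fraction of the rack's temperature rise caused by
    hot-air infiltration into the cold aisle (0 = no recirculation)."""
    return (t_rack_in - t_supply) / (t_rack_out - t_supply)

def rhi(t_rack_in, t_rack_out, t_supply):
    """Return Heat Index: complement of SHI (1 = ideal hot/cold separation)."""
    return 1.0 - shi(t_rack_in, t_rack_out, t_supply)

# Supply air at 15 C, rack inlet warmed to 18 C by recirculated exhaust,
# rack outlet at 35 C:
print(round(shi(18.0, 35.0, 15.0), 3))  # 0.15
print(round(rhi(18.0, 35.0, 15.0), 3))  # 0.85
```

A low SHI (here 0.15) indicates that little hot exhaust is reaching the cold aisle, which is exactly what aisle containment strategies aim to achieve.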
A thermodynamic approach for evaluating the energy performance (productivity) of information technology (IT) servers and data centers is presented. This approach is based on first law efficiency to deliver energy performance metrics defined as the ratio of the useful work output (server utilization) to the total energy expended to support the corresponding computational work. These energy performance metrics will facilitate proper energy evaluation and can be used as indicators to rank and classify IT systems and data centers regardless of their size, capacity or physical location. The current approach utilizes relevant and readily available information such as the total facility power, the servers' idle power, the average server utilization, the cooling power and the total IT equipment power. Experimental simulations and analysis are presented for a single- and a dual-core IT server, and similar analysis is extended to a hypothetical data center. The current results show that server energy efficiency increases with increasing CPU utilization and is higher for a multi-processor server than for a single-processor server. This is also true at the data center level, however with a lower relative performance indicator value than at the server level.
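One plausible reading of this utilization-to-energy ratio can be sketched as follows; the function name, the linear proxy for useful work, and the kW figures are illustrative assumptions rather than the paper's exact formulation:

```python
def energy_performance(avg_utilization, it_power_kw, cooling_power_kw,
                       other_facility_kw=0.0):
    """First-law style indicator: useful computational work (proxied here as
    average utilization times IT power) over total facility power drawn."""
    useful_kw = avg_utilization * it_power_kw
    total_kw = it_power_kw + cooling_power_kw + other_facility_kw
    return useful_kw / total_kw

# 60% average utilization, 100 kW of IT load, 40 kW of cooling:
print(round(energy_performance(0.6, 100.0, 40.0), 3))  # 0.429
```

Note that the indicator rises with utilization for a fixed power draw, consistent with the abstract's finding that efficiency increases with CPU utilization, and that the data-center-level value is dragged down by the cooling term in the denominator.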
A simplified steady-state model has been developed to describe thermodynamically the operation of a centralized cooling system. This model resolves the mass and energy equations simultaneously and uses inputs that are readily available to the design engineer. The model utilizes an empirical relationship for the compressor power as a function of cooling load and key temperatures. The outputs include the chiller coefficient of performance (COP) and the actual compressor power. The model simulation results are validated against manufacturer performance data and compared with experimental data collected at the Hewlett-Packard Laboratories site for two chillers: a variable-speed and a constant-speed chiller. The results of the model are found to closely match the current experimental data, with less than 5% average deviation for chiller loads over 10% and a maximum deviation of 18% at 95% chiller load. For the constant-speed chiller, the chiller efficiency increases with increasing load and peaks at full load. For the variable-speed chiller, the chiller efficiency peaks at part loading between 40 and 80% of the chiller's full load, depending on the condenser water temperature. This indicates that for variable-speed chillers, the chiller design has been optimized for loading less than 100%, depending on the ambient conditions, customer specifications, and size of the chiller. Copyright © 2010 John Wiley & Sons, Ltd.
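The COP reported by such a model is simply the cooling load delivered per unit of compressor power. A minimal sketch follows; the paper's empirical compressor-power correlation is not reproduced, and the names and sample figures are illustrative:

```python
def chiller_cop(cooling_load_kw, compressor_power_kw):
    """Coefficient of performance: refrigeration effect delivered
    per unit of compressor work input."""
    return cooling_load_kw / compressor_power_kw

# A chiller delivering 500 kW of cooling while its compressor draws 100 kW:
print(chiller_cop(500.0, 100.0))  # 5.0
```

In practice the compressor power in the numerator's denominator would come from the model's empirical fit against load and condenser/evaporator temperatures, which is what lets the study map out the part-load efficiency peaks described above.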
Microcontroller for controlling an actuator
  • A H Beitelmal
  • W Mack
Beitelmal, A.H., Mack, W., "Microcontroller for controlling an actuator," United States Patent No. 7,902,966, 2011.