... Their data center uses commodity mid-range servers; that density is likely to be higher with newer, more power-hungry server choices. As data centers migrate to bladed servers, this figure is projected to climb to over 55 kW per rack [89]. ...
... Current-generation 1U servers consume over 350 watts at peak utilization, releasing much of this energy as heat; a standard 42U rack of such servers consumes over 8 kW. As data centers migrate to bladed servers over the next few years, these numbers could potentially increase to 55 kW per rack [8]. A thermal management policy that considers facilities components – such as A/C units and the layout of the data center – and temperature-aware IT components can decrease cooling costs [9], increase hardware reliability [2], and decrease response times to transients and emergencies [6]. ...
Several projects involving high-level thermal management — such as reducing cooling costs through intelligent workload placement — require ambient air temperature readings at a fine granularity. Unfortunately, current thermal instrumentation methods involve installing a set of expensive hardware sensors. Modern motherboards include multiple on-board sensors, but the values reported by these sensors are dominated by the thermal effects of the server's workload. We propose using machine learning methods to model the effects of server workload on on-board sensors. Our models combine sensor readings with workload instrumentation and "mask out" the thermal effects due to workload, leaving the ambient air temperature at that server's inlet. We present a formal problem statement, outline the properties of our model, describe the machine learning approach we use to construct our models, and present ConSil, a prototype implementation.
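The masking idea described above can be illustrated with a deliberately minimal sketch — this is not ConSil's actual model, and all coefficients, readings, and the single-feature workload instrumentation are hypothetical. On synthetic data where the on-board sensor reads ambient temperature plus a workload-proportional offset, a one-parameter least-squares fit recovers the workload coefficient, which can then be subtracted out to estimate inlet air temperature:

```python
# Hypothetical sketch of workload masking: fit
#   sensor_temp ~ ambient + coef * cpu_util
# on labeled data, then invert to recover ambient temperature.
import random

random.seed(0)

TRUE_COEF = 0.18  # assumed degrees C of sensor offset per % CPU utilization

# Synthetic training data: (cpu_util, ambient, on-board sensor reading)
train = [(u, a, a + TRUE_COEF * u + random.gauss(0, 0.1))
         for u, a in [(random.uniform(0, 100), random.uniform(18, 30))
                      for _ in range(500)]]

# One-parameter least squares: minimize sum((s - a) - coef * u)^2
num = sum(u * (s - a) for u, a, s in train)
den = sum(u * u for u, _, _ in train)
coef = num / den

def estimate_ambient(sensor_temp, cpu_util):
    """Mask out the workload's thermal contribution to the sensor reading."""
    return sensor_temp - coef * cpu_util

# Usage: a server at 80% utilization whose on-board sensor reads 36.4 C
est = estimate_ambient(36.4, 80)
```

A real model would combine many sensors and workload counters (and a nonlinear learner), but the structure — learn the workload's contribution, subtract it, keep the ambient residual — is the same.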
... Barroso et al. [7] estimate that the power density of the Google data center is 3–10 times that of typical commercial data centers. Their data center uses commodity mid-range servers; that density is likely to be higher with newer, more power-hungry server choices [64]. Current data centers overprovision cooling and power by planning for the worst-case scenario [70]. ...
Over the past decade, however, power and energy have begun to severely constrain component, system, and data center designs. When a data center reaches its maximum provisioned power, it must be replaced or augmented at great expense. In desktops, power consumption and heat contribute to electricity costs as well as noise. Better equipment design and better energy management policies are needed to address these concerns. This chapter will detail current efforts in energy‐efficiency metrics and in power and thermal modeling, delving into specific case studies for each. Various benchmarks are explained and their effectiveness at measuring power requirements is discussed.
... However, the increased compute density comes with associated problems of power and heat density. For example, it is estimated that future blade servers will increase the power density from current values of about 8 kW/rack to almost 55 kW/rack, causing a corresponding increase in cooling load from 27 kBTU/hr to almost 200 kBTU/hr [4]. At these levels, this may mean the need for liquid cooling in data centers! ...
The last decade has seen several changes in the structure and emphasis of enterprise IT systems. Specific infrastructure trends have included the emergence of large consolidated data centers, the adoption of virtualization and modularization, and an increased commoditization of hardware. At the application level, both the workload mix and usage patterns have evolved to an increased emphasis on service-centric computing and SLA-driven performance tuning. These, often dramatic, changes in the enterprise IT landscape motivate equivalent changes in the emphasis of architecture research. In this paper, we summarize some recent trends in enterprise IT systems and discuss the implications for architecture research, suggesting some high-level challenges and open questions for the community to address.
... These problems are likely to be exacerbated by recent trends towards consolidation in data centers and adoption of higher-density computer systems [18]. Blade servers, in particular, have been roadmapped to consume up to 55 kW/rack – more than a five-fold increase in power density compared to recently announced 10 kW/rack systems [15]. Traditionally, power density and heat extraction issues are addressed at the facilities level through changes in the design and provisioning of power delivery and cooling infrastructures (e.g., [16, 19]). ...
One of the key challenges for high-density servers (e.g., blades) is the increased cost of addressing the power and heat density associated with compaction. Prior approaches have mainly focused on reducing the heat generated at the level of an individual server. In contrast, this paper proposes power efficiencies at a larger scale by leveraging statistical properties of concurrent resource usage across a collection of systems (an "ensemble"). Specifically, we discuss an implementation of this approach at the blade enclosure level to monitor and manage the power across the individual blades in a chassis. Our approach requires low-cost hardware modifications and relatively simple software support. We evaluate our architecture through both prototyping and simulation. For workloads representing 132 servers from nine different enterprise deployments, we show significant power budget reductions at performance comparable to conventional systems.
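The statistical property this abstract leverages — that independent blades rarely hit peak power simultaneously — can be shown with a small synthetic experiment (all wattages, blade counts, and traces below are illustrative, not data from the paper). The peak of the ensemble's summed power is well below the sum of per-blade peaks, and that gap is the budget reduction available at the enclosure level:

```python
# Illustrative ensemble power-budgeting argument on synthetic traces:
# provisioning for peak-of-sum instead of sum-of-peaks saves headroom.
import random

random.seed(1)
BLADES, SAMPLES = 16, 10_000
IDLE, PEAK = 150.0, 350.0  # watts, illustrative 1U-class values

# Independent synthetic power traces, one per blade in the chassis
traces = [[IDLE + (PEAK - IDLE) * random.random() for _ in range(SAMPLES)]
          for _ in range(BLADES)]

# Conventional provisioning: every blade budgeted at its own peak
sum_of_peaks = sum(max(t) for t in traces)

# Ensemble provisioning: budget for the worst simultaneous draw observed
peak_of_sum = max(sum(t[i] for t in traces) for i in range(SAMPLES))

headroom = 1 - peak_of_sum / sum_of_peaks  # fraction of budget reclaimed
```

Real enterprise traces are correlated, so the achievable reduction depends on the workload mix, but the direction of the effect is the same.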
... Power density is also one of the limiting factors preventing greater compaction, particularly with smaller form factors as in blade servers. Future blade servers are estimated to need close to 188 kBTU/hr of cooling (for 55 kW racks) [20]. Such densities may well require liquid cooling in the data center. ...
The exponential open queue network model studied here consists of n symmetrical queues in parallel served by independent first-level servers in tandem with a second-level server. Blocking of the flow of units through a first-level server occurs each time the server completes a service. The server remains blocked until its blocking unit completes its service at the second-level server. An approximate expression of the probability distribution of the number of blocked first-level servers conditioned upon a service completion of a first-level server is obtained. This expression compares well with simulation data. Based on this distribution, an approximate expression of the queue-length probability distribution is derived assuming a processor-sharing type of service. The exact condition for stability of the queue network is also derived. Some potential applications are discussed, and a quantitative evaluation of the model is given through a case study.
A novel dual-mode dual-band bandpass filter (BPF) for a single substrate configuration is proposed that uses a balun structure. The outer and inner loop resonators are simultaneously excited by the microstrip line and slotline balun structure. Despite its simple microstrip feed network, the proposed dual-mode BPF can achieve dual-band operation. Two proper degenerate modes in the lower and upper passbands can be generated and controlled by installing an inductive cut and a capacitive patch in the corner of two loops, respectively. This letter deals with the analysis and design of the proposed dual-band BPF as well as the experimental validation of the predicted dual-band performance.
The expanding scale and density of data centers has made their power consumption a pressing issue. Data center energy management has become of unprecedented importance not only from an economic perspective but also for environment conservation. The recent surge in the popularity of cloud computing for providing rich multimedia services has further heightened the need to consider energy consumption. Moreover, a recent phenomenon has been the astounding increase in multimedia data traffic over the Internet, which in turn is exerting a new burden on energy resources. This paper provides a comprehensive overview of the techniques and approaches in the fields of energy efficiency for data centers and large-scale multimedia services. The paper also highlights important challenges in designing and maintaining green data centers and identifies some of the opportunities in offering green streaming service in cloud computing frameworks.
Trends towards consolidation and higher-density computing configurations make the problem of heat management one of the critical challenges in emerging data centers. Conventional approaches to addressing this problem have focused on the facilities level, developing new cooling technologies or optimizing the delivery of cooling. In contrast to these approaches, our paper explores an alternate dimension to address this problem, namely a systems-level solution to control the heat generation through temperature-aware workload placement. We first examine a theoretic thermodynamic formulation that uses information about steady state hot spots and cold spots in the data center and develop real-world scheduling algorithms. Based on the insights from these results, we develop an alternate approach. Our new approach leverages the non-intuitive observation that the source of cooling inefficiencies can often be in locations spatially uncorrelated with their manifested consequences; this enables additional energy savings. Overall, our results demonstrate up to a factor of two reduction in annual data center cooling costs over location-agnostic workload distribution, purely through software optimizations without the need for any costly capital investment.
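The simplest form of temperature-aware placement — assign work to the servers with the coolest inlet air first — can be sketched as a greedy heuristic. This is an illustrative simplification, not the paper's scheduling algorithm; the `place` function, the inlet temperatures, and the per-server capacity are all hypothetical:

```python
# Greedy sketch of temperature-aware workload placement:
# fill the coolest inlets first to avoid aggravating hot spots.
import heapq

def place(jobs, inlet_temps, capacity):
    """Assign each job to the coolest server that still has free slots.
    Returns {server_index: [job, ...]}."""
    heap = [(t, i) for i, t in enumerate(inlet_temps)]  # min-heap on temp
    heapq.heapify(heap)
    load = {i: [] for i in range(len(inlet_temps))}
    for job in jobs:
        while heap and len(load[heap[0][1]]) >= capacity:
            heapq.heappop(heap)  # this server is full; drop it from the heap
        if not heap:
            raise RuntimeError("out of capacity")
        load[heap[0][1]].append(job)
    return load

# Coolest inlet (index 1, 18.0 C) fills first, then 19.5 C, then 22.5 C.
assignment = place(["a", "b", "c", "d", "e"],
                   [22.5, 18.0, 25.0, 19.5], capacity=2)
```

The paper's key point goes beyond this: because cooling inefficiencies can be spatially uncorrelated with the hot spots they cause, a static coolest-first rule is not always optimal, which is what motivates their refined approach.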