Conference Paper

Analysis of the Influences on Server Power Consumption and Energy Efficiency for CPU-Intensive Workloads

Abstract

Energy efficiency of servers has become a significant research topic over the last years, as server energy consumption varies depending on multiple factors, such as server utilization and workload type. Server energy analysis and estimation must take all relevant factors into account to ensure reliable estimates and conclusions. Thorough system analysis requires benchmarks capable of testing different system resources at different load levels using multiple workload types. Server energy estimation approaches, on the other hand, require knowledge about the interactions of these factors for the creation of accurate power models. Common approaches to energy-aware workload classification categorize workloads depending on the resource types used by the different workloads. However, they rarely take into account differences in workloads targeting the same resources. Industrial energy-efficiency benchmarks typically do not evaluate the system's energy consumption at different resource load levels, and they only provide data for system analysis at maximum system load. In this paper, we benchmark multiple server configurations using the CPU worklets included in SPEC's Server Efficiency Rating Tool (SERT). We evaluate the impact of load levels and different CPU workloads on power consumption and energy efficiency. We analyze how functions approximating the measured power consumption differ over multiple server configurations and architectures. We show that workloads targeting the same resource can differ significantly in their power draw and energy efficiency. The power consumption of a given workload type varies depending on utilization, hardware and software configuration. The power consumption of CPU-intensive workloads does not scale uniformly with increased load, nor do hardware or software configuration changes affect it in a uniform manner.
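The abstract's observation that CPU power does not scale uniformly with load is why efficiency analyses measure power at several calibrated load levels and interpolate between them, rather than assuming a straight line from idle to full load. A minimal sketch of such piecewise-linear interpolation (the function name and calibration numbers are illustrative assumptions, not data from the paper):

```python
def interpolate_power(samples, util):
    """Estimate power draw at a target utilization by piecewise-linear
    interpolation over measured (utilization, watts) calibration points.

    samples: list of (utilization, watts) pairs, sorted by utilization.
    util: target utilization in [0.0, 1.0].
    """
    if util <= samples[0][0]:
        return samples[0][1]
    if util >= samples[-1][0]:
        return samples[-1][1]
    for (u0, p0), (u1, p1) in zip(samples, samples[1:]):
        if u0 <= util <= u1:
            frac = (util - u0) / (u1 - u0)
            return p0 + frac * (p1 - p0)

# Hypothetical calibration: power rises faster above 80% load,
# so a single linear fit would misestimate mid-range points.
calib = [(0.0, 60.0), (0.5, 120.0), (0.8, 150.0), (1.0, 210.0)]
```

Because the segments have different slopes, estimates between calibration points track the measured curve far more closely than a single idle-to-peak line would.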
... Energy consumption is becoming an important issue because of its economic and environmental impact [12]. Resource usage has a direct impact on energy consumption, particularly CPU [13], although it does not scale uniformly with it. ...
... We were also able to estimate which DAW version was consuming the most energy through resource usage and runtime of modified tasks. The energy consumed by a DAW is correlated (non-linearly) with both its runtime and the amount of resources used, particularly CPU [13]. However, we cannot make a direct estimate of this consumption as it is affected by other factors such as some hardware and software specific configurations. ...
... However, even in this case, these tasks are not as resource intensive as the alignment task. According to the study by Kistowski et al. [13], the CPU-intensive tasks are the ones that consume the most energy. We can also see that we used less RAM than the threads-only version, except for the Hisat2 index task. ...
... A. Utilization-based power model CPU utilization is the ratio of CPU running time to total time. This coarse-grained indicator only considers the processor's running time, without taking into account specific CPU operation scenarios [10], [17], [18]. In in-order architectures without caches, a direct proportionality between CPU utilization and power consumption can be observed. ...
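The utilization-based model described in this excerpt amounts to a straight line between idle and full-load power. A hedged sketch of that baseline (constants and naming are ours), which, as the excerpt notes, only holds approximately for simple in-order, cache-less architectures:

```python
def linear_power_model(p_idle, p_max, utilization):
    """Classic utilization-based power estimate: power scales linearly
    with the fraction of time the CPU is busy. A coarse baseline that
    ignores which operations the CPU actually executes.

    p_idle: power draw (watts) at 0% utilization.
    p_max: power draw (watts) at 100% utilization.
    utilization: CPU busy fraction in [0, 1].
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_max - p_idle) * utilization
```

On modern out-of-order, cache-rich CPUs the same utilization figure can correspond to very different power draws depending on the instruction mix, which is what motivates PMC-based models like the one below.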
Article
In order to improve performance and avoid overheating on mobile devices, precise thermal control with low overhead is crucial. To achieve this, we propose incorporating a PMC-based power model into thermal control, which enables more accurate evaluation of the CPU’s power consumption. We demonstrate the plausibility of this approach using polynomial regression based on Moore’s Law. Additionally, we introduce a lightweight PMCs Sampling method that can collect multiple PMCs at once in the kernel space, reducing sampling overhead. By replacing the utilization-based model in the original IPA with a PMC-based power model, we realize the PMC-based IPA governor that can be ported to real mobile devices. After updating the thermal control governor in the Linux kernel, we perform tests on our PMC-based IPA using a mobile phone device. We compare it with Stepwise and IPA, which are commonly used in current mobile phone systems. We choose the CPU-intensive workbench, I/O-intensive workbench, and CPU & I/O-intensive hybrid workbench as workloads. The results show that PMC-based IPA effectively reduces power consumption while improving performance. In particular, during the CPU & I/O-intensive hybrid experiment, where CPU-intensive and I/O-intensive tasks are executed alternately, PMC-based IPA reduces the running time by 10.0% and energy consumption by 16.6% compared to the original IPA. In order to verify the benefits of PMC-based IPA, the mobile phone testing software AI Bench and Antutu are utilized. The results show that our scheme is able to control temperature more precisely than IPA and achieves a better score while consuming less energy, particularly during AI computing. These experiment results suggest that PMC-based IPA is valuable for practical use.
... Unfortunately, as processing power and storage capacity increase, so do the corresponding power and cooling requirements of the data centers. Several studies have examined the efficiency of data centers by focusing on server and cooling power inputs, but this fails to capture the data center's entire impact [21][22][23]. To fully account for the environmental impact of these resources the materials, manufacture, and transportation of the servers themselves should also be considered. ...
Chapter
Full-text available
In this paper, a framework for the development and implementation of a low-cost, “low-code”, information system, for the digitalization of small businesses, in the retail and manufacturing sectors is developed and employed. The purpose of this framework is to enable small businesses, that lack the technical expertise and financial resources to invest in proprietary information system technology, to develop systems by leveraging freely available cloud-based tools like Google Forms, Google Sheets, and Google Sites. A thorough literature review of the concept of digitalization is conducted. Thereafter, a small business suitable for digital transformation is identified. Based on the system requirements an information system relevant to the business is developed and implemented. Finally, guidelines are proposed for the development and implementation of similar systems in other small businesses.
... This might have a negative impact on the generalizability because the relation between utilisation and power is not necessarily linear. Kistowski et al. [19] find that for CPUs, there is a steep increase in power output starting at around 80% utilisation. Furthermore, we performed the study by using the PyTorch framework only, which is deemed less energy-efficient compared to TensorFlow [15]. ...
Preprint
Modern AI practices all strive towards the same goal: better results. In the context of deep learning, the term "results" often refers to the achieved accuracy on a competitive problem set. In this paper, we adopt an idea from the emerging field of Green AI to consider energy consumption as a metric of equal importance to accuracy and to reduce any irrelevant tasks or energy usage. We examine the training stage of the deep learning pipeline from a sustainability perspective, through the study of hyperparameter tuning strategies and the model complexity, two factors vastly impacting the overall pipeline's energy consumption. First, we investigate the effectiveness of grid search, random search and Bayesian optimisation during hyperparameter tuning, and we find that Bayesian optimisation significantly dominates the other strategies. Furthermore, we analyse the architecture of convolutional neural networks with the energy consumption of three prominent layer types: convolutional, linear and ReLU layers. The results show that convolutional layers are the most computationally expensive by a strong margin. Additionally, we observe diminishing returns in accuracy for more energy-hungry models. The overall energy consumption of training can be halved by reducing the network complexity. In conclusion, we highlight innovative and promising energy-efficient practices for training deep learning models. To expand the application of Green AI, we advocate for a shift in the design of deep learning models, by considering the trade-off between energy efficiency and accuracy.
Article
This article focuses on the potential task scheduling risks in the digital cloud platform of ultra-high voltage substations and conducts research on workload prediction, exploring the differences in computing power caused by node heterogeneity. The article aims to address the impact of single-core high load or multi-core heavy load on system stability. In this algorithm, wavelet packet decomposition is used to obtain a more stable CPU utilization sequence. ARIMA (Autoregressive Integrated Moving Average) is employed for evaluation, and the SVM (Support Vector Machine) model is used to revise the time series to improve the accuracy of load prediction results, ultimately yielding a sequence representing the load status of nodes. The prediction curve generated by the SARIMA algorithm in this paper better reflects the actual load conditions of the test sequence. When the SARIMA load evaluation algorithm is applied to task scheduling, the task completion times are reduced. Specifically, the completion time is reduced by 84 seconds for 50 tasks and by 881 seconds for 200 tasks, corresponding to reductions of 14% and 22%, respectively.
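The full forecasting pipeline above (wavelet packet decomposition, ARIMA/SARIMA, SVM revision) is beyond a short sketch, but its core idea of smoothing a noisy CPU-utilization series before producing a one-step forecast can be illustrated with simple exponential smoothing, used here as a deliberately simplified stand-in for the article's method:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast of CPU utilization via simple exponential
    smoothing -- a simplified stand-in for the wavelet-packet + SARIMA/SVM
    pipeline described above, not a reproduction of it.

    series: historical CPU utilization samples (floats in [0, 1]).
    alpha: smoothing factor in (0, 1]; higher weights recent samples more.
    """
    level = series[0]
    for sample in series[1:]:
        # Blend the newest observation into the running smoothed level.
        level = alpha * sample + (1 - alpha) * level
    return level
```

The smoothed level damps short spikes, so a scheduler reading it reacts to sustained load trends rather than transient single-core bursts.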
Chapter
Present Dynamic VM Consolidation (DVMC) algorithms assume that optimal energy efficiency can be achieved via maximum load on Physical Machines (PMs). Such an assumption has become invalid with the advent of highly energy proportional PMs. Additionally, these algorithms consider only varying resource demand, ignoring dissimilarity of workload finishing time, aka the VM Release Time (VMRT), whereas both aspects are strongly associated with energy consumption. Consequently, traditional algorithms fail to proffer optimal performance under real Cloud scenarios. Although minimization of VM migration brings massive benefit for a Cloud Data Center (CDC), it is the complete opposite of what is needed to minimize energy consumption through DVMC. As such, our proposed multi-objective Stochastic Release Time aware DVMC (SRTDVMC) algorithm is unique in addressing concomitant minimization of energy consumption and VM migration in the presence of state-of-the-art PMs and heterogeneous workloads.
Keywords: Highly energy proportional servers · Cloud data center energy efficiency · Dynamic VM consolidation
Conference Paper
Full-text available
Today's software systems are expected to deliver reliable performance under highly variable load intensities while at the same time making efficient use of dynamically allocated resources. Conventional benchmarking frameworks provide limited support for emulating such highly variable and dynamic load profiles and workload scenarios. Industrial benchmarks typically use workloads with constant or stepwise increasing load intensity, or they simply replay recorded workload traces. Based on this observation, we identify the need for means allowing flexible definition of load profiles and address this by introducing two meta-models at different abstraction levels. At the lower abstraction level, the Descartes Load Intensity Meta-Model (DLIM) offers a structured and accessible way of describing the load intensity over time by editing and combining mathematical functions. The High-Level Descartes Load Intensity Meta-Model (HLDLIM) allows the description of load variations using few defined parameters that characterize the seasonal patterns, trends, bursts and noise parts. We demonstrate that both meta-models are capable of capturing real-world load profiles with acceptable accuracy through comparison with a real life trace.
Article
Full-text available
Consolidation of applications in cloud computing environments presents a significant opportunity for energy optimization. As a first step toward enabling energy efficient consolidation, we study the inter-relationships between energy consumption, resource utilization, and performance of consolidated workloads. The study reveals the energy performance trade-offs for consolidation and shows that optimal operating points exist. We model the consolidation problem as a modified bin packing problem and illustrate it with an example. Finally, we outline the challenges in finding effective solutions to the consolidation problem.
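The bin-packing view of consolidation can be sketched with the classic first-fit-decreasing heuristic. This is our simplification for illustration only; the article's "modified" formulation additionally weighs the energy/performance trade-offs it identifies:

```python
def first_fit_decreasing(demands, capacity):
    """Pack VM resource demands onto the fewest hosts via first-fit
    decreasing: place each demand, largest first, into the first host
    with room, opening a new host only when none fits.

    demands: per-VM demand values (same unit as capacity).
    capacity: per-host capacity.
    Returns a list of hosts, each a list of placed demands.
    """
    hosts = []  # each host is a list of demands whose sum <= capacity
    for d in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + d <= capacity:
                host.append(d)
                break
        else:
            # No existing host can take this VM; power on a new one.
            hosts.append([d])
    return hosts
```

Minimizing host count is only a proxy objective: as the study shows, packing hosts to the brim can push them past their most energy-efficient operating point, which is why optimal consolidation targets exist below 100% load.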
Conference Paper
Full-text available
In light of an increase in energy cost and energy consciousness industry standard organizations such as Transaction Processing Performance Council (TPC), Standard Performance Evaluation Corporation (SPEC) and Storage Performance Council (SPC) as well as the U.S. Environmental Protection Agency have developed tests to measure energy consumption of computer systems. Although all of these consortia aim at standardizing power consumption measurement using benchmarks, ultimately aiming to reduce overall power consumption, and to aid in making purchase decisions, their methodologies differ slightly. For instance, some organizations developed specialized benchmarks while others added energy metrics to existing benchmarks. In this paper we give a comprehensive overview of the currently available energy benchmarks followed by an in depth analysis of their commonalities and differences.
Conference Paper
The Server Efficiency Rating Tool (SERT) has been developed by the Standard Performance Evaluation Corporation (SPEC) at the request of the US Environmental Protection Agency (EPA). Almost 3% of all electricity consumed within the US in 2010 went to running datacenters. With this in mind, the EPA released Version 2.0 of the ENERGY STAR for Computer Servers program in early 2013 to include the mandatory use of the SERT. Other governments world-wide that are also concerned by growing power consumption of servers and datacenters are considering the adoption of the SERT.
Article
The Server Efficiency Rating Tool (SERT) has been developed by Standard Performance Evaluation Corporation (SPEC) at the request of the US Environmental Protection Agency (EPA), prompted by concerns that US datacenters consumed almost 3% of all energy in 2010. Since the majority was consumed by servers and their associated heat dissipation systems the EPA launched the ENERGY STAR Computer Server program, focusing on providing projected power consumption information to aid potential server users and purchasers. This program has now been extended to a world-wide audience. This paper expands upon the one published in 2011, which described the initial design and early development phases of the SERT. Since that publication, the SERT has continued to evolve and has entered the first Beta phase in October 2011 with the goal of being released in 2012. This paper describes more of the details of how the SERT is structured. This includes how components interrelate, how the underlying system capabilities are discovered, and how the various hardware subsystems are measured individually using dedicated worklets.
Article
Until recently, there have been relatively few studies exploring the power consumption of ICT resources in data centres. In this paper, we propose a methodology to capture the behaviour of most relevant energy-related ICT resources in data centres and present a generic model for them. This is achieved by decomposing the design process into four modelling phases. Furthermore, unlike the state-of-the-art approaches, we provide detailed power consumption models at server and storage levels. We evaluate our model for different types of servers and show that it suffers from an error rate of 2% in the best case, and less than 10% in the worst case.
Article
To drive energy efficiency initiatives, SPEC established SPECpower_ssj2008, the first industry-standard benchmark for measuring power and performance characteristics of computer systems.
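SPECpower_ssj2008 reports a single overall ssj_ops/watt figure: the sum of throughput over all graduated load levels divided by the sum of average power over those levels plus active idle. A sketch of that aggregation (the data layout and names are our assumption, not SPEC's reporting format):

```python
def overall_ssj_ops_per_watt(measurements, idle_watts):
    """SPECpower_ssj2008-style summary metric: total ssj_ops across the
    graduated load levels divided by total average power across those
    levels plus active idle (which contributes power but no ops).

    measurements: list of (ssj_ops, avg_watts) per target load level.
    idle_watts: average power at active idle.
    """
    total_ops = sum(ops for ops, _ in measurements)
    total_watts = sum(watts for _, watts in measurements) + idle_watts
    return total_ops / total_watts
```

Including active idle in the denominator is what rewards energy-proportional servers: a machine that idles at high power is penalized even if its full-load efficiency is good.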
Conference Paper
According to the United States Environmental Protection Agency (US EPA) almost 3% of all electricity consumed within the US in 2010 goes to running datacenters, with the majority of that powering servers and the associated air conditioning systems dedicated to eliminating the heat they produce. The EPA launched the ENERGY STAR® Computer Server program in May 2009, intended to deliver information to better enable server purchasing decisions based on projected power consumption. The Server Efficiency Rating Tool (SERT) has been developed by the Standard Performance Evaluation Corporation (SPEC) SPECpower committee to address the EPA requirements for Version 2 of the ENERGY STAR server program. Unlike many tools sourced from the SPEC organization the SERT is not intended to be a benchmark, and for Version 2 does not offer a single score model. Instead it produces detailed information regarding the influence of CPU, memory, network and storage I/O configurations on overall server power consumption. This paper describes the design and development of the SERT, including discussion of the collaborative nature of working with the EPA and the various industry stakeholders involved in the design, review and development process. Many of the core ideas behind SERT were derived from the SPECpower_ssj2008 and other SPEC-developed benchmarks, and this paper illustrates where ideas and code were shared, as well as where new thinking resulted in entirely new solutions. It also includes thoughts for the future, as the ENERGY STAR server program continues to evolve and the SERT will evolve with it.
Conference Paper
The energy efficiency of computer systems is an important concern in a variety of contexts. In data centers, reducing energy use improves operating cost, scalability, reliability, and other factors. For mobile devices, energy consumption directly affects functionality and usability. We propose and motivate JouleSort, an external sort benchmark, for evaluating the energy efficiency of a wide range of computer systems from clusters to handhelds. We list the criteria, challenges, and pitfalls from our experience in creating a fair energy-efficiency benchmark. Using a commercial sort, we demonstrate a JouleSort system that is over 3.5x as energy-efficient as last year's estimated winner. This system is quite different from those currently used in data centers. It consists of a commodity mobile CPU and 13 laptop drives, connected by server-style I/O interfaces.
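JouleSort's figure of merit is records sorted per Joule of energy consumed. A trivial hedged sketch of that ratio (function and parameter names are ours):

```python
def sorted_records_per_joule(records_sorted, avg_watts, elapsed_seconds):
    # JouleSort-style efficiency: records sorted per Joule, where
    # energy (J) = average power draw (W) * elapsed wall-clock time (s).
    energy_joules = avg_watts * elapsed_seconds
    return records_sorted / energy_joules
```

Because the denominator is power times time, a system can win either by sorting faster at the same power or by sorting at the same speed on lower-power components, which is exactly how the mobile-CPU-plus-laptop-drives system above prevailed.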