Anita Sobe’s research while affiliated with University of Neuchâtel and other places


Publications (38)


Figures and tables:
Figure 2: Overview of the proposed 2-phase approach.
Figure 3: Pearson coefficients of the Top-30 correlated events for the PARSEC benchmarks on the Intel Xeon W3520.
Table 3: Processor architecture specifications.
Figure 4: Average error per combination of events for R3 (freqmine, fluidanimate, freqmine) on an Intel Xeon W3520 (log scale). The larger the circle, the higher the average error.
Figure 5: Relative error distribution of the PARSEC benchmarks on the Xeon processor (P_idle = 92 W).

The next 700 CPU power models
Article · Full-text available · July 2018 · 867 Reads · 31 Citations · Journal of Systems and Software
Software power estimation of CPUs is a central concern for energy efficiency and resource management in data centers. Over the last few years, a dozen ad hoc power models have been proposed to cope with the wide diversity and the growing complexity of modern CPU architectures. However, most of these CPU power models rely on thorough expertise of the targeted architectures, leading to hardware-specific solutions that can hardly be ported beyond their initial settings. In this article, we instead propose a novel toolkit that uses a configurable/interchangeable learning technique to automatically learn the power model of a CPU, independently of the features and the complexity it exhibits. In particular, our learning approach automatically explores the space of hardware performance counters made available by a given CPU to isolate the ones that are best correlated to the power consumption of the host, and then infers a power model from the selected counters. Based on a middleware toolkit devoted to the implementation of software-defined power meters, we implement the proposed approach to generate CPU power models for a wide diversity of CPU architectures (including Intel, ARM, and AMD processors), using a large variety of both CPU- and memory-intensive workloads. We show that the CPU power models generated by our middleware toolkit estimate the power consumption of the whole CPU or of individual processes with an accuracy of 98.5% on average, competing with state-of-the-art power models.
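The two-phase approach described in the abstract (correlate counters with measured power, then fit a model on the best-correlated ones) can be sketched as follows. This is an illustrative reconstruction, not the authors' toolkit; the counter names and sample values are invented.

```python
# Sketch of the two-phase idea: (1) rank hardware performance counters by
# Pearson correlation with measured power, (2) fit a linear power model on
# the best counter. Counter names and data are hypothetical.
import statistics

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# One sample per monitoring tick: counter values and measured power (W).
samples = {
    "cycles":       [1.0, 2.0, 3.0, 4.0, 5.0],
    "cache_misses": [5.0, 3.0, 4.0, 2.0, 1.0],
    "branch_insts": [1.1, 2.1, 2.9, 4.2, 5.1],
}
power = [10.0, 12.1, 13.9, 16.2, 18.0]

# Phase 1: keep the counter best correlated (in absolute value) with power.
best = max(samples, key=lambda c: abs(pearson(samples[c], power)))

# Phase 2: least-squares fit power ≈ a * counter + b (b acts as idle power).
xs, ys = samples[best], power
a = pearson(xs, ys) * statistics.stdev(ys) / statistics.stdev(xs)
b = statistics.fmean(ys) - a * statistics.fmean(xs)
print(best, round(a, 2), round(b, 2))  # cycles 2.01 8.01
```

A real toolkit would explore combinations of counters and interchangeable regression techniques rather than a single univariate fit.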


Energy-proportional Profiling and Accounting in Heterogeneous Virtualized Environments

November 2017 · 19 Reads · 3 Citations · Sustainable Computing: Informatics and Systems

The costs of current data centers are mostly driven by their energy consumption (specifically by the air conditioning, computing, and networking infrastructure). Yet, current pricing models are usually static and rarely consider the facilities' energy consumption per user. The challenge is to provide a fair and predictable model to attribute the overall energy costs per virtual machine (VM) in heterogeneous environments. In this paper we introduce EPAVE, a model for Energy-Proportional Accounting in VM-based Environments. EPAVE allows transparent, reproducible, and predictive cost calculation for users and for Cloud providers. It provides a full-cost model that accounts not only for the dynamic energy consumption of a given VM, but also for the proportional static cost of using a Cloud infrastructure. It comes with PowerIndex, a profiling and estimation model, which is able to profile the energy cost of a VM on a given server architecture and can then estimate its energy cost on a different one. We provide performance results of PowerIndex on real hardware, and we discuss the use cases and applicability of EPAVE.
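A full-cost model of the kind described (dynamic energy plus a proportional share of the host's static cost) can be illustrated with a few lines of arithmetic. This is a hypothetical sketch in the spirit of EPAVE, not the paper's actual model; all figures are invented.

```python
# Hypothetical illustration: bill each VM its own dynamic energy plus a
# share of the host's static (idle) energy proportional to the fraction
# of the host it reserves.
def vm_energy_cost(dynamic_wh, vm_share, static_power_w, hours):
    """dynamic_wh: measured dynamic energy of the VM (Wh);
    vm_share: fraction of host capacity reserved by the VM (0..1);
    static_power_w: idle power of the host (W);
    hours: billing period length (h)."""
    static_wh = static_power_w * hours * vm_share
    return dynamic_wh + static_wh

# A VM reserving 25% of a host (idle power 92 W) for 10 hours,
# having consumed 50 Wh of dynamic energy:
cost = vm_energy_cost(50.0, 0.25, 92.0, 10.0)
print(cost)  # 280.0 (Wh)
```

The key property such a model gives is predictability: the static share depends only on the reservation, so a user can bound the cost before running anything.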


Enhanced Energy Efficiency with the Actor Model on Heterogeneous Architectures

May 2016 · 36 Reads · 1 Citation · Lecture Notes in Computer Science

Due to rising energy costs, energy-efficient data centers have gained increasing attention in research and practice. Optimizations targeting energy efficiency are usually performed at an isolated level, either by producing more efficient hardware, by reducing the number of nodes simultaneously active in a data center, or by applying dynamic voltage and frequency scaling (DVFS). Energy consumption is, however, highly application dependent. We therefore argue that, for best energy efficiency, it is necessary to combine different measures both at the programming and at the runtime level. As there is a tradeoff between execution time and power consumption, we vary both independently to gain insight into how they affect the total energy consumption. We choose frequency scaling for lowering the power consumption and heterogeneous processing units for reducing the execution time. While these options have already proven effective in the literature, the lack of energy-efficient software in practice suggests missing incentives for energy-efficient programming. In fact, programming heterogeneous applications is a challenging task, due to the different memory models of the underlying processors and the requirement of using different programming languages for the same tasks. We propose to use the actor model as a basis for efficient and simple programming, and extend it to run seamlessly on either a CPU or a GPU. In a second step, we automatically balance the load between the existing processing units. With heterogeneous actors we are able to save 40–80% of energy in comparison to CPU-only applications, while additionally increasing programmability.
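The tradeoff the abstract varies, execution time versus power draw, reduces to minimizing their product. A minimal sketch, with invented device figures (not measurements from the paper):

```python
# Energy = power × time, so the best placement for a workload is the
# device minimizing that product. All numbers below are hypothetical.
devices = {
    #            (power draw in W, expected runtime in s)
    "cpu":      (60.0, 40.0),
    "cpu_dvfs": (35.0, 65.0),   # lower frequency: less power, longer runtime
    "gpu":      (120.0, 12.0),  # higher power, much shorter runtime
}

def energy_j(power_w, time_s):
    return power_w * time_s

best = min(devices, key=lambda d: energy_j(*devices[d]))
print(best, energy_j(*devices[best]))  # gpu 1440.0
```

Note how neither the lowest-power configuration nor the fastest one is automatically the most energy-efficient; only the product decides, which is why the paper varies both dimensions independently.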





Convergence and monotonicity of the hormone levels in a hormone-based content delivery system

July 2015 · 24 Reads · 1 Citation · Central European Journal of Operations Research

The practical significance of bio-inspired, self-organising methods is rapidly increasing due to their robustness, adaptability, and capability of handling complex tasks in a dynamically changing environment. Our aim is to examine an artificial hormone system that was introduced in order to deliver multimedia content in dynamic networks. The artificial hormone algorithm proved to be an efficient approach to the problem during experimental evaluations. In this paper we focus on the theoretical foundations of its effectiveness. We show that the hormone levels converge to a limit at each node in the typical cases. We develop a series of convergence theorems under different conditions that build on each other, starting with a specific base case and then considering more general, practically relevant cases. The theorems are proved by exploiting the analogy between Markov chains and the artificial hormone system. We also examine spatial and temporal monotonicity of the hormone levels and give sufficient conditions for monotonic increase.
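The convergence behaviour can be demonstrated with a toy simulation. The update rule below is our own simplified assumption (a contractive, Markov-chain-like averaging with a decay factor), not the paper's exact model:

```python
# Toy artificial hormone system on a ring of 8 nodes: each step, a node's
# level becomes its own emission plus a decayed average of its neighbours'
# levels. With decay < 1 the update is a contraction, so levels converge
# to a fixed point, mirroring the Markov-chain analogy in the paper.
def step(h, emit, decay=0.5):
    n = len(h)
    return [emit[i] + decay * (h[(i - 1) % n] + h[(i + 1) % n]) / 2
            for i in range(n)]

emit = [1.0] + [0.0] * 7          # node 0 is the requesting node
h = [0.0] * 8
for _ in range(200):
    h = step(h, emit)

peak = max(range(8), key=lambda i: h[i])
print(peak, round(h[0], 3))  # 0 1.155
```

The limit forms a gradient decreasing monotonically with distance from the emitting node, which is exactly the structure a content-delivery algorithm can follow.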


Dynamic Message Processing and Transactional Memory in the Actor Model

June 2015 · 9 Reads · 5 Citations · Lecture Notes in Computer Science

With the trend of ever-growing data centers and scaling core counts, simple programming models for efficient distributed and concurrent programming are required. One of the successful principles for scalable computing is the actor model, which is based on message passing. Actors are objects that hold local state that can only be modified by the exchange of messages. To avoid typical concurrency hazards, each actor processes messages sequentially. However, this limits the scalability of the model. We have shown in previous work that concurrent message processing can be implemented with the help of transactional memory, ensuring sequential processing when required. This approach is advantageous in low-contention phases but does not scale under high contention. In this paper we introduce a combination of dynamic resource allocation and non-transactional message processing to overcome this limitation. This allows for efficient resource utilization, as these two mechanisms can be handled in parallel. We show that we can substantially reduce the execution time of high-contention workloads in a micro-benchmark as well as in a real-world application.


Process-level Power Estimation in VM-based Systems

April 2015 · 262 Reads · 78 Citations

Power estimation of software processes provides critical indicators to drive scheduling or power capping heuristics. State-of-the-art solutions can perform coarse-grained power estimation in virtualized environments, typically treating virtual machines (VMs) as a black box. Yet, VM-based systems are nowadays commonly used to host multiple applications for cost savings and better use of energy by sharing common resources and assets. In this paper, we propose a fine-grained monitoring middleware providing real-time and accurate power estimation of software processes running at any level of virtualization in a system. In particular, our solution automatically learns an application-agnostic power model, which can be used to estimate the power consumption of applications. Our middleware implementation, named BitWatts, builds on a distributed actor implementation to collect process usage and infer fine-grained power consumption without imposing any hardware investment (e.g., power meters). BitWatts instances use high-throughput communication channels to spread the power consumption across the VM levels and between machines. Our experiments, based on CPU- and memory-intensive benchmarks running on different hardware setups, demonstrate that BitWatts scales both in number of monitored processes and virtualization levels. This non-invasive monitoring solution therefore paves the way for scalable energy accounting that takes into account the dynamic nature of virtualized environments.
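The attribution step underlying such process-level estimation can be sketched simply: once a power model yields the host's active CPU power, it is divided among processes in proportion to their resource usage. This is an illustrative simplification, not BitWatts' actual model; process names and figures are invented.

```python
# Illustrative sketch: attribute measured active CPU power to processes
# (possibly nested inside VMs) in proportion to their share of active
# CPU time. All figures are hypothetical.
def attribute_power(active_power_w, proc_cpu_time):
    """active_power_w: dynamic CPU power from the host's power model (W);
    proc_cpu_time: CPU time consumed by each process over the interval (s)."""
    total = sum(proc_cpu_time.values())
    return {pid: active_power_w * t / total
            for pid, t in proc_cpu_time.items()}

shares = attribute_power(40.0, {"vm1/app": 6.0, "vm1/db": 2.0, "vm2/web": 2.0})
print(shares)  # {'vm1/app': 24.0, 'vm1/db': 8.0, 'vm2/web': 8.0}
```

The nested keys hint at the harder part the paper addresses: propagating estimates across virtualization levels without hardware power meters, which is where the distributed actor pipeline comes in.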


SEAHORSE: Generalizing an artificial hormone system algorithm to a middleware for search and delivery of information units

February 2015 · 120 Reads · 8 Citations · Computer Networks

This paper introduces SEAHORSE (SElf-organizing Artificial HORmone SystEm), a middleware that builds upon an artificial hormone system for search and delivery of information units. SEAHORSE is a generalization of an artificial hormone algorithm in which information units are requested by network nodes by emitting an artificial hormone, which is propagated through the network with respect to the current network conditions. Information units follow the hormone gradient and thus place themselves on servers close to the requesting nodes. This self-organizing algorithm is robust and scalable; however, due to their complex nature, self-organizing systems are hard to configure and set up to achieve a desired outcome. Parameter settings that work at different scales are crucial for making the system work. Therefore, we provide a parameter study based on two use cases, showing the applicability of SEAHORSE to target applications ranging from multimedia distribution at social events to information dissemination in smart electrical microgrids.


Citations (27)


... New hardware capabilities such as RAPL [92,52], can provide CPU energy (and in some cases, DRAM [27]). "Software power meters" such as powertop and others [28,26,31,32,94,62,53,82,71], use statistical models to attribute total CPU power to processes based on resource use (such as CPU performance counters). Modern hardware still only has rudimentary support for power measurement. ...

Reference:

FaasMeter: Energy-First Serverless Computing
The next 700 CPU power models

Journal of Systems and Software

... This can be reflected in part by factoring in the PUE (Power Usage Effectiveness) of the data centers used for training these models, which is the approach adopted by Patterson et al. for estimating the carbon emissions of ML models such as T5 and GPT-3 [28]. However, while PUE is a useful metric for representing the amount of energy used for cooling and other overhead, it does not account for the totality of energy consumed by the data center infrastructure [6,17]. In order to estimate the idle consumption of the computing infrastructure that we used for training BLOOM, we ran a series of experiments to compare the total energy consumption of idle devices on the Jean Zay computing cluster (e.g. ...

Energy-proportional Profiling and Accounting in Heterogeneous Virtualized Environments
  • Citing Article
  • November 2017

Sustainable Computing Informatics and Systems

... The first one tries to improve the performance of all mechanisms used to execute Actors efficiently, mainly the Actor scheduling strategies [6,20,34,35]. The second approach, instead, follows the direction of extending the AM with new features and constructs [10,19,22,25,27,33]. Our work falls in the second category. ...

Dynamic Message Processing and Transactional Memory in the Actor Model
  • Citing Conference Paper
  • June 2015

Lecture Notes in Computer Science

... Georgiou et al. suggest incorporating energy into the cluster scheduling algorithm to prioritize energy-efficient users [54]. Other works have proposed incorporating the price of electricity into the cost of cloud VMs [55][56][57][58]. These works emphasize deferring costs rather than incentivizing sustainability, and do not examine the same trade-offs as here. ...

How Much Does a VM Cost? Energy-Proportional Accounting in VM-Based Environments
  • Citing Conference Paper
  • February 2016

... Virtualization complicates the power usage monitoring as the hypervisor may not expose the performance counters to the guest OS. Thus, various estimation approaches have been proposed, including for nested virtualization scenarios (Colmant et al., 2015). It has been shown that Docker containerization is not more energy efficient compared to traditional VM hypervisors (Jiang et al., 2019). ...

Process-level Power Estimation in VM-based Systems
  • Citing Conference Paper
  • April 2015

... Artificial hormone systems draw inspiration from the biological endocrine system, which regulates various metabolic processes within our bodies [12], [13]. This creates a selforganizing system characterized by scalability, adaptability, and robustness. ...

SEAHORSE: Generalizing an artificial hormone system algorithm to a middleware for search and delivery of information units
  • Citing Article
  • February 2015

Computer Networks

... Some works focused on energy efficiency of traditional DCs [79,104]. On the other hand, many research works have performed energy efficiency on both traditional and virtual DCs [16,[27][28][29]31,52,54,56,83,87,94,98,103]. ...

ParaDIME: Parallel Distributed Infrastructure for Minimization of Energy
  • Citing Conference Paper
  • August 2014

Microprocessors and Microsystems

... The process involves moving VMs, while keep running, from a physical host to another within a datacenter [7]. VM live migration has contributed dramatically to load balancing, saving energy, and easing maintenance [8,9]. ...

Using Power Measurements as a Basis for Workload Placement in Heterogeneous Multi-Cloud Environments
  • Citing Conference Paper
  • December 2014

... However, since the authors of [20] focus on a COP model, there is no advantage in scaling down the voltage beyond the PoFF. Other works proposing transactional memory (TM)-based fault tolerance (e.g., [30,32,34]) do not evaluate energy consumption (our major goal) and consider transient and permanent faults rather than intermittent timing errors. ...

Combining Error Detection and Transactional Memory for Energy-Efficient Computing below Safe Operation Margins
  • Citing Conference Paper
  • February 2014