HEaRS: A Hierarchical Energy-Aware Resource Scheduler for Virtualized Data Centers
ABSTRACT: With the increasing popularity of Internet-based cloud services, energy efficiency in large-scale Internet data centers has become important not only to curtail energy costs and alleviate environmental concerns, but also because such systems can quickly reach the limits of the power available to them. This paper investigates to what extent, and how, energy usage improvements through consolidation can benefit from taking into account the environmental influences and effects seen in data center systems. Toward that end, we present experimental results obtained in a fully instrumented, small-scale data center and then use these results to propose a hierarchical energy-aware resource scheduler (HEaRS) for cluster workload placement and server provisioning that also considers the physical environment in which data center systems operate. Specifically, at the rack level, HEaRS tries to maintain a 'thermal balance' across the rack to avoid hot spots and reduce cooling costs. At the chassis level, HEaRS uses a proportional-integral (PI) controller to balance the electrical current drawn by the chassis's two power domains, which helps the chassis reach its most energy-efficient state. Finally, at the server level, HEaRS can employ known methods such as dynamic voltage and frequency scaling (DVFS) or core idling to reduce power consumption. The result is a hierarchical set of controllers that jointly implement a holistic solution to energy-aware resource scheduling for an entire rack, and this hierarchical solution can then be further extended to entire data centers. Our initial experimental results show opportunities for gains of up to 16% in energy usage compared to methods that are not aware of the physical environment, and up to 15% improvements in application performance.
Source: ACM Transactions on Autonomous and Adaptive Systems, 12/2014.
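The abstract does not detail the chassis-level controller, so the following is only a minimal sketch of the kind of discrete PI loop it describes: measure the current drawn by each of the chassis's two power domains and emit a load-shift signal that drives the imbalance to zero. The class name, gains, and interface are illustrative assumptions, not the paper's implementation.

```python
class PIBalancer:
    """Discrete proportional-integral controller for balancing the
    current draw of two chassis power domains (illustrative sketch)."""

    def __init__(self, kp=0.5, ki=0.1):
        self.kp = kp          # proportional gain (assumed value)
        self.ki = ki          # integral gain (assumed value)
        self.integral = 0.0   # accumulated error across control steps

    def step(self, current_a, current_b):
        # Error is the imbalance between the two domains' measured
        # currents (amps); a positive output suggests shifting load
        # from domain A to domain B.
        error = current_a - current_b
        self.integral += error
        return self.kp * error + self.ki * self.integral
```

The integral term matters here because a purely proportional controller would leave a steady-state imbalance whenever workloads on the two domains draw persistently different currents; accumulating the error lets the controller keep shifting load until the draws actually match.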
Conference Paper: Cloud computing architectures and dynamic provisioning mechanisms
ABSTRACT: Cloud computing is a modern technology committed to providing a pool of resources to on-demand customers. The resources in a cloud are virtualized as a collection of services using virtualization technology. Efficient provisioning of resources is a challenging problem due to the dynamic nature of the environment and the need to support heterogeneous applications with diverse performance requirements. Performance guarantees from the cloud data center require efficient utilization of its resources. Efficient resource utilization toward a specific Service Level Agreement (SLA) or Quality of Service (QoS) constraint alone is not sufficient for cloud computing environments. Thus, cloud computing requires striking a balance between performance based on the negotiated QoS and the energy consumed at the data center during resource provisioning. In this paper, we provide a detailed review of cloud computing architectures and provisioning mechanisms for delivering computing as a service. We classify the cloud computing architectures discussed in the literature based on their significance to cloud service provisioning mechanisms. The paper presents a taxonomy of dynamic provisioning mechanisms from the cloud utility point of view and brings out the salient features and an evaluation of existing mechanisms.
2013 International Conference on Green Computing, Communication and Conservation of Energy (ICGCE); 12/2013
ABSTRACT: Data centers require a huge amount of electricity to continue meeting consumers' growing computing demands each year. Fossil-fuel-based electricity is used due to the lack of abundant renewable energy resources, resulting in the emission of CO2 into the atmosphere and contributing to rising global temperatures. The world is in dire need of efficient electricity use. At the same time, the advent of cloud computing has brought the innovation of everything as a service. This has led to a proliferation of cloud services in every computing field, increasing the load on cloud-hosting data centers and resulting in excessive electricity use. A cloud data center can save electricity by reducing resource exploitation through utilization-based scheduling, thermal-based scheduling and monitoring, or both. In this paper we review recently proposed thermal-aware scheduling and monitoring techniques for maintaining a cost-effective Green Cloud Computing environment.
Advanced Computer Science Applications and Technologies (ACSAT), 2012 International Conference on; 11/2012