
Reduced carbon emission and optimized power consumption technique using container over virtual machine

Authors: G. Anusooya, Varadarajan Vijayakumar

Abstract and Figures

The IT industry is a critical contributor to global pollution, emitting large amounts of toxic carbon that increase day by day as demand and usage rise. Because of this environmental threat, industries are in urgent need of reducing their carbon footprint by adopting green computing. This paper pursues green computing through two load-balancing algorithms, (1) the Water Shower Model (WSM) and (2) Trigger-WSM, and two techniques, (3) recommending containers over virtual machines and (4) DVFS (Dynamic Voltage and Frequency Scaling) modeling. For a sample of four containers running one application each and four virtual machines running one application each, the carbon emission equivalent monitored over about one week is 14 kg CO2 for the containers and 84.4 kg CO2 for the virtual machines, a drastic difference. Recommending containers is therefore the best available option for application-based IT workloads; by enforcing these ideas and techniques, carbon emission can be drastically decreased and the amount of carbon footprint in the atmosphere will also be reduced. The power consumption observed for the same setup is 15.71367 W for the containers and 94.72667 W for the virtual machines, so power consumption in IT must also be taken into account when reducing carbon emission. The recommendation system, together with the proposed algorithms, will reduce the amount of carbon footprint in the environment.
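As a rough illustration of how an average power draw translates into an equivalent carbon emission, the sketch below converts the wattages reported in the abstract into weekly energy use and applies a grid emission factor. The factor of 0.7 kg CO2/kWh is an illustrative assumption, not the value used in the paper, so the results will not reproduce the reported 14 kg and 84.4 kg figures.

```python
# Sketch: converting an average power draw into weekly energy use and an
# equivalent carbon emission. The emission factor is an assumed
# grid-average value for illustration only.

HOURS_PER_WEEK = 7 * 24
EMISSION_FACTOR_KG_PER_KWH = 0.7  # assumed; real factors vary by grid

def weekly_co2_kg(avg_power_watts: float) -> float:
    """Estimate weekly CO2 (kg) from an average power draw in watts."""
    energy_kwh = avg_power_watts * HOURS_PER_WEEK / 1000.0
    return energy_kwh * EMISSION_FACTOR_KG_PER_KWH

# Average power figures reported in the abstract:
container_w = 15.71367
vm_w = 94.72667

print(weekly_co2_kg(container_w))
print(weekly_co2_kg(vm_w))
```

Whatever factor is used, the ratio between the two estimates tracks the ratio of the power draws, which is the abstract's core point: the container setup draws roughly one sixth of the virtual machine setup's power.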
G. Anusooya¹ · Varadarajan Vijayakumar¹
Published online: 14 May 2019
© Springer Science+Business Media, LLC, part of Springer Nature 2019
Keywords: Container · Virtual machine · Power consumption · Carbon emission · Load balancing
1 Introduction
1.1 Green computing
Green computing is the art of designing an environmentally friendly atmosphere, minimizing the carbon footprint created by the use of Information Technology (IT) [1–5]. In the current IT world, nearly everyone uses technologies whose operation leads to the emission of toxic gases that pollute the entire globe, yet limiting the use of technology in today's smart world is not possible either. Green computing drives us toward standards and techniques that reduce the impact of carbon emission/carbon footprint [6] on the environment.
1.2 Load balancing
When the usage of technology increases drastically, balancing the load becomes a mandatory requirement [1, 7–13]. Load balancing is the art of sharing users' requests among the various available services. Its main aim is to ensure that there is no delay, or only minimal delay, in processing a user's request rather than a long wait. A long wait during request processing leads to many critical issues, such as higher power consumption, more carbon emission, and security problems.
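The load-balancing idea described above — sharing user requests among available services so none of them builds up a long wait — can be sketched minimally with a round-robin dispatcher. The server and request names below are hypothetical, and this is not the paper's WSM algorithm, just the basic notion.

```python
# Minimal round-robin load balancing sketch: requests are assigned to
# servers in rotation so each server receives an equal share of the work.

from itertools import cycle
from collections import Counter

servers = ["server-a", "server-b", "server-c"]  # hypothetical service pool
dispatcher = cycle(servers)

def dispatch(requests):
    """Assign each request to the next server in rotation."""
    return {req: next(dispatcher) for req in requests}

assignments = dispatch([f"req-{i}" for i in range(9)])
load = Counter(assignments.values())
print(load)  # the 9 requests are spread evenly, 3 per server
```

Round-robin ignores how heavy each request actually is; the paper's WSM and Trigger-WSM algorithms go further by considering the mode of operation and actual resource load.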
Corresponding author: G. Anusooya (anusooya.g@vit.ac.in)
¹ School of Computing Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
Wireless Networks (2021) 27:5533–5551
https://doi.org/10.1007/s11276-019-02001-x
... As can be seen from the results based on power use and its equivalent carbon emission, this has been examined and proven with the application running on a container as well as on the virtual machine. However, while combining both virtual machine and container [1] [2] strategies is advised, this study suggests containers as the ideal model for minimising power consumption and carbon emissions/carbon footprint, which is the current requirement of the industry. ...
... The analysis was done in two different phases. It was first tested with the minimal requirements and next with the maximum requirements for ROR and RWR applications (Anusooya et al. 2019) [1] [2]. Power consumption was tested with the PowerStat tool and its equivalent carbon emission was measured using the AVERT online calculator by the EPA, the United States Environmental Protection Agency. ...
... When less energy is consumed, power consumption will also gradually drop, as shown by the chart, making containers the best option for cutting down on carbon emissions. When utilising the virtual machine and container, [2], which is the best method based on the successful results, is used to assess the container for Read-Only Request (ROR). The same application, calculated for 1 day (1440 min), was equivalent to 2 kg of CO2 for 4 containers with 1 application each, 3.2 kg for 1 virtual machine with 4 applications, 3 kg for 1 virtual machine with 1 application, and 12.1 kg for 4 virtual machines with 1 application each, which is a significant amount. ...
Article
The International Energy Agency (IEA) revealed that worldwide energy-related carbon dioxide (CO2) emissions have hit a historic high of 33.1 gigatonnes (Gt) of CO2. 85% of the rise in emissions was due to China, India, and the United States. The increase in emissions in India was 4.8%, or 105 megatonnes (Mt) of CO2, with the increase being evenly distributed across the transportation and industrial sectors, according to Beloglazov et al. (2011). Environmental contamination brought on by carbon emissions is harmful to the environment. As a result, there is an urgent need for the IT sectors to develop effective and efficient technology to eliminate such carbon emissions. The primary focus is on lowering carbon emissions due to widespread awareness of the issue.
... Due to its numerous advantages, it is used in different fields [2]. In communication and sensing technologies, the smart grid is used to distribute electricity innovatively [3]. In smart grid technology, optimal resource and energy management are essential [4]. ...
Article
Full-text available
At present, a higher rate of power consumption is caused by intelligent grid applications. Due to this high power consumption, the energy rate of the system was also high. So, to reduce the power consumption rate and increase the system's efficiency, a novel Ant Lion-based Auto Encoder System (ALbAES) was developed in this research. With this model, power consumption is reduced, the system's efficiency is increased, and the parameters of the proposed model were obtained in a better range. This work initially pre-processed the power demand data sets to remove noisy data. The smart grid control and monitoring features were extracted through feature extraction, and those extracted features were used in the subsequent processes. The fitness of the extracted features was compared with the ant lion fitness; the calculation was based on ant lion optimization and the autoencoder. The system's efficiency increased to 96% based on the fitness function of the developed optimization algorithm. To assess the performance of the proposed model, its parameters were compared with those of other existing models: the system's efficiency was improved, the rate of power flow was reduced, and the energy rate of the model was reduced. The design was implemented in MATLAB, and the results were executed on the Windows 10 platform.
... Whether it is the extraction of deep groundwater resources, long-distance water diversion, unconventional water reuse, expansion of urban drainage networks, increased industrial and domestic water consumption, or improved sewage treatment levels — all of these mean that the urban water system needs to consume more energy during operation, resulting in more energy consumption and carbon emissions [9]. The mechanism analysis of the effect of energy consumption structure on carbon intensity is shown in Fig. 1. ...
Article
Full-text available
This study examines the influencing factors and future changes of consumption-related carbon emissions and water consumption, providing scientific support for the formulation of targeted policies in the region. The mechanism by which the energy consumption structure affects carbon intensity is analyzed, and the carbon emissions of the water intake, water supply, and drainage and sewage treatment systems are calculated; the idea of a carbon emission decomposition model is used to build a water consumption decomposition model. LMDI is used to decompose all factors without residual error, and the trend coefficient of the grey correlation degree is used to judge the growth trend of energy consumption and carbon emission. The Baiyangdian Lake Basin is selected as the research area. Based on statistical data from 1986 to 2018, the direct path coefficients of the respective variables can be obtained. The absolute value of each variable's t value is greater than t_0.01(25) = 2.496, indicating that the path coefficient of each variable on the dependent variable is extremely significant. The growth rate of total energy consumption and of certain energy consumption is less than the growth rate of CO2 emissions, and the minimum detected carbon emission per unit time is not less than 20 kg, indicating that the proposed method has a certain monitoring efficiency and stability.
Article
The ecological threat is produced by IT industries, which are the critical frontrunners of universal greenhouse gases, with a massive increase in hazardous carbon emission that accumulates gradually due to computing device usage and its equivalent electricity consumption. Reducing carbon emission through reduced/appropriate power consumption is of growing significance, and should be addressed not only in hardware design but also in software design and resource allocation. This motivates the evolution of proficient algorithms and effective resource provisioning that allocate loads evenly, based on requests such as read-only or read-write, for new or existing workloads, to improve performance in a virtualized data center. This paper proposes an improved water shower model, which can effectively manage the resources in data centers, to address this challenge. Unlike the traditional Water Shower Model, which uses only the mode of operation to distribute the load among resources, the proposed improved Trigger-Water Shower Model uses the mode of operation as well as the identification of heavily loaded and lightly loaded resources, and thus distributes the load evenly to obtain reduced carbon emission. The proposed approach is validated by examining the reduction in carbon emission achieved by the proposed algorithm. The experimental results show more effective utilization of resources, with an improved average response time of 64 ms and 1.28 mg CO2 of reduced carbon emission, compared to the traditional water shower model.
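The Trigger-Water Shower Model described above identifies heavily and lightly loaded resources and redistributes load between them. A hypothetical sketch of that classify-and-shed step follows; the load threshold, resource names, and load values are all assumed for illustration, and this is not the authors' implementation.

```python
# Hypothetical sketch of the heavy/light classification idea: any
# resource loaded above an assumed threshold sheds its excess load to
# the currently lightest-loaded resource.

def rebalance(loads, threshold=0.8):
    """Move load from resources above `threshold` to the lightest one."""
    loads = dict(loads)  # work on a copy
    for name, _ in sorted(loads.items(), key=lambda kv: -kv[1]):
        if loads[name] > threshold:
            lightest = min(loads, key=loads.get)
            excess = loads[name] - threshold
            loads[name] -= excess
            loads[lightest] += excess
    return loads

before = {"vm1": 0.95, "vm2": 0.30, "vm3": 0.85}
after = rebalance(before)
print(after)  # vm1 and vm3 shed their excess onto vm2
```

Note that total load is conserved; only its distribution changes, which is what lets a balanced system avoid the overload-driven power spikes the paper associates with extra carbon emission.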
Research
Full-text available
The need of the hour has extended to the conservation of energy in cloud computing. Cloud computing is at the heart of research and one of the hottest topics in the field of computer science and engineering. Basically, cloud computing provides services referred to as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). As the technology advances and network access becomes faster with lower latency, the model of delivering computing power remotely over the Internet will proliferate. Hence, cloud data centers are expected to grow and accumulate a larger fraction of the world's computing resources. In this way, energy-efficient management of data center resources is a major issue with regard to both operating costs and CO2 emissions to the environment. In energy conservation we try to reduce the CO2 emissions that contribute to the greenhouse effect. Therefore, the reduction of power and energy consumption has become a first-order objective in the design of modern computing systems. In this paper we propose two algorithms for energy conservation on cloud-based infrastructure.
Conference Paper
Full-text available
Cloud Computing is being used widely all over the world by many IT companies as it provides various benefits to the users like cost saving and ease of use. However, with the growing demands of users for computing services, cloud providers are encouraged to deploy large datacenters which consume very high amount of energy and also contribute to the increase in carbon dioxide emission in the environment. Therefore, we require to develop techniques which will help to get more environment friendly computing i.e. Green Cloud Computing. In this paper, we propose a new technique to reduce the carbon emission and energy consumption in the distributed cloud datacenters having different energy sources and carbon footprint rates. Our approach uses the carbon footprint rate of the datacenters in distributed cloud architecture and the concept of virtual machine allocation and migration for reducing the carbon emission and energy consumption in the federated cloud system. Simulation results show that our proposed approach reduces the carbon dioxide emission and energy consumption of federated cloud datacenters as compared to the classical scheduling approach of round robin VM scheduling in federated cloud datacenters.
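The carbon-footprint-rate-aware allocation described in the paper above can be sketched as a greedy placement that always picks the datacenter with the lowest footprint rate that still has capacity. The datacenter names, rates, and capacities below are assumptions for illustration, not data from the paper.

```python
# Sketch of carbon-aware VM placement across a federated cloud: among
# datacenters with free capacity, place each VM where the carbon
# footprint rate (kg CO2 per kWh, assumed values) is lowest.

datacenters = {
    "dc-coal":  {"rate": 0.95, "free_slots": 4},
    "dc-hydro": {"rate": 0.10, "free_slots": 2},
    "dc-gas":   {"rate": 0.45, "free_slots": 3},
}

def place_vm(dcs):
    """Return the lowest-rate datacenter that still has capacity."""
    candidates = [d for d, info in dcs.items() if info["free_slots"] > 0]
    chosen = min(candidates, key=lambda d: dcs[d]["rate"])
    dcs[chosen]["free_slots"] -= 1
    return chosen

# Place four VMs: the clean datacenter fills up first, then the next best.
placements = [place_vm(datacenters) for _ in range(4)]
print(placements)
```

A round-robin scheduler, the baseline that the paper compares against, would spread the same four VMs evenly across all three datacenters regardless of their footprint rates.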
Article
Cloud load balancing is the process of distributing workloads across multiple computing resources in a cloud environment. Load distribution in cloud computing systems is more challenging than in other systems. The purpose of the paper is to address the issue of optimal task dispatching on multiple Virtual Machines (VMs) with efficient power management. The prime goal is to address load distribution across multiple VMs and to propose an algorithm that maintains minimized task response time and minimized power consumption, using the CloudSim cloud simulator.
Article
In most existing cloud services, a centralized controller is used for resource management and coordination. However, such infrastructure is gradually not sufficient to meet the rapid growth of mega data centers. In recent literature, a new approach named devolved controller was proposed for scalability concern. This approach splits the whole network into several regions, each with one controller to monitor and reroute a portion of the flows. This technique alleviates the problem of an overloaded single controller, but brings other problems such as unbalanced work load among controllers and reconfiguration complexities. In this paper, we make an exploration on the usage of devolved controllers for mega data centers, and design some new schemes to overcome these shortcomings and improve the performance of the system. We first formulate Load Balancing problem for Devolved Controllers (LBDC) in data centers, and prove that it is NP-complete. We then design an f-approximation for LBDC, where f is the largest number of potential controllers for a switch in the network. Furthermore, we propose both centralized and distributed greedy approaches to solve the LBDC problem effectively. The numerical results validate the efficiency of our schemes, which can become a solution to monitoring, managing, and coordinating mega data centers with multiple controllers working together.
Article
In cloud computing, resources are dynamic, and the demands placed on the resources allocated to a particular task are diverse. These factors can lead to load imbalances, which affect scheduling efficiency and resource utilization. A scheduling method called interlacing peak is proposed. First, resource load information, such as CPU, I/O, and memory usage, is periodically collected and updated, and task information regarding CPU, I/O, and memory demand is collected. Second, resources are sorted into three queues according to their CPU, I/O, and memory loads, and tasks are classified as CPU intensive, I/O intensive, or memory intensive according to their demands for resources. Finally, once tasks have been scheduled, they need to interlace with the resource load peaks: each type of task is matched with resources whose load in that dimension is light. In other words, CPU-intensive tasks should be matched with resources with low CPU utilization; I/O-intensive tasks should be matched with resources with shorter I/O wait times; and memory-intensive tasks should be matched with resources that have low memory usage. The effectiveness of this method is proved from a theoretical point of view, and it is also shown to have low time and space complexity. Four experiments were designed to verify the performance of this method, leveraging four metrics: 1) average response time; 2) load balancing; 3) deadline violation rate; and 4) resource utilization. The experimental results show that this method can balance loads and improve resource allocation and utilization effectively, especially when resources are limited and many tasks compete for the same resources; in this case the method shows an advantage over other similar standard algorithms.
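The interlacing-peak matching rule in the abstract above — CPU-intensive tasks to hosts with low CPU load, I/O-intensive tasks to hosts with light I/O load, memory-intensive tasks to hosts with low memory usage — might be sketched as follows. Host loads and task demands are assumed values for illustration.

```python
# Sketch of interlacing-peak scheduling: classify each task by its
# dominant resource demand, then place it on the host whose load in
# that same dimension is lightest, so load peaks interlace rather
# than stack.

hosts = {
    "h1": {"cpu": 0.9, "io": 0.2, "mem": 0.4},
    "h2": {"cpu": 0.3, "io": 0.8, "mem": 0.5},
    "h3": {"cpu": 0.5, "io": 0.4, "mem": 0.9},
}

def schedule(task_demands):
    """Match a task's dominant demand to the least-loaded host there."""
    dominant = max(task_demands, key=task_demands.get)
    return min(hosts, key=lambda h: hosts[h][dominant])

cpu_task = {"cpu": 0.7, "io": 0.1, "mem": 0.1}
io_task  = {"cpu": 0.1, "io": 0.6, "mem": 0.2}
print(schedule(cpu_task), schedule(io_task))
```

The CPU-heavy task lands on the host with the most idle CPU, and the I/O-heavy task on the host with the shortest I/O queue, so neither host's busiest dimension gets busier.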
Chapter
In this paper, we propose a technique that uses a Raspberry Pi board as a small PC with Docker, an open platform for running distributed applications, to implement the concept of virtualization. This implementation shows how efficient it can be to run server virtualization on simple computers like the Raspberry Pi. The outcome is a virtualized Raspberry Pi that runs various applications on a hypervisor. The method involves booting Arch Linux on the Raspberry Pi and then installing a hypervisor on it; virtual machines are installed and kept running in the background. If the power consumed with virtualization is less than the power consumed when these processes run on separate hardware without virtualization, it is concluded that virtualization saves power and also reduces the hardware involved. Power monitoring software is used to measure the power consumed by the Raspberry Pi.
Article
This paper discusses a proposed load balancing technique based on Artificial Neural Networks (ANNs). The ANN predicts demand and then allocates resources according to that demand, so the number of active servers always matches the current demand, which results in lower energy consumption than the conservative approach of over-provisioning. Furthermore, while higher server utilization results in more power consumption, a server running at higher utilization can process more workload with similar power usage. Finally, the existing load balancing techniques in cloud computing are discussed and compared based on various parameters.
Article
Recently, datacenter carbon emission has become an emerging concern for the cloud service providers. Previous works are limited on cutting down the power consumption of datacenters to defuse such a concern. In this paper, we show how the spatial and temporal variabilities of the electricity carbon footprint can be fully exploited to further green the cloud running on top of geographically distributed datacenters. Specifically, we first verify that electricity cost minimization conflicts with carbon emission minimization, based on an empirical study of several representative geo-distributed cloud services. We then jointly consider the electricity cost, service level agreement (SLA) requirement, and emission reduction budget. To navigate such a three-way tradeoff, we take advantage of Lyapunov optimization techniques to design and analyze a carbon-aware control framework, which makes online decisions on geographical load balancing, capacity right-sizing, and server speed scaling. Results from rigorous mathematical analysis and real-world trace-driven evaluation demonstrate the effectiveness of our framework in reducing both electricity cost and carbon emission.