Conference Paper

Resource Allocation using Fog-2-Cloud based Environment for Smart Buildings

Authors:
  • Institute of Space Technology KICSIT Campus

Abstract

In this paper, a new orchestration of a Fog-2-Cloud based framework is presented for efficiently managing the resources in residential buildings. It is a three-layered framework consisting of a cloud layer, a fog layer and a consumer layer. The cloud layer is responsible for the on-demand delivery of the resources. Effective resource management is done through the fog layer because it minimizes the latency and enhances the reliability of the cloud facilities. The consumer layer is based on the residential users who fulfill their daily electricity demands through the fog and cloud layers. Six regions are considered in the study, where each region has a cluster of 80 to 150 buildings and each building has 80 to 100 homes. The load requests of the consumers are considered fixed during every hour of the day. Two control parameters are considered: clusters of buildings and load requests, whereas three performance parameters are evaluated: requests per hour, response time and processing time. These parameters are optimized by the round robin algorithm, the equally spread current execution algorithm and our proposed shortest job first algorithm. The simulation results show that our proposed technique has outperformed the previous techniques in terms of the aforementioned parameters. A tradeoff occurs in the processing time of the algorithms as compared to the response time and requests per hour.
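As a rough illustration of the scheduling idea compared in the abstract, the sketch below contrasts round robin dispatching with a shortest-job-first strategy on a handful of fog VMs. It is a minimal, hypothetical example: the request sizes, the number of VMs and the mean-completion-time metric are assumptions for illustration, not the paper's CloudAnalyst-style simulation setup.

```python
# Illustrative sketch only: comparing round robin (RR) and shortest job first
# (SJF) dispatching of hourly load requests onto fog VMs, using mean
# completion time as a rough response-time proxy. The request sizes and VM
# count below are made-up numbers, not the paper's simulation setup.
from itertools import cycle

def round_robin(requests, n_vms):
    """Assign requests to VMs in fixed rotation; return mean completion time."""
    busy = [0.0] * n_vms
    completions = []
    for vm, size in zip(cycle(range(n_vms)), requests):
        busy[vm] += size                    # request waits behind queued work on that VM
        completions.append(busy[vm])
    return sum(completions) / len(completions)

def shortest_job_first(requests, n_vms):
    """Serve the smallest pending requests first on the least-loaded VM."""
    busy = [0.0] * n_vms
    completions = []
    for size in sorted(requests):           # SJF: smallest load request first
        vm = busy.index(min(busy))          # least-loaded VM takes the next job
        busy[vm] += size
        completions.append(busy[vm])
    return sum(completions) / len(completions)

if __name__ == "__main__":
    hourly_requests = [3.0, 1.0, 7.0, 2.0, 5.0, 4.0, 6.0, 2.5]   # arbitrary load units
    print("RR  mean completion:", round_robin(hourly_requests, n_vms=3))
    print("SJF mean completion:", shortest_job_first(hourly_requests, n_vms=3))
```

In this toy run, SJF yields a lower mean completion time than RR, which mirrors the kind of response-time comparison the abstract reports.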


... Therefore, the presented system has performed in a way that the response time (RT) increased, and the cost was optimized. In [75], [76], [77], [78], [79], [80], [81], [82], [83], [84], [88] ...
... The importance of response time and low latency in the smart environment has been mentioned, and Javaid, et al. [79] implemented a three-layered fog-2-cloud-based model to manage the user requests efficiently in six regions with smart buildings. Each region included a cluster of 80 to 150 buildings, and each building had 80 to 100 apartments. ...
... These metrics are measured in most of the resource-management-based papers; hence, we bring them together here side by side: Yasmeen, et al. [77], Fatima, et al. [78], Javaid, et al. [79], Fatima, et al. [80], Abbas, et al. [81], Rehman, et al. [82], Fatima, et al. [83] and Gill, et al. [84]. ...
Article
Full-text available
Smart homes are equipped residences for clients, aiming at supplying suitable services via intelligent technologies. Through smart homes, household appliances as Internet of Things (IoT) devices can easily be handled and monitored from a far distance by remote controls. With the day-to-day popularity of smart homes, it is anticipated that the number of connections will rise rapidly. With this remarkable rise in connections, some issues such as substantial data volumes, security weaknesses, and response time disorders are predicted. In order to solve these obstacles and suggest an auspicious solution, fog computing as an eminently distributed architecture has been proposed to administer the massive, security-crucial, and delay-sensitive data, which are produced by communications of the IoT devices in smart homes. Indeed, fog computing bridges the gap between various IoT appliances and cloud-side servers and brings the supply side (cloud layer) to the demand side (user device layer). By utilizing fog computing architecture in smart homes, the issues of traditional architectures can be solved. This paper presents a Systematic Literature Review (SLR) of fog-based smart homes (published between 2014 and May 2019). A practical taxonomy based on the contents of the present research studies is represented as resource-management-based and service-management-based approaches. This paper also demonstrates a side-by-side comparison of the aforementioned solutions and assesses them under the same evaluation factors. Applied tools, evaluation types, algorithm types, and the pros and cons of each reviewed paper are observed as well. Furthermore, future directions and open challenges are discussed.
... So electricity is generated by different renewable resources: solar panels, wind turbines and thermal power plants [4]. ...
... [4] SJF: reduced latency and enhanced reliability; not compatible with every system. ...
... The new concept of private and public data cannot be easily decrypted. In [4], the authors considered the load balancing issue on VMs and implemented the Shortest Job First (SJF) algorithm, comparing it with the RR and equally spread current execution algorithms under the service broker policies used. Authors in [5] addressed many problems, namely interoperability, scalability, adaptability, and connectivity between smart devices over the FC platform, using low-power and low-cost devices for computation, storage, and communication in a HEM prototype. ...
Chapter
Full-text available
Cloud Computing (CC) concept is an emerging field of technology. It provides shared resources through its own Data Centers (DC’s), Virtual Machines (VM’s) and servers. People now shift their data on cloud for permanent storage and online easily approachable. Fog is the extended version of cloud. It gives more features than cloud and it is a temporary storage, easily accessible and secure for consumers. Smart Grid (SG) is the way which fulfills the demand of electricity of consumers according to their requirements. Micro Grid (MG) is a part of SG. So there is a need to balance load of requests on fog using VM’s. Response Time (RT), Processing Time (PT) and delay are three main factors which, discussed in this paper with Hill Climbing Load Balancing (HCLB) technique with Optimize best RT service broker policy.
... Random electricity demand from consumers makes it difficult to fulfill the energy requirements. Authors in [3], [4] and [5] explore the idea of energy scheduling by integrating the TG with ICT for two-way communication. For fast, flexible and reliable communication among different consumers, the system requires cloud computing. ...
... To make the fog an efficient and fast system, load balancing algorithms and service broker policies are needed to distribute the load among the different VMs of the fog. Authors in [4] and [5] designed a cloud environment for a limited number of users in different regions to check the performance of their proposed load balancing algorithms. The authors emphasize optimizing the overall response time of fog servers for residential buildings while overlooking the overall system costs. ...
... To minimize fog processing time, we need to assign virtual machines more efficiently. In the literature, authors in [3], [4] and [5] discuss load balancing algorithms, namely RR, Throttled and Particle Swarm Optimization (PSO), with multiple service broker policies. The proposed system compares the RR, Throttled and AMVM load balancing algorithms with a new dynamic service broker policy using one and two fogs in each region. ...
Conference Paper
Full-text available
Cloud computing provides Internet-based services to its consumers. Multiple simultaneous requests on a cloud server cause processing latency. Fog computing acts as an intermediary layer between Cloud Data Centers (CDC) and end users to minimize the load and boost the overall performance of the CDC. For efficient electricity management in smart cities, Smart Grids (SGs) are used to fulfill the electricity demand. In this paper, a system is proposed to minimize energy wastage and distribute surplus energy among energy-deficient SGs. A three-layered cloud and fog based architecture is described for efficient and fast communication between SGs and electricity consumers. To manage the SGs' requests, fog computing is introduced to reduce the processing time and response time of the CDC. For efficient scheduling of SGs' requests, the proposed system compares three load balancing algorithms for scheduling SGs' electricity requests on fog servers: Round Robin (RR), Active Monitoring Virtual Machine (AMVM) and Throttled. A dynamic service broker policy is used to decide which request should be routed to which fog server. For evaluation of the proposed system, results are obtained in Cloud Analyst, which show that AMVM and Throttled outperform RR when varying the virtual machine placement cost at fog servers.
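For readers unfamiliar with the Throttled policy named in this abstract, the following is a minimal, hedged sketch of the usual Throttled idea: an availability table of VMs, with requests queued whenever no VM is free. The class and the one-request-per-VM limit are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Throttled-style allocator, one of the policies the
# abstract compares (RR, AMVM, Throttled). The availability table and the
# single-request-per-VM limit are assumptions for illustration only.
class ThrottledBalancer:
    def __init__(self, n_vms):
        self.available = [True] * n_vms    # index table: which VMs are free

    def allocate(self):
        """Return the id of a free VM, or None if every VM is busy (request queued)."""
        for vm_id, free in enumerate(self.available):
            if free:
                self.available[vm_id] = False
                return vm_id
        return None                        # caller re-queues the request

    def release(self, vm_id):
        self.available[vm_id] = True       # VM finished, mark it free again
```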
... The resource management problem is also addressed considering diverse practical real-life applications such as vehicular networks [22,4], smart grids [23,24], smart buildings [25,26], smart manufacturing [16,27], and smart cities [28,29]. Authors in [22] presented an adaptive resource management algorithm for vehicular networks with the goal of minimizing the transmission rate, delay-jitter and the upper bound of delay. ...
... A model for the integration of fog and cloud with smart grid is presented in [23], where the data flow and the request forwarding for electricity to micro-grid are handled by FNs. Fog and cloud computing environments are used for management of the smart building resources through different load balancing algorithms in [25,26,30,13]. From the application point of view, authors in [28] show how the smart city resources can be managed by taking advantage of fog computing. ...
Preprint
Full-text available
By bringing computing capacity from a remote cloud environment closer to the user, fog computing is introduced. As a result, users can access the services from more nearby computing environments, resulting in better quality of service and lower latency on the network. From the service providers' point of view, this addresses the network latency and congestion issues. This is achieved by deploying the services in cloud and fog computing environments. The responsibility of service providers is to manage the heterogeneous resources available in both computing environments. In recent years, resource management strategies have made it possible to efficiently allocate resources from nearby fog and clouds to users' applications. Unfortunately, these existing resource management strategies fail to give the desired result when the service providers have the opportunity to allocate the resources to the users' application from fog nodes that are at a multi-hop distance from the nearby fog node. The complexity of this resource management problem drastically increases in a MultiFog-Cloud environment. This problem motivates us to revisit and present a novel Heuristic Resource Allocation and Optimization algorithm in a MultiFog-Cloud (HeRAFC) environment. Taking users' application priority, execution time, and communication latency into account, HeRAFC optimizes resource utilization and minimizes cloud load. The proposed algorithm is evaluated and compared with related algorithms. The simulation results show the efficiency of the proposed HeRAFC over other algorithms.
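The HeRAFC algorithm itself is not reproduced here; the sketch below only illustrates, under stated assumptions, the kind of greedy priority-and-latency-aware placement such a heuristic targets: higher-priority tasks are placed first, each on the fog or cloud node with the smallest estimated completion time (queueing plus execution plus multi-hop latency). All node and task fields are made up for illustration.

```python
# Hedged sketch, not the authors' HeRAFC: greedily place high-priority tasks
# first on the node minimizing estimated completion time.
def schedule(tasks, nodes):
    plan = {}
    for task in sorted(tasks, key=lambda t: -t["priority"]):    # high priority first
        def completion(node):
            exec_time = task["length"] / node["mips"]           # compute estimate
            network = node["hops"] * node["hop_latency"]        # multi-hop latency
            return node["busy_until"] + exec_time + network
        best = min(nodes, key=completion)
        plan[task["name"]] = best["name"]
        best["busy_until"] += task["length"] / best["mips"]     # node is now busier
    return plan

nodes = [
    {"name": "near_fog", "mips": 500.0,  "hops": 1, "hop_latency": 0.005, "busy_until": 0.0},
    {"name": "far_fog",  "mips": 800.0,  "hops": 3, "hop_latency": 0.005, "busy_until": 0.0},
    {"name": "cloud",    "mips": 4000.0, "hops": 6, "hop_latency": 0.020, "busy_until": 0.0},
]
tasks = [{"name": "t1", "length": 2000.0, "priority": 2},
         {"name": "t2", "length": 400.0,  "priority": 5}]
print(schedule(tasks, nodes))
```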
... In this work (an extension of [110]), we have proposed and implemented the C2F2C based framework for efficient resource management in residential buildings. Our main contributions regarding sub-problem 6 are described below: ...
... Initially, this work was published in [110], whereas its enhanced version is published in [111]. ...
Thesis
Full-text available
The transformation of the conventional grid into the Smart Grid (SG) requires strategic implementation of demand-sensitive programs while considering the varying fluctuations in consumers' load. The core challenges faced by the existing electric system are how to utilize electrical devices, how to tackle the large amount of data generated by end devices and how to meet the energy demands of consumers with limited resources. This dissertation is focused on the energy management of the residential sector in the SG. For this purpose, we have proposed Energy Management Controllers (EMCs) at three levels: at home level (including single and multiple homes), at building level and at regional level. In addition, cloud and fog based environments are integrated to provide on-demand services according to the consumers' demands and to tackle the problems in the existing electric system. At the first level, a heuristic-algorithm-based EMC is developed for the energy management of single and multiple homes in the residential sector. Five heuristic algorithms: the genetic algorithm, binary particle swarm optimization, bacterial foraging optimization, wind driven optimization and our proposed hybrid genetic wind driven algorithm, are used to develop the EMC. These algorithms are used for scheduling the residential load during peak and off-peak hours in a real-time pricing environment, minimizing both the electricity cost and the peak to average ratio while maximizing user comfort. In addition, advancements in the electrical system, smart meters and the implementation of Renewable Energy Sources (RESs) have yielded extensive changes to the current power grid for meeting consumers' demand. For integrating RESs and an Energy Storage System (ESS) into existing EMCs, we have proposed another Home EMC (HEMC) that manages the residential sector's load. The proposed HEMC is developed using the earliglow algorithm for electricity cost reduction. At the second level, a fuzzy logic based approach is proposed and implemented for the hot and cold regions of the world using a world-wide adaptive thermostat for residential buildings. Results show that the proposed approach achieves a maximum energy saving of 6.5% as compared to earlier techniques. In addition, two EMCs: binary particle swarm optimization fuzzy Mamdani and binary particle swarm optimization fuzzy Sugeno, are proposed for the energy management of daily and seasonally used appliances. The comfort evaluation of these loads is also performed using Fanger's Predicted Mean Vote method. At the next level, for increasing system automation and the on-demand availability of resources, we have proposed a cloud-fog-based model for intelligent resource management in the SG for multiple regions. To implement this model, we have proposed a new hybrid approach of Ant Colony Optimization (ACO) and artificial bee colony, known as Hybrid Artificial Bee ACO (HABACO). Moreover, a new Cloud to Fog to Consumer (C2F2C) based framework is also proposed for efficiently managing the resources in residential buildings. C2F2C is a three-layered framework having cloud, fog and consumer layers, which are used for efficient resource management in six regions of the world. In order to efficiently manage the computation of the large amount of data of residential consumers, we have also proposed and implemented a deep neuro-fuzzy optimizer.
The simulation results of the proposed techniques show that they have outperformed the previous techniques in terms of energy consumption, user comfort, peak to average ratio and cost optimization in the residential sector.
... Fog or 'Fogging' is used for this purpose. Fog simply shifts the cloud services [3] to the edge of the network. Fog works as a middle layer between the cloud and users. ...
... For better resource allocation in Smart Buildings, authors in [3] have proposed a cloud to fog to consumer based framework. A three-layer network having cloud, fog and consumer layers is proposed. ...
Conference Paper
The integration of the Smart Grid (SG) with cloud and fog computing has improved the energy management system. The conversion of the traditional grid system to the SG with a cloud environment results in an enormous amount of data at the data centers. The rapid increase in the automated environment has increased the demand for cloud computing. Cloud computing provides services at low cost and with better efficiency. However, problems still exist in cloud computing, such as Response Time (RT), Processing Time (PT) and resource management. More users are being attracted towards cloud computing, which results in more energy consumption. Fog computing has emerged as an extension of cloud computing and has added more services to cloud computing, such as security, latency reduction and load traffic minimization. In this paper, a Cuckoo Optimization Algorithm (COA) based load balancing technique is proposed for better management of resources. The COA is used to assign suitable tasks to Virtual Machines (VMs). The algorithm detects under- and over-utilized VMs and switches off the under-utilized VMs. This process turns off many VMs, which has a big impact on energy consumption. The simulation is done in the CloudSim environment; it shows that the proposed technique has a better response time at lower cost than other existing load balancing algorithms like Round Robin (RR) and Throttled.
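A small hedged sketch of the utilization check described in this abstract (detecting under- and over-utilized VMs so the under-utilized ones can be switched off) is given below. The thresholds are assumptions, and the cuckoo-search step that actually remaps tasks is not reproduced.

```python
# Hedged sketch of the under/over-utilization check the abstract describes:
# VMs below a lower threshold are candidates to power down, VMs above an
# upper threshold are flagged as overloaded. Thresholds are assumptions; the
# COA search that picks task-to-VM mappings is not shown here.
LOW, HIGH = 0.2, 0.8   # assumed utilization thresholds

def consolidate(vms):
    """vms: dict vm_id -> utilization in [0, 1]. Returns (to_switch_off, overloaded)."""
    to_switch_off = [v for v, u in vms.items() if u < LOW]
    overloaded = [v for v, u in vms.items() if u > HIGH]
    return to_switch_off, overloaded

print(consolidate({"vm1": 0.05, "vm2": 0.55, "vm3": 0.92}))
# -> (['vm1'], ['vm3'])
```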
... By combining an enhanced Cuckoo Optimization Algorithm with PSO, Bouyer et al. [18] provide a hybrid technique for load balancing. Javaid et al. [19] used a cloud-fog model to improve resource allocation in smart buildings. Mareli et al. address the cost-effectiveness analysis for modifying settings [20] and are primarily concerned with switching and moving resources to meet requirements. ...
Article
Full-text available
Data centers are producing a lot of data as cloud-based smart grids replace traditional grids. The number of automated systems has increased rapidly, which in turn necessitates the rise of cloud computing. Cloud computing helps enterprises offer services cheaply and efficiently. Despite the challenges of managing resources, longer response plus processing time, and higher energy consumption, more people are using cloud computing. Fog computing extends cloud computing. It adds cloud services that minimize traffic, increase security, and speed up processes. Cloud and fog computing help smart grids save energy by aggregating and distributing the submitted requests. The paper discusses a load-balancing approach in Smart Grid using Rock Hyrax Optimization (RHO) to optimize response time and energy consumption. The proposed algorithm assigns tasks to virtual machines for execution and shuts off unused virtual machines, reducing the energy consumed by virtual machines. The proposed model is implemented on the CloudAnalyst simulator, and the results demonstrate that the proposed method has a better and quicker response time with lower energy requirements as compared with both static and dynamic algorithms. The suggested algorithm reduces processing time by 26%, response time by 15%, energy consumption by 29%, cost by 6%, and delay by 14%.
... [29] presented a service broker policy and compared its performance with the throttled and round robin load balancing algorithms. The authors in [30] dealt with load balancing using the shortest job first method. Particle swarm optimization [31] was used in [32], [33] to maximize the efficiency of load balancing and enhance fog performance. ...
Article
Full-text available
Energy management is among the key components of smart metering. Its role is to balance energy consumption and distribution. Smart device integration results in a huge data exchange between different parts of the smart grid, causing a delay in the response and processing time. To overcome this latency issue, cloud computing has been proposed. However, cloud computing does not perform well when there are large distances from the cloud to the consumers. Fog computing solves this issue. In this paper, a cloud-fog computing system is presented to achieve accurate load balancing. The hybridization of the whale optimization algorithm with the bat algorithm (WOA-BAT) is proposed for load balancing. The model performance is compared to state-of-the-art load balancing techniques such as throttled, round robin, whale optimization and particle swarm optimization algorithms in terms of processing and response time. The results reveal that the proposed WOA-BAT has better results in terms of response time than the other algorithms, with a 4.3% improvement compared to RR and TH. It also outperforms all the algorithms in terms of processing time by at least 22.3%.
... Itrat [34] proposed a novel service broker scheme and compared it with two LB algorithms, which are throttled, and RR. In [35], the authors have studied the LB issue using RR and SJF. Authors in [36] discussed many issues in FC such as scalability, adaptability, and connection between end devices and fog servers. ...
Article
Full-text available
Fog computing (FC) designates a decentralized computing structure placed between the devices that produce data and the cloud. Such a flexible structure empowers users to place resources to increase performance. However, limited resources and low-delay services obstruct the application of new virtualization technologies in the task scheduling and resource management of fog computing. Scheduling and load balancing (LB) in cloud computing have been widely studied; however, countless efforts in LB have been proposed for fog architectures. This presents some enticing challenges in solving the problem of how tasks are routed between different physical devices, between fog nodes and the cloud. Within the fog, due to the mass and heterogeneity of devices, scheduling is very difficult, and there are still few studies that have been conducted. LB is a very interesting and important study area in FC as it aims to achieve high resource utilization. There are various challenges in LB such as security and fault tolerance. The main objective of this paper is to introduce an effective dynamic load balancing technique (EDLB) using a convolutional neural network and modified particle swarm optimization, which is composed of three main modules, namely: (i) fog resource monitor (FRM), (ii) CNN-based classifier (CBC), and (iii) optimized dynamic scheduler (ODS). The main purpose of EDLB is to achieve LB in the FC environment via a dynamic real-time scheduling algorithm. This paper studies the FC architecture for healthcare system applications. The FRM is responsible for monitoring each server's resources and saving each server's data into a table called the fog resources table. The CNN-based classifier (CBC) is responsible for classifying each fog server as suitable or not suitable. The optimized dynamic scheduler (ODS) is responsible for assigning the incoming process to the most appropriate server. Compared with previous LB algorithms, EDLB reduces the response time and achieves high resource utilization; hence, it is an efficient way to ensure continuous service. Accordingly, EDLB is simple and efficient in real-time fog computing systems, such as in the case of a healthcare system. Although several methods in LB for FC have been introduced, they have many limitations. EDLB overcomes these limitations and achieves high performance in various scenarios. It achieved better makespan, average resource utilization and load balancing level as compared to the previously mentioned LB algorithms.
... To address this objective, BBU task allocation management is an essential mechanism to render the Fog v-RAN a cost-efficient solution [33]. To this end, the use of traditional algorithms like the RR and LCT techniques has been widely analyzed in [34]. Furthermore, meta-heuristic algorithms such as ant colony optimization and Max-min algorithms have been introduced in [35], focusing mainly on two objectives, i.e., minimizing the computation time and improving resource sharing. ...
Article
Full-text available
The fifth generation wireless technology (5G) has been developed with an aim to provide ubiquitous and scalable connectivity for IoT nodes. Likewise, the Cloud Radio Access Network (C-RAN) architecture can be exploited to enable efficient network access to IoT nodes. Nevertheless, the 5G C-RAN architecture is based on large data-centers geographically located far apart, which introduces an inevitable overhead. Therefore, to supply real-time data services near the data terminals, fog computing emerges as a promising solution. However, constrained physical fog resources and delay-sensitive services hinder the application of new virtualization technologies in the Baseband Unit (BBU) task allocation management of the fog network. To tackle these challenges, a task allocation framework for hierarchical software-defined Fog virtual Radio Access Networks (v-RANs) is proposed in this paper. Precisely, we apply an enhanced Ant Colony Optimization (ACO) in combination with a Max-min algorithm to efficiently determine the optimal path for BBU task allocation management, while minimizing the transmission time for parallel task execution scheduling. Experimental results demonstrate that the queue delay in our approach is 98.38% and 98.82% lower than the Round-Robin (RR) algorithm and Least Connection Technique (LCT), respectively.
... A Fog-2-Cloud based framework is presented by Sakeena et al. in [34]. They considered the residential side to improve the services of the cloud in terms of latency. ...
Thesis
Full-text available
Demand Side Management (DSM) is an effective and robust scheme for energy management, Peak to Average Ratio (PAR) reduction and cost minimization. Many DSM techniques have been proposed for industrial, residential and commercial areas in recent years. The Smart Grid (SG) gives consumers and the utility the opportunity of two-way digital communication. The SG balances and monitors the electricity consumption of the consumer. Moreover, it reduces the cost and energy consumption of the utility and the consumer. There are several Smart Cities (SCs) in the world. These SCs contain numerous Smart Societies (SSs), which have a number of Smart Buildings (SBs) that contain Smart Homes (SHs). When requests are sent from the consumer side to acquire resources, storage issues also increase. To make the environment more efficient and enhance the performance of the SG, the cloud is introduced. Reducing delay and latency in the cloud computing environment is a challenging task for the research community. Resources are required to process and store data in the cloud. To overcome these challenges, another infrastructure, the fog computing environment, is introduced, which plays an important role in enhancing the efficiency of the cloud. Virtual Machines (VMs), to which consumers' requests are allocated, are installed at the fog. In this thesis, a cloud and fog based integrated environment is proposed. The aim of this proposed environment is to overcome the delay and latency issues of the cloud and to enhance the performance of the fog. When there is a large number of incoming requests on the fog and cloud, load balancing is another major issue, which is also resolved in this thesis. Nature-inspired algorithms such as the Genetic Algorithm (GA), Crow Search Algorithm (CSA), Honey Bee (HB), Round Robin (RR), Particle Swarm Optimization (PSO) and Improved PSO using Levy Walk (IPSOLW), Cuckoo Search (CS), CS with Levy distribution (CLW), the BAT algorithm and Flower Pollination (FP) are proposed and implemented in this thesis. The aim of the proposed GA and CSA is to schedule the load and minimize the PAR and cost in the SG environment. These algorithms also contribute to the cloud and fog based integrated environment of the thesis. To balance the load, CSA, HB, IPSOLW, CLW and FP are proposed. The proposed algorithms are compared with the implemented RR, PSO and BAT. The comparative analysis of these proposed and implemented algorithms is done on the basis of service broker policies. The Closest Data Center (CDC), Optimize Response Time (ORT), Reconfigure Dynamically with Load and the proposed Advance Service Broker Policy (ASP) are also implemented in this thesis to evaluate the results of the thesis algorithms. On the basis of these policies, using the aforementioned nature-inspired algorithms, the Response Time (RT), Processing Time (PT), VM cost, Data Transfer (DT) cost, Micro Grid (MG) cost and Total Cost (TC) are minimized in the cloud and fog based integrated environment.
... In this work (an extension of [14]), we have proposed and implemented the Consumer to Fog to Cloud (C2F2C) based framework for efficient resource management in residential buildings. Our main contributions are described below: ...
Article
Full-text available
In this work, a new orchestration of a Consumer to Fog to Cloud (C2F2C) based framework is proposed for efficiently managing the resources in residential buildings. C2F2C is a three-layered framework consisting of a cloud layer, a fog layer and a consumer layer. The cloud layer deals with on-demand delivery of the consumers' demands. Resource management is intelligently done through the fog layer because it reduces the latency and enhances the reliability of the cloud. The consumer layer is based on the residential users and their electricity demands from the six regions of the world. These regions are categorized on the basis of the continents. Two control parameters are considered: clusters of buildings and load requests, whereas four performance parameters are considered: Requests Per Hour (RPH), Response Time (RT), Processing Time (PT) and cost in terms of Virtual Machines (VMs), Microgrids (MGs) and data transfer. These parameters are analysed by the round robin algorithm, the equally spread current execution algorithm and our proposed shortest job first algorithm. Two scenarios are used in the simulations: resource allocation using MGs, and resource allocation using MGs and power storage devices, for checking the effectiveness of the proposed work. The simulation results of the proposed technique show that it has outperformed the previous techniques in terms of the above-mentioned parameters. There exists a tradeoff between the PT and RT and the cost of VMs, MGs and data transfer.
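As a side note on the baselines named in this abstract, the snippet below sketches the equally spread current execution (ESCE) idea: each new request goes to the VM currently holding the fewest active requests. The counters are illustrative only, not the paper's configuration.

```python
# Minimal sketch of the Equally Spread Current Execution (ESCE) idea used as
# a baseline in this work: route each incoming request to the VM that
# currently holds the fewest active requests.
def esce_assign(active_counts):
    """active_counts: list of currently running requests per VM.
    Returns the VM index chosen for the next request."""
    vm = min(range(len(active_counts)), key=lambda i: active_counts[i])
    active_counts[vm] += 1        # the chosen VM now has one more active request
    return vm

counts = [2, 0, 3, 1]
print(esce_assign(counts), counts)   # -> 1 [2, 1, 3, 1]
```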
... Shortest Job First (SJF) is proposed in [15] to tackle the load. The algorithm is implemented in two different scenarios in the paper. ...
Conference Paper
Energy Management System (EMS) is necessary to maintain the balance between electricity consumption and distribution. The huge number of Internet of Things (IoT) devices generates a complex amount of data, which causes latency in the processing time of the Smart Grid (SG). Cloud computing provides a platform for high-speed processing. The integration of the SG with cloud computing helps to improve the EMS for consumers and the utility. In this paper, in order to enhance the speed of cloud computing, edge computing, also known as fog computing, is introduced. Fog computing is a complement of cloud computing, performing on behalf of the cloud. In the proposed scenario, clusters are taken from all over the world based on six regions. Each region contains two clusters and two fogs. Fogs are assigned using service broker policies to process the requests. Each fog contains four to nine Virtual Machines (VMs). For the allocation of VMs, the Round Robin (RR), Throttled and Ant Colony Optimization (ACO) algorithms are used. The paper is based on a comparative discussion of these load balancing algorithms.
... In [11], the authors proposed a cloud to fog to consumer (C2F2C) based communication model for demand-side management (DSM). Four performance parameters: response time, cost, processing time and requests per hour, are considered. ...
Conference Paper
Smart grid (SG) provides observable energy distribution where the utility and consumers are enabled to control and monitor their production, consumption, and pricing in almost real time. Due to the increase in the number of smart devices, the complexity of the SG increases. To overcome these problems, this paper proposes a cloud-fog based SG paradigm. The proposed model comprises three layers: a cloud layer, a fog layer, and an end-user layer. The first layer consists of clusters of buildings. A renewable energy source is installed in each building so that buildings become self-sustainable with respect to generation and consumption. The second layer is the fog layer, which manages users' requests and network resources and acts as a middle layer between end users and the cloud. The fog creates virtual machines to process multiple user requests simultaneously, which increases the overall performance of the communication system. An MG is connected with the fogs to fulfill the energy requirements of users. The top layer is the cloud layer. All the fogs are connected with a central cloud. The cloud provides services to end users by itself or through the fog. For efficient allocation of fog resources, an artificial bee colony (ABC) load balancing algorithm is proposed. Finally, simulation is done to compare the performance of ABC with three other load balancing algorithms: particle swarm optimization (PSO), round robin (RR) and throttled. Considering the proposed scenario, the results of these algorithms are compared and it is concluded that the performance of ABC is better than that of RR, PSO and throttled.
... Authors in [9] proposed a fog to cloud based framework considering the residential sector to enhance the service of the cloud in terms of latency. Moreover, the authors compare the results of shortest job first with the existing algorithms: round robin and equally spread current execution. ...
Chapter
The concept of cloud computing is becoming popular with each passing day. Clouds provide a virtual environment for computation and storage. The number of cloud users is increasing drastically, which may cause network congestion problems. To avoid such a situation, fog computing is used along with cloud computing. The cloud acts as a global system and the fog works locally. As the requests from users are increasing, load balancing is also required on the fog side. In this paper, a three-layered cloud and fog based architecture is proposed. Fog computing acts as a middle layer between users and the cloud. Users' requests are handled at the fog layer and the filtered data is forwarded to the cloud. A single fog has multiple virtual machines (VMs) that are assigned to the users' requests. The load balancing problem of these requests is managed by the proposed weighted cuckoo search (WCS) algorithm. Simulations are carried out to evaluate the performance of the proposed model. Results are presented in the form of bar graphs for comparison, and detailed values of each parameter are presented in tables. Results show the effectiveness of the proposed technique.
... A Fog-2-Cloud based framework is presented by Sakina et al. in [11]. In this paper, they considered the residential side to improve the services of the cloud in terms of latency. ...
Chapter
Full-text available
In this paper, Smart Grid (SG) efficiency is improved by introducing a cloud-based environment. To access the services and storage of the cloud, a large number of requests are entertained from Smart Homes (SHs). These SHs exist in clusters of smart buildings. When the number of requests increases, the delay, latency and response time also increase. To overcome these issues, the fog is introduced, which acts as an intermediate layer between the cloud and the consumer. Five Micro Grids (MGs) are attached to each cluster of smart buildings to manage its requests. By using the fog based environment, the delay and latency decrease and the response time improves with less processing time. To handle the load on the cloud, different load balancing algorithms and service broker policies exist. In order to manage the load, Honey Bee (HB) is implemented. HB is compared with the existing Round Robin (RR) algorithm and gives better results than RR.
Article
Fog computing is a paradigm that allows the provisioning of computational resources and services at the edge of the network, closer to the end devices and users, complementing cloud computing. The heterogeneity and large number of devices are challenges to obtaining optimized resource allocation in this environment. Over time, some surveys have been presented on resource management in fog computing. However, they now lack a broader and deeper view of this subject, considering the recent publications. This article presents a systematic literature review with a focus on resource allocation for fog computing, in a more comprehensive way than the existing works. The survey is based on 108 selected publications from 2012 to 2022. The analysis has exposed their main techniques, metrics used, evaluation tools, virtualization methods, architectures, and domains where the proposed solutions were applied. The results show an updated and comprehensive view of resource allocation in fog computing. The main challenges and open research questions are discussed, and a new fog computing resource management cycle is proposed.
Article
In this paper, the problem of strategic resource management in fog networks is discussed while considering a pay-per-use model, similar to that used in the cloud. Fog networks are distributed in nature, because of which resource management in these networks is an NP-hard problem. In the existing literature, researchers focused on resource management in fog networks while considering the network delay constraint. However, none of these works considered the effect of the pricing policy while deciding on resource allocation. Hence, there is a need for pricing-based resource management in fog networks. In this work, we propose a dynamic pricing-based resource allocation scheme, named FogPrime, for analyzing the trade-off between the service delay and the associated price. In FogPrime, we use a dynamic coalition-formation game to decide the resource allocation strategy locally within a cluster. On the other hand, we use a utility game to choose the fog nodes strategically while considering the aforementioned trade-off. Through simulation, we observed that FogPrime outperforms the existing schemes in terms of the satisfaction of the involved entities, i.e., the end-users and the fog nodes. Using FogPrime, the satisfaction of the end-users and the fog nodes increases by 24.49% and 47.82%, respectively. Additionally, we observe that FogPrime ensures an even distribution of profit among the fog nodes and enables the end-users to pay less, at most by 15.88-47.27%.
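FogPrime's coalition-formation and utility games are not reproduced here; the snippet below only sketches, under assumed weights and node offers, the delay-versus-price trade-off the scheme targets, by scoring candidate fog nodes with a weighted sum of expected delay and quoted price.

```python
# Hedged sketch of a delay-vs-price trade-off, not the FogPrime game itself.
# The weight and the offer list are illustrative assumptions.
def choose_fog(offers, delay_weight=0.6):
    """offers: list of dicts with 'delay_ms' and 'price'. Lower score wins."""
    def score(offer):
        return delay_weight * offer["delay_ms"] + (1 - delay_weight) * offer["price"]
    return min(offers, key=score)

offers = [{"id": "f1", "delay_ms": 20, "price": 50},
          {"id": "f2", "delay_ms": 60, "price": 10}]
print(choose_fog(offers)["id"])   # -> f1 with delay_weight=0.6
```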
Chapter
In this paper, a cloud-fog computing platform is presented which efficiently provides its services to residential areas via the Internet using remote servers. The increasing number of Internet of Things (IoT) devices and applications causes large data traffic on the cloud system, which increases the response time and cost. To overcome this situation, the fog computing concept is introduced in this paper. It also reduces the load on the cloud and the response-time latency on the energy consumption side. Fogs have less storage capacity as compared to the cloud; however, they have all the services available on the cloud side. The Smart Grid (SG) is a modern electric grid, with smart meters and smart appliances, which efficiently manages resource allocation. In this work, we consider a large geographical residential area divided into six regions, where each region has a fog server to manage the energy requests coming from the end users. Each fog has a number of Virtual Machines (VMs) to efficiently manage the different user requests in minimum time and at minimum cost. Micro Grids (MGs) are small-scale power grids which manage the energy consumption, reducing the time and cost of end users, and are connected to the fog edges. Different load balancing and optimization techniques are used in cloud computing for efficient resource allocation to smart residential areas. In this paper, a Random load balancing algorithm is used for reliable and efficient task scheduling to overcome the latency and user cost in the cloud computing environment.
Conference Paper
Full-text available
In this article, a resource allocation model is presented in order to optimize the resources in residential buildings. The whole world is categorized into six regions depending on its continents. The fog helps cloud computing connectivity on the edge network. It also saves data temporarily and sends it to the cloud for permanent storage. Each continent has one fog which deals with three clusters having 100 buildings. Microgrids (MGs) are used for the effective electricity distribution among the consumers. The control parameters considered in this paper are: clusters, number of buildings, number of homes and load requests, whereas the performance parameters are: cost, Response Time (RT) and Processing Time (PT). Particle Swarm Optimization with Simulated Annealing (PSOSA) is used for load balancing of Virtual Machines (VMs) using multiple service broker policies. The service broker policies in this paper are: new dynamic service proximity, new dynamic response time and enhanced new response time. The results of the proposed service broker policies with PSOSA are compared with the existing policy: new dynamic service proximity. New dynamic response time and enhanced new dynamic response time perform better than the existing policy in terms of cost, RT and PT. However, the maximum RT and PT of the proposed policies are higher than those of the existing policy. We have used CloudAnalyst for conducting simulations for the proposed scheme.
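To make the service broker policies mentioned above concrete, here is a minimal hedged sketch contrasting a proximity-based choice of fog region with a response-time-based one. The region records and fields are illustrative assumptions, not the paper's CloudAnalyst configuration.

```python
# Hedged sketch of two service-broker choices like those compared in this
# work: a proximity-based policy (pick the closest fog region) versus a
# response-time-based policy (pick the region with the lowest recent RT).
def closest_region(regions):
    return min(regions, key=lambda r: r["network_delay_ms"])

def best_response_time(regions):
    return min(regions, key=lambda r: r["avg_response_ms"])

regions = [
    {"name": "R1", "network_delay_ms": 10, "avg_response_ms": 140},
    {"name": "R2", "network_delay_ms": 35, "avg_response_ms": 60},
]
print(closest_region(regions)["name"], best_response_time(regions)["name"])  # R1 R2
```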
Article
Full-text available
Software defined networking (SDN) brings about innovation, simplicity in network management, and configuration in network computing. Traditional networks often lack the flexibility to bring into effect instant changes because of the rigidity of the network and also the over dependence on proprietary services. SDN decouples the control plane from the data plane, thus moving the control logic from the node to a central controller. A wireless sensor network (WSN) is a great platform for low-rate wireless personal area networks with little resources and short communication ranges. However, as the scale of WSN expands, it faces several challenges, such as network management and heterogeneous-node networks. The SDN approach to WSNs seeks to alleviate most of the challenges and ultimately foster efficiency and sustainability in WSNs. The fusion of these two models gives rise to a new paradigm: Software defined wireless sensor networks (SDWSN). The SDWSN model is also envisioned to play a critical role in the looming Internet of Things paradigm. This paper presents a comprehensive review of the SDWSN literature. Moreover, it delves into some of the challenges facing this paradigm, as well as the major SDWSN design requirements that need to be considered to address these challenges.
Conference Paper
Full-text available
Traditional electric generation based on fossil fuel consumption threatens humanity with global warming, climate change, and increased carbon emissions. Renewable resources such as wind or solar power are the solution to these problems. The smart grid is the only choice to integrate green power resources into the energy distribution system, control power usage, and balance the energy load. Smart grids employ smart meters, which are responsible for two-way flows of electricity information to monitor and manage the electricity consumption. In a large smart grid, smart meters produce a tremendous amount of data that is hard to process, analyze and store even with cloud computing. Fog computing is an environment that offers a place for collecting, computing and storing smart meter data before transmitting them to the cloud. This environment acts as a bridge in the middle of the smart grid and the cloud. It is geographically distributed and overhauls cloud computing via additional capabilities including reduced latency, increased privacy and locality for smart grids. This study overviews fog computing in smart grids by analyzing its capabilities and issues. It presents the state-of-the-art in the area, defines a fog computing based smart grid, and gives a use case scenario for the proposed model.
Article
Full-text available
This paper focuses on the procurement of load shifting service by optimally scheduling the charging and discharging of PEVs in a decentralized fashion. We assume that the energy flow between PEVs and the grid is bidirectional, i.e., PEVs can also release energy back into the grid as distributed generation, which is known as vehicle-to-grid (V2G). The optimal scheduling problem is then formulated as a mixed discrete programming (MDP) problem, which is NP-hard and extremely difficult to solve directly. To get over this difficulty, we propose a solvable approximation of the MDP problem by exploiting the shape feature of the base demand curve during the night, and develop a decentralized algorithm based on iterative water-filling. Our algorithm is decentralized in the sense that the PEVs compute locally and communicate with an aggregator. The advantages of our algorithm include reduction in computational burden and privacy preserving. Simulation results are given to show the performance of our algorithm.
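In the spirit of the water-filling idea described above (though not the authors' decentralized algorithm), the sketch below performs a simple centralized valley-fill: PEV charging energy is pushed into the hours where the base demand curve is lowest, up to a per-hour charger limit. The demand values, energy budget and charger limit are assumptions for illustration.

```python
# Hedged valley-filling sketch: schedule PEV charging energy into the hours
# where the base demand curve is lowest, raising those hours toward a common
# level found by bisection. Numbers and tolerance are illustrative.
def valley_fill(base, energy, max_rate):
    """Return per-hour charging so that base+charge approaches a flat level."""
    lo, hi = min(base), max(base) + energy            # bracket the water level
    while hi - lo > 1e-6:
        level = (lo + hi) / 2
        charge = [min(max(level - b, 0.0), max_rate) for b in base]
        if sum(charge) > energy:
            hi = level                                 # level too high: too much energy
        else:
            lo = level
    return [min(max(lo - b, 0.0), max_rate) for b in base]

base_demand = [5.0, 3.0, 2.0, 2.5, 4.0, 6.0]           # overnight base load (kW)
print([round(c, 2) for c in valley_fill(base_demand, energy=4.0, max_rate=3.0)])
```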
Article
Full-text available
This paper gives a comprehensive discussion on applying the cloud computing technology as the new information infrastructure for the next-generation power system. First, this paper analyzes the main requirements of the future power grid on the information infrastructure and the limitations of the current information infrastructure. Based on this, a layered cloud-based information infrastructure model for next-generation power grid is proposed. Thus, this paper discussed how different categories of the power applications can benefit from the cloud-based information infrastructure. For the demonstration purpose, this paper develops three specific cloud-enabled power applications. The first two applications demonstrate how to develop practical compute-intensive and data-intensive power applications by utilizing different layered services provided by the state-of-the-art public cloud computing platforms. In the third application, we propose a cloud-based collaborative direct load control framework in a smart grid and show the merits of the cloud-based information infrastructure on it. Some cybersecurity considerations and the challenges and limitations of the cloud-based information infrastructure are also discussed.
Article
Full-text available
The Smart Grid, regarded as the next-generation power grid, uses two-way flows of electricity and information to create a widely distributed automated energy delivery network. In this article, we survey the literature through 2011 on the enabling technologies for the Smart Grid. We explore three major systems, namely the smart infrastructure system, the smart management system, and the smart protection system, and propose possible future directions in each. Specifically, for the smart infrastructure system, we explore the smart energy subsystem, the smart information subsystem, and the smart communication subsystem. For the smart management system, we explore various management objectives, such as improving energy efficiency, profiling demand, maximizing utility, reducing cost, and controlling emissions, as well as the management methods used to achieve them. For the smart protection system, we explore failure protection mechanisms that improve the reliability of the Smart Grid, along with its security and privacy issues.
Article
Full-text available
For 100 years, there has been no change in the basic structure of the electrical power grid. Experience has shown that the hierarchical, centrally controlled grid of the 20th century is ill-suited to the needs of the 21st century. To address the challenges of the existing power grid, the new concept of the smart grid has emerged. The smart grid can be considered a modern electric power grid infrastructure offering enhanced efficiency and reliability through automated control, high-power converters, a modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy, and network availability. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues of smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective is to provide a contemporary look at the current state of the art in smart grid communications and to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages, and research challenges of the smart grid, and provoke interest among the research community to further explore this promising area.
Article
Smart Grid (SG) technology represents an unprecedented opportunity to move the energy industry into a new era of reliability, availability, and efficiency that will contribute to our economic and environmental health. On the other hand, the emergence of Electric Vehicles (EVs) promises multiple benefits to both the power and transportation industry sectors, but it is also likely to affect SG reliability by consuming massive amounts of energy. The plugging-in of EVs at public supply stations must therefore be controlled and scheduled in order to reduce the peak load. This paper considers the problem of plugging EVs in at public supply stations (EVPSS). A new communication architecture for the smart grid and cloud services is introduced, and scheduling algorithms are proposed to assign priority levels and optimize the waiting time to plug in at each EVPSS. To the best of our knowledge, this is one of the first papers investigating these issues using a new cloud-computing-based network architecture for the smart grid. We evaluate our approach via extensive simulations and compare it with two other recently proposed works, based on a real energy supply scenario in Toronto. Simulation results demonstrate the effectiveness of the proposed approach when considering real EV charging-discharging loads at peak-hour periods.
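The abstract above does not spell out the priority rule, so the following is only a hedged toy of the general idea (our own assumptions: priority is given to the most depleted vehicles, and chargers serve EVs in fixed-length waves), not the paper's scheme:

```python
# Hedged illustration: assign plug-in slots at a public supply station by
# priority, where a lower state of charge means a higher priority, and report
# the resulting waiting times. Names and parameters here are hypothetical.
import heapq

def assign_slots(evs, num_chargers, slot_minutes=30):
    """evs: list of (ev_id, state_of_charge in %). Returns {ev_id: wait_minutes}."""
    heap = [(soc, ev_id) for ev_id, soc in evs]   # min-heap: most depleted first
    heapq.heapify(heap)
    waits, position = {}, 0
    while heap:
        _, ev_id = heapq.heappop(heap)
        # EVs are served in waves of `num_chargers`; each wave adds one slot of wait.
        waits[ev_id] = (position // num_chargers) * slot_minutes
        position += 1
    return waits

evs = [("ev1", 80), ("ev2", 15), ("ev3", 40), ("ev4", 5), ("ev5", 55)]
print(assign_slots(evs, num_chargers=2))
# ev4 and ev2 (lowest charge) plug in immediately; ev1 waits the longest.
```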
Article
We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as "flat" as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of the EVs, even if the EVs do not update their charging profiles in every iteration and use a potentially outdated control signal when they do update. Moreover, the algorithm only requires each EV to solve its own local problem, so its implementation requires little computation capability. We also extend the algorithm to track a given load profile and to a real-time implementation.
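A hedged toy of the broadcast-and-respond loop described above (a simplified stand-in, not the paper's exact updates): the utility broadcasts a control signal proportional to the current aggregate load, each EV greedily reshuffles its charging toward the cheapest hours before its deadline, and a damped (convex-combination) update plays the role of the penalty term the real algorithm uses to guarantee convergence:

```python
# Toy broadcast-and-respond valley-filling loop (our assumptions, not the
# paper's algorithm): signal = aggregate load; EVs respond greedily; the
# damped update keeps each plan feasible and tames oscillation.

def ev_response(signal, need, max_rate, deadline):
    """Greedy local plan: fill energy into the lowest-signal hours before the deadline."""
    plan, remaining = [0.0] * len(signal), need
    for t in sorted(range(deadline), key=lambda t: signal[t]):
        plan[t] = min(max_rate, remaining)
        remaining -= plan[t]
        if remaining <= 0:
            break
    return plan

def run_protocol(base_load, evs, iters=20, step=0.5):
    """evs: list of dicts with 'need' (kWh), 'max_rate' (kW), 'deadline' (hour index)."""
    plans = [[0.0] * len(base_load) for _ in evs]
    for _ in range(iters):
        aggregate = [base_load[t] + sum(p[t] for p in plans)
                     for t in range(len(base_load))]          # broadcast control signal
        for i, ev in enumerate(evs):
            greedy = ev_response(aggregate, ev["need"], ev["max_rate"], ev["deadline"])
            # Convex combination of two feasible plans stays feasible.
            plans[i] = [(1 - step) * a + step * b for a, b in zip(plans[i], greedy)]
    return plans

base = [32, 27, 22, 20, 21, 25, 30, 36]
evs = [{"need": 7, "max_rate": 3, "deadline": 6},
       {"need": 5, "max_rate": 3, "deadline": 8},
       {"need": 9, "max_rate": 3, "deadline": 8}]
plans = run_protocol(base, evs)
print([round(base[t] + sum(p[t] for p in plans), 1) for t in range(8)])
```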
Article
Energy is one of the most valuable resources of the modern era and needs to be consumed in an optimized manner through the intelligent usage of various smart devices, which are major sources of energy consumption nowadays. With the popularity of low-voltage DC appliances such as LEDs, computers, and laptops, there is a need to design new solutions for self-sustainable smart energy buildings containing these appliances. These smart buildings constitute the next-generation smart cities. Keeping focus on these points, this article proposes a cloud-assisted DC nanogrid for self-sustainable smart buildings in next-generation smart cities. As there may be a large number of such smart buildings in different smart cities in the near future, a huge amount of data with respect to the demand and generation of electricity is expected to be generated by them. This data would be of heterogeneous types, as it would be generated by different types of appliances in these smart buildings. To handle this situation, we have used a cloud-based infrastructure to make intelligent decisions with respect to the energy usage of various appliances. This results in an uninterrupted DC power supply to all low-voltage DC appliances with minimal dependence on the grid; hence, the extra burden on the main grid in peak hours is reduced, as buildings in smart cities become self-sustainable with respect to their energy demands. In the proposed solution, a collection of smart buildings in a smart city is taken for experimental study, controlled by different data centers managed by different utilities. These data centers generate regular alerts on the excessive usage of energy by end users' appliances. All such data centers across different smart cities are connected to the cloud-based infrastructure, which is the overall manager making all decisions about energy automation in smart cities. The efficacy of the proposed scheme is evaluated with respect to various performance metrics, such as satisfaction ratio, delay incurred, overhead generated, and demand-supply gap. With respect to these metrics, the performance of the proposed scheme is found to be good for implementation in a real-world scenario.
Article
Demand Side Management (DSM) is an important application of the future Smart Grid (SG). DSM programs allow consumers to participate in the operation of the electric grid by reducing or shifting their electricity usage during peak periods. In this paper we therefore propose a two-tier cloud-based demand side management scheme to control the residential load of customers equipped with local power generation and storage facilities as auxiliary sources of energy. We consider a power system consisting of multiple regions and equipped with a number of microgrids. In each region, an edge cloud is utilized to find the optimal power consumption schedule for customer appliances in that region. We propose a two-level optimization algorithm with a linear multi-level cost function. At the edge cloud, the power consumption level of local storage and the amount of power demanded from both local storage facilities and the power grid are scheduled using a bi-level optimization approach. The core cloud then gathers information on the total demand from consumers in different regions and finds the optimal power consumption schedule for each microgrid in the power system. Simulation results show that the proposed model reduces consumption cost for the customers and improves the power grid in terms of peak load and peak-to-average load ratio.
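The two-tier structure can be pictured with a deliberately simplified toy (our assumptions, not the paper's bi-level optimization): each edge cloud shifts its region's flexible load to the cheapest hours of a multi-level tariff, and the core cloud aggregates the regional schedules to report the peak and the peak-to-average ratio:

```python
# Toy two-tier DSM sketch (hypothetical regions and tariff): edge clouds plan
# locally, the core cloud only aggregates and reports grid-level metrics.

def edge_schedule(fixed_load, flexible_kwh, price, max_extra_per_hour):
    """Edge cloud: place the region's flexible energy into its cheapest hours."""
    schedule, remaining = list(fixed_load), flexible_kwh
    for t in sorted(range(len(price)), key=lambda t: price[t]):
        add = min(max_extra_per_hour, remaining)
        schedule[t] += add
        remaining -= add
        if remaining <= 0:
            break
    return schedule

def core_aggregate(regional_schedules):
    """Core cloud: sum the regional schedules and compute peak and PAR."""
    total = [sum(hour) for hour in zip(*regional_schedules)]
    peak = max(total)
    par = peak / (sum(total) / len(total))
    return total, peak, par

price = [3, 3, 2, 2, 4, 6, 6, 5]               # multi-level tariff per hour
regions = [
    edge_schedule([50, 48, 45, 44, 60, 75, 80, 70], 40, price, 15),
    edge_schedule([30, 28, 26, 25, 35, 45, 50, 42], 25, price, 10),
]
total, peak, par = core_aggregate(regions)
print(total, peak, round(par, 2))
```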
Article
Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses hardware and software, resides at the network edge but also over access networks and among end users, and includes both the data plane and the control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily on the networking context of IoT.
Article
With the rapid increase of monitoring devices and controllable facilities on the demand side of electricity networks, more solid information and communication technology (ICT) resources are required to support the development of demand side management (DSM). Unlike traditional computation in power systems, which customizes ICT resources for each application separately, DSM especially demands scalability and economic efficiency, because more and more stakeholders participate in the computation process. This paper proposes a novel cost-oriented optimization model for a cloud-based ICT infrastructure that allocates cloud computing resources in a flexible and cost-efficient way. Uncertain factors, including imprecise computation load prediction and the unavailability of computing instances, can also be considered in the proposed model. A modified priority list algorithm is specially developed to solve the proposed optimization model efficiently and is compared with a mature simulated-annealing-based algorithm. Comprehensive numerical studies are conducted to demonstrate the effectiveness of the proposed cost-oriented model in reducing the operating cost of the cloud platform in DSM.
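To make the priority-list flavor concrete, here is a hedged sketch of a generic priority-list allocation (not the paper's "modified priority list" algorithm; instance types, capacities, and the reserve margin are hypothetical): rank instance types by cost per unit of capacity and commit the cheapest ones until the predicted DSM computation load, inflated to hedge against forecast error, is covered:

```python
# Generic priority-list allocation sketch (our assumptions): cheapest capacity
# first, with a reserve margin standing in for prediction error/unavailability.

def priority_list_allocate(instance_types, predicted_load, reserve=0.2):
    """instance_types: list of (name, capacity, cost_per_hour, available_count)."""
    target = predicted_load * (1 + reserve)              # margin for forecast error
    ranked = sorted(instance_types, key=lambda x: x[2] / x[1])   # cost per unit capacity
    plan, covered, cost = [], 0.0, 0.0
    for name, capacity, price, available in ranked:
        for _ in range(available):
            if covered >= target:
                break
            plan.append(name)
            covered += capacity
            cost += price
    return plan, covered, cost

types = [("small", 10, 0.5, 8), ("medium", 25, 1.1, 4), ("large", 60, 3.0, 2)]
plan, covered, cost = priority_list_allocate(types, predicted_load=120)
print(plan, covered, round(cost, 2))   # mediums first (cheapest per unit), then smalls
```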
Article
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
Article
In the context of the electricity market and the smart grid, the uncertainty of electricity prices due to the high complexity of market operation significantly affects the profit and behavior of electric vehicle (EV) aggregators. An information gap decision theory-based approach is proposed in this paper to manage the revenue risk of the EV aggregator caused by the information gap between the forecasted and actual electricity prices. The proposed decision-making framework can offer effective strategies to either guarantee a predefined profit for risk-averse decision-makers or pursue a windfall return for risk-seeking decision-makers. Day-ahead charging and discharging scheduling strategies of the EV aggregator are derived using the proposed model, considering the risks introduced by electricity price uncertainty. The results of case studies validate the effectiveness of the proposed framework under various price uncertainties.
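For orientation, the generic information-gap robustness and opportuneness functions that this kind of formulation typically builds on can be written as below; the notation is ours (forecast price, realized price, aggregator schedule, and profit function), a sketch rather than the paper's exact model:

```latex
% Generic IGDT functions (our notation): \tilde{\lambda} forecast price,
% \lambda realized price, q the charging/discharging schedule, B(q,\lambda)
% the aggregator profit, B_c / B_w the critical / windfall profit targets.
\begin{align}
  U(\alpha,\tilde{\lambda}) &= \{\lambda : |\lambda - \tilde{\lambda}| \le \alpha\,\tilde{\lambda}\},\\
  \hat{\alpha}(q, B_c) &= \max\Big\{\alpha \ge 0 :
      \min_{\lambda \in U(\alpha,\tilde{\lambda})} B(q,\lambda) \ge B_c\Big\}
      \quad\text{(risk-averse: guarantee the critical profit)},\\
  \hat{\beta}(q, B_w) &= \min\Big\{\alpha \ge 0 :
      \max_{\lambda \in U(\alpha,\tilde{\lambda})} B(q,\lambda) \ge B_w\Big\}
      \quad\text{(risk-seeking: pursue the windfall profit)}.
\end{align}
```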
Article
By introducing microgrids, energy management is required to control the power generation and consumption of residential, industrial, and commercial domains, e.g., in residential microgrids and homes. Energy management may also help us reach zero net energy (ZNE) in the residential domain. Improvements in technology, cost, and feature size have enabled devices everywhere to be connected and interactive, in what is called the Internet of Things (IoT). The increasing complexity and data, due to the growing number of devices such as sensors and actuators, require powerful computing resources, which may be provided by cloud computing. However, scalability has become a potential issue in cloud computing. In this paper, fog computing is introduced as a novel platform for energy management. The scalability, adaptability, and open-source software/hardware featured in the proposed platform enable users to implement energy management with customized control-as-a-service, while minimizing implementation cost and time-to-market. To demonstrate energy management as a service over a fog computing platform in different domains, two prototypes of home energy management (HEM) and microgrid-level energy management have been implemented and evaluated experimentally.
Article
The smartphone is a typical cyber-physical system (CPS). It must be low in energy consumption and highly reliable to deal with the simple but frequent interactions with the cloud, which constitute the cloud-integrated CPS. Dynamic voltage scaling (DVS) has emerged as a critical technique for power management by lowering the supply voltage and frequency of processors. In this paper, based on the DVS technique, we propose a novel Energy-aware Dynamic Task Scheduling (EDTS) algorithm to minimize the total energy consumption of smartphones while satisfying stringent time constraints and the probability constraint for applications. Experimental results indicate that the EDTS algorithm can significantly reduce energy consumption for CPS compared to the critical path scheduling method and the parallelism-based scheduling algorithm.
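The DVS principle underlying this kind of scheduler can be illustrated with a toy model (our assumptions, not the EDTS algorithm itself): dynamic energy per cycle scales roughly with the square of the supply voltage, and frequency scales roughly with voltage, so stretching a task to just meet its deadline lowers the energy used for the same number of cycles:

```python
# Toy DVS model (hypothetical constants): pick the slowest frequency level that
# still meets the deadline and compare energy against running at full speed.

def dvs_energy(cycles, frequency, f_max, v_max=1.0, c_eff=1e-9):
    """Assume V scales linearly with f; dynamic energy ~ C_eff * V^2 * cycles."""
    voltage = v_max * frequency / f_max
    return c_eff * voltage ** 2 * cycles

def slowest_feasible_frequency(cycles, deadline_s, f_levels):
    """Lowest available frequency that still finishes the cycles by the deadline."""
    feasible = [f for f in f_levels if cycles / f <= deadline_s]
    return min(feasible) if feasible else max(f_levels)

cycles, deadline = 2e9, 4.0               # 2 giga-cycles, 4 s deadline
levels = [0.4e9, 0.6e9, 0.8e9, 1.0e9]     # available frequency levels (Hz)
f = slowest_feasible_frequency(cycles, deadline, levels)
print(f / 1e9, "GHz,",
      round(dvs_energy(cycles, f, f_max=1.0e9) /
            dvs_energy(cycles, 1.0e9, f_max=1.0e9), 2), "x energy vs full speed")
```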
Article
Cloud computing is flourishing day by day and will keep developing as long as computers and the Internet exist. When dealing with cloud computing, a number of issues are confronted, such as heavy load or traffic during computation. Job scheduling is one of the answers to these issues: it is the process of mapping tasks to available resources. Section 1 of the paper discusses cloud computing and scheduling, Section 2 explains job scheduling in cloud computing, Section 3 reviews existing algorithms for job scheduling, Section 4 compares these algorithms, and Section 5 presents conclusions and future work.
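Since two of the policies surveyed there (round robin and shortest job first) are also the baselines most relevant to the paper this page describes, a small single-server comparison is useful as an illustration (our own toy experiment, not the survey's): shortest job first tends to lower the average completion time relative to round robin:

```python
# Toy single-server comparison of SJF vs round robin on average completion time
# (all jobs assumed to arrive at time 0; burst times and quantum are hypothetical).

def sjf_avg_completion(burst_times):
    clock, total = 0, 0
    for b in sorted(burst_times):     # run the shortest remaining job to completion
        clock += b
        total += clock
    return total / len(burst_times)

def round_robin_avg_completion(burst_times, quantum=2):
    remaining = list(burst_times)
    completion = [0.0] * len(burst_times)
    clock, pending = 0, list(range(len(burst_times)))
    while pending:
        nxt = []
        for i in pending:             # give each pending job one quantum per round
            run = min(quantum, remaining[i])
            clock += run
            remaining[i] -= run
            if remaining[i] <= 0:
                completion[i] = clock
            else:
                nxt.append(i)
        pending = nxt
    return sum(completion) / len(completion)

jobs = [6, 2, 8, 3, 4]                # burst times of queued requests
print("SJF :", sjf_avg_completion(jobs))          # 10.8
print("RR  :", round_robin_avg_completion(jobs))  # 15.6
```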
Article
The presence of energy hubs and the advancement of smart grid technologies have motivated system planners to deploy intelligent multi-carrier energy systems termed "smart energy hubs" (S.E. Hubs). In this paper, we model the S.E. Hub and propose a modern energy management technique for electricity and natural gas networks based on integrated demand side management (IDSM). In conventional studies, energy consumption is optimized from the perspective of each individual user without considering the interactions among users. Here, the interaction among S.E. Hubs in the IDSM program is formulated as a noncooperative game. The existence and uniqueness of a pure-strategy Nash equilibrium (NE) is proved, and the strategies for each S.E. Hub are determined by a proposed distributed algorithm. We also address the IDSM game in a cloud computing (CC) framework to achieve efficient data processing and information management. Simulations are performed on a grid consisting of ten S.E. Hubs. We compare the CC framework with conventional data processing techniques to evaluate the efficiency of our approach in determining the NE. It is also shown that at the NE, the energy cost for each S.E. Hub and the peak-to-average ratio of the electricity demand decrease substantially.
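A hedged toy of the distributed best-response idea behind such a game (a simplified stand-in, not the paper's S.E. Hub model): the hourly price is assumed to grow with the aggregate load, and each hub in turn moves its flexible energy to the hours that are cheapest given everybody else's current schedule, stopping when no hub wants to deviate:

```python
# Toy sequential best-response loop for a congestion-style demand game
# (hypothetical base load, flexible demands, and per-hour cap).

def best_response(others_load, flexible_kwh, hours, cap):
    """Fill this hub's flexible energy into the hours cheapest given the others."""
    plan, remaining = [0.0] * hours, flexible_kwh
    for t in sorted(range(hours), key=lambda t: others_load[t]):
        plan[t] = min(cap, remaining)
        remaining -= plan[t]
        if remaining <= 0:
            break
    return plan

def play_game(base, hubs, cap=4.0, rounds=10):
    plans = [[0.0] * len(base) for _ in hubs]
    for _ in range(rounds):
        changed = False
        for i, flexible in enumerate(hubs):
            others = [base[t] + sum(p[t] for j, p in enumerate(plans) if j != i)
                      for t in range(len(base))]
            new_plan = best_response(others, flexible, len(base), cap)
            changed |= new_plan != plans[i]
            plans[i] = new_plan
        if not changed:          # no hub wants to deviate: an (approximate) NE
            break
    return plans

base = [12, 10, 9, 9, 11, 14, 16, 13]
plans = play_game(base, hubs=[6.0, 8.0, 5.0])
print([round(base[t] + sum(p[t] for p in plans), 1) for t in range(8)])
```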
Article
More and more cloud computing services are handled by different Internet operators in distributed Internet data centers (IDCs), which incurs massive electricity costs. Today, the power usage of data centers accounts for more than 1.5% of electricity consumption across the United States. Minimizing these costs benefits cloud computing operators and attracts increasing attention from many research groups and industrial sectors. Along with the deployment of the smart grid, real-time electricity pricing encourages power consumers to adaptively schedule their electricity utilization for lower operational costs. This paper proposes a novel approach that uses electrical energy buffering in batteries to predictively minimize IDC electricity costs in the smart grid. Batteries are charged when the electricity price is low and discharged to power servers when the electricity price is high. A power management controller is used per battery to arbitrate its charging and discharging actions. The controller is designed as a model predictive control (MPC) based controller: an MPC power minimization problem is formulated on a discrete state-space model whose states are the battery power level and cost. Extensive simulation results demonstrate the effectiveness of our approach based on real-life electricity prices in the smart grid.
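A much simplified stand-in for such a controller (a threshold heuristic over a short forecast window, explicitly not the paper's MPC formulation; prices, battery size, and rates are hypothetical) still shows the charge-low / discharge-high mechanism and its effect on the electricity bill:

```python
# Threshold heuristic approximating the "buffer energy in batteries" idea:
# charge when the current price sits in the cheap end of a short forecast
# window, discharge to offset server load when it sits in the expensive end.

def battery_policy(prices, server_kw, capacity_kwh, rate_kw, horizon=4):
    soc, grid_cost = 0.0, 0.0
    for t, price in enumerate(prices):
        window = prices[t:t + horizon]
        low, high = min(window), max(window)
        draw = server_kw                                  # kW bought from the grid this hour
        if price <= low + 0.25 * (high - low) and soc < capacity_kwh:
            charge = min(rate_kw, capacity_kwh - soc)
            soc += charge
            draw += charge                                # buy extra energy while it is cheap
        elif price >= high - 0.25 * (high - low) and soc > 0:
            discharge = min(rate_kw, soc, server_kw)
            soc -= discharge
            draw -= discharge                             # battery offsets the servers
        grid_cost += draw * price
    return grid_cost

prices = [0.06, 0.05, 0.05, 0.09, 0.14, 0.16, 0.12, 0.08]     # $/kWh forecast
baseline = sum(p * 100 for p in prices)                        # 100 kW, no battery
with_battery = battery_policy(prices, server_kw=100, capacity_kwh=80, rate_kw=40)
print(round(baseline, 2), round(with_battery, 2))              # battery case is cheaper
```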
Article
In the future smart grid, both users and power companies can potentially benefit from the economic and environmental advantages of smart pricing methods that more effectively reflect the fluctuations of the wholesale price on the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter, and that all smart meters are connected to both the power grid and a communication infrastructure, allowing two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in the form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provide some information about its energy demand; in return, the energy provider determines each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management, such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.
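The VCG payment rule itself is easy to see on a toy instance (a one-block-per-user allocation of our own construction, not the paper's continuous utility-function model): allocate a limited number of identical energy blocks to the highest-value users and charge each winner the welfare the others lose because of its presence:

```python
# Toy VCG auction for k identical energy blocks with unit-demand users
# (hypothetical users and valuations).

def vcg_allocate(valuations, k):
    """valuations: {user: value for one energy block}; k identical blocks."""
    def welfare(users):
        return sum(sorted((valuations[u] for u in users), reverse=True)[:k])

    everyone = list(valuations)
    winners = sorted(everyone, key=lambda u: valuations[u], reverse=True)[:k]
    payments = {}
    for u in winners:
        others = [w for w in everyone if w != u]
        with_u = welfare(everyone) - valuations[u]   # others' welfare when u is present
        without_u = welfare(others)                  # others' welfare if u were absent
        payments[u] = without_u - with_u             # the externality u imposes
    return winners, payments

vals = {"h1": 9.0, "h2": 7.5, "h3": 6.0, "h4": 4.0, "h5": 2.5}
print(vcg_allocate(vals, k=3))
# Winners h1, h2, h3 each pay 4.0: the value of the displaced user h4, which is
# exactly the truthfulness-preserving externality payment.
```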
Article
Motivated by the power-grid-side challenges of integrating electric vehicles, we propose a decentralized protocol for negotiating day-ahead charging schedules for electric vehicles. The overall goal is to shift the load due to electric vehicles so as to fill the overnight electricity demand valley. In each iteration of the proposed protocol, electric vehicles choose their own charging profiles for the following day according to the price profile broadcast by the utility, and the utility updates the price profile to guide their behavior. This protocol is guaranteed to converge irrespective of the specifications (e.g., maximum charging rate and deadline) of the electric vehicles. At convergence, the l2 norm of the aggregated demand is minimized, and the aggregated demand profile is as "flat" as it can possibly be. The proposed protocol needs no coordination among the electric vehicles and hence requires low communication and computation capability. Simulation results demonstrate convergence to optimal collections of charging profiles within a few iterations.
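One common way to write this kind of price-guided day-ahead iteration is sketched below in our own notation (a sketch, not the paper's exact update rules): the utility prices each hour in proportion to the current aggregate demand, and each EV minimizes its charging cost plus a small penalty for moving far from its previous profile, which is what damps the updates toward the flat aggregate:

```latex
% Price-guided iteration sketch (our notation): D(t) base demand, r_n^k the
% charging profile of EV n at iteration k, p^k the broadcast price profile,
% \gamma, \delta > 0 tuning constants, and \mathcal{F}_n the set of profiles
% meeting EV n's energy requirement, maximum charging rate, and deadline.
\begin{align}
  p^{k}(t) &= \gamma \Big( D(t) + \sum_{n} r_n^{k}(t) \Big),\\
  r_n^{k+1} &= \arg\min_{r_n \in \mathcal{F}_n}\;
      \sum_{t} p^{k}(t)\, r_n(t)
      + \frac{\delta}{2} \big\| r_n - r_n^{k} \big\|_2^2.
\end{align}
% At convergence the aggregate profile D + \sum_n r_n is as flat as the
% constraints allow, i.e., its l2 norm is minimized.
```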