Thesis (PDF available)

Efficient Utilization of Energy using Fog and Cloud based Environment in Smart Grid

Abstract

Demand Side Management (DSM) is an effective and robust scheme for energy management, Peak to Average Ratio (PAR) reduction and cost minimization. Many DSM techniques have been proposed for industrial, residential and commercial areas in recent years. The Smart Grid (SG) gives consumers and the utility the opportunity of two-way digital communication. The SG balances and monitors the electricity consumption of the consumer; moreover, it reduces the cost and energy consumption of both the utility and the consumer. There are several Smart Cities (SCs) in the world. These SCs contain numerous Smart Societies (SSs), each comprising a number of Smart Buildings (SBs) that in turn contain Smart Homes (SHs). When requests are sent from the consumer side to acquire resources, storage issues also increase. To make the environment more efficient and to enhance the performance of the SG, cloud computing is introduced. Reducing delay and latency in the cloud computing environment is a challenging task for the research community, and resources are required to process and store data in the cloud. To overcome these challenges, another infrastructure, the fog computing environment, is introduced, which plays an important role in enhancing the efficiency of the cloud. Virtual Machines (VMs) are installed at the fog, to which consumers' requests are allocated. In this thesis, an integrated cloud and fog based environment is proposed. The aim of this proposed environment is to overcome the delay and latency issues of the cloud and to enhance the performance of the fog. When there is a large number of incoming requests at the fog and cloud, load balancing is another major issue; this issue is also resolved in this thesis. Nature-inspired and heuristic algorithms, namely the Genetic Algorithm (GA), Crow Search Algorithm (CSA), Honey Bee (HB), Round Robin (RR), Particle Swarm Optimization (PSO), Improved PSO using Levy Walk (IPSOLW), Cuckoo Search (CS), CS with Levy distribution (CLW), the BAT algorithm and Flower Pollination (FP), are proposed and implemented in this thesis. The aim of the proposed GA and CSA is to schedule the load and to minimize the PAR and cost in the SG environment; these algorithms also contribute to the integrated cloud and fog based environment of the thesis. To balance the load, CSA, HB, IPSOLW, CLW and FP are proposed. The proposed algorithms are compared with RR, PSO and BAT, which are also implemented. The comparative analysis of these proposed and implemented algorithms is done on the basis of service broker policies. The Closest Data Center (CDC), Optimize Response Time (ORT), Reconfigure Dynamically with Load and the proposed Advance Service Broker Policy (ASP) are also implemented in this thesis to evaluate the results of the thesis algorithms. On the basis of these policies, using the aforementioned nature-inspired algorithms, the Response Time (RT), Processing Time (PT), VM cost, Data Transfer (DT) cost, Micro Grid (MG) cost and Total Cost (TC) are minimized in the integrated cloud and fog based environment.
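To make the PAR and cost objectives named above concrete, the following minimal Python sketch computes PAR (peak hourly load divided by average hourly load) and electricity cost for a load profile. The load values, prices and function names are hypothetical illustrations, not data or code from the thesis.

```python
# Minimal sketch: PAR and electricity-cost metrics for an hourly load profile.
# The load and price values below are hypothetical; they only illustrate the
# quantities that the scheduling algorithms (GA, CSA, ...) aim to minimize.

def peak_to_average_ratio(load):
    """PAR = peak hourly load divided by average hourly load."""
    return max(load) / (sum(load) / len(load))

def electricity_cost(load, price):
    """Total cost = sum over hours of load (kWh) times price per kWh."""
    return sum(l * p for l, p in zip(load, price))

if __name__ == "__main__":
    load = [1.2, 0.8, 3.5, 2.0, 1.0, 0.9]          # kWh per hour (hypothetical)
    price = [0.10, 0.10, 0.25, 0.20, 0.12, 0.10]   # cost per kWh (hypothetical)
    print("PAR:", round(peak_to_average_ratio(load), 3))
    print("Cost:", round(electricity_cost(load, price), 3))
```

A load-scheduling algorithm that shifts consumption away from peak hours lowers the numerator of the PAR expression and, under time-varying prices, lowers the total cost as well.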
References
Article (full-text available)
The influence of Information and Communication Technology (ICT) in power systems necessitates a Smart Grid (SG) with monitoring and real-time control of electricity consumption. In the SG, a huge number of requests is generated from smart homes in the residential sector. Thus, researchers have proposed cloud based centralized and fog based semi-centralized computing systems for such requests. The cloud, unlike the fog system, has virtually infinite computing resources; however, in the cloud, system delay is the challenge for real-time applications. The prominent features of fog are awareness of location, low latency, and wired and wireless connectivity. In this paper, the impact of the longer delay of the cloud in SG applications is addressed. We propose a cloud-fog based system for efficient processing of requests coming from smart homes, their quick response and ultimately reduced cost. Each smart home is provided with a 5G based Home Energy Management Controller (HEMC). The 5G-HEMC then communicates with a High Performance Fog (HPF). The HPFs are capable of processing energy consumers' huge volumes of requests. Virtual Machines (VMs) are installed on the physical systems (HPFs) to entertain the requests using the First Come First Service (FCFS) and Ant Colony Optimization (ACO) algorithms, along with the Optimized Response Time Policy (ORTP) for the selection of a potential HPF, for efficient processing of the requests with maximum resource utilization. It is analysed how the size and number of virtual resources affect the performance of the computing system. In the proposed system model, micro grids are introduced in the vicinity of energy consumers for uninterrupted and cost-optimized power supply. The impact of the number of VMs on the performance of HPFs is analysed with extensive simulations over three scenarios.
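For intuition only, the sketch below shows FCFS-style assignment of incoming requests to the VMs of a single fog node: requests are served in arrival order, each on the VM that becomes free earliest. The request lengths and VM count are hypothetical, and the sketch does not reproduce the paper's ACO or ORTP implementations.

```python
# Minimal FCFS sketch: serve requests in arrival order on the earliest-free VM.
# All values are hypothetical and for illustration only.
import heapq

def fcfs_assign(request_lengths, num_vms):
    """Return (vm_index, start_time, finish_time) per request, in arrival order."""
    vms = [(0.0, i) for i in range(num_vms)]   # (time VM becomes free, VM index)
    heapq.heapify(vms)
    schedule = []
    for length in request_lengths:
        free_at, vm = heapq.heappop(vms)       # earliest-free VM
        finish = free_at + length
        schedule.append((vm, free_at, finish))
        heapq.heappush(vms, (finish, vm))
    return schedule

if __name__ == "__main__":
    requests = [4.0, 2.0, 5.0, 1.0, 3.0]       # processing times (hypothetical units)
    for vm, start, finish in fcfs_assign(requests, num_vms=2):
        print(f"VM {vm}: start {start}, finish {finish}")
```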
Article (full-text available)
Mobile cloud computing (MCC) and mobile edge computing (MEC) allow mobile devices to augment their capabilities by utilizing the resources and services offered by the Cloud and the Edge Cloud, respectively. However, due to mobility, the network connection becomes unstable, which causes application execution disruption. Such disruption increases the execution time and in some cases restrains mobile devices from getting execution results from the cloud. This research work analyzes the impact of user mobility on the execution of cloud-based mobile applications. We propose a Process State Synchronization (PSS) based execution management scheme to solve the aforementioned problem. We analytically compute a sufficient condition on the synchronization interval that ensures a reduction in mobile application execution time under PSS in case of disconnection. Similarly, we compute an upper bound on the synchronization interval beyond which a larger synchronization interval does not result in significant benefits in terms of execution time for the mobile application. The analytical results were confirmed by a sample implementation of PSS with the computed synchronization intervals. Moreover, we also compare the performance of the proposed solution with state-of-the-art solutions. The results show that PSS-based execution outperforms the other contemporary solutions.
Article (full-text available)
Fog computing has recently emerged as a new infrastructure composed of three layers: node levels, cloud services, and companies (clients). In general, node levels deliver services to cloud computing layers, which in turn serve in-situ processes at companies. This kind of framework has gained popularity in the context of Internet of Things (IoT) networks. The main purpose of node layers is to deliver inexpensive and highly responsive services; as a consequence, cloud layers are reserved for expensive processes. Thus, optimal load balancing between cloud and fog nodes is a major concern, as is the efficient use of memory resources on those layers. We propose a simple Tabu Search method for optimal load balancing between cloud and fog nodes which accounts for resource constraints. The main motivation for using Tabu Search is that on-line computations are a must in those layers: tasks should be processed as they are received. We consider a bi-objective cost function for this purpose, where the first objective denotes the computational cost of processing tasks in fog nodes and the second stands for that in cloud nodes. During the optimization process, convex combinations of the objective functions are employed in order to reduce the optimization problem to mono-objective cases. Experimental tests are performed using synthetic scenarios of tasks to be executed. The results reveal that, by using the proposed method, memory usage can be minimized as well as the computational cost of load balancing.
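The convex-combination (scalarization) step described above can be illustrated with a short sketch. The fog and cloud cost models, the placement encoding and all numeric values below are assumptions for illustration; they are not the authors' actual cost functions or Tabu Search implementation.

```python
# Sketch of scalarization: a bi-objective load-balancing cost (fog cost,
# cloud cost) is reduced to a single objective via a convex combination.
# The cost models and values are hypothetical placeholders.

def fog_cost(assignment, task_sizes):
    # Hypothetical: cost grows with the total work placed on fog nodes.
    return sum(s for s, node in zip(task_sizes, assignment) if node == "fog")

def cloud_cost(assignment, task_sizes):
    # Hypothetical: cloud processing assumed more expensive per unit of work.
    return 3.0 * sum(s for s, node in zip(task_sizes, assignment) if node == "cloud")

def scalarized_cost(assignment, task_sizes, lam):
    """Convex combination: lam * fog objective + (1 - lam) * cloud objective."""
    return lam * fog_cost(assignment, task_sizes) + (1 - lam) * cloud_cost(assignment, task_sizes)

if __name__ == "__main__":
    tasks = [2.0, 1.0, 4.0, 3.0]                   # task sizes (hypothetical)
    assignment = ["fog", "cloud", "fog", "cloud"]  # one candidate placement
    for lam in (0.25, 0.5, 0.75):
        print(lam, scalarized_cost(assignment, tasks, lam))
```

A Tabu Search over placements would evaluate neighboring assignments with this single scalarized objective, sweeping the weight to trace different trade-offs between the two original objectives.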
Article (full-text available)
The extension of the Cloud to the Edge of the network through Fog Computing can have a significant impact on the reliability and latencies of deployed applications. Recent papers have suggested a shift from VM and container based deployments to a shared environment among applications to better utilize resources. Unfortunately, existing deployment and optimization methods pay little attention to developing and identifying complete models of such systems, which may cause large inaccuracies between simulated and physical run-time parameters. Existing models do not account for application interdependence or the locality of application resources, which causes extra communication and processing delays. This paper addresses these issues by carrying out experiments in both cloud and edge systems with various scales and applications. It analyses the outcomes to derive a new reference model with data-driven parameter formulations and representations to help understand the effect of migration on these systems. As a result, we can obtain a more complete characterization of the fog environment. This, together with tailored optimization methods that can handle the heterogeneity and scale of the fog, can improve the overall system run-time parameters and improve constraint satisfaction. An Industry 4.0 based case study with different scenarios was used to analyze and validate the effectiveness of the proposed model. Tests were deployed on physical and virtual environments at different scales. The advantages of the model based optimization methods were validated in real physical environments. Based on these tests, we have found that our model is 90% accurate on load and delay predictions for application deployments in both cloud and edge.
Article (full-text available)
A wide range of Internet of Things (IoT) devices, platforms and applications have been implemented in the past decade. The variation in platforms, communication protocols and data formats of these systems creates islands of applications. Many organizations are working towards standardizing the technologies used at different layers of communication in these systems. However, interoperability still remains one of the main challenges towards realizing the grand vision of IoT. Integration approaches proven in the existing Internet or enterprise applications are not suitable for the IoT, mainly due to the nature of the devices involved; the majority of the devices are resource-constrained. To address this problem of interoperability, our work considers various types of IoT application domains, the architecture of the IoT and the work of standards organizations to give a holistic abstract model of IoT. According to this model, there are three computing layers, each with a different level of interoperability need: technical, syntactic or semantic. This work presents a Web of Virtual Things (WoVT) server that can be deployed at the middle layer of IoT (Fog layer) and the Cloud to address the problem of interoperability. It exposes a REST-like uniform interface for syntactic integration of devices at the bottom layer of IoT (perception layer). An additional RESTful API is used for integration with other similar WoVT servers at the Fog or the Cloud layer. The server uses a state-of-the-art architecture to enable this integration pattern and provides means towards semantic interoperability. The analysis and evaluation of the implementation, covering performance, resource utilization and security perspectives, are presented. The simulation results demonstrate that an integrated and scalable IoT through the web of virtual things can be realized.
Article (full-text available)
With the integration of distributed generation and the construction of cross-regional long-distance power grids, power systems become larger and more complex. They require faster computing speed and better scalability for power flow calculations to support unit dispatch. Based on an analysis of a variety of parallelization methods, this paper deploys the large-scale power flow calculation task on a cloud computing platform using resilient distributed datasets (RDDs). It optimizes a directed acyclic graph that is stored in the RDDs to solve the low-performance problem of the MapReduce model. This paper constructs and simulates a power flow calculation on a large-scale power system based on standard IEEE test data. Experiments are conducted on a Spark cluster which is deployed as a cloud computing platform. They show that the advantages of this method are not obvious at small scale, but the performance is superior to the stand-alone model and the MapReduce model for large-scale calculations. In addition, running time is reduced when cluster nodes are added. Although not tested under practical conditions, this paper provides a new way of thinking about parallel power flow calculations in large-scale power systems.
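As a hedged illustration of the RDD idea only, the PySpark sketch below distributes independent per-branch calculations of a power network over a Spark cluster. The branch data and the simplified I^2 * R loss formula are hypothetical and far simpler than a full power flow solution; this is not the paper's implementation.

```python
# Hedged sketch: distribute independent per-branch calculations over a Spark RDD.
# The data and the simplified I^2 * R loss formula are hypothetical; a real
# power flow solve (e.g. Newton-Raphson iterations) is far more involved.
from pyspark import SparkContext

def branch_loss(branch):
    """Approximate a line's loss as current squared times resistance."""
    return branch["i"] ** 2 * branch["r"]

if __name__ == "__main__":
    sc = SparkContext("local[*]", "power-flow-sketch")
    branches = [{"i": 1.2, "r": 0.05}, {"i": 0.8, "r": 0.08}, {"i": 2.1, "r": 0.03}]
    # Map each branch to its loss, then reduce to the total system loss.
    total_loss = sc.parallelize(branches).map(branch_loss).reduce(lambda a, b: a + b)
    print("Total line loss:", total_loss)
    sc.stop()
```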
Article
In this article, we introduce the concept of FoT, a paradigm for on-demand IoT. On-demand IoT is an IoT platform where heterogeneous connected things can be accessed and managed via a uniform platform based on real-time demands. Realizing such a platform faces challenges including heterogeneity, scalability, responsiveness, and robustness, due to the large-scale and complex nature of an IoT environment. The FoT paradigm features the incorporation of fog computing power, which empowers not only the IoT applications, but more importantly the scalable and efficient management of the system itself. FoT utilizes a flat-structured virtualization plane and a hierarchical control plane, both of which extend to the network edge and can be reconfigured in real time, to achieve various design goals. In addition to describing the detailed design of the FoT paradigm, we also highlight challenges and opportunities involved in the deployment, management, and operation of such an on-demand IoT platform. We hope this article can shed some light on how to build and maintain a practical and extensible control back-end to enable large-scale IoT that empowers our connected world.
Article
Internet of Things (IoT) analytics is an essential means of deriving knowledge and supporting applications for smart homes. Connected appliances and devices inside the smart home produce a significant amount of data about consumers and how they go about their daily activities. IoT analytics can aid in personalizing applications that benefit both homeowners and the ever-growing industries that need to tap into consumer profiles. This article presents a new platform that enables innovative analytics on IoT data captured from smart homes. We propose the use of fog nodes and a cloud system to allow data-driven services and to address the challenges of complexity and resource demands for online and offline data processing, storage, and classification analysis. We discuss in this paper the requirements and the design components of the system. To validate the platform and present meaningful results, we present a case study using a dataset acquired from a real smart home in Vancouver, Canada. The results of the experiments clearly show the benefit and practicality of the proposed platform.
Article
Metagenomic studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments, from terrestrial to aquatic ecosystems. This is also because genome sequencing is likely to become a routine and ubiquitous analysis in the near future thanks to a new generation of portable devices, such as the Oxford Nanopore MinION. The main issue, however, is the huge amount of data produced by these devices, whose management is challenging considering the resources required for efficient data transfer and processing. In this paper we discuss these aspects, and in particular how it is possible to couple Edge and Cloud computing in order to manage the full analysis pipeline. In general, proper scheduling of the computational services between the data center and smart devices equipped with low-power processors represents an effective solution.
Article
Fog computing has been proposed as an extension of cloud computing to provide computation, storage and network services at the network edge. For smart manufacturing, fog computing can provide a wealth of computational and storage services, such as fault detection and state analysis of devices in assembly lines, if the middle layer between the industrial cloud and the terminal devices is considered. However, limited resources and low-delay service requirements hinder the application of new virtualization technologies in the task scheduling and resource management of fog computing. Thus, we build a new task scheduling model that considers the role of containers. Then, we construct a task scheduling algorithm to ensure that tasks are completed on time and that the number of concurrent tasks on each fog node is optimized. Finally, we propose a reallocation mechanism to reduce task delays in accordance with the characteristics of the containers. Results show that the proposed task scheduling algorithm and reallocation scheme can effectively reduce task delays and increase the number of concurrent tasks in fog nodes.
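As a hedged illustration of deadline-aware task scheduling on a fog node (not the authors' container-based algorithm), the sketch below orders tasks by earliest deadline on a single container and flags those that cannot finish on time. Task names, durations and deadlines are hypothetical.

```python
# Hedged sketch: earliest-deadline-first ordering of tasks on one fog container.
# Task durations and deadlines are hypothetical; the paper's container-based
# scheduling and reallocation mechanism is more elaborate.

def edf_schedule(tasks):
    """tasks: list of (name, duration, deadline). Returns (finish_times, missed)."""
    finish_times, missed, clock = [], [], 0.0
    for name, duration, deadline in sorted(tasks, key=lambda t: t[2]):
        clock += duration                      # run tasks back to back
        finish_times.append((name, clock))
        if clock > deadline:                   # task completes after its deadline
            missed.append(name)
    return finish_times, missed

if __name__ == "__main__":
    tasks = [("t1", 2.0, 5.0), ("t2", 1.0, 2.0), ("t3", 3.0, 7.0)]
    finish_times, missed = edf_schedule(tasks)
    print("Finish times:", finish_times)
    print("Missed deadlines:", missed)
```

A container-aware scheduler could use such deadline checks per container and reallocate tasks whose deadlines would otherwise be missed, which is the spirit of the reallocation mechanism described above.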