Conference Paper
Efficient Resource Allocation Model for
Residential Buildings in Smart Grid using Fog
and Cloud Computing
Aisha Fatima1, Nadeem Javaid1, Momina Waheed1,
Tooba Nazar1, Shaista Shabbir2, Tanzeela Sultana3
1COMSATS Institute of Information Technology, Islamabad 44000, Pakistan
2Virtual University of Pakistan, Kotli Campus 11100, Azad Kashmir
3University of Azad Jammu and Kashmir, Kotli 11100, Azad Kashmir
Correspondence: nadeemjavaidqau@gmail.com, www.njavaid.com
Abstract. In this article, a resource allocation model is presented to optimize resources in residential buildings. The whole world is categorized into six regions based on its continents. The fog supports cloud computing connectivity at the edge of the network; it also saves data temporarily and sends it to the cloud for permanent storage. Each continent has one fog, which deals with three clusters of 100 buildings each. Microgrids (MGs) are used for effective electricity distribution among the consumers. The control parameters considered in this paper are: clusters, number of buildings, number of homes and load requests, whereas the performance parameters are: cost, Response Time (RT) and Processing Time (PT). Particle Swarm Optimization with Simulated Annealing (PSOSA) is used for load balancing of Virtual Machines (VMs) using multiple service broker policies. The service broker policies in this paper are: new dynamic service proximity, new dynamic response time and enhanced new dynamic response time. The results of the proposed service broker policies with PSOSA are compared with the existing policy: new dynamic service proximity. New dynamic response time and enhanced new dynamic response time perform better than the existing policy in terms of cost, RT and PT. However, the maximum RT and PT of the proposed policies are higher than those of the existing policy. We have used CloudAnalyst for conducting simulations of the proposed scheme.
Key words: smart grid, cloud computing, particle swarm optimization,
simulated annealing.
1 Introduction
Utilization of advanced Information and Communication Technology (ICT) in Demand Side Management (DSM) has been considered one of the main characteristics of Smart Grids (SGs) [1]. The SG supports a bi-directional flow of energy and communication to obtain information from users and to distribute energy among consumers. The traditional grid is converted into a SG to reduce Carbon Dioxide (CO2) emissions. A large number of devices is utilized on the demand side, and many new concepts, including Electric Vehicle (EV) charging and discharging, intelligent home appliances, smart meters and so on, have been adopted in DSM in the SG environment [1].
Cloud computing is generally associated with services delivered over the internet, which connects the world. Users can transfer large amounts of data and can also enjoy new technologies and services provided by the cloud at any time and in any place [2]. Cloud computing provides various benefits, including low cost, high speed, high performance, and elasticity. A cloud can be public, private or hybrid; Netflix, Skype, email services, Microsoft Office 365 and so on are examples of cloud computing. However, cloud computing has some issues, such as high latency and limited security. To tackle the aforementioned issues, the concept of fog computing was introduced.
The concept of fog computing was introduced by Cisco in 2014. Fog computing has emerged as a promising infrastructure to provide elastic resources at the network edge, minimizing latency and increasing security. Fog computing is used to reduce the burden on the cloud and to communicate directly with consumers. Communication between the fog and a consumer is done through some communication medium, such as a wireless link (e.g., Wi-Fi). Fog provides local services and can be accessed without an internet connection.
The integrated cloud-fog based environment is a three-layered architecture in which fog is an intermediate layer between the cloud and the end user layer. The concepts of cloud and fog are almost the same; they differ in size, distance from the user, memory, and processing capacity. The cloud is located thousands of kilometers from the consumer, whereas the fog resides near ground level at the network edge. The services provided by both cloud computing and fog computing are: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [2].
SaaS:
– SaaS is the highest level of abstraction.
– It is accessed by users through a web browser.
– SaaS provides access to licensed software.

PaaS:
– PaaS provides simplicity and convenience for consumers.
– A user can access PaaS services anywhere through a web browser.
– PaaS then charges the users for that access.

IaaS:
– IaaS is a fundamental building block for cloud services.
– A cloud service provider provides infrastructure components such as data centers, servers, storage, and networking hardware.
– The main use of IaaS includes the actual development and deployment of PaaS and SaaS.
Different service providers provide different services, as shown in Fig. 1.

Fig. 1. Cloud and Fog Services (SaaS: email, gaming, CRM; PaaS: database, web server, dev tools; IaaS: VMs, servers, network, storage)

A SG with a cloud and fog based environment is considered. The proposed scenario is divided
into three layers: the SG layer, the fog layer, and the cloud layer. In each cluster, one hundred buildings are considered, and a controller in the SG layer is used to communicate with the fog layer. Clusters are connected to the fog in the same region. Data on the fog is stored temporarily, and the fog sends it to the cloud for permanent storage. Consumers have to create a profile to communicate with the fog. A profile contains information about the consumer's location and daily electricity usage. These profiles help the fog and the cloud maintain their data accordingly.
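The consumer profile described above can be sketched as a simple data structure. The field names and the hourly-reading layout are illustrative assumptions; the paper only states that a profile holds the consumer's location and daily electricity usage.

```python
from dataclasses import dataclass, field

@dataclass
class ConsumerProfile:
    """Illustrative consumer profile kept at the fog layer (field names
    are assumptions, not taken from the paper)."""
    consumer_id: str
    region: str        # one of the six continent-based regions
    cluster_id: int    # cluster within the region (1..3)
    building_id: int   # building within the cluster (1..100)
    daily_usage_kwh: list = field(default_factory=list)  # 24 hourly readings

    def daily_total(self) -> float:
        """Total electricity used in a day, summed over hourly readings."""
        return sum(self.daily_usage_kwh)

# Example: a consumer in cluster 2, building 57, drawing 1.5 kWh every hour
p = ConsumerProfile("c-001", "Asia", 2, 57, [1.5] * 24)
print(p.daily_total())  # 36.0
```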
1.1 Motivation
A cloud-fog based platform is presented in [4], [5], where fog devices are installed in a region between the end user layer and the cloud layer to minimize latency. Six regions are considered on the basis of the six continents to cover the whole world [4]. One fog in each region is used, rather than two, to minimize cost [4]. Fifty VMs are used instead of twenty-five to increase efficiency in terms of PT. Five MGs are placed in each region to fulfill consumers' requests and thereby minimize RT. Three clusters, instead of two [4], with one hundred buildings each are considered to achieve results closer to a real-time scenario.
1.2 Contributions
In this paper, a SG application is integrated with the cloud-fog based environment, which covers a large area based on the six continents of the world. This provides numerous benefits for SG applications:
– Low latency services are provided, as fog devices are placed near the end user.
– MGs are used to fulfill the electricity requirements of consumers.
– PSOSA is used for load balancing.
– Two hybrid service broker policies are used to select the fogs that entertain requests coming from users.
The remaining part of the paper is organized as follows: related work is presented in Section 2. The proposed system model is described in Section 3. Load balancing algorithms are discussed in Section 4 and service broker policies in Section 5. Finally, simulation results and the conclusion are presented in Sections 6 and 7.
2 Related Work
Fog computing is used as an intermediate layer between the end user layer and the cloud layer. It is used to manage renewable energy resources and is accessible without an internet connection. It provides true support for mobility and Internet of Things (IoT) devices, and it brings data closer to the end user layer. Cloud computing has some limitations for the SG: a huge number of SG devices needs enormous data storage, networking, and processing capacity. Therefore, fog is used near the end user layer to manage the SG resources.
Cao et al. [1] have proposed a cost-oriented optimization model. Modified Priority List (MPL) and Simulated Annealing (SA) algorithms have been used to solve the proposed optimization model efficiently. A computing instance is the minimal unit that a user can take from the cloud. On-Demand Instances (ODIs) and Reserved Instances (RIs) are considered in that paper. The ODI model follows a pay-as-you-go scheme, whereas RIs suit users with relatively long-term computing demands. RIs have been declared better than ODIs; however, a user has to make an upfront payment for an RI.
In [2], a PSO based Service Cost Optimization (PSOSC) scheduling algorithm has been proposed to schedule the tasks coming from users. PSOSC balances the load of VMs to minimize cost and shorten completion time. Task scheduling of workflows in the cloud is very important; however, RT is increased with this approach.
The authors in [4] have proposed a new dynamic service proximity policy for the selection of VMs. A VM having minimum latency is allocated to fulfill the consumers' needs. The communication has been performed between the end user, the fog, and the cloud. However, using two fogs in the same region is quite expensive.
Simulation technology has become a powerful and useful tool in cloud computing for the research community [6]. The authors have compared two cloud simulation tools, CloudSim and CloudAnalyst. CloudAnalyst is declared the better option, compared to CloudSim, for anyone who wants to work specifically on service broker policies or load balancing algorithms. However, CloudAnalyst is not a comprehensive solution for all complex tasks.
The authors in [7] have identified some common security gaps in existing fog computing applications. The impacts of security issues and possible solutions have been discussed in that paper, along with a detailed comparison between edge computing, cloudlets, and micro-data centers. However, security issues remain for the huge number of IoT devices.
Anila Yasmeen et al. [9] have used a cloud-fog based environment for efficient resource allocation. The authors have proposed PSOSA and Cuckoo Search (CS) for balancing the load of VMs. A proposed service broker policy has been used for the selection of the fog to entertain the requests coming from consumers. However, the RT and the PT are increased with the proposed service broker policy.
The authors in [15] have surveyed PSO based algorithms for workflow scheduling. Workflow scheduling is complicated, as it involves a set of dependent tasks communicating with each other. Masdari et al. [15] have discussed the types of PSO algorithms, their objectives, and their properties. However, load balancing of VMs is still a big problem and must be considered for efficient resource allocation.
3 Proposed System Model
In this study, an efficient resource allocation model is presented to address the following issues: minimization of PT, RT and the overall cost of VMs, MGs, and total data transfer. The proposed structure has three layers: layer 1 (SG layer), layer 2 (fog layer) and layer 3 (cloud layer). The centralized cloud platform is used for data storage and macrogrid availability. The world is divided into six regions based on the continents [1], as graphically shown in Fig. 2. Each region contains one fog, which minimizes the RT and PT, three clusters and five MGs. There are 100 buildings in one cluster, and each building comprises 50 to 80 apartments. A smart meter is installed in every apartment.
An MG incorporates renewable energy; it has its own power generation resources and produces power on a small scale. A macrogrid produces a large amount of electricity; windmills, fossil fuels, water turbines, etc. are the sources of electricity for macrogrids. The fog in a region is able to respond to the requests of its three clusters and, based on the energy demand, forward these requests to the cloud server. MGs are situated near the clusters of buildings; however, consumers are not permitted to communicate directly with the MGs. The requests for electricity from clusters are sent to the fog through the controller. The fog communicates with the MGs in the same region to fulfill the consumers' needs, and the MGs send back an acknowledgement of the power they have. If they do not have adequate power, the fog communicates with the cloud to provide the macrogrid facility. The proposed system model is shown in Fig. 3.
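The MG-first, cloud-fallback flow described above can be sketched as follows. The function names, the greedy allocation order, and the capacity figures are illustrative assumptions; the paper only specifies that the fog asks its regional MGs first and contacts the cloud for the macrogrid when they fall short.

```python
def handle_request(demand_kw, microgrids, cloud_capacity_kw):
    """Sketch of the fog's request-handling flow.

    microgrids: available capacities (kW) of the five MGs in the fog's
    region. Returns (allocations, unmet) where allocations is a list of
    (source, kW) pairs.
    """
    allocations = []
    remaining = demand_kw
    # The fog first asks the MGs in its own region for power.
    for i, capacity in enumerate(microgrids):
        if remaining <= 0:
            break
        granted = min(capacity, remaining)
        if granted > 0:
            allocations.append((f"MG-{i + 1}", granted))
            remaining -= granted
    # If the MGs cannot cover the demand, the fog contacts the cloud,
    # which provides the macrogrid facility for the shortfall.
    if remaining > 0:
        granted = min(cloud_capacity_kw, remaining)
        allocations.append(("macrogrid", granted))
        remaining -= granted
    return allocations, remaining

# Example: 120 kW demand, five MGs with 20 kW each leave a 20 kW shortfall
alloc, unmet = handle_request(120, [20, 20, 20, 20, 20], 1000)
print(alloc[-1])  # ('macrogrid', 20)
print(unmet)      # 0
```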
4 Load Balancing Algorithms
Load balancing algorithms are used to distribute the workload so as to achieve minimum RT and PT. Round robin and throttled algorithms were used in [4] to balance the load of VMs; a new load balancing algorithm (PSOSA) is used in this scenario.

Fig. 2. Regions

Fig. 3. Proposed System Model

A number of particles form a swarm, and these particles communicate with each other. A particle is composed of three vectors (the x-vector, p-vector, and v-vector), which record the current location of the particle, the best solution found so far, and the direction in which the particle will travel. The following steps are performed in the PSOSA load balancing algorithm [2].
1. Initialize the number of particle swarms, the number of tasks, and the number of VMs.
2. Initialize the velocities and positions of the particles.
3. Define the adaptive functions, which include the task allocation strategy and a fitness value to measure the merit of the allocation strategy; f(i) is the fitness function and SumCost(i) is the total cost of the ith particle.
4. Compare each particle's fitness value with its individual extremum and the global extremum.
5. Update each particle's speed and position.
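The steps above can be sketched as a minimal PSO-with-SA-acceptance loop. The cost model (makespan of the busiest VM), the continuous position encoding, the parameter values, and the exact way the SA acceptance test is combined with the personal-best update are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(1)  # reproducible sketch

def fitness(assignment, task_lengths, vm_speeds):
    """Cost of an assignment: makespan, i.e. the time the busiest VM needs."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    return max(loads)

def psosa(task_lengths, vm_speeds, swarm_size=20, iters=200,
          w=0.7, c1=1.5, c2=1.5, temp=1.0, cooling=0.95):
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    # Steps 1-2: initialize swarm size, tasks, VMs, and the particles'
    # positions (task -> VM, encoded continuously) and velocities.
    pos = [[random.uniform(0, n_vms - 1) for _ in range(n_tasks)]
           for _ in range(swarm_size)]
    vel = [[0.0] * n_tasks for _ in range(swarm_size)]
    decode = lambda x: [min(n_vms - 1, max(0, round(v))) for v in x]
    pbest = [p[:] for p in pos]
    pcost = [fitness(decode(p), task_lengths, vm_speeds) for p in pos]
    g = min(range(swarm_size), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(n_tasks):
                # Step 5: update each particle's speed and position.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Steps 3-4: evaluate fitness and compare with the individual
            # extremum; the SA-style test occasionally accepts a worse
            # personal best to escape local optima.
            cost = fitness(decode(pos[i]), task_lengths, vm_speeds)
            delta = cost - pcost[i]
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i], pcost[i] = pos[i][:], cost
            if cost < gcost:  # the global extremum only improves
                gbest, gcost = pos[i][:], cost
        temp *= cooling  # cool the annealing temperature
    return decode(gbest), gcost

# Example: balance 8 tasks over 3 identical VMs (total length 30).
tasks = [4, 2, 7, 3, 5, 1, 6, 2]
assignment, makespan = psosa(tasks, [1, 1, 1])
print(len(assignment), makespan)
```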
5 Service Broker Policies
Resources are somewhat complicated to manage. Cloud computing creates a set of virtual resources, i.e., VMs. Service broker policies are used to route the traffic coming from the end user to the fog; these policies decide which fog should deal with a consumer's request. The following policies were used in [4].

A. Service Proximity Policy
– Service proximity policy is easy to implement.
– It maintains an index table of all fogs in each region.
– The fog is selected which has minimum latency and is closest to the cluster located in the same region.
– A fog is selected randomly if all fogs in the same region have the same minimum latency.

B. Optimize Response Time Policy
– It maintains an index table of all available fogs located in all regions.
– It checks the history to determine which fog provides the best RT.
– The fog in the same region with the best RT is assigned to the consumer.

C. Dynamically Reconfigure with Load
– This is a hybrid of the service proximity and optimize response time policies.
– The fog is selected which is closest to the cluster of the same region with the best RT.
– It also provides a facility for scalability.

D. New Dynamic Service Proximity
– New dynamic service proximity is the extension of dynamically reconfigure with load and service proximity policy.
– Fogs are allocated on the basis of minimum latency and the already existing traffic load on each fog, and the next fog to be selected is predicted.
The following are the policies proposed in this paper.
5.1 New Dynamic Response Time
– It is the extension of dynamically reconfigure with load and optimize response time.
– The history of all fogs is maintained in the form of an index table.
– The fog is assigned on the basis of the best RT in the same region, determined by checking the history of all fogs.

5.2 Enhanced New Dynamic Response Time
– This is the extension of new dynamic response time and service proximity policy.
– The RT of all fogs is maintained in a table.
– The fog having the best RT and minimum latency is allocated to the request coming from the cluster in the same region.
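The two proposed policies can be sketched as selection functions over a fog index table. The table structure, field names, and the modelling of "best RT and minimum latency" as a lexicographic minimum are illustrative assumptions; the paper describes the policies only at the level of the bullets above.

```python
def new_dynamic_response_time(fogs, region):
    """Pick the fog in the given region with the best (lowest) average RT
    from the maintained history table."""
    candidates = {name: f for name, f in fogs.items() if f["region"] == region}
    return min(candidates, key=lambda n: candidates[n]["avg_rt_ms"])

def enhanced_new_dynamic_response_time(fogs, region):
    """Pick the fog with the best RT and minimum latency in the region,
    modelled here as minimising the (RT, latency) pair lexicographically."""
    candidates = {name: f for name, f in fogs.items() if f["region"] == region}
    return min(candidates,
               key=lambda n: (candidates[n]["avg_rt_ms"],
                              candidates[n]["latency_ms"]))

# Hypothetical index table: fog name -> region, link latency, historical RT
fogs = {
    "fog-a": {"region": "Asia", "latency_ms": 12, "avg_rt_ms": 98.4},
    "fog-b": {"region": "Asia", "latency_ms": 8,  "avg_rt_ms": 98.4},
    "fog-c": {"region": "Asia", "latency_ms": 5,  "avg_rt_ms": 111.7},
}
print(new_dynamic_response_time(fogs, "Asia"))           # best RT only
print(enhanced_new_dynamic_response_time(fogs, "Asia"))  # RT tie broken by latency
```

The difference shows up when two fogs tie on RT: the enhanced policy prefers the one with lower latency.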
Table 1. Overall RT and PT

Policy                                   Avg (ms)  Min (ms)  Max (ms)
New Dynamic Proximity Policy       RT    111.71    37.91     31729
                                   PT    61.67     0.12      31682
New Dynamic Response Time          RT    98.4      36.71     33175
                                   PT    44.29     0.05      33124
Enhanced New Dynamic Response Time RT    99.05     37.92     33888
                                   PT    44.94     0.05      33837
6 Simulations and Discussion
In this paper, the CloudAnalyst tool is used for simulations. CloudAnalyst is used to work specifically on service broker policies and load balancing algorithms. The simulation results using the PSOSA load balancing algorithm with three service broker policies are discussed. For experimental purposes, PSOSA with new dynamic service proximity [4] is considered first and then compared with the two proposed policies: new dynamic response time and enhanced new dynamic response time.

A minimum number of requests is serviced during on-peak hours to minimize cost. RT is the time interval between the moment a request is sent to the fog and the moment the response to that request is received. The total time to process a request is known as PT. The overall RT and PT for PSOSA with the service broker policies new dynamic service proximity, new dynamic response time, and enhanced new dynamic response time are shown in Table 1. Each fog has a VM cost, an MG cost, and a data transfer cost. The grand total cost using the three different policies with PSOSA is shown in Table 2.
Table 2. Cost Comparison

                               New Dynamic       New Dynamic     Enhanced New Dynamic
                               Proximity Policy  Response Time   Response Time
Total VM cost ($)              1334.25           816.01          816.01
Total MG cost ($)              266.85            163.2           163.2
Total Data Transfer Cost ($)   289.11            289.11          289.1
Grand Total ($)                1890.21           1268.32         1268.31
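The grand totals in Table 2 are simply the sum of the three cost components per policy, which can be checked directly (the dictionary layout is an illustrative assumption; the figures are taken from Table 2):

```python
# Cost components per policy, in $, taken from Table 2.
costs = {
    "new dynamic proximity":              {"vm": 1334.25, "mg": 266.85, "dt": 289.11},
    "new dynamic response time":          {"vm": 816.01,  "mg": 163.20, "dt": 289.11},
    "enhanced new dynamic response time": {"vm": 816.01,  "mg": 163.20, "dt": 289.10},
}

# Grand total = VM cost + MG cost + data transfer cost.
for policy, c in costs.items():
    grand_total = round(c["vm"] + c["mg"] + c["dt"], 2)
    print(f"{policy}: {grand_total}")
```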
7 Conclusion
In this paper, an integrated fog and cloud based model is proposed to manage the SG resources optimally. It is analyzed that energy management is very important for both the demand side and the supply side. Service broker policies are used for the efficient selection of fogs, and the PSOSA algorithm is implemented along with two hybrid service broker policies. Furthermore, we observed that the overall cost of PSOSA with new dynamic response time and enhanced new dynamic response time is approximately 20% better than that of the existing policy. However, the maximum RT and PT of new dynamic service proximity are approximately 3% better than those of the proposed policies. Simulations are performed on the Java platform using CloudAnalyst. In the future, we will extend this study to five clusters and elaborate the system model.
References
1. Cao, Zijian, Jin Lin, Can Wan, Yonghua Song, Yi Zhang, and Xiaohui Wang. “Optimal cloud computing resource allocation for demand side management in smart grid.” IEEE Transactions on Smart Grid 8, no. 4 (2017): 1943-1955.
2. Xue, Shengjun, Wenling Shi, and Xiaolong Xu. “A heuristic scheduling algorithm based on PSO in the cloud computing environment.” International Journal of u- and e-Service, Science and Technology 9, no. 1 (2016): 349-362.
3. Chekired, Djabir Abdeldjalil, and Lyes Khoukhi. “Smart Grid Solution for Charging and Discharging Services Based on Cloud Computing Scheduling.” IEEE Transactions on Industrial Informatics 13, no. 6 (2017): 3312-3321.
4. Itrat Fatima, Nadeem Javaid, Muhammad Nadeem Iqbal, Isra Shafi, Ayesha Anjum, and Ubed Memon. “Integration of Cloud and Fog based Environment for Effective Resource Distribution in Smart Buildings.” In 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018).
5. Saman Zahoor, Nadeem Javaid, Asif Khan, Fatima J. Muhammad, Maida Zahid, and Mohsen Guizani. “A Cloud-Fog-Based Smart Grid Model for Efficient Resource Utilization.” In 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018).
6. Patel, Hetal, and Ritesh Patel. “Cloud Analyst: An Insight of Service Broker Policy.” International Journal of Advanced Research in Computer and Communication Engineering 4, no. 1 (2015): 122-127.
7. Khan, Saad, Simon Parkinson, and Yongrui Qin. “Fog computing security: a review of current applications and security solutions.” Journal of Cloud Computing 6, no. 1 (2017): 19.
8. Okay, Feyza Yildirim, and Suat Ozdemir. “A fog computing based smart grid model.” In 2016 International Symposium on Networks, Computers and Communications (ISNCC), pp. 1-6. IEEE, 2016.
9. Anila Yasmeen, Nadeem Javaid, Obaid Ur Rehman, Hina Iftikhar, Muhammad Faizan Malik, and Fatima J. Muhammad. “Efficient Resource Provisioning for Smart Buildings Utilizing Fog and Cloud Based Environment.” In 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018).
10. Sakeena Javaid, Nadeem Javaid, Sahrish Khan Tayyaba, Norin Abdul Sattar, Bibi Ruqia, and Maida Zahid. “Resource Allocation using Fog-2-Cloud based Environment for Smart Buildings.” In 14th IEEE International Wireless Communications and Mobile Computing Conference (IWCMC-2018).
11. Moghaddam, Mohammad Hossein Yaghmaee, Alberto Leon-Garcia, and Morteza Moghaddassian. “On the performance of distributed and cloud-based demand response in smart grid.” IEEE Transactions on Smart Grid (2017).
12. Chekired, Djabir Abdeldjalil, Lyes Khoukhi, and Hussein T. Mouftah. “Decentralized cloud-SDN architecture in smart grid: A dynamic pricing model.” IEEE Transactions on Industrial Informatics 14, no. 3 (2018): 1220-1231.
13. Gai, Keke, Meikang Qiu, Hui Zhao, Lixin Tao, and Ziliang Zong. “Dynamic energy-aware cloudlet-based mobile cloud computing model for green computing.” Journal of Network and Computer Applications 59 (2016): 46-54.
14. Chen, Shang-Liang, Yun-Yao Chen, and Suang-Hong Kuo. “CLB: A novel load balancing architecture and algorithm for cloud services.” Computers and Electrical Engineering 58 (2017): 154-160.
15. Masdari, Mohammad, Farbod Salehi, Marzie Jalali, and Moazam Bidaki. “A survey of PSO-based scheduling algorithms in cloud computing.” Journal of Network and Systems Management 25, no. 1 (2017): 122-158.
... Therefore, the presented system has performed in a way that response time (RT) increased, and the cost was optimized. In [76], [82] [75], [79], [80], [81] [77], [84] [78], [83] [88] ...
... Also, Fatima, et al. [80] depicted a model in which an integrated cloud and fog based system have optimized the resources in smart buildings and managed the SG resources. The paper divided the world into six regions based on its continents, and each continent contained a fog that minimized the RT and PT, five MG, and three clusters with 100 buildings in each one. ...
... These metrics are measured in most of the resource-management-based papers; hence, we bring them together here side by side.    Yasmeen, et al. [77]    Fatima, et al. [78]   Javaid, et al. [79]   Fatima, et al. [80]    Abbas, et al. [81]    Rehman, et al. [82]   Fatima, et al. [83]    Gill, et al. [84]   ...
Article
Full-text available
Smart homes are equipped residences for clients aiming at supplying suitable services via intelligent technologies. Through smart homes, household appliances as the Internet of Things (IoT) devices can easily be handled and monitored from a far distance by remote controls. With the day-to-day popularity of smart homes, it is anticipated that the number of connections rises faster. With this remarkable rise in connections, some issues such as substantial data volumes, security weaknesses, and response time disorders are predicted. In order to solve these obstacles and suggest an auspicious solution, fog computing as an eminently distributed architecture has been proposed to administer the massive, security-crucial, and delay-sensitive data, which are produced by communications of the IoT devices in smart homes. Indeed, fog computing bridges space between various IoT appliances and cloud-side servers and brings the supply side (cloud layer) to the demand side (user device layer). By utilizing fog computing architecture in smart homes, the issues of traditional architectures can be solved. This paper proposes a Systematic Literature Review (SLR) method for fog-based smart homes (published between 2014 and May 2019). A practical taxonomy based on the contents of the present research studies is represented as resource-management-based and service-management-based approaches. This paper also demonstrates an abreast comparison of the aforementioned solutions and assesses them under the same evaluation factors. Applied tools, evaluation types, algorithm types, and the pros and cons of each reviewed paper are observed as well. Furthermore, future directions and open challenges are discussed.
... The cost depends on two things: how much it costs to move data and how much it costs to run a virtual machine. We have calculated different costs associated with the execution of the job [45,46]. These costs are VM cost (VM cost ), the data transmission cost (DT cost ), and the MG cost (MG cost ) within the framework of the model that was presented. ...
Article
Full-text available
Data centers are producing a lot of data as cloud-based smart grids replace traditional grids. The number of automated systems has increased rapidly, which in turn necessitates the rise of cloud computing. Cloud computing helps enterprises offer services cheaply and efficiently. Despite the challenges of managing resources, longer response plus processing time, and higher energy consumption, more people are using cloud computing. Fog computing extends cloud computing. It adds cloud services that minimize traffic, increase security, and speed up processes. Cloud and fog computing help smart grids save energy by aggregating and distributing the submitted requests. The paper discusses a load-balancing approach in Smart Grid using Rock Hyrax Optimization (RHO) to optimize response time and energy consumption. The proposed algorithm assigns tasks to virtual machines for execution and shuts off unused virtual machines, reducing the energy consumed by virtual machines. The proposed model is implemented on the CloudAnalyst simulator, and the results demonstrate that the proposed method has a better and quicker response time with lower energy requirements as compared with both static and dynamic algorithms. The suggested algorithm reduces processing time by 26%, response time by 15%, energy consumption by 29%, cost by 6%, and delay by 14%.
... Li et al. [19] proposed the "Energy-Efficient Computation Offloading and Resource Allocation (ECORA)" techniques to reduce the overall cost of the system. Authors in [20,21,22] proposed suitable resource allocation techniques for residential buildings, consumers' power requests, and time-sensitive IoT-fog applications in a fog computing environment, respectively. ...
Article
Fog computing is an emerging technology which enables computing resources accessibility close to the end-users. It overcomes the drawbacks of available network bandwidth and delay in accessing the computing resources as observed in cloud computing environment. Resource allocation plays an important role in resource management in a fog computing environment. However, the existing traditional resource allocation techniques in fog computing do not guarantee less execution time, reduced energy consumption, and low latency requirements which is a pre-requisite for most of the modern fog computing-based applications. The complex fog computing environment requires a robust resource allocation technique to ensure the quality and optimal resource usage. Motivated from the aforementioned challenges and constraints, in this article, we propose a resource allocation technique for SDN-enabled fog computing with Collaborative Machine Learning (CML). The proposed CML model is integrated with the resource allocation technique for the SDN-enabled fog computing environment. The FogBus and iFogSim are deployed to test the results of the proposed technique using various performance evaluation metrics such as bandwidth usage, power consumption, latency, delay, and execution time. The results obtained are compared with other existing state-of-the-art techniques using the aforementioned performance evaluation metrics. The results obtained show that the proposed scheme reduces 19.35% processing time, 18.14% response time, and 25.29% time delay. Moreover, compared to the existing techniques, it reduces 21% execution time, 9% network usage, and 7% energy consumption.
... e load on cloud data center is still there because by migration we can only achieve high utilization rate of network resources. 4 Mathematical Problems in Engineering classified into 6 regions [27][28][29][30][31][32][33][34][35][36]. In this research, we consider region 0, which is North America, because the percentage of users in this region is 80 million [31]. ...
Article
Full-text available
As the cloud data centers size increases, the number of virtual machines (VMs) grows speedily. Application requests are served by VMs be located in the physical machine (PM). The rapid growth of Internet services has created an imbalance of network resources. Some hosts have high bandwidth usage and can cause network congestion. Network congestion affects overall network performance. Cloud computing load balancing is an important feature that needs to be optimized. Therefore, this research proposes a 3-tier architecture, which consists of Cloud layer, Fog layer, and Consumer layer. The Cloud serves the world, and Fog analyzes the services at the local edge of network. Fog stores data temporarily, and the data is transmitted to the cloud. The world is classified into 6 regions on the basis of 6 continents in consumer layer. Consider Area 0 as North America, for which two fogs and two cluster buildings are considered. Microgrids (MG) are used to supply energy to consumers. In this research, a real-time VM migration algorithm for balancing fog load has been proposed. Load balancing algorithms focus on effective resource utilization, maximum throughput, and optimal response time. Compared to the closest data center (CDC), the real-time VM migration algorithm achieves 18% better cost results and optimized response time (ORT). Realtime VM migration and ORT increase response time by 11% compared to dynamic reconFigure with load (DRL) with load. Realtime VM migration always seeks the best solution to minimize cost and increase processing time.
... Zeus identified requests that were common for multiple applications and performed only once their required tasks, sharing the results among the applications for saving resources. Fatima et al. [9] proposed an efficient resource allocation model for residential buildings in smart grid using fog and cloud computing. They aimed to optimize the resources in residential buildings. ...
Article
Full-text available
Nowadays, video-on-demand (VoD) providers offer multiple-quality video streaming services to users, known as multi-version VoD. Unlike traditional VoD, multi-version VoD providers must allocate bandwidth and transcoding computation resources simultaneously. However, most existing resource allocation works focus only on cost reduction or bandwidth optimization and do not consider allocating transcoding computation resources for multi-version VoD systems. Therefore, how to allocate bandwidth and transcoding computation resources simultaneously for multi-version VoD systems remains a major challenge. In this paper, we propose a queue-based and learning-based dynamic resource allocation strategy (QLRA) for the virtual streaming media server cluster of a multi-version VoD system. First, we analyze user behavior and model the virtual streaming media server cluster as an M/G/n queueing system; based on queueing theory, we allocate initial resources for the cluster. Second, taking changes in the user arrival rate and the workload of the multi-version VoD system as feedback, we introduce a learning automaton to allocate resources dynamically for the cluster. Third, we evaluate QLRA against other methods, and the results show the correctness and effectiveness of our strategy.
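The queueing-based initial sizing step described above can be sketched very simply: given an arrival rate and a mean service time, the offered load in Erlangs divided by a target per-server utilization gives a lower bound on the number of servers. The numbers and the 0.7 utilization target are assumptions for illustration, not values from the cited paper:

```python
import math

def initial_servers(arrival_rate, mean_service_time, target_util=0.7):
    """Smallest server count n keeping per-server utilization
    rho = (arrival_rate * mean_service_time) / n below target_util,
    as in an M/G/n sizing rule of thumb."""
    offered_load = arrival_rate * mean_service_time  # Erlangs
    return max(1, math.ceil(offered_load / target_util))

# e.g. 120 requests/s with a 50 ms mean transcoding + streaming service time
print(initial_servers(120, 0.05))  # offered load = 6 Erlangs -> 9 servers
```

A full M/G/n analysis would additionally bound waiting time (e.g. via the Pollaczek-Khinchine approximation), but the utilization bound above already fixes the minimum cluster size.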
... Fatima et al. [101] proposed a fog computing based model for resource allocation in residential buildings. The performance has been measured using response time and processing time. ...
Article
Full-text available
In recent years, due to the unnecessary wastage of electrical energy in residential buildings, energy optimization and user comfort have gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user achieves the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in the parameters to reduce energy consumption. To the best of our knowledge, this problem is not yet solved, owing to its challenging nature: technology keeps advancing, existing optimization algorithms have drawbacks, and new optimization algorithms continue to be introduced. Further, many newly proposed optimization algorithms have produced better accuracy on benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we carry out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes, with a detailed discussion of the factors contributing to thermal comfort, visual comfort, and air quality comfort. We also review the fog and edge computing techniques used in smart homes.
Thesis
Full-text available
In this thesis, a blockchain-based data sharing and access control system is proposed for communication between Internet of Things (IoT) devices. The proposed system is intended to overcome the issues related to trust and authentication for access control in IoT networks, with the objectives of achieving trustfulness, authorization, and authentication for data sharing. Multiple smart contracts, namely an Access Control Contract (ACC), a Register Contract (RC), and a Judge Contract (JC), are used to provide efficient access control management: the ACC manages the overall access control of the system, the RC is used to authenticate users, and the JC implements a behavior-judging method for detecting misbehavior of a subject (i.e., a user). After misbehavior is detected, a penalty is imposed on that subject. Several permission levels are set for users of IoT devices to share services with others. Finally, the performance of the proposed system is analyzed by calculating the cost consumption rate of the smart contracts and their functions, and a comparison is made between the existing and proposed systems. Results show that the proposed system is efficient in terms of cost: the overall execution cost of the system is 6,900,000 gas units and the transaction cost is 5,200,000 gas units.
Article
The Internet of Things (IoT) allows communication between devices, things, and any digital assets that send and receive data over a network without requiring human interaction. The main characteristic of the IoT is the enormous quantity of data created by end-users' devices, which needs to be processed in the cloud in a short time. The current cloud computing model is not efficient enough to analyze very large amounts of data in a very short time and satisfy users' requirements: analyzing such quantities of data in the cloud takes a long time, which degrades the quality of service (QoS) and negatively influences IoT applications and overall network performance. To overcome these challenges, a new architecture called edge computing, which decentralizes data processing from the cloud to the network edge, has been proposed to solve the problems that occur with the cloud computing approach. Furthermore, edge computing supports IoT applications that require a short response time and consequently improves energy consumption, resource utilization, etc. Motivated by the extensive research efforts in edge computing and IoT applications, in this paper we present a comprehensive review of edge and fog computing research in the IoT. We investigate the role of cloud, fog, and edge computing in the IoT environment. Subsequently, we cover in detail different IoT use cases with edge and fog computing, task scheduling in edge computing, the merger of software-defined networks (SDN) and network function virtualization (NFV) with edge computing, security and privacy efforts, and blockchain in edge computing. Finally, we identify open research challenges and highlight future research directions.
Thesis
Cloud computing offers various services, provided to users worldwide through numerous cloud data centers. A cloud data center houses physical machines (PMs), and millions of virtual machines (VMs) are used to minimize the utilization rate of the PMs. The dramatic growth of Internet services results in unbalanced network resources, and resource management is an important factor in the performance of a cloud. Various techniques are used to manage the resources of a cloud efficiently. VM consolidation is an intelligent and efficient strategy to balance the load of cloud data centers, and VM placement is an important subproblem of VM consolidation that needs to be resolved. The basic objective of VM placement is to minimize the utilization rate of PMs, which saves energy and cost. In this thesis, an enhanced levy-based particle swarm optimization algorithm with bin packing (PSOLBP) is proposed for solving the VM placement problem, using the best-fit strategy. Simulations are conducted to validate the adaptability of the proposed algorithm. Three algorithms are implemented in Matlab, and the proposed algorithm is compared with simple particle swarm optimization (PSO) and a hybrid of levy flight and particle swarm optimization (LFPSO). The proposed algorithm efficiently minimizes the number of running PMs. Further, an enhanced levy-based multi-objective gray wolf optimization (LMOGWO) algorithm is proposed to solve the VM placement problem efficiently. An archive is used to store and retrieve the true Pareto front, a grid mechanism is used to improve the non-dominated VMs in the archive, and a further mechanism is used for archive maintenance. The proposed algorithm mimics the leadership and hunting behavior of gray wolves (GWs) in a multi-objective search space and is tested on nine well-known bi-objective and tri-objective benchmark functions to verify its compatibility.
LMOGWO is then compared with simple multi-objective gray wolf optimization (MOGWO) and multi-objective particle swarm optimization (MOPSO). Two scenarios are considered in the simulations to check the adaptability of the proposed algorithm. The proposed LMOGWO outperformed MOGWO and MOPSO on University of Florida 1 (UF1), UF5, UF7 and UF8 for Scenario 1; however, MOGWO and MOPSO performed better than LMOGWO on UF2. For Scenario 2, LMOGWO outperformed the other two algorithms on UF5, UF8 and UF9, while MOGWO performed well on UF2 and UF4, and the results of MOPSO were also better than the proposed algorithm on UF4. Moreover, the PM utilization rate (%) is minimized by 30% with LMOGWO, 11% with MOGWO and 10% with MOPSO. VM consolidation is an NP-hard problem; nevertheless, the proposed algorithms outperformed the existing ones.
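The bin-packing view of VM placement mentioned in this thesis can be illustrated with a classic best-fit-decreasing sketch, where each PM is a bin and each VM a demand. The integer CPU-unit demands and the single-resource model below are simplifying assumptions for illustration; the thesis combines this idea with PSO rather than using the heuristic alone:

```python
def best_fit_decreasing(vm_demands, pm_capacity):
    """Place VMs on as few PMs as possible; return the load of each PM.

    All PMs have identical capacity. VMs are placed largest first, each
    onto the active PM with the least remaining space that still fits it
    (best fit); a new PM is powered on only when nothing fits.
    """
    free = []  # remaining capacity per active PM
    for vm in sorted(vm_demands, reverse=True):
        fits = [i for i, f in enumerate(free) if f >= vm]
        if fits:
            i = min(fits, key=lambda i: free[i])  # tightest fit wins
            free[i] -= vm
        else:
            free.append(pm_capacity - vm)         # power on a new PM
    return [pm_capacity - f for f in free]

# Five VM demands (in CPU units) packed into 10-unit PMs
print(best_fit_decreasing([5, 7, 3, 4, 1], 10))  # -> [10, 10]: two full PMs
```

Metaheuristics such as PSOLBP explore placements beyond this greedy order, using the bin-packing bound as a baseline for the number of running PMs.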
Article
Full-text available
Fog computing is a new paradigm that extends the Cloud platform model by providing computing resources on the edges of a network. It can be described as a cloud-like platform having similar data, computation, storage and application services, but it is fundamentally different in that it is decentralized. In addition, Fog systems are capable of processing large amounts of data locally, operate on-premise, are fully portable, and can be installed on heterogeneous hardware. These features make the Fog platform highly suitable for time- and location-sensitive applications. For example, Internet of Things (IoT) devices are required to quickly process large amounts of data. This wide range of functionality-driven applications intensifies many security issues regarding data, virtualization, segregation, network, malware and monitoring. This paper surveys the existing literature on Fog computing applications to identify common security gaps. Similar technologies like Edge computing, Cloudlets and Micro-data centres have also been included to provide a holistic review process. The majority of Fog applications are motivated by the desire for functionality and end-user requirements, while the security aspects are often ignored or considered as an afterthought. This paper also determines the impact of those security issues and possible solutions, providing future security-relevant directions to those responsible for designing, developing, and maintaining Fog systems.
Conference Paper
Full-text available
Traditional electricity generation based on fossil fuel consumption threatens humanity with global warming, climate change, and increased carbon emissions. Renewable resources such as wind or solar power are the solution to these problems, and the smart grid is the only choice for integrating green power resources into the energy distribution system, controlling power usage, and balancing the energy load. Smart grids employ smart meters, which are responsible for the two-way flow of electricity information to monitor and manage electricity consumption. In a large smart grid, smart meters produce a tremendous amount of data that is hard to process, analyze and store, even with cloud computing. Fog computing is an environment that offers a place for collecting, computing and storing smart meter data before transmitting it to the cloud. This environment acts as a bridge between the smart grid and the cloud. It is geographically distributed and extends cloud computing with additional capabilities, including reduced latency, increased privacy and locality for smart grids. This study gives an overview of fog computing in smart grids by analyzing its capabilities and issues. It presents the state of the art in the area, defines a fog-computing-based smart grid and gives a use case scenario for the proposed model.
Article
Full-text available
Cloud computing provides effective mechanisms for distributing the computing tasks to the virtual resources. To provide cost-effective executions and achieve objectives such as load balancing, availability and reliability in the cloud environment, appropriate task and workflow scheduling solutions are needed. Various metaheuristic algorithms are applied to deal with the problem of scheduling, which is an NP-hard problem. This paper presents an in-depth analysis of the Particle Swarm Optimization (PSO)-based task and workflow scheduling schemes proposed for the cloud environment in the literature. Moreover, it provides a classification of the proposed scheduling schemes based on the type of the PSO algorithms which have been applied in these schemes and illuminates their objectives, properties and limitations. Finally, the critical future research directions are outlined.
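A minimal instance of the PSO-based scheduling schemes this survey classifies is task-to-VM assignment that minimizes makespan. The sketch below encodes a particle as a real vector whose rounded entries are VM indices; the inertia and acceleration coefficients, task times, and VM speeds are illustrative assumptions, not parameters from any surveyed scheme:

```python
import random

def pso_schedule(task_times, vm_speeds, particles=20, iters=100, seed=1):
    """Minimize makespan of assigning tasks to VMs with a basic PSO.

    A particle is a real vector; rounding each entry gives the VM index
    for the corresponding task. Toy illustration, not a tuned scheduler.
    """
    rng = random.Random(seed)
    n, m = len(task_times), len(vm_speeds)

    def makespan(pos):
        finish = [0.0] * m
        for t, x in zip(task_times, pos):
            v = min(max(int(round(x)), 0), m - 1)  # clamp to a valid VM
            finish[v] += t / vm_speeds[v]
        return max(finish)

    swarm = [[rng.uniform(0, m - 1) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=makespan)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - p[d])     # cognitive
                             + 1.5 * r2 * (gbest[d] - p[d]))       # social
                p[d] += vel[i][d]
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
                if makespan(p) < makespan(gbest):
                    gbest = p[:]
    return makespan(gbest)

# Four tasks on two equal-speed VMs; the optimum makespan is 5.0
print(pso_schedule([4, 3, 2, 1], [1.0, 1.0]))
```

Real schemes differ mainly in the particle encoding, the fitness function (cost, energy, deadline violations), and hybridizations such as the PSOSA combination used in the present paper.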
Conference Paper
In this paper, a new orchestration of a Fog-2-Cloud based framework is presented for efficiently managing the resources in residential buildings. It is a three-layered framework consisting of a cloud layer, a fog layer and a consumer layer. The cloud layer is responsible for the on-demand delivery of resources. Effective resource management is done through the fog layer, because it minimizes latency and enhances the reliability of cloud facilities. The consumer layer consists of the residential users, who fulfill their daily electricity demands through the fog and cloud layers. Six regions are considered in the study, where each region has a cluster of buildings varying between 80 and 150 and each building has 80 to 100 homes. Load requests of the consumers are considered fixed during every hour of the day. Two control parameters are considered, clusters of buildings and load requests, along with three performance parameters: requests per hour, response time and processing time. These parameters are optimized by the round robin algorithm, the equally spread current execution algorithm and our proposed algorithm, shortest job first. The simulation results show that our proposed technique outperforms the previous techniques in terms of the aforementioned parameters, although a tradeoff occurs in the processing time of the algorithms as compared to response time and requests per hour.
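The shortest-job-first idea this abstract proposes can be contrasted with arrival-order (FIFO) scheduling in a few lines. The single-VM, non-preemptive event loop and the sample jobs below are illustrative assumptions, not the paper's simulation setup:

```python
import heapq

def schedule(jobs, policy):
    """Average response time for non-preemptive single-server scheduling.

    jobs: list of (arrival_time, service_time).
    policy: "fifo" (serve in arrival order) or "sjf" (shortest job first
    among the requests that have already arrived).
    """
    pending, responses, clock, i = [], [], 0.0, 0
    jobs = sorted(jobs)
    while i < len(jobs) or pending:
        if not pending and clock < jobs[i][0]:
            clock = jobs[i][0]                 # idle until next arrival
        while i < len(jobs) and jobs[i][0] <= clock:
            arr, svc = jobs[i]
            key = svc if policy == "sjf" else arr
            heapq.heappush(pending, (key, arr, svc))
            i += 1
        _, arr, svc = heapq.heappop(pending)   # serve best pending request
        clock += svc
        responses.append(clock - arr)          # response = finish - arrival
    return sum(responses) / len(responses)

jobs = [(0, 8), (1, 2), (2, 1)]
print(schedule(jobs, "fifo"), schedule(jobs, "sjf"))  # SJF averages lower
```

The long first job illustrates SJF's advantage: once it completes, the two short waiting requests are served shortest-first, lowering the mean response time relative to FIFO.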
Conference Paper
The fog computing concept is introduced to reduce the load on the cloud and provide services similar to the cloud. However, a fog covers a smaller area than the cloud, storing data temporarily and sending it to the cloud for permanent storage. In this paper, an integrated fog and cloud based environment for effective energy management of buildings is proposed, so the load on the cloud and the fog should be balanced. Various load balancing algorithms are used to manage the load among virtual machines (VMs); in this scenario, the algorithm used for load balancing among VMs is round robin (RR). The service broker policies considered in this paper are dynamically reconfigure with load (DR) and the proposed policy, new dynamic service proximity (DSP), which is proposed for fog selection. The results of the DSP policy are compared with the DR policy, and a tradeoff is observed between cost and response time.
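The round-robin policy used among VMs in this work is simple enough to sketch directly: requests are handed to VMs in a fixed cyclic order, regardless of current load. The VM names and request labels below are made up for the example:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand each incoming request to the next VM in a fixed cycle,
    illustrating RR load balancing among VMs."""

    def __init__(self, vms):
        self._vms = cycle(vms)  # endless iterator over the VM list

    def assign(self, request):
        return (request, next(self._vms))

rr = RoundRobinBalancer(["vm0", "vm1", "vm2"])
print([rr.assign(f"req{i}") for i in range(5)])
# req0->vm0, req1->vm1, req2->vm2, req3->vm0, req4->vm1
```

RR's appeal is its statelessness; the tradeoff the abstract observes arises because RR ignores request size and VM load, unlike the service broker policies layered on top of it.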
Article
Smart grid (SG) energy management systems and Electric Vehicles (EVs) have gained considerable attention in recent years, enabled by the high growth of EVs on roads; however, this may lead to a significant impact on the power grids. In order to keep EVs from causing peaks in power demand and to manage building energy during the day, it is important to perform intelligent scheduling of the EV charging and discharging service and of building areas, taking into account different metrics such as the real-time price and the demand-supply curve. In this paper, we propose a real-time dynamic pricing model for the EV charging and discharging service and building energy management, in order to reduce peak loads. Our proposed approach uses a decentralized cloud computing architecture based on Software Defined Networking (SDN) technology and Network Function Virtualization (NFV). We aim to schedule users' requests in real time and to supervise communications between micro-grid controllers, the smart grid and user entities (i.e., EVs, Electric Vehicle Public Supply Stations (EVPSS), Advanced Metering Infrastructure (AMI), smart meters, etc.). We formulate the problem as a linear optimization problem for each EV and a global optimization problem for all micro-grids, and solve the problems using different decentralized decision algorithms. To the best of our knowledge, this is the first paper that proposes a pricing model based on a decentralized cloud-SDN architecture to solve all the aforementioned issues. Extensive simulations and comparisons with related works prove that our proposed pricing model optimizes the energy load during peak hours, maximises EV utility and maintains micro-grid stability. The simulation is based on the real electric load of the city of Toronto.
Article
Smart Grid (SG) technology represents an unprecedented opportunity to move the energy industry into a new era of reliability, availability, and efficiency that will contribute to our economic and environmental health. On the other hand, the emergence of Electric Vehicles (EVs) promises to yield multiple benefits to both the power and transportation industry sectors, but it is also likely to affect SG reliability by consuming massive amounts of energy. Consequently, the plug-in of EVs at public supply stations must be controlled and scheduled in order to reduce the peak load. This paper considers the problem of plugging in EVs at public supply stations (EVPSS). A new communication architecture for the smart grid and cloud services is introduced, and scheduling algorithms are proposed to assign priority levels and optimize the waiting time to plug in at each EVPSS. To the best of our knowledge, this is one of the first papers investigating the aforementioned issues using a new network architecture for the smart grid based on cloud computing. We evaluate our approach via extensive simulations and compare it with two other recently proposed works, based on a real energy supply scenario in Toronto. Simulation results demonstrate the effectiveness of the proposed approach when considering real EV charging-discharging loads during peak-hour periods.
Article
By locally solving an optimization problem and broadcasting an update message over the underlying communication infrastructure, demand response programs based on the distributed optimization model encourage all users to participate in the program. However, some challenging issues present themselves, such as the assumption of an ideal communication network, especially when utilizing wireless communication, and the effects of communication channel properties, such as the bit error rate, on the overall performance of the demand response program. To address these issues, this paper first defines a Cloud-based Demand Response (CDR) model, which is implemented as a two-tier cloud computing platform. A communication model is then proposed to evaluate the communication performance of both the CDR and DDR (Distributed Demand Response) models. The present study shows that when users are finely clustered, the channel bit error rate is high, and the User Datagram Protocol (UDP) is leveraged to broadcast the update messages, the optimal solution becomes unachievable. In contrast to UDP, the Transmission Control Protocol (TCP) requires higher bandwidth and increases the delay in the convergence time. Finally, the current work presents a cost-effectiveness analysis which confirms that achieving higher demand response performance incurs a higher communication cost.
Article
Cloud services are widely used in manufacturing, logistics, digital applications, and document processing. Cloud services must be able to handle tens of thousands of concurrent requests, enabling servers to seamlessly provide the load balancing capacity required to respond to incoming application traffic while allowing users to obtain information quickly and accurately. In the past, researchers have proposed using static load balancing or server response times to evaluate load balancing capacity, the lack of which may cause servers to be loaded unevenly. In this study, a dynamic annexed balance method is used to solve this problem. Cloud load balancing (CLB) takes into consideration both server processing power and current load, thus making it less likely that a server will be unable to handle excessive computational requirements. Finally, two CLB algorithms are presented, with experiments demonstrating that the proposed approach is innovative.