Cloud and Fog based Integrated Environment for
Load Balancing using Cuckoo Levy Distribution and
Flower Pollination for Smart Homes
Nadeem Javaid1, Ayesha Anjum Butt1, Kamran Latif2 and Amjad Rehman3
1COMSATS University Islamabad, Islamabad 44000, Pakistan
2National Institute of Electronics, Islamabad 44000, Pakistan
3MIS Department COBA, Al Yamamah University, Riyadh 11512, Saudi Arabia
Abstract—Reducing delay and latency in a cloud computing environment is a challenging task for the research community. The world's smart cities contain numerous Smart Communities (SCs), which in turn comprise Smart Buildings (SBs) and Smart Homes (SHs) that require resources to process and store data in the cloud. To address these challenges, another infrastructure, the fog computing environment, has been introduced; it plays an important role in enhancing the efficiency of the cloud. Virtual Machines (VMs) installed on fog servers are allocated consumers' requests. In this paper, a cloud and fog based integrated environment is proposed to overcome the delay and latency issues of the cloud and to enhance the performance of the fog. When a large number of requests arrives at the fog and cloud, load balancing becomes another major issue, which this paper also addresses. Two load balancing algorithms, Cuckoo search with Levy Walk distribution (CLW) and Flower Pollination (FP), are proposed and compared with the existing Cuckoo Search (CS) and BAT algorithms. The comparative analysis of the proposed and existing techniques is performed on the basis of the Closest Data Center (CDC), Optimize Response Time (ORT) and Reconfigure Dynamically with Load (RDL) service broker policies. The Response Time (RT) of cloud Data Centers (DCs) and the Processing Time (PT) of cluster fogs are also optimized using CLW and FP.
Index Terms—Cloud Computing, Fog Computing, Smart Homes, Load Balancing, Service Broker Policy, Cuckoo Search with Levy Distribution, Flower Pollination, Cuckoo Search, BAT
I. Introduction
Cloud computing is a tremendously active domain in which a large number of requests and compute-intensive tasks are processed on demand. Cloud computing offers valuable features such as resource pooling and elasticity, self and on-demand services, pricing, quality of service and scalability. Owing to these features, cloud based services are provisioned to consumers at every level, where resource management plays an important role. The resources and services provided to consumers may serve an industrial area or the residential area of Smart Homes (SHs). When requests are sent to the cloud from residential or industrial consumers, delay arises because of the cloud's remote location, so the Response Time (RT) and Processing Time (PT) of cloud Data Centers (DCs) also increase. Another issue caused by the distant location is latency. To overcome these issues while retaining the strengths of the cloud, fog computing was recently introduced by Cisco.
The fog has the additional attributes of location awareness and edge DC deployment because of its edge location. Fog computing supports distributed computing solutions that achieve high performance in terms of scalability and elasticity. A Virtual Machine (VM) based environment is introduced in fog computing to reduce computational cost and to increase the sharing of information and resources.
The Internet of Things (IoT) gains popularity on a regular basis because, in this era, every object is connected over the internet. Because of this connectivity, most surroundings and objects become smarter and more intelligent [1]. It is now easier to enhance the performance of objects and to remove end-to-end delay. The IoT also plays a vital role in converting the traditional grid into a Smart Grid (SG) equipped with communication equipment.
The introduction of the SG means that there is two-way communication between the grid and the consumer side. The cities, buildings and homes on the consumer side are converted into smart cities, Smart Buildings (SBs) and SHs. SHs represent a modern way of living. Owing to this smart environment, the consumption status of SHs can easily be monitored by the SG. With two-way communication between SBs and the SG, consumers can easily reduce their electricity bills by selling their surplus energy to the SG or to other SBs.
The cloud and fog also bring efficiency to the smart environment by handling the challenges of SBs and SHs. The cloud provides storage for the surplus energy of SHs, SBs and SCs, while the fog provides grids and resources nearer to consumers to run their applications and requests [2]. The cloud and edge cloud play an impactful role for smart mobile devices [3]: process state synchronization is introduced to reduce the execution time of the cloud.
2019 International Conference on Computer and Information Sciences (ICCIS), 978-1-5386-8125-1 © 2019 IEEE
The RT and connectivity issues are also resolved because of the edge cloud. In [4], the cloud edge is used to reduce cost and to play a vital role in low power computing; it also reduces the power consumption and execution time of DCs. In [5], edge and cloud computing are combined to enhance the features of both. The purpose of introducing the fog is to minimize the latency and delay of cloud based environments.
The Chaotic Social Spider Algorithm (CSSA) is proposed as a solution for the efficient utilization of resources: the overall makespan of cloud computing is reduced and the load is balanced across the DCs of the cloud [6]. A Stackelberg game is proposed in a cloud computing environment [7]; the purpose of this game theory approach is to minimize cost and to find appropriate resource allocation information for the DCs of the cloud.
A. Contributions
The previous studies summarized above reveal how the cloud performs in IoT settings, and how cloud and fog computing enhance the efficiency of the SG and convert homes into SHs. Keeping these studies in mind, in this paper we implement a cloud and fog based integrated environment. Regarding the efficiency and performance of the implemented system, the contributions of this paper can be summarized as follows:
- a cloud and fog based integrated environment is implemented for six regions of the world,
- the cloud provides a storage facility to the consumers of SHs,
- the fogs of the implemented scenario overcome the delay and latency issues,
- load balancing is performed to balance the load of fog and cloud,
- Cuckoo search with Levy Walk distribution (CLW) and Flower Pollination (FP) are proposed for load balancing,
- the proposed techniques are compared with the existing Cuckoo Search (CS) and BAT algorithms,
- the comparison of techniques is performed on the basis of the Closest Data Center (CDC), Optimize Response Time (ORT) and Reconfigure Dynamically with Load (RDL) service broker policies,
- the RT of the proposed techniques is minimized,
- the PT of the proposed CLW and FP is also decreased under the ORT and RDL service broker policies,
- the VM cost, Micro Grid (MG) cost, Data Transfer (DT) cost and Total Cost (TC) of the proposed CLW and FP are also minimized.
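As a hedged illustration of the two proposed update rules, the sketch below shows one Levy-flight cuckoo move and one flower pollination move over a candidate VM-load vector. The Levy step uses Mantegna's standard algorithm; `alpha`, `beta` and the switch probability `p` are assumed illustrative parameters, not values taken from the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_levy_update(position, best, alpha=0.01):
    """One CLW-style move: perturb a candidate load vector with a Levy walk
    scaled by its distance from the best solution found so far."""
    return [x + alpha * levy_step() * (x - b) for x, b in zip(position, best)]

def flower_pollination_update(position, best, peer_a, peer_b, p=0.8):
    """One FP-style move: global pollination (Levy flight toward the best)
    with probability p, otherwise local pollination between two peers."""
    if random.random() < p:
        L = levy_step()
        return [x + L * (b - x) for x, b in zip(position, best)]
    eps = random.random()
    return [x + eps * (a - b) for x, a, b in zip(position, peer_a, peer_b)]
```

In a full algorithm these moves would be iterated over a population of candidate request-to-VM assignments, keeping the fitter of the old and new candidates at each step.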
The remainder of the paper is organized as follows: related studies are presented in Section II and the problem statement in Section III. The system model with the proposed methodology is demonstrated in Section IV. Section V describes the simulation results of our proposed schemes, and Section VI concludes the paper.
II. Related Work
Fog computing is proposed as an extension of cloud computing in [8]. The fog provides computational and storage services to the SG and also deals with the fault tolerance and line loss problems of the SG. A task scheduling algorithm is proposed because it is also necessary to complete all tasks on time; the aim of this mechanism is to schedule tasks on time and to reduce the delay and latency between fog nodes, with delay reduced through a reallocation mechanism. However, to keep the system simple, resources and computational time are omitted. The Dynamic Clustering League Championship Algorithm (DCLCA) is proposed in [9] to deal with the fault tolerance problem. Earlier schemes and algorithms proposed for scheduling ignored fault tolerance; the aim of this work is to address task allocation with fault tolerance awareness in cloud computing. On the other hand, the performance of the proposed algorithm on the implemented system is not discussed.
Carrasco et al. presented the standards Topology and Orchestration Specification for Cloud Applications (TOSCA) and CAMP to satisfy the consumer level, since services are provided to customers on the basis of three platforms, i.e., Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), to meet Quality of Service (QoS) and Service Level Agreement (SLA) requirements. In this research work, a TOSCA based model is proposed for PaaS and SaaS to be rigorously followed, and CAMP concepts are used to build a unified Application Programming Interface (API). The proposed work satisfies services better on PaaS than on SaaS [10]. Server selection, configuration, reconfiguration and automatic performance technology is proposed by Yamato in [11]. The aim of this technology is also to meet the service level of the cloud at different levels. Because of the various cloud levels, there are also several server types, such as CPU servers, container and baremetal servers, and, to differentiate this work from previous research, Field Programmable Gate Arrays (FPGAs). The implemented technology works with GPU servers and shows that VM servers perform better than a CPU server. However, it is complex to implement and understand for a simple scenario.
In [12], multi-tier fog computing with large scale data analytics is proposed for smart city applications. The aim of this model is to tackle the problems of computing infrastructure and the limited responsiveness of the cloud. The model handles large-scale data and meets the QoS requirements of consumers; it also contains ad hoc nodes to process incoming requests. The resources inside this model are shared, due to which its performance in meeting the QoS scheme increases. However, other attributes of the model, e.g., implementation cost, resource utilization cost, and the RT and PT of the model, could be discussed. In [13], the authors discuss the problem of completion time in the cloud, because it is challenging to schedule applications when there is a large number of Scientific Work Flow Applications (SWFA). These SWFA affect the performance and optimization cost of the cloud because of the varying number of VMs and sizes of DCs. A Completion Time Driven Hyper-Heuristic (CTDHH) based on a Low Level Heuristic (LLH) is proposed to optimize the cost and reduce the completion time of cloud resources. The implemented approach is compared with Hyper-Heuristic Scheduling Algorithms (HHSA) such as the Genetic Algorithm (GA), PSO, etc. After implementation, CTDHH performs better and takes less computational time than the other algorithms. On the other hand, the authors could consider load balancing and a fog based integrated model in their work to achieve more efficiency.
The Water Constrained workload scheduling (WATCH) algorithm for DCs is proposed in [14]. By implementing WATCH, the authors reduce the workload on cloud DCs and also aim to minimize operational cost. The scheduling algorithm is designed with the whole geographic distribution of DCs in mind and follows Geographic Load Balancing (GLB). The algorithm follows the attributes of water: maximum-size requests move towards another DC when that DC appears free and has fewer requests. The proposed algorithm also achieves minimal operational cost. On the other side, the RT and PT of the proposed scheduling algorithm could be considered.
Sotiriadis et al. propose a cloud VM scheduling algorithm in [15] to optimize the performance of VMs in an efficient manner. The aim of this work is to minimize the workload, and its degradation, on cloud DCs. When a VM receives the maximum number of requests and its CPU utilizes maximum resources, it requests and shares the CPU of another VM to reduce the load on its DC. The authors also provide a comparative analysis of their proposed work against other traditional schedulers by studying the behavior of NoSQL stores (MongoDB, Apache Cassandra and Elastic Search). They achieve load minimization, increased CPU PT and a decreased load on VMs caused by overloaded PMs. On the other hand, they could introduce fog in their work: with fog, requests of minimum size can be processed at the fog and the load on the VMs of the cloud automatically decreases.
III. Problem Statement
The challenges and issues of delay, task scheduling and distribution, load balancing, etc., arise because of the large-scale sharing of web resources over the internet. These challenges and their proposed solutions are described below.
The CSSA, inspired by the social spider algorithm, is proposed to tackle the problem of task scheduling in [6]. In [16], to solve the delay-optimal VM scheduling problem in the cloud, Shortest Job First (SJF) is combined with Min-Min Best Fit (MMBF) into a new scheduling scheme, SJF-MMBF, that addresses the delay problem of the cloud. SJF is also combined with Reinforcement Learning (RL) to solve the starvation issue of requests. However, in [6] and [16] the RT and PT of the proposed algorithms are not calculated. Energy Aware Load Balance Scheduling (ELBS) is proposed in [17] on a fog based platform. The purpose of this scheduler is to solve the complex energy consumption problem of clusters and to balance the load; PSO is used to make the solution more optimal, and the performance of the scheduler improves as a result. However, the processing time of the scheduler and the swarm optimization technique is not calculated. A tabu search method for load balancing in a fog environment is proposed in [18], which reduces the computational cost of processing tasks in a fog based environment. However, the RT and the implementation cost of VMs are not calculated. To solve a bi-objective minimization problem, the metaheuristic techniques PSO, Binary PSO (BPSO) and the BAT algorithm are proposed in [19]: the allocation of requests to VMs in a fog based environment is cast as a bi-objective minimization problem, BAT outperforms the other two algorithms, tasks are efficiently assigned to VMs, and energy is also minimized in their scenario. However, load balancing is not considered. The concept of Fog of Things (FoT) is introduced in [20] to overcome the challenges of responsiveness, scalability, heterogeneity and the handling of large and complex data, etc., in the IoT. Robustness and location awareness of the things are achieved; still, the scalability issue is not resolved. On the other hand, the cloud could be considered in this work to solve the scalability issue.
IV. Proposed System Model
The goal of this work is to minimize the latency and delay in a cloud and fog based emerging platform and to utilize resources in an efficient manner. To justify the proposed model, comfort is provided to consumers at every step of this smart environment. The model consists of six Smart Communities (SCs). Each community has a number of clusters and each cluster has its own fog. Every community also has its own utility that provides services against consumer requests: utility1 serves SC1, utility2 serves SC2 and utility3 serves SC3. The functionality of the utilities in this architecture is defined further in the other layers.
Fig. 1: Smart Environment on the Basis of the Cloud and Fog Integrated Environment (clusters C1-C12 with utilities 1-3, two-way communication and power flow between layers, and SHs inside SBs)
A. Cloud Computing Layer
The cloud computing layer of this research work is described here. The cloud is implemented on top of the SCs and provides the same utilities and facilities as any other cloud. To fulfill consumer demands on an hourly basis, a number of DCs are allocated inside the cloud. The allocated DCs process the incoming data from the consumer side: they check the status of incoming requests and then process them according to that status. If a consumer needs electricity, the cloud processes the request and forwards it to the utility of that SC. There is two-way communication between the cloud and the utility. The required electricity is sent to the consumers and the remainder is kept in the backup storage of the cloud. The records of SBs, SHs and SCs are also updated on the cloud; the SCs' cloud maintains the record of every consumer on a daily basis so that the utility can easily obtain information about regular users. When the request of a consumer is forwarded to the utility at the cloud layer, the utility provides knowledge based on the current electricity rate, and the cloud forwards the response to the consumers through a fog. The utilities of the other five communities and the cloud perform the same functionality.
B. Fog Layer
The fog layer enhances the performance of the cloud. As previous research describes [21], the fog is located at the edge of the network and reduces the latency and delay of the cloud because of its two-fold quality: it sits close to the consumers, and consumer requests are processed faster at the edge of the network. In this study, the proposed fog layer performs the same functionality as in previous research. The fog computing layer is integrated with the cloud to improve the efficiency and credibility of the cloud. Several PMs and VMs located at the fog edge execute and process the consumers' requests. The fog also provides the two-way communication between the cloud and consumer layers. Each cluster has its own fog that provides location services to that cluster, and every community has two fogs, which makes processing fast and provides the maximum response to consumer requests. When a consumer request comes to the fog, the described hardware resources provide services to that request. After the execution of the request, if the consumer's demand can be fulfilled at the fog level, the request is forwarded to an MG; if the demand of the consumer is high or needs more electric power, then the fog forwards the request to the cloud. The fogs of all six communities provide the same services.
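The routing decision described above can be summarized in a minimal sketch. The function, its name and its parameters (`demand_kw`, `mg_capacity_kw`, `fog_vm_free`) are illustrative assumptions, not part of the paper's implementation:

```python
def route_request(demand_kw, mg_capacity_kw, fog_vm_free):
    """Decide where a consumer request is served in the layered model.

    Returns 'fog/MG' when a fog VM is free and the local micro grid can
    cover the demand; otherwise the fog escalates the request to the cloud.
    """
    if not fog_vm_free:
        return "cloud"      # no VM free at the fog: escalate to cloud
    if demand_kw <= mg_capacity_kw:
        return "fog/MG"     # MG can supply the requested power locally
    return "cloud"          # high demand: fog forwards the request to cloud
```

For example, a 5 kW request against a 10 kW MG with a free fog VM stays at the fog layer, while a 50 kW request escalates to the cloud.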
Fig. 2: Average PT (ms) under the service broker policies
1) Micro Grid: The MG consists of renewable energy resources, and each community has 2-5 MGs of its own. The MGs are located between the SBs and the fog. In this study, the MG and fog can communicate with each other: when requests from the fog arrive at the MG, it responds to the fog with how much power it can supply, i.e., for how many homes it can provide services. Power is supplied from the MGs to the SBs and SHs of this study.
C. Consumer Layer
By combining SHs, SBs and Smart Appliances (SAs), the SC of this environment is generated. The bottom-most layer of this research is the consumer layer, which represents the smart environment of this work. In the implemented work, the SBs are located inside the consumer layer and each SB contains a number of SHs. These homes are smart because smart meters are attached to them: they can share their surplus energy with other SHs and with the utility by communicating with the fog and cloud. The SAs located inside the SHs also make them smart. The appliances are called SAs because of their automatic control system: they can easily be turned on or off from the central controller during on- and off-peak hours. The controllers are located on the fog and cloud and can operate these SAs directly. This layer also has a two-fold functionality: first, requests for services from this layer go to the upper layers of the model; second, when a request is processed, this layer receives feedback from those layers and uses the requested power. The functionality of each cluster, each SH and each consumer is the same in every community.
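The central controller's on/off-peak behavior can be sketched as follows. The function, the deferrable flag and the peak-hour window are illustrative assumptions; the paper does not specify which appliances are deferrable or which hours count as peak:

```python
def schedule_appliances(appliances, hour, peak_hours=frozenset(range(17, 22))):
    """Return the on/off commands a fog/cloud controller would issue.

    `appliances` maps an appliance name to a deferrable flag; deferrable
    SAs are switched off during peak hours, everything else stays on.
    """
    peak = hour in peak_hours
    return {name: ("off" if (peak and deferrable) else "on")
            for name, deferrable in appliances.items()}
```

For example, at 18:00 (inside the assumed peak window) a deferrable air conditioner is switched off while a non-deferrable refrigerator keeps running.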
V. Simulation Results and Discussion
To evaluate the performance of our proposed and existing heuristic techniques, the Cloud Analyst tool is used. Performance is evaluated via several parameters, such as the RT, PT and cost of the existing techniques CS and BAT and the proposed CLW and FP. Moreover, this paper implements the following scenario: it consists of six regions of the world, each region contains two clusters, and one fog is attached to these clusters. To process the requests of the consumers of the clusters, 2-5 VMs are allocated to each cluster, and 5-10 MGs are used to fulfill the demands of the SHs in these clusters. There are 12 clusters in total in this scenario, and each cluster of SHs generates 10000 requests. The results of the proposed scenario and the load balancing techniques are compared on the basis of the CDC, ORT and RDL service broker policies of Cloud Analyst.

Fig. 3: Average RT (ms) under the service broker policies
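To make the three broker policies concrete, the sketch below illustrates a CDC-style selection: the broker routes a request to the data center in the region with the lowest network latency from the user's region. The region names and latency values are made-up examples, not measurements from the paper:

```python
def closest_data_center(user_region, dc_regions, latency_ms):
    """CDC-style service broker: among the candidate DC regions, pick the
    one with the lowest latency from the user's region.

    `latency_ms` maps (from_region, to_region) pairs to latency in ms.
    """
    return min(dc_regions, key=lambda r: latency_ms[(user_region, r)])
```

ORT would instead track the recent response times of each DC and pick the fastest responder, and RDL would additionally scale the VM count of a DC up or down as its load changes.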
Fig. 2 shows the comparative analysis of the proposed and existing techniques on the basis of PT. The results show that the PT of FP and CLW is lower than that of the existing techniques, meaning that the proposed schemes of this paper perform better and process a large number of consumers' requests with less delay and latency. A large number of consumers' requests is processed and executed under the three mentioned policies. BAT takes the maximum time to process the consumers' requests under the ORT and RDL policies.
The RT of these load balancing techniques is compared in Fig. 3. The results reveal that FP and CLW have lower RT under ORT and RDL: the numerous consumer requests are processed and responded to with less latency. Under these two policies, CS and BAT do not perform as well.
The RT share of CLW is 32.40% on CDC, 35.69% on ORT and 31.89% on RDL, which means that CLW obtains its best results under the RDL policy.
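The claim above follows directly from the figures in the text: RDL carries the smallest RT share of the three policies, as this one-line check illustrates (percentages copied from the paragraph above):

```python
# RT shares of CLW per service broker policy, as reported in the text
rt_share = {"CDC": 32.40, "ORT": 35.69, "RDL": 31.89}

# The policy with the smallest RT share is the one where CLW does best
best_policy = min(rt_share, key=rt_share.get)  # -> "RDL"
```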
C. Cost
Fig. 4 shows the cost of the load balancing algorithms; the VM cost, MG cost, DT cost and TC are compared. The comparative analysis reveals that CS, CLW and FP have a lower VM cost than the BAT algorithm. The VM cost of these load balancing algorithms is low due to the fixed number (2-5) of VMs. The MG cost of these algorithms is also at a minimum because the algorithms balance the load on fog and cloud, so no additional MGs are needed to fulfill the energy requirements. The DT cost of CS, FP and BAT is higher than that of CLW.

Fig. 4: Cost Comparison (VM Cost, MG Cost, DT Cost and Total Cost in $)

Fig. 4 also shows that the proposed techniques of this paper, CLW and FP, have a lower total cost than CS and BAT. Overall, BAT consumes the maximum cost compared to the other three techniques.
VI. Conclusion
This paper proposes the nature-inspired load balancing algorithms CLW and FP in a cloud and fog based integrated environment. A scenario is modeled to assess the effects of the proposed model and load balancing techniques, and these effects are evaluated using the Cloud Analyst tool. The primary intent of the proposed techniques in the cloud and fog environment is to minimize RT, PT and cost. Our experimental results show that FP and CLW have the minimum RT and PT compared to the other two techniques. These results are obtained on the basis of the CDC, ORT and RDL service broker policies; Section V shows the detailed results of our proposed techniques. The cost is also decreased by using our proposed algorithms FP and CLW. Future work involves looking into the VM migration and bin packing problems; individual task processing time will also be considered.
[1] Li, Shichao, Ning Zhang, Siyu Lin, Linghe Kong, Ajay Katangur,
Muhammad Khurram Khan, Minming Ni and Gang Zhu. “Joint
admission control and resource allocation in edge computing for
Internet of things.” IEEE Network 32, no. 1 (2018): 72-79.
[2] Yassine, Abdulsalam, Shailendra Singh, Shamim Hossain and
Ghulam Muhammad. “IoT big data analytics for smart homes
with fog and cloud computing.” Future Generation Computer
Systems 91 (2019): 563-573.
[3] Ahmed, Ejaz, Anjum Naveed, Abdullah Gani, Siti Hafizah
Ab Hamid, Muhammad Imran and Mohsen Guizani. “Process
state synchronization-based application execution management
for mobile edge/cloud computing.” Future Generation Computer
Systems 91 (2019): 579-589.
[4] Verba, Nandor, Kuo-Ming Chao, Jacek Lewandowski, Nazaraf
Shah, Anne James and Feng Tian. “Modeling industry 4.0 based
fog computing environments for application analysis and deploy-
ment.” Future Generation Computer Systems 91 (2019): 48-60.
[5] Agostino, Daniele, Lucia Morganti, Elena Corni, Daniele Cesini
and Ivan Merelli. “Combining Edge and Cloud computing for low-
power, cost-effective metagenomics analysis.” Future Generation
Computer Systems 90 (2019): 79-85.
[6] Xavier, VM Arul and Annadurai. “Chaotic social spider algorithm for load balance aware task scheduling in cloud computing.” Cluster Computing (2018): 1-11.
[7] Wei, Xunli Fan, Houbing Song, Xiumei Fan and Jiachen Yang.
“Imperfect information dynamic stackelberg game based resource
allocation using hidden Markov for cloud computing.”IEEE
Transactions on Services Computing 11, no. 1 (2018): 78-89.
[8] Yin, Luxiu, Juan Luo and Haibo Luo. “Task scheduling and resource allocation in fog computing based on containers for smart manufacture.” IEEE Transactions on Industrial Informatics (2018): 4712-4721.
[9] Latiff, Muhammad Shafie Abd, Syed Hamid Hussain Madni and
Mohammed Abdullahi. “Fault tolerance aware scheduling tech-
nique for cloud computing environment using dynamic clustering
algorithm.” Neural Computing and Applications 29, no. 1 (2018):
[10] Carrasco, Jose, Francisco Durán and Ernesto Pimentel. “Trans-cloud: CAMP/TOSCA-based bidimensional cross-cloud.” Computer Standards & Interfaces 58 (2018): 167-179.
[11] Yamato, Yoji. “Server selection, configuration and reconfigura-
tion technology for IaaS cloud with multiple server types.”Journal
of Network and Systems Management 26, no. 2 (2018): 339-360.
[12] He, Jianhua, Jian Wei, Kai Chen, Zuoyin Tang, Yi Zhou and
Yan Zhang. “Multi-tier fog computing with large-scale IoT data
analytics for smart cities.” IEEE Internet Things J. (2017):1-10.
[13] Alkhanak, Ehab Nabiel and Sai Peck Lee. “A hyper-heuristic
cost optimisation approach for Scientific Workflow Scheduling
in cloud computing.” Future Generation Computer Systems
[14] Islam, Mohammad, Shaolei Ren, Gang Quan, Muhammad
Shakir and Athanasios Vasilakos. “Water-constrained geographic
load balancing in data centers.” IEEE Transactions on Cloud
Computing 5, no. 2 (2017): 208-220.
[15] Sotiriadis, Stelios, Nik Bessis and Rajkumar Buyya. “Self managed virtual machine scheduling in Cloud systems.” Information Sciences 433 (2018): 381-400.
[16] Guo, Mian, Quansheng Guan and Wende Ke. “Optimal Schedul-
ing of VMs in Queueing Cloud Computing Systems With a
Heterogeneous Workload.” IEEE Access 6 (2018): 15178-15191.
[17] Wan, Jiafu, Baotong Chen, Shiyong Wang, Min Xia, Di Li
and Chengliang Liu. “Fog Computing for Energy-aware Load
Balancing and Scheduling in Smart Factory.” IEEE Transactions
on Industrial Informatics (2018): 4548-4556.
[18] Téllez, Nadim, Miguel Jimeno, Augusto Salazar and Nino-Ruiz. “A Tabu Search Method for Load Balancing in Fog Computing.” Int. J. Artif. Intell 16, no. 2 (2018): 1-31.
[19] Mishra, Sambit Kumar, Deepak Putha, Joel JPC Rodrigues,
Bibhudatta Sahoo and Eryk Dutkiewicz. “Sustainable Service
Allocation using Metaheuristic Technique in Fog Server for Indus-
trial Applications.” IEEE Transactions on Industrial Informatics
(2018): 4497-4506.
[20] Yu, Ruozhou, Guoliang Xue, Vishnu Teja Kilari and Xiang Zhang. “The Fog of Things Paradigm: Road toward On-Demand Internet of Things.” IEEE Communications Magazine 56, no. 9 (2018): 48-54.
[21] Pan, Jianli and James McElhannon. “Future edge cloud and
edge computing for internet of things applications.” IEEE Inter-
net of Things Journal 5, no. 1 (2018): 439-449.
... Rahbari said that his algorithm shows improvements in the energy consumed with 18%, the execution cost with a percent of 15%, and the sensor lifetime by 5% compared with the first-come, first-served (FCFS) and knapsack algorithm. To overcome the delay and latency issues and improve the efficacy of the FC with the significant number of the requests sent to the fog and cloud, the cuckoo search algorithm (CSA) with the levy walk distribution and flower pollination (FP) ware adopted in [31] to solve those issues. This algorithm was compared with the exiting CS and the Bat algorithm [32]. ...
... 11 30: t=t+N. 31 During displaying in this section, we will show the efficacy of the proposed algorithm when tackling the MTSFC under four metrics: Make-span, Flow Time, dioxide emission rate, and energy consumed, in addition to comparing with a number of well-known robust multiobjective optimization algorithms described as follows: ...
Full-text available
Despite the remarkable work conducted to improve fog computing applications’ efficiency, the task scheduling problem in such an environment is still a big challenge. Optimizing the task scheduling in these applications, i.e. critical healthcare applications, smart cities, and transportation is urgent to save energy, improve the quality of service, reduce the carbon emission rate, and improve the flow time. As proposed in much recent work, dealing with this problem as a single objective problem did not get the desired results. As a result, this paper presents a new multi-objective approach based on integrating the marine predator’s algorithm with the polynomial mutation mechanism (MHMPA) for task scheduling in fog computing environments. In the proposed algorithm, a trade-off between the makespan and the carbon emission ratio based on the Pareto optimality is produced. An external archive is utilized to store the non-dominated solutions generated from the optimization process. Also, another improved version based on the marine predator’s algorithm (MIMPA) by using the Cauchy distribution instead of the Gaussian distribution with the levy Flight to increase the algorithm’s convergence with avoiding stuck into local minima as possible is investigated in this manuscript. The experimental outcomes proved the superiority of the MIMPA over the standard one under various performance metrics. However, the MIMPA couldnŠt overcome the MHMPA even after integrating the polynomial mutation strategy with the improved version. Furthermore, several well-known robust multi-objective optimization algorithms are used to test the efficacy of the proposed method. The experiment outcomes show that MHMPA could achieve better outcomes for the various employed performance metrics: Flow time, carbon emission rate, energy, and makespan with an improvement percentage of 414, 27257.46, 64151, and 2 for those metrics, respectively, compared to the second-best compared algorithm.
... This is shown in Table 13, where the swarm-based solutions are organized in terms of the previously listed algorithms. All those works deal with different optimization scopes in the field of fog infrastructures, such as resource allocation [156,157,158,159,160], load balancing [161,162,163], infrastructure deployment [164], scheduling [162,163,165,166,167,168,169,170,171,172,173,174,175,178,179,180,181,182,183,184,185], or data placement [186]. ...
Fog computing is a new computational paradigm that emerged from the need to reduce network usage and latency in the Internet of Things (IoT). Fog can be considered as a continuum between the cloud layer and IoT users that allows the execution of applications or storage/processing of data in network infrastructure devices. The heterogeneity and wider distribution of fog devices are the key differences between cloud and fog infrastructure. Genetic-based optimization is commonly used in distributed systems; however, the differentiating features of fog computing require new designs, studies, and experimentation. The growing research in the field of genetic-based fog resource optimization and the lack of previous analysis in this field have encouraged us to present a comprehensive, exhaustive, and systematic review of the most recent research works. Resource optimization techniques in fog were examined and analyzed, with special emphasis on genetic-based solutions and their characteristics and design alternatives. We defined a taxonomy of the optimization scope in fog infrastructures and used this optimization taxonomy to classify the 70 papers in this survey. Subsequently, the papers were assessed in terms of genetic optimization design. Finally, the benefits and limitations of each surveyed work are outlined in this paper. Based on these previous analyses of the relevant literature, future research directions were identified. We concluded that more research efforts are needed to address the current challenges in data management, workflow scheduling, and service placement. Additionally, there is still room for improved designs and deployments of parallel and hybrid genetic algorithms that leverage, and adapt to, the heterogeneity and distributed features of fog domains.
... The main purpose of an evolutionary algorithm is to optimize the population and select the best service composition with the evaluation model. Many evolutionary algorithms are used to solve such problems, such as the genetic algorithm (GA) [14,15], differential evolution (DE) [16,17], and cuckoo search (CS) [18]. These evolutionary algorithms start from an initial state and input, proceed through a finite number of well-defined successive states, and eventually produce an output and terminate in a final state [19]. ...
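As an illustration of one such evolutionary algorithm, a minimal cuckoo search with Lévy-flight steps (via Mantegna's method) might look as follows. The parameter values, bounds, and the sphere test function are illustrative assumptions, not taken from any cited work:

```python
import math, random

def levy_step(beta=1.5):
    """Draw a heavy-tailed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim=2, n_nests=15, iters=200, pa=0.25):
    """Minimize `fitness` over [-5, 5]^dim with Levy-flight moves."""
    random.seed(1)
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Levy flight around the current nest, biased by the global best
            trial = [x + 0.01 * levy_step() * (x - b) for x, b in zip(nest, best)]
            trial = [min(5.0, max(-5.0, x)) for x in trial]
            if fitness(trial) < fitness(nest):
                nests[i] = trial
        # abandon a fraction pa of the worst nests (egg discovery)
        nests.sort(key=fitness)
        for i in range(int(pa * n_nests)):
            nests[-(i + 1)] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=fitness)
    return best

sphere = lambda x: sum(v * v for v in x)
best = cuckoo_search(sphere)  # sphere(best) approaches 0
```

The abandonment step is what distinguishes CS from a plain local search: it keeps injecting fresh random nests so the population cannot collapse prematurely.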
Service composition and optimal selection (SCOS) is a core issue in cloud manufacturing (CMfg) when integrating distributed manufacturing services for complex manufacturing tasks. Generally, a set of recommended task parameter sequences (Tps) will be given when publishing manufacturing tasks. The similarity between the service composition parameter sequence (SCps) and Tps also reflects the rationality of the service composition. However, various evaluation models based on QoS have been proposed, ignoring the rationality between the Tps and SCps. Considering the similarity of the Tps and SCps in an evaluation model, we propose a manufacturing SCOS framework called MSCOS. The framework includes two parts: an evaluation model and an algorithm for both optimization and selection. In the evaluation model, based on the numerical proximity and geometric similarity between the Tps and SCps, improving the technique for order preference by similarity to an ideal solution (TOPSIS) with the grey correlation degree (GC), we propose the GC&TOPSIS (GTOPSIS). In the optimization and selection algorithm, an improved flower pollination algorithm (IFPA) is proposed to achieve optimization and selection based on polyline characteristics between the fitness values in the population. Experiments show that the MSCOS evaluation effect and optimal selection offer better performance than commonly used algorithms.
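The TOPSIS ranking that GTOPSIS builds upon can be sketched generically. The candidate scores and equal weights below are hypothetical, and the grey-correlation extension is omitted:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    weighted = [[w * v / n for v, n, w in zip(row, norms, weights)] for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, worst)   # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# three candidate service compositions scored on (QoS, cost): QoS up, cost down
scores = topsis([[0.9, 30], [0.7, 10], [0.8, 20]], [0.5, 0.5], [True, False])
best = max(range(3), key=scores.__getitem__)
```

With these made-up numbers the cheap, lower-QoS composition wins; changing the weights shifts the trade-off.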
... Along this line, [101] addresses the problem of load balancing in cloud environments by proposing a hybrid Cuckoo Search and Firefly Algorithm, showing promising performance. An additional approach for load balancing is described in [102], focused on both Fog and Cloud Computing environments. The authors compare the performance of several bio-inspired computation methods, including Cuckoo Search, Flower Pollination and the Bat Algorithm. ...
This overview gravitates on research achievements that have recently emerged from the confluence between Big Data technologies and bio-inspired computation. A manifold of reasons can be identified for the profitable synergy between these two paradigms, all rooted on the adaptability, intelligence and robustness that biologically inspired principles can provide to technologies aimed to manage, retrieve, fuse and process Big Data efficiently. We delve into this research field by first analyzing in depth the existing literature, with a focus on advances reported in the last few years. This prior literature analysis is complemented by an identification of the new trends and open challenges in Big Data that remain unsolved to date, and that can be effectively addressed by bio-inspired algorithms. As a second contribution, this work elaborates on how bio-inspired algorithms need to be adapted for their use in a Big Data context, in which data fusion becomes crucial as a previous step to allow processing and mining several and potentially heterogeneous data sources. This analysis allows exploring and comparing the scope and efficiency of existing approaches across different problems and domains, with the purpose of identifying new potential applications and research niches. Finally, this survey highlights open issues that remain unsolved to date in this research avenue, alongside a prescription of recommendations for future research.
... Proposed a proximity-aware system based upon Kubernetes. Javaid et al. [50] proposed a nature-inspired cuckoo search load balancing algorithm combined with Lévy walk distribution and flower pollination, optimizing the response time and processing time of both fog and cloud. ...
The Internet of Things has been growing, and with it the number of user requests at the fog computing layer. Fog works in a real-time environment, and requests from connected devices need to be processed immediately. With the increase in user requests at the fog layer, virtual machines (VMs) there become overloaded. A load balancing mechanism can distribute the load among all the VMs in equal proportion; it has become a necessity in the fog layer to equally and equitably distribute the workload among the existing VMs in a segment. To date, many load balancing techniques have been proposed for fog computing. An empirical study of existing load balancing methods has been conducted, and a taxonomy is presented in hierarchical form. Besides, the article contains a year-wise comprehensive review and summary of research articles published in the area of load balancing from 2013 to 2020. Furthermore, the article also presents our proposed fog computing architecture to resolve the load balancing problem. It also covers current issues and challenges that can be addressed in future research. The paper concludes by providing future directions.
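A minimal illustration of distributing requests evenly across fog VMs, assuming a simple greedy least-loaded policy (a generic baseline, not any specific surveyed technique; request costs and VM count are invented):

```python
import heapq

def balance(requests, n_vms):
    """Greedy least-loaded dispatch: each request goes to the VM with the
    smallest accumulated load, tracked in a min-heap keyed on load."""
    heap = [(0.0, vm) for vm in range(n_vms)]  # (load, vm id)
    placement = {}
    for req_id, cost in requests:
        load, vm = heapq.heappop(heap)         # lightest VM so far
        placement[req_id] = vm
        heapq.heappush(heap, (load + cost, vm))
    return placement, heap

requests = [("r1", 4.0), ("r2", 2.0), ("r3", 3.0), ("r4", 1.0)]
placement, loads = balance(requests, 2)
# both VMs end up with a load of 5.0 for this toy workload
```

Real fog balancers add the dimensions the survey discusses (heterogeneous VM capacity, locality, deadlines), but the heap-based greedy core is a common starting point.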
The emergence of IoT applications and distributed computing has propelled the development of computing services that can handle dynamic requests at the network edge, and the fog computing paradigm has evolved tremendously over the years to achieve this objective. Resource management in the fog layer remains a hot spot that must be addressed through efficient load balancing techniques. Heuristic, meta-heuristic, probabilistic, graph-theory-based and hybrid load balancing techniques have been developed over the past few years to manage the workload incurred at fog servers. This paper provides a brief description of such methods and their comparative analysis in tabular form. The major areas of focus are the overall technique, simulation tool, evaluation parameters, and the advantages and disadvantages of the proposed load balancing approaches. Researchers can carry this work forward after analysing the research gaps identified in the literature survey.
With the growth of network bandwidth, Internet of Things (IoT) servers suffer under heavy business traffic, resulting in downtime. Providing business support without affecting the user experience is the primary problem that IoT companies need to consider and solve in the face of traffic surges. This paper proposes a load balancing scheduling algorithm based on a Particle Swarm Optimization Genetic Algorithm (PSO-GA) for IoT clusters. The algorithm uses CPU occupancy rate, memory occupancy rate, network bandwidth occupancy rate, and disk Input/Output (IO) occupancy rate to comprehensively measure server node load and establish a resource balance model. A fitness function quantifies these influences as the basis for weight adjustment. The Particle Swarm Optimization (PSO) algorithm then uses a disturbance factor and a contraction operator; the optimized algorithm calculates the optimal solution of the fitness function and obtains the optimal weights. Finally, the PSO-GA algorithm is simulated, tested, and compared with three other load balancing algorithms. The test results for response delay, throughput, request error rate, and resource utilization show that the algorithm's performance improves by more than 5% over the traditional method, with clearly stronger optimization ability. The research in this paper provides a new way to alleviate network load; reduce server overload, congestion, and downtime; and realize balanced multi-task scheduling for the IoT.
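The composite load measure described above might be sketched as a weighted sum of the four occupancy rates. The weights and node readings below are hypothetical placeholders, not the paper's PSO-GA-calibrated values:

```python
def node_load(cpu, mem, net, disk, weights=(0.4, 0.3, 0.2, 0.1)):
    """Composite load in [0, 1] from four occupancy rates (each in [0, 1]).
    The weights would normally be tuned, e.g. by PSO-GA as in the paper."""
    return sum(w * u for w, u in zip(weights, (cpu, mem, net, disk)))

nodes = {
    "node-a": node_load(0.80, 0.60, 0.30, 0.20),
    "node-b": node_load(0.40, 0.50, 0.60, 0.10),
}
target = min(nodes, key=nodes.get)  # dispatch the next request to the lightest node
```

The interesting part of the cited work is precisely that the weights are not fixed: the optimizer adjusts them so the composite score reflects the actual bottleneck resource.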
Modern information technology such as the internet of things (IoT) provides real-time insight into how a system is performing and has been used in diverse areas, spanning machines, supply chains, and logistics to smart cities. IoT captures changes in the surrounding environment through collections of distributed sensors and then sends the data to a fog computing (FC) layer for analysis and subsequent response. The speed of decision-making in such a process relies on minimal delay, which requires efficient distribution of tasks among the fog nodes. Since the utility of FC relies on the efficiency of this task scheduling, improvements are always being sought in the speed of response. Here, we suggest an improved elitism genetic algorithm (IEGA) for the task scheduling problem in FC, to enhance the quality of service to users of IoT devices. The improvements offered by IEGA stem from two main phases: first, the mutation rate and crossover rate are manipulated to help the algorithm explore most of the combinations that may form the near-optimal permutation; second, a number of solutions are mutated with a certain probability to avoid becoming trapped in local minima and to find a better solution. IEGA is compared with five recent robust optimization algorithms in addition to EGA in terms of makespan, flow time, fitness function, carbon dioxide emission rate, and energy consumption. IEGA is shown to be superior to all other algorithms in all respects.
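A rough sketch of an elitist generation step with a diversity-triggered mutation boost, in the spirit of (but not identical to) IEGA. The permutation encoding, rates, selection rule, and toy fitness are assumptions for illustration:

```python
import random

def evolve(pop, fitness, elite_frac=0.1, base_mut=0.05):
    """One elitist generation over permutation-encoded schedules:
    elites survive unchanged; the rest receive swap mutations at a rate
    that is boosted when population diversity collapses (escape phase)."""
    pop = sorted(pop, key=fitness)
    n_elite = max(1, int(elite_frac * len(pop)))
    diversity = len({tuple(p) for p in pop}) / len(pop)
    mut = base_mut if diversity > 0.5 else 3 * base_mut  # boost when stuck
    nxt = [list(p) for p in pop[:n_elite]]               # elites pass through
    while len(nxt) < len(pop):
        child = list(random.choice(pop[:len(pop) // 2]))  # truncation selection
        if random.random() < mut:
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]       # swap mutation
        nxt.append(child)
    return nxt

# toy usage: minimize mismatches against the identity permutation
random.seed(0)
fit = lambda p: sum(1 for a, b in zip(p, range(6)) if a != b)
pop = [random.sample(range(6), 6) for _ in range(20)]
best0 = min(map(fit, pop))
for _ in range(100):
    pop = evolve(pop, fit)
```

Because elites are copied forward untouched, the best fitness in the population is monotonically non-increasing across generations.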
With the increasing use of the Internet of Things (IoT) in various fields and the need to process and store huge volumes of generated data, Fog computing was introduced to complement Cloud computing services. Fog computing offers basic services at the network edge to support IoT applications with low response time requirements. However, Fogs are distributed and heterogeneous, and their resources are limited; therefore, efficiently distributing IoT application tasks across Fog nodes to meet quality of service (QoS) and quality of experience (QoE) constraints is challenging. In this survey, we first give an overview of the basic concepts of Fog computing, and then review the application placement problem in Fog computing with a focus on Artificial Intelligence (AI) techniques. We target three main objectives, considering the characteristics of AI-based methods for the Fog application placement problem: (i) categorizing evolutionary algorithms, (ii) categorizing machine learning algorithms, and (iii) categorizing combinatorial algorithms into subcategories, including combinations of machine learning and heuristics, combinations of evolutionary algorithms and heuristics, and combinations of evolutionary algorithms and machine learning. We then review the security considerations of application placement. Finally, we provide a number of open questions and issues as future work.
Mobile cloud computing (MCC) and mobile edge computing (MEC) allow mobile devices to augment their capabilities by utilizing the resources and services offered by the Cloud and Edge Cloud, respectively. However, due to mobility, the network connection becomes unstable, which disrupts application execution. Such disruption increases execution time and in some cases prevents mobile devices from receiving execution results from the cloud. This work analyzes the impact of user mobility on the execution of cloud-based mobile applications. We propose Process State Synchronization (PSS)-based execution management to solve this problem. We analytically compute a sufficient condition on the synchronization interval that ensures a reduction in mobile application execution time under PSS in case of disconnection. Similarly, we compute an upper bound on the synchronization interval beyond which a larger interval does not yield significant benefits in execution time. The analytical results were confirmed by a sample implementation of PSS with the computed synchronization intervals. Moreover, we compare the performance of the proposed solution with state-of-the-art solutions. The results show that PSS-based execution outperforms the other contemporary solutions.
Fog computing has recently emerged as an infrastructure composed of three layers: node levels, cloud services, and companies (clients). In general, node levels deliver services to cloud computing layers, which in turn serve in-situ processes at companies. This kind of framework has gained popularity in the context of Internet of Things (IoT) networks. The main purpose of node layers is to deliver inexpensive and highly responsive services; as a consequence, cloud layers are reserved for expensive processes. Thus, optimal load balancing between cloud and fog nodes, as well as efficient use of memory resources on those layers, is a major concern. We propose a simple Tabu Search method for optimal load balancing between cloud and fog nodes that accounts for resource constraints. The main motivation for using Tabu Search is that online computation is a must in those layers: tasks should be processed as they are received. We consider a bi-objective cost function for this purpose; the first objective denotes the computational cost of processing tasks in fog nodes, while the second denotes that in cloud nodes. During optimization, convex combinations of the objective functions are employed to reduce the problem to mono-objective cases. Experimental tests are performed using synthetic scenarios of tasks to be executed. The results reveal that the proposed method minimizes memory usage as well as the computational costs of load balancing.
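The weighted-sum scalarization and tabu moves described above can be illustrated as follows. The cost vectors, tenure, starting point, and flip neighborhood are invented for the example, and the sketch omits the paper's resource constraints:

```python
def tabu_balance(costs_fog, costs_cloud, alpha=0.5, iters=50, tenure=2):
    """Tabu search over a binary placement vector (0 = fog, 1 = cloud),
    minimizing a convex combination of the two per-layer cost objectives."""
    n = len(costs_fog)
    def cost(x):
        return (alpha * sum(f for f, b in zip(costs_fog, x) if b == 0) +
                (1 - alpha) * sum(c for c, b in zip(costs_cloud, x) if b == 1))
    x = [1] * n                      # start with everything on the cloud
    best, best_cost = x[:], cost(x)
    tabu = {}                        # index -> iteration until which it is tabu
    for it in range(iters):
        moves = [i for i in range(n) if tabu.get(i, -1) < it]
        if not moves:
            continue
        i = min(moves, key=lambda i: cost(x[:i] + [1 - x[i]] + x[i + 1:]))
        x[i] = 1 - x[i]              # accept the best non-tabu flip, even if worse
        tabu[i] = it + tenure
        if cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    return best, best_cost

plan, c = tabu_balance([1, 5, 2, 6], [4, 2, 3, 1])
# for this toy instance the search finds the per-task-cheapest placement
```

The tabu list is what lets the search accept worsening flips without cycling straight back, which is the point of the method in an online setting.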
The extension of the Cloud to the Edge of the network through Fog Computing can have a significant impact on the reliability and latencies of deployed applications. Recent papers have suggested a shift from VM- and container-based deployments to an environment shared among applications to better utilize resources. Unfortunately, existing deployment and optimization methods pay little attention to developing and identifying complete models of such systems, which may cause large inaccuracies between simulated and physical run-time parameters. Existing models do not account for application interdependence or the locality of application resources, which causes extra communication and processing delays. This paper addresses these issues by carrying out experiments in both cloud and edge systems at various scales and with various applications. It analyses the outcomes to derive a new reference model with data-driven parameter formulations and representations that help explain the effect of migration on these systems. As a result, we obtain a more complete characterization of the fog environment. This, together with tailored optimization methods that can handle the heterogeneity and scale of the fog, can improve the overall system run-time parameters and constraint satisfaction. An Industry 4.0 based case study with different scenarios was used to analyze and validate the effectiveness of the proposed model. Tests were deployed on physical and virtual environments at different scales, and the advantages of the model-based optimization methods were validated in real physical environments. Based on these tests, we have found that our model is 90% accurate on load and delay predictions for application deployments in both cloud and edge.
In this article, we introduce the concept of FoT, a paradigm for on-demand IoT. On-demand IoT is an IoT platform where heterogeneous connected things can be accessed and managed via a uniform platform based on real-time demands. Realizing such a platform faces challenges including heterogeneity, scalability, responsiveness, and robustness, due to the large-scale and complex nature of an IoT environment. The FoT paradigm features the incorporation of fog computing power, which empowers not only the IoT applications, but more importantly the scalable and efficient management of the system itself. FoT utilizes a flat-structured virtualization plane and a hierarchical control plane, both of which extend to the network edge and can be reconfigured in real time, to achieve various design goals. In addition to describing the detailed design of the FoT paradigm, we also highlight challenges and opportunities involved in the deployment, management, and operation of such an on-demand IoT platform. We hope this article can shed some light on how to build and maintain a practical and extensible control back-end to enable large-scale IoT that empowers our connected world.
Internet of Things (IoT) analytics is an essential means to derive knowledge and support applications for smart homes. Connected appliances and devices inside the smart home produce a significant amount of data about consumers and how they go about their daily activities. IoT analytics can aid in personalizing applications that benefit both homeowners and the ever-growing industries that need to tap into consumer profiles. This article presents a new platform that enables innovative analytics on IoT data captured from smart homes. We propose the use of fog nodes and a cloud system to enable data-driven services and address the challenges of complexity and resource demands for online and offline data processing, storage, and classification analysis. We discuss in this paper the requirements and the design components of the system. To validate the platform and present meaningful results, we present a case study using a dataset acquired from a real smart home in Vancouver, Canada. The results of the experiments clearly show the benefit and practicality of the proposed platform.
Metagenomic studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial to aquatic ecosystems. This is also because genome sequencing is likely to become a routine and ubiquitous analysis in the near future thanks to a new generation of portable devices, such as the Oxford Nanopore MinION. The main issue, however, is the huge amount of data produced by these devices, whose management is challenging considering the resources required for efficient data transfer and processing. In this paper we discuss these aspects, and in particular how Edge and Cloud computing can be coupled to manage the full analysis pipeline. In general, proper scheduling of the computational services between the data center and smart devices equipped with low-power processors represents an effective solution.
Fog computing has been proposed as an extension of cloud computing to provide computation, storage and network services at the network edge. For smart manufacturing, fog computing can provide a wealth of computational and storage services, such as fault detection and state analysis of devices in assembly lines, when a middle layer between the industrial cloud and terminal devices is considered. However, limited resources and low-delay service requirements hinder the application of new virtualization technologies in the task scheduling and resource management of fog computing. Thus, we build a new task scheduling model that considers the role of containers. We then construct a task scheduling algorithm to ensure that tasks are completed on time and that the number of concurrent tasks on a fog node is optimized. Finally, we propose a reallocation mechanism to reduce task delays in accordance with the characteristics of the containers. Results showed that our proposed task scheduling algorithm and reallocation scheme can effectively reduce task delays and increase the number of concurrent tasks in fog nodes.
Effective management of Scientific Workflow Scheduling (SWFS) processes in a cloud environment remains a challenging task when dealing with large and complex Scientific Workflow Applications (SWFAs). Cost optimisation of SWFS benefits cloud service consumers and providers by reducing the temporal and monetary costs of processing SWFAs. However, the cost optimisation performance of SWFS approaches is affected by the inherent nature of the SWFA as well as by scenarios that depend on the number of available virtual machines and the varied sizes of SWFA datasets. The cost optimisation performance of existing SWFS approaches is still not satisfactory across all such scenarios. Thus, there is a need for a dynamic hyper-heuristic approach that can effectively optimise the cost of SWFS in every scenario by employing different meta-heuristic algorithms and exploiting their respective strengths. The main objective of this paper is therefore to propose a Completion Time Driven Hyper-Heuristic (CTDHH) approach for cost optimisation of SWFS in a cloud environment. The CTDHH approach employs four well-known population-based meta-heuristic algorithms, which act as Low Level Heuristic (LLH) algorithms. In addition, the CTDHH approach enhances the purely random selection of existing hyper-heuristic approaches by using the best computed workflow completion time as a high-level selector that dynamically picks a suitable algorithm from the pool of LLH algorithms after each run. A real-world cloud-based experimental environment is used to evaluate the performance of the proposed CTDHH approach against five baseline approaches, i.e. four population-based approaches and an existing hyper-heuristic approach named the Hyper-Heuristic Scheduling Algorithm (HHSA). Several different scenarios are also considered to evaluate data-intensive and computation-intensive performance. Based on the results of the experimental comparison, the proposed approach yields the most effective performance for all considered experimental scenarios.
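The high-level selection idea of CTDHH, picking the low-level heuristic with the best completion time seen so far, can be sketched generically. The LLH stubs below are placeholders, not the four meta-heuristics used in the paper:

```python
def hyper_heuristic(llhs, rounds=20):
    """Completion-time-driven selector: each round runs the LLH with the
    best completion time observed so far; unseen LLHs are tried first so
    every heuristic gets at least one chance.
    llhs: dict name -> callable returning (schedule, completion_time)."""
    history = {name: float("inf") for name in llhs}
    best_schedule, best_time = None, float("inf")
    for _ in range(rounds):
        untried = [n for n, t in history.items() if t == float("inf")]
        name = untried[0] if untried else min(history, key=history.get)
        schedule, completion = llhs[name]()
        history[name] = min(history[name], completion)
        if completion < best_time:
            best_schedule, best_time = schedule, completion
    return best_schedule, best_time

# toy stand-ins for the population-based LLHs
llhs = {"ga": lambda: ("ga-plan", 10.0), "pso": lambda: ("pso-plan", 7.0)}
best_schedule, best_time = hyper_heuristic(llhs, rounds=5)
```

After the warm-up pass, the selector keeps re-running whichever heuristic has delivered the shortest completion time, which is the "dynamic" element the abstract contrasts with random LLH selection.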
Due to the development of modern information technology, the emergence of fog computing enhances equipment computational power and provides new solutions for traditional industrial applications. Generally, it is impossible to establish a quantitative energy-aware model with a smart meter for load balancing and scheduling optimization in a smart factory. Focusing on the complex energy consumption problems of manufacturing clusters, this paper proposes an energy-aware load balancing and scheduling (ELBS) method based on fog computing. First, an energy consumption model related to the workload is established on the fog node, and an optimization function aiming at load balancing of the manufacturing cluster is formulated. Then, an improved particle swarm optimization (PSO) algorithm is used to obtain an optimal solution, and task priorities are established for the manufacturing cluster. Finally, a multi-agent system is introduced to achieve distributed scheduling of the manufacturing cluster. The proposed ELBS method is verified by experiments with a candy packing line, and the experimental results show that the method provides optimal scheduling and load balancing for the mixing work robots.
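A bare-bones PSO loop applied to a load-imbalance objective gives the flavor of such methods. This is a generic textbook PSO, not the paper's improved ELBS variant; all parameters and the variance-style objective are illustrative:

```python
import random

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO sketch: particles track personal and global bests
    while minimizing objective f over R^dim."""
    random.seed(7)
    pos = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=f)
    return gbest

# objective: variance of per-node load shares (perfect balance -> 0)
def imbalance(shares):
    m = sum(shares) / len(shares)
    return sum((s - m) ** 2 for s in shares)

best = pso_minimize(imbalance, dim=4)
```

Minimizing the variance of per-node shares is one simple way to phrase "balance the cluster" as a single scalar objective for the swarm.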
This paper studies the delay-optimal virtual machine (VM) scheduling problem in cloud computing systems, which have a constant amount of infrastructure resources such as CPU, memory and storage in the resource pool. The cloud computing system provides VMs as services to users. Cloud users request various types of VMs randomly over time, and the requested VM-hosting durations vary vastly. We first adopt a queueing model for the heterogeneous and dynamic workloads. Then, we formulate VM scheduling in such a queueing cloud computing system as a decision-making process, where the decision variable is the vector of VM configurations and the optimization objective is the delay performance in terms of average job completion time. A low-complexity online scheme that combines shortest-job-first (SJF) buffering and min-min best fit (MMBF) scheduling, i.e. SJF-MMBF, is proposed to determine the solutions. Another scheme that combines SJF buffering with reinforcement learning (RL)-based scheduling, i.e. SJF-RL, is further proposed to avoid the potential job starvation of SJF-MMBF. The simulation results show that SJF-RL achieves delay-optimal scheduling of VMs by provisioning a low delay at various job arrival rates and for various shapes of job length distribution. The simulation results also illustrate that although SJF-MMBF is sub-delay-optimal in a heavily loaded and highly dynamic environment, it is efficient in throughput, in terms of the average job hosting rate.
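The SJF-plus-best-fit pipeline described above can be approximated in a few lines. Job lengths and the server count are invented, and this simplified sketch ignores arrivals, queueing dynamics, and VM configuration vectors:

```python
def sjf_mmbf(jobs, n_servers):
    """SJF buffering + min-min best fit, heavily simplified: queue jobs
    shortest-first, then place each on the server that would finish it
    earliest; returns per-job completion times."""
    finish = [0.0] * n_servers           # current finish time per server
    completion = {}
    for job_id, length in sorted(jobs, key=lambda j: j[1]):   # SJF order
        s = min(range(n_servers), key=lambda s: finish[s] + length)  # best fit
        finish[s] += length
        completion[job_id] = finish[s]
    return completion

jobs = [("j1", 5.0), ("j2", 1.0), ("j3", 3.0), ("j4", 2.0)]
done = sjf_mmbf(jobs, 2)
avg = sum(done.values()) / len(done)     # average job completion time
```

Ordering short jobs first is what drives the average completion time down; the starvation risk the abstract mentions arises when long jobs keep getting pushed behind a stream of short arrivals, a dynamic this static sketch does not model.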