Optimization of Response and Processing Time for Smart Societies Using Particle Swarm Optimization and Levy Walk

Ayesha Anjum Butt1, Zahoor Ali Khan2, Nadeem Javaid1(B), Annas Chand3, Aisha Fatima1, and Muhammad Talha Islam1

1 COMSATS University Islamabad, Islamabad 44000, Pakistan
nadeemjavaidqau@gmail.com
2 Computer Information Science, Higher Colleges of Technology, Fujairah 4114, UAE
3 COMSATS University Islamabad, Abbottabad 22010, Pakistan
http://www.njavaid.com
Abstract. Reducing delay and latency in the cloud computing environment is a challenge for the present research community. This study performs a rigorous comparative analysis of the fog computing paradigm and the conventional cloud computing paradigm in the context of the Smart Grid (SG). Meeting the consumers' demand and optimizing cloud services to achieve service level objectives is of great importance. The fog is introduced to enhance the efficiency of the cloud and to fulfill consumer requests at the edge of the network. When the requests of Smart Societies (SSs) placed on the fog are huge, the increased demand for real-time response becomes a challenge for the SG. In this study, Particle Swarm Optimization (PSO) is implemented and compared with the proposed technique: Improved PSO with Levy Walk (IPSOLW). These load balancing algorithms are compared on the basis of the Closest Data Center (CDC) and Optimize Response Time (ORT) service broker policies. The proposed algorithms handle the load of the SSs on the fog. The proposed IPSOLW handles more requests because of the Levy Walk (LW): requests are allocated directly to the best Data Center (DC).
1 Introduction
Recently, world technology has evolved towards the conceptualization, development, and implementation of cloud computing systems. Traditional Information Technology (IT) is moving towards the cloud; the use of hardware, computers and even software is shifting into cloud computing due to its adaptive and widespread nature. Cloud computing depends on a network of data centers, which are the exclusive, monopolized hubs responsible for computation and storage. To diminish the constraints of the cloud, most requests are processed within the Data Centers (DCs). However, with the increase in the number of internet connections, the number of connected smart devices over the internet has also increased. This has led to the emerging technology of the Internet of Things (IoT).
Cloud computing facilitates the operation of the SG; by means of it, the energy utilization of the SG becomes an intelligent, efficient and automated integration that is indirectly connected to distributed power generation. When a huge amount of data is collected by Smart Meters (SMs), this data needs to be processed and stored. The cloud provides this storage facility for the incoming requests of Smart Homes (SHs).
The cloud also facilitates optimal workload allocation, i.e., how to control the load in a heterogeneous environment [1]. For this purpose, a multi-tenant framework is proposed and a load balancing cloud max-min algorithm is used to schedule the consumers' requests. The priority of incoming requests is calculated on the basis of a stochastic probability distribution. To meet the service level objectives, two scenarios are implemented by measuring the performance and execution time of the balancing algorithm. The two scenarios are: (i) implementation of Virtual Machines (VMs) and division of the consumer requests among them, and (ii) implementation of vertically scaled VMs with higher speed CPUs to serve more consumer requests. Monte Carlo simulations are performed to execute the consumer requests.
In [2,3], the integration and distribution of power become larger and more complex because the rate of growth of requests from the consumer side is increasing day by day. Thus, it becomes difficult for the grids and SGs to facilitate the consumers according to their requirements. To make the system scalable and efficient enough to handle the consumers' requests, a cloud platform is introduced in the SG environment to reduce the load on the grid side. The implemented cloud platform also enhances the performance of the grid stations. In [2], GridCloud is implemented on the cloud platform to handle bulk data; for this purpose, online analysis and forensic exploration are used. Resilient Distributed Datasets (RDDs) with a directed acyclic graph are used in [3] to solve the low performance problem. These implemented techniques monitor the consumers' requests, operate the services and provide a validated platform for the requests on the cloud.
Another issue for cloud service providers is the high operational cost. To deal with this type of problem, a parallel distributed load balancing algorithm is proposed in [4]. The low cost complexity of the implemented algorithm is based on the proximal Jacobian Alternating Direction Method of Multipliers (PJ-ADMM). To verify the results of this implemented algorithm, extensive simulations are performed; by using fog devices, the computational cost of the cloud is also reduced. In [5], Ant Colony Optimization (ACO) is proposed to calculate the cost, Response Time (RT) and Processing Time (PT) on the basis of the ORT policy.
Summarizing the previous work, these studies reveal how the concepts of cloud and fog are introduced into the SG, and how these concepts turn homes into SHs in the IoT. Keeping the previous studies in mind, in this paper we implement a cloud and fog based integrated environment. Regarding the efficiency and performance of the implemented system, the contributions of this paper are summarized below (Fig. 1):
• A fog and cloud based integrated platform for six regions of the world.
• Implementation of PSO and our proposed algorithm IPSOLW.
• Comparison of the implemented and proposed algorithms on the basis of two service broker policies, CDC and ORT.
• Calculation of the average RT, PT and cost on the basis of the above mentioned algorithms and broker policies.
• The proposed IPSOLW performs better than the existing PSO.

Fig. 1. Proposed smart environment
The remainder of the paper is organized as follows: related studies are presented in Sect. 2, and the problem statement is given in Sect. 3. The system model with the proposed methodology is demonstrated in Sect. 4. Sect. 5 defines the proposed algorithm, Sect. 6 describes the simulation results of our proposed scheme, and Sect. 7 concludes the paper.
2 Related Work
Nikolaos et al. in [6] proposed a two layer architecture. Their aim is to address issues inside the data centers: they want to tackle the problems of resource utilization, VM allocation and load balancing of the numerous incoming requests and their placement on physical servers. To resolve these issues, maintain the Quality of Service (QoS) metrics and minimize energy consumption, they also apply fuzzy Takagi-Sugeno modeling in their work. By implementing the proposed two layer model they achieve a reduction of energy. Even so, they carry out their work for a lower stage or a single host; in the future, it can be applied to multiple servers. How to guarantee VM performance and how to place VMs to reduce the consumption of Physical Machines (PMs) is a major challenge discussed by Zhao et al. in [7]. To resolve this challenge, they first consider the problem of CPU utilization and power consumption, then check the trend of VM degradation, and then formulate the bi-objective problem. As a solution, they design a Power aware and Performance guaranteed VM Placement (PPVMP) model. They also propose a meta-heuristic algorithm, ACO, to reduce the energy consumption. The CloudSim and OpenStack platforms are used to evaluate the results of their designed model and proposed algorithm. The results show that PPVMP can reduce PM power consumption and guarantee the performance of VM placement over the cloud. However, they can also consider fog in their work to enhance the performance of their model.
Maintaining QoS during task scheduling while minimizing energy consumption is another challenge. To overcome this challenge, an energy aware task scheduling method named QET is introduced [8]. The introduced model is based on QoS in the cloud environment to minimize the energy consumption; PM selection for energy consumption is also done through the QoS based model. However, this implemented model cannot handle the case of heterogeneous PMs. The authors can also implement this model in a fog environment by resolving the shortcomings of their model.
In [9], Zoltan discusses two optimization problems that influence each other significantly. He examines the problem of VM placement and the utilization of PMs. To propose a solution to these problems, he uses a private cloud and the fog DC of a private organization. Different algorithms, such as a dedicated black box and determining the candidate VMs and PMs, are devised to solve these problems. These implemented algorithms give different results for different solutions. However, the performance and cost of the implemented algorithms are not discussed.
Yu and Jiang in [4] intend to reduce the cost of cloud data centers by using fog devices, because the revenue loss of these networks is smaller when services are allocated near the consumers instead of being carried across a huge network to the cloud data centers. To fulfill their aim, they formulate the operational cost minimization of the fog devices provided by cloud service providers. The considered cost consists of four parts: the energy cost of the cloud data centers, the network bandwidth cost, the revenue loss due to delay, and the allowances paid to the fog devices. In their work, they also propose a parallel and distributed algorithm based on PJ-ADMM. They obtain less computational cost with low complexity. Nevertheless, they can also compute the performance of their network, and the implementation cost of the network can also be considered. When there is a large network with multiple servers, it consumes a lot of energy, so the energy cost is also very high [10]. Liao and Sun investigate this in their study. According to them, with the deployment of cloud networks together with Software Defined Networking (SDN), it becomes easier to implement and adjust different types of networks in the cloud. They also discuss Virtual Content Distribution Networks (VCDNs) and their energy efficiency issue. To overcome the aforementioned issues and challenges, they propose an off-line optimal algorithm which provides the optimal solution to the studied problems. To improve energy efficiency, they design a time efficient approximate algorithm to predict the incoming traffic. To check the performance of their proposed approach, they perform multiple simulations; after performing the simulations, they save energy in the VCDN based network. There remains a tradeoff between the numerous incoming traffic and QoS in a VCDN network.
Marco et al. discuss the evolution of power systems towards smart grids in [11]. According to their study, new targets and technologies are required because this evolution totally depends on the modernization of the distribution grids. To achieve the new targets, their work presents a smart metering infrastructure which unlocks a set of services and automation management for the distribution grid. Their proposed algorithm gives the opportunity to communicate with the smart meter. This is one-way communication, in which the requirements of the consumers are monitored and services are then provided according to those requirements. The proposed algorithm is a real-time distribution algorithm which provides services over the cloud by using the automation of the grid. Their aim is also to provide key features such as interoperability and scalability to the cloud, which they achieve with the implemented real-time distribution algorithm. On the other hand, if the smart metering infrastructure were facilitated with two-way communication, it would give better results.
3 Problem Statement
The work proposed in [12–14] focuses on different problems: Virtual Machine (VM) allocation problems, the Variable Size Bin Packing Problem (VSBPP), multi-objective problems and energy management related to cloud and fog based computing. In [12], an Improved Levy Whale Optimization Algorithm (ILWOA) is proposed. However, multi-objective problems such as the processing and response time of the cloud are not handled by the proposed algorithm. The Multi-Objective Grey Wolf Optimizer (MOGWO) is proposed in [13]; however, it is not able to find satisfactory solutions in a larger search space when an optimum result is required. When requests need VMs to store and process the data, it is difficult for it to find an appropriate platform in the search space. In [6,7], a fog based architecture is implemented using day-ahead scheduling, Particle Swarm Optimization (PSO), Binary Particle Swarm Optimization (BPSO) and the BAT algorithm. However, there is more delay in finding the best solution in the search space because of their premature convergence. In [14], Levy Flight Particle Swarm Optimization (LFPSO) is proposed; it searches both locally and globally, and the solution is given at the current time due to the randomness of its nature.
This work devises an integrated model of cloud and fog based computing using an Improved PSO with Levy Walk (IPSOLW). A Micro Grid (MG) and Smart Meters (SMs) are attached to the SHs that are connected to the implemented architecture. The MG is used as a small grid based on renewable energy resources; it also keeps the record of the SHs. The SM is used to keep track of the energy consumption of the SH. We formulate this scheduling problem as a multi-objective problem to schedule the incoming requests to the cloud and fog based architecture.
4 Proposed System Model
Fog computing extends cloud computing services to the edge of the network, where data is created and acted upon. The term fog was introduced by Cisco in recent years. Fig. 2 shows the proposed system model, which introduces an SM and MG architecture based on fog computing and efficient management of energy. The proposed system model comprises three tiers. The first tier, the bottom-most layer of the proposed model, is the consumer layer. The SHs for which this energy management model is proposed are located in different regions of the world. These form the SSs because of their direct communication with the fog and the cloud. The SHs in the SSs can send requests for services to the cloud and fog of the architecture, and they receive a response once their requests are processed according to the requirements. SHs can also communicate with each other and share their energy resources. The saved energy is provided to the servers at the bottom levels of the architecture, because SHs can manage their own energy. SMs are attached to the SHs, where they record and monitor the services; an SM provides the record of the energy consumption of the SH or smart building to which it is attached.
The MG plays a vital role between the bottom layer and the middle layer of this proposed architecture. Each region and fog in the proposed model is connected with its own MG, which keeps its own record. When any of the SHs requires services from the servers of the top two layers, it sends a request for them. If the required services are not fulfilled by the upper servers, then the MG provides the services to that SH. Conversely, if the MG itself is able to fulfill the requirements of the consumers, then the MG gives the services to them directly.
The second tier, the middle layer of the proposed system model, is the fog computing layer. The VMs, and the PMs that host them, are the primary components of this layer. Since the fog is located at the edge of the network, it provides services to the consumers with less latency and delay. The fog provides its services locally in this proposed system and also acts as an intermediary layer between the cloud and the consumer layer. Because of its regional location, the maximum number of requests is processed by the fog easily with less latency and delay. This fog layer also overcomes the deficiencies of the cloud.
The third tier is the uppermost layer of the proposed model; the high level servers and DCs comprise this tier. The cloud always provides its services remotely. It processes and serves a large number of incoming requests, and it also overcomes the shortcomings of the fog, namely its limited storage and processing of requests. This layer provides the services to the consumers in terms of storage. Like other clouds, the cloud of the proposed model provides three types of services. With Infrastructure as a Service (IaaS), the consumers of this proposed model can use the services at any time; they have to pay the servers for these services, and the benefit is that they do not need to pay extra charges for any other services. With Platform as a Service (PaaS), the consumers of the SHs can easily use any type of software and application on the server systems of the DCs without installing them on their personal computers. The third type is Software as a Service (SaaS).

Fig. 2. Cloud and fog based integrated system model
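As a purely illustrative rendering of the three-tier architecture described above, the sketch below models smart homes with their SMs, MGs, fog nodes with VMs, and a central cloud as plain data structures. All class and field names are assumptions made for illustration; they are not part of the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SmartHome:
    home_id: str
    meter_reading_kwh: float = 0.0   # recorded by the attached SM

@dataclass
class MicroGrid:
    region: str
    homes: List[SmartHome] = field(default_factory=list)  # the MG keeps the record of SHs

@dataclass
class FogNode:
    region: str
    vm_count: int            # VMs hosted on the fog's PMs
    micro_grid: MicroGrid    # each fog is connected to its own MG

@dataclass
class Cloud:
    data_centers: List[str]  # remote, high-level DCs

@dataclass
class SmartSociety:
    """Three-tier view: consumer layer (SHs/MG), fog layer, cloud layer."""
    fogs: List[FogNode]
    cloud: Cloud

# One of the six regions considered in the paper, sketched with toy values.
region_homes = [SmartHome(f"sh-{i}") for i in range(3)]
mg = MicroGrid("Region-1", region_homes)
society = SmartSociety(fogs=[FogNode("Region-1", vm_count=8, micro_grid=mg)],
                       cloud=Cloud(data_centers=["DC-1", "DC-2"]))
print(len(society.fogs[0].micro_grid.homes), "smart homes in Region-1")
```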
5 Proposed Scheme
IPSOLW is a novel meta-heuristic algorithm that depends on the nature of the swarm and the random behavior of Levy walks. The algorithm works in the same way as a natural swarm: each particle searches for the best position, and when a better position is found, the particle's best position is updated. In this proposed algorithm, the velocity of the particle swarm is updated with a Levy walk to counter the premature convergence of PSO. In this paper, the IPSOLW Algorithm 1 performs load balancing. It initializes the VMs and fogs for load balancing. Besides, the probability fitness is calculated with respect to the DC. Our environment is fog based, so the fogs act as a source of the best position. Therefore, the implemented load balancing algorithm helps the VMs to find the best feasible solution until they reach the best fitness value, i.e., the best optimization solution value.
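The following sketch shows one way the Levy walk can be injected into the standard PSO velocity update: a heavy-tailed Levy step (drawn here with Mantegna's algorithm) perturbs each particle's move so it can escape the premature convergence mentioned above. The step scale, inertia and acceleration coefficients are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
    """Heavy-tailed Levy step via Mantegna's algorithm (beta is an assumed value)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def ipsolw_update(x, v, pbest, gbest,
                  w=0.7, c1=1.5, c2=1.5, levy_scale=0.01):
    """One IPSOLW-style move: classic PSO velocity update plus a Levy walk term.

    x, v, pbest: arrays of shape (n_particles, dim); gbest: shape (dim,).
    All coefficients are illustrative; the paper does not report its parameter values.
    """
    n, dim = x.shape
    r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
    v_new = (w * v
             + c1 * r1 * (pbest - x)          # pull towards each particle's best
             + c2 * r2 * (gbest - x)          # pull towards the swarm's best
             + levy_scale * np.vstack([levy_step(dim) for _ in range(n)]))
    return x + v_new, v_new

# Toy usage: 10 particles searching a 3-dimensional space.
x = np.random.rand(10, 3); v = np.zeros_like(x)
x, v = ipsolw_update(x, v, pbest=x.copy(), gbest=x[0])
```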
6 Simulation Results and Discussion
In this section, we report simulations performed using the Cloud Analyst tool to check the efficiency of PSO and the proposed algorithm IPSOLW. To do so, the performance of these algorithms is compared on the basis of RT, PT, and cost. The cost, RT and PT are compared under the two service broker policies, i.e., Closest Data Center (CDC) and Optimize Response Time (ORT); the RT of the clusters is also calculated under these policies.
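As a rough illustration of how the two broker policies differ, the sketch below selects a data center either by region proximity (CDC) or by the lowest currently estimated response time (ORT). The data structures and latency estimates are assumptions for illustration, not the Cloud Analyst implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataCenter:
    name: str
    region: int
    est_response_ms: float  # rolling estimate of recent response time (assumed)

def closest_data_center(dcs: List[DataCenter], user_region: int) -> DataCenter:
    """CDC policy (sketch): prefer a DC in the user's own region, else the first one."""
    same_region = [dc for dc in dcs if dc.region == user_region]
    return same_region[0] if same_region else dcs[0]

def optimize_response_time(dcs: List[DataCenter], user_region: int) -> DataCenter:
    """ORT policy (sketch): pick the DC with the lowest estimated response time."""
    return min(dcs, key=lambda dc: dc.est_response_ms)

dcs = [DataCenter("fog-R1", 1, 60.0), DataCenter("cloud-R3", 3, 45.0)]
print(closest_data_center(dcs, user_region=1).name)     # fog-R1
print(optimize_response_time(dcs, user_region=1).name)  # cloud-R3
```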
The simulation results for the average RT of the load balancing algorithms implemented in this paper are shown in Fig. 4a. In this figure, the average RT of PSO and IPSOLW is compared on the basis of CDC and ORT. The average RT of PSO on the basis of CDC is 13.24% and on the basis of ORT is 7.09%, while for IPSOLW the average RT on the basis of CDC is 7.031% and on the basis of ORT is 4.37%. The figure shows that the RT of IPSOLW exceeds that of PSO, which means that IPSOLW handles a larger number of requests from the SH consumers than PSO does; the average RT of IPSOLW reflects the large number of incoming requests handled from the consumer side.
Algorithm 1. IPSOLW
1: Start
2: Initialize the positions of the particles of the swarm randomly in the search space
3: Search for the list of VMs and fogs
4: j = Fog
5: i = VM
6: for t = 1:24 do
7:    Let Y be a random position in the search space
8:    Evaluate the position of each PVM
9:    Initialize the memory of each PVM
10:   while iter ≤ iter_max do
11:      Determine the VM size
12:      Compute the processing time using Eq. (1)
13:      Calculate the transmission delay using Eq. (2)
14:      Compute the response time using Eq. (3)
15:      for i = 1:C do
16:         Randomly select a PVM j to follow i
17:         Define the awareness probability
18:         if r_j ≥ AP_{j,iter} then
19:            x_{i,iter+1} = x_{i,iter} + r_i × (m_{j,iter} − x_{i,iter})
20:         else
21:            x_{i,iter+1} = Y
22:         end if
23:      end for
24:   end while
25:   Evaluate the usefulness of the new fog
26:   Check the new position of each PVM
27: end for
28: End
The PT of IPSOLW is also higher than that of PSO, because IPSOLW also has the higher RT. The reason for the higher PT is that IPSOLW handles a larger number of incoming requests from the SH consumers. PSO also handles numerous requests from the consumer side and gives a quick solution compared to other algorithms; however, these solutions are sometimes not feasible for the consumers because of the premature convergence nature of PSO. In Fig. 4b, the PT of PSO is 1.15% and 5.381% on the basis of CDC and ORT, respectively, while the PT of IPSOLW is 2.12% and 2.77% on the basis of CDC and ORT.
The processing time is calculated as:

$PT = \frac{TR}{VM_{speed}}$   (1)

where $TR$ is the total number of consumer requests and $VM_{speed}$ is the speed of the VMs. The transmission delay is computed as:

$TD = Total_{l} + Total_{TD}$   (2)

where $Total_{l}$ is the total latency and $Total_{TD}$ is the total transfer delay. The response time is calculated as:

$RT = FT - AT + TD$   (3)

where $FT$ is the finish time, $AT$ is the arrival time and $TD$ is the transmission delay.
Fig. 3. Proposed scenario world division map
The cost of PSO and IPSOLW is shown in Fig. 5a and b. These figures represent the VM cost, Data Transfer (DT) cost and total cost of the implemented load balancing algorithms. Fig. 5a clearly shows that the VM cost and DT cost of PSO are minimal under both service broker policies; the VM and DT cost of IPSOLW are also minimal under both service broker policies (Fig. 5b). The VM cost of PSO is 1.11% and 1.15% on the basis of CDC and ORT, and the DT cost of PSO is 98.88% and 98.84%, respectively. The VM and DT cost of IPSOLW on the basis of CDC and ORT is 98.6% (Fig. 3).
Fig. 4. Average RT and PT of PSO and IPSOLW under the CDC and ORT service broker policies: (a) average response time (ms), (b) average processing time (ms)
Fig. 5. PSO and IPSOLW cost comparison under the CDC and ORT service broker policies: (a) PSO VM cost, DT cost and total cost ($), (b) IPSOLW VM cost, DT cost and total cost ($)
7 Conclusion
Considering the number of SH applications running on numerous things, as well as the integration of cloud and fog based SSs, such applications are processed at the edge of the network. In this paper, we reduce the size of the VMs to minimize energy utilization. To calculate the results of our proposed scenario over six SSs on the basis of CDC and ORT, nature inspired algorithms are implemented and proposed. The algorithm proposed in this work is IPSOLW, which is compared with the implemented PSO. In the end, it is concluded that the proposed IPSOLW has 7.031% and 4.37% RT on the basis of CDC and ORT, respectively. The PT of IPSOLW on the basis of CDC is 2.12% and on the basis of ORT is 2.77%. However, IPSOLW has a higher cost compared to PSO. In the future, we will consider the bin packing problem with different meta-heuristic algorithms.
References
1. Wang, Z., Hayat, M.M., Ghani, N., Shaban, K.B.: Optimizing cloud-service per-
formance: efficient resource provisioning via optimal workload allocation. IEEE
Trans. Parallel Distrib. Syst. 28(6), 1689–1702 (2017)
2. Anderson, D., Gkountouvas, T., Meng, M., Birman, K., Bose, A., Hauser, C.,
Litvinov, E., Luo, X., Zhang, F.: GridCloud: infrastructure for cloud-based wide
area monitoring of bulk electric power grids. IEEE Trans. Smart Grid, 1–10 (2018)
3. Wang, W., Zhou, F., Li, J.: Cloud-based parallel power flow calculation using
resilient distributed datasets and directed acyclic graph. J. Mod. Power Syst. Clean
Energy, 1–13 (2018)
4. Yu, L., Jiang, T., Zou, Y.: Fog-assisted operational cost reduction for cloud data
centers. IEEE Access 5, 13578–13586 (2017)
5. Buksh, R., Javaid, N., Fatima, I.: Towards efficient resource utilization exploiting
collaboration between HPF and 5G enabled energy management controllers in
smart homes. Sustainability 10(10), 3592 (2018). 3–24
6. Leontiou, N., Dechouniotis, D., Denazis, S., Papavassiliou, S.: A hierarchical control
framework of load balancing and resource allocation of cloud computing services.
Comput. Electr. Eng. 67, 235–251 (2018)
7. Zhao, H., Wang, J., Liu, F., Wang, Q., Zhang, W., Zheng, Q.: Power-aware and
performance-guaranteed virtual machine placement in the cloud. IEEE Trans. Par-
allel Distrib. Syst. 29(6), 1385–1400 (2018)
8. Xue, S., Zhang, Y., Xiaolong, X., Xing, G., Xiang, H., Ji, S.: QET: a QoS-
based energy-aware task scheduling method in cloud environment. Cluster Comput.
20(4), 3199–3212 (2017)
9. Mann, Z.Á.: Resource optimization across the cloud stack. IEEE Trans. Parallel Distrib. Syst. 29(1), 169–182 (2018)
10. Liao, D., Sun, G., Yang, G., Chang, V.: Energy-efficient virtual content distribution
network provisioning in cloud-based data centers. Future Gener. Comput. Syst. 83,
347–357 (2018)
11. Pau, M., Patti, E., Barbierato, L., Estebsari, A., Pons, E., Ponci, F., Monti, A.:
A cloud-based smart metering infrastructure for distribution grid services and
automation. Sustain. Energy Grids Netw. 15, 14–25 (2017)
12. Abdel-Basset, M., Abdel-Fatah, L., Sangaiah, A.K.: An improved Lévy based whale optimization algorithm for bandwidth-efficient virtual machine placement in cloud computing environment. Cluster Comput., 1–16 (2018)
13. Mirjalili, S., Saremi, S., Mirjalili, S.M., Coelho, L.d.S.: Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst. Appl. 47, 106–119 (2016)
14. Jensi, R., Jiji, G.W.: An enhanced particle swarm optimization with levy flight for global optimization. Appl. Soft Comput. 43, 248–261 (2016)