Optimization of Response and Processing
Time for Smart Societies Using Particle
Swarm Optimization and Levy Walk
Ayesha Anjum Butt1, Zahoor Ali Khan2, Nadeem Javaid1(B), Annas Chand3,
Aisha Fatima1, and Muhammad Talha Islam1
1COMSATS University Islamabad, Islamabad 44000, Pakistan
2Computer Information Science, Higher Colleges of Technology, Fujairah 4114, UAE
3COMSATS University Islamabad, Abbottabad 22010, Pakistan
Abstract. Reducing delay and latency in the cloud computing environ-
ment is a challenge for the present research community. This study per-
formed a rigorous, comparative analysis of the fog computing paradigm
and the conventional cloud computing paradigm in the context of the
Smart Grid (SG). To meet the consumers’ demand and optimize cloud
services to achieve service level objectives is of great importance. The
fog is introduced to enhance the efficiency of the cloud and to fulfill
the consumer requests at the edge of the network. When the requests
of Smart Societies (SSs) are huge on the fog, the increased demand for real-time response becomes a challenge for the SG. In this study, Particle Swarm Optimization (PSO) is implemented and compared with the proposed technique: Improved PSO with Levy Walk (IPSOLW). These load balancing algorithms are compared on the basis of the Closest Data Center (CDC) and Optimize Response Time (ORT) service broker policies. The proposed algorithms handle the load of the SSs on the fog. The proposed IPSOLW handles more requests because, with the LW, the requests are directly allocated to the best DC.
1 Introduction
In recent years, technology has evolved towards the conceptualization, development, and implementation of cloud computing systems. Traditional Information Technology (IT) is moving to the cloud: hardware, computing, and even software are being turned into cloud services because of the cloud's adaptive and widespread nature. Cloud computing depends on a network of Data Centers (DCs), the dedicated hubs responsible for computation and storage, and to relieve the load on the cloud most requests are processed within these DCs. However, as the number of internet connections grows, the number of connected smart devices over the internet also increases, which has given rise to the emerging technology of the Internet of Things (IoT).
Springer Nature Switzerland AG 2020
L. Barolli et al. (Eds.): AINA 2019, AISC 926, pp. 14–25, 2020.
Cloud computing facilitates the operation of the SG: through it, the energy utilization of the SG becomes an intelligent, efficient and automated integration that is indirectly connected to distributed power generation. When a huge amount of data is collected by Smart Meters (SMs), this data needs to be processed and stored. The cloud provides this storage facility for the incoming requests of the Smart Homes (SHs).
The cloud also facilitates optimal workload allocation, i.e., how to control load in a heterogeneous environment [1]. For this purpose, a multi-tenant framework is proposed in which a load balancing cloud max-min algorithm schedules the consumers' requests. The priority of incoming requests is calculated on the basis of a stochastic probability distribution. To meet the service level agreement, two scenarios are implemented by examining the performance and execution time of the balancing algorithm: (i) implementation of VMs, dividing the consumer requests among them, and (ii) implementation of vertically scaled VMs with higher speed CPUs to serve more consumer requests. Monte Carlo simulations are performed to execute the consumer requests.
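The max-min scheduling idea described for [1] can be sketched as follows. This is a hypothetical reconstruction, not the authors' code; the function name, request lengths, and VM speeds are illustrative assumptions.

```python
# Hypothetical sketch of a cloud max-min scheduler: repeatedly take the
# largest pending request and assign it to the VM that would finish it
# earliest. All names and numbers are illustrative.

def max_min_schedule(request_lengths, vm_speeds):
    """Return a mapping request_index -> vm_index."""
    finish = [0.0] * len(vm_speeds)   # running finish time per VM
    assignment = {}
    # largest request first (the "max" step)
    for r in sorted(range(len(request_lengths)),
                    key=lambda r: request_lengths[r], reverse=True):
        # VM giving the minimum completion time (the "min" step)
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + request_lengths[r] / vm_speeds[v])
        finish[best] += request_lengths[r] / vm_speeds[best]
        assignment[r] = best
    return assignment

print(max_min_schedule([8, 4, 6, 2], [1.0, 2.0]))  # {0: 1, 2: 0, 1: 1, 3: 1}
```

The largest-first ordering keeps the long requests from being stranded on slow VMs after the short ones have filled the fast ones.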
In [2,3], the integration and distribution of power become larger and more complex because the rate of consumer requests grows day by day, so it becomes difficult for grids and SGs to serve consumers according to their requirements. To make the system scalable and efficient enough to handle the consumers' requests, cloud platforms are introduced in the SG environment to reduce the load on the grid side; the implemented cloud platform also enhances the performance of the grid stations. In [2], GridCloud is implemented on the cloud platform to handle bulk data, using online analysis and forensic exploration. In [3], Resilient Distributed Datasets (RDDs) with a directed acyclic graph are used to solve the low performance problem. These techniques monitor the consumers' requests, operate the services accordingly and provide a validated platform for the requests.
Another issue for cloud service providers is the high operational cost. To deal with this type of problem, a parallel distributed load balancing algorithm is proposed in [4]. The low cost complexity of the implemented algorithm is based on the proximal Jacobian alternating direction method of multipliers. Extensive simulations are performed to check the results of this algorithm, and by using fog devices the computational cost of the cloud is also reduced. In [5], Ant Colony Optimization is proposed to calculate cost, RT and PT on the basis of the ORT policy.
Summarizing the previous work, these studies reveal how the concepts of cloud and fog are introduced in the SG, and how they turn homes into SHs in the IoT. Keeping the previous studies in mind, in this paper we implement a cloud and fog based integrated environment. Regarding the efficiency and performance of the implemented system, the contributions of this paper can be summarized as follows (Fig. 1):
- A fog and cloud based integrated platform for six regions of the world.
- Implementation of PSO and our proposed algorithm IPSOLW.
Fig. 1. Proposed smart environment
- Comparison of the implemented and proposed algorithms on the basis of two service broker policies, CDC and ORT.
- Calculation of average RT, PT and cost on the basis of the above mentioned algorithms and broker policies.
- The proposed IPSOLW performs better than the existing PSO.
The remainder of the paper is organized as follows: related studies are presented in Sect. 2 and the problem statement in Sect. 3. The system model with the proposed methodology is demonstrated in Sect. 4. Section 5 defines the proposed algorithm, Sect. 6 describes the simulation results of our proposed scheme, and Sect. 7 concludes the paper.
2 Related Work
Nikolaos et al. in [6] proposed a two layer architecture. Their aim is to address issues inside the data centers: resource utilization, VM allocation, and the load balancing of numerous incoming requests and their placement on physical servers. To resolve these issues, maintain the QoS metrics and minimize energy consumption, they also employ fuzzy Takagi-Sugeno modeling. By implementing the proposed two layer
model, they achieve a reduction in energy. However, they carried out their work only at a lower stage, for a single host; in future, it could be applied to multiple servers. How to guarantee VM performance and how to place VMs to reduce the consumption of Physical Machines (PMs) is a major challenge discussed by Zhao et al. in [7]. To resolve this challenge, they first consider the problem of CPU utilization and power consumption, then check the trend of VM degradation and formulate a bi-objective problem. As a solution, they design a Power-aware and Performance-guaranteed VM Placement (PPVMP) model. They also propose a meta-heuristic algorithm, Ant Colony Optimization (ACO), to reduce the energy consumption. CloudSim and the OpenStack platform are used to evaluate the designed model and proposed algorithm. The results show that PPVMP can reduce PM power consumption and guarantee the performance of VM placement over the cloud. However, they could also consider fog in their work to enhance the performance of their model.
Maintaining QoS during task scheduling while minimizing energy consumption is another challenge. To overcome it, an energy-aware task scheduling method named QET is introduced [8]. The introduced model is based on QoS in the cloud environment to minimize the energy consumption, and PM selection for energy consumption is also done through the QoS based model. However, the implemented model cannot handle the case of heterogeneous PMs. The authors could also implement this model in a fog environment by resolving the shortcomings of their model.
In [9], Mann discusses two optimization problems that influence each other significantly: VM placement and the utilization of PMs. To solve these problems, he uses the private cloud and fog DC of a private organization. Different algorithms, such as a dedicated black box and the determination of candidate VMs and PMs, are applied, and they give different results for different solutions. However, the performance and cost of the implemented algorithms are not discussed.
Yu and Jiang in [4] intend to reduce the cost of cloud data centers by using fog devices, because the revenue loss of these networks is smaller when services are allocated to nearby fog devices instead of the huge cloud network. To fulfill this aim, they formulate the operational cost minimization of fog devices provided by cloud service providers. The considered cost consists of four parts: the energy cost of cloud data centers, the network bandwidth cost, the revenue loss due to delay, and the allowances paid to fog devices. They also propose a parallel and distributed algorithm based on PJ-ADMM, obtaining less computational cost with low complexity. Nevertheless, the performance of the network and its implementation cost could also be considered. When there is a large network with multiple servers, it consumes a lot of energy, so the energy cost is also very high [10]. Liao and Sun investigate this in their study. According to them, by deploying cloud networks with Software Defined Networking (SDN), it becomes easier to implement and adjust different types of networks in the cloud. They also
discuss Virtual Content Distribution Networks (VCDNs) and their energy efficiency issue. To overcome the aforementioned issues and challenges, they propose an off-line optimal algorithm which provides the optimal solution of the studied problems. To improve energy efficiency, they design a time efficient approximate algorithm that predicts the incoming traffic. They perform multiple simulations to check the performance of their proposed approach and show that it saves energy in the VCDN based network. There is, however, a tradeoff between the numerous incoming traffic and the QoS in the VCDN network.
Marco et al. discuss the evolution of power systems towards smart grids in [11]. According to their study, new targets and technologies are required because this evolution depends entirely on the modernization of the distribution grids. To achieve the new targets, their work presents a smart metering infrastructure which unlocks a set of services and the automation management of the distribution grid. Their proposed algorithm gives the opportunity to communicate with the smart meter through one way communication: the requirements of consumers are monitored and services are then provided according to those requirements. Their real time distribution algorithm provides services over the cloud by using grid automation. Their aim is also to provide key features such as interoperability and scalability to the cloud, which they fulfill with the implemented real time distribution algorithm. On the other hand, a smart metering infrastructure with two way communication would give better results.
3 Problem Statement
The work proposed in [12-14] focuses on different problems: Virtual Machine (VM) allocation problems, the Variable Size Bin Packing Problem (VSBPP), multi-objective problems, and energy management related to cloud and fog based computing. In [12], an Improved Levy Whale Optimization Algorithm (ILWOA) is proposed; however, multi-objective problems such as the processing and response time of the cloud are not handled by the proposed algorithm. A Multi Objective Grey Wolf Optimizer (MOGWO) is proposed in [13]; however, it cannot find satisfactory solutions in a larger search space when an optimum result is required, so when requests need VMs to store and process the data, it is difficult for it to find an appropriate platform in the search space. In [6,7], fog based architectures are implemented by using day-ahead scheduling, Particle Swarm Optimization (PSO), Binary Particle Swarm Optimization (BPSO) and the BAT algorithm. However, there is more delay in finding the best solution in the search space because of their premature convergence. In [14], Levy Flight Particle Swarm Optimization (LFPSO) is proposed, which searches both locally and globally and gives a solution at the current time due to the randomness in its nature.
This work devises an integrated model of cloud and fog based computing by using Improved PSO with Levy Walk (IPSOLW). The Micro Grid (MG) and Smart Meter (SM) are attached to the SHs that are connected to
the implemented architecture. The MG is used as a small grid based on renewable energy resources, and it also keeps the record of the SHs. The SM is used to keep track of the energy consumption of the SH. We formulate this scheduling problem as a multi-objective problem to schedule the incoming requests to the cloud and fog based architecture.
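One conventional way to turn a multi-objective goal like this (response time and processing time together) into a single scalar that a swarm optimizer can minimize is a weighted sum. The weights and the candidate numbers below are illustrative assumptions, not values from the paper:

```python
# Illustrative weighted-sum scalarization of an RT/PT multi-objective goal.
# The weights w_rt and w_pt and the candidate data are assumptions.

def fitness(response_time, processing_time, w_rt=0.5, w_pt=0.5):
    # lower is better: penalize both response and processing time
    return w_rt * response_time + w_pt * processing_time

# candidate request allocations as (RT ms, PT ms) pairs
candidates = [(120.0, 30.0), (90.0, 55.0), (100.0, 40.0)]
best = min(candidates, key=lambda c: fitness(*c))
print(best)  # (100.0, 40.0)
```

Changing the weights trades response time against processing time; a Pareto-based formulation (as in MOGWO [13]) avoids fixing the weights in advance.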
4 Proposed System Model
Cloud computing services are extended by fog computing, which places services at the edge of the network where data is created and acted upon. The term fog was introduced by Cisco in recent years. Figure 2 shows the proposed system model, which introduces an SM and MG architecture based on fog computing and efficient energy management. The proposed architecture has three tiers. The first tier, the bottom-most layer of the proposed model, is the consumer layer. The SHs for which this energy management model is proposed are located in different regions of the world. These form the SSs because of their direct communication with fog and cloud. The SHs in the SSs can send requests for services to the cloud and fog of the architecture.
When their requests are processed according to the requirements, they get a response. SHs can also communicate with each other and share their energy resources. The saved energy is provided to the servers at the bottom levels of the architecture, because SHs can manage their own energy. SMs are attached to the SHs to record and monitor the services; an SM provides the record of the energy consumption of the SH or smart building to which it is attached.

The MG plays a vital role between the bottom layer and the middle layer of this proposed architecture. Each region and fog in the proposed model is connected with its own MG, which keeps the record of the SHs. When any of the SHs requires services from the servers of the top two layers, it sends a request; if the required services are not fulfilled by the upper servers, then the MG provides the services to that SH.
The second tier, the middle layer of the proposed system model, is the fog computing layer. The VMs, hosted on PMs, are the primary components of this layer. Since the fog is located at the edge of the network, it provides services to the consumers with less latency and delay. The fog provides its services locally in this proposed system and acts as an intermediary layer between the cloud and the consumer layer. Because of its regional location, the maximum number of requests is processed by the fog easily with less latency and delay. This fog layer also overcomes the deficiencies of the cloud.

The third tier is the upper-most layer of the proposed model, comprising high level servers and DCs. The cloud always provides its services remotely; it processes and serves a large number of incoming requests. It also
overcomes the shortcomings of the fog, namely its limited storage and processing of requests. This layer provides services to the consumers in terms of storage.

Fig. 2. Cloud and fog based integrated system model
Like any other cloud, the cloud of the proposed model provides three types of services. With Infrastructure as a Service (IaaS), the consumers of this proposed model can use the services at any time; they pay the servers for these services, with the benefit that they do not need to pay extra charges for any other services. With Platform as a Service (PaaS), the consumers of the SHs can easily use any type of software and application on the server systems of the DCs without installing them on their personal computers. The third is Software as a Service (SaaS).
5 Proposed Scheme
The novel meta heuristic algorithm that depends on the nature of the swarm
and random behavior of levy is known as IPSOLW. The algorithm works same
as the nature of the swarm, i.e., particle search for best position, where the best
position is found, particle best position is updated. In this proposed algorithm,
the velocity of particle swarm is updated with levy walk, because of it’s prema-
ture convergence. In this paper, IPSOLW Algorithm 1performs load balancing.
Initializes VMs, fogs in the case of load balancing. Besides, the probability fitness
is calculated with respect to DC. Our environment is fog based, so the fogs act as
a source of best position. Therefore, the implemented load balancing algorithm
provides help to VMs to find the best feasible solution until they find the best
fitness value or best optimization solution value.
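A minimal sketch of this idea, assuming a standard PSO velocity update augmented with a Levy-distributed step drawn via Mantegna's algorithm. The constants (w, c1, c2, the 0.01 step scale), the 1-D search space, and the function names are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-stable step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def ipsolw_update(x, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One position/velocity update: classic PSO terms plus a Levy walk term."""
    r1, r2 = random.random(), random.random()
    vel = (w * vel
           + c1 * r1 * (pbest - x)   # pull toward the particle's best position
           + c2 * r2 * (gbest - x)   # pull toward the swarm's best position
           + 0.01 * levy_step())     # Levy walk term against premature convergence
    return x + vel, vel

# Toy run: a single particle drifting toward a known optimum at 0.
random.seed(7)
x, vel = 5.0, 0.0
for _ in range(50):
    x, vel = ipsolw_update(x, vel, pbest=0.0, gbest=0.0)
```

The heavy-tailed Levy term occasionally produces long jumps, which is what lets the swarm escape the local optima that cause plain PSO's premature convergence.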
6 Simulation Results and Discussion
In this section, we performed simulations using CloudAnalyst to check the efficiency of PSO and the proposed algorithm IPSOLW. The performance of these algorithms is compared on the basis of RT, PT and cost, each evaluated under the two service broker policies, Closest Data Center (CDC) and Optimize Response Time (ORT). The RT of the clusters is also calculated on the basis of these policies. The simulation results for the average RT of the implemented load balancing algorithms are shown in Fig. 4a, which compares the average RT of PSO and IPSOLW on the basis of CDC and ORT. The average RT of PSO is 13.24% on the basis of CDC and 7.09% on the basis of ORT, while for IPSOLW it is 7.031% on the basis of CDC and 4.37% on the basis of ORT. The figure clearly shows that IPSOLW achieves a lower average RT than PSO even while handling a larger number of requests from the consumers of the SHs.
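As an illustration only (the numbers below are made up, not the paper's results), a per-policy average RT like the ones reported here could be aggregated from per-request samples as follows:

```python
# Illustrative aggregation of average response time per service broker policy.
# Sample values are invented for the example.
samples = {"CDC": [50.2, 51.0, 49.5], "ORT": [47.9, 48.4, 48.0]}
avg_rt = {policy: sum(times) / len(times) for policy, times in samples.items()}
print(avg_rt)
```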
Algorithm 1. IPSOLW
Start
  Initialize the positions of the particles of the swarm randomly in the search space
  Search for the lists of VMs and fogs
  j = fog
  i = VM
  for t = 1 : 24 do
    Let Y be a random position in the search space
    Evaluate the position of the particle (VM)
    Initialize the memory of each particle
    while iter <= iter_max do
      Determine the VM size
      Compute the processing time using Eq. (1)
      Compute the transmission delay using Eq. (2)
      Compute the response time using Eq. (3)
      for i = 1 : C do
        Randomly get a particle j to follow i
        Define the awareness probability AP
        if r_j >= AP_{j,iter} then
          x_{i,iter+1} = x_{i,iter} + r_i * (m_{j,iter} - x_{i,iter})
        else
          x_{i,iter+1} = Y
        end if
      end for
    end while
    Evaluate the usefulness of the new fog
    Check the new position of each particle
  end for
End
The average PT of IPSOLW on the basis of CDC is also higher than that of PSO, because IPSOLW handles a larger number of incoming requests from the SHs' consumers. PSO also handles numerous requests from the consumer side and gives a nimble solution as compared to other algorithms; however, these solutions are sometimes not feasible for the consumers because of PSO's premature convergence. In this figure, the PT of PSO is 1.15% and 5.381% on the basis of CDC and ORT, respectively, while the PT of IPSOLW is 2.12% and 2.77% on the basis of CDC and ORT.
The processing time is calculated as:

PT = TR / VM_speed (1)

where TR is the total number of consumer requests and VM_speed is the speed of the VMs. The total transmission delay is computed as:

TD_total = l + TD (2)

where l is the latency and TD is the transmission delay. The response time is calculated as:

RT = (FT - AT) + TD (3)

where FT is the finish time, AT is the arrival time and TD is the transmission delay.
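One plausible reading of Eqs. (1)-(3), expressed directly in code; the printed equations are garbled in this copy, so the exact forms and the example numbers are assumptions:

```python
# Direct reading of the PT / delay / RT formulas; names follow the text
# (TR, VM_speed, FT, AT, latency l, TD). Forms are a reconstruction.

def processing_time(total_requests, vm_speed):
    return total_requests / vm_speed                  # Eq. (1): PT = TR / VM_speed

def total_delay(latency, transmission_delay):
    return latency + transmission_delay               # Eq. (2): TD_total = l + TD

def response_time(finish_time, arrival_time, transmission_delay):
    return (finish_time - arrival_time) + transmission_delay   # Eq. (3)

print(processing_time(1200, 100))      # 12.0
print(response_time(45.0, 20.0, 3.5))  # 28.5
```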
Fig. 3. Proposed scenario world division map
The costs of PSO and IPSOLW are shown in Fig. 5a and b, which represent the VM cost, Data Transfer (DT) cost and total cost of the implemented load balancing algorithms. Figure 5a clearly shows that the VM cost and DT cost of PSO are minimal under both service broker policies; the VM and DT costs of IPSOLW are likewise minimal under both policies (Fig. 5b). The VM cost of PSO is 1.11% and 1.15% on the basis of CDC and ORT, and the DT cost of PSO is 98.88% and 98.84%. The combined VM and DT cost of IPSOLW on the basis of CDC and ORT is 98.6% (Fig. 3).
Fig. 4. Average RT and PT: (a) average response time, (b) average processing time (in ms, per service broker policy)

Fig. 5. PSO and IPSOLW cost comparison: (a) PSO cost comparison, (b) IPSOLW cost comparison (cost in $, per service broker policy)
7 Conclusion
Considering the numbers of SHs applications that are running on numerous
things, as well as integration of cloud and fog based SSs such application are
processed at the edge of the network. In this paper, we bring down the size
of VMs to minimize the utilization of energy. To calculate the results of our
proposed scenario on the basis of six SSs on the basis of CDC and ORT, nature
inspired algorithm are implemented and proposed. The Proposed algorithm in
this work is IPSOLW, which is compared with implementing. In the end, this
thing is clearly concluded that proposed IPSOLW has 7.031% and 4.37% RT
on the basis of CDC and ORT. The PT of IPSOLW on the basis of CDC is
2.12% and ORT is 2.77%. However, IPSOLW has a maximum cost as compared
to PSO. In future we will consider bin packing problem with the different meta
heuristics algorithm.
References
1. Wang, Z., Hayat, M.M., Ghani, N., Shaban, K.B.: Optimizing cloud-service per-
formance: efficient resource provisioning via optimal workload allocation. IEEE
Trans. Parallel Distrib. Syst. 28(6), 1689–1702 (2017)
2. Anderson, D., Gkountouvas, T., Meng, M., Birman, K., Bose, A., Hauser, C.,
Litvinov, E., Luo, X., Zhang, F.: GridCloud: infrastructure for cloud-based wide
area monitoring of bulk electric power grids. IEEE Trans. Smart Grid, 1–10 (2018)
3. Wang, W., Zhou, F., Li, J.: Cloud-based parallel power flow calculation using
resilient distributed datasets and directed acyclic graph. J. Mod. Power Syst. Clean
Energy, 1–13 (2018)
4. Yu, L., Jiang, T., Zou, Y.: Fog-assisted operational cost reduction for cloud data
centers. IEEE Access 5, 13578–13586 (2017)
5. Buksh, R., Javaid, N., Fatima, I.: Towards efficient resource utilization exploiting
collaboration between HPF and 5G enabled energy management controllers in
smart homes. Sustainability 10(10), 3592 (2018). 3–24
6. Leontiou, N., Dechouniotis, D., Denazis, S., Papavassiliou, S.: A hierarchical control
framework of load balancing and resource allocation of cloud computing services.
Comput. Electr. Eng. 67, 235–251 (2018)
7. Zhao, H., Wang, J., Liu, F., Wang, Q., Zhang, W., Zheng, Q.: Power-aware and
performance-guaranteed virtual machine placement in the cloud. IEEE Trans. Par-
allel Distrib. Syst. 29(6), 1385–1400 (2018)
8. Xue, S., Zhang, Y., Xiaolong, X., Xing, G., Xiang, H., Ji, S.: QET: a QoS-
based energy-aware task scheduling method in cloud environment. Cluster Comput.
20(4), 3199–3212 (2017)
9. Mann, Z.Á.: Resource optimization across the cloud stack. IEEE Trans. Parallel Distrib. Syst. 29(1), 169–182 (2018)
10. Liao, D., Sun, G., Yang, G., Chang, V.: Energy-efficient virtual content distribution
network provisioning in cloud-based data centers. Future Gener. Comput. Syst. 83,
347–357 (2018)
11. Pau, M., Patti, E., Barbierato, L., Estebsari, A., Pons, E., Ponci, F., Monti, A.:
A cloud-based smart metering infrastructure for distribution grid services and
automation. Sustain. Energy Grids Netw. 15, 14–25 (2017)
12. Abdel-Basset, M., Abdle-Fatah, L., Sangaiah, A.K.: An improved L´evy based whale
optimization algorithm for bandwidth-efficient virtual machine placement in cloud
computing environment. Cluster Comput., 1–16 (2018)
13. Mirjalili, S., Saremi, S., Mirjalili, S.M., dos Coelho, L.: Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst. Appl. 47, 106–119 (2016)
14. Jensi, R., Wiselin Jiji, G.: An enhanced particle swarm optimization with levy flight for global optimization. Appl. Soft Comput. 43, 248–261 (2016)
... In recent years, the applications of Metaheuristic algorithms have been increasingly taken into consideration [3,[38][39][40][41][42]. Meta-heuristics are techniques that can be derived from different phenomena like physics, nature, and human behavior and are used for solving the optimization problems. There are different types of these methods like Genetic algorithm [43,44], particle swarm optimization [45,46], shark smell optimization [37,44,[47][48][49], quantum invasive weed optimization [42], world cup optimization algorithm [50], Moth search algorithm [51], Harris hawks optimization [52], Schoolbased optimization algorithm [53]. ...
Full-text available
Medical image enhancement is a principal category of the medical image processing which has a great impact on the final diagnosis results. In this paper, a new optimization technique has been presented for enhancing the contrast of the medical images. The main idea here is to propose an optimization problem by considering both global and local enhancement to achieve a strong image enhancement method. The other novelty here is to propose a new improved version of shark smell optimization algorithm to apply to the mentioned optimization problem for enhancing the algorithm convergence. Final results are analyzed based on five different measure indexes and are compared with five popular methods for illustrating the superiority of the presented technique.
Full-text available
Recently, there has been growing interest in social network analysis. Graph models for social network analysis are usually assumed to be a deterministic graph with fixed weights for its edges or nodes. As activities of users in online social networks are changed with time, however, this assumption is too restrictive because of uncertainty, unpredictability and the time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in loss of much of the information about the behavior of the network contained in its time-varying edge weights of network, such that is not an appropriate measure or sample for unveiling the important natural properties of the original network embedded in the varying edge weights. stochastic graphs, in which weights associated with the edges are random variables, can be a suitable model for complex social network. In this paper, according to the principle that Social networks are one of the cases where the distribution of links to nodes is according to the power law that we proposed Levy's initial flight automation sampling algorithm for random graphs, which is a good model for complex social networks. Using Levy Flight instead of gait-based learning that guarantees part of the solution is not separate from the present solution, therefore, it endores an optimizer tolerance, local optimal tolerance, and early convergence. In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. These algorithms ‘performance is evaluated based on the relative cost, Kendall correlation coefficient, Kolmogorov–Smirnov D statistics, and relative error.
With the increasing use of the Internet of Things (IoT) in various fields and the need to process and store huge volumes of generated data, Fog computing was introduced to complement Cloud computing services. Fog computing offers basic services at the network for supporting IoT applications with low response time requirements. However, Fogs are distributed, heterogeneous, and their resources are limited, therefore efficient distribution of IoT applications tasks in Fog nodes, in order to meet quality of service (QoS) and quality of experience (QoE) constraints is challenging. In this survey, at first, we have an overview of basic concepts of Fog computing, and then review the application placement problem in Fog computing with focus on Artificial intelligence (AI) techniques. We target three main objectives with considering a characteristics of AI-based methods in Fog application placement problem: (i) categorizing evolutionary algorithms, (ii) categorizing machine learning algorithms, and (iii) categorizing combinatorial algorithms into subcategories includes a combination of machine learning and heuristic, a combination of evolutionary and heuristic, and a combinations of evolutionary and machine learning. Then the security considerations of application placement have been reviewed. Finally, we provide a number of open questions and issues as future works.
This paper presents a new optimal design for the stability and control of the synchronous machine connected to an infinite bus. The model of the synchronous machine is 4th order linear Philips-Heffron synchronous machine. In this study, a PID controller is utilized for stability and its parameters have been achieved optimally by minimizing a fitness function to removes the unstable Eigen-values to the left-hand side of the imaginary axis. The considered parameters of the PID controller are optimized based on a new nature-inspired, called moth search algorithm. The proposed system is then compared with the particle swarm optimization.
Medical image fusion is a principal category of medical applications that has a great impact on the final diagnosis results. In this study, a hybrid optimization technique is presented for developing a highly efficient method for the fusion of medical images. The presented method uses the advantages of both the wavelet transform and the homomorphic filter to improve system efficiency. To achieve the optimal values of the system, a new optimization algorithm based on two recently introduced methods, the shark smell optimization algorithm and the world cup optimization algorithm, is introduced. The new algorithm is then applied to the wavelet part of the system to obtain the optimal values. Simulations are performed on two classes of five clinical images, including MR-CT, MR-SPECT, and MR-PET, and the results are compared with six popular methods. The final results show that the proposed system is more efficient than the studied methods.
The influence of Information and Communication Technology (ICT) on power systems necessitates a Smart Grid (SG) with monitoring and real-time control of electricity consumption. In an SG, huge numbers of requests are generated by smart homes in the residential sector. Thus, researchers have proposed cloud-based centralized and fog-based semi-centralized computing systems for such requests. The cloud, unlike the fog system, has virtually infinite computing resources; however, system delay in the cloud is a challenge for real-time applications. The prominent features of fog are location awareness, low latency, and wired and wireless connectivity. In this paper, the impact of the longer delay of the cloud on SG applications is addressed. We propose a cloud-fog based system for the efficient processing of requests coming from smart homes, their quick response, and ultimately reduced cost. Each smart home is provided with a 5G-based Home Energy Management Controller (HEMC). The 5G-HEMC then communicates with a High Performance Fog (HPF). The HPFs are capable of processing energy consumers' huge volumes of requests. Virtual Machines (VMs) are installed on the physical systems (HPFs) to serve the requests using the First Come First Serve (FCFS) and Ant Colony Optimization (ACO) algorithms, along with the Optimized Response Time Policy (ORTP) for selecting a potential HPF to process the requests efficiently with maximum resource utilization. The analysis shows that the size and number of virtual resources affect the performance of the computing system. In the proposed system model, microgrids are introduced in the vicinity of energy consumers for an uninterrupted and cost-optimized power supply. The impact of the number of VMs on the performance of HPFs is analysed through extensive simulations of three scenarios.
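The FCFS policy mentioned above can be sketched in a few lines. This is a simplified illustration under assumed semantics, not the paper's simulation setup: requests arrive in order and each is assigned to whichever VM becomes free earliest.

```python
def fcfs_schedule(requests, vms):
    """First Come First Serve: serve requests in arrival order; each
    request goes to the VM that becomes free earliest."""
    free_at = {vm: 0.0 for vm in vms}     # time each VM is next available
    plan = []
    for arrival, length in requests:      # (arrival_time, processing_time)
        vm = min(free_at, key=free_at.get)
        start = max(arrival, free_at[vm]) # may wait until the VM frees up
        free_at[vm] = start + length
        plan.append((arrival, vm, start, free_at[vm]))
    return plan

# Hypothetical workload on two HPF-hosted VMs.
plan = fcfs_schedule([(0, 4), (1, 2), (2, 3)], ["vm1", "vm2"])
```

ACO and ORTP, by contrast, would choose the VM/HPF using pheromone-weighted or response-time-optimized selection rather than simple earliest availability.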
With the integration of distributed generation and the construction of cross-regional long-distance power grids, power systems are becoming larger and more complex. They require faster computing speeds and better scalability for the power flow calculations that support unit dispatch. Based on an analysis of a variety of parallelization methods, this paper deploys a large-scale power flow calculation task on a cloud computing platform using resilient distributed datasets (RDDs). It optimizes a directed acyclic graph stored in the RDDs to overcome the low performance of the MapReduce model. The paper constructs and simulates a power flow calculation on a large-scale power system based on standard IEEE test data. Experiments are conducted on a Spark cluster deployed as a cloud computing platform. They show that the advantages of this method are not obvious at small scale, but that its performance is superior to the stand-alone model and the MapReduce model for large-scale calculations. In addition, running time is reduced when cluster nodes are added. Although not tested under practical conditions, this paper provides a new way of thinking about parallel power flow calculations in large-scale power systems.
Service providers must guarantee the Quality of Service (QoS) requirements of the co-hosted applications in a data center while simultaneously achieving optimal utilization of their infrastructure under varying workloads. This paper presents a hierarchical control framework that aims at balancing these conflicting objectives inside a data center. The local control level tackles the problems of resource allocation and admission control of virtual machines simultaneously, while the upper level jointly addresses the load balancing of incoming requests and the placement of virtual machines onto a cluster of physical servers. Numerical results show that the cooperation of the two control layers guarantees the satisfaction of the system's constraints and the user's requirements under fluctuations of incoming requests.
The consolidation of virtual machines (VMs) is a strategy for the efficient and intelligent use of cloud datacenter resources. One of the important subproblems of VM consolidation is the VM placement problem, whose main objective is to minimize the number of running physical machines, or hosts, in cloud datacenters. This paper focuses on solving the VM placement problem with respect to the available bandwidth, which is formulated as a variable-sized bin packing problem. Moreover, a new bandwidth allocation policy is developed and hybridized with an improved variant of the whale optimization algorithm (WOA) called the improved Lévy-based whale optimization algorithm. The CloudSim toolkit is used to test the validity of the proposed algorithm on 25 different randomly generated data sets, and the algorithm is compared with many optimization algorithms, including WOA, first fit, best fit, particle swarm optimization, the genetic algorithm, and intelligent tuned harmony search. The obtained results are analyzed with the Friedman test, which indicates the superiority of the proposed algorithm in minimizing the number of running physical machines.
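The Lévy-based improvement above rests on Lévy flights, which also underpin the IPSOLW technique of this chapter. A common way to draw one Lévy step is Mantegna's algorithm, sketched below; the coupling of the step into the WOA/PSO position update is an assumed illustration, not the paper's exact formula.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step via Mantegna's algorithm: heavy-tailed step
    sizes mix many small local moves with occasional long jumps."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)   # numerator ~ N(0, sigma^2)
    v = random.gauss(0, 1)       # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

# Assumed usage inside a Lévy-enhanced metaheuristic: perturb a candidate
# toward/away from the best-known position by a Lévy-distributed amount.
# new_pos = pos + alpha * levy_step() * (pos - best_pos)
```

The occasional long jumps help the search escape local optima, which is why Lévy walks are grafted onto WOA and PSO variants.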
Previous work on optimizing resource provisioning in virtualized environments focused either on mapping virtual machines (VMs) to physical machines (PMs) or mapping application components to VMs. In this paper, we argue that these two optimization problems influence each other significantly and in a highly non-trivial way. We define a sophisticated problem model for the joint optimization of the two mappings, taking into account sizing aspects, colocation constraints, license costs, and hardware affinity relations. As demonstrated by the empirical evaluation on a real-world workload trace, the combined optimization leads to significantly better overall results than considering the two problems in isolation.
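For context on the VM-to-PM half of this joint problem, the classic baseline is greedy bin packing. The sketch below is a generic first-fit-decreasing placement under a single capacity dimension; the paper's joint model additionally handles colocation constraints, license costs, and hardware affinities, which are omitted here.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Baseline VM-to-PM placement: sort VM demands in decreasing order
    and place each on the first PM with room, opening PMs as needed.
    Returns the placement map and the number of PMs used."""
    pms = []          # residual capacity of each opened PM
    placement = {}    # vm name -> PM index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(pms):
            if demand <= free:
                pms[i] -= demand
                placement[vm] = i
                break
        else:                       # no open PM fits: open a new one
            pms.append(pm_capacity - demand)
            placement[vm] = len(pms) - 1
    return placement, len(pms)

# Hypothetical normalized CPU demands; each PM has capacity 1.0.
placement, n_pms = first_fit_decreasing(
    {"vm_a": 0.6, "vm_b": 0.5, "vm_c": 0.4, "vm_d": 0.3}, 1.0)
```

Optimizing component-to-VM and VM-to-PM mappings jointly, as the paper argues, can beat running such a placement in isolation because VM sizes themselves become decision variables.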
The evolution of power systems towards the smart grid paradigm is strictly dependent on the modernization of distribution grids. To achieve this target, new infrastructures, technologies, and applications are increasingly required. This paper presents a smart metering infrastructure that unlocks a large set of possible services aimed at the automation and management of distribution grids. The proposed architecture is based on a cloud solution, which enables communication with the smart meters on one side and provides the needed interfaces to the distribution grid services on the other. While a large number of applications can be designed on top of the cloud, this paper focuses on a real-time distributed state estimation algorithm that enables the automatic reconfiguration of the grid. The paper presents the key role of the cloud solution in achieving scalability, interoperability, and flexibility, and in enabling the integration of different services for the automation of the distribution system. The distributed state estimation algorithm and the automatic network reconfiguration are presented as an example of the coordinated operation of different distribution grid services through the cloud.
Cloud-based content distribution networks (CDNs) consist of multiple servers that consume large amounts of energy. However, with the development of the cloud-based software defined network (SDN), a new paradigm of the virtual content distribution network (vCDN) has emerged. In an emerging cloud-based vCDN environment, the development and adjustment of vCDN components has become easier with the aid of SDN technology. This transformation provides the opportunity to use vCDNs to reduce energy consumption by adjusting the scale of the vCDN components. Energy costs can be reduced by deactivating the commercial servers carrying the software components of the vCDN, such as replica servers, firewalls, or routers. In addition, the CDN requires a high service level agreement (SLA) to respond to clients' requests, potentially consuming large amounts of energy. In this research, we focus on the energy savings of a CDN in a cloud computing environment while maintaining a high quality of service (QoS). We propose an approximate algorithm termed max flow forecast (MFF) to determine the number of software components in the vCDN. Additionally, we use a real traffic trace from a website to assess our algorithm. The experimental results show that MFF produces a larger energy reduction than the existing algorithms for an identical SLA, which justifies our approach as a good example for the emerging cloud.
Cloud service providers offer virtual machines (VMs) as services to users over the Internet. As VMs run on physical machines (PMs), PM power consumption needs to be considered. Meanwhile, VMs running on the same PM share physical resources, and the resulting resource contention causes VM performance degradation. Therefore, how to place VMs so as to reduce PM power consumption while guaranteeing VM performance remains a major challenge. Existing VM placement (VMP) approaches did not study VM performance degradation, so they could not guarantee VM performance. To solve the high power consumption and VM performance degradation problems, this paper explores the balance between saving PM power and guaranteeing VM performance, and proposes a power-aware and performance-guaranteed VMP (PPVMP). First, we investigate the relationship between power consumption and CPU utilization to build a non-linear power model, which informs the subsequent placement. Second, we construct VM performance models to capture the VM performance degradation trend. Third, based on these models, we formulate VMP as a bi-objective optimization problem that tries to minimize PM power consumption while guaranteeing VM performance, and propose an algorithm based on ant colony optimization to solve it. Finally, the results show the efficiency of our algorithm.
The continuing rollout of phasor measurement units (PMUs) enables wide area monitoring and control (WAMS/WACS), but the difficulty of sharing data in a secure, scalable, cost-effective, low-latency manner limits exploitation of this new capability by bulk electric power grid operators. GridCloud is an open-source platform for real time data acquisition and sharing across the jurisdictions that control a bulk interconnected grid. It leverages commercial cloud tools to reduce costs, employing cryptographic methods to protect sensitive data, and software-mediated redundancy to overcome failures. The system has been tested by ISO New England and the results reported here demonstrate a level of responsiveness, availability and security easily adequate for regional WAMS/WACS, with the capacity for nation-wide scalability in the future.
Currently, energy consumption in cloud data centers has attracted much attention from both industry and academia. Meanwhile, it is also important for cloud service providers to satisfy customers' quality of service (QoS). However, achieving energy savings while respecting QoS during task scheduling remains a challenge. In this paper, a QoS-based energy-aware task scheduling method for the cloud environment, named QET, is proposed to address this challenge. Technically, an energy consumption model based on QoS is proposed for the heterogeneous cloud environment, and a corresponding task scheduling method is designed to minimize energy consumption through QoS-aware PM selection. Comprehensive experimental analysis is conducted to evaluate the efficiency and effectiveness of the proposed method.
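The idea of QoS-aware PM selection can be sketched as follows. This is a hedged illustration under assumed models, not the QET method itself: QoS is proxied by a utilization cap, and a linear power model P(u) = p_idle + (p_max - p_idle)·u is assumed.

```python
def select_pm(task_load, pms, util_cap=0.8):
    """Among PMs that can absorb the task without exceeding a utilization
    cap (the QoS proxy), pick the one with the smallest power increase.
    Assumed linear power model: P(u) = p_idle + (p_max - p_idle) * u."""
    best, best_delta = None, float("inf")
    for name, (util, cap, p_idle, p_max) in pms.items():
        new_util = util + task_load / cap
        if new_util > util_cap:
            continue  # would violate the QoS constraint
        # power increase from taking this task on this PM
        delta = (p_max - p_idle) * (task_load / cap)
        if delta < best_delta:
            best, best_delta = name, delta
    return best  # None if no PM can host the task within the cap

pms = {  # name: (current utilization, capacity in MIPS, idle W, peak W)
    "pm1": (0.50, 2000, 100, 250),
    "pm2": (0.30, 4000, 120, 300),
}
chosen = select_pm(task_load=400, pms=pms)
```

Under these numbers the larger, less-loaded PM wins because the same task raises its utilization, and hence its power draw, by less.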