International Journal of Intelligent Systems and Applications in Engineering (IJISAE)
ISSN: 2147-6799 | www.ijisae.org
Original Research Paper
IJISAE, 2024, 12(11s), 469-490
Distributed Systems Meet Cloud Computing: A Review of Convergence
and Integration
Zainab Salih Ageed1,3, Subhi R. M. Zeebaree2
Submitted: 04/11/2023 Revised: 25/12/2023 Accepted: 03/01/2024
Abstract: Conventional cloud computing, in which processing, storage, and networking resources are hosted in one or a few centralised data centres, has become unsuitable under the stringent latency requirements of emerging applications. Additionally, the rapid expansion of networks has given rise to a trend known as network cloudification, in which network services are delivered according to cloud service models. The new distributed cloud model therefore represents a progression from the conventional centralised cloud computing model to globally distributed cloud computing services positioned according to the needs of the application. In this paper, we provide a comprehensive overview of distributed clouds. We first discuss the concept of distributed cloud computing. We then outline the architecture of the distributed cloud and the technologies linked with it. We also carry out a case study as part of our work. Finally, we tackle open research problems associated with distributed cloud computing by conducting a comprehensive review of twenty-one papers that cover a wide range of methodologies and approaches.
Keywords: stringent, cloudification, comprehensive, architecture, methodology
1. Introduction
In the rapidly evolving landscape of modern computing,
the convergence of two groundbreaking technologies,
distributed systems, and cloud computing, has ushered in
a new era of scalable, resilient, and efficient computing
paradigms. The synergy between distributed systems and
cloud computing has become a driving force behind the
architecture and infrastructure that power today's digital
world. This intersection brings forth a wealth of
opportunities and challenges, as traditional distributed
systems principles intertwine with the elastic and on-
demand nature of cloud resources. Many modeling
techniques have proliferated in the last few years due to
developments in network-based computing, such as cloud
computing, community networks, online stores, software
as a service, and many more [1]. Distributed systems,
characterized by the seamless coordination of multiple
interconnected nodes to achieve a common goal, find new
dimensions in the cloud environment. Cloud computing,
with its promise of ubiquitous access to a shared pool of
configurable resources, introduces novel ways to design,
deploy, and manage distributed systems. This
convergence not only transforms the way applications are
built and operated but also shapes the fabric of our
interconnected, data-driven society. With the use of
contemporary technology, a third-party "cloud provider"
can offer services to customers in a number of scenarios,
from any location, at any time. Virtualization and service
delivery platforms are used by cloud computing to meet
customer requirements and provide cloud resources.
With cloud computing, users can utilize the resources of a remote computer as an alternative to storing and retrieving data on their own machines. Clients need not be aware of the network's infrastructure, since they rely on cloud services instead of running workloads on their own hardware [2].
This exploration delves into the symbiotic relationship
between distributed systems and cloud computing,
uncovering the intricacies of their integration and the
transformative impact on scalability, fault tolerance, and
performance. From the intricacies of data distribution and
consistency to the challenges of managing resources
across a dynamic cloud infrastructure, the journey of these
two technological realms intertwines, offering a glimpse
into the future of computing architectures. With cloud computing, customers can use applications and access content from any connected device. In addition to central processing power, the distributed computing system known as "cloud computing" provides memory, hard drives, software, and other computing resources. It offers pay-per-use, on-demand products and services to customers [3, 4]. Cloud computing is comprised of three technologies: virtualization, on-demand computing, and data centers. Task distribution, which is essential, allows a cloud system to use its resources more efficiently [5].
1IT Dept., Technical College of Informatics-Akre, Akre University for
Applied Sciences, Duhok, Iraq, zainab.salih@auas.edu.krd
2Energy Eng. Dept., Technical College of Engineering, Duhok Polytechnic
University, Duhok, Iraq, subhi.rafeeq@dpu.edu.krd
3Computer Science Dept., College of Science, Nawroz University, Duhok, Iraq, zainab.ageed@nawroz.edu.krd
2. Background Theory
2.1 Distributed Systems
The essential components of a distributed system are shown in Figure 1. Distributed systems consist of multiple independent computers and components that are dispersed over multiple machines yet communicate with each other to function as a unified entity. This section introduces how distributed systems work, some real-world applications, basic architectures, advantages and disadvantages, and common solutions for real-time distributed streaming [6]. With the rise of the cloud, a new era of distributed computing has dawned: systems are no longer chained to the confines of physical servers, but stretch and flex across the cloud on demand. This convergence of distributed systems, with their strengths in parallel processing and fault tolerance, and cloud computing, with its agile provisioning of on-demand resources, unlocks a realm of possibilities for building scalable, flexible, and resilient applications. For example, Confluent, founded by the creators of Apache Kafka, is a comprehensive data streaming platform that connects over 120 data sources and is well suited to analytics, processing, and real-time data integration [7, 8]. Distributed computing is the study of distributed systems in computer science. A distributed system is one whose components, located on different networked computers, communicate and coordinate by exchanging messages. Distributed computing systems are becoming more and more popular as a result of improvements in computer network technology and declining hardware costs [9].
Fig (1): Key components of a distributed system [6]
The primary purposes of distributed computing include [10]:
• Resource sharing: data, software, or hardware is shared among nodes.
• Openness: the extent to which the software is designed to be extended and shared with others.
• Concurrency: the same function can be processed concurrently by numerous machines.
• Scalability: processing and computing power grow when spread across multiple machines.
• Fault tolerance: problems in different parts of the system can be identified and fixed quickly and easily.
• Transparency: a node can find and connect with other nodes in the system without exposing internal structure.
A minimal sketch of the message passing that underlies these properties follows this list.
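As a rough illustration of the message-exchange model these properties rest on, the following minimal Python sketch runs two "nodes" in a single process: one listens for a message and the other sends one. The address, port, and message contents are illustrative, not taken from the source.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9000          # illustrative local address

# Bind the listening "node" first so the sender cannot race ahead of it.
srv = socket.create_server((HOST, PORT))

def node_server():
    """A node that waits for one message and acknowledges it."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"ack: {request}".encode())

listener = threading.Thread(target=node_server)
listener.start()

# A second "node" sends a message and prints the reply.
with socket.create_connection((HOST, PORT)) as conn:
    conn.sendall(b"task-42 finished")
    print(conn.recv(1024).decode())     # -> ack: task-42 finished

listener.join()
srv.close()
```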
2.2 Cloud Computing
Imagine a world where the vast computing resources of the cloud seamlessly intertwine with the distributed intelligence of powerful systems. This is the intersection explored in this review: the transformative convergence of two technological giants. On one hand, distributed systems have revolutionized our ability to tackle complex problems by parceling them out across a network of interconnected devices. From managing massive online transactions to powering scientific simulations, distributed systems have become the backbone of modern computing. On the other hand, cloud computing has democratized access to computing power, offering on-demand scalability and flexibility, and has reshaped how businesses and individuals operate by providing a platform for innovation and agility. "Cloud computing" is a term that describes the Internet metaphorically. Its hallmarks include:
• Global applications: serving users across continents with imperceptible latency, powered by a geographically distributed network of cloud servers.
• Microservices: independent software components, each hosted in the cloud, collaborating seamlessly to deliver complex functionality.
• Elasticity: resources scaling up and down like the tide, adjusting effortlessly to demand, never burdened by idle infrastructure.
• Fault tolerance: system failures become mere blips, as redundancy woven into the cloud fabric automatically reroutes tasks and repairs the damage.
As Figure 2 shows, the Internet is usually represented in network diagrams as a cloud. The cloud emblem symbolizes "all that other stuff" that keeps the network going; it works like "etc." for the rest of the solution map. When a portion of the diagram or solution is the responsibility of someone else, why diagram it all out? This idea most likely gave rise to the term cloud computing [11, 12].
Fig (2): The Internet is represented by a cloud in network diagrams [13]
Although cloud computing offers many benefits to corporate executives and IT teams alike, concerns about security and uneven performance remain the most prevalent restrictions that still hinder cloud adoption. Through this alternative technology for the Internet, it is possible to obtain a web-based network, RAM, software resources, storage, and CPU [14]. Cloud computing uses hardware and software to provide a service across a network. With cloud computing, an on-demand service commonly delivered over the Internet, users can share resources, software, and other services and retrieve information at any time, paying only for what they use [15, 16]. A cloud can be used to represent the Internet as a whole. Using the cloud reduces the cost of development and operations. Additionally, the cloud provider bears the responsibility for maintaining and monitoring data stored in the cloud [15, 17].
a. Cloud Components
In a topologically simple sense, a cloud computing system consists of clients, distributed servers, and the datacenter. These components, as shown in Figure (3), constitute the three parts of a cloud computing solution. Each element serves a different purpose and helps to produce a functional cloud application [18].
Fig (3): Three components make up a cloud computing solution [19].
A cloud computing architecture includes all of the elements found in a typical, everyday local area network (LAN), including the clients. These are, in general, desktop computers; however, because of their portability, they may also be smartphones, tablets, PDAs, or laptops, all of which are important contributors to cloud computing. Clients, by whatever name you choose, are the devices that end users use to access and manage their cloud data. Typically, clients fall into three types [20, 21]:
• Mobile: mobile devices include PDAs or smartphones, such as a BlackBerry, Windows Mobile smartphone, or an iPhone.
• Thin: clients that have no internal hard drives; they let the server do all the work and then display the results.
• Thick: a regular PC that connects to the cloud using a web browser such as Firefox or Internet Explorer.
Thin clients are becoming increasingly popular because of their price and environmental impact. Benefits of using thin clients include [22]:
• Lower hardware costs: because they contain less hardware, thin clients are less expensive than thick clients, and they have a longer lifespan before they need to be upgraded or replaced.
• Lower IT costs: thin clients are managed at the server, so there are fewer points of failure.
• Security: because processing occurs on the server rather than on a local hard drive, there is less chance of malware infecting the device; thin clients are also less likely to be physically stolen because they require a server to function.
• Data security: because data is saved on the server, data loss from client crashes or theft is less likely.
• Less power consumption: thin clients use less electricity than thick clients, so the cost of powering them, and of air conditioning the office, is lower.
• Ease of repair or replacement: if a thin client fails, it is simple to replace; the user's desktop is restored to its pre-failure state by simply swapping the box.
• Less noise: without a rotating hard drive producing heat, the thin client can use quieter fans.
b. Datacenter
A datacenter (also spelled data center or data centre) is a facility that houses computer systems and associated components, primarily used for:
• Storing large amounts of data: think of it as a giant vault for digital information, from personal emails and online transactions to business records and scientific research data.
• Processing and distributing data: servers within the datacenter perform calculations, run applications, and deliver data to users across the internet or within internal networks.
• Providing access to shared resources: datacenters host applications and services accessible by many users, such as email platforms, streaming services, and cloud storage solutions.
The importance of datacenters continues to grow in today's data-driven world. They are the crucial backbone of the internet, powering everything from online communication and entertainment to essential business operations and scientific research. The datacenter is the collection of servers on which the program you are presently running is hosted; it may be a large basement space in your own building or a room full of servers on the other side of the world that you access online. Within IT, virtualizing servers is becoming more and more common: software can be installed that allows many virtual server instances to run, so that, for example, one physical server can host six virtual servers [23, 24].
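As a hedged sketch of this virtualization idea, the snippet below uses the libvirt Python bindings to list the virtual servers hosted on one physical machine. It assumes the libvirt-python package and a local QEMU/KVM hypervisor; the connection URI and setup vary by environment.

```python
import libvirt  # pip install libvirt-python; assumes a QEMU/KVM host

# Read-only connection to the local hypervisor on this physical server.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise RuntimeError("failed to connect to the hypervisor")

# Each domain is one virtual server hosted on this single physical machine.
for domain in conn.listAllDomains():
    state, _reason = domain.state()
    print(f"{domain.name()}: running={state == libvirt.VIR_DOMAIN_RUNNING}")

conn.close()
```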
c. Distributed Servers
It is not necessary to keep every server in one location; servers are often located across different geographic regions, yet to you, the cloud subscriber, these servers seem to be working almost next to each other. This gives the service provider more flexibility and security. One illustration is Amazon's global server network, which houses its cloud service [25, 26]. If something went wrong at one location, the service would still be accessible through another site. Furthermore, if the cloud requires more hardware, it can be expanded with more servers at a different location, doing away with the requirement to house them all in a single server room [27].
d. Infrastructure
There are several approaches to implementing the infrastructure; the ideal method depends on the particular application the cloud solution provider is building [28]. This is one of the key advantages of using the cloud. If your requirements are demanding, operating such servers in-house can be far more expensive than you would like; alternatively, you might need only a small amount of processing power, in which case buying and maintaining a dedicated server would be unnecessary. The cloud satisfies both requirements [29].
2.3 Grid Computing
Grid computing and cloud computing are often confused, despite their stark differences. Grid computing uses the resources of several computers linked by a network to work on a single task at once, usually to address a scientific or technological problem [30]. Grid computing is appealing for several reasons:
• It can solve problems requiring a great deal of processing power.
• It is an affordable way to employ a given amount of computing resources.
• It allows multiple computers' resources to be shared cooperatively, without one machine directing the others.
What, then, is the relationship between grid computing and cloud computing? There is not much of one, as they function in essentially different ways. Grid computing divides a large project over numerous computers in order to pool their resources [31, 32]. Quite the reverse is true with cloud computing, which makes it possible to run several smaller applications simultaneously [33]. A sketch of the grid-style division of work appears below.
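The following minimal Python sketch illustrates the grid-style division of one large job across several workers. It runs locally with only the standard library, with four processes standing in for grid nodes; the workload is invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """The slice of the large job handled by one 'grid node'."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))               # the single large task
    chunks = [data[i::4] for i in range(4)]     # divide the project four ways
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial results
    print(total)
```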
In cloud computing, "services" refers to the idea of using fine-grained, reusable components across a vendor's network, commonly expressed with the suffix "as a service." Offerings carrying this suffix share the following characteristics [34]:
• Device independence, allowing users to access the systems from different hardware;
• Multitenancy, allowing resources to be shared by several users;
• Low entry hurdles, making them accessible to small enterprises.
a. Software as a Service
Software as a Service (SaaS) allows users to access applications hosted as services over the Internet [35]. The customer is relieved of responsibility for software maintenance and support when it is hosted off-site, as shown in Figure 4; however, the consumer has no control over changes the hosting service decides to make. The objective is to use the software just as it comes out of the box, without extensive customization or system integration. The supplier maintains the infrastructure and handles all patching and upgrades [36, 37].
Fig (4): SaaS [38].
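To make the consumption model concrete, the sketch below shows how a client might call a hypothetical SaaS HTTP API. The URL, endpoint, and bearer-token scheme are invented for illustration; real providers define their own.

```python
import json
import urllib.request

# Hypothetical SaaS endpoint and token; real services differ in URL and auth.
url = "https://api.example-saas.com/v1/reports?format=json"
req = urllib.request.Request(url, headers={"Authorization": "Bearer <token>"})

with urllib.request.urlopen(req) as resp:   # the provider runs the software
    report = json.load(resp)                # the client only consumes results
print(report)
```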
b. Platform as a Service
Platform as a Service (PaaS) is a type of cloud computing service model that provides everything developers need to build, run, and manage applications without getting bogged down in managing the underlying infrastructure. Think of it as a pre-built workspace with all the tools and resources laid out so that developers can focus on building applications. PaaS is an application delivery paradigm that follows directly from Software as a Service (SaaS). Without the need to download or install software, PaaS provides all the tools needed to develop applications and services entirely over the Internet [39]. Application development, testing, hosting, and deployment are all included in PaaS services [40]. Figure 5 shows the structure of PaaS.
Fig (5): PaaS [41].
Additional capabilities include storage, versioning, security, scalability, web service integration, database integration, and team communication. A drawback for PaaS users is the lack of provider portability and interoperability [42-44]: if you develop an application with one cloud provider and then wish to switch to another, you might not be able to do so, or only at a steep price. Moreover, your applications and data may be lost if the supplier goes out of business [45]. Based primarily on HTML or JavaScript, PaaS typically provides some assistance for building user interfaces [46]. Since multiple users are expected to use an application concurrently, PaaS is designed for that kind of workload and usually provides security features, scalability, failover, and automatic concurrency management. Furthermore, PaaS supports web service interfaces such as the Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) that make it possible to compose a variety of online services, often known as mashups. These interfaces can also reuse services from within a private network and access archives [47].
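As a hedged sketch of the REST style such platforms commonly expose, the following minimal service uses Flask (one common Python choice, not implied by the source) to publish a JSON endpoint that other services could combine into a mashup:

```python
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

# A minimal REST resource; a mashup could combine it with other services' APIs.
@app.route("/status/<service>", methods=["GET"])
def status(service):
    return jsonify({"service": service, "state": "up"})

if __name__ == "__main__":
    app.run(port=8080)  # e.g. GET http://localhost:8080/status/billing
```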
c. Hardware as a Service
Hardware as a Service (HaaS) is a business model in which companies pay for the usage of hardware (often IT equipment) instead of purchasing it outright; think of it as leasing hardware for a subscription fee that covers maintenance and updates. Benefits include lower upfront costs, scalability, and easier management. HaaS is considered cloud computing's next generation of services. Customers can access applications through SaaS and PaaS, but not through HaaS [48], as shown in Figure 6: all it provides is the hardware, so your company can install whatever it needs on it [8, 49]. The HaaS model can be a cost-effective way for small or mid-sized businesses to provide employees with state-of-the-art hardware [50].
Fig (6): HaaS [51].
3. Literature Review
In 2023, M. S. Al Reshan et al. [52] suggested that load balancing in cloud computing may be accomplished through Swarm Intelligence (SI). Numerous alternatives, including genetic algorithms, ACO, PSO, BAT, and GWO, are investigated in the published literature; however, none of them takes into account the convergence time of load balancing in the context of global optimisation. Grey Wolf Optimisation (GWO) and Particle Swarm Optimisation (PSO) are the two algorithms especially investigated in that study, which introduces a unique GWO-PSO strategy combining the benefits of global optimisation with fast convergence. The combination of these two methodologies improves both system performance and resource allocation, allowing the load-balancing issue to be resolved.
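As a simplified, illustrative stand-in for such swarm-based load balancing (not the authors' GWO-PSO implementation), the sketch below encodes a task-to-VM assignment as a candidate solution, scores it by makespan, and improves it with a toy search loop; task lengths and VM speeds are invented.

```python
import random

TASKS = [400, 800, 150, 1600, 230, 420]   # invented task lengths (MI)
VM_SPEEDS = [100, 200, 300]               # invented VM speeds (MIPS)

def makespan(assignment):
    """Fitness: finish time of the busiest VM (lower is better)."""
    load = [0.0] * len(VM_SPEEDS)
    for task, vm in zip(TASKS, assignment):
        load[vm] += task / VM_SPEEDS[vm]
    return max(load)

# Toy search loop standing in for the swarm's position updates.
best = [random.randrange(len(VM_SPEEDS)) for _ in TASKS]
for _ in range(1000):
    candidate = list(best)
    candidate[random.randrange(len(TASKS))] = random.randrange(len(VM_SPEEDS))
    if makespan(candidate) < makespan(best):
        best = candidate

print("assignment:", best, "makespan:", round(makespan(best), 2))
```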
A. N. Malti et al. [53] proposed a unique hybrid optimisation technique to address the challenge of multi-objective task scheduling in heterogeneous infrastructure-as-a-service (IaaS) cloud systems. The approach is built on the search exploration capabilities of the grey wolf optimizer algorithm and the pollination behaviour of flowers. Four optimisation criteria are used to assess the recommended approach: makespan, resource utilisation, degree of imbalance, and throughput. The evaluation, developed in the CloudSim framework, includes several test-bed scenarios with synthetic and traditional workload traces. The suggested method was compared against several well-established optimisation-based scheduling strategies from the existing body of research, including TSMGWO, GGWO, LPGWO, and FPA.
Also in the same year, K. Malathi and K. Priyadarsini [54] investigated the opportunities presented by heuristic approaches while building a load-balancer algorithm for cloud computing. Two significant enhancements to load-balancing methods are shown. The hybrid approach achieves better applicability as well as remarkable performance in terms of turnaround time and resource usage on virtual machines. A noteworthy accomplishment is the development of the Lion Optimizer, which enables load balancing by optimising the selection of virtual machine characteristics. Two probabilities are introduced to improve the selection process: the probability of picking a virtual machine and the probability of scheduling a task. The Lion Optimizer uses fitness criteria that depend on the features of both the task and the virtual machine.
C. Chandrashekar et al. [55] aimed to address the scheduling problem using an improved meta-heuristic method known as the Hybrid Weighted Ant Colony Optimisation (HWACO) algorithm, a more advanced version of the established Ant Colony Optimisation method. The proposal outlines an optimal system for scheduling task assignments. The algorithm is compared with other algorithms currently in use on characteristics such as efficiency, makespan, and cost. Because the goal of fast convergence was effectively attained, the recommended HWACO yields further advantages in the obtained findings. The projected model surpassed several traditional algorithms, including Ant Colony Optimisation (ACO), the Quantum-Based Avian Navigation Optimizer Algorithm (QANA), Modified-Transfer-Function-Based Binary Particle Swarm Optimisation (MTF-BPSO), the MIN-MIN Algorithm (MM), and First-Come-First-Serve (FCFS).
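The sketch below illustrates the plain ACO idea that HWACO builds on, not HWACO itself: ants assign tasks to VMs guided by pheromone trails, which evaporate and are then reinforced along the best assignment found. All task and VM figures are invented.

```python
import random

N_TASKS, N_VMS = 6, 3
random.seed(1)
EXEC = [[random.uniform(1, 10) for _ in range(N_VMS)] for _ in range(N_TASKS)]
pheromone = [[1.0] * N_VMS for _ in range(N_TASKS)]
RHO, Q = 0.1, 1.0  # evaporation rate and pheromone deposit constant

def makespan(plan):
    load = [0.0] * N_VMS
    for t, v in enumerate(plan):
        load[v] += EXEC[t][v]
    return max(load)

def build_plan():
    """One ant: pick a VM per task, biased by pheromone and execution speed."""
    return [random.choices(range(N_VMS),
                           weights=[pheromone[t][v] / EXEC[t][v]
                                    for v in range(N_VMS)])[0]
            for t in range(N_TASKS)]

best = build_plan()
for _ in range(100):                       # colony iterations
    ant_best = min((build_plan() for _ in range(10)), key=makespan)
    if makespan(ant_best) < makespan(best):
        best = ant_best
    for t in range(N_TASKS):               # evaporate all trails...
        for v in range(N_VMS):
            pheromone[t][v] *= 1 - RHO
    for t, v in enumerate(best):           # ...then reinforce the best plan
        pheromone[t][v] += Q / makespan(best)

print("best plan:", best, "makespan:", round(makespan(best), 2))
```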
P. Pirozmand et al. [56] used a multi-adaptive learning mechanism to cut down the time required by the original Particle Swarm Optimisation (PSO) method for cloud computing task scheduling. A number of approaches to the job-scheduling problem have been proposed to date; the Improved Particle Swarm Optimisation (IPSO) approach is presented as a potential solution to the problem described above. The proposed Multi-Adaptive Learning for Particle Swarm Optimisation (MALPSO) algorithm defines two categories of particles: ordinary particles and locally best particles. During the search, the population's diversity decreases, which increases the possibility of converging to a local optimum. Four criteria are used in the study to evaluate the suggested technique against a number of other algorithms: makespan, load balancing, stability, and efficiency.
In 2022, S. Duan et al. [57] explored the broad variety of commonly utilised computing paradigms, emphasised the advantages of the end-edge-cloud computing (EECC) paradigm for supporting distributed artificial intelligence, and discussed the underlying technologies employed in distributed AI. They then constructed a comprehensive classification of the state-of-the-art optimisation techniques made available by EECC to enhance distributed training and inference in different ways. Following that, the authors describe the security and privacy flaws present in the DAI-EECC architecture and evaluate the benefits and drawbacks of each protection strategy with respect to the revealed risks. Finally, they outline a number of unresolved research challenges, investigate the complications connected with immersive performance capture, and shed light on the fascinating applications that DAI-EECC may make possible.
In the same year, M. S. Al-Abiad et al. [58] researched resource allocation strategies with the goal of lowering the energy used by distributed learning (DL) in Internet of Things (IoT) networks supported by integrated fog-cloud computing. To create a connection between IoT devices and the cloud server (CS), the proposed architecture uses a number of fog access points (F-APs), which are responsible for training local models using data acquired from IoT devices. For the purpose of updating the model parameters, the F-APs work in conjunction with the CS.
In 2022, Y. Wang and J. Zhao [59] investigated the usage of mobile edge computing (MEC) within the Metaverse and explained the motivations for incorporating MEC into it. The 6G-with-MEC paradigm and MEC enhanced by blockchain are two of the many technological fusions garnering attention, and such fusions are compared with cloud computing. Furthermore, the merger of MEC with other emerging technologies, such as the Metaverse, 6G wireless communications, artificial intelligence (AI), and blockchain, addresses difficulties concerning the allocation of network resources, the rise of network traffic, and latency demands.
N. Bhalaji [60] suggested the Water Wave Algorithm (WWA) for resource allocation in cloud-based applications. Compared with other resource-scheduling algorithms, the proposed WWA approach displayed improved performance in terms of response time, turnaround time, and cloudlet migration time. Against the First-Come, First-Served (FCFS) baseline, WWA achieved a reduction in completion time, and response time, migration performance, and turnaround time all showed considerable improvements over other methods. The features of service parameters bear a considerable relationship to the stability and flexibility of distributed cloud computing. Future study may improve the approach of determining the optimum virtual machine through water-wave-enhanced computation and the consideration of multiple quality-of-service (QoS) characteristics.
R. Gulbaz et al. [61] developed the Balancer Genetic Algorithm (BGA), a novel load-balancing scheduler intended to enhance both makespan and load balancing. Insufficient load balancing can result in poor resource usage, with available resources left idle. BGA's load-balancing method takes into account the real load, quantified in terms of the millions of instructions assigned to virtual machines (VMs). In addition, multi-objective optimisation is employed to improve both makespan and load balancing. In comparison to certain state-of-the-art technologies, BGA has shown considerable gains in terms of makespan, throughput, and load balancing.
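A minimal genetic-algorithm sketch of this style of load-balancing scheduler follows; the chromosome encoding, fitness, and parameters are invented for illustration and are far simpler than BGA itself.

```python
import random

TASK_MI = [300, 700, 200, 900, 500, 600, 400, 800]  # invented task sizes (MI)
N_VMS, POP_SIZE, GENERATIONS = 3, 20, 200

def fitness(chrom):
    """Negative of the heaviest VM load: higher means better balanced."""
    load = [0] * N_VMS
    for mi, vm in zip(TASK_MI, chrom):
        load[vm] += mi
    return -max(load)

pop = [[random.randrange(N_VMS) for _ in TASK_MI] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]                 # selection: fitter half survives
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TASK_MI))    # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                  # mutation: move one task
            child[random.randrange(len(TASK_MI))] = random.randrange(N_VMS)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("assignment:", best, "heaviest VM load:", -fitness(best))
```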
H. S. Alatawi and S. A. Sharaf [62] accomplished work scheduling and cloud load balancing via a hybrid technique that combines fuzzy logic with the benefits of the honeybee behaviour algorithm, designed expressly to improve on previously used approaches. In addition to including power consumption and other quality-of-service elements, the design follows the artificial bee colony (ABC) scheme, which makes it possible to measure precisely the power used by virtual machines (VMs) on the host and thereby guarantees an effective load-balancing algorithm. The study determines the energy consumption of VMs while taking key quality-of-service (QoS) elements into consideration, allowing the selection of the host and virtual machine best suited to the task at hand. CloudSim was used to simulate the ILBA_HB algorithm, which is evaluated against the LBA_HB and HBB-LB algorithms in terms of average response time, makespan, and degree of imbalance. According to the data, the recommended algorithm displayed greater performance than both the LBA_HB and HBB-LB approaches.
D. Lindsay et al. [63] analysed and investigated the key factors that have influenced and driven the advancement of distributed system paradigms, beginning with the initial mainframe computers and the establishment of the worldwide internetwork and concluding with present-day systems such as edge computing, fog computing, and the Internet of Things. The investigation shows significant shifts in the fundamental assumptions surrounding distributed systems, including: (1) an increasing fragmentation of paradigms driven by business considerations and the limitations imposed by the end of Moore's law; (2) a transition from generalised architectures and frameworks to more specialised ones; and (3) a shifting balance between centralisation and decentralisation in the coordination of each paradigm's architecture.
A. M. Senthil Kumar et al. [64] proposed a method that maximises the success of a task while limiting the time required to finish it. The goal of the presented approach is to increase the speed of Grey Wolf Optimisation through the use of Particle Swarm Optimisation (PSO). Optimisation techniques may be used to solve non-deterministic polynomial (NP) hard problems such as job scheduling. The research presents an innovative hybrid approach that combines the particle swarm optimisation (PSO) algorithm with the grey wolf optimisation (GWO) algorithm. In a cloud environment, user task scheduling is of the utmost importance: to distribute resources efficiently and improve the Quality of Service (QoS) outcomes for user tasks, an efficient task-scheduling strategy is necessary.
S. Ouhame and Y. Hadi [65] considered the four primary scheduling criteria that make up the resource allocation system of virtual machines (VMs) for cloud computing: energy consumption, data processing speed (throughput), network dependability, and average network response time. The presented technique tries to enhance these four essential scheduling criteria. The GWO approach is reinforced in three major components, the first improvement being made to the local search part of the algorithm. Specifically, the hybrid strategy combines the Grey Wolf Optimisation (GWO) method with the Artificial Bee Colony (ABC) algorithm, and while the ABC algorithm's local search strategy is applied, an additional improvement is made to both the fitness function and the energy parameter.
G. Muthsamy and S. Ravi Chandran [66] presented plans for assigning jobs to the appropriate specified virtual machines. A scheduling problem in distributed systems such as cloud data centres is regarded as NP-complete on the basis of its complexity. By uniformly distributing the workload among virtual machines (VMs), an effective scheduling approach maximises the utilisation of available resources without sacrificing efficiency. There is therefore a need for an innovative scheduling system capable of distributing workloads successfully while taking into consideration a number of quality-of-service (QoS) measures, including makespan, response time, execution time, and task priorities. Task scheduling using artificial bee foraging (TSABF) optimisation is recommended to achieve the most effective schedule for assigning work to VMs under the criteria mentioned above.
J.-q. Li and Y.-q. Han [67] investigated a hybrid discrete artificial bee colony (ABC) strategy to solve the problem of flexible task scheduling in a cloud computing environment. Initially, the issue is formulated as a hybrid flowshop scheduling (HFS) problem, and both single and multiple objectives are considered. In the multi-objective HFS problem, three objectives are evaluated simultaneously: reducing the greatest time required to complete the task (makespan), limiting the maximum load placed on any device, and minimising the total amount of work performed by all of the devices. HFS is broken down into two categories: HFS with machines that are not linked to one another, and HFS with identical machines in parallel. As in the traditional ABC technique, three types of artificial bees are included in the proposed algorithm: the employed bee, the onlooker bee, and the scout bee. Each solution is represented by a string of integers, and several types of perturbation structures are investigated to enhance the search capabilities and address the peculiarities of the problem.
R. Agarwal et al. [68] proposed an approach that takes the MakeSpan parameters into account as a solution to the difficulties linked with the metaheuristic procedures currently in use. The presented solution uses a mutation-based Particle Swarm algorithm to distribute the load evenly across all of the data centres. A user's degree of demand determines the amount they pay for resources, and a cloud provider must overcome a number of obstacles. Load balancing presents several key issues, including a reduced pace of convergence, premature convergence, an initial random selection of solutions, and the possibility of being trapped in a locally optimal solution.
L. Xingjun et al. [69] presented the well-known grey wolf optimisation technique as a novel way to reduce response time. They concluded that if all tasks take the same amount of time to complete, the response time should likewise be decreased. The present load is used to determine the state of the virtual machines: tasks are assigned to the appropriate virtual machine based on a shortest-distance criterion, and are removed from the machine currently experiencing the greatest stress, depending on the status of the virtual machine.
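Stripped of the metaheuristic, the underlying load-aware placement rule can be illustrated with a simple greedy baseline that always sends the next task to the least-loaded VM (a hedged toy version, not the GWO scheduler itself):

```python
import heapq

N_VMS = 3
TASK_LENGTHS = [5, 3, 8, 2, 7, 4, 6]    # invented, equal-priority tasks

# Min-heap of (current_load, vm_id): the root is always the least-loaded VM.
heap = [(0, vm) for vm in range(N_VMS)]
heapq.heapify(heap)

placement = {}
for task_id, length in enumerate(TASK_LENGTHS):
    load, vm = heapq.heappop(heap)       # least-loaded VM by current load
    placement[task_id] = vm
    heapq.heappush(heap, (load + length, vm))

print(placement)  # task -> VM, keeping loads as even as the greedy rule allows
```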
A. Saadat and E. Masehian [70] proposed a hybrid intelligent load-balancing approach: a fuzzy logic module provides the objective function that identifies the busy statuses of servers by taking their RAM and CPU task queues into consideration, and a genetic algorithm module organises the assignment of work. The fuzzy input variables, which are inherently imprecise, include the degree of satisfaction provided by the service as well as the service's start and end times; the fuzzy output represents service availability, which is likewise inexact. Because of the enormous state space involved, task-scheduling algorithms can be highly effective for load balancing.
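A toy version of the fuzzy part of such a scheme is sketched below: "high CPU" and "high RAM" are modelled with ramp membership functions and combined with min as the fuzzy AND, yielding a busy score in [0, 1]. The thresholds are invented, and this is far simpler than the authors' module.

```python
def ramp(x, low, high):
    """Membership in a 'high' fuzzy set: 0 below low, 1 above high, linear between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def busy_status(cpu_pct, ram_pct):
    # min() acts as the fuzzy AND: a server is busy insofar as CPU *and* RAM are high.
    return min(ramp(cpu_pct, 40, 80), ramp(ram_pct, 40, 80))

print(busy_status(cpu_pct=75, ram_pct=60))  # -> 0.5
```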
A. Ragmani et al. [71] proposed an approach that takes the response time of the cloud into account and seeks to accomplish load balancing. Because the overall performance of the ACO algorithm is strongly tied to the values supplied to its parameters, a Taguchi experimental design is used to determine the optimal ACO parameter values, and a fuzzy module is implemented to assess the pheromone value. The proposed method is intended to cut computing time and, through an evaporation process derived from the pheromone model, to avoid premature convergence towards unsuitable solutions. Simulations carried out on the CloudAnalyst platform showed that the recommended approach has the potential to improve load balancing in the cloud architecture, reducing response time by as much as 82%, processing time by as much as 90%, and total cost by as much as 9%, depending on the particular case. The authors further extend the technique to enhance the formulation of pheromones and the operation of the algorithms within a practical framework.
L. Shen et al. [72] proposed an Artificial Bee Colony (ABC) optimisation approach that uses a load-balance technique to improve overall load-balancing performance and adaptability. The ABC method is improved by exploiting the sophisticated capabilities of smart grid cloud sources to group virtual machines (VMs) together effectively. The load-balancing algorithm of the cloud data centre is essential to ensuring that resources are used in the most efficient manner and that energy is managed efficiently; the approaches currently available for load balancing in the cloud give priority to the needs of particular systems or applications and are not scalable.
4. Comparison Among Reviewed Works
The twenty-one reviewed publications and the algorithms they cover are compared in Table 1 below:
Table (1): Comparison among reviewed works.
(Each entry lists the entry number, the reference and year, the techniques and algorithms used, and the reported results.)
1. [52] (2023)
Techniques and Algorithms: Swarm Intelligence (SI) is proposed as a means of distributing the burden in cloud computing to achieve load balancing. Many approaches have been investigated in the literature, including genetic algorithms, ACO, PSO, BAT, and GWO, but none of them considers the time load balancing takes to converge to a global optimum.
Results: The method attains fast, globally optimised convergence while cutting overall response time. Compared with alternative algorithms, the suggested technique's overall response time is, on average, 12% faster; additionally, the best optimal value found by the proposed GWO-PSO objective function improves PSO convergence to 97.253%.
2. [53] (2023)
Techniques and Algorithms: A hybrid optimisation approach suited to the specific conditions is recommended to solve the multi-objective task-scheduling problem in heterogeneous IaaS cloud environments. The method draws on the search exploration skills of the grey wolf optimizer algorithm together with the pollination behaviour of flowers, giving it very comprehensive coverage.
Results: The data acquired demonstrate the advantages of the newly presented hybrid algorithm in contrast to the optimisation-based scheduling methods often discussed in the literature, including the FPA, TSMGWO, GGWO, and LPGWO methods.
3. [54] (2023)
Techniques and Algorithms: A load-balancer technique for cloud computing is developed by analysing the benefits of heuristic methods. The selection process is enhanced by taking into account the probability of scheduling a task and the probability of selecting a virtual machine; the second contribution is a genetic algorithm whose global search criteria are altered to align with the lion optimizer.
Results: The findings of a number of studies indicate that the hybrid genetic algorithm founded on lion genetics yields beneficial results.
4. [55] (2023)
Techniques and Algorithms: The Hybrid Weighted Ant Colony Optimisation (HWACO) approach, an upgraded meta-heuristic strategy devised and refined during this inquiry, is introduced to address the scheduling issue. The HWACO algorithm is analysed and contrasted with a number of existing algorithms with regard to efficiency, makespan, and cost.
Results: The proposed algorithm is regarded as the most efficient method for task scheduling, displaying superior performance compared with conventional algorithms such as the MIN-MIN Algorithm (MM), Ant Colony Optimisation (ACO), the Quantum-Based Avian Navigation Optimizer Algorithm (QANA), Modified-Transfer-Function-Based Binary Particle Swarm Optimisation (MTF-BPSO), and First-Come-First-Serve (FCFS).
5. [56] (2023)
Techniques and Algorithms: A fresh Particle Swarm Optimisation variant, Improved Particle Swarm Optimisation (IPSO), is presented. A multi-adaptive learning strategy is used for task scheduling in the cloud computing environment to cut down the execution time of the original PSO method.
Results: The solution was developed on the CEC 2017 benchmark. The presented strategy not only has the potential to provide optimal outcomes for the majority of the criteria, but also solves the problem faster than currently known methods.
6. [57] (2022)
Techniques and Algorithms: Distributed artificial intelligence (DAI) is enhanced by incorporating end-edge-cloud computing (EECC) to satisfy the multiple demands of resource-intensive, distributed AI computation. This kind of computing efficiently integrates the capabilities of on-device computing, edge computing, and cloud computing.
Results: The work explains distributed artificial intelligence (DAI) enabled by end-edge-cloud computing (EECC), focusing on how the many requirements of resource-intensive and distributed AI computation may be met by effectively coordinating the diverse capabilities of cloud computing, edge computing, and on-device computing.
7. [58] (2022)
Techniques and Algorithms: A joint optimisation problem is constructed covering the scheduling of Internet of Things devices to F-APs, the allocation of transmit power, and the allocation of computing frequency at the F-APs. The problem is then split into two subproblems according to the nature of the difficulties.
Results: In IoT networks enabled by integrated fog-cloud computing, it is essential to cut the energy used by distributed learning (DL). The work determines how to maximise the energy efficiency of F-APs under constraints on computing and transmission time, building a conflict-graph-based technique to solve the two subproblems iteratively. Numerical data show that the recommended strategy is superior in energy efficiency to the available alternatives.
8. [59] (2022)
Techniques and Algorithms: Offers a study of the computational frameworks used to fulfil the severe criteria imposed by current applications, giving an overview of the application scenarios suggested for Mobile Edge Computing (MEC) in Mobile Augmented Reality (MAR).
Results: The integration of MEC with cutting-edge technologies such as the Metaverse, 6G wireless communications, artificial intelligence (AI), and blockchain helps address the challenges associated with the distribution of network resources, the increase in network traffic, and the requirements for latency.
9. [60] (2022)
Techniques and Algorithms: The Water Wave Algorithm (WWA) is suggested for cloud computing with the aim of controlling the distribution of resources in an effective way.
Results: A comparative study found the WWA algorithm better than the FCFS, MCT, MET, and OLB algorithms in terms of throughput, response time, turnaround time, migration time, resource utilisation, fault tolerance, and scalability.
10. [61] (2021)
Techniques and Algorithms: An innovative load-balancing scheduler, the Balancer Genetic Algorithm (BGA), aims to improve both the makespan and the load balancing of the system.
Results: The BGA approach demonstrated a substantial increase in efficacy compared with other advanced methodologies in terms of makespan, throughput, and load balancing.
11. [62] (2021)
Techniques and Algorithms: By combining the major quality-of-service characteristics with power consumption, the ABC-based design precisely analyses the power used by virtual machines (VMs) on the host, ensuring an efficient load-balancing system. The study evaluates VM power consumption while taking significant quality-of-service (QoS) metrics into account, which allows selection of the host and virtual machine appropriate for the task at hand.
Results: The simulation data show that, compared with LBA_HB and HBB-LB, the recommended technique improves average reaction time, makespan, and degree of imbalance. The study also showed a significant positive link between energy usage and cost, while highlighting a distinct lack of connection between the amount of energy consumed and the data-processing time.
12. [63] (2021)
Techniques and Algorithms: The assumptions that have historically underpinned the creation of distributed systems are changing substantially. One change is a rapid fragmentation of paradigms, driven by economic interests and the physical restrictions arising from the end of Moore's law; a further development is a movement away from generalist designs and frameworks towards greater specialisation.
Results: Alongside the shift from generalist architectures and frameworks towards more specialised, paradigm-specific ones, there has been a corresponding movement in coordination between centralisation and decentralisation, directly linked to the first shift.
13. [64] (2021)
Techniques and Algorithms: A unique hybrid approach is created by merging the grey wolf optimisation (GWO) technique with the particle swarm optimisation (PSO) algorithm; the combination of these two methods is what produced the strategy.
Results: The performance of the proposed approach is evaluated and analysed against the GA and GWO algorithms. The resulting GWO-PSO algorithm produces superior results in response time and makespan over the whole of its execution compared with the outcomes of the GWO and GA algorithms.
14.
[65]
2020
A hybrid approach, combining the Grey Wolf Optimisation (GWO) and Artificial Bee Colony (ABC) algorithms, is used to improve the VM allocation system. The GWO approach is broken down into three primary components, with the initial improvement made in the local-search portion of the technique.
The results of the suggested approach are compared with those of the ABC, GWO, and RAA algorithms. The comparisons indicate that the presented approach improved both the accuracy and the efficiency of the resource-allocation system for virtual machines (VMs) in cloud computing by 1.25 percent.
15.
[66]
2020
A one-of-a-kind scheduling paradigm has been established to provide load balancing while simultaneously taking into account a broad range of quality-of-service (QoS) metrics, including makespan, reaction time, execution time, and task priority. With this in mind, task scheduling using artificial bee foraging (TSABF) optimisation is recommended in order to achieve an optimal allocation of work to virtual machines (VMs).
The ideal timetable is realised through a set of virtual machines. Task preemption aims to cut down the time required to react to and complete the tasks connected with the various priorities. The article compares the experimental outcomes with those of the honey bee behaviour-inspired load balancing (HBB-LB) algorithm presently in use. According to the results, TSABF outperforms HBB-LB on quality-of-service metrics and has the potential to serve as an alternative scheduling mechanism for load balancing.
16.
[67]
2020
This investigation focuses on a task-scheduling challenge inside a cloud computing management system, with the objective of solving it using a hybrid discrete artificial bee colony (ABC) technique. To begin, the issue to be resolved is formulated as a hybrid flowshop scheduling (HFS) problem.
By constructing an improved scout bee that employs a variety of local search strategies to find the most acceptable food source or abandoned solution, the convergence capacity of the proposed algorithm can be improved, allowing it to converge more rapidly. The performance of the presented approach is validated by evaluating it on a number of different sets of well-known benchmark instances.
17.
[68]
2020
To address the problems connected with the metaheuristic methods presently in use, it is essential to investigate the makespan settings so that potential remedies can be identified.
An efficient load-balancing strategy is developed with the objective of improving the fitness function of cloud computing and lowering performance metrics such as the time required to complete the makespan.
18.
[69]
2020
A unique approach to reducing reaction time using the grey wolf optimisation method, which is well recognised throughout the majority of the industries in which it is used.
The findings from the CloudSim simulation environment demonstrated that the response time improved significantly in contrast to the EBCA-LB and HBB-LB algorithms. In addition, the degree of load balancing is far higher than with TSLBACO and HJSA.
19.
[70]
2019
To divide the work fairly and equitably, several methods are combined: the Genetic Algorithm module is responsible for randomly ordering the tasks, while the fuzzy logic module generates the objective function on its own. The busy status of servers can be determined using the objective function, which takes into account the task queues of the CPU and RAM.
According to the computational experiments, the optimal solution was found in half the time allotted for its execution, which increased the level of customer satisfaction with the product.
20.
[71]
2019
A unique hybrid method that combines fuzzy logic with ant colony optimisation (ACO) concepts has been devised with the intention of enhancing load balancing in the cloud setting.
The efficacy of the combined Fuzzy-ACO algorithm was demonstrated in every simulation executed on the Cloud Analyst platform; these simulations compare the combined method with many alternative load-balancing strategies.
21.
[72]
2019
Based on the load-balancing method, the suggested optimisation of the Artificial Bee Colony (ABC) algorithm aims to enhance overall load-balance performance and obtain improved adaptability. The smart grid uses cloud computing to cluster virtual machines (VMs) that exhibit certain qualities while simultaneously enhancing the ABC algorithm.
Simulation analysis has shown that the suggested strategy is effective.
5. Extracted Statistics from Reviewed Works
In this section of the review paper, we extract data from nearly all of the research listed in Table 1 in order to analyse and compare the findings; where necessary, we also draw on data from other relevant studies.
The Grey Wolf Optimizer (GWO) was used in conjunction with a number of different approaches, and it relied on a number of different metrics, in the studies cited in [52], [53], [64], [65], and [69]. By reaching the maximum optimum value obtainable from the objective function of the suggested GWO-PSO strategy, the researchers achieved a convergence improvement of 97.253% compared with the PSO method, and the degree of load balancing was remarkable in contrast to both TSLBACO and HJSA. The approach described in [53] uses the crossover operators of evolutionary algorithms to strike a beneficial equilibrium between exploring novel solutions and exploiting previously established ones, whereas the approach described in [64] optimises the result in order to reduce the makespan; there, extensive experiments showed the Particle Swarm Optimisation (PSO) algorithm to be more effective than the Grey Wolf Optimisation algorithm.
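To make the mechanics shared by these GWO-based schedulers concrete, the sketch below shows the core grey wolf position update in Python. It is a minimal illustration under assumed parameters (population size, iteration count, bounds); the fitness function is left abstract and is not the exact formulation used in [52], [64], or [69].

```python
import random

def gwo_minimize(fitness, dim, n_wolves=20, iters=100, lb=0.0, ub=1.0):
    # Initialise wolf positions uniformly at random inside the bounds.
    wolves = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        # Rank wolves: alpha, beta, delta are the three best solutions so far.
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters          # exploration factor decays linearly to 0
        for i in range(3, n_wolves):
            new_pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    x += leader[d] - A * D   # pull towards each leader
                new_pos.append(min(max(x / 3, lb), ub))
            wolves[i] = new_pos
    return min(wolves, key=fitness)
```

In a scheduling setting, each position vector would be decoded into a discrete task-to-VM assignment before evaluation, so that fitness reflects an objective such as makespan or response time.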
In both [61] and [70], a Genetic Algorithm module was used to produce a random ordering of the tasks, and a fuzzy logic module was used to construct the objective function, which determines the busy status of servers from their RAM and CPU task queues.
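As a rough illustration of how such a fuzzy objective function might grade a server's busy status from its CPU and RAM queues, consider the sketch below; the triangular membership functions and their breakpoints are assumptions made for the example, not the actual rule base of [61] or [70].

```python
def tri(x, a, b, c):
    # Triangular membership function on [a, c] peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def busy_score(cpu_queue, ram_queue, max_queue=100):
    # Normalise queue lengths to [0, 1].
    cpu, ram = cpu_queue / max_queue, ram_queue / max_queue
    scores = []
    for load in (cpu, ram):
        low = tri(load, -0.5, 0.0, 0.5)
        medium = tri(load, 0.0, 0.5, 1.0)
        high = tri(load, 0.5, 1.0, 1.5)
        # Weighted-average (centroid-style) defuzzification per input.
        scores.append((0.0 * low + 0.5 * medium + 1.0 * high)
                      / max(low + medium + high, 1e-9))
    return sum(scores) / len(scores)   # 0 = idle, 1 = fully busy
```

A genetic algorithm can then use such a score directly as (part of) its objective when deciding where to place each task.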
The ACO-based algorithms created by the researchers in [55] and [71] aim to lower the time and cost required by the process while enhancing the effectiveness of the cloud computing environment; the second study also treated load balancing and response time in the cloud as objectives of major importance. A caveat is that the effectiveness of the ACO algorithm is directly linked to the specific values assigned to the ACO parameters. MALPSO, which stands for Multi Adaptive Learning for Particle Swarm Optimisation, was presented in the study [56], and PSO and GWO were combined in the research [64] in an attempt to reduce the makespan. An approach to obtaining an optimal work schedule for virtual machines (VMs) was presented in both [62] and [66]; the recommended method is superior to the LBA_HB and HBB-LB approaches in terms of average reaction time, makespan, and degree of imbalance. The studies [67] and [72] each led to an enhanced version of the ABC algorithm, and simulation analysis showed that the suggested techniques achieve the desired results. Makespan requirements were considered in relation to the metaheuristic techniques now in use, namely [55], [65], [66], [68], [71], and [72]. By using load-balancing algorithms, the suggested techniques aim to achieve load balancing across all of the data centres, ensuring stability while optimising throughput.
The simulations performed on the CloudAnalyst platform demonstrated that the proposed approach can contribute to an improvement in load balancing within the cloud infrastructure. Depending on the precise conditions employed, this enhancement may decrease response time by as much as 82 percent, processing time by as much as 90 percent, and total cost by as much as nine percent. If the described method is generalised, however, it may be possible to enhance the formulation of pheromones and optimise the performance of the algorithm in a realistic setting, which would be a significant step forward.
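Because several of the surveyed schedulers ([55], [71]) hinge on how pheromone values are formulated and updated, the following is a minimal sketch of a classic ACO-style task-to-VM assignment. The evaporation rate, the deposit rule, and the omission of a heuristic visibility term are simplifying assumptions, not the calibrated parameters of those papers.

```python
import random

def aco_assign(cost, n_tasks, n_vms, ants=10, iters=50, rho=0.5, q=1.0):
    """Assign tasks to VMs; cost(assignment) returns e.g. a makespan estimate."""
    tau = [[1.0] * n_vms for _ in range(n_tasks)]   # pheromone per (task, VM) pair
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            # Each ant builds an assignment by sampling VMs in proportion to pheromone.
            assignment = [random.choices(range(n_vms), weights=tau[t])[0]
                          for t in range(n_tasks)]
            c = cost(assignment)
            if c < best_cost:
                best, best_cost = assignment, c
        # Evaporate, then let the best-so-far solution deposit pheromone on its choices.
        for t in range(n_tasks):
            for v in range(n_vms):
                tau[t][v] *= (1 - rho)
            tau[t][best[t]] += q / best_cost
    return best, best_cost
```

The fuzzy-ACO variant of [71] would, roughly speaking, replace the fixed deposit rule with a fuzzy-logic computation of the pheromone value.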
6. Discussion for Compared Metrics
Grey Wolf Optimisation (GWO) and Particle Swarm Optimisation (PSO) are the techniques investigated in the study [52]. The outcomes show potential for decreasing the overall reaction time and reaching globally optimal, rapid convergence relative to other conventional approaches; according to a comparison between the recommended approach and competing algorithms, the suggested method's average total reaction time is 12% faster. In [53], well-known optimisation-based scheduling strategies, such as TSMGWO, GGWO, LPGWO, and the FPA approach, were compared with the alternatives proposed for them in the literature. The newly constructed hybrid algorithm successfully addresses some limitations of previous methods, such as the local-optimality trap and premature convergence, and the reported findings illustrate the advantages of this strategy. This unique hybrid optimisation approach was created to handle the problem of multi-objective task scheduling in heterogeneous infrastructure-as-a-service cloud settings.
In [54], two probabilities are constructed to enhance the selection process: the likelihood of selecting a virtual machine and the likelihood of scheduling a job, each considered separately in the analysis. The lion optimizer makes use of fitness criteria that depend on the characteristics of both the job and the virtual machine. The second main contribution is the modification of the global search criteria via a genetic algorithm in order to enhance the performance of the lion optimizer.
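As an illustration of how such selection probabilities can be derived from job and VM characteristics, the sketch below builds roulette-wheel probabilities over candidate VMs; the particular fitness formula (spare MIPS discounted by queue length) is an assumption made for the example rather than the exact criteria of [54].

```python
def vm_selection_probs(job, vms):
    # Fitness favours VMs with enough spare MIPS and a short queue;
    # this particular formula is an illustrative assumption.
    fitness = [max(vm["mips"] - job["length"] / job["deadline"], 0.0)
               / (1 + vm["queue"]) for vm in vms]
    total = sum(fitness) or 1.0
    return [f / total for f in fitness]   # roulette-wheel probabilities

# Example: probability of scheduling a job on each of three candidate VMs.
job = {"length": 4000, "deadline": 10}
vms = [{"mips": 1000, "queue": 2}, {"mips": 500, "queue": 0},
       {"mips": 2000, "queue": 5}]
print(vm_selection_probs(job, vms))
```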
In [55], the HWACO produced average improvements in makespan of 3.83 percent, 16.54 percent, 25.34 percent, 8.66 percent, and 57.1 percent when compared with QANA, MTF-BPSO, ACO, MIN-MIN, and FCFS, respectively. Compared with the same methods, it also achieved cost reductions of 12.15 percent, 18.88 percent, 23.6 percent, 27.05 percent, and 32.9 percent, respectively. It can therefore be concluded that the offered technique optimises the performance of the task scheduler in terms of both makespan and cost. The findings of the research [56] indicate that the recommended approach is assessed against a variety of algorithms on four parameters: makespan, load balancing, stability, and efficiency; the CEC 2017 benchmark is also taken into account in the analysis. The presented strategy not only yields optimal outcomes for the majority of the criteria but also solves the problem more quickly than currently known methods. The study [57] investigated a variety of research challenges and concerns, as yet unresolved, in connection with immersive performance acquisition. The study discussed a number of common computing paradigms, emphasised the merits of the EECC paradigm for enabling distributed artificial intelligence, investigated the core technologies necessary for distributed artificial intelligence, and exhibited a broad range of potential applications made feasible by DAI-EECC. The authors then construct an all-encompassing classification system for the advanced optimisation tools made possible by EECC in order to improve distributed training and inference.
The research in [58] focused on optimising the scheduling and power allocation for Internet of Things devices and, as a second subproblem, the computing-frequency allocation. The authors then devised an iterative technique that applies a conflict graph to solve the two subproblems, and numerical results show that the recommended strategy is superior to the available alternatives in terms of energy efficiency. The research project [59] investigated the computing paradigms used to satisfy the severe requirements of current applications, including a variety of mobile augmented reality (MAR) use cases for mobile edge computing (MEC). The study also analyses the utilisation of MEC in the Metaverse and explains the reasoning behind approaching the establishment of the Metaverse from an MEC-centred viewpoint. Particular emphasis is given to previously considered technological fusions, such as the incorporation of 6G technology into the MEC paradigm and the use of blockchain technology to strengthen it. The study [60] focuses on resilience and scalability. When cloud performance was reviewed using these algorithms, it was discovered that WWA generates greater throughput than MCT and MET, and WWA achieved a decrease in turnaround time relative to the First-Come, First-Served (FCFS) evaluation approach. The use of a variety of algorithms has yielded significant enhancements in response, migration, and turnaround time. A significant connection exists between the characteristics of service parameters and the stability and adaptability of distributed cloud computing; using water-wave optimisation, future work may examine additional quality-of-service factors in order to improve the selection of a suitable virtual machine.
The investigation in [61] shows that, within task scheduling, load balancing must be considered an essential element alongside makespan in order to achieve the highest possible efficiency, which demonstrates the importance of multi-objective optimisation. Meta-heuristics make it possible to conduct a systematic examination of a large search space of viable solutions, and the incorporation of heuristic merging further enhances the capacity to discover even better answers. CloudSim, a platform that mimics the behaviour of processes housed in a cloud data centre using a broad range of resources and the tasks allocated to them, was used for evaluation. Thorough testing showed that BGA is superior to the highly sophisticated schedulers MGGS, ETA-GA, DSOS, and RALBA in terms of makespan, throughput, and load balancing. To avoid biased experimentation caused by the dataset, it is essential to use a diverse workload distribution made up of a large number of task batches.
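A minimal sketch of the kind of genetic task-scheduling loop that underlies balancer-style schedulers such as BGA is given below, assuming a makespan fitness over a task-to-VM assignment; the encoding, one-point crossover, and mutation rate are illustrative choices, not the operators reported in [61].

```python
import random

def ga_schedule(runtime, n_vms, pop=30, gens=100, mut=0.1):
    """runtime[t][v]: execution time of task t on VM v; minimise makespan."""
    n_tasks = len(runtime)

    def makespan(ind):
        load = [0.0] * n_vms
        for t, v in enumerate(ind):
            load[v] += runtime[t][v]
        return max(load)

    population = [[random.randrange(n_vms) for _ in range(n_tasks)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=makespan)
        elite = population[: pop // 2]          # keep the better half
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n_tasks)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mut:           # mutate one gene
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            children.append(child)
        population = elite + children
    return min(population, key=makespan)
```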
The primary emphasis of the investigation [62] is overall system performance: in contrast to LBA-HB and HBB-LB, the use of fuzzy logic improves the reaction time and makespan of the presented method, which in turn raises the overall performance of the system. The results produced by the ILBA-HB technique are promising, even though the improvement is modest. Compared with LBA_HB and HBB-LB, the recommended technique improves the average reaction time and makespan and reduces the degree of imbalance, as the simulation data show. The data also reveal a significant positive correlation between energy consumption and cost, but only a weak link between the energy consumed and the time needed to process the data. Within the changing landscape described in [63], it is of utmost importance to explore the advantages and disadvantages of research on distributed systems, as well as the challenges that distributed-systems researchers face when analysing complex phenomena at a wide scale. The investigation [64] argues that adopting a task-scheduling strategy that is both effective and efficient, capable of allocating resources for the fulfilment of user tasks, is vital for improving the performance of quality-of-service characteristics. A performance assessment and analysis of the suggested technique is carried out with the aid of the GA and GWO algorithms; in terms of reaction time and makespan, the proposed GWO-PSO strategy outperforms both the GWO and GA algorithms.
Particular emphasis is placed on [65]. Its findings are compared individually with the outcomes achieved by the ABC, GWO, and RAA algorithms; according to these comparisons, the offered solution yields a 1.25 percent improvement in both the accuracy and the efficiency of the mechanism responsible for allocating resources in virtual machines (VMs) for cloud computing. That resource-allocation mechanism is the subject of the study, which enhances it with a hybrid method, since poor load-balancing measures can leave VMs in cloud data centres overloaded or underloaded. The investigation [66] focuses on utilising the existing resources in the most effective and efficient manner feasible. Providing load balancing while taking into account a broad variety of quality-of-service (QoS) requirements, including makespan, reaction time, execution time, and task priority, demands an inventive scheduling system; for this reason, the task scheduling using artificial bee foraging (TSABF) optimisation technique is recommended for managing job scheduling. The method aims to obtain an optimal distribution of workloads across virtual machines (VMs) while taking into account the characteristics stated above. Put simply, a group of VMs outfitted with preemptive task scheduling is ultimately responsible for producing the ideal schedule; task preemption seeks to reduce the time required for activities of different priorities to respond and complete their execution.
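The following sketch shows one way priority-based task preemption on a VM could be realised with a heap-ordered ready queue; the tie-breaking by arrival time and the numeric priority convention (lower value preempts) are assumptions for the example, not the exact mechanism of [66].

```python
import heapq

class PreemptiveVM:
    """Minimal sketch: a lower `priority` value preempts higher ones."""
    def __init__(self):
        self.ready = []            # heap of (priority, arrival, task)
        self.running = None

    def submit(self, priority, arrival, task):
        if self.running and priority < self.running[0]:
            # Preempt: put the currently running task back on the ready queue.
            heapq.heappush(self.ready, self.running)
            self.running = (priority, arrival, task)
        elif self.running is None:
            self.running = (priority, arrival, task)
        else:
            heapq.heappush(self.ready, (priority, arrival, task))

    def finish_current(self):
        done, self.running = self.running, None
        if self.ready:
            self.running = heapq.heappop(self.ready)
        return done
```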
The investigation [67] additionally incorporates an improved version of the adaptive perturbation framework in order to attain the best possible equilibrium between the capabilities of exploration and exploitation. A revised technique, straightforward yet effective, is used to enhance the exploitation process, and deep-exploitation operators further increase the exploitation capabilities. To locate the most suitable food source or abandoned solution, a more efficient scout bee programme employing a range of local search tactics is used; as a consequence, the convergence that the suggested method can achieve may be enhanced. To determine whether the approach is effective, its performance is evaluated on previously established benchmark datasets.
A highly effective approach to load balancing was devised by [68] with the aim of lowering performance indicators such as makespan time and enhancing the fitness function pertaining to cloud computing. Within the scope of this project, an investigation determines how the condition of the virtual machines is affected by the workload currently running. Tasks are assigned to the virtual machine best suited to process them, a decision made on a least-distance criterion that selects how tasks are distributed; in addition, tasks are removed from a machine facing an excessive burden, depending on the present condition of the virtual machine from which they are taken. In [69], the findings from the CloudSim simulation environment showed that the response time improved significantly compared with the EBCA-LB and HBB-LB algorithms, and the degree of load balancing was much higher than with TSLBACO and HJSA.
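As an illustration of a least-distance assignment rule of the kind described for [68], the sketch below picks the VM whose post-assignment load vector lies closest to an idle state; the Euclidean distance over normalised CPU and memory load is an assumed metric, not necessarily the one used in that study.

```python
import math

def least_distance_vm(task, vms):
    """Pick the VM whose post-assignment load stays closest to the ideal (0, 0)."""
    def distance(vm):
        cpu = (vm["cpu_used"] + task["cpu"]) / vm["cpu_cap"]
        mem = (vm["mem_used"] + task["mem"]) / vm["mem_cap"]
        return math.hypot(cpu, mem)   # Euclidean distance from an idle state
    return min(vms, key=distance)

# Example: the task goes to the VM left least loaded after the assignment.
task = {"cpu": 2, "mem": 1}
vms = [{"cpu_used": 6, "mem_used": 3, "cpu_cap": 8, "mem_cap": 8},
       {"cpu_used": 2, "mem_used": 2, "cpu_cap": 8, "mem_cap": 8}]
print(least_distance_vm(task, vms))
```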
"[70]" comes from. Load balancing is the primary concern
that stands out as the most significant characteristic when
it comes to strategically distributing incoming network
traffic across several servers in an effective manner.
Therefore, it is assured that no one server is capable of
handling an excessive amount of requests, which
eventually leads to an increase in the accessibility of
websites and services to consumers. This is accomplished
via the use of this method. The outcomes of computer
testing indicated that the ideal solution was discovered
within half of the allocated execution time, which resulted
in an increase in the degree of pleasure that would be
experienced by consumers. During the course of the
research carried out by [71], a fuzzy logic module was used to ascertain the pheromone value, and the Taguchi concept was applied to tune the parameters of the algorithm. This integration represents a substantial development of the ant colony algorithm. On the basis of tests carried out on the Cloud Analyst simulator, it was concluded that the method is better suited to the administration of complex networks. The use of fuzzy logic to calculate pheromones and of the Taguchi technique to determine the most suitable ACO parameters are two of the most significant advances of the presented approach. Finally, the research [72] focused on the specific needs of applications or systems that cannot expand or adjust in response to shifting conditions. The Artificial Bee Colony (ABC) algorithm is challenged over the course of this study via the load-balancing approach, and the main purpose of the investigation is to improve the algorithm's overall load-balancing performance and the adaptive outcomes it produces.
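Since the ABC algorithm recurs throughout the surveyed works ([62], [65], [66], [67], [72]), a minimal sketch of its employed/scout structure is given below; the neighbourhood move, the abandonment limit, and the omission of a separate onlooker phase are simplifications for illustration, not the enhanced variants those papers propose.

```python
import random

def abc_minimize(fitness, dim, n_sources=10, iters=100, limit=20, lb=0.0, ub=1.0):
    """Minimal artificial bee colony sketch for a continuous objective."""
    src = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources

    def neighbour(i):
        # Perturb one dimension of source i towards a random other source.
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = src[i][:]
        cand[d] += random.uniform(-1, 1) * (src[i][d] - src[k][d])
        cand[d] = min(max(cand[d], lb), ub)
        return cand

    for _ in range(iters):
        # Employed-bee phase: greedy local moves around each food source.
        for i in range(n_sources):
            cand = neighbour(i)
            if fitness(cand) < fitness(src[i]):
                src[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that failed to improve `limit` times.
        for i in range(n_sources):
            if trials[i] > limit:
                src[i] = [random.uniform(lb, ub) for _ in range(dim)]
                trials[i] = 0
    return min(src, key=fitness)
```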
7. Recommendations
An experimental simulation and a statistical t-test were carried out for the purposes of this research. When the HWACO technique was compared against the QANA, MTF-BPSO, ACO, MIN-MIN, and FCFS algorithms, it demonstrated average makespan improvements of 3.83%, 16.54%, 25.34%, 8.66%, and 57.11%, respectively. In terms of cost reduction, HWACO achieved improvements of 12.15 percent, 18.88 percent, 23.6 percent, 27.05 percent, and 32.9 percent over the same algorithms, respectively. It is therefore feasible to conclude that the developed solution optimises the task scheduler in terms of both the time taken to complete the work and the money saved. In addition, it enhances the fitness function of cloud computing while lowering performance metrics such as makespan time. Several other scholars' contributions have further enhanced the ABC algorithm, and simulation studies have shown that the suggested strategies are effective.
8. Conclusion
Conventional cloud computing, in which processing, storage, and networking resources are housed in one or a few centralised data centres, is unsuited to emerging applications because of their strict latency requirements. Furthermore, the rapid expansion of networks has ushered in the trend of network cloudification and the supply of network services based on cloud service models, a movement brought about by the greater availability of connectivity. As a consequence, the new distributed cloud architecture represents an advance over traditional centralised cloud computing: across the whole planet, it offers distributed cloud computing services structured according to the requirements of the applications that use them. Within the confines of this review, we have attempted to provide a comprehensive overview of distributed clouds. The article began with a description of distributed cloud computing; we then discussed the architecture of the distributed cloud and the technologies associated with it, and finally addressed the open research questions linked with distributed cloud computing, on the basis of an analysis of twenty-one articles that explored a variety of strategies and procedures.
References
[1] P. Mundhenk, A. Hamann, A. Heyl, and D.
Ziegenbein, "Reliable distributed systems," in 2022
Design, Automation & Test in Europe Conference &
Exhibition (DATE), 2022, pp. 287-291.
[2] D. Senapati, A. Sarkar, and C. Karfa, "Performance-
effective DAG scheduling for heterogeneous
distributed systems," in Proceedings of the 23rd
International Conference on Distributed Computing
and Networking, 2022, pp. 234-235.
[3] N. T. Muhammed, S. R. Zeebaree, and Z. N. Rashid,
"Distributed Cloud Computing and Mobile Cloud
Computing: A Review," QALAAI ZANIST
JOURNAL, vol. 7, pp. 1183-1201, 2022.
[4] Z. N. Rashid, S. R. Zebari, K. H. Sharif, and K.
Jacksi, "Distributed cloud computing and distributed
parallel computing: A review," in 2018 International
Conference on Advanced Science and Engineering
(ICOASE), 2018, pp. 167-172.
[5] G. Cavallaro, D. B. Heras, Z. Wu, M. Maskey, S.
Lopez, P. Gawron, et al., "High-Performance and
Disruptive Computing in Remote Sensing:
HDCRS—A new Working Group of the GRSS Earth
Science Informatics Technical Committee
[Technical Committees]," IEEE Geoscience and
Remote Sensing Magazine, vol. 10, pp. 329-345,
2022.
[6] Y. Jiang, J. Kang, D. Niyato, X. Ge, Z. Xiong, C.
Miao, et al., "Reliable distributed computing for
metaverse: A hierarchical game-theoretic approach,"
IEEE Transactions on Vehicular Technology, vol.
72, pp. 1084-1100, 2022.
[7] Q. Li, J. Zhang, J. Zhao, J. Ye, W. Song, and F. Li,
"Adaptive hierarchical cyber attack detection and
localization in active distribution systems," IEEE
transactions on smart grid, vol. 13, pp. 2369-2380,
2022.
[8] Z. Ageed, M. R. Mahmood, M. Sadeeq, M. B.
Abdulrazzaq, and H. Dino, "Cloud computing
resources impacts on heavy-load parallel processing
approaches," IOSR Journal of Computer
Engineering (IOSR-JCE), vol. 22, pp. 30-41, 2020.
[9] Y. S. Jghef and S. Zeebaree, "State of art survey for
significant relations between cloud computing and
distributed computing," International Journal of
Science and Business, vol. 4, pp. 53-61, 2020.
[10] J. P. Sahoo, A. K. Tripathy, M. Mohanty, K.-C. Li,
and A. K. Nayak, Advances in Distributed
Computing and Machine Learning: Springer, 2022.
[11] K. Peng, H. Huang, B. Zhao, A. Jolfaei, X. Xu, and
M. Bilal, "Intelligent computation offloading and
resource allocation in IIoT with end-edge-cloud
computing using NSGA-III," IEEE Transactions on
Network Science and Engineering, 2022.
[12] S. Zebari and N. O. Yaseen, "Effects of parallel
processing implementation on balanced load-
division depending on distributed memory systems,"
J. Univ. Anbar Pure Sci, vol. 5, pp. 50-56, 2011.
[13] D. Yu, Z. Ma, and R. Wang, "Efficient smart grid
load balancing via fog and cloud computing,"
Mathematical Problems in Engineering, vol. 2022,
pp. 1-11, 2022.
[14] Y. Ding, K. Li, C. Liu, and K. Li, "A potential game
theoretic approach to computation offloading
strategy optimization in end-edge-cloud
computing," IEEE Transactions on Parallel and
Distributed Systems, vol. 33, pp. 1503-1519, 2021.
[15] T. Eltaeib and N. Islam, "Taxonomy of challenges in
cloud security," in 2021 8th IEEE International
Conference on Cyber Security and Cloud Computing
(CSCloud)/2021 7th IEEE International Conference
on Edge Computing and Scalable Cloud (EdgeCom),
2021, pp. 42-46.
[16] S. R. Zeebaree, H. M. Shukur, L. M. Haji, R. R.
Zebari, K. Jacksi, and S. M. Abas, "Characteristics
and analysis of hadoop distributed systems,"
Technology Reports of Kansai University, vol. 62,
pp. 1555-1564, 2020.
[17] P. Arthurs, L. Gillam, P. Krause, N. Wang, K.
Halder, and A. Mouzakitis, "A taxonomy and survey
of edge cloud computing for intelligent
transportation systems and connected vehicles,"
IEEE Transactions on Intelligent Transportation
Systems, 2021.
[18] M. K. I. Rahmani, M. Shuaib, S. Alam, S. T.
Siddiqui, S. Ahmad, S. Bhatia, et al., "Blockchain-
based trust management framework for cloud
computing-based internet of medical things (IoMT):
a systematic review," Computational Intelligence
and Neuroscience, vol. 2022, 2022.
[19] A. Alam, "Cloud-based e-learning: scaffolding the
environment for adaptive e-learning ecosystem
based on cloud computing infrastructure," in
Computer Communication, Networking and IoT:
Proceedings of 5th ICICC 2021, Volume 2, ed:
Springer, 2022, pp. 1-9.
[20] F. J. G. Peñalvo, A. Sharma, A. Chhabra, S. K.
Singh, S. Kumar, V. Arya, et al., "Mobile cloud
computing and sustainable development:
Opportunities, challenges, and future directions,"
International Journal of Cloud Applications and
Computing (IJCAC), vol. 12, pp. 1-20, 2022.
[21] J. Saeed and S. Zeebaree, "Skin lesion classification
based on deep convolutional neural networks
architectures," Journal of Applied Science and
Technology Trends, vol. 2, pp. 41-51, 2021.
[22] L. Wen, "Cloud computing intrusion detection
technology based on BP-NN," Wireless Personal
Communications, vol. 126, pp. 1917-1934, 2022.
[23] P. K. Bal, S. K. Mohapatra, T. K. Das, K. Srinivasan,
and Y.-C. Hu, "A joint resource allocation, security
with efficient task scheduling in cloud computing
using hybrid machine learning techniques," Sensors,
vol. 22, p. 1242, 2022.
[24] P. Y. Abdullah, S. Zeebaree, K. Jacksi, and R. R.
Zeabri, "An hrm system for small and medium
enterprises (sme) s based on cloud computing
technology," International Journal of Research-
GRANTHAALAYAH, vol. 8, pp. 56-64, 2020.
[25] Z. S. Ageed, S. R. Zeebaree, M. M. Sadeeq, S. F.
Kak, H. S. Yahia, M. R. Mahmood, et al.,
"Comprehensive survey of big data mining
approaches in cloud systems," Qubahan Academic
Journal, vol. 1, pp. 29-38, 2021.
[26] P. Y. Abdullah, S. Zeebaree, H. M. Shukur, and K.
Jacksi, "HRM system using cloud computing for
Small and Medium Enterprises (SMEs),"
Technology Reports of Kansai University, vol. 62, p.
04, 2020.
[27] R. R. Kumar, A. Tomar, M. Shameem, and M. N.
Alam, "Optcloud: An optimal cloud service
selection framework using QoS correlation lens,"
Computational Intelligence and Neuroscience, vol.
2022, 2022.
[28] Z. S. Ageed, R. K. Ibrahim, and M. A. Sadeeq,
"Unified ontology implementation of cloud
computing for distributed systems," Current Journal
of Applied Science and Technology, vol. 39, pp. 82-
97, 2020.
[29] N. Manikandan, N. Gobalakrishnan, and K. Pradeep,
"Bee optimization based random double adaptive
whale optimization model for task scheduling in
cloud computing environment," Computer
Communications, vol. 187, pp. 35-44, 2022.
[30] S. Shen, Y. Ren, Y. Ju, X. Wang, W. Wang, and V.
C. Leung, "Edgematrix: A resource-redefined
scheduling framework for sla-guaranteed multi-tier
edge-cloud computing systems," IEEE Journal on
Selected Areas in Communications, vol. 41, pp. 820-
834, 2022.
[31] Z. S. Ageed, S. R. Zeebaree, M. A. Sadeeq, R. K.
Ibrahim, H. M. Shukur, and A. Alkhayyat,
"Comprehensive Study of Moving from Grid and
Cloud Computing Through Fog and Edge
Computing towards Dew Computing," in 2021 4th
International Iraqi Conference on Engineering
Technology and Their Applications (IICETA), 2021,
pp. 68-74.
[32] S. I. Ahmed, S. Y. Ameen, and S. R. Zeebaree, "5G
Mobile Communication System Performance
Improvement with Caching: A Review," in 2021
International Conference of Modern Trends in
Information and Communication Technology
Industry (MTICTI), 2021, pp. 1-8.
[33] A. Belgacem and K. Beghdad-Bey, "Multi-objective
workflow scheduling in cloud computing: trade-off
between makespan and cost," Cluster Computing,
vol. 25, pp. 579-595, 2022.
[34] L. Surya, "Software as a service in cloud computing," International Journal of Creative Research Thoughts (IJCRT), ISSN 2320-2882, 2019.
[35] V. D. Majety, N. Sharmili, C. R. Pattanaik, E. L.
Lydia, S. R. Zeebaree, S. N. Mahmood, et al.,
"Ensemble of Handcrafted and Deep Learning
Model for Histopathological Image Classification,"
Computers, Materials & Continua, vol. 73, 2022.
[36] S. Raghavan R, J. KR, and R. V. Nargundkar,
"Impact of software as a service (SaaS) on software
acquisition process," Journal of Business &
Industrial Marketing, vol. 35, pp. 757-770, 2020.
[37] L. M. Abdulrahman, Z. S. Ageed, T. M. G. Sami, R. Qashi, and M. J. Ahmed, "Cloud-based and enterprise systems: Concepts, architecture, polices, compatibility, and information exchanging."
[38] M. B. Ali, T. Wood-Harper, and R. Ramlogan, "The
Role of SaaS Applications in Business IT
Alignment: A Closer Look at Value Creation in
Service Industry," United Kingdom Academy for
Information Systems, 2020.
[39] D. Cunha, P. Neves, and P. Sousa, "PaaS manager:
A platform-as-a-service aggregation framework,"
2014.
[40] M. Viggiato, D. Paas, C. Buzon, and C.-P. Bezemer,
"Using natural language processing techniques to
improve manual test case descriptions," in
Proceedings of the 44th International Conference on
Software Engineering: Software Engineering in
Practice, 2022, pp. 311-320.
[41] Z. N. Rashid, S. R. Zeebaree, M. A. Sadeeq, R. R.
Zebari, H. M. Shukur, and A. Alkhayyat, "Cloud-
based Parallel Computing System Via Single-Client
Multi-Hash Single-Server Multi-Thread," in 2021
International Conference on Advance of Sustainable
Engineering and its Application (ICASEA), 2021,
pp. 59-64.
[42] S. F. Khorshid, L. M. Abdulrahman, Z. S. Ageed, T. M. G. Sami, and M. J. Ahmed, "Influences of cloud and web technology on IoT communication for embedded systems."
[43] O. H. Jader, S. R. Zeebaree, R. R. Zebari, H. M.
Shukur, Z. N. Rashid, M. A. Sadeeq, et al., "Ultra-
Dense Request Impact on Cluster-Based Web Server
Performance," in 2021 4th International Iraqi
Conference on Engineering Technology and Their
Applications (IICETA), 2021, pp. 252-257.
[44] M. A. Sadeeq and S. R. Zeebaree, "Design and
analysis of intelligent energy management system
based on multi-agent and distributed iot: Dpu case
study," in 2021 7th International Conference on
Contemporary Information Technology and
Mathematics (ICCITM), 2021, pp. 48-53.
[45] S. Bharany, K. Kaur, S. Badotra, S. Rani, Kavita, M.
Wozniak, et al., "Efficient middleware for the
portability of paas services consuming applications
among heterogeneous clouds," Sensors, vol. 22, p.
5013, 2022.
[46] D. M. Abdulqader and S. R. Zeebaree, "Impact of
Distributed-Memory Parallel Processing Approach
on Performance Enhancing of Multicomputer-
Multicore Systems: A Review," QALAAI ZANIST
JOURNAL, vol. 6, pp. 1137-1140, 2021.
[47] M. Liu, M. J. Gorgievski, J. Qi, and F. Paas,
"Increasing teaching effectiveness in
entrepreneurship education: Course characteristics
and student needs differences," Learning and
Individual Differences, vol. 96, p. 102147, 2022.
[48] T. Ernawati and F. Febiansyah, "Peer to peer (P2P)
and cloud computing on infrastructure as a service
(IaaS) performance analysis," Jurnal Infotel, vol. 14,
pp. 161-167, 2022.
[49] F. K. Parast, C. Sindhav, S. Nikam, H. I. Yekta, K.
B. Kent, and S. Hakak, "Cloud computing security:
A survey of service-based models," Computers &
Security, vol. 114, p. 102580, 2022.
[50] M. Bozdal, M. Randa, M. Samie, and I. Jennions,
"Hardware trojan enabled denial of service attack on
can bus," Procedia Manufacturing, vol. 16, pp. 47-
52, 2018.
[51] S. A. Mostafa, A. Mustapha, A. A. Ramli, R.
Darman, S. R. Zeebaree, M. A. Mohammed, et al.,
"Applying Trajectory Tracking and Positioning
Techniques for Real-time Autonomous Flight
Performance Assessment of UAV Systems," Journal
of Southwest Jiaotong University, vol. 54, 2019.
[52] M. S. Al Reshan, D. Syed, N. Islam, A. Shaikh, M.
Hamdi, M. A. Elmagzoub, et al., "A Fast
Converging and Globally Optimized Approach for
Load Balancing in Cloud Computing," IEEE Access,
vol. 11, pp. 11390-11404, 2023.
[53] A. N. Malti, M. Hakem, and B. Benmammar, "A
new hybrid multi-objective optimization algorithm
for task scheduling in cloud systems," Cluster
Computing, pp. 1-24, 2023.
[54] K. Malathi and K. Priyadarsini, "Hybrid lion–GA
optimization algorithm-based task scheduling
approach in cloud computing," Applied
Nanoscience, vol. 13, pp. 2601-2610, 2023.
[55] C. Chandrashekar, P. Krishnadoss, V. Kedalu
Poornachary, B. Ananthakrishnan, and K.
Rangasamy, "HWACOA scheduler: Hybrid
weighted ant colony optimization algorithm for task
scheduling in cloud computing," Applied Sciences,
vol. 13, p. 3433, 2023.
[56] P. Pirozmand, H. Jalalinejad, A. A. R. Hosseinabadi,
S. Mirkamali, and Y. Li, "An improved particle
swarm optimization algorithm for task scheduling in
cloud computing," Journal of Ambient Intelligence
and Humanized Computing, vol. 14, pp. 4313-4327,
2023.
[57] S. Duan, D. Wang, J. Ren, F. Lyu, Y. Zhang, H. Wu,
et al., "Distributed artificial intelligence empowered
by end-edge-cloud computing: A survey," IEEE
Communications Surveys & Tutorials, 2022.
[58] M. S. Al-Abiad, M. Z. Hassan, and M. J. Hossain,
"Energy efficient distributed learning in integrated
fog-cloud computing enabled IoT networks," in
2022 IEEE International Conference on
Communications Workshops (ICC Workshops),
2022, pp. 872-877.
[59] Y. Wang and J. Zhao, "Mobile edge computing,
metaverse, 6G wireless communications, artificial
intelligence, and blockchain: Survey and their
convergence," in 2022 IEEE 8th World Forum on
Internet of Things (WF-IoT), 2022, pp. 1-8.
[60] N. Bhalaji, "Load balancing in cloud computing
using water wave algorithm," Concurrency Comput.,
Pract. Exper., vol. 34, 2022.
[61] R. Gulbaz, A. B. Siddiqui, N. Anjum, A. A. Alotaibi,
T. Althobaiti, and N. Ramzan, "Balancer genetic
algorithm—A novel task scheduling optimization
approach in cloud computing," Applied Sciences,
vol. 11, p. 6244, 2021.
[62] H. S. Alatawi and S. A. Sharaf, "Hybrid load
balancing approach based on the integration of QoS
and power consumption in cloud computing,"
International Journal, vol. 10, 2021.
[63] D. Lindsay, S. S. Gill, D. Smirnova, and P.
Garraghan, "The evolution of distributed computing
systems: from fundamental to new frontiers,"
Computing, vol. 103, pp. 1859-1878, 2021.
[64] A. M. Senthil Kumar, P. Krishnamoorthy, S.
Soubraylu, J. K. Venugopal, and K. Marimuthu, "An
efficient task scheduling using GWO-PSO algorithm
in a cloud computing environment," in Proceedings
of International Conference on Intelligent
Computing, Information and Control Systems:
ICICCS 2020, 2021, pp. 751-761.
[65] S. Ouhame and Y. Hadi, "A Hybrid Grey Wolf
Optimizer and Artificial Bee Colony Algorithm
Used for Improvement in Resource Allocation
System for Cloud Technology," International
Journal of Online & Biomedical Engineering, vol.
16, 2020.
[66] G. Muthsamy and S. Ravi Chandran, "Task
scheduling using artificial bee foraging optimization
for load balancing in cloud data centers," Computer
Applications in Engineering Education, vol. 28, pp.
769-778, 2020.
[67] J.-q. Li and Y.-q. Han, "A hybrid multi-objective
artificial bee colony algorithm for flexible task
scheduling problems in cloud computing system,"
Cluster Computing, vol. 23, pp. 2483-2499, 2020.
[68] R. Agarwal, N. Baghel, and M. A. Khan, "Load
balancing in cloud computing using mutation based
particle swarm optimization," in 2020 International
Conference on Contemporary Computing and
Applications (IC3A), 2020, pp. 191-195.
[69] L. Xingjun, S. Zhiwei, C. Hongping, and B. O.
Mohammed, "A new fuzzy‐based method for load
balancing in the cloud‐based Internet of things using
a grey wolf optimization algorithm," International
Journal of Communication Systems, vol. 33, p.
e4370, 2020.
[70] A. Saadat and E. Masehian, "Load balancing in
cloud computing using genetic algorithm and fuzzy
logic," in 2019 International Conference on
Computational Science and Computational
Intelligence (CSCI), 2019, pp. 1435-1440.
[71] A. Ragmani, A. Elomri, N. Abghour, K. Moussaid,
and M. Rida, "An improved hybrid fuzzy-ant colony
algorithm applied to load balancing in cloud
computing environment," Procedia Computer
Science, vol. 151, pp. 519-526, 2019.
[72] L. Shen, J. Li, Y. Wu, Z. Tang, and Y. Wang,
"Optimization of artificial bee colony algorithm
based load balancing in smart grid cloud," in 2019
IEEE Innovative Smart Grid Technologies-Asia
(ISGT Asia), 2019, pp. 1131-1134.