Embracing Distributed Systems for Efficient Cloud Resource Management: A
Review of Techniques and Methodologies
Abdo Sulaiman Abdi1, Subhi R. M. Zeebaree2
abdo.abdi@dpu.edu.krd 1, abdo.abdi@auas.edu.krd 1, subhi.rafeeq@dpu.edu.krd 2
1Web Technology Dept., Duhok Technical Institute, Duhok Polytechnic University, Duhok, Iraq,
1Information Technology Dept., Technical College of Informatics-Akre, Akre University of
Applied Science, Duhok, Iraq,
2 Energy Eng. Dept., Technical College of Engineering, Duhok Polytechnic University, Duhok,
Iraq.
Article Information
Submitted: 5 Mar 2024
Reviewed: 12 Mar 2024
Accepted: 1 Apr 2024

Abstract
The development of parallel computing, distributed computing, and grid
computing has introduced a new computing model, combining elements of
grid, public computing, and SaaS. Cloud computing, a key component of this
model, assigns computing to distributed computers rather than local
computers or remote servers. Research papers from 2017 to 2023 provide
an overview of the advancements and challenges in cloud computing and
distributed systems, focusing on resource management and the integration
of advanced technologies like machine learning, AI-centric strategies, and
fuzzy meta-heuristics. These studies aim to improve operational efficiency,
scalability, and adaptability in cloud environments, focusing on energy
efficiency and cost reduction. However, these advancements also present
challenges, such as implementation complexity, adaptability in diverse
environments, and the rapid pace of technological advancements. These
issues necessitate practical, efficient, and forward-thinking solutions in real-
world settings. The research conducted between 2017 and 2023 highlights
the dynamic and rapidly evolving field of cloud computing and distributed
systems, providing valuable guidance for ongoing and future research. This
body of work serves as a crucial reference point for advancing the field and
emphasizing the need for practical, efficient, and forward-thinking solutions
in the ever-evolving landscape of cloud computing and distributed systems.
Keywords
Cloud Computing, Cloud Resources, Distributed Systems.
A. Introduction
Distributed systems have become an integral part of modern computing,
allowing organizations to distribute tasks and data across multiple components in
a network. These systems are used in a wide range of applications, including data
processing, storage, and retrieval, network communication, and web services [1].
Cloud computing is a technology in which a third-party "cloud provider" delivers
services to clients anywhere, at any time, and under varying conditions [2]. To
provide clients with cloud resources and
satisfy their needs, cloud computing employs virtualization and resource
provisioning techniques [3].
The primary focus of distributed computing is to provide access to
information as well as to allow for the computation and sharing of information.
Distributed computing is achieved by connecting various processing units that are
linked to one another via the use of computer networks. These networks may
consist of the internet or a local area network (LAN) [4]. Cloud computing is an
evolution of information technology and a dominant business model for delivering
IT resources. With cloud computing, individuals and organizations can gain
on-demand network access to a shared pool of managed and scalable IT resources,
such as servers, storage, and applications [5][6]. Recently, academics as well as
practitioners have paid a great deal of attention to cloud computing. We rely
heavily on cloud services in our daily lives, e.g., for storing data, writing
documents, managing businesses, and playing games online. Cloud computing also
provides the infrastructure that has powered key digital trends such as mobile
computing, the Internet of Things, big data, and artificial intelligence, thereby
accelerating industry dynamics, disrupting existing business models, and fueling
the digital transformation [7].
This review explores the use of distributed systems for efficient cloud
resource management. It categorizes key research studies. The review provides a
comprehensive overview of the advancements in distributed systems, addressing
both technological progress and challenges. It aims to contribute to the ongoing
discourse in this dynamic field, offering valuable insights for current practitioners
and future research.
B. Background Theory
a. Cloud Computing
Cloud computing represents a form of computing that operates over the
Internet, offering on-demand processing resources and capabilities. It enables
access to a collective pool of configurable technological assets, including networks,
servers, and various applications. These resources are readily available and can be
allocated or relinquished with minimal management effort [8]. Cloud computing
enables individuals and organizations to store and process data in data centers
owned by third-party providers, which may be located in distant geographical
areas [9]. One major advantage of this technology is its ability to assist firms in
avoiding significant upfront costs related to infrastructure, such as the acquisition
of hardware and servers. Moreover, it allows organizations to focus on their
primary business operations, rather than on overseeing physical infrastructure [10].
Further, cloud computing allows enterprises to quickly implement their
applications, greatly lowering the time required to become operational.
Additionally, it provides firms with a practical method to adapt resources in
accordance with fluctuating demands. This adaptability enables organizations to
easily expand their computing capacities in response to growing requirements and
likewise reduce them when the demand decreases [11]. Cloud service providers
generally employ a pricing mechanism known as "pay-as-you-go". Figure 1 depicts
the essential elements of cloud computing under this model [12]. Notable
cloud computing platforms comprise Amazon's Elastic Compute Cloud (EC2),
Microsoft's Azure, and Oracle Cloud. Virtualization is the fundamental technology
that enables cloud computing. It involves dividing a single physical computing unit
into many virtual devices [13]. Virtualization enables seamless use and
management of each virtual device for computing operations. This results in the
creation of a scalable system consisting of multiple autonomous computing
devices, which improves the allocation and utilization of otherwise idle physical
resources [14]. Cloud computing vendors offer their services using
several models. Infrastructure as a Service (IaaS) provides users with computing
infrastructure, such as virtual machines and other resources, in the form of a
service. Platform as a Service (PaaS) enables users to deploy projects, including
infrastructure and applications, onto the cloud with assistance from the provider.
PaaS suppliers offer a specialized platform that caters to the needs of application
developers. Software as a Service (SaaS) allows users to utilize the applications
provided by a service provider. These applications are hosted on cloud
infrastructure and can be accessed through web browsers or other program
interfaces [15]. Cloud providers are accountable for overseeing the infrastructure
and platforms that enable the applications in these models [16], [17].
Figure 1. Cloud Computing [12].
i. Main Types of Cloud Computing
Infrastructure as a Service (IaaS): Offers virtualized computing resources
across the internet, encompassing storage, computational capacity, and
networking. The user has the ability to allocate processing, storage, networks, and
other essential computing resources. Users have the ability to install and execute
any software of their choice, including operating systems and apps [18]. The
consumer lacks authority over the core cloud infrastructure, but retains control over
operating systems, storage, and deployed applications, and may have limited control
over certain networking components such as host firewalls [19].
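Amazon EC2 is one of the IaaS platforms mentioned above; the minimal Python sketch below (using the boto3 SDK) illustrates how a single virtual machine might be requested from such a provider. The AMI ID, region, and instance type are placeholder assumptions, and configured AWS credentials are assumed; it is an illustration of the IaaS model, not a recommended deployment script.

```python
# Minimal IaaS provisioning sketch using the AWS SDK for Python (boto3).
# The AMI ID, region, and instance type below are placeholders for illustration.
import boto3

def provision_vm(ami_id: str = "ami-0123456789abcdef0",
                 instance_type: str = "t3.micro") -> str:
    """Request a single virtual machine from an IaaS provider (here, EC2)."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId=ami_id,              # which machine image to boot
        InstanceType=instance_type,  # how much CPU/RAM to rent
        MinCount=1,
        MaxCount=1,
    )
    # The provider returns metadata for the machine it allocated on our behalf.
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print("Launched instance:", provision_vm())
```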
Platform as a Service (PaaS): Enables users to create, execute, and oversee
applications without having to handle intricate infrastructure intricacies. The user
is responsible for deploying consumer-developed or acquired apps into the cloud
infrastructure, using programming languages and tools that are supported by the
provider. The consumer lacks authority or jurisdiction over the cloud
infrastructure, but retains control over the deployed apps and potentially the
configurations of the application hosting environment [17].
Software as a Service (SaaS): Deploys diverse software programs via the
Internet, obviating the need for local installs. The consumer utilizes the
applications provided by the supplier, which are hosted on a cloud infrastructure.
The apps can be accessed by different client devices through a thin client interface,
such as a web browser, for example, web-based email. The consumer lacks the
ability to oversee or manipulate the fundamental cloud infrastructure, such as the
network, servers, operating systems, storage, and even specific application
functionalities, except for potentially limited user-specific configuration settings
for applications [17].
Function as a Service (FaaS): With serverless computing, developers can run
event-driven functions without worrying about the underlying infrastructure
[20].
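As a minimal illustration of the FaaS model, the sketch below shows an AWS Lambda-style handler in Python; the event shape (a JSON body containing a "name" field) is an assumption for demonstration, not taken from any specific service.

```python
# Minimal FaaS sketch: an AWS Lambda-style handler invoked per event.
# The event shape (a JSON body with a "name" field) is assumed for illustration.
import json

def handler(event, context):
    """Entry point called by the platform on each event; no server to manage."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```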
ii. The Benefits of Cloud Computing
In the modern era, cloud computing has revolutionized the way businesses
and individuals utilize technology, offering a myriad of benefits that cater to
diverse needs and operational scales. This paradigm shift in computing has
brought forth several key advantages that have fundamentally altered the
landscape of digital services and infrastructure management [21], [22].
One of the primary advantages of cloud computing is Cost Efficiency. Unlike
traditional computing models that often require significant upfront investment in
infrastructure, cloud computing operates on a pay-as-you-go basis. This model
allows users to pay only for the resources they use, significantly reducing initial
expenses and enabling more efficient budgeting. This cost-effective approach is
particularly beneficial for small and medium-sized enterprises (SMEs) and
startups, as it provides them with access to high-end technology without the hefty
price tag [23].
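A small worked example of the pay-as-you-go arithmetic, with assumed rather than provider-quoted prices, illustrates the contrast with upfront infrastructure investment:

```python
# Illustrative pay-as-you-go arithmetic with assumed (not provider-quoted) prices.
HOURLY_RATE = 0.05       # assumed price per VM-hour in USD
UPFRONT_SERVER = 4000.0  # assumed purchase price of an equivalent on-premise server

def monthly_cloud_cost(vm_count: int, hours_used: float) -> float:
    """Pay only for the hours actually consumed."""
    return vm_count * hours_used * HOURLY_RATE

# A startup running 3 VMs for roughly 200 hours each in a month:
print(monthly_cloud_cost(3, 200))   # 30.0 USD for that month,
# versus committing 3 * 4000 = 12000 USD upfront for comparable hardware.
```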
Another significant benefit is the provision of Managed Services. Cloud
providers take on the responsibility of maintaining and updating the
infrastructure, which includes regular software updates, security patches, and
system upgrades [24]. This relieves users from the complex and time-consuming
tasks associated with infrastructure management, allowing them to focus more on
their core business activities [25], [26].
Scalability is a hallmark of cloud services. The flexibility to scale resources
up or down based on current demands is a game-changer, especially in today's
dynamic market environment. Businesses can easily adjust their resource usage to
handle peak loads during high-demand periods or scale down during slower
periods, ensuring operational efficiency and cost-effectiveness [27].
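The sketch below illustrates the kind of threshold-based rule that underlies such scaling decisions; the thresholds and replica limits are assumptions for illustration, not any provider's actual autoscaling policy.

```python
# Sketch of a threshold-based scaling rule; thresholds and limits are assumed.
def desired_replicas(current: int, cpu_utilization: float,
                     scale_up_at: float = 0.75, scale_down_at: float = 0.25,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Scale out under load, scale in when idle, within fixed bounds."""
    if cpu_utilization > scale_up_at:
        return min(current + 1, max_replicas)
    if cpu_utilization < scale_down_at:
        return max(current - 1, min_replicas)
    return current

print(desired_replicas(current=4, cpu_utilization=0.9))   # 5 -> peak demand
print(desired_replicas(current=4, cpu_utilization=0.1))   # 3 -> quiet period
```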
Lastly, Accessibility is a crucial aspect of cloud computing. Users can access
cloud services and data from any location, provided they have an internet
connection. This level of accessibility facilitates remote collaboration and offers
unprecedented flexibility, making it easier for teams to work together from
different geographical locations. This feature is especially relevant in the current
landscape where remote working and digital collaboration have become more
prevalent [28].
b. Distributed Systems
Distributed computing represents a model of computation in which tasks are
segmented and executed across a network of interconnected computers. This
method promotes simultaneous processing and cooperative efforts, in contrast to
the conventional centralized computing model. It leverages the combined
capabilities of a network, facilitating effective resolution of problems and optimal
use of resources [29].
A distributed system (Fig. 2) [30] is a network of computers working together to
accomplish a task. It presents itself as an
integrated computing entity, and is commonly used in large-scale computer
systems. Nodes are dispersed across networked computers, allowing each to
operate distinctively at different times. Communication occurs through message
passing, and distributed systems facilitate the sharing of hardware and software
resources, as well as information exchange between individuals and processes
[30][31].
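A toy Python sketch of message passing between two such nodes is shown below; for simplicity both "nodes" run on one machine over a local socket, whereas a real distributed system would exchange these messages across a network.

```python
# Toy illustration of message passing between two "nodes" on one machine.
import socket
import threading

HOST, PORT = "127.0.0.1", 9009     # assumed local address for the demo
ready = threading.Event()

def node_b():
    """A node that waits for a message and sends back an acknowledgement."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                # signal that the node is reachable
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"ack: {request}".encode())

def node_a():
    """A node that sends a message and prints the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"task-42 completed")
        print(cli.recv(1024).decode())    # -> ack: task-42 completed

listener = threading.Thread(target=node_b)
listener.start()
ready.wait()                       # avoid connecting before the listener is up
node_a()
listener.join()
```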
A distributed system is characterized by its support for concurrency, with
several processors operating simultaneously across different networked
computers. These processors run in parallel [32], with each computer functioning
under its own local operating system. A key design feature of such a system is its
resilience to failures of individual computers. It is engineered to maintain
continuous service and operation even in the event of a node failure [33]. In
essence, a distributed system is engineered for fault tolerance, ensuring
continuous operation despite hardware, software, or network failures [34]. To
achieve this, it incorporates recovery and redundancy mechanisms, such as
duplicating information across multiple computers. These features are integral to
maintaining service continuity, albeit potentially at a reduced capacity, in the event
of system failures [35].
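The sketch below illustrates this redundancy idea in miniature: writes are duplicated across several in-memory "nodes", and reads fail over to a surviving replica when one node goes down. It is an illustration of the principle only, not a production replication protocol.

```python
# Sketch of redundancy through replication: every write goes to several nodes,
# and a read succeeds as long as at least one replica is still alive.
import random

class Node:
    def __init__(self, name: str):
        self.name, self.store, self.alive = name, {}, True

    def put(self, key, value):
        if self.alive:
            self.store[key] = value

    def get(self, key):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return self.store[key]

replicas = [Node("n1"), Node("n2"), Node("n3")]

def replicated_put(key, value):
    for node in replicas:
        node.put(key, value)          # duplicate the information on every node

def fault_tolerant_get(key):
    for node in replicas:             # fail over to the next replica on error
        try:
            return node.get(key)
        except ConnectionError:
            continue
    raise RuntimeError("all replicas unavailable")

replicated_put("invoice:17", {"total": 99})
random.choice(replicas).alive = False    # simulate a node failure
print(fault_tolerant_get("invoice:17"))  # service continues despite the failure
```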
The architectural design of a distributed system is notably more intricate
compared to a centralized system, owing to the potential complexity in
interactions among its various components and the underlying system
infrastructure. The performance of a distributed system is significantly influenced
by factors such as network bandwidth, system load, and the processing speed of
the individual computers within the network. This contrasts with a centralized
system, where performance largely hinges on the speed of a singular processor
[36] [37]. The efficiency and response time of a distributed system can fluctuate
based on network load and bandwidth, leading to varying user experiences. Nodes
within such a system typically operate as independent entities without centralized
oversight. Additionally, the network linking these nodes constitutes a complex
system, independent of the control of the computers utilizing it. Distributed
systems find application in a myriad of settings, including fixed-line, mobile, and
wireless networks, corporate intranets, the Internet, and the World Wide Web
[38].
Figure 2. A distributed system [30].
i. Benefits of Distributed Computing
In the realm of distributed computing systems, several key principles stand
out as pillars of their effectiveness and efficiency: parallel processing, scalability,
fault tolerance, and resource efficiency [39].
Parallel Processing: The implementation of parallel processing in
distributed systems involves the distribution of computational tasks across
multiple computing units. This method allows for the simultaneous processing of
different parts of a complex problem, which significantly reduces the overall
processing time. By leveraging the capabilities of multiple computers, parallel
processing effectively addresses the computational demands of intricate and
resource-intensive tasks [40].
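A minimal Python sketch of this idea, splitting one large computation into chunks processed concurrently by several worker processes, is shown below.

```python
# Minimal parallel-processing sketch: one large job split into independent
# chunks that run concurrently on several worker processes.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker handles one piece of the overall problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]        # four sub-tasks
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)   # processed simultaneously
    print(sum(partials))                           # combine the partial results
```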
Scalability: Scalability is a fundamental characteristic of distributed systems,
enabling them to efficiently manage increases in workload. This is achieved by
augmenting the system with additional nodes as needed. The inherent design of
distributed systems allows for this expansion without adversely affecting system
performance. As a result, the system remains adept at handling growing demands,
ensuring that the addition of new nodes or resources is a seamless process that
maintains operational efficiency [41].
Fault Tolerance: Distributed systems are inherently designed to exhibit a
high degree of fault tolerance. This is achieved through their ability to remain
operational even in the event of individual node failures. The architecture of
distributed systems ensures that the failure of one or more nodes does not
compromise the overall functionality of the system. This robustness against
hardware issues is crucial in maintaining continuous operation, thereby enhancing
the reliability and resilience of the system [42].
Resource Efficiency: The efficient utilization of resources is a key advantage
of distributed systems. By strategically distributing computational tasks across
various nodes in the network, these systems optimize the use of available
resources. This approach not only maximizes the performance of individual nodes
but also enhances the overall responsiveness and efficiency of the system.
Consequently, distributed systems are able to achieve a higher level of operational
effectiveness, making practical and optimal use of the resources distributed
throughout the network [43].
ii. Main Types of Distributed Computing
In the diverse landscape of modern computing, various architectures play
pivotal roles in addressing specific computational needs and challenges. Among
these, cluster computing, grid computing, parallel computing, and Peer-to-Peer
(P2P) computing stand out for their unique approaches and applications.
Cluster Computing: Cluster computing refers to the technique of linking
multiple computers, often known as nodes, to function collectively as a single,
more powerful system. This interconnected group of computers shares processing
tasks and resources, working in tandem to handle computationally intensive tasks.
Each node in a cluster operates in concert with others, resulting in enhanced
processing capabilities and higher availability. Cluster computing is particularly
effective in scenarios requiring high-performance computing, as it combines the
power of individual nodes to form a robust and scalable solution [44][45].
Grid Computing: Grid computing is a form of distributed computing that
involves harnessing the power of a vast network of computers spread across
different geographical locations. These computers collaborate to work on a
common task, typically involving large-scale, complex computations such as
scientific research, data analysis, and simulations. Unlike cluster computing, where
nodes are closely connected and often homogenous, grid computing leverages a
more heterogeneous and expansive network, pooling resources from various
systems. This approach is ideal for tasks that require immense computational
power and data processing, often transcending the capabilities of a single machine
or local cluster [46].
Parallel Computing: Parallel computing is a computational approach where a
large task is divided into smaller subtasks, which are then processed concurrently
across multiple processors [47] [48]. The method optimizes computational speed
and efficiency by allowing different parts of a task to be executed simultaneously
rather than sequentially [49]. It effectively reduces the time taken to process
complex and large-scale computations, making it an essential approach in high-
performance computing environments [50]. Parallel computing is often used in
scientific simulations, image processing, and complex mathematical calculations,
where the ability to process multiple operations at the same time significantly
accelerates the overall task completion.
Peer-to-Peer (P2P) Computing: Peer-to-Peer computing, commonly known as
P2P computing, is a decentralized network model where each computer, or peer, in
the network shares resources and processing power directly with other peers
without the need for a centralized administrative system. In a P2P network, all
computers participate in data sharing and processing, making the network highly
resilient and scalable. This model is frequently used for file sharing, distributed
computing, and collaborative work. P2P computing allows for a more efficient use
of resources as each peer contributes to and benefits from the network, leading to
a more robust and fault-tolerant system [51].
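The toy sketch below captures the essential P2P idea: each peer both holds and requests data directly from its neighbours, and data spreads through the network without a central server. It is a simplified, in-process illustration rather than a real networked protocol.

```python
# Toy peer-to-peer sketch: every peer both offers and requests data directly
# from other peers, with no central server coordinating the exchange.
class Peer:
    def __init__(self, name):
        self.name = name
        self.chunks = {}        # chunk_id -> bytes held locally
        self.neighbours = []    # direct connections to other peers

    def fetch(self, chunk_id):
        if chunk_id in self.chunks:
            return self.chunks[chunk_id]
        for peer in self.neighbours:              # ask neighbours directly
            if chunk_id in peer.chunks:
                self.chunks[chunk_id] = peer.chunks[chunk_id]  # cache locally,
                return self.chunks[chunk_id]      # so this peer can serve it too
        raise LookupError(f"{chunk_id} not found in the network")

a, b, c = Peer("a"), Peer("b"), Peer("c")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
c.chunks["part-1"] = b"hello"
print(b.fetch("part-1"))   # b pulls the chunk straight from c
print(a.fetch("part-1"))   # a can now get it from b: the data has spread
```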
C. Literature Review
Gonzalez et al. in 2017 conducted a systematic review of over 110
publications on cloud resource management, developing a taxonomy and
highlighting future challenges. Their work, while comprehensive, notes potential
biases and scope limitations. Key findings include identifying gaps in security and
dynamic resource allocation in diverse cloud environments [52].
2019 - Aziz et al. analyzed Spark’s resource allocation, emphasizing machine
learning's impact on performance, offering practical tuning advice. Their findings,
specific to certain setups, may have limited wider applicability. They conclude that
optimal allocation significantly enhances Spark's performance, particularly noting
the role of RDDs' persistence [53].
Paper [54] proposes Dynamic Resource Allocation (DRA) and Energy Saving
techniques using VM live migration in OpenStack. These methods aim to enhance
VM efficiency and reduce energy consumption in physical machines, achieving
approximately 39.89% energy savings. However, scalability and reliance on
OpenStack are potential challenges. The study demonstrates that integrating DRA
with energy-saving approaches significantly enhances energy efficiency and
resource utilization in cloud environments.
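For illustration, the sketch below shows the kind of threshold-based migration planning that underlies such DRA schemes; the thresholds and host data are assumptions, and this is not the OpenStack implementation evaluated in [54].

```python
# Simplified dynamic-resource-allocation sketch: relieve overloaded hosts and
# drain under-loaded ones so idle hosts can be powered down. Thresholds and
# data are illustrative only.
OVERLOAD, UNDERLOAD = 0.85, 0.20

def plan_migrations(hosts):
    """hosts: {host: {vm: cpu_share}}; returns a list of (vm, source, target) moves."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    moves = []
    for src, vms in hosts.items():
        if load[src] > OVERLOAD or (0 < load[src] < UNDERLOAD):
            for vm, share in sorted(vms.items(), key=lambda kv: kv[1]):
                # pick the most loaded host that can still absorb this VM
                candidates = [h for h in hosts
                              if h != src and load[h] + share <= OVERLOAD]
                if not candidates:
                    break
                dst = max(candidates, key=lambda h: load[h])
                moves.append((vm, src, dst))
                load[src] -= share
                load[dst] += share
                if UNDERLOAD <= load[src] <= OVERLOAD or load[src] == 0:
                    break
    return moves

hosts = {"h1": {"vm1": 0.6, "vm2": 0.4}, "h2": {"vm3": 0.1}, "h3": {}}
print(plan_migrations(hosts))  # -> [('vm2', 'h1', 'h2')]: the overloaded h1 is relieved
```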
2019 - Guo et al. utilized stochastic dynamic models for backup resource
provisioning in cloud computing. Their comprehensive and practical approach,
validated on Amazon EC2, provides efficient strategies for SLA formulation and
resource management in cloud data centers [55].
2020 - Fard et al. conducted a systematic review of resource allocation
mechanisms in cloud computing. They offered a detailed classification and analysis
of various strategies. However, the rapid evolution of cloud computing could
quickly outdate some methods. The study provides a detailed overview of the
current state of cloud resource allocation, highlighting potential areas for future
research [56].
2020 - Shukur et al. examined various approaches and algorithms for
resource allocation in cloud computing through virtualization. The study highlights
how virtualization impacts cloud resource allocation, network performance, and
cost-efficiency. Despite benefits, challenges such as scalability and adaptability in
diverse cloud environments were noted. Key findings include that each approach
and algorithm contribute to optimizing cloud resource use, balancing loads, and
conserving energy [57] [50].
[58] presented a taxonomy focusing on autonomic and elastic resource
management in cloud environments. Their study analyzes the design and
applications, offering crucial insights for future research and development in cloud
computing resource management. It addresses the efficiency and adaptability of
current resource management strategies, especially under varying cloud demands.
The classification emphasizes the need for more efficient and adaptable resource
management solutions in cloud computing.
2021 - Lindsay et al. conducted a comprehensive review of distributed
computing's evolution over six decades, focusing on its development,
decentralization, and influences on centralization trends. The paper provides a
historical perspective, charting the technological progression and shift towards
decentralization in various computing paradigms. However, its broad scope might
limit the depth of analysis for each paradigm. Key observations include a trend
towards diversification and specialization in distributed systems, especially for
low-latency tasks, and the adaptation to challenges like increasing complexity and
technological constraints such as the end of Moore's Law [59].
[60] focused on the application of machine learning in cloud resource
management. Their study offers a thorough analysis of current practices and
outlines potential directions for future research. They identify that inefficiencies in
Virtual Machine Placement (VMP) lead to significant resource wastage and
increased operational costs. The paper provides an in-depth insight into cloud
resource management, particularly emphasizing the applications and implications
of machine learning in this field.
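As an illustration of the kind of learning step such systems rely on, the sketch below trains a simple regression model on synthetic utilization traces to predict a VM's near-future CPU demand, which a placement policy could then use to pack VMs without overloading hosts; it is not one of the models surveyed in [60].

```python
# Sketch: predict a VM's near-future CPU demand from recent readings.
# Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
history = rng.uniform(0.1, 0.9, size=(200, 4))            # last 4 CPU readings per sample
future = history.mean(axis=1) + rng.normal(0, 0.05, 200)  # next-interval demand (synthetic)

model = LinearRegression().fit(history, future)
recent = np.array([[0.7, 0.75, 0.8, 0.85]])               # a VM trending upwards
predicted = model.predict(recent)[0]

# A placer can reserve capacity for the *predicted* demand instead of the
# current reading, reducing overload-driven migrations.
print(f"predicted next-interval CPU share: {predicted:.2f}")
```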
2022 - Yu et al. introduced a novel approach to cloud resource scheduling
using deep reinforcement learning, specifically tailored for distributed services
with a focus on container-based models and algorithms [61]. Their work offers a
detailed and innovative perspective on resource scheduling, potentially increasing
efficiency in cloud service operations. However, the complexity of implementation
and the need for adaptability in various cloud environments present significant
challenges. The study demonstrates the effectiveness of deep reinforcement
learning in cloud scheduling, notably improving both reliability and efficiency of
cloud services [62].
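The sketch below is a heavily simplified, tabular stand-in for this idea: a scheduler learns by trial and error which of two nodes to place each task on, rewarding balanced placements. The actual work uses deep networks and container-level models, which are omitted here.

```python
# Heavily simplified, tabular stand-in for deep-RL scheduling: the agent learns
# which of two nodes to place each incoming task on, rewarding balanced load.
import random
from collections import defaultdict

random.seed(0)
NODES, EPISODES, ALPHA, GAMMA, EPS = 2, 2000, 0.1, 0.9, 0.1
Q = defaultdict(float)                       # Q[(state, action)] -> learned value

def run_episode():
    loads = [0, 0]                           # tasks currently placed on each node
    for _ in range(10):                      # schedule ten tasks per episode
        state = tuple(loads)
        if random.random() < EPS:            # explore occasionally
            action = random.randrange(NODES)
        else:                                # otherwise use the best-known action
            action = max(range(NODES), key=lambda a: Q[(state, a)])
        loads[action] += 1
        reward = -abs(loads[0] - loads[1])   # penalise imbalance between nodes
        nxt = tuple(loads)
        best_next = max(Q[(nxt, a)] for a in range(NODES))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

for _ in range(EPISODES):
    run_episode()

# The learned policy prefers the lighter node when the cluster is unbalanced.
print("preferred node for load state (3, 2):",
      max(range(NODES), key=lambda a: Q[((3, 2), a)]))
```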
[63] focused on the analysis of scheduling within cloud computing,
particularly emphasizing the application of machine learning. Their study provides
a comprehensive overview of the various challenges associated with resource
scheduling in cloud environments. They acknowledge that rapid advancements in
cloud technology may render some earlier concepts obsolete. The paper advocates
for the use of machine learning as a key tool for intelligent scheduling, addressing
the complexities inherent in modern cloud computing scenarios.
[64] conducted a review focusing on distributed intelligence across the
Edge-to-Cloud continuum, delving into aspects of Machine Learning and Data
Analytics. Their paper presents a comprehensive overview of distributed
intelligence technologies, outlining the key challenges faced and potential
directions for future research. They note that the rapid evolution of technology in
this field might constrain the scope of their study and affect the longevity of its
findings. The study emphasizes the critical need for continued research in
performance optimization and the efficient deployment of machine learning and
data analytics within the realm of distributed computing.
2022 - Debauche et al. employed a systematic review methodology to
compare various computing architectures within the context of Agriculture 4.0.
Their study assesses the suitability, adaptability, and potential for industrial
transition of Collaborative Computing models in agriculture. However, they
acknowledge potential biases in source selection and the rapid evolution of
technology, which might limit the relevance of their findings over time. The
abstract of their paper provides a comparative analysis of different architectures in
Agriculture 4.0, evaluating them against eight specific criteria and discussing the
advantages and disadvantages of each [65].
[66] conducted an in-depth study on dynamic resource allocation (DRA) in
cloud computing, reviewing a range of approaches, scheduling techniques, and
optimization metrics. The paper effectively reviews and clarifies various DRA
methods, providing a detailed categorization of scheduling and optimization
techniques pivotal in the evolution of cloud computing. However, it is noted that
the study primarily consolidates existing knowledge and lacks empirical testing of
these methods, offering no new data or groundbreaking methodologies. The paper
concludes that DRA in cloud computing encompasses heuristic, numerical, and
learning methods, underscoring the importance of efficiency, scalability, and
adaptability within cloud environments.
The study [67] presents a novel integration of machine learning, task
scheduling, and NSUPREME encryption to enhance security in cloud computing.
The approach not only optimizes resource usage and reduces power consumption
but also significantly improves the responsiveness of cloud computing systems.
Despite its advantages, the study recognizes challenges in adaptability due to
rapidly changing technologies in the field. The findings demonstrate that this
hybrid approach, combining machine learning with advanced scheduling
techniques, outperforms existing methods in cloud resource management, marking
a significant advancement in the efficiency and security of cloud computing
operations.
Research by [68] proposes an AI-centric, data-driven Resource Management
System (RMS) model designed for resource management in distributed computing
systems. The study highlights how AI-driven resource management significantly
enhances efficiency and adaptability within complex computing environments.
However, it acknowledges the challenges associated with implementing AI-centric
solutions in diverse and varying environments. The findings of the study
demonstrate the feasibility and potential of AI-centric approaches in the context of
modern computing, suggesting promising directions for future research in this
field.
2023 - A further study examined resource allocation algorithms across different
computing environments, including cloud computing and cellular networks. Their
work provides a comprehensive comparison of a variety of algorithms, along with
their respective applications and the environments in which they are
implemented. The study goes beyond mere comparison, offering a critical
evaluation of these resource allocation algorithms. It delves into the strengths and
weaknesses of each algorithm and discusses the factors that influence their
performance. This critical assessment aids in understanding the effectiveness and
suitability of these algorithms in different contexts [69].
2023 - Ilager et al. delve into AI-centric
resource management within distributed systems, emphasizing data-driven
solutions and AI-based systems. Their research underscores the feasibility and
advantages of AI-centric methods in distributed computing, particularly noting
improvements in efficiency and adaptability. However, they caution that the rapid
evolution of technology could potentially impact the long-term relevance of their
findings. The study highlights the significant role of AI in enhancing resource
management, pointing out its practical applications and suggesting directions for
future research in this rapidly advancing area [70].
[71] introduced an innovative method for cloud resource allocation, utilizing
fuzzy meta-heuristics that integrate Takagi–Sugeno–Kang (TSK) neural-fuzzy
systems with ant colony optimization (ACO). This approach primarily targets
enhancing energy efficiency and optimizing virtual machine (VM) migration
processes. The method adeptly combines fuzzy logic and ACO, leading to more
efficient cloud resource prediction and allocation, significantly reducing energy
consumption while boosting overall system efficiency. However, the complexity of
this proposed method poses potential challenges in practical implementation,
especially in large-scale systems, and its real-world application might vary from
the results observed in theoretical simulations. Nevertheless, simulation results
indicate a notable reduction in energy usage and an improvement in resource
allocation efficiency, demonstrating the effective integration of neural-fuzzy
systems with ACO in the realm of cloud computing.
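To make the ACO component concrete, the sketch below shows a bare-bones ant-colony search assigning VMs to hosts under a simple energy-style cost; the loads and parameters are assumptions, and the Takagi–Sugeno–Kang neural-fuzzy predictor of [71] is omitted entirely.

```python
# Bare-bones ant-colony-optimization sketch for assigning VMs to hosts,
# showing only the pheromone mechanics; loads and parameters are assumed.
import random

random.seed(1)
VM_LOAD = [0.5, 0.3, 0.4, 0.2]          # CPU demand of each VM (assumed)
HOSTS, CAPACITY = 2, 1.0
EVAPORATION, ANTS, ITERATIONS = 0.5, 10, 30
pheromone = [[1.0] * HOSTS for _ in VM_LOAD]   # desirability of (vm -> host)

def build_assignment():
    """Each ant assigns every VM to a host, biased by pheromone levels."""
    return [random.choices(range(HOSTS), weights=pheromone[v])[0]
            for v in range(len(VM_LOAD))]

def energy_cost(assignment):
    """Penalise overloaded hosts and count how many hosts must stay powered on."""
    used = [0.0] * HOSTS
    for vm, host in enumerate(assignment):
        used[host] += VM_LOAD[vm]
    overload = sum(max(0.0, u - CAPACITY) for u in used)
    active = sum(1 for u in used if u > 0)
    return active + 10 * overload

best, best_cost = None, float("inf")
for _ in range(ITERATIONS):
    for assignment in (build_assignment() for _ in range(ANTS)):
        cost = energy_cost(assignment)
        if cost < best_cost:
            best, best_cost = assignment, cost
    for v in range(len(VM_LOAD)):          # evaporate, then reinforce the
        for h in range(HOSTS):             # choices made by the best ant so far
            pheromone[v][h] *= EVAPORATION
        pheromone[v][best[v]] += 1.0

print("best assignment:", best, "cost:", best_cost)
```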
D. Discussion and Comparison
The collection of reviewed works presents a comprehensive exploration of
various aspects of resource management in cloud and distributed computing,
highlighting advancements and challenges from 2017 to 2023. These studies
collectively underscore the evolving landscape of cloud and distributed computing,
emphasizing the role of machine learning, AI, and innovative algorithms in
enhancing efficiency, security, and adaptability, while also acknowledging the
challenges posed by rapid technological advancements and implementation
complexities.
Table 1 provides a comparative overview of the reviewed studies, organized into
five columns: Author, Year, Techniques/Methodology, Strengths/Benefits, and
Weaknesses/Limitations. It supplies basic bibliographic information, outlines each
study's research methods, and offers a balanced view of its contributions and
limitations; the key findings of each study are summarized in the narrative review above.
Table 1. Comparative Analysis of Research Studies in Distributed Systems and Cloud Resource Management

| Author | Year | Techniques/Methodology | Strengths/Benefits | Weaknesses/Limitations |
|---|---|---|---|---|
| [52] | 2017 | Systematic review of 110+ publications on cloud resource management, chosen from journals and conferences. | Develops a taxonomy for cloud resource management, quantitatively assessing solutions and future challenges. | Scope and potential biases in the articles selected for systematic review. |
| [53] | 2019 | Analyzes Spark's resource allocation, focusing on machine learning's performance impact. | Offers detailed evaluations and advice for Apache Spark resource tuning. | Findings specific to tested setups, potentially limiting broader applicability. |
| [54] | 2019 | Suggests DRA and energy-saving methods using VM live migration in OpenStack. | Methods focus on efficient VM use and cutting physical machine energy use, saving ~39.89%. | Scalability issues and dependency on OpenStack infrastructure may arise. |
| [55] | 2019 | Uses stochastic dynamic models for backup resource provisioning in cloud computing. | Comprehensive, practical approach validated on Amazon EC2. | — |
| [56] | 2020 | Conducts a systematic review of cloud computing resource allocation mechanisms. | Offers detailed classification and analysis of diverse resource allocation strategies. | Rapid cloud computing evolution may quickly date some methods. |
| [57] | 2020 | Examines approaches and algorithms for cloud computing resource allocation via virtualization. | Shows virtualization's impact on cloud resource allocation, network performance, and cost-efficiency. | Challenges include scalability and adaptability in diverse cloud environments. |
| [58] | 2021 | Presents a taxonomy of autonomic, elastic resource management in clouds, analyzing design and applications. | Provides key insights for future cloud computing resource management research and development. | Addresses efficiency and adaptability of current resource management under fluctuating cloud demands. |
| [59] | 2021 | Reviews the six-decade evolution of distributed computing, exploring development, decentralization, and influences on centralization trends. | Gives a historical view of distributed computing, its technological progression, and the shift towards decentralization across various paradigms. | Broad scope may restrict in-depth analysis of each distributed computing paradigm. |
| [60] | 2022 | Machine learning for resource management in cloud settings. | Offers thorough analysis and outlines future research directions. | VMP inefficiency causes resource waste and higher operational costs. |
| [62] | 2022 | Introduces deep reinforcement learning-based cloud resource scheduling for distributed services, with a novel container-focused model and algorithm. | Offers a detailed and innovative approach to resource scheduling, potentially enhancing efficiency in cloud services. | Implementation complexity and adaptability to diverse cloud environments pose challenges. |
| [63] | 2022 | Analyzes scheduling in cloud computing, emphasizing machine learning application. | Offers a detailed overview of resource scheduling challenges in cloud computing. | Advancements in cloud technology could surpass earlier ideas. |
| [64] | 2022 | Reviews distributed intelligence in the Edge-to-Cloud continuum, covering machine learning and data analytics. | Offers an overview of distributed intelligence technologies, with key challenges and future research directions. | Rapid technological evolution may limit the study's scope and the findings' longevity. |
| [65] | 2022 | Uses a systematic review to compare architectures in the Agriculture 4.0 context. | Assesses Collaborative Computing in Agriculture 4.0 for suitability, adaptability, and industrial transition. | Biases in source selection and technology's rapid evolution may date some findings. |
| [66] | 2022 | Studies dynamic resource allocation in cloud computing, reviewing various approaches, scheduling, and optimization metrics. | Reviews and clarifies DRA methods, identifying and categorizing scheduling and optimization techniques in cloud computing evolution. | Lacks empirical testing of methods; primarily consolidates existing knowledge without new data or methods. |
| [67] | 2022 | A shared-memory parallel processing method. | Improves system control and speed in diverse multicore setups. | Potential issues include complex management and hardware reliance. |
| [68] | 2023 | Proposes an AI-centric, data-driven RMS model for resource management in distributed systems. | AI-driven resource management enhances efficiency and adaptability in complex computing environments. | AI-centric solutions face challenges when implemented in diverse environments. |
| [69] | 2023 | Examines resource allocation algorithms in environments such as cloud computing and cellular networks. | Provides a broad comparison of various algorithms, applications, and environments. | — |
| [70] | 2023 | AI-centric resource management in distributed systems with data-driven solutions and AI-based systems. | Highlights the feasibility and benefits of AI-centric methods in distributed computing for efficiency and adaptability. | Technology's fast evolution may affect the findings' relevance. |
| [71] | 2023 | Proposes fuzzy meta-heuristics for cloud resource allocation, combining TSK neural-fuzzy systems and ACO, focusing on energy efficiency and VM migration. | Integrates fuzzy logic and ACO for efficient cloud resource prediction and allocation, reducing energy consumption and improving overall efficiency. | Complexity may challenge practical implementation, especially in large systems; real-world application may differ from theoretical simulations. |
E. Extracted Statistics
The research spans from 2017 to 2023, beginning with the identification of
gaps in cloud resource management, particularly in security and dynamic
allocation. Studies in 2019 and 2020 address these challenges with energy-
efficient DRA and guidelines for resource management. The role of machine
learning and AI becomes prominent in 2022 and 2023, with insights into
intelligent scheduling, performance optimization, and the introduction of
innovative methods like deep reinforcement learning and neural-fuzzy systems.
Recommendations and classifications for efficient resource management emerge in
2020 and 2021, while 2021 and 2022 also bring attention to specific fields like
distributed systems and Agriculture 4.0. The research concludes in 2023 with
evaluations of resource allocation algorithms and the potential of AI-centric
approaches, highlighting the continuous evolution in cloud and distributed
computing and the critical need for ongoing research and development.
Figure 3. Thematic Distribution of Research Articles in Cloud Resource Management
(2017-2023): Role of Machine Learning and AI (26%), Applications and Innovations
(16%), Implications for Specific Fields (58%).
The chart in Figure 3 represents a distribution of 19 research articles
related to cloud resource management, categorized into three main areas of focus:
Role of Machine Learning and AI: Out of the 19 articles, 5 are dedicated to
exploring the role of machine learning and artificial intelligence in cloud resource
management. This signifies a substantial interest in how AI and machine learning
technologies are shaping the landscape of cloud computing, specifically in the
context of resource management.
Applications and Innovations: 3 of the 19 articles
delve into applications and innovations in the field. This category likely covers new
technologies, methods, or innovative approaches being developed and applied in
cloud resource management, showcasing the advancements and novel solutions in
the domain.
Implications for Specific Fields: The largest category, with 11 articles, focuses
on the implications of cloud resource management in specific fields or industries.
This indicates a broad and diverse range of applications and impacts of cloud
computing advancements across various sectors, highlighting how these
innovations are being applied and the consequences they have in different
contexts.
In a comprehensive overview of advancements in cloud and distributed
computing from 2017 to 2023, the papers collectively emphasize a diverse range of
methodologies and techniques. Key focuses include systematic reviews of cloud
resource management and allocation mechanisms, highlighting the evolution of
technologies and methodologies over the years. A significant portion of the
research concentrates on integrating machine learning and AI into resource
management, aiming to optimize performance, efficiency, and security. Techniques
such as dynamic resource allocation, deep reinforcement learning, and the use of
fuzzy meta-heuristics demonstrate innovative approaches to enhance cloud
computing efficiency, particularly in VM efficiency, energy saving, and resource
scheduling. Overall, these studies reflect a dynamic and evolving field, increasingly
relying on sophisticated algorithms and AI-driven strategies to address the
complexities of modern cloud and distributed computing environments.
Figure 4. Techniques and Methodologies in Cloud and Distributed Systems: A 2017-2023
Perspective: Machine Learning and AI in Resource Management (33%), Resource Allocation
Techniques and Models (28%), Specialized Methodological Approaches (22%), Systematic
Reviews and Historical Analyses (17%).
In the domain of cloud and distributed computing research, the
categorization and analysis of various scholarly papers reveal distinct focus areas,
each represented by a specific number of studies. Firstly, the category of
"Systematic Reviews and Historical Analyses" comprises 4 papers, reflecting a
scholarly interest in comprehensive overviews and historical perspectives of cloud
computing's evolution. The second category, "Machine Learning and AI in Resource
Management," is the most prominent, with 6 papers. This underscores a significant
trend towards integrating advanced technologies like machine learning and
artificial intelligence in managing cloud resources. The third category, "Resource
Allocation Techniques and Models," includes 5 papers, indicating a strong research
focus on developing and refining methods for efficient resource distribution in
cloud environments. Lastly, the "Specialized Methodological Approaches" category,
also comprising 4 papers, highlights research dedicated to specific, innovative
techniques and models in cloud computing. Collectively, these counts not only offer
a quantitative overview of the research landscape but also qualitatively
underscore the diverse methodologies and thematic areas that are shaping the
field of cloud and distributed computing.
F. Recommendations
In the rapidly evolving field of cloud and distributed computing, two pivotal
recommendations stand out for driving future progress. Firstly, a significant
emphasis should be placed on integrating artificial intelligence (AI) and machine
learning technologies. This integration is crucial as it promises to significantly
enhance the efficiency and security of cloud computing environments. By
leveraging AI, cloud systems can become more adaptive and intelligent, leading to
optimized resource management and improved performance. Secondly, the
development of dynamic and adaptive resource management strategies is
essential. Such strategies will allow cloud systems to efficiently adapt to fluctuating
demands and workloads, ensuring optimal resource utilization. This focus on
adaptability not only improves the overall efficiency of cloud services but also
ensures their scalability and reliability, catering to the growing and diverse needs
of modern digital infrastructures.
G. Conclusion
The papers reviewed from 2017 to 2023 present a comprehensive and
evolving landscape of research in the field of cloud computing and distributed
systems, each contributing significantly to our understanding and capabilities in
these areas, continuously pushing the boundaries of what is possible in cloud
computing and distributed systems. These studies collectively underscore the
importance of innovative approaches to resource management, highlighting the
significant potential of advanced computational technologies to revolutionize
distributed systems and cloud computing resource management. They also bring
to light the critical balance between advancing technology and addressing practical
implementation challenges, ensuring that these solutions are not only theoretically
sound but also viable in real-world applications. As we move forward, the insights
from these papers will undoubtedly guide future research and development,
shaping the next generation of cloud computing and distributed system
technologies.
H. References
[1] B. N. Taha, Distributed And Cloud Systems Influences On Pattern Matching
Techniques, 2023.
[2] Z. S. Ageed and S. R. M. Zeebaree, “Distributed Systems Meet Cloud Computing: A
Review of Convergence and Integration,” International Journal of Intelligent Systems
and Applications in Engineering, vol. 12, no. 11s, pp. 469490, 2024.
[3] S. Koehler, H. Desamsetti, V. K. R. Ballamudi, and S. Dekkati, “Real World
Applications of Cloud Computing: Architecture, Reasons for Using, and Challenges,”
Asia Pacific Journal of Energy and Environment, vol. 7, no. 2, pp. 93102, 2020.
ISSN 2549-7286 (online)
Indonesian Journal of Computer Science Vol. 13, No. 2, Ed. 2024 | page 1929
[4] S. R. Swain, A. K. Singh, and C. N. Lee, “Efficient resource management in cloud
environment,” arXiv preprint arXiv:2207.12085, 2022.
[5] A. Sunyaev and A. Sunyaev, “Cloud computing,” Internet Computing: Principles of
Distributed Systems and Emerging Internet-Based Technologies, pp. 195236, 2020.
[6] I. M. I. Zebari, S. R. M. Zeebaree, and H. M. Yasin, “Real time video streaming from
multi-source using client-server for video distribution,” in 2019 4th Scientific
International Conference Najaf (SICN), IEEE, 2019, pp. 109114.
[7] R. Mahajan, P. R. Patil, M. Shahakar, and A. Potgantwar, “An Analytical Evaluation of
Various Approaches for Load Optimization in Distributed System,” International
Journal of Intelligent Systems and Applications in Engineering, vol. 12, no. 1s, pp. 526
548, 2024.
[8] S. T. Siddiqui, S. Alam, Z. A. Khan, and A. Gupta, “Cloud-based e-learning: using
cloud computing platform for an effective e-learning,” in Smart innovations in
communication and computational sciences: proceedings of ICSICCS-2018, Springer,
2019, pp. 335346.
[9] P. Y. Abdullah, S. R. Zeebaree, H. M. Shukur, and K. Jacksi, “HRM system using cloud
computing for Small and Medium Enterprises (SMEs),” Technology Reports of Kansai
University, vol. 62, no. 04, p. 04, 2020.
[10] S. Giri and S. Shakya, “Cloud computing and data security challenges: A Nepal case,”
International Journal of Engineering Trends and Technology, vol. 67, no. 3, p. 146,
2019.
[11] M. Attaran and J. Woods, “Cloud computing technology: improving small business
performance using the Internet,” Journal of Small Business & Entrepreneurship, vol. 31,
no. 6, pp. 495519, 2019.
[12] D. C. Marinescu, Cloud computing: theory and practice. Morgan Kaufmann, 2022.
[13] V. Premnath and M. Vetrivel, “Energy Efficient Search Scheme Over Encrypted Data
On Mobile Users On Cloud,” South Asian Journal of Engineering and Technology, vol.
8, no. 2, pp. 217222, 2019.
[14] Ltd. Huawei Technologies Co., “Virtualization Technology,” in Cloud Computing
Technology, Springer, 2022, pp. 97144.
[15] S. R. M. Zeebaree, H. M. Shukur, L. M. Haji, R. R. Zebari, K. Jacksi, and S. M. Abas,
“Characteristics and analysis of hadoop distributed systems,” Technology Reports of
Kansai University, vol. 62, no. 4, pp. 15551564, 2020.
[16] M. E. Suliman and K. S. A. Madinah, “A brief analysis of cloud computing
Infrastructure as a Service (IaaS),” International Journal of Innovative Science and
Research TechnologyIJISRT, vol. 6, no. 1, pp. 14091412, 2021.
[17] F. Wulf, T. Lindner, S. Strahringer, and M. Westner, “IaaS, PaaS, or SaaS? The Why of
Cloud Computing Delivery Model Selection: Vignettes on the Post-Adoption of Cloud
Computing,” in Proceedings of the 54th Hawaii International Conference on System
Sciences, 2021, 2021, pp. 62856294.
[18] H. Malallah et al., “A comprehensive study of kernel (issues and concepts) in different
operating systems,” Asian Journal of Research in Computer Science, vol. 8, no. 3, pp.
1631, 2021.
[19] H. B. Patel and N. Kansara, “Cloud Computing Deployment Models: A Comparative
Study,” International Journal of Innovative Research in Computer Science &
Technology (IJIRCST), 2021.
ISSN 2549-7286 (online)
Indonesian Journal of Computer Science Vol. 13, No. 2, Ed. 2024 | page 1930
[20] A. P. Rajan, “A review on serverless architectures-function as a service (FaaS) in cloud
computing,” Telkomnika (Telecommunication Computing Electronics and Control),
vol. 18, no. 1, pp. 530537, 2020.
[21] A. E. Oke, A. F. Kineber, I. Al-Bukhari, I. Famakin, and C. Kingsley, “Exploring the
benefits of cloud computing for sustainable construction in Nigeria, Journal of
Engineering, Design and Technology, vol. 21, no. 4, pp. 973990, 2023.
[22] S. A. Bello et al., “Cloud computing in construction industry: Use cases, benefits and
challenges,” Autom Constr, vol. 122, p. 103441, 2021.
[23] R. Kollolu, “Infrastructural constraints of Cloud computing,” International Journal of
Management, Technology and Engineering, vol. 10, pp. 255260, 2020.
[24] S. R. Zeebaree, “DES encryption and decryption algorithm implementation based on
FPGA,” Indones. J. Electr. Eng. Comput. Sci, vol. 18, no. 2, pp. 774–781, 2020.
[25] M. Norman et al., “CloudBank: Managed Services to Simplify Cloud Access for
Computer Science Research and Education,” in Practice and Experience in Advanced
Research Computing, 2021, pp. 14.
[26] O. Anicho and T. Abdullah, “Impact of Cloud-based Infrastructure on Telecom
Managed Services Models,” Data Science: Journal of Computing and Applied
Informatics, vol. 4, no. 2, pp. 7188, 2020.
[27] S. El Kafhali, I. El Mir, K. Salah, and M. Hanini, “Dynamic scalability model for
containerized cloud services,” Arab J Sci Eng, vol. 45, pp. 1069310708, 2020.
[28] M. N. Abdullah and W. S. Bhaya, “Predication of Quality of Service (QoS) in Cloud
Services: A Survey,” in Journal of Physics: Conference Series, IOP Publishing, 2021, p.
012049.
[29] Z. N. Rashid, S. R. M. Zeebaree, R. R. Zebari, S. H. Ahmed, H. M. Shukur, and A.
Alkhayyat, “Distributed and Parallel Computing System Using Single-Client Multi-
Hash Multi-Server Multi-Thread,” in 2021 1st Babylon International Conference on
Information Technology and Science (BICITS), IEEE, 2021, pp. 222227.
[30] D. Sitaram and G. Manjunath, “Chapter 9 - Related Technologies,” in Moving To The
Cloud, D. Sitaram and G. Manjunath, Eds., Boston: Syngress, 2012, pp. 351387. doi:
https://doi.org/10.1016/B978-1-59749-725-1.00009-3.
[31] H. Shukur, S. Zeebaree, R. Zebari, O. Ahmed, L. Haji, and D. Abdulqader, “Cache
coherence protocols in distributed systems,” Journal of Applied Science and
Technology Trends, vol. 1, no. 3, pp. 9297, 2020.
[32] S. R. M. Zeebaree, A. B. Sallow, B. K. Hussan, and S. M. Ali, “Design and Simulation
of High-Speed Parallel/Sequential Simplified DES Code Breaking Based on FPGA,” in
2019 International Conference on Advanced Science and Engineering (ICOASE), 2019,
pp. 7681. doi: 10.1109/ICOASE.2019.8723792.
[33] Z. N. Rashid, S. R. M. Zeebaree, M. A. M. Sadeeq, R. R. Zebari, H. M. Shukur, and A.
Alkhayyat, “Cloud-based Parallel Computing System Via Single-Client Multi-Hash
Single-Server Multi-Thread,” in 2021 International Conference on Advance of
Sustainable Engineering and its Application (ICASEA), IEEE, 2021, pp. 5964.
[34] D. A. Hasan, B. K. Hussan, S. R. M. Zeebaree, D. M. Ahmed, O. S. Kareem, and M. A.
M. Sadeeq, “The impact of test case generation methods on the software performance:
A review,” International Journal of Science and Business, vol. 5, no. 6, pp. 33–44, 2021.
[35] M. Khaldi, M. Rebbah, B. Meftah, and O. Smail, “Fault tolerance for a scientific
workflow system in a cloud computing environment,” International Journal of
Computers and Applications, vol. 42, no. 7, pp. 705714, 2020.
ISSN 2549-7286 (online)
Indonesian Journal of Computer Science Vol. 13, No. 2, Ed. 2024 | page 1931
[36] M. Wolsink, “Distributed energy systems as common goods: Socio-political acceptance
of renewables in intelligent microgrids,” Renewable and Sustainable Energy Reviews,
vol. 127, p. 109841, 2020.
[37] D. M. ABDULQADER, S. R. M. ZEEBAREE, R. R. ZEBARI, S. A. L. I. SALEH, Z.
N. RASHID, and M. A. M. SADEEQ, “SINGLE-THREADING BASED
DISTRIBUTED-MULTIPROCESSOR-MACHINES AFFECTING BY
DISTRIBUTED-PARALLEL-COMPUTING TECHNOLOGY,” Journal of Duhok
University, vol. 26, no. 2, pp. 416426, 2023.
[38] E. M. Portnov, A. K. Myo, A. A. Anisimov, R. A. Kasimov, and K. O. Epishin,
“Development of a Method for Managing Resource-Intensive Applications in
Distributed Computing Systems,” in 2020 IEEE Conference of Russian Young
Researchers in Electrical and Electronic Engineering (EIConRus), IEEE, 2020, pp.
24012405.
[39] A. Z. A. L. Shaqsi, K. Sopian, and A. Al-Hinai, “Review of energy storage services,
applications, limitations, and benefits,” Energy Reports, vol. 6, pp. 288306, 2020.
[40] Z. Wu, J. Sun, Y. Zhang, Z. Wei, and J. Chanussot, “Recent developments in parallel
and distributed computing for remotely sensed big data processing,” Proceedings of the
IEEE, vol. 109, no. 8, pp. 1282–1305, 2021.
[41] A. Chakravorty, W. S. Cleveland, and P. J. Wolfe, “Statistical scalability and
approximate inference in distributed computing environments,” arXiv preprint
arXiv:2112.15572, 2021.
[42] N. Gupta and N. H. Vaidya, “Fault-tolerance in distributed optimization: The case of
redundancy,” in Proceedings of the 39th Symposium on Principles of Distributed
Computing, 2020, pp. 365–374.
[43] A. B. Klimenko, “Distributed Data Proceeding Systems Architectures Resource
Efficiency Improvement on the Basis of Apriory Data about the Jobs Late Completion
Times,” Юго-Западного государственного университета, p. 125, 2023.
[44] F. Es-Sabery and A. Hair, “Big data solutions proposed for cluster computing systems
challenges: A survey,” in Proceedings of the 3rd International Conference on
Networking, Information Systems & Security, 2020, pp. 1–7.
[45] S. M. Mohammed, K. Jacksi, and S. Zeebaree, “A state-of-the-art survey on semantic
similarity for document clustering using GloVe and density-based algorithms,”
Indonesian Journal of Electrical Engineering and Computer Science, vol. 22, no. 1, pp.
552–562, 2021.
[46] M. Rahmany, A. Sundararajan, and A. Zin, “A review of desktop grid computing
middlewares on non-dedicated resources,” J. Theor. Appl. Inf. Technol, vol. 98, no. 10,
pp. 1654–1663, 2020.
[47] S. R. M. Zeebaree et al., “Multicomputer multicore system influence on maximum
multi-processes execution time,” TEST Engineering & Management, vol. 83, no. 03,
pp. 14921–14931, 2020.
[48] L. M. Haji, S. R. M. Zeebaree, O. M. Ahmed, M. A. M. Sadeeq, H. M. Shukur, and A.
Alkhayyat, “Performance Monitoring for Processes and Threads Execution-
Controlling,” in 2021 International Conference on Communication & Information
Technology (ICICT), IEEE, 2021, pp. 161–166.
[49] Y. S. Jghef, S. R. M. Zeebaree, Z. S. Ageed, and H. M. Shukur, “Performance
Measurement of Distributed Systems via Single-Host Parallel Requesting using (Single,
Multi and Pool) Threads,” in 2022 3rd Information Technology To Enhance e-learning
and Other Application (IT-ELA), IEEE, 2022, pp. 38–43.
[50] M. A. M. Sadeeq and S. R. M. Zeebaree, “Design and implementation of an energy
management system based on distributed IoT,” Computers and Electrical Engineering,
vol. 109, p. 108775, 2023.
[51] A. Mamonov, R. Varlamov, and S. Salpagarov, “Computing Load Distribution by
Using Peer-to-Peer Network,” in Distributed Computer and Communication Networks:
Control, Computation, Communications: 23rd International Conference, DCCN 2020,
Moscow, Russia, September 14-18, 2020, Revised Selected Papers 23, Springer, 2020,
pp. 521–532.
[52] N. M. Gonzalez, T. C. M. de B. Carvalho, and C. C. Miers, “Cloud resource
management: towards efficient execution of large-scale scientific applications and
workflows on complex infrastructures,” Journal of Cloud Computing, vol. 6, no. 1.
Springer Verlag, Dec. 01, 2017. doi: 10.1186/s13677-017-0081-4.
[53] K. Aziz, D. Zaidouni, and M. Bellafkih, “Leveraging resource management for efficient
performance of Apache Spark,” J Big Data, vol. 6, no. 1, Dec. 2019, doi:
10.1186/s40537-019-0240-1.
[54] C. T. Yang, S. T. Chen, J. C. Liu, Y. W. Chan, C. C. Chen, and V. K. Verma, “An
energy-efficient cloud system with novel dynamic resource allocation methods,” Journal
of Supercomputing, vol. 75, no. 8, pp. 4408–4429, Aug. 2019, doi: 10.1007/s11227-
019-02794-w.
[55] Z. Guo, J. Li, and R. Ramesh, “Optimal management of virtual infrastructures under
flexible cloud service agreements,” Information Systems Research, vol. 30, no. 4, pp.
1424–1446, 2019, doi: 10.1287/isre.2019.0871.
[56] M. V. Fard, A. Sahafi, A. M. Rahmani, and P. S. Mashhadi, “Resource allocation
mechanisms in cloud computing: A systematic literature review,” IET Software, vol. 14,
no. 6. Institution of Engineering and Technology, pp. 638–653, Dec. 01, 2020. doi:
10.1049/iet-sen.2019.0338.
[57] H. Shukur, S. Zeebaree, R. Zebari, D. Zeebaree, O. Ahmed, and A. Salih, “Cloud
computing virtualization of resources allocation for distributed systems,” Journal of
Applied Science and Technology Trends, vol. 1, no. 3, pp. 98–105, 2020.
[58] M. A. N. Saif, S. K. Niranjan, and H. D. E. Al-ariki, “Efficient autonomic and elastic
resource management techniques in cloud environment: taxonomy and analysis,”
Wireless Networks, vol. 27, no. 4, pp. 2829–2866, May 2021, doi: 10.1007/s11276-021-
02614-1.
[59] D. Lindsay, S. S. Gill, D. Smirnova, and P. Garraghan, “The evolution of distributed
computing systems: from fundamental to new frontiers,” Computing, vol. 103, no. 8,
pp. 1859–1878, Aug. 2021, doi: 10.1007/s00607-020-00900-y.
[60] S. R. Swain, A. K. Singh, and C. N. Lee, “Efficient Resource Management in Cloud
Environment,” Jun. 2022, [Online]. Available: http://arxiv.org/abs/2207.12085
[61] N. A. Kako, “DDLS: Distributed Deep Learning Systems: A Review,” Turkish Journal
of Computer and Mathematics Education (TURCOMAT), vol. 12, no. 10, pp. 7395–7407,
2021.
[62] L. Yu, P. S. Yu, Y. Duan, and H. Qiao, “A resource scheduling method for reliable and
trusted distributed composite services in cloud environment based on deep
reinforcement learning,” Front Genet, vol. 13, Oct. 2022, doi:
10.3389/fgene.2022.964784.
[63] W. Khallouli and J. Huang, “Cluster resource scheduling in cloud computing: literature
review and research challenges,” Journal of Supercomputing, vol. 78, no. 5, pp. 6898–
6943, Apr. 2022, doi: 10.1007/s11227-021-04138-z.
[64] D. Rosendo, A. Costan, P. Valduriez, and G. Antoniu, “Distributed intelligence on the
Edge-to-Cloud Continuum: A systematic literature review,” J Parallel Distrib Comput,
vol. 166, pp. 71–94, Aug. 2022, doi: 10.1016/j.jpdc.2022.04.004.
[65] O. Debauche, S. Mahmoudi, P. Manneback, and F. Lebeau, “Cloud and distributed
architectures for data management in agriculture 4.0: Review and future trends,” Journal
of King Saud University - Computer and Information Sciences, vol. 34, no. 9. King
Saud bin Abdulaziz University, pp. 7494–7514, Oct. 01, 2022. doi:
10.1016/j.jksuci.2021.09.015.
[66] A. Belgacem, “Dynamic resource allocation in cloud computing: analysis and
taxonomies,” Computing, vol. 104, no. 3, pp. 681–710, Mar. 2022, doi:
10.1007/s00607-021-01045-2.
[67] L. M. Haji, S. R. M. Zeebaree, Z. S. Ageed, O. M. Ahmed, M. A. M. Sadeeq, and H. M.
Shukur, “Performance Monitoring and Controlling of Multicore Shared-Memory
Parallel Processing Systems,” in 2022 3rd Information Technology To Enhance e-
learning and Other Application (IT-ELA), IEEE, 2022, pp. 44–48.
[68] S. Ilager, R. Muralidhar, and R. Buyya, “Artificial Intelligence (AI)-Centric
Management of Resources in Modern Distributed Computing Systems.” [Online]. Available:
https://www.researchgate.net/publication/342093888
[69] K. W. HamaAli and S. R. M. Zeebaree, “Resources Allocation for Distributed Systems:
A Review,” doi: 10.5281/zenodo.4462088.
[70] S. Ilager, R. Muralidhar, and R. Buyya, “Artificial Intelligence (AI)-Centric
Management of Resources in Modern Distributed Computing Systems.” [Online]. Available:
https://www.researchgate.net/publication/342093888
[71] A. K. Sangaiah, A. Javadpour, P. Pinto, S. Rezaei, and W. Zhang, “Enhanced resource
allocation in distributed cloud using fuzzy meta-heuristics optimization,” Comput
Commun, vol. 209, pp. 14–25, Sep. 2023, doi: 10.1016/j.comcom.2023.06.018.
Purpose of research. The purpose of this study is to develop a method for improving the efficiency of distributed architectures of data processing systems operating in the fog and edge layers of the network. In conditions of high dynamics of both the network infrastructure and the load, the task of forming the architecture of data processing systems is solved regularly (migration of virtual machines, horizontal scalingб etc.) At the same time, the issue of the consumption of the residual resource of computing nodes is practically not considered, while the often used devices have relatively low capacity, and their high workload leads to a reduction in the service life. Therefore, the creation of methods for forming an architecture of a computing device system that is effective in terms of saving a computing resource is an urgent task. Methods. The main scientific methods used in this study are domain analysis, operations research methods, optimization methods and computer modeling, confirming the feasibility of the main aspects of the developed method. To improve the efficiency of placing computational tasks on the nodes of a network fragment, this paper formulated a multicriteria optimization problem, where each element of the vector objective function corresponds to an individual value of the probability of failure-free operation of a computing device. To obtain estimated values of the cost function, a priori estimates of the late completion of the solution of computational problems by nodes are used, since the resource allocated for solving depends on the allocated time, and the time for solving the problem, respectively, on the allocated computing resource. The value of the cost function is calculated on the basis of approximate a priori estimates, which leads to a positive effect in terms of the consumption of computing resources of devices. Results. The result of the study is a developed method for improving the efficiency of distributed architectures of data processing systems operating in the fog and edge layers of the network. Conclusion. The method proposed in this work allows to choose such a load distribution in order to reduce the workload of devices and thus reduce the consumption of computing resources of the devices.