Article

The NIST definition of cloud computing

... The transition toward cloud computing represents a fundamental transformation in how government agencies approach their technology infrastructure. The National Institute of Standards and Technology formally defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction" [3]. This definition underscores the essential characteristics that make cloud computing particularly valuable for government agencies: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. ...
... This strategic transition enables government agencies to redirect resources from routine infrastructure management toward service innovation and mission advancement. The NIST cloud computing model emphasizes how the measured service aspect of cloud platforms provides transparency into resource utilization, helping agencies optimize spending and align technology investments with actual requirements rather than projected maximum capacity [3]. This capability is particularly valuable in government contexts where budget optimization is a constant priority. ...
... Evolution of Legacy Systems to Cloud Infrastructure [3,4] ...
... An additional architectural aspect of technical systems is whether a component is running on-premise or cloud-based, respectively. Both environments build the basis for hosting software applications (Mell & Grance, 2011), including, for example, a data warehouse (Nambiar & Mundra, 2022). However, the underlying cost models are completely different: While on-premise requires high initial investments (Nambiar & Mundra, 2022), cloud computing is defined as a "model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Mell & Grance, 2011, p.2), usually following a pay-per-use cost logic (Lowe & Galhotra, 2018). ...
... Generally, cloud storage or processing can support a more decentralized approach compared to on-premise storage due to the benefit of scalable provisioning of computing resources (Mell & Grance, 2011). ...
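The differing cost logics noted above (high upfront on-premise investment versus pay-per-use cloud billing) can be sketched as a toy break-even comparison. Every figure below is an illustrative assumption, not a vendor price.

```python
def on_premise_cost(months, capex=120_000.0, opex_per_month=300.0):
    """Total cost of ownership: one-time hardware investment plus fixed upkeep."""
    return capex + months * opex_per_month

def cloud_cost(months, hours_used_per_month=720, rate_per_hour=1.5):
    """Pay-per-use logic: billed only for the hours actually consumed."""
    return months * hours_used_per_month * rate_per_hour

# First month at which cumulative cloud spend would exceed the
# on-premise total -- the illustrative break-even point.
breakeven = next(m for m in range(1, 600)
                 if cloud_cost(m) > on_premise_cost(m))
print(breakeven)
```

With these assumed numbers the pay-per-use model stays cheaper for more than a decade; shifting the utilization or rates shifts the break-even point accordingly, which is exactly the capacity-planning trade-off the excerpts describe.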
Article
Full-text available
This paper explores how large incumbent organizations adopt the newly proposed data management approach “Data Mesh”. Particularly, this paper explores to which extent data ownership and data governance are shifting from a centralized to a decentralized approach and whether companies take different paths in this transition.
... Interface design represents a critical success factor, as healthcare professionals must navigate complex information environments while maintaining focus on patient interactions. Advanced implementations incorporate context-awareness capabilities that automatically adjust information presentation based on clinical scenario, patient characteristics, location, device type, and user role, reducing the need for explicit system navigation during time-constrained encounters [7]. ...
... Strategic implementations frequently employ a "minimal footprint" philosophy that prioritizes background intelligence over explicit user interaction requirements whenever possible, thereby reducing perceived implementation burden. Adoption strategies must additionally address team dynamics in contemporary healthcare delivery, ensuring that system capabilities support rather than undermine established communication patterns and collaborative decision-making processes within care teams [7]. ...
... Strategic implementations frequently employ portfolio approaches distributing investments across multiple use cases with varying risk-return profiles and time horizons, enabling balanced value capture aligned with organizational objectives and financial constraints. This structured approach to value assessment supports sustained investment in cognitive capabilities over time by clearly documenting return across multiple organizational dimensions [7]. ...
Article
Full-text available
The Cognitive Companion CRM represents a paradigm shift in healthcare information management, transitioning from reactive documentation tools to proactive clinical partners that anticipate needs and deliver contextualized insights. By integrating artificial intelligence capabilities including machine learning, natural language processing, and predictive analytics directly into clinical workflows, this architecture continuously monitors data streams to identify patterns and surface actionable information without requiring explicit user prompting. The system addresses fundamental healthcare challenges through multiple mechanisms: reducing provider cognitive load and administrative burden, enabling personalized patient care through risk factor identification, and improving resource allocation through predictive capabilities. Core components include predictive analytics for anticipating patient needs, an intelligent insights engine for contextualizing information, natural language processing for patient engagement, administrative automation architecture, robust data integration frameworks, and comprehensive privacy infrastructure. Implementation success depends on thoughtful workflow integration, patient journey mapping, provider adoption strategies, and rigorous return on investment analysis. Despite promising potential, significant challenges remain regarding data quality, validation protocols, ethical considerations, and stakeholder acceptance, necessitating continued interdisciplinary collaboration.
... 3.2.1 Cloud Computing. As defined by the National Institute of Standards and Technology (NIST), cloud computing is 'a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction' [61]. This paradigm includes centralised or distributed computing technologies operating over the Internet, primarily functioning as a scalable storage and processing infrastructure. ...
... Its inherent computational resources and scalability are paramount for managing high-volume data flows [117]. Cloud deployment models are commonly categorised as private, community, public, and hybrid [61]. Private clouds are dedicated to a single organization, regardless of who manages them or where they are located. ...
Preprint
The rapid advancement of intelligent agents and Large Language Models (LLMs) is reshaping the pervasive computing field. Their ability to perceive, reason, and act through natural language understanding enables autonomous problem-solving in complex pervasive environments, including the management of heterogeneous sensors, devices, and data. This survey outlines the architectural components of LLM agents (profiling, memory, planning, and action) and examines their deployment and evaluation across various scenarios. It then reviews computational and infrastructural advancements (cloud to edge) in pervasive computing and how AI is advancing in this field. It highlights state-of-the-art agent deployment strategies and applications, including local and distributed execution on resource-constrained devices. This survey identifies key challenges of these agents in pervasive computing, such as architectural, energy, and privacy limitations. It finally proposes what we call "Agent as a Tool", a conceptual framework for pervasive agentic AI, emphasizing context awareness, modularity, security, efficiency, and effectiveness.
... Cloud computing is a model that provides easy and rapid access over the internet to configurable resources (servers, storage devices, software, etc.) without dependence on a central server and network infrastructure (Mell, Grance, 2011). Thanks to this model, organizations can store their data on remote cloud platforms and benefit from elastic, dynamic, and inexpensive computing power. ...
... Cloud architecture also accelerates the launch of new projects: server setup and software configuration take only seconds, which considerably shortens decision-making and time-to-market. Finally, high service standards give organizations uninterrupted support: for example, the infrastructure of Azerbaijan's "Hökumət Buludu" (Government Cloud) is built in a Tier III certified data center, ensuring 24/7 monitoring and a high level of service (Mell, Grance, 2011). ...
... Cloud computing is defined by the National Institute of Standards and Technology (NIST) as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources" [5]. Its characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service) align well with the requirements for Big Data processing. ...
... These findings affirm the original hypothesis that although Big Data-cloud computing integration significantly enhances organizational outcomes, these benefits are moderated by critical operational and strategic challenges. Future advancements in serverless computing, federated cloud architectures, and enhanced AI-driven security protocols will be instrumental in mitigating these risks and fully realizing the synergistic potential of Big Data and cloud computing technologies [5], [6]. ...
Thesis
Full-text available
The convergence of Big Data analytics and cloud computing represents a paradigm shift in how organizations manage, process, and derive insights from vast volumes of heterogeneous data. This study critically examines the synergistic relationship between Big Data and cloud platforms, focusing on the opportunities, architectural models, and emerging challenges that shape this evolving landscape. Employing a mixed-methods approach that integrates meta-analysis of literature from 2015-2022 and structured interviews with cloud architects, data scientists, and IT professionals, the research identifies measurable improvements in decision-making capabilities, data processing speeds, and cost efficiencies resulting from cloud adoption. However, it also highlights persistent barriers, including security vulnerabilities, latency issues, and hidden operational costs, which moderate the full realization of these benefits. Hypothetical results show that although platforms like AWS and Azure demonstrate substantial performance gains, variability in security outcomes and user satisfaction underscores the complexity of cloud integration strategies. The findings affirm that while Big Data and cloud computing offer transformative potential, realizing their full value demands proactive risk management, continuous innovation, and strategic alignment of technology with organizational goals. This study provides a roadmap for future research and practical implementations aimed at maximizing the effectiveness of Big Data-driven cloud initiatives.
... In the cloud computing paradigm, ensuring the desired Quality of Service (QoS) between the provider and the customer is crucial. QoS requirements are established through Service Level Agreements (SLAs), conventional agreements outlining the expected service quality from the service provider [1][2][3][4]. Cloud computing provides three types of services that are delivered and consumed in real time, namely: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [5,6]. Cloud servers comprise physical machines (servers) plus an extra layer (the virtual environment of computing resources) that executes user applications [7]. ...
... CloudSim is a toolkit that enables modelling and simulation of cloud systems [36]. The proposed GQIRIME is compared with five other popular algorithms: the standard RIME optimizer [1], Grey Wolf optimizer [37], Levy flight Jaya algorithm [38], Particle Swarm optimizer [39], and the Min-Min algorithm [40]. Final performance figures were averaged over 20 independent simulation runs. ...
Article
Task scheduling in cloud computing remains a crucial issue for system performance and user satisfaction. As organizations increasingly rely on cloud infrastructure to manage and execute their computational tasks, the need for effective scheduling becomes paramount. The main purpose of task scheduling is to assign submitted tasks to available appropriate resources while maintaining the quality of service and the service level agreement. An effective task scheduling algorithm must be able to reduce the makespan, a crucial performance metric in cloud computing systems. In this paper, we introduce a new efficient task scheduling algorithm called GQIRIME, based on the RIME optimization algorithm, to reduce the makespan, cost, and total execution time of the cloud system. An enhanced exploitation strategy is proposed based on generalized quadratic interpolation and Levy flight to increase the convergence rate while maintaining robust search. Furthermore, the proposed algorithm is integrated with chaos mapping to obtain a more diversified initial population. We evaluated the effectiveness of our proposed approach using the CloudSim toolkit. The analysis of the results demonstrates that our proposed algorithm outperforms other methods and significantly improves key performance metrics in task scheduling. According to the experimental results, the proposed algorithm achieved lower cost, makespan, and total execution time: a 29.3% improvement in makespan, 61.8% in cost, and 29.4% in total execution time on average compared to its counterparts.
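The Min-Min baseline that such schedulers are compared against can be sketched in a few lines: repeatedly commit the (task, machine) pair with the smallest completion time, and read the makespan off as the largest machine finish time. The task lengths and machine speeds below are illustrative assumptions.

```python
def min_min_schedule(task_lengths, machine_speeds):
    """Greedy Min-Min: at each step, assign the (task, machine) pair with the
    minimum completion time, where completion = machine ready time + length/speed."""
    ready = [0.0] * len(machine_speeds)   # when each machine next becomes free
    remaining = list(task_lengths)
    assignment = []
    while remaining:
        # Best completion time over every remaining (task, machine) pair.
        finish, task, machine = min(
            (ready[m] + t / machine_speeds[m], t, m)
            for t in remaining
            for m in range(len(machine_speeds))
        )
        ready[machine] = finish
        remaining.remove(task)
        assignment.append((task, machine))
    return assignment, max(ready)         # max(ready) is the makespan

tasks = [40, 10, 30, 20]   # task lengths (e.g., millions of instructions)
speeds = [1.0, 2.0]        # relative machine processing speeds
plan, makespan = min_min_schedule(tasks, speeds)
print(plan, makespan)
```

Metaheuristics like the RIME-based scheduler search over many such assignments instead of committing greedily, which is where the reported makespan improvements come from.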
... Cloud computing is defined by NIST as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources...that can be rapidly provisioned and released with minimal management effort or service provider interaction" [17]. It transforms traditional IT by offering resources (compute, storage, networking) as a utility rather than a capital investment. ...
... 1) Definition and Core Characteristics: NIST specifies five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. On-demand self-service allows users to provision resources without human intervention [17]. Broad network access ensures services are accessible over standard protocols. ...
Conference Paper
Full-text available
This paper presents a research-style treatment of an automated AWS infrastructure provisioning system using Ansible and Infrastructure as Code (IaC) principles. We describe the challenges of manual cloud provisioning, survey the evolution of cloud computing and IaC, and detail our solution architecture, implementation, and outcomes. Our approach leverages AWS services (VPC, EC2, ELB, Auto Scaling, CloudWatch) and Ansible playbooks to achieve reliable, scalable, and repeatable deployments. We conclude with lessons learned and outline future work.
... In addition, fog computing provides compute and storage services close to edge devices and improves efficiency by reducing the amount of data that must be transmitted. Edge computing serves as a platform for providing services closest to the data source in a data collection system [43], yielding faster network service response to satisfy the needs of online monitoring, real-time scheduling, and the increased data processing power of the DT. The dynamic data flow in the PDTM is exhibited in Fig. 4. ...
Article
Full-text available
The digital twin (DT), as a dynamic intelligence system that organically combines virtual and realistic models and multiple sources of data, fully combines numerical models with real-world data to monitor the operational status of products and predict their lifespan. Therefore, a system modeling approach with high fidelity and timeliness is of great significance for DT models. However, most current DT modeling approaches focus on individual objects and individual aspects of a product, while being deficient in a full lifecycle and multi-object-oriented modeling approach, which is not conducive to the mining and utilization of data on the whole product. Given this challenge, a product-level DT modeling approach based on PLM/PDM theory is proposed in this paper. It combines property model, simulation model, process model, status model, quality model, and feedback model through a digital thread communication framework to collect and utilize product lifecycle data to achieve accurate control of all aspects of the product lifecycle. The product-level DT results are presented by a visualization platform that enables interaction between customers, designers, and fabricators under real-time monitoring of the product manufacturing process throughout the system. Finally, a test case of a wind energy generator was performed to validate the proposed product-level DT modeling approach. The results revealed that the proposed approach is effective and feasible.
... NIST provides a widely accepted definition. According to the NIST definition [11], "Cloud computing provides on-demand access to configurable computing resources, such as networks, servers, storage, applications, and services, that can be quickly provisioned and released with minimal management effort or service provider interaction. The cloud model consists of five essential characteristics, three service models, and four deployment models". ...
... Cloud computing has recently arisen as a reliable and trusted computing technology that enhances the utilization of virtualized resources and services for end users [1]. A user can access software and hardware as computing resources over the internet on a pay-as-you-use basis. ...
Article
Full-text available
Dynamic Virtual Machine Consolidation (DVMC) is a key mechanism for energy-aware dynamic resource management in cloud datacenters. The basic idea is to balance the hosts' load by migrating Virtual Machines (VMs) from overloaded and underloaded hosts to normal hosts and switching the underloaded hosts into sleep mode. Each VM migration leads to performance degradation and Service Level Agreement Violation (SLAV). It is necessary to enhance performance while dealing with the energy–SLAV tradeoff in DVMC. Therefore, this paper proposes an improved DVMC model named RLSK_US which consists of four phases: 1) the first phase proposes a Robust Logistic Regression algorithm to detect overloaded hosts, utilizing both regression and adaptive approaches; 2) the second phase proposes an SLA algorithm to detect underloaded hosts by incorporating the NVM of the host with CPU utilization; 3) the third phase proposes a Knapsack-based VM selection algorithm that selects the VM with the highest ratio of CPU utilization to VM migration time; 4) the fourth phase proposes a Utilization-SLA-aware VM placement algorithm that allocates migrated VMs to an appropriate host by selecting the host with the highest correlation factor with the VM. The proposed RLSK_US is evaluated using real workload traces in CloudSim and compared with existing benchmark algorithms. Simulation results show that the proposed model outperforms the others, improving SLAV by 77% and ESV (the product of Energy and SLAV) by 83% compared to the best competing algorithm.
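The third-phase selection rule (pick the VM with the highest ratio of CPU utilization to migration time) can be sketched as follows. Migration time is approximated here as RAM size over available bandwidth, a common simplification; all VM figures are illustrative assumptions, not values from the paper's traces.

```python
def select_vm(vms, bandwidth_mbps=1000.0):
    """Pick the VM whose (CPU utilization / migration time) ratio is highest.
    Migration time is approximated as memory size divided by bandwidth."""
    def ratio(vm):
        migration_time = vm["ram_mb"] / bandwidth_mbps  # seconds, simplified
        return vm["cpu_util"] / migration_time
    return max(vms, key=ratio)

vms = [
    {"name": "vm1", "cpu_util": 0.80, "ram_mb": 4096},
    {"name": "vm2", "cpu_util": 0.60, "ram_mb": 1024},
    {"name": "vm3", "cpu_util": 0.90, "ram_mb": 8192},
]
print(select_vm(vms)["name"])
```

The intuition: a small-footprint VM that contributes heavily to host load relieves the most pressure per second of migration overhead, which is why the ratio (rather than utilization alone) drives the choice.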
... Regulatory Pressures and Technological Responses Financial regulations, particularly standards like BCBS 239 concerning risk data aggregation and reporting (10), heavily influence IT architecture design. They necessitate robust data governance frameworks (38), reliable data lineage, and often employ technologies like event streaming platforms (e.g., Kafka (27)) for timely data processing. The field of Regulatory Technology (RegTech) specifically focuses on using technology to streamline and automate compliance tasks (5; 8). ...
Article
Full-text available
Financial institutions operate under dual pressures: the need for rapid innovation driven by competition and evolving customer expectations, contrasted with the necessity of adhering to stringent regulatory frameworks like Basel III, CRD V, and SR 11-7. Modern methodologies like DevOps and MLOps promise agility and efficiency but face significant adoption challenges within this regulated context. This paper addresses this critical intersection by consolidating current research on IT architecture, DevOps, and MLOps specifically for the banking sector. We focus on practices supporting robust data aggregation, risk management, and compliance reporting, while acknowledging persistent challenges such as legacy system integration and rigorous model governance. Recognizing a gap between general principles and practical implementation guidance, we propose two concise, research-grounded architectural blueprints. These blueprints offer actionable models for designing integrated DevOps/MLOps workflows that ensure continuous compliance and operational resilience, providing valuable insights for practitioners and researchers navigating the complex interplay of agile development and financial regulation.

INTRODUCTION

The global financial services industry exists in a state of continuous flux, driven by intense market competition, shifting customer demands for digital services, and an ever-more complex web of regulations (48; 2). International accords like Basel III (9), regional directives such as CRD V (16), and national guidance on critical areas like model risk management (e.g., the US Federal Reserve's SR 11-7 (11)) impose strict operational and reporting requirements. Consequently, financial institutions must constantly evolve their Information Technology (IT) architectures and operational processes to simultaneously achieve agility, maintain resilience, and ensure unwavering regulatory compliance (18). This balancing act represents a central challenge for the sector.
Modern software engineering and operational paradigms, notably DevOps (26; 21) and Machine Learning Operations (MLOps) (46; 13), offer significant potential benefits. DevOps practices aim to break down silos between development and operations, automating delivery pipelines to increase speed and reliability. MLOps extends these principles to the unique lifecycle of machine learning models, addressing challenges like reproducibility, monitoring, and governance crucial for financial applications from fraud detection to algorithmic trading. The state-of-the-art involves highly automated CI/CD pipelines, infrastructure managed as code, and increasingly sophisticated model management platforms. However, the adoption of these modern practices within the highly regulated financial context is far from straightforward (15; 2). The core tenets of DevOps and MLOps-speed, iteration, and continuous change-must be carefully reconciled with non-negotiable regulatory demands for security, auditability, data integrity, robust governance, and transparent reporting. Furthermore, many institutions grapple with significant legacy systems, which often represent substantial technical debt and hinder modernization efforts (31). While research explores DevOps (43) and MLOps (6) adoption challenges, and the potential of RegTech (8), a gap often exists between high-level principles and concrete architectural guidance tailored for financial compliance. This paper aims to bridge this gap by providing actionable architectural blueprints. We synthesize current academic knowledge and industry best practices concerning IT architecture, DevOps, MLOps, and regulatory compliance within finance. Our contribution lies in presenting two distinct, yet principled, reference models (Section 3) designed to address common scenarios: modernizing domestic institutions with legacy cores, and managing complex international operations under multiple regulatory regimes. 
These blueprints provide concrete structures for integrating DevOps and MLOps workflows in a manner that fosters continuous compliance alongside operational excellence, offering practical value to practitioners and a structured basis for further academic inquiry. The subsequent sections review relevant background literature (Section 2), detail the proposed blueprints (Section 3), discuss their implications (Section 4), and offer concluding remarks (Section 5).

BACKGROUND
... Cloud computing, which provides shared computing resources on demand [1], and artificial intelligence (AI), which enables machines to perform tasks that usually require human intelligence [2], have become game-changing technologies in many fields, including economic management. Cloud computing provides a scalable and cost-effective infrastructure for storing, processing, and analyzing vast amounts of data [3]. ...
Article
Full-text available
This systematic review examines existing literature on the role of AI-driven cloud computing in optimizing economic management processes, identifying key trends, benefits, challenges, and future research directions. The study adheres to the PRISMA framework to systematically collect and analyze research from academic databases, including Scopus, Web of Science, IEEE Xplore, and Google Scholar. Findings reveal that AI-powered cloud solutions offer scalability, real-time data analytics, cost reduction, and automation of business processes. However, challenges such as data security risks, ethical concerns, and regulatory constraints hinder full-scale adoption. The study also highlights emerging trends, including AI-driven financial forecasting, intelligent automation, and Explainable AI (XAI) models, which facilitate transparent decision-making. Additionally, the research identifies gaps in the literature, particularly in the adoption of AI within public sector economic management and regulatory frameworks. The discussion compares these findings with existing studies, exploring theoretical and practical implications for businesses, policymakers, and researchers. Key recommendations include the need for robust cyber-security frameworks, ethical AI governance, and industry-specific AI applications. Future research should focus on longitudinal studies, cross-sectoral analyses, and the role of AI in sustainable economic growth. This review contributes to the growing body of knowledge on AI-cloud integration, offering insights to drive effective and responsible adoption in economic management.
... Cloud computing operates through thousands of globally distributed datacenters and offers three main service models: Software as a Service (SaaS), where applications are provided by third-party servers; Platform as a Service (PaaS), providing software and hardware tools to the user; and Infrastructure as a Service (IaaS), which enables users to access resources such as storage and memory through virtual machines (VMs). These services are offered on a subscription basis [1]. A primary issue in cloud datacenters is their high power consumption, which increases both the environmental impact and cost [2]. ...
Article
Full-text available
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm's potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios.
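NCRA-DP-ACO itself is a full ant-colony metaheuristic; as a much simpler point of reference, a greedy power-aware placement heuristic captures the core trade-off it optimizes: pack load onto already-active hosts so idle machines can stay off. All host and VM figures below are illustrative assumptions, and this baseline is explicitly not the paper's algorithm.

```python
def place_vms(vm_loads, hosts):
    """Greedy power-aware placement: sort VMs by load (largest first) and put
    each on the host whose power draw increases the least, within capacity.
    A simple baseline, not the ant-colony method described in the paper."""
    placement = {}
    for load in sorted(vm_loads, reverse=True):
        best, best_delta = None, None
        for h in hosts:
            if h["used"] + load > h["capacity"]:
                continue
            # Linear power model: per-unit-load increment, plus the idle
            # draw if this placement would switch a sleeping host on.
            delta = load * h["watts_per_unit"]
            if h["used"] == 0:
                delta += h["idle_watts"]
            if best is None or delta < best_delta:
                best, best_delta = h, delta
        if best is None:
            raise RuntimeError("no host can fit the VM")
        best["used"] += load
        placement.setdefault(best["name"], []).append(load)
    return placement

hosts = [
    {"name": "h1", "capacity": 100, "used": 0, "idle_watts": 150, "watts_per_unit": 2.0},
    {"name": "h2", "capacity": 100, "used": 0, "idle_watts": 100, "watts_per_unit": 3.0},
]
placement = place_vms([50, 30, 20], hosts)
print(placement)
```

With these figures all three VMs consolidate onto one host and the second host never pays its idle-power cost; the ACO approach explores many such packings while also weighing carbon, network, and migration objectives.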
... However, this shift to cloud-based infrastructure has also introduced significant security concerns, particularly with respect to data protection and privacy. Cloud computing is a model that enables ubiquitous, on-demand access to a shared pool of configurable computing resources, which can be rapidly provisioned with minimal management effort (Mell & Grance, 2011). ...
Article
Cloud computing has revolutionized data management, offering scalability and efficiency. However, it also presents significant security challenges, particularly in data deletion. Traditional deletion methods often leave residual data that can be recovered, increasing the risk of unauthorized access. Ensuring complete and irreversible data disposal is critical for maintaining data security and regulatory compliance. This research investigates crypto-shredding, a technique that enhances data security by destroying encryption keys, rendering the associated data permanently inaccessible. The study focuses on the design, implementation, and evaluation of crypto-shredding techniques within cloud environments, comparing their effectiveness to conventional deletion methods. A structured framework for integrating crypto-shredding into cloud architectures is proposed, ensuring compliance with data protection regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). The study also explores the role of user authentication mechanisms and content discovery techniques in verifying that deleted files leave no recoverable traces in active storage or archives. Extensive testing and performance analysis will assess the feasibility and reliability of crypto-shredding as a secure data disposal method. The findings of this research aim to strengthen cloud security frameworks, mitigate the risks of data breaches, and contribute to the advancement of secure data management practices in cloud computing.
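The key idea of crypto-shredding (delete the key, not the data) can be illustrated with a deliberately toy stream cipher built from the standard library. This is an illustration only: a real deployment would use a vetted cipher such as AES-GCM from an audited library and a managed key store, not a hand-rolled keystream.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from the key via SHA-256 in counter mode.
    For illustration only -- never hand-roll ciphers in production."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Store only ciphertext in the cloud; keep the key in a separate key store.
key = secrets.token_bytes(32)
record = encrypt(key, b"sensitive record contents")
assert decrypt(key, record) == b"sensitive record contents"

# Crypto-shredding: destroying the key renders the ciphertext permanently
# unreadable, even if copies linger in backups, archives, or residual blocks.
key = None  # in practice: securely erase the key from the key store
```

This is why the technique sidesteps the residual-data problem the paper describes: the cloud provider's storage can retain stale ciphertext indefinitely without any of it being recoverable once the key is gone.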
... Cloud computing has become a revolution in the IT market by providing on-demand computational resources, scalability, and affordability [1]. However, the growing complexity of cloud environments makes traditional resource management and maintenance techniques ineffective at keeping pace with rising demand [2]. Static provisioning approaches allocate resources according to pre-established rules, so they either over-provision resources, incurring excess operational costs, or under-provision them, leaving the system vulnerable to performance degradation and failure [3]. ...
Article
The growing adoption of cloud computing calls for efficient resource utilization and system robustness. Traditional resource allocation and maintenance strategies typically cause inefficiencies such as underutilization, latency issues, and unexpected system failures. Dynamic resource allocation and predictive maintenance bring intelligent, AI-driven optimization to cloud infrastructure. Machine learning algorithms analyze historical workload trends, distribute virtual resources, and detect system anomalies. We review AI-based strategies for cloud infrastructure optimization and compare them with traditional methods on key performance metrics. Finally, we discuss possible future research directions in AI-driven cloud optimization.
... Cloud computing is a distributed computing paradigm that provides access to virtualized resources, including computers, networks, storage, development platforms, or applications [1]. In education, cloud computing offers solutions to meet specific needs and deliver online learning services, especially in scenarios where those services are computation-intensive (virtual worlds, simulations, etc.) [2]. ...
Conference Paper
Full-text available
Cloud computing is a paradigm that offers solutions for storing and processing information. In the educational context, the cloud has enormous potential to improve learning, especially in areas such as parallel and distributed programming. This work describes the development and implementation of a cloud-based parallel programming platform designed for educational purposes, focused on teaching parallel and distributed programming. The platform aims to simplify the execution of Message Passing Interface (MPI) programs on a cluster of virtual machines in a private cloud, offering an interface that abstracts the complex details of configuring distributed environments. Its purpose is to give computer science students a more direct learning experience, focusing on the development of parallel algorithms without requiring technical knowledge of the infrastructure.
... Scientific and technological innovation has accelerated the adoption of flexible, intelligent computational infrastructures, prioritizing scalability, efficiency, and privacy [1]. Through service models such as IaaS, PaaS, and SaaS, cloud computing offers broad access to computational resources via virtualization [2]. On-demand services enable the scalable provisioning of resources, particularly for scientific applications. ...
Conference Paper
Full-text available
Hybrid cloud, integrating public and private clouds, presents a promising environment for scientific applications by combining scalability with cost efficiency. However, the complexity of these environments requires tools to support infrastructure planning and optimization prior to actual deployment. This study evaluates two simulators: SimGrid and CloudSim Plus. The focus is on assessing their suitability for simulating the execution of scientific applications within hybrid cloud environments; particularly regarding scalability. Scientific workflows modeled as directed acyclic graph, were used to evaluate a hybrid cloud infrastructure, and both simulators were assessed on workload representation, cloud infrastructure, and scheduling strategies. The results suggest that SimGrid offers greater flexibility in network modeling, while CloudSim Plus excels in resource allocation policy simulation. This comparative analysis aims to help researchers select the most appropriate tool for simulating and optimizing scientific applications and cloud scheduling strategies. Future research should explore the integration of emerging technologies, such as containers and microservices, within these simulators.
... In Fig. 3(a), we also display the YY and XX sequences, constructed using the symmetric definitions given in Eqs. (6) and (8), respectively. An unexpected feature observed in Fig. 3(a) is that all five sequences shown (including the robust ones) exhibit oscillations, which typically arise from coherent errors. ...
Article
Full-text available
The virtual-Z (vz) gate has been established as an important tool for performing quantum gates on various platforms, including but not limited to superconducting systems. Many such platforms offer a limited set of calibrated gates and compile all other gates using combinations of X-type and vz gates. Here, we show that the method of compilation has important consequences in an open quantum system setting. Specifically, we experimentally demonstrate that it is crucial to choose a compilation that is symmetric with respect to vz rotations. An important example is dynamical-decoupling (DD) sequences, where improper gate decomposition can result in unintended effects such as the implementation of the wrong sequence. Our findings indicate that in many cases the performance of DD is adversely affected by the incorrect use of vz gates, compounding other coherent pulse errors. This holds even for DD sequences designed to be robust against systematic control errors. In addition, we identify another source of coherent errors: interference between consecutive pulses that follow each other too closely. This work provides insights into improving general quantum gate performance and optimizing DD sequences in particular. Published by the American Physical Society 2025
... The National Institute of Standards and Technology (NIST) (Mell & Grance, 2011) defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources". Cloud computing is a type of distributed computing built on high-performance central data centers and designed essentially around privileged resource sharing and the dynamic demands of multi-tenant requests. ...
Article
Full-text available
Containers have become increasingly popular in the virtualization landscape. Their lightweight nature and fast deployment behavior make them an efficient alternative to traditional hypervisor-based virtual machines. In IoT applications and edge/cloud deployments, live container migration can substantially reduce computing system overheads by minimizing migration time and transmitting the minimum of memory pages from the source host without interrupting the service process. Until today, there has been a lack of comprehensive research discussing live container migration in the IoT domain and investigating the challenges it presents in edge/cloud environments. This survey presents cutting-edge articles that involve a live container migration approach. It aims to consolidate current knowledge, identify best practices, and highlight the challenges of live container migration in IoT and edge/cloud environments, contributing to the advancement of container technology as well as the optimization of deployment practices. The survey results indicate that selecting a suitable container engine relies heavily on the workload characteristics in the edge/cloud environment, particularly given the constraints of live container migration. The survey highlights the direct and indirect challenges that influence container migration and proposes machine learning and blockchain as potential solutions.
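A common live-migration strategy covered by such surveys is pre-copy: memory pages are transferred iteratively while the container keeps running, and only the final small dirty set is copied during a brief pause. The loop below is a minimal simulation under my own assumptions (fixed round limit, a dirty-set size threshold); it is not drawn from any specific system in the survey.

```python
def precopy_migrate(pages, dirty_rounds, max_rounds=5, stop_threshold=2):
    """Toy pre-copy loop: iteratively send pages while the workload runs,
    then stop-and-copy once the re-dirtied set is small enough.

    pages: set of page ids to migrate.
    dirty_rounds: per-round sets of pages re-dirtied by the running workload.
    Returns (pages transferred pre-copy, pages left for the downtime copy)."""
    to_send = set(pages)
    transferred = 0
    for rnd in range(max_rounds):
        transferred += len(to_send)          # transfer current batch
        dirtied = dirty_rounds[rnd] if rnd < len(dirty_rounds) else set()
        to_send = dirtied                    # those pages must be resent
        if len(to_send) <= stop_threshold:
            break
    # Pause the container and copy the remaining dirty pages (the downtime).
    return transferred, len(to_send)

pre, final = precopy_migrate(
    pages={f"p{i}" for i in range(8)},
    dirty_rounds=[{"p1", "p2", "p3"}, {"p1"}],
)
print(pre, final)  # 8 pages + 3 re-dirtied pre-copy, 1 page copied at pause
```

The trade-off the survey highlights is visible here: a write-heavy workload keeps re-dirtying pages, inflating total transfer volume, which is why engine selection depends so heavily on workload characteristics.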
... We now test the accuracy of various CCSDT variants in obtaining spectroscopic parameters, namely the force constant (k) and fundamental vibrational frequencies (ν_vib) of CO and HF molecules as test examples. ...
Article
Full-text available
The frozen natural orbital (FNO) approach in coupled cluster singles and doubles (CCSD) and equation-of-motion (EOM) CCSD methods is well known to provide a cost-effective yet accurate alternative for energy computation. In this article, we extend the FNO approach to CCSDT (CC with singles, doubles, and triples) implemented within Q-CHEM. This can be employed within both the (conventional) double precision (DP) and the single precision (SP) algorithms. Errors due to employing the SP algorithm instead of DP are insignificant and therefore are not discussed. However, for computational timings, we present the performance of FNO-CCSDT versus conventional CCSDT methods with both SP and DP algorithms using the water molecule as a test system. FNO-CCSDT results at different thresholds can be extrapolated to give the XFNO-CCSDT approach, which provides enhanced accuracy. To illustrate this, we present total energies of a few molecules, adiabatic triplet–singlet gaps of a few chromophores, and bond-stretching trends in total energies and vertical triplet–singlet gaps of the hydrogen fluoride molecule. We also examine these methods for numerical estimation of spectroscopic parameters – force constants and vibrational frequencies of some diatomic molecules.
... Organizations moving their on-premise systems into the public and private cloud infrastructure must make sure that their operations adhere to certain legal, ethical, and compliance standards. As stated by Buyya et al. (2008) and Mell & Grance (2011), the most challenging issue concerning hybrid cloud systems is the compliance regulations that differ for every industry and region. Compliance regulations entail a myriad of issues, including but not limited to, where data is stored and accessed from, privacy issues, and even the levels of openness required in data processing. ...
Preprint
Full-text available
The pervasive use of hybrid cloud computing models has changed enterprise and IT services infrastructure by giving businesses simple, cost-effective options for combining on-premise IT equipment with public cloud services. Hybrid cloud solutions deploy multifaceted models of security, performance optimization, and cost efficiency that have conventionally been fragmented across the cloud computing landscape. This paper examines how organizations manage these parameters in hybrid cloud ecosystems and offers solutions to the challenges they face in operationalizing hybrid cloud adoption. The study captures the challenge of balancing resource distribution between on-premise and cloud resources (the "resource allocation challenge"), the complexity of pricing models from cloud providers such as AWS, Microsoft Azure, and Google Cloud (the "pricing complexity problem"), and the urgency of strong security infrastructure to safeguard sensitive information (the "information security problem"). The proposed security and performance management solutions were validated in a detailed case study of an AWS- and Azure-based hybrid cloud adoption, and the paper provides practical guidance. A hybrid cloud security and cost optimization framework based on zero-trust architecture, encryption, hybrid cloud policies, and other measures is also proposed. The conclusion includes recommendations for research on automating hybrid cloud service management, multi-cloud integration, and the ever-present question of data privacy, stressing how these matters affect contemporary enterprises.
... These findings affirm the original hypothesis that although Big Data-cloud computing integration significantly enhances organizational outcomes, these benefits are moderated by critical operational and strategic challenges. Future advancements in serverless computing, federated cloud architectures, and enhanced AI-driven security protocols will be instrumental in mitigating these risks and fully realizing the synergistic potential of Big Data and cloud computing technologies [5], [6]. ...
Article
Full-text available
The convergence of Big Data analytics and cloud computing represents a paradigm shift in how organizations manage, process, and derive insights from vast volumes of heterogeneous data. This study critically examines the synergistic relationship between Big Data and cloud platforms, focusing on the opportunities, architectural models, and emerging challenges that shape this evolving landscape. Employing a mixed-methods approach that integrates meta-analysis of literature from 2015–2022 and structured interviews with cloud architects, data scientists, and IT professionals, the research identifies measurable improvements in decision-making capabilities, data processing speeds, and cost efficiencies resulting from cloud adoption. However, it also highlights persistent barriers, including security vulnerabilities, latency issues, and hidden operational costs, which moderate the full realization of these benefits. Hypothetical results show that although platforms like AWS and Azure demonstrate substantial performance gains, variability in security outcomes and user satisfaction underscores the complexity of cloud integration strategies. The findings affirm that while Big Data and cloud computing offer transformative potential, realizing their full value demands proactive risk management, continuous innovation, and strategic alignment of technology with organizational goals. This study provides a roadmap for future research and practical implementations aimed at maximizing the effectiveness of Big Data-driven cloud initiatives.
... Rather than owning or maintaining physical data centers, organizations can rent cloud services. This paradigm shift enables businesses and individuals to use computing resources as needed without upfront investment in hardware [21]. Cloud computing is primarily categorized into three models: public cloud. ...
... Cloud computing has revolutionized the delivery of IT services by providing scalable, on-demand access to shared resources. Its dynamic, multi-tenant nature poses significant security challenges, particularly in detecting and mitigating cyber threats [1]. Intrusion Detection Systems (IDS) serve as a critical line of defense but often fall short in cloud environments due to limited scalability and adaptability [2]. ...
Article
Full-text available
The proliferation of cloud computing has transformed data storage and processing but also introduced complex security challenges. Traditional Intrusion Detection Systems (IDS) often struggle in dynamic cloud environments due to scalability, adaptability, and the high rate of false positives. Machine Learning (ML) has emerged as a powerful tool to enhance IDS by enabling systems to learn from vast datasets, identify anomalous behavior, and adapt to evolving threats. This paper investigates the application of ML techniques such as supervised, unsupervised, and deep learning to intrusion detection in cloud-based systems. It reviews key methodologies, evaluates performance across widely used benchmark datasets (NSL-KDD, CICIDS2017), and highlights real-world implementations in commercial cloud platforms. The study also addresses critical challenges including data privacy, adversarial ML, real-time detection, and scalability. Through a comprehensive analysis, we identify promising research directions such as federated learning, explainable AI, and hybrid cloud-edge IDS architectures.
... Organizations leverage cloud infrastructure to store, manage, and process vast amounts of data, benefiting from cost efficiency, flexibility, and accessibility. However, as cloud adoption grows, so do security concerns, including data breaches, unauthorized access, and denial-of-service attacks, which pose significant risks to organizations and individuals alike [1]. Traditional security mechanisms, such as rule-based firewalls and static access control policies, struggle to counter sophisticated cyber threats that continuously evolve in complexity and frequency [2]. ...
Article
Full-text available
Cloud computing has revolutionized data storage, processing, and accessibility, but it also introduces significant security challenges, including data breaches, insider threats, unauthorized access, and distributed denial-of-service (DDoS) attacks. Traditional security approaches, such as rule-based firewalls and static access control mechanisms, struggle to counter increasingly sophisticated cyber threats. Artificial Intelligence (AI) has emerged as a transformative solution, leveraging machine learning (ML), deep learning (DL), and natural language processing (NLP) to enhance cloud security. AI-driven threat detection systems analyze vast datasets in real time, identifying anomalies and predicting potential attacks with high accuracy. AI-powered automated incident response mechanisms help mitigate security risks by proactively addressing vulnerabilities and adapting to evolving threats. This paper examines the integration of AI techniques into cloud security frameworks, highlighting applications such as intelligent intrusion detection, adaptive authentication, AI-enhanced encryption, and automated compliance monitoring, as well as the advantages AI brings in reducing response time, improving threat intelligence, and optimizing resource allocation. AI's application in cybersecurity also poses challenges, including adversarial AI attacks, data bias, and computational overhead. By leveraging AI, organizations can achieve a more resilient and proactive defense against emerging cyber threats in cloud environments.
... Cloud computing provides the necessary processing power and storage facilities by offering a pool of shared and virtually unlimited resources and services (Hassan, 2011; Mell and Grance, 2011). Cloud services are typically hosted on the Internet, where users can easily subscribe/unsubscribe, scale up/down used services, and pay only for their actual use of these services. ...
Chapter
Full-text available
One can think of the Internet of Things (IoT) as a connected world not only of specialized devices with embedded sensors and actuators but also of everyday objects that, until recently, were not in the loop of the interconnected world of information technology (IT). This new paradigm allows these objects to communicate autonomously and exchange data with minimal or no human intervention. Recently, new IoT flavors have emerged, adding more intelligence and value to the network. These include both physical and virtual objects, people, data, and processes as key components. Such a network of connected objects poses several challenges, ranging from data management and analysis, data security and privacy, and interoperability to network management, standardization, and legal issues. This chapter gives an overview of the IoT and discusses current definitions, international standardization efforts, reference architecture proposals, enabling technologies, and a wide range of application areas.
... These capabilities, grounded in data mining and machine learning principles [2,3], are essential for managing the dynamic and complex nature of cloud environments. However, considerations such as data security, model explainability, the balance between automation and human oversight, and rigorous testing are crucial for successful implementation [5]. ...
Article
Full-text available
This study investigated the application of predictive analytics and auto-remediation in cloud computing operations. AI/ML algorithms analyze historical data to forecast future trends and potential issues, enabling proactive resource management and automated responses. This approach minimizes downtime, reduces manual effort, and optimizes resource allocation. However, successful implementation requires careful consideration of data quality, model accuracy, explainability, security, and human oversight. Future trends like XAI, real-time analytics, and AIOps promise even greater automation and efficiency in cloud operations.
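The predict-then-remediate loop described above can be sketched with a rolling statistical baseline: flag a metric reading as anomalous when it deviates sharply from recent history, and trigger a scaling action before saturation. This is a minimal illustration with invented names and thresholds, not the study's actual models.

```python
from collections import deque
from statistics import mean, stdev

class PredictiveRemediator:
    """Toy auto-remediation: z-score anomaly detection over a rolling
    window of a utilization metric, mapped to a scaling action."""
    def __init__(self, window: int = 10, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, cpu_util: float) -> str:
        action = "none"
        if len(self.history) >= 3:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(cpu_util - mu) / sigma > self.threshold:
                # Anomalous deviation: remediate before it becomes an outage.
                action = "scale_out" if cpu_util > mu else "scale_in"
        self.history.append(cpu_util)
        return action

r = PredictiveRemediator()
readings = [40, 41, 39, 42, 40, 41, 95]   # steady load, then a sudden spike
actions = [r.observe(x) for x in readings]
print(actions[-1])  # prints "scale_out"
```

Production systems would replace the z-score with forecast models (and add the human-oversight and explainability safeguards the study emphasizes), but the control loop shape — observe, predict, act — is the same.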
... Sarosh [21] explored a hybrid IDS using SVM and K-means clustering for virtualized cloud setups, enhancing anomaly detection speed and interpretability. Maheswari et al. [22] proposed an IDS based on deep-recurrent neural networks and optimized feature selection, showing improved classification accuracy on datasets like CICIDS and DARPA. ...
Article
Full-text available
As cloud computing becomes increasingly central to data storage and processing, the need for robust security mechanisms to protect sensitive information during cloud uploads is more critical than ever. This research presents a novel hybrid security framework that combines symmetric (AES) and asymmetric (RSA) encryption techniques with a machine learning-based Intrusion Detection System (IDS) to secure data transmissions in cloud environments. The proposed model addresses key challenges such as insider threats, data breaches, and insecure APIs by employing a two-tier approach: encrypting data for confidentiality and using ML-driven IDS to detect malicious patterns in real time. The system was evaluated using the CICIDS2017 dataset and implemented in a simulated cloud setting. Performance analysis demonstrated that the hybrid model outperforms standalone encryption or IDS systems in terms of detection accuracy, encryption speed, resource efficiency, and resilience against various attack vectors. The results support the model’s suitability for secure, scalable, and intelligent cloud data management, offering a future-proof solution adaptable to evolving cyber threats.
... Key Success Factors from a Technical Perspective. From a technical standpoint, the robustness of technology is a primary consideration (Chen et al., 2014;Mell & Grance, 2011). Technical robustness manifests in the reliability, security, and scalability of AIGC systems, ensuring effective, accurate, and secure system operation under various conditions. ...
Article
Full-text available
This study aims to explore the criteria and success factors for the application of Artificial Intelligence Generated Content (AIGC) in higher education, and guide its practice through the construction of a comprehensive system and framework. This study first identifies seven primary criteria, encompassing technical robustness, integration with existing systems, evidence-based practice, user acceptance and engagement, ethical considerations, collaborative ecosystems, and cultural and contextual sensitivity. These criteria are further refined into 19 subfactors. Utilizing the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method for analysis, the results indicate that user acceptance and engagement occupy a central position in AIGC applications, emerging as the primary factor influencing successful implementation. Simultaneously, the establishment of a collaborative ecosystem is identified as a critical aspect. Additionally, factors such as technical robustness, integration with existing systems, and evidence-based practice not only directly impact user acceptance and engagement but also indirectly affect other elements like the collaborative ecosystem. In terms of specific key success factors, scalability and feedback mechanisms play a crucial role in AIGC implementation. Furthermore, partnerships demonstrate high prominence in higher education AIGC applications, highlighting the importance of building and maintaining strong collaborative relationships for successful implementation. This study provides significant insights into theories in educational technology and offers practical guidance for higher education institutions in their applications.
... Cloud computing represents one of the most significant developments in distributed computing, offering scalable and flexible resource allocation models [7]. The three primary service models -Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)each provide different levels of abstraction and control over distributed resources. ...
Conference Paper
Full-text available
Distributed computing systems have become the backbone of modern computational infrastructure, enabling organizations to process vast amounts of data across geographically dispersed locations. This paper presents a comprehensive analysis of contemporary distributed computing architectures, examining their design principles, performance characteristics, and implementation challenges. We investigate various distributed computing paradigms including cloud computing, edge computing, and hybrid architectures, analyzing their effectiveness in handling large-scale data processing tasks. Our research covers fault tolerance mechanisms, load balancing strategies, and security considerations in distributed environments. Through extensive literature review and comparative analysis, we identify key performance metrics and evaluate the trade-offs between different architectural approaches. The paper also explores emerging trends such as serverless computing, blockchain integration, and quantum distributed systems. Our findings indicate that modern distributed systems must balance scalability, reliability, and cost-effectiveness while addressing challenges related to data consistency, network latency, and resource optimization. The research contributes to the understanding of distributed system design principles and provides insights for future development in this rapidly evolving field.
Article
Full-text available
Cloud computing (CC) represents a key digital transformation advancement, reshaping the ways in which businesses operate across diverse industries. The exponential growth in data production has created an unprecedented challenge in the ways in which data is created, processed, and managed. This study investigates the impact of cloud computing on the digital transformation of the enterprise, including issues concerning sustainability and long-term preservation and curation. While there has been a proliferation of studies concerning the adoption and implementation of cloud computing in the enterprise, there is still a gap in the literature concerning the use of cloud computing technology for long-term preservation, digital curation, and sustainability. The study employed a mixed-methods approach that utilized a systematic review of the literature and an Internet-based survey. The combination of the systematic review and survey was intended to provide insights into the key strategic factors impacting the use of cloud computing for long-term preservation and sustainability. The results of the study show that, despite the growing recognition of the benefits of cloud computing, most organizations are still concerned about issues such as security, privacy, accessibility, and cost. Concerns regarding the long-term preservation and sustainability of enterprise information are closely tied to the extent to which cloud computing services are deemed reliable and trustworthy. This study underscores the varying levels of satisfaction among users, with businesses acknowledging both the advantages and disadvantages of the current cloud solutions.
Chapter
The advent of self-service technology has transformed industries far beyond its initial applications in retail and banking, fundamentally reshaping how businesses operate and deliver services across sectors. This paper explores the impact of self-service transformation on financial services and other industries through a series of case studies. By examining real-world examples, we highlight how companies have leveraged self-service technologies to improve efficiency, reduce costs, and enhance customer experiences. In financial services, self-service technologies such as ATMs, mobile banking apps, and online financial management tools have empowered consumers to take greater control of their finances. These tools not only streamline routine transactions but also enable financial institutions to reallocate resources toward more personalized and high-value services. Beyond finance, industries such as healthcare, hospitality, and transportation have adopted self-service solutions to address operational challenges, improve service delivery, and cater to evolving consumer expectations.
Chapter
At some time in the twenty-first century the United States will start the transition into an economy with no human work. After the inflection point, job destruction will be greater than job creation and overall employment will decline. Once the decline starts it will continue and human employment over the next several centuries will approach a very small fraction of current employment, perhaps even zero. This book deals with the transition, not the end result. Chapter 3 focuses on the evolution of digital technology starting with what types of information objects can be represented by binary numbers. The basic technology to process binary numbers is integrated circuits. Its evolution and Moore’s law are discussed. Hardware consists of systems of integrated circuits that process binary numbers following software instructions. A summary of the evolution of digital technology hardware and software is presented. A subtopic of software is artificial intelligence applications. Next is the evolution of analog and digital communication. The advance of this technology has created a political economic social nervous system. The final topic is the impact of the advance of digital technology on discovery and invention.
Article
The use of cloud computing has grown rapidly worldwide. In light of this, the demand for professionals qualified to work with cloud technologies is also growing, creating an urgent need for hands-on training in educational environments. Accordingly, the goal of this work is to analyze how the resources of Oracle Cloud Infrastructure (OCI), made available through the Oracle Academy educational program, can be applied in the courses of an Information Systems program, helping both instructors and students. The methodological strategy adopted is based on cross-referencing the free resources offered by Oracle's cloud with the syllabi of the program's required courses, complemented by the authors' experience as instructors and practitioners in the field. The analysis found that more than half of the courses can benefit from the use of OCI. Browser-based accessibility and free access to advanced technologies make OCI a valuable tool for students' practical training, promoting digital inclusion and alignment with labor-market demands.
Article
Full-text available
This study aimed to examine the extent to which faculty members at Kuwait University employ cloud computing applications, using a descriptive survey methodology. The sample consisted of 216 faculty members, and the study instrument comprised 52 items whose validity and reliability were verified. The results showed a high degree of employment of cloud computing applications by faculty members, and their perception of the obstacles to using cloud computing was moderate. The study also revealed statistically significant differences for the experience variable, favouring those with less than 5 years of experience. Considering these findings, the researchers recommended that the university administration pay increased attention to incorporating and effectively utilizing cloud computing applications in education.
Chapter
Data centralisation has emerged as a transformative force in modern education, reshaping how institutions manage information, deliver educational content, and support student learning. At the core of this evolution lies the integration of advanced technologies such as cloud computing.
Article
Full-text available
The cloud-based software-as-a-service (SaaS) model delivers corporate software to organizations as a service over the internet, minimizing investment in on-premises facilities and automatically adapting IT resources to meet demand variations. Integrating two popular technology adoption frameworks, the technology acceptance model (TAM) and the technology, organization, and environment (TOE) framework, this study applies structural equation modeling to a carefully chosen sample of 204 technology-intensive small and medium-sized enterprises (SMEs) in Sweden to investigate the effect of various antecedents on the intention to adopt, and actual utilization of, SaaS-based cloud applications. The results are counterintuitive, both in the relationship between perceived ease of use and intention and in the inverse relationship between risk and trust. The central construct of TAM has an insignificant relationship with the intention to adopt SaaS applications, leading to substantial practical implications for Swedish SMEs. Similarly, a significant effect of trust combined with an insignificant impact of risk on intention challenges conventional wisdom. The novel integration of the two models also makes substantial theoretical contributions.
Article
Full-text available
Cloud technology has a significant impact on the accounting function, financial reporting, and the application of International Financial Reporting Standards (IFRS). This research aims to examine the correlation between the implementation of cloud technology, enhanced IFRS adoption, and the efficiency of the accounting function. Using a descriptive method and correlation coefficient, the study analyzes the relationships and degree of connection between these phenomena. The research involved 32 employees from financial operations in companies and accounting agencies in southeastern Serbia, primarily in the regions of Leskovac and Niš. The paper defines two hypotheses, presents the research findings, and draws conclusions based on them. The results indicate a modest relationship between the implementation of cloud technology and the increased efficiency of the accounting function, as evidenced by faster execution of accounting tasks, reduced overall costs, and the easier adoption of new systems. Additionally, the implementation of cloud technology has a slight positive impact on the improvement of IFRS application, through the use of specialized knowledge, the provision of reliable information, and the reduction of inconsistencies in IFRS application.
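The degree-of-connection analysis described in the abstract above rests on a correlation coefficient. As a minimal illustration of that statistic, the sketch below computes Pearson's r from scratch; the variable names and sample ratings are hypothetical, not the study's data.

```python
# Minimal Pearson correlation sketch (hypothetical ratings, not the study's sample).
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-5 survey ratings: cloud adoption level vs. perceived efficiency gain.
cloud_adoption = [1, 2, 2, 3, 4, 5, 5]
efficiency     = [2, 2, 3, 3, 4, 4, 5]
r = pearson_r(cloud_adoption, efficiency)
print(round(r, 3))
```

A value of r near +1 would indicate a strong positive association between the two rating scales; the study above reports only a modest relationship on its real data.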
Article
Full-text available
Cloud computing has revolutionized the way data is processed and stored, leading to increased demand for efficient machine learning models. However, the current centralized nature of cloud-based machine learning poses challenges in terms of scalability and privacy protection. This paper addresses these obstacles by proposing a novel approach called Adaptive Hierarchical Federated Learning. This approach enables the efficient distribution of machine learning tasks across multiple layers of a hierarchical cloud architecture, allowing for improved scalability and enhanced privacy preservation. The innovative method presented in this paper harnesses the power of federated learning while adapting dynamically to the varying computational resources within the hierarchical cloud environment. Through extensive experiments, the effectiveness and efficiency of the proposed Adaptive Hierarchical Federated Learning are demonstrated, highlighting its potential to significantly advance the field of cloud computing.
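The core aggregation step behind hierarchical federated learning can be sketched minimally: each edge node averages its local clients' models, then the cloud layer averages the edge models. The two-layer topology, sample-count weights, and function names below are illustrative assumptions, not the paper's Adaptive Hierarchical Federated Learning algorithm.

```python
# Minimal two-layer federated averaging sketch (illustrative assumption;
# not the paper's adaptive algorithm).

def fed_avg(models, weights):
    """Weighted average of model parameter vectors (lists of floats)."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

# Edge layer: each edge node first aggregates its own clients,
# weighted by each client's number of training samples...
edge_a = fed_avg([[1.0, 2.0], [3.0, 4.0]], weights=[10, 30])  # 40 samples total
edge_b = fed_avg([[5.0, 6.0]], weights=[20])                  # 20 samples total

# ...then the cloud layer aggregates the edge models, weighted by sample counts.
global_model = fed_avg([edge_a, edge_b], weights=[40, 20])
print(global_model)
```

Because raw training data never leaves the clients (only parameter vectors move up the hierarchy), this layering is what gives the approach its scalability and privacy properties.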
Thesis
Full-text available
In the context of increasingly complex and dynamic networks, the automatic management of IP addresses is essential. This work presents a full implementation of the Dynamic Host Configuration Protocol (DHCP) using the Rust programming language. The objective is to reproduce the standard behavior of a DHCP server while benefiting from Rust's strong guarantees in memory safety, performance, and concurrency. The developed system supports key features such as IP lease tracking, client MAC address identification, and basic security protections (blacklist, anti-flood). Built with asynchronous programming using Tokio and a modular architecture, the resulting DHCP server shows high reliability and responsiveness in simulated environments. This project demonstrates the suitability of Rust for building efficient and secure network services.
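The lease-tracking behavior described in the thesis above (implemented there in Rust with Tokio) can be sketched in outline. The pool addresses, lease duration, and class name below are illustrative assumptions; real DHCP also involves the DISCOVER/OFFER/REQUEST/ACK exchange and the security protections the thesis adds.

```python
# Minimal DHCP-style lease table sketch (illustrative; the thesis implements
# this in Rust with Tokio, with blacklist and anti-flood protections on top).
import time

class LeaseTable:
    def __init__(self, pool, lease_secs=3600):
        self.free = list(pool)     # addresses available for allocation
        self.leases = {}           # MAC address -> (ip, expiry timestamp)
        self.lease_secs = lease_secs

    def offer(self, mac, now=None):
        """Return the client's existing lease, or allocate a new address."""
        now = time.time() if now is None else now
        self._expire(now)
        if mac in self.leases:     # renewal: the client keeps its address
            ip, _ = self.leases[mac]
        else:
            if not self.free:
                return None        # pool exhausted: no offer can be made
            ip = self.free.pop(0)
        self.leases[mac] = (ip, now + self.lease_secs)
        return ip

    def _expire(self, now):
        """Reclaim addresses whose leases have lapsed."""
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[mac]
                self.free.append(ip)

table = LeaseTable(["192.168.0.10", "192.168.0.11"], lease_secs=60)
ip1 = table.offer("aa:bb:cc:dd:ee:01", now=0)
ip2 = table.offer("aa:bb:cc:dd:ee:01", now=30)  # renewal keeps the address
print(ip1, ip2)
```

The same state machine (allocate, renew, expire, reclaim) is what an asynchronous Rust server would guard behind its concurrency primitives while serving many clients at once.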
Article
Full-text available
With the industrial revolutions, many new digital technologies and concepts have emerged, significantly transforming consumer demands and expectations. These shifts have deeply influenced various sectors and compelled companies to fundamentally change their business models and production processes, making creative and innovative practices essential. The latest industrial revolution, known as Industry 4.0, continues to impact countries, industries, and consumers on a large scale. In this context, consumer expectations, sectoral production systems, and public services are undergoing a digital transformation driven by Industry 4.0 technologies, while R&D projects for upcoming innovations are accelerating.
Studies in the literature and by practitioners highlight that the advantages of digital transformation in production, competition, and consumption depend on accurate and effective implementations. Numerous studies have been conducted, applications developed, and sector leaders have emphasized the importance of this transformation. Industry 4.0 affects not only global firms but also local businesses, including Turkish companies. This study examines the development process, technologies, and applications of industrial revolutions and Industry 4.0, evaluates its effects on Turkish firms within the scope of global and local research, and presents insights from managers in different sectors, revealing the significance of this transformation from a global-to-local perspective.
Chapter
As organizations seek to navigate the complexities of contemporary energy environments, the key role of artificial intelligence (AI) in optimizing energy management has emerged as a critical area in organizations. This study examines the relationship between AI and energy management in organizations. It examines how AI can be used to increase sustainability and efficiency in energy management. Using AI-driven solutions allows organizations to use machine learning algorithms, predictive modeling, and advanced data analytics; gain actionable insights into energy consumption patterns; identify inefficiencies; and implement targeted strategies for optimization. Integration of AI into energy management not only increases efficiency in resource use but also facilitates the transition to sustainable practices. AI enables organizations to dynamically respond to fluctuations in energy demand, optimize operational costs, and minimize environmental impact through real-time monitoring and adaptive control systems. In addition, AI-supported energy management systems contribute to the development of smart and resilient infrastructures, promoting a more harmonious and responsive organizational ecosystem. This study examines the business and management literature on AI and energy management via a systematic literature review. In doing so, this study aims to shed light on the potential of AI to revolutionize traditional energy paradigms and provide a path toward energy sustainability, cost-effectiveness, and environmental responsibility for organizations across a variety of industries. It also explores the many benefits and challenges of the convergence of AI and energy management.
Article
In a dynamically developing global economy, technology is becoming a key factor in increasing efficiency, innovation potential, and competitiveness of companies. This article analyzes the impact of technological solutions on business processes, including optimization of operations, automation, implementation of data analytics and digital transformation. The role of artificial intelligence (AI), cloud services, and the Internet of Things (IoT) in modernizing traditional approaches is considered. Particular attention is paid to the need for organizations to adapt to technological changes in order to maintain sustainability. Based on case studies and scientific research, the relationship between technology and the improvement of business models is demonstrated.
Article
Full-text available
Additive Manufacturing (AM), often referred to as 3D printing, has revolutionized production capabilities in small and medium-sized enterprises (SMEs), offering unparalleled customization, rapid prototyping, and localized manufacturing. However, this innovation introduces significant cyber risks due to the digitization of design files, integration of IoT-enabled devices, and reliance on cloud-based systems. SMEs often lack the sophisticated cybersecurity infrastructure required to counteract these threats, making them vulnerable to intellectual property theft, sabotage, and data breaches. This article presents a comprehensive cyber risk management framework tailored for AM operations in SMEs. The study employs a mixed-methods approach combining qualitative analysis of SME cybersecurity posture with a quantitative risk assessment model to identify, evaluate, and prioritize threats. Recommendations include a multilayered defense strategy comprising secure file protocols, employee training, network segmentation, and adherence to cybersecurity standards such as NIST SP 800-171. The proposed framework aims to enhance cyber resilience and ensure sustainable integration of AM in SME manufacturing ecosystems.
Article
Full-text available
As organizations increasingly migrate to cloud-based Enterprise Resource Planning (ERP) systems, the collections module has emerged as a critical area affecting cash flow and financial performance. Despite its potential for automation and scalability, many users encounter persistent challenges that hinder timely and efficient collections. This study investigates the common pain points experienced by cloud ERP customers, including system usability issues, integration gaps, limited customization, and workflow inefficiencies. Through qualitative interviews and analysis of user feedback from various industries, the paper identifies recurring obstacles that impede collections performance. Based on these insights, it proposes a set of strategic and technological enhancements aimed at streamlining the collections process, improving user experience, and accelerating receivables. The recommendations focus on improved dashboard design, AI-driven follow-up mechanisms, better integration with third-party tools, and enhanced training resources. This research contributes practical guidance for both ERP vendors and end-users striving to optimize cloud-based collections operations.