Conference Paper

An updated performance comparison of virtual machines and Linux containers

... Containers provide lightweight, isolated environments that encapsulate applications and their dependencies, ensuring consistency across different computing environments (Bernstein, 2014). Unlike virtual machines (VMs), which require a full operating system (OS) instance for each application, containers share the host OS kernel while maintaining isolated processes, significantly reducing resource overhead and improving performance (Felter et al., 2015). ...
... Kubernetes automated orchestration, scaling, and deployment, becoming the industry standard for containerized workloads (Kratzke & Quint, 2017). This shift has driven the transition from monolithic architectures to microservices-based and DevOps-driven workflows, improving scalability, deployment efficiency, and resource utilization (Felter et al., 2015). Today, containerization is central to modern cloud computing, enabling enterprises to build and manage applications more efficiently in distributed environments. ...
... Compared to traditional virtualization, containerization is more efficient as it eliminates the need for separate OS instances, reducing overhead and improving startup times (Felter et al., 2015). Studies show that containers consume 30-50% fewer resources than VMs while maintaining high performance (Morabito et al., 2015). ...
Article
Full-text available
Containerization has revolutionized modern application development, providing a lightweight, scalable, and efficient alternative to traditional virtualization. Containers enable faster deployment, improved resource utilization, and seamless integration with cloud-native architectures. Their adoption has accelerated across industries due to benefits such as portability, DevOps automation, and cost efficiency. As organizations continue to embrace microservices and cloud-native ecosystems, container technologies are evolving with advancements in serverless computing, AI-driven orchestration, and edge computing. This paper explores the growing significance of containers in modern applications, highlighting their impact on scalability, innovation, and the future of software development.
... Serverless Frameworks: Knative and OpenFaaS enable event-driven execution and fine-grained scaling of functions. These frameworks enhance the architecture's ability to handle dynamic workloads and reduce resource overheads (Felter et al., 2015). AI Pipeline Tools: Kubeflow automates ML workflows, while TensorFlow Serving enables efficient model deployment and inference. ...
... Knative: Configured for event-driven workflows, leveraging Kubernetes' autoscaling capabilities to handle spikes in real-time inference tasks. OpenFaaS: Deployed for executing lightweight AI tasks, such as data preprocessing and feature extraction (Felter et al., 2015). ...
... Serverless architectures demonstrated superior scalability, handling spikes in request loads with minimal degradation in performance. However, cold start latencies introduced delays ranging from 100ms to 300ms, particularly in Knative-based deployments (Felter et al., 2015). By contrast, containerized workflows exhibited consistent latency due to pre-initialized containers but struggled with resource contention during peak loads. ...
Preprint
Full-text available
This paper aims to explore the integration of serverless architectures with Kubernetes and OpenStack to optimize AI workflows. The study evaluates the architecture's scalability, performance, resource utilization, cost efficiency, and energy consumption while addressing challenges such as cold start latency and GPU resource management.The research involves an experimental setup consisting of OpenStack for IaaS, Kubernetes for container orchestration, and serverless frameworks like Knative and OpenFaaS for function execution. AI workflows, including data preprocessing, model training, and real-time inference, were implemented. Performance metrics such as latency, throughput, energy efficiency, and cost were analyzed using tools like Prometheus, Grafana, and TensorFlow Profiler. Comparisons were drawn between serverless and containerized workflows across diverse workload scenarios. The integration demonstrated significant benefits for AI workflows, particularly in real-time inference tasks, with serverless architectures exhibiting better scalability and cost efficiency. Containerized workflows achieved superior GPU utilization and cost performance for batch processing tasks. Serverless workflows reduced energy consumption by up to 25% during idle periods but were impacted by cold start latencies and resource contention during peak workloads. The findings emphasize the transformative potential of serverless-Kubernetes-OpenStack integration, particularly for scalable and energy-efficient AI workflows. However, trade-offs in performance and architectural complexity were noted. The study contributes by optimizing GPU utilization, reducing energy consumption, and supporting hybrid workloads, addressing key gaps in prior research. Recommendations include strategies for latency mitigation, resource orchestration, and workload placement.
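The cold-start behaviour described above (an extra 100-300 ms on otherwise warm request paths) can be illustrated with a toy simulation; the keep-alive window, warm-path latency, and penalty range below are illustrative assumptions, not figures from the paper.

```python
import random

# Toy model: a serverless platform keeps an instance warm for a while
# after each request; a request arriving after the keep-alive window
# pays an extra cold-start penalty (assumed 100-300 ms, per the range
# reported above).
KEEP_ALIVE_S = 60.0

def request_latency(gap_since_last_s, warm_ms=20.0, rng=random):
    """Return simulated latency (ms) for a request arriving after a gap."""
    cold_penalty = 0.0
    if gap_since_last_s > KEEP_ALIVE_S:
        cold_penalty = rng.uniform(100.0, 300.0)  # assumed cold-start range
    return warm_ms + cold_penalty

# Back-to-back requests stay warm; a request after a long idle period
# pays the cold-start penalty once.
burst = [request_latency(gap) for gap in [0.5, 1.0, 2.0]]
idle_hit = request_latency(3600.0)
```

This also shows why pre-initialized containers exhibit the consistent latency noted above: they never take the cold branch, at the cost of paying for idle capacity.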
... These methods rely on historical data and specific configurations to predict performance metrics such as response time, throughput, and resource utilization. However, these approaches have several limitations (Felter et al., 2015): ...
... Various performance metrics, including CPU and memory utilization, disk I/O, and network latency for each VM and container instance, represent the state of the system. The state also includes workload patterns, such as the number of incoming requests, the type of tasks being processed, and the criticality of those tasks (Felter et al., 2015). ...
... The AI system's ability to reduce overprovisioning and optimize resource utilization without compromising performance, compared to traditional methods that may result in resource wastage (Felter et al., 2015; Smith & Chen, 2021). ...
Article
Full-text available
The accurate calculation and comparison of performance in cloud environments are critical for optimizing resource utilization, particularly with the increasing use of virtual machines (VMs) and containers. This research proposes an AI-driven resource management framework that surpasses traditional machine learning algorithms by enabling real-time, autonomous performance optimization. While machine learning models provide predictive capabilities, they often require manual tuning and retraining for changing workloads. In contrast, the proposed AI-driven system, utilizing techniques such as reinforcement learning and adaptive optimization, continuously adjusts resource allocation based on real-time performance metrics like response time, throughput, and server utilization. This dynamic, self-improving system can respond to fluctuating workloads and network conditions without the need for constant retraining, offering superior flexibility and faster response times. The framework will be validated through extensive experiments across multi-cloud and edge computing environments, demonstrating its ability to significantly reduce calculation time while improving scalability and efficiency. Additionally, this approach incorporates enhanced security mechanisms, combining the isolation benefits of VMs with the lightweight efficiency of containers, providing a comprehensive, real-time solution for cloud-native applications.
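The kind of state representation and feedback-driven allocation loop the abstract describes can be sketched minimally as follows; the field names, thresholds, and the simple scale-out/scale-in rule are illustrative stand-ins for the learned policy, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    # Per-instance metrics of the sort listed above as the state:
    # utilization, I/O, latency, plus the observed workload pattern.
    cpu_util: float       # fraction, 0.0-1.0
    mem_util: float       # fraction, 0.0-1.0
    disk_io_mbps: float
    net_latency_ms: float
    pending_requests: int

def adjust_allocation(replicas: int, state: SystemState,
                      high=0.8, low=0.3) -> int:
    """Feedback rule standing in for a learned policy: scale out when
    utilization is high, scale in when it is low, never below one replica."""
    pressure = max(state.cpu_util, state.mem_util)
    if pressure > high:
        return replicas + 1
    if pressure < low and replicas > 1:
        return replicas - 1
    return replicas

hot = SystemState(0.92, 0.70, 120.0, 4.0, 250)
cold = SystemState(0.10, 0.15, 5.0, 1.0, 3)
```

A reinforcement-learning agent would replace the fixed thresholds with a policy updated from observed reward (e.g. latency kept under an SLO at minimal replica count), which is what lets it track workload drift without manual retuning.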
... The overhead for compute-intensive tasks was around 6%, and for I/O tasks around 9%. Felter et al. [12] also looked at the performance differential of non-scientific software within virtualised and containerised environments. The performance studies published in the literature [10,12,24] found negligible differences between container and native performance, as long as the container is set up and used appropriately, e.g. using data mounts for I/O-intensive tasks [12] and not setting up many containers for individual tasks with run times on the order of the container setup time (typically fractions of a second) [10]. ...
Preprint
Containers are an emerging technology that hold promise for improving productivity and code portability in scientific computing. We examine Linux container technology for the distribution of a non-trivial scientific computing software stack and its execution on a spectrum of platforms from laptop computers through to high performance computing (HPC) systems. We show on a workstation and a leadership-class HPC system that when deployed appropriately there are no performance penalties running scientific programs inside containers. For Python code run on large parallel computers, the run time is reduced inside a container due to faster library imports. The software distribution approach and data that we present will help developers and users decide on whether container technology is appropriate for them. We also provide guidance for the vendors of HPC systems that rely on proprietary libraries for performance on what they can do to make containers work seamlessly and without performance penalty.
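The observation above that Python run time shrinks inside a container due to faster library imports can be checked with a small timing harness; the stdlib module used here is an arbitrary stand-in for a large scientific library, whose import cost on a shared parallel filesystem is what containers amortize.

```python
import importlib
import sys
import time

def time_import(module_name: str) -> float:
    """Return wall-clock seconds to import a module from a cold state.

    Removing the module from sys.modules forces a real (re-)import; for
    large scientific stacks the file lookups this triggers dominate
    startup time on shared filesystems, whereas inside a container image
    they hit local storage.
    """
    sys.modules.pop(module_name, None)
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

elapsed = time_import("json")  # stdlib module used as a stand-in
```

Running the same harness natively and inside a container on the target filesystem gives a direct, apples-to-apples measurement of the import-time effect.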
... In our framework, Linux containers are employed to provide application isolation on the edge node through virtualisation. Our rationale for choosing container virtualisation technology instead of VMs is the reduced boot-up times and enhanced isolation provided by the former [6]. Moreover, on limited hardware platforms, such as edge nodes, containers are appropriate given that they are relatively lightweight and have low overheads. ...
...
    if Prt_i ∈ Prt then
3       assign the same no. of ports from Prt to Prt_i;
4   end
5   assign an additional port from Prt to Prt_i;
6   initialise LXC container on s_i, Prt_i;
7   update the firewall of Prt_i;    // using the iptables command
8   send response to cloud server manager;
9   else
10      reject(request_i);
11  end
A resource check and a priority check are performed when the setup request is received (the basic service takes the highest priority, and other workloads with a high priority can be executed). If the new request ranks lower than any of the currently executing edge servers, or there is not sufficient resource to launch a new container, then the request is rejected (Line 10). ...
... The Pokémon Go server is known to have crashed multiple times during its launch due to extreme user activity [6,7]. For such a game, not only is it essential to replicate servers in different data centers, but, given the large number of user connections, an edge layer between the cloud and the user can reduce the distance of communication. ...
Preprint
Current computing techniques using the cloud as a centralised server will become untenable as billions of devices get connected to the Internet. This raises the need for fog computing, which leverages computing at the edge of the network on nodes, such as routers, base stations and switches, along with the cloud. However, to realise fog computing, the challenge of managing edge nodes will need to be addressed. This paper is motivated to address the resource management challenge. We develop the first framework to manage edge nodes, namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for provisioning and auto-scaling edge node resources are proposed. The feasibility of the framework is demonstrated on a Pokémon Go-like online game use-case. The benefits of using ENORM are observed as reduced application latency of between 20% and 80%, and reduced data transfer and communication frequency between the edge node and the cloud by up to 95%. These results highlight the potential of fog computing for improving the quality of service and experience.
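The admission behaviour described above for ENORM (reject when a request ranks below any currently executing workload or when resources are insufficient) can be sketched as a small predicate; the priority encoding (lower integer = higher priority) and parameter names are assumptions for illustration, not the framework's API.

```python
def admit(request_priority: int, running: list[int],
          free_cpu: float, need_cpu: float) -> bool:
    """Admission rule sketched from the description above.

    Reject when there is not enough free resource for a new container,
    or when the request ranks lower (larger integer) than any workload
    already executing on the edge node; otherwise admit it.
    """
    if need_cpu > free_cpu:
        return False
    if any(request_priority > p for p in running):
        # ranked below something already executing
        return False
    return True
```

On admission, the framework would then perform the port assignment, container initialisation, and firewall update listed in the excerpt above.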
... IV. RELATED WORK Some papers have explored the overhead of container-based virtualization tools, as presented by [13], [43], [22]. Mostly they compare the performance overhead of COS versus classic virtual machine technologies (e.g. KVM, Linux-VServer). ...
... Performance evaluations in the literature usually do not test the HPL-LAPACK benchmark, but rather a version of HPL-LINPACK, where matrix size could impact the final result (e.g. CPU cycles). Reports like [13] used a compiled version of LINPACK from Intel; here we compiled the binary inside each COS to replicate a normal workflow when running on an HPC cluster. Moreover, HPL results may vary significantly depending on the CPU architecture. ...
Preprint
Full-text available
Virtualization technologies have evolved along with the development of computational environments, since virtualization offered features needed at the time, such as isolation, accountability, resource allocation, and fair resource sharing. Novel processor technologies bring to commodity computers the possibility of emulating diverse environments where a wide range of computational scenarios can be run. Along with processor evolution, system developers have created different virtualization mechanisms, where each new development enhanced the performance of previous virtualized environments. Recently, operating-system-based virtualization technologies have captured the attention of a broad range of communities (from industry to academia and research) because of their significant performance improvements. In this paper, the features of three container-based operating system virtualization tools (LXC, Docker and Singularity) are presented. LXC, Docker, Singularity and bare metal are put under test through a customized single-node HPL benchmark and an MPI-based application for the multi-node testbed. The disk I/O performance, memory (RAM) performance, network bandwidth and GPU performance are also tested for the COS technologies versus bare metal. Preliminary results and the conclusions drawn from them are presented and discussed.
... Krylovskiy [10] evaluates Docker for resource-constrained devices, identifying negligible overhead and even outperforming native execution in some tests. Similar performance characteristics are identified by Felter et al. [11], who report negligible overhead for CPU and memory performance. Felter et al. recommend that for input/output-intensive workloads, virtualization should be used carefully. ...
... The latency is improved by a factor of 13.9 when utilizing a PREEMPT_RT real-time Linux kernel in comparison to a generic Linux kernel. Further research has also shown similar results when utilizing Docker for the deployment strategy, where Felter et al. [11] found that Docker has negligible overhead with regard to CPU and memory performance, and Krylovskiy [10] presents negligible overhead introduced by Docker when executed on an ARM CPU. ...
Preprint
Companies developing and maintaining software-only products like web shops aim for establishing persistent links to their software running in the field. Monitoring data from real usage scenarios allows for a number of improvements in the software life-cycle, such as quick identification and solution of issues, and elicitation of requirements from previously unexpected usage. While the processes of continuous integration, continuous deployment, and continuous experimentation using sandboxing technologies are becoming well established in said software-only products, adopting similar practices for the automotive domain is more complex mainly due to real-time and safety constraints. In this paper, we systematically evaluate sandboxed software deployment in the context of a self-driving heavy vehicle that participated in the 2016 Grand Cooperative Driving Challenge (GCDC) in The Netherlands. We measured the system's scheduling precision after deploying applications in four different execution environments. Our results indicate that there is no significant difference in performance and overhead when sandboxed environments are used compared to natively deployed software. Thus, recent trends in software architecting, packaging, and maintenance using microservices encapsulated in sandboxes will help to realize similar software and system engineering for cyber-physical systems.
... (Ferrari et al., 2021) studied Wasm performance on edge devices, while (Fiedler et al., 2020) analyzed Wasm startup latency in the context of serverless computing. (Felter et al., 2015) focused on Docker container performance on x86 architectures but did not explore Wasm. ...
Conference Paper
Docker and WebAssembly (Wasm) are two pivotal technologies in modern software development, each offering unique strengths in portability and performance. The rise of Wasm, particularly in conjunction with container runtimes, highlights its potential to enhance efficiency in diverse application stacks. However, a notable gap remains in understanding how Wasm containers perform relative to traditional multi-platform containers across multiple architectures and workloads, especially when optimizations are employed. In this paper, we aim to empirically assess the performance and usability implications of native multi-platform containers versus Wasm containers under optimized configurations. We focus on critical metrics including startup time, pull time (both fresh and cached), and image sizes for three distinct workloads and architectures: AMD64 (AWS bare metal) and two embedded boards, the Nvidia Jetson Nano with ARM64 and the StarFive VisionFive 2 with RISCV64. To address these objectives, we conducted a series of experiments using Docker and containerd with multi-platform built images for native containers and Wasmtime as the WebAssembly runtime within the containerd/Docker ecosystem. Our findings show that while native containers achieve slightly faster startup times, Wasm containers excel in agility, maintain image sizes of approximately 27.0% of their native counterparts, and achieve a significant reduction in pull times across all three architectures of up to 25% using containerd. With continued optimizations, Wasm has the potential to emerge as a viable choice in environments that demand both reduced image size and cross-platform portability. It will not replace the current container paradigm soon; rather, it will be integrated into this framework and complement containers instead of replacing them.
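The image-size and pull-time comparisons above reduce to simple ratio arithmetic; this sketch makes the two metrics explicit, using made-up numbers rather than the paper's measurements.

```python
def relative_pct(candidate: float, baseline: float) -> float:
    """Express candidate as a percentage of baseline (e.g. image sizes)."""
    return 100.0 * candidate / baseline

def reduction_pct(candidate: float, baseline: float) -> float:
    """Percentage reduction of candidate versus baseline (e.g. pull times)."""
    return 100.0 * (baseline - candidate) / baseline

# Illustrative figures only: a 27 MB Wasm image versus a 100 MB native
# image reproduces the "about 27% of native size" shape of result, and a
# 7.5 s versus 10 s pull reproduces the "up to 25% faster" shape.
size_ratio = relative_pct(27.0, 100.0)
pull_saving = reduction_pct(7.5, 10.0)
```

Keeping the two helpers separate matters when reading such results: "27% of native size" and "25% reduction" are different metrics, and conflating them inverts the comparison.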
... The inserted security checks for every memory access slow down the performance of SFI-based approaches [13]. Although the vmfunc instruction accelerates EPT switching [18,19,37], running systems in VMs still incurs extra overhead due to nested paging and I/O virtualization [49,50]. BULKHEAD benefits from efficient permission validations and switches with PKS. ...
... This enables NeuroStore to better match project requirements while reducing the overhead of smaller projects. c) Ease of Use: NeuroStore is deployed using containerization [16,17] technology, which enables users to deploy system components with a single click after making a few configuration changes. The system does not require high-performance servers or specialized software knowledge for deployment and management, and provides concise and easy-to-understand interfaces that are accessible to interested researchers. ...
Preprint
With the rapid advancement of brain-computer interface (BCI) technology, the volume of physiological data generated in related research and applications has grown significantly. Data is a critical resource in BCI research and a key factor in the development of BCI technology, making efficient storage and management of this data increasingly vital. In the realm of research, ample data can facilitate the development of novel algorithms, which can be more accurately validated. In terms of applications, well-organized data can foster the emergence of new business opportunities, thereby maximizing the commercial value of the data. Currently, there are two major challenges in the storage and management of BCI data: providing different classification storage modes for multi-modal data, and adapting to varying application scenarios while improving storage strategies. To address these challenges, this study has developed the NeuroStore BCI data persistence system, which provides a general and easily scalable data model and can effectively handle multiple types of data storage. The system has a flexible distributed framework and can be widely applied to various scenarios. It has been utilized as the core support platform for efficient data storage and management services in the "BCI Controlled Robot Contest in World Robot Contest."
... Containerization refers to a method of operating system virtualization in which applications are run in isolated user spaces, called containers, while using the same shared operating system kernel [3]. A container encapsulates an application along with its dependencies, libraries, binaries, and configuration files into a single package that can run consistently across different computing environments [4]. ...
Article
Full-text available
Containerization has emerged as a transformative approach to application deployment and management in cloud environments, offering lightweight, portable, and efficient alternatives to traditional virtualization. This paper presents a systematic review of containerization technologies, their architectural principles, and their impact on modern software development and deployment practices. We analyze the benefits of containerization including improved resource utilization, deployment consistency, application isolation, and rapid scaling capabilities. We also examine challenges related to security, orchestration complexity, and performance overhead. Through a comprehensive analysis of recent research and industry practices, we aim to provide insights into the current state of containerization and its role in the evolving cloud computing landscape.
... Containers are lightweight, executable software packages that encapsulate an application along with its dependencies, including libraries, configuration files, and binaries, ensuring consistent execution across different environments [19]. Unlike virtual machines (VMs) that run their operating system on top of the host system, containers share the host operating system kernel to improve resource utilization [20]. Additionally, containers operate in isolated environments to enhance the security and stability of software applications [21]. ...
Preprint
Full-text available
Software development industries are increasingly adopting containers to enhance the scalability and flexibility of software applications. Security in containerized projects is a critical challenge that can lead to data breaches and performance degradation, thereby directly affecting the reliability and operations of the container services. Despite the ongoing effort to manage the security issues in containerized projects in software engineering (SE) research, more focused investigations are needed to explore the human perspective of security management and the technical approaches to security management in containerized projects. This research aims to explore security management in containerized projects by exploring how SE practitioners perceive the security issues in containerized software projects and their approach to managing such issues. A clear understanding of security management in containerized projects will enable industries to develop robust security strategies that enhance software reliability and trust. To achieve this, we conducted two separate semi-structured interview studies to examine how practitioners approach security management. The first study focused on practitioners perceptions of security challenges in containerized environments, where we interviewed 15 participants between December 2022 and October 2023. The second study explored how to enhance container security, with 20 participants interviewed between October 2024 and December 2024. Analyzing the data from both studies reveals how SE practitioners address the various security challenges in containerized projects. Our analysis also identified the technical and non-technical enablers that can be utilized to enhance security.
... However, these environments lack flexibility and resource sharing, leading to lower utilization in multi-tenant setups (Kominos et al., 2016). Virtual machines, supported by OpenStack's Nova component using hypervisors such as KVM, provide strong resource isolation but incur performance overheads due to virtualization, particularly in CPU, memory, and disk I/O operations (Felter et al., 2015). Containers, managed through Nova-docker, offer lightweight virtualization and faster boot times but face networking challenges such as reduced bandwidth efficiency and additional latency when integrated with Neutron (Kominos et al., 2016). ...
Preprint
Full-text available
The integration of Kubernetes with OpenStack offers a promising approach for deploying scalable and efficient AI workflows in multi-tenant cloud environments. This study evaluates the performance, scalability, and security implications of this integration, focusing on bare-metal, virtual machines (VMs), and containers as resource provisioning methods. The objectives include designing a unified framework combining Kubernetes' orchestration capabilities with OpenStack's infrastructure provisioning, assessing resource optimization strategies, and proposing best practices for AI workload management. The methodology involves benchmarking CPU, memory, disk I/O, and network performance using tools such as PXZ, Sysbench, and Netperf. The study also examines the scalability and security of multi-tenant deployments, leveraging Kubernetes namespaces, GPU sharing, and Neutron overlays. Key metrics include latency, throughput, and cost-effectiveness. The results highlight that bare-metal environments provide the highest raw performance, while containers achieve a balance between efficiency and scalability. VMs offer robust isolation but incur significant overheads. Scalability analysis demonstrates containers' superior resource utilization and cost-effectiveness, whereas bare-metal systems struggle with underutilization in multi-tenant setups. The discussion addresses challenges such as network overheads and resource contention, proposing solutions like optimizing Kubernetes-Neutron configurations and adopting advanced GPU scheduling. This research contributes to the optimization of cloud-native AI workflows and underscores the importance of secure, scalable infrastructure in hybrid cloud environments.
... Abdellatief et al. [15] provided a performance comparison of Type-I and Type-II hypervisors in different scenarios. Felter et al. [16] focused on the KVM hypervisor and the Docker container engine for cloud computing. Zhanibek [17] examined the performance of Docker and LXC within the Infrastructure-as-a-Service cloud model. ...
Preprint
The emergence of Software-Defined Vehicles (SDVs) signifies a shift from a distributed network of electronic control units (ECUs) to a centralized computing architecture within the vehicle's electrical and electronic systems. This transition addresses the growing complexity and demand for enhanced functionality in traditional E/E architectures, with containerization and virtualization streamlining software development and updates within the SDV framework. While widely used in cloud computing, their performance and suitability for intelligent vehicles have yet to be thoroughly evaluated. In this work, we conduct a comprehensive performance evaluation of containerization and virtualization on embedded and high-performance AMD64 and ARM64 systems, focusing on CPU, memory, network, and disk metrics. In addition, we assess their impact on real-world automotive applications using the Autoware framework and further integrate a microservice-based architecture to evaluate its start-up time and resource consumption. Our extensive experiments reveal a slight 0-5% performance decline in CPU, memory, and network usage for both containerization and virtualization compared to bare-metal setups, with more significant reductions in disk operations: 5-15% for containerized environments and up to 35% for virtualized setups. Despite these declines, experiments with actual vehicle applications demonstrate minimal impact on the Autoware framework, and in some cases, a microservice architecture integration improves start-up time by up to 18%.
... Table 2 below presents the hardware specifications of the physical devices. ... [27], so it simplifies our hardware-software building blocks and does not affect the evaluation measurement. ROS2 provides two client libraries, rclcpp (for C++) and rclpy (for Python), to create nodes, subscribers, and publishers. ...
Preprint
Full-text available
In the autonomous vehicle and self-driving paradigm, cooperative perception, i.e., exchanging sensor information among vehicles over wireless communication, has added a new dimension. Generally, an autonomous vehicle is a special type of robot that requires real-time, highly reliable sensor inputs due to functional safety. Autonomous vehicles are equipped with a considerable number of sensors that provide the data required to make driving decisions and to share with surrounding vehicles. The inclusion of the Data Distribution Service (DDS) as communication middleware in ROS2 has proved its potential as a reliable real-time distributed system. DDS comes with a scoping mechanism known as a domain. Whenever a ROS2 process is initiated, it creates a DDS participant. It is important to note that there is a limit to the number of participants allowed in a single domain. Efficient handling of numerous in-vehicle sensors and their messages demands the use of multiple ROS2 nodes in a single vehicle. Additionally, in the cooperative perception paradigm, a significant number of ROS2 nodes can be required when a vehicle functions as a single ROS2 node. These ROS2 nodes cannot all be part of a single domain due to the DDS participant limit; thus, cross-domain communication is unavoidable. Moreover, there are different vendor-specific implementations of DDS, each with its own configuration, which directly affects communication between ROS2 nodes. Communication between vehicles, robots, or ROS2 nodes depends directly on the vendor-specific configuration, data type, data size, and the DDS implementation used as middleware. In our study, we evaluate and investigate the limitations, capabilities, and prospects of cross-domain communication for various vendor-specific DDS implementations across diverse sensor data types.
... On the other hand, the research challenges associated with elastic services include the ability to accurately predict computational demand and performance of applications under different resource allocations [124,199], the use of these workload and performance models in informing resource management decisions in middleware [126], and the ability of applications to scale up and down, including dynamic creation, mobility, and garbage collection of VMs, containers, and other resource abstractions [212]. While virtualization (e.g., VMs) has achieved steady maturity in terms of performance guarantees rivalling native performance for CPU-intensive applications, ease of use of containers (especially quick restarts) has led to the adoption of containers by the developers community [75]. Programming models that enable dynamic reconfiguration of applications significantly help in elasticity [211], by allowing middleware to move computations and data across Clouds, between public and private Clouds, and closer to edge resources as needed by future Cloud applications running over sensor networks such as the IoT. ...
Preprint
Full-text available
The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.
... Gupta et al. [61] ran experiments using benchmarks and applications on various computing environments, including supercomputers and clouds, to answer the question "why and who should choose cloud for HPC, for what applications, and how should cloud be used for HPC?". They also considered thin Virtual Machines (VMs), OS-level containers [109][49], and hypervisor- and application-level CPU affinity [84]. They concluded that public clouds are cost-effective for small scale applications but can complement supercomputers (i.e. ...
Preprint
High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-premise resources and peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, which range from how to extract the best performance of an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision on what we believe is ahead of us, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast increasing wave of new HPC applications coming from big data and artificial intelligence.
... Other work has evaluated container performance metrics such as run time, CPU usage, and network utilization. Felter et al. [11] compared CPU, memory, I/O, and network performance of Docker and KVM against bare-metal Linux. In most cases, Docker adds little overhead, and almost always outperforms KVM. ...
Preprint
Full-text available
Context: Virtual machines provide isolation of services at the cost of hypervisors and more resource usage. This spurred the growth of systems like Docker that enable single hosts to isolate several applications, similar to VMs, within a low-overhead abstraction called containers. Motivation: Although containers tout low overhead performance, do they still have low energy consumption? Methodology: This work statistically compares (t-test, Wilcoxon) the energy consumption of three application workloads in Docker and on bare-metal Linux. Results: In all cases, there was a statistically significant (t-test and Wilcoxon, p < 0.05) increase in energy consumption when running tests in Docker, mostly due to the performance of I/O system calls.
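The statistical comparison this abstract describes can be illustrated with a Welch's t-statistic in pure Python. The energy readings below are invented toy data, not the study's measurements:

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances).
    Positive when sample_a has the higher mean."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (ma - mb) / se

# Hypothetical energy readings (joules) for the same workload,
# standing in for the study's repeated measurements.
docker_j = [105.2, 106.1, 104.8, 107.0, 105.9]
bare_j = [100.1, 99.8, 100.5, 101.2, 100.0]

t = welch_t(docker_j, bare_j)
print(round(t, 2))  # positive: Docker consumed more energy in this toy data
```

A real replication would also compute the p-value (the study reports p < 0.05 for both the t-test and Wilcoxon) and use the non-parametric Wilcoxon test when normality is doubtful.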
... However, most containers that run in the cloud are too complicated to trust. The primary source of security problems in containers is system calls that are not namespace-aware [3]. A non-namespace-aware system call interface allows an adversary who compromises an untrusted container to exploit kernel vulnerabilities, elevate privileges, bypass access control policy enforcement, and escape isolation mechanisms. ...
Preprint
A container is a group of processes isolated from other groups via distinct kernel namespaces and resource allocation quotas. Attacks against containers often leverage kernel exploits through the system call interface. In this paper, we present an approach that mines sandboxes for containers. We first explore the behaviors of a container by leveraging automatic testing, and extract the set of system calls accessed during testing. This set of system calls then serves as a sandbox for the container. The mined sandbox restricts the container's access to system calls not seen during testing and thus reduces the attack surface. In our experiments, the approach requires less than eleven minutes to mine a sandbox for each of the containers. Enforcement of the mined sandboxes does not impact the regular functionality of a container and incurs low performance overhead.
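The mined-sandbox idea can be sketched in a few lines: union the system calls observed across test runs, then emit a default-deny allowlist. The traces are hypothetical, and the profile keys follow Docker's seccomp profile format (the paper's own tooling may differ):

```python
def mine_sandbox(traces):
    """Union the system calls observed across automated test runs."""
    allowed = set()
    for trace in traces:
        allowed |= set(trace)
    return allowed

def seccomp_profile(allowed):
    """Render the mined allowlist as a minimal seccomp-style structure:
    deny everything by default, permit only the observed syscalls."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [{"names": sorted(allowed), "action": "SCMP_ACT_ALLOW"}],
    }

# Hypothetical syscall traces from two automated test runs of a container.
traces = [
    ["read", "write", "openat", "close"],
    ["read", "write", "mmap", "close"],
]
profile = seccomp_profile(mine_sandbox(traces))
print(len(profile["syscalls"][0]["names"]))  # 5 distinct syscalls allowed
```

Anything not exercised during testing (here, e.g., ptrace or mount) would be rejected at runtime, which is exactly how the mined sandbox shrinks the kernel attack surface.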
... We have run a couple of compute-intensive workloads to measure computation speed on Docker containers as well as on native platforms. As the results in 3 show, the computation speed of containers is within 1% of native-platform performance [22][23]. To ease container creation, we provide a Dockerfile that automates building the container for the server-side part of MAMoC. ...
Preprint
This paper presents MAMoC, a framework which brings together a diverse range of infrastructure types including mobile devices, cloudlets, and remote cloud resources under one unified API. MAMoC allows mobile applications to leverage the power of multiple offloading destinations. MAMoC's intelligent offloading decision engine adapts to the contextual changes in this heterogeneous environment, in order to reduce the overall runtime for both single-site and multi-site offloading scenarios. MAMoC is evaluated through a set of offloading experiments, which evaluate the performance of our offloading decision engine. The results show that offloading computation using our framework can reduce the overall task completion time for both single-site and multi-site offloading scenarios.
... The closest work to ours is the CPU-oriented study [15] and the IBM research report [16] on the performance comparison of VM and Linux containers. However, both studies are incomplete (e.g., the former was not concerned with the non-CPU features, and the latter did not finish the container's network evaluation). ...
Preprint
The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Along with the recent popularity of Docker, container-based virtualization has started receiving more attention as a promising alternative. Since neither virtualization solution is resource-free, their performance overheads lead to negative impacts on the quality of Cloud services. To help fundamentally understand the performance difference between these two types of virtualization solutions, we use a physical machine with "just-enough" resources as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to related work, our evaluation results show that virtualization's performance overhead can vary not only on a feature-by-feature basis but also on a job-to-job basis. Although the container-based solution is undoubtedly lightweight, the hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed.
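The feature-by-feature variation this abstract describes can be illustrated with a small relative-slowdown calculation. All scores below are placeholders chosen only to mirror the paper's qualitative point, not its measured data:

```python
def overhead_pct(baseline, virtualized):
    """Relative slowdown (%) of a virtualized measurement vs. bare metal,
    for throughput-style metrics where higher is better."""
    return (baseline - virtualized) / baseline * 100.0

# Hypothetical per-feature throughput scores (physical machine = 100).
baseline = {"cpu": 100.0, "memory": 100.0, "disk": 100.0, "network": 100.0}
docker = {"cpu": 99.0, "memory": 98.5, "disk": 88.0, "network": 95.0}
vm = {"cpu": 97.0, "memory": 96.0, "disk": 91.0, "network": 90.0}

for feature in baseline:
    d = overhead_pct(baseline[feature], docker[feature])
    v = overhead_pct(baseline[feature], vm[feature])
    print(f"{feature}: docker {d:.1f}% vs vm {v:.1f}%")
# Note the disk row: with these placeholder numbers the VM beats the
# container, mirroring the paper's point that "lightweight" does not
# mean lower overhead in every case.
```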
... Popular virtualization technologies such as virtual machine, or even containers, may be too heavy-weight for edge devices. These need a relatively significant amount of hardware resources to execute, e.g., VMWare's ESX hypervisor needs a recommended 8GB of RAM, while a Docker container with NAT enabled doubles the latency of a UDP stream [5] (see Figure 3). Thus the challenge will be to find a lighter weight solution for multi-tenancy, possibly at the expense of reducing the isolation among the different applications and limiting the total number of applications. ...
Preprint
Edge computing is the practice of placing computing resources at the edges of the Internet in close proximity to devices and information sources. This, much like a cache on a CPU, increases bandwidth and reduces latency for applications but at a potential cost of dependability and capacity. This is because these edge devices are often not as well maintained, dependable, powerful, or robust as centralized server-class cloud resources. This article explores dependability and deployment challenges in the field of edge computing, what aspects are solvable with today's technology, and what aspects call for new solutions. The first issue addressed is failures, both hard (crash, hang, etc.) and soft (performance-related), and real-time constraint violation. In this domain, edge computing bolsters real-time system capacity through reduced end-to-end latency. However, much like cache misses, overloaded or malfunctioning edge computers can drive latency beyond tolerable limits. Second, decentralized management and device tampering can lead to chain of trust and security or privacy violations. Authentication, access control, and distributed intrusion detection techniques have to be extended from current cloud deployments and need to be customized for the edge ecosystem. The third issue deals with handling multi-tenancy in the typically resource-constrained edge devices and the need for standardization to allow for interoperability across vendor products. We explore the key challenges in each of these three broad issues as they relate to dependability of edge computing and then hypothesize about promising avenues of work in this area.
... Considering the performance within Docker, only I/O operations and network access have a measurable overhead [66]. Operations that only depend on memory and CPU do not see any performance penalty, as these operations are not virtualized. ...
Preprint
In modern computer systems, user processes are isolated from each other by the operating system and the hardware. Additionally, in a cloud scenario it is crucial that the hypervisor isolates tenants from other tenants that are co-located on the same physical machine. However, the hypervisor does not protect tenants against the cloud provider and thus the supplied operating system and hardware. Intel SGX provides a mechanism that addresses this scenario. It aims at protecting user-level software from attacks from other processes, the operating system, and even physical attackers. In this paper, we demonstrate fine-grained software-based side-channel attacks from a malicious SGX enclave targeting co-located enclaves. Our attack is the first malware running on real SGX hardware, abusing SGX protection features to conceal itself. Furthermore, we demonstrate our attack both in a native environment and across multiple Docker containers. We perform a Prime+Probe cache side-channel attack on a co-located SGX enclave running an up-to-date RSA implementation that uses a constant-time multiplication primitive. The attack works although in SGX enclaves there are no timers, no large pages, no physical addresses, and no shared memory. In a semi-synchronous attack, we extract 96% of an RSA private key from a single trace. We extract the full RSA private key in an automated attack from 11 traces within 5 minutes.
... The following are some of the negative aspects of Docker containers [21][22]: ...
Article
Full-text available
Docker, developed and released in 2013 by Docker Inc. under the Apache 2.0 license, is an open-source container engine. Due to their significance in infrastructure virtualisation, containers hold a special position in the annals of computing. This paper provides a complete discussion of Docker technology and how it has been applied in the current software development cycle, specifically in containerisation, to arrive at a highly efficient, repeatable, and portable application deployment method. Docker is an application container platform that packs code and dependencies together to run everywhere at scale. This capability is critical in DevOps and CI/CD settings to cut down on deployment problems while consuming less of the software development life cycle. The Docker essentials (the Docker engine, the Docker client-server model, Docker images, and Docker containers) and the role of these components in constructing applications inside isolated containers are explained. Docker's utility for scalability is then examined through frameworks such as Docker Swarm and Kubernetes, which address horizontal scaling, load balancing, and microservices. The paper also revisits basic Docker tools and practices such as Docker Compose and CI/CD for easier management of containers. A literature review addresses current research, emphasising gaps in Docker-based systems related to cloud environments and security. Finally, this paper provides an outlook on future work aimed at improving the use of Docker in large-scale distributed systems.
... Numerous scientific publications highlight the obvious advantage of a lightweight container solution over virtual machines. This suggests that, particularly in the DaaS context, a container-based approach is preferable to a virtualized one [5,9]. However, not every application can run in a container-based environment, which is why the development of DESIGN accounted for operating applications both in containers and in VMs, in order to cover as broad a range of potential application scenarios as possible. ...
Article
Full-text available
Abstract: The novel, free solution DESIGN implements a modern Desktop-as-a-Service (DaaS) that enables the operation of unmodified Linux/Windows applications and, by dispensing with any special client software, requires no changes to the local client. Interaction with DESIGN and use of the integrated applications take place entirely in the browser. This article presents the DaaS's modern architecture and usage concept, describes in detail the integration of various features such as persistent data storage, printing, and sound, and explains their characteristic performance aspects.
Article
The growth of IoT devices has generated an increasing demand for effective, agile, and scalable deployment frameworks. Traditional IoT architectures are generally strained by interoperability, real-time responsiveness, and resource optimization due to inherent complexity in managing heterogeneous devices and large-scale deployments. While containerization and dynamic API frameworks are seen as solutions, current methodologies are founded primarily on static API architectures that cannot be adapted in real time with evolving data structures and communication needs. Dynamic routing has been explored, but current solutions lack database schema flexibility and endpoint management. This work presents a Dockerized framework that integrates Dynamic RESTful APIs with containerization to achieve maximum flexibility and performance in IoT configurations. With the use of FastAPI for asynchronous processing, the framework dynamically scales API schemas as per real-time conditions, achieving maximum device interaction efficiency. Docker provides guaranteed consistent, portable deployment across different environments. An emulated IoT environment was used to measure significant performance parameters, including functionality, throughput, response time, and scalability. The evaluation shows that the framework maintains high throughput, with an error rate of 3.11% under heavy loads and negligible latency across varying traffic conditions, ensuring fast response times without compromising system integrity. The framework demonstrates significant advantages in IoT scenarios requiring the addition of new parameters or I/O components where dynamic endpoint generation enables immediate monitoring without core application changes. Architectural decisions involving RESTful paradigms, microservices, and containerization are also discussed in this paper to ensure enhanced flexibility, modularity, and performance. 
The findings provide a valuable addition to dynamic IoT API framework design, illustrating how dynamic, Dockerized RESTful APIs can improve the efficiency and flexibility of IoT systems.
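The dynamic endpoint generation described above can be sketched framework-agnostically. The paper uses FastAPI; the toy router below merely illustrates runtime route registration from a schema, with all names and payloads hypothetical:

```python
# Framework-agnostic sketch of dynamic endpoint registration: routes are
# created at runtime from a schema description rather than hard-coded.

class DynamicRouter:
    def __init__(self):
        self._routes = {}

    def register(self, method, path, handler):
        """Add or replace an endpoint at runtime."""
        self._routes[(method.upper(), path)] = handler

    def dispatch(self, method, path, payload=None):
        handler = self._routes.get((method.upper(), path))
        if handler is None:
            return 404, {"error": "no such endpoint"}
        return 200, handler(payload)

router = DynamicRouter()

# A new sensor type appears; its endpoint is generated on the fly from
# a schema, so the core application never needs to change.
schema = {"name": "temperature", "fields": ["value", "unit"]}
router.register(
    "POST",
    f"/sensors/{schema['name']}",
    lambda payload: {k: payload.get(k) for k in schema["fields"]},
)

status, body = router.dispatch(
    "POST", "/sensors/temperature",
    {"value": 21.5, "unit": "C", "extra": "ignored"},
)
print(status, body)  # 200 {'value': 21.5, 'unit': 'C'}
```

In a real FastAPI deployment the same idea maps onto registering path operations and Pydantic models at runtime; the sketch only shows why dynamic registration lets new sensors be monitored without touching the core application.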
Chapter
Software businesses are continuously increasing their presence in the cloud. While cloud computing is not a new research topic, designing software for the cloud is still challenging, requiring engineers to invest in research to become proficient at working with it. Design patterns can be used to facilitate cloud adoption, as they provide valuable design knowledge and implementation guidelines for recurrent engineering problems. This work introduces a pattern language for designing software for the cloud. We believe developers can significantly reduce their R&D time by adopting these patterns to bootstrap their cloud architecture. The language comprises 10 patterns, organized into four categories: Automated Infrastructure Management, Orchestration and Supervision, Monitoring, and Discovery and Communication.
Article
The runtime verification of multi-domain software applications implementing the behaviors of modern robots is a challenging task. On the one hand, assertion-based verification (ABV) has shown great potential to check the correctness of complex systems at runtime. On the other hand, the computational overhead introduced by runtime ABV can be substantial, variable, and non-deterministic. As a consequence, applying accurate ABV at runtime to autonomous robots, which are often characterized by resource-constrained computing architectures, can lead to severe slowdowns of the software execution and failures of temporal constraints, thus compromising the overall system's correctness. We address this challenge by proposing a platform for runtime ABV that implements monitor synthesis from signal temporal logic assertions and dynamic monitor migration across edge devices and the cloud. The synthesized monitors are wrapped into ROS-compliant nodes and connected to the system under verification. The overall ABV framework and the related migration mechanism are then containerized with Docker for both edge and cloud computing. To evaluate the proposed platform, we present the results obtained with a set of synthetic benchmarks and with an industrial case study, which implements the mission of a Robotnik RB-Kairos mobile robot in a smart manufacturing production line. Note to Practitioners. This paper was motivated by the need for accurate runtime verification of robotic systems software. Verification and validation of intelligent systems are often incomplete, as they cannot anticipate all potential scenarios, including errors or unexpected events. On top of this, assertion-based verification can also be resource-intensive; therefore, careful use of resources is required to avoid overloading the robot's computational resources with the monitors. To achieve this, we used signal temporal logic, a widely accepted solution for monitoring robotic and distributed applications.
The main contribution of this work is a framework that can automatically synthesize the monitors that interface with the Robot Operating System (ROS) and also the capability of optimizing the end-to-end latency of verification at runtime by exploiting a distributed computing architecture (i.e., edge-cloud). In future work, we will address not only the minimization of end-to-end latency but also the timing upper bound of monitors to achieve runtime deterministic verification.
Article
Full-text available
Virtualization plays a crucial role in enhancing the efficiency of cloud computing by enabling resource optimization, scalability, and flexibility. This paper examines the impact of virtualization technologies on cloud computing efficiency, focusing on how they enable effective resource management, improve hardware utilization, and reduce operational costs. Key virtualization techniques, such as server, network, and storage virtualization, are discussed in terms of their contribution to cloud infrastructure. The paper also highlights the challenges associated with virtualization, including security vulnerabilities, resource contention, and performance overhead. Through a comparative analysis of various virtualization solutions, such as hypervisors and container-based approaches, this study provides insights into how virtualization influences the overall performance and cost-effectiveness of cloud computing environments. The findings offer valuable recommendations for optimizing cloud efficiency through the adoption of the most appropriate virtualization technologies based on organizational needs and workloads.
Article
Docker has revolutionized application deployment by providing lightweight, scalable, and efficient containerization, but its shared kernel architecture introduces challenges in performance and security management. This study explores performance optimization techniques for Docker-based workloads, emphasizing resource management, orchestration, and hybrid deployment models. Using experimental benchmarks and case studies, the research evaluates the effectiveness of tools like cgroups, namespace isolation, Kubernetes, and security integrations such as ZAP and OWASP Dependency Check. Results demonstrate that resource isolation and orchestration optimizations significantly reduce CPU and memory contention, improving workload predictability and scalability. Kubernetes' horizontal autoscaling enhances responsiveness under high-traffic conditions, though proactive scaling strategies such as pre-scaling pods further minimize latency. Hybrid architectures, including Docker within VMs and microVM solutions like Kata Containers, offer strong isolation without excessive performance penalties, making them ideal for high-security applications. However, challenges in container networking and the overhead of security tools highlight the need for adaptive resource allocation and workload-specific optimizations. Future research directions include leveraging AI-driven resource management, Zero Trust security architectures, and confidential computing to address the growing complexity of containerized environments. This study contributes actionable insights for developers, DevOps engineers, and researchers seeking to enhance the performance, scalability, and security of Docker deployments.
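The cgroups-based resource isolation discussed above maps onto standard docker run limit flags. A minimal sketch with placeholder values (container name, image, and limits are all illustrative):

```shell
# Illustrative resource limits for a Docker workload; flag names are
# standard docker run options, values are placeholders to adapt.
# --cpus:        cgroup CPU quota (here: at most 1.5 CPUs)
# --memory:      hard memory limit enforced by the memory cgroup
# --memory-swap: set equal to --memory to disable swap entirely
# --pids-limit:  cap the number of processes inside the container
docker run -d --name web \
  --cpus="1.5" \
  --memory="512m" \
  --memory-swap="512m" \
  --pids-limit=200 \
  nginx:alpine
```

Limits like these reduce the CPU and memory contention the study measures, at the cost of capping burst capacity for the constrained container.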
Article
Cloud computing has revolutionized modern IT infrastructure, with virtualization technologies playing a crucial role in efficient resource utilization and deployment. Docker containers and Virtual Machines (VMs) are two dominant virtualization approaches, each presenting unique security implications. While Docker enhances agility, scalability, and resource efficiency, its shared kernel model introduces security risks, such as kernel exploits, container escape attacks, and privilege escalation vulnerabilities. In contrast, VMs offer robust isolation, mitigating risks through complete OS separation but at the cost of increased resource consumption and management complexity. This study presents a comprehensive analysis of security concerns associated with Docker and VMs, highlighting their attack vectors, vulnerabilities, and best practices for mitigating risks. Security architecture considerations, including hypervisor hardening, namespace isolation, and role-based access control, are explored. Additionally, hybrid approaches, such as running Docker inside VMs, are discussed as viable solutions to balance performance and security. Findings from this research emphasize the importance of tailored security strategies based on operational requirements. High-security industries, such as healthcare and financial services, may favor VMs due to their strong isolation properties, while development and CI/CD pipelines benefit from Docker's efficiency. Security automation, AI-driven anomaly detection, and confidential computing are identified as key areas for future advancements in securing virtualized workloads. By understanding the risks and implementing robust security frameworks, organizations can optimize both security and performance in their cloud infrastructure.
Article
Full-text available
The proliferation of microservices architectures in cloud environments necessitates robust security mechanisms to ensure the integrity and confidentiality of inter-service communication. Traditional security methods, such as centralized attestation and encryption, often struggle to scale effectively in multi-cloud deployments. Federated attestation presents a promising solution by enabling distributed trust models across different cloud providers, allowing microservices to authenticate and verify each other's integrity without relying on a centralized authority. In a multi-cloud environment, where services may span across various cloud platforms with different security infrastructures, federated attestation facilitates secure and seamless communication between microservices, ensuring that each service instance is genuine and trustworthy. This paper explores the use of federated attestation protocols in securing microservice interactions in multi-cloud ecosystems. The approach leverages decentralized trust anchors to attest to the authenticity of microservices, enhancing resistance to man-in-the-middle attacks and ensuring compliance with security policies. We also investigate the performance overhead and scalability of federated attestation in real-world multi-cloud environments. Our findings highlight the effectiveness of federated attestation in securing microservice communication while minimizing latency and resource consumption, even in complex multi-cloud deployments. This approach provides a scalable solution to one of the key challenges in modern cloud-native architectures: maintaining trust and security across diverse, distributed environments.
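The core idea — each cloud holds its own trust anchor, and peers verify evidence against the anchor of the issuing cloud rather than a central authority — can be sketched as a toy protocol. All names below are hypothetical and the scheme is deliberately simplified (production attestation uses hardware roots of trust and certificate-based identities, e.g. TPM quotes or SPIFFE IDs, not shared HMAC keys):

```python
import hashlib
import hmac

# Hypothetical per-cloud trust anchors: each provider attests with its
# own key, so no single centralized authority is required.
TRUST_ANCHORS = {
    "cloud-a": b"cloud-a-attestation-key",
    "cloud-b": b"cloud-b-attestation-key",
}

def attest(cloud: str, image_bytes: bytes) -> str:
    """The launching provider signs a digest of the service image."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(TRUST_ANCHORS[cloud], digest, "sha256").hexdigest()

def verify(cloud: str, image_bytes: bytes, evidence: str) -> bool:
    """A peer recomputes the MAC against the issuing cloud's anchor."""
    expected = attest(cloud, image_bytes)
    return hmac.compare_digest(expected, evidence)

service_image = b"billing-service:v1"
evidence = attest("cloud-a", service_image)
ok = verify("cloud-a", service_image, evidence)                   # genuine
tampered = verify("cloud-a", b"billing-service:evil", evidence)   # rejected
```

Because verification binds evidence to a specific anchor and image digest, a man-in-the-middle cannot substitute a tampered service without invalidating the attestation — the property the paper's federated model extends across providers.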
Article
Full-text available
Enterprise application design, deployment, and scaling continue to advance as microservices integration accompanies cloud computing adoption. This analysis covers how the relationship between microservices architecture and cloud computing creates agile, scalable, and resilient enterprise infrastructure for modern business operations. Microservices, with their modular, decentralized approach to application development, thrive alongside cloud computing's flexible platforms, which provide automatic scaling of resources. The analysis presents the fundamental advantages of integrating microservices architecture with cloud computing, including stronger fault tolerance, shorter development timelines, and an improved ability to create new solutions. At the same time, adopting these technologies introduces management challenges around distributed infrastructure, data reliability, and additional operational costs. Proven methods for addressing these challenges through automation, containerization, and robust system monitoring practices are demonstrated. The work also explores future directions for microservices and cloud methodologies and examines the influence of upcoming technologies, including artificial intelligence and edge computing. Through its detailed examination of these technological relationships, the paper shows how the two systems jointly deliver efficiency gains, adaptability, and competitive advantage. The union of microservices and cloud computing has transformed application design by enabling faster market adaptation and quicker reaction to technological innovation.
Thesis
Full-text available
Server virtualization and container orchestration are fundamental for creating efficient and scalable IT infrastructures. This Master’s Thesis (TFM) focuses on the creation of a virtual infrastructure applied to an online sales environment. For this, a Dell R710 server has been used, configuring several virtual machines. The project is not limited to virtualization: a Docker Swarm has also been developed to efficiently manage and orchestrate containers, integrating GlusterFS to optimize this process. Throughout the work, essential tools such as Portainer, Authentik, Redis, Postgres, Poste.io, MySQL, WordPress, Nginx Proxy Manager, and PhpMyAdmin have been used, each contributing robustness and functionality to the system. One of the most interesting challenges was the configuration of redirections with Nginx, allowing the management of domains and subdomains and connecting services hosted on other physical machines within the network. To enhance security and improve DNS management, Cloudflare was integrated between the DNS and the server. Additionally, Cockpit was installed on the Ubuntu machines to provide a user-friendly web management interface. The practical application of this infrastructure is materialized in the creation of an online sales environment, which includes email services and redirection to external providers, thus demonstrating its applicability in the real world. This TFM has been an enriching experience, combining specific technologies and tools to build a complete and functional system capable of supporting an e-commerce environment. This demonstrates the versatility and potential of virtualization and container orchestration.
Article
Full-text available
Novel software architecture patterns, including microservices, have surfaced in the last ten years to increase the modularity of applications and to simplify their development, testing, scaling, and component replacement. In response to these emerging trends, new approaches such as DevOps methods and technologies have arisen to facilitate automation and monitoring across the whole software construction lifecycle, fostering improved collaboration between software development and operations teams. The resource management (RM) strategies of Kubernetes and Docker Swarm, two well-known container orchestration technologies, are compared in this article. The main distinctions in RM, scheduling, and scalability are examined, with an emphasis on Kubernetes' flexibility and granularity in contrast to Docker Swarm's simplicity and ease of use. A case study is presented comparing the performance of the two orchestrators on a Web application built using the microservices architecture. By increasing the number of users, we compare how well Docker Swarm and Kubernetes perform under stress. This study aims to provide academics and practitioners with an understanding of how well Docker Swarm and Kubernetes function in systems built using the suggested microservice architecture. The authors' Web application is a kind of loyalty program, meaning that it offers a free item upon reaching a certain quantity of purchases. According to the study's findings, Docker Swarm outperforms Kubernetes in terms of efficiency as user counts rise.
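The horizontal autoscaling behavior compared in this study follows a documented control rule in Kubernetes: the HorizontalPodAutoscaler computes desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to configured bounds. A minimal sketch of that rule (illustrative only, not the orchestrators' actual code; the min/max parameters mirror HPA's `minReplicas`/`maxReplicas` settings):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Core Kubernetes HPA rule:
    desired = ceil(current * currentMetric / targetMetric), clamped
    to the [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target scale out to 6 pods;
# 6 pods averaging 30% CPU against the same target scale in to 3.
scale_out = desired_replicas(4, current_metric=90.0, target_metric=60.0)
scale_in = desired_replicas(6, current_metric=30.0, target_metric=60.0)
```

Docker Swarm, by contrast, has no built-in metric-driven autoscaler — replica counts are set explicitly via `docker service scale` — which is one reason its resource management is simpler but less granular.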
Preprint
Online judges are systems designed for the reliable evaluation of algorithm source code submitted by users, which is then compiled and tested in a homogeneous environment. Online judges are becoming popular in various applications, so we review the state of the art for these systems. We classify them according to their principal objectives into systems supporting the organization of competitive programming contests, systems enhancing education and recruitment processes, systems facilitating the solving of data mining challenges, online compilers, and development platforms integrated as components of other custom systems. Moreover, we introduce a formal definition of an online judge system and summarize the common evaluation methodology supported by such systems. Finally, we briefly discuss the Optil.io platform as an example of an online judge system, which has been proposed for the solving of complex optimization problems, and we analyze the results of a competition conducted using this platform. The competition proved that online judge systems, strengthened by crowdsourcing concepts, can be successfully applied to accurately and efficiently solve complex industrial- and science-driven challenges.
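The common evaluation methodology the survey summarizes — run each submission in a controlled environment, enforce a time limit, and compare its output with the expected output per test case — can be sketched as follows. This is our own minimal illustration, not code from the paper; real judges add sandboxing, memory limits, and a compilation step:

```python
import subprocess
import sys

def judge(source: str, test_cases, time_limit: float = 2.0) -> str:
    """Minimal online-judge verdict loop: run the submission on each
    test case in a fresh process and compare stdout with the expected
    output, returning the first failing verdict."""
    for stdin_data, expected in test_cases:
        try:
            proc = subprocess.run(
                [sys.executable, "-c", source],
                input=stdin_data, capture_output=True,
                text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if proc.returncode != 0:
            return "Runtime Error"
        if proc.stdout.strip() != expected.strip():
            return "Wrong Answer"
    return "Accepted"

# A toy submission: sum the space-separated integers on one input line.
submission = "print(sum(int(x) for x in input().split()))"
verdict = judge(submission, [("1 2 3", "6"), ("10 20", "30")])
```

Running every submission in an identical, resource-limited process is what gives online judges the homogeneous, reproducible evaluation the definition above requires.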