Figure 6 - uploaded by Jamal Zemerly
Hardware-assisted virtualization concepts 

Source publication
Article
Full-text available
Virtualization is an emerging technology that provides organizations with a wide range of benefits. Unfortunately, from a security standpoint, functionality often takes precedence over security, leaving it to be retrofitted later. This paper emphasizes several security threats that exist today in a virtualizati...

Context in source publication

Context 1
... this can be solved with the introduction of the second generation of hardware-assisted technologies. Figure 6 illustrates the hardware-assisted virtualization concepts [6]. ...

Similar publications

Article
Full-text available
The research aims to optimize a workflow of architecture documentation: starting from panoramic photos, tackling available instruments and technologies to propose an integrated, quick and low-cost solution of Virtual Architecture. The broader research background shows how to use spherical panoramic images for the architectural metric survey. The in...
Article
Full-text available
The research aims to optimize a workflow of architecture documentation: starting from panoramic photos and tackling available instruments and technologies to propose an integrated, quick, and low-cost solution of Virtual Architecture. The broader research background shows how to use spherical panoramic images for the architectural metric...

Citations

... The ever-expanding list of Common Vulnerabilities and Exposures (CVEs) is paramount to this research. These CVE vulnerability targets include the guest operating system (OS), hypervisor, and host OS [8]. Each includes its own vulnerabilities and problems; combining them creates a quickly expanding list of potential problems and issues to fix. ...
Preprint
Full-text available
With the increasing use of multi-cloud environments, security professionals face challenges in configuration, management, and integration due to uneven security capabilities and features among providers. As a result, a fragmented approach toward security has been observed, leading to new attack vectors and potential vulnerabilities. Other research has focused on single-cloud platforms or specific applications of multi-cloud environments. Therefore, there is a need for a holistic security and vulnerability assessment and defense strategy that applies to multi-cloud platforms. We perform a risk and vulnerability analysis to identify attack vectors from software, hardware, and the network, as well as interoperability security issues in multi-cloud environments. Applying the STRIDE and DREAD threat modeling methods, we present an analysis of the ecosystem across six attack vectors: cloud architecture, APIs, authentication, automation, management differences, and cybersecurity legislation. We quantitatively determine and rank the threats in multi-cloud environments and suggest mitigation strategies.
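The quantitative ranking described in the abstract above can be sketched as a simple DREAD scoring routine. The threat names and ratings below are hypothetical placeholders, not the paper's actual data; classic DREAD averages five ratings (Damage, Reproducibility, Exploitability, Affected users, Discoverability):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    damage: int           # D: potential damage (1-10)
    reproducibility: int  # R: how reliably the attack works
    exploitability: int   # E: effort/skill needed
    affected_users: int   # A: blast radius
    discoverability: int  # D: how easily the flaw is found

    def dread_score(self) -> float:
        # Classic DREAD: arithmetic mean of the five ratings
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Hypothetical ratings for three of the six attack vectors named above
threats = [
    Threat("API abuse", 8, 7, 6, 8, 7),
    Threat("Authentication weaknesses", 9, 6, 5, 9, 6),
    Threat("Management differences", 6, 5, 4, 5, 6),
]

ranked = sorted(threats, key=lambda t: t.dread_score(), reverse=True)
for t in ranked:
    print(f"{t.name}: {t.dread_score():.1f}")
```

Higher scores rank first; the same routine extends to all six vectors once real ratings are assigned.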
... Initially a technology that enabled computing enthusiasts to run multiple operating systems concurrently on a single PC, hardware virtualisation has evolved into an indispensable tool for gaining improved business efficiencies. As noted by Bose [14] and Bazargan [28], its use can lead to various benefits, including: ...
... Additionally, the reduction of physical servers leads to lower power, cooling, and space costs, as well as lower operating expenses [14]. As noted by Bazargan [28], the primary goal of virtualisation is to combine and unify workloads on fewer physical platforms that can sustain their demand for computing resources, such as CPU, memory, and I/O. ...
... High availability (HA) is achieved by allowing a VM to be restarted on a new server if the server running the VM fails [14]. According to Bazargan [28], a system fault on one VM or its partition will have no effect on other partitions running on the same physical platform. ...
Book
In this thesis, an approach for evaluating the performance of web hosting solutions is developed to facilitate the comparison of offerings from different providers and aid companies in choosing a web hosting service. The focus of the research is on identifying and investigating methodologies for collecting reproducible performance metrics within the constraints of common virtualized web hosting environments, using only PHP and MySQL functions. A comprehensive review of existing literature and research in web development and web hosting provides the theoretical basis for benchmarking web servers and measuring web performance. Industry expert interviews supplement this approach with practical experience and best practices. The analysis of commonly used benchmarks identified in the interviews showed that well-established benchmarks and web server load testing tools were infeasible due to the lack of command line interface access in shared and managed hosting environments. Benchmarks relying solely on PHP and MySQL functions appear promising, but they do not account for all relevant aspects and cannot incorporate external factors. External load testing tools and page speed tools were found to cover aspects not considered by server-side benchmarks, such as caching mechanisms and networking. The research also highlights the need to evaluate website loading speed and back-end administrative interface performance separately, as they are both relevant to customers but are affected differently by caching mechanisms. The integration of these data sources is recommended for a comprehensive and meaningful analysis. However, different user groups may attach different importance to certain aspects of hosting, depending on their intended use case. Therefore, a weighted evaluation of different benchmark data for different use cases is essential, leading to the development of a multi-layered approach to benchmarking WordPress-specific hosting environments. 
This thesis proposes to evaluate the performance of web hosting solutions by utilizing synthetic workloads across multiple metrics to simulate real-world scenarios for improved repeatability and consistency in performance measurements. The proposed approach leads to a thorough understanding of the performance characteristics of web hosting solutions, which can be applied, evaluated, and further developed in practical settings.
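The weighted, use-case-specific evaluation the thesis proposes can be illustrated with a small aggregation sketch. The metric names, scores, and weight profiles below are invented for illustration, not taken from the thesis:

```python
# Hypothetical per-metric scores (0-100, higher is better) for one hosting offer
scores = {"server_side": 72.0, "page_speed": 85.0, "backend_admin": 60.0}

# Use-case weight profiles (each sums to 1.0); values are illustrative only
profiles = {
    "content_blog": {"server_side": 0.2, "page_speed": 0.6, "backend_admin": 0.2},
    "woocommerce":  {"server_side": 0.4, "page_speed": 0.3, "backend_admin": 0.3},
}

def weighted_score(metric_scores, weights):
    """Collapse per-metric scores into one number for a given use case."""
    return sum(metric_scores[m] * w for m, w in weights.items())

for use_case, weights in profiles.items():
    print(f"{use_case}: {weighted_score(scores, weights):.1f}")
```

Different profiles reorder the same raw data, which is exactly why the thesis argues for per-use-case weighting rather than a single composite score.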
... (1999) and Amazon Web Services (2002). Virtualization, an essential element of cloud service approaches like Infrastructure as a Service (IaaS), was first seen in 2006 with Amazon's EC2 [4]. A VMM, or hypervisor, is a software component that maintains the accessibility of host resources to a group of VMs. ...
... (3) Host to VMM: these intruders control VMMs through attacks on hosts, where most of them may be privileged users. (4) Guest to Guest: guest VM attacks can disrupt VMM aggregations and extract information or resources of other guest VMs by controlling the VMM [7]. (5) Guest to Self and Host to Self: these operations escalate privileges under similar circumstances using guest VMs or host OSs. ...
... Support for wireless networks in Mininet, particularly the IEEE 802.11 (Wi-Fi) standard, opens up a wide range of experimental platforms, especially for exploring Software-Defined Wireless Networks (SDWN). An integration of Wi-Fi and Mininet [29] is provided with the support of ns-3 for simulating components in a real-time environment, such as virtual machines or test beds. In particular, it uses IEEE 802.11 as the physical and MAC layers in ns-3 in order to offer wireless link emulation with Mininet; this technique is called Mininet-ns3-Wifi. ...
Article
Full-text available
SDN enables a new networking paradigm expected to improve system efficiency, in which complex networks are easily managed and controlled. SDN allows network virtualization and advanced programmability for customizing the behaviour of networking devices with user-defined features, even at run time. SDN separates the network control and data planes: routing is eliminated from forwarding elements (switches) and the routing logic is shifted into a centralized module named the SDN Controller, enabling intelligently controlled network management and operation. Mininet is a Linux-based network emulator, cost-effective for implementing SDN, with built-in support for OpenFlow switches. This paper presents a practical implementation of Mininet with ns-3 using Wi-Fi. Previous results reported in the literature were limited to 512 nodes in Mininet. Tests are conducted in Mininet by varying the number of nodes in two distinct scenarios based on the scalability and resource capabilities of the host system. We present a low-cost and reliable method allowing scalability with authenticity of results in a real-time environment. Simulation results show a marked improvement in the time required to create a topology: for 3 nodes, only 0.077 s with powerful resources and 4.512 s with limited resources; for 2047 nodes, 1623.547 s with powerful resources and 4615.115 s with less capable resources.
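To illustrate the kind of topology-creation timing the paper reports, here is a minimal stand-in harness. It builds a plain adjacency dict for a single-switch star topology instead of driving a real Mininet instance, so the absolute numbers are not comparable to the paper's; it only shows how per-node-count timings could be collected:

```python
import time

def build_star_topology(n_hosts):
    """Stand-in for topology creation: one switch s1 linked to n hosts.
    Returns an adjacency dict (node name -> set of neighbours)."""
    adj = {"s1": set()}
    for i in range(1, n_hosts + 1):
        host = f"h{i}"
        adj[host] = {"s1"}
        adj["s1"].add(host)
    return adj

def time_topology(n_hosts):
    """Time a single topology build, as the paper does per node count."""
    start = time.perf_counter()
    topo = build_star_topology(n_hosts)
    return topo, time.perf_counter() - start

for n in (3, 511, 2047):  # node counts echoing the paper's scenarios
    topo, secs = time_topology(n)
    print(f"{n} hosts: {len(topo) - 1} links, built in {secs:.6f} s")
```

With real Mininet, `build_star_topology` would be replaced by `Mininet(topo=...).start()`, and the same timing wrapper would capture the host-system-dependent creation times the abstract quotes.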
... There are 3 popular types of virtual servers for businesses to choose from: Full virtualization, Para-virtualization and OS-level virtualization [4]. ...
Preprint
Full-text available
The rapid development of the digital age has been pushing people toward a mobile working environment, as handsets become more diverse and convenient with the help of virtualization technology. The speed and usability of virtualization technology are astounding for saving initial investment costs and optimizing IT infrastructure. Such technology is what businesses are interested in, and it makes the virtual server market grow strongly, especially for businesses with many branches. However, virtual systems (hypervisors) are more vulnerable than traditional servers due to many network attacks from curious users. Therefore, it is necessary to prepare for the worst circumstances, understand clearly, and research new threats that can break down a virtual system. In this paper, we demonstrate the TCP ACK storm based DoS (Denial of Service) attack on virtual and Docker networks to show the threats that can easily happen to services deployed on virtual networks. Based on these consequences, we propose some solutions to protect a virtual system from potential risks.
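The TCP ACK storm the preprint demonstrates can be sketched as a toy state model: after an attacker injects spoofed data, each side considers the other's pure ACKs unacceptable and answers with another ACK, looping until packet loss. The sequence numbers below are arbitrary, and the model abstracts away real TCP state handling:

```python
def simulate_ack_storm(loss_after=1000):
    """Toy model of a TCP ACK storm after data injection.

    a_snd is A's actual send-sequence number; b_rcv is what B believes it
    has received from A. An injected segment spoofed as A advances b_rcv
    past a_snd. Pure ACKs carry no data, so the state never
    resynchronizes: every 'unacceptable' ACK triggers an ACK in reply,
    until packet loss (modelled by loss_after) breaks the loop.
    """
    a_snd = 1000           # A's actual send-sequence number
    b_rcv = a_snd + 64     # B's view after the attacker injects 64 spoofed bytes
    acks = 0
    turn = "B"             # B ACKs the injected data first
    while acks < loss_after:
        if turn == "B":
            if b_rcv == a_snd:   # ACK acceptable to A: storm would end
                break
            turn = "A"
        else:
            if a_snd == b_rcv:   # ACK acceptable to B: storm would end
                break
            turn = "B"
        acks += 1                # one more pure ACK on the wire
    return acks

print(simulate_ack_storm(10))    # loops until the loss cap: 10 ACKs
```

The loop never reaches either `break`, which mirrors why the real attack saturates virtual networks: only dropping a packet ends the exchange.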
... Moreover: check the security and integrity of the hypervisor; harden and secure the guest OS; isolate and secure the virtualized network; achieve real-time zero-day malicious-activity detection; put other controls and security policies in place; and restore guest VMs to a clean state automatically [8]. ...
... Despite the various state-of-the-art methodologies for preventing malware threats, these attacks are still increasing not only in number but also in sophistication; there is therefore a need to mitigate this dangerous trend, which violates privacy [9] and security [10] and leads to data theft [11]. This study aims at developing a simple but effective model to expose the compromised connection ports used by hackers after dropping their malicious code on a vulnerable system [12]. ...
Chapter
Malware is malicious code that tends to take control of a system remotely. The authors of these codes drop their malicious payload onto a vulnerable system and continue to maintain access to it at will. In order to unravel and establish the ability of rootkits to hide a system's network interfaces, we developed a network model and implemented it on four notable live rootkits. Our results show the ability of all four rootkits to hide the system network interfaces that the attackers use to gain access to and communicate with the compromised system.
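A common way to operationalize the chapter's finding is a cross-view diff: enumerate interfaces through a trusted channel and through the (potentially hooked) standard API, and flag the difference. The snapshots below are hypothetical; a real detector would gather both views on the live system:

```python
def cross_view_diff(trusted_view, api_view):
    """Interfaces present in the trusted enumeration but missing from the
    (potentially hooked) API enumeration are hiding candidates."""
    return sorted(set(trusted_view) - set(api_view))

# Hypothetical snapshots: a raw/trusted enumeration vs. what a hooked API reports
trusted = ["lo", "eth0", "eth1", "tun0"]
reported = ["lo", "eth0"]

hidden = cross_view_diff(trusted, reported)
print(hidden)  # ['eth1', 'tun0']
```

An empty result does not prove a clean system, since a rootkit may hook both views; it only rules out the simplest API-level hiding.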
... As stated in [4], [3], [ ...
... In lucid form, virtualization is the underlying technology that lies between the physical platform or machine and the host Operating System (OS); that is, it is the interface between the underlying hardware and the OS. This includes the guest OS or Virtual Machine Monitor (VMM), together with the associated applications running on top of it, which produce the software abstraction layer (SAL) [4]. The Virtualization Special Interest Group defines virtualization as the logical abstraction of computing resources from physical constraints [5]. ...
... The guest OSs share the hardware of the host computer, such that each OS appears to have its own processor, memory, and other hardware resources. A hypervisor is also known as a Virtual Machine Manager or Monitor (VMM) [4]. The term hypervisor stems from IBM in the mid-1960s, referring to software programs distributed with an IBM RPQ for the IBM 360/65. ...
Article
Full-text available
This paper reviews and clarifies the meaning and concept of cloud computing, with particular reference to Infrastructure as a Service (IaaS) and its underlying virtualization technologies. The categories of cloud computing and the key characteristics of a cloud environment are also discussed. A review of virtualization technologies and approaches is presented, with key vulnerabilities to security threats, along with mitigation strategies and countermeasures. This knowledge is imperative in making a virtual Information Technology (IT) environment more secure and robust, and can help improve the operational efficiency of Virtual Machines (VMs) in such a manner that organizations can benefit from virtualization technology in particular and cloud computing systems in general.
... Virtualization technology has been used as an enabler for safety and security in embedded systems [4], [5], leveraging workload separation in secure partitions to increase system reliability and stability. However, virtualization per se is not enough to provide the desired security level (i.e., it provides system integrity to some extent, but no confidentiality) [6], and it must be extended with new security-oriented technologies that promote hardware as the initial root of trust. ARM TrustZone [7] and Intel TXT (Intel Trusted Execution Technology) are examples of existing hardware-based security foundations which have been exploited to design trustable and safe embedded devices from the outset [8], [9]. ...
Conference Paper
Full-text available
The pervasive use of embedded computing systems in modern societies, together with the industry trend towards consolidating workloads, openness, and interconnectedness, has raised security, safety, and real-time concerns. Virtualization has been used as an enabler for safety and security, but research has shown that it must be extended and improved with hardware-based security foundations. ARM TrustZone has been used for the realization of Trusted Environments, but in this case real-time requirements are completely disregarded. This work-in-progress paper presents FreeTEE, an embedded architecture that emphasizes and preserves the real-time properties of the system while still guaranteeing security from the outset. TrustZone technology is exploited to implement the basic building blocks of a Trusted Execution Environment (TEE) as a lower-priority thread of an RTOS. Preliminary results demonstrate that the real-time properties of the RTOS remain practically intact.
... System virtualization is a technology that allows the physical machine resources to be shared among different Virtual Machines (VMs) via the use of a software layer called hypervisor or Virtual Machine Monitor (VMM). The hypervisor can be either Type-1 which runs directly on the system hardware and thus is often referred to as bare-metal hypervisor, or Type-2 which runs as an application on top of a conventional Operating System (OS) and is referred to as hosted hypervisor [1,2]. ...
Article
Full-text available
System virtualization is one of the hottest trends in information technology today. It is not just another nice-to-have technology but has become fundamental across the business world. It is successfully used with many business application classes, of which cloud computing is the most visible. Recently, it has started to be used for soft Real-Time (RT) applications such as IP telephony, media servers, audio and video streaming servers, and automotive and communication systems in general. Running these applications on a traditional system (hardware + operating system) guarantees their Quality of Service (QoS); virtualizing them means inserting a new layer between the hardware and the (virtual) Operating System (OS), and thus adding extra overhead. Although these application areas do not always demand hard time guarantees, they require that the underlying virtualization layer support low latency and provide adequate computational resources for completion within a reasonable or predictable timeframe. These aspects are intimately intertwined with the logic of the hypervisor scheduler. In this paper, a series of tests is conducted on three hypervisors (VMware ESXi, Hyper-V Server and Xen) to provide a benchmark of the latencies added to the applications running on top of them. These tests are conducted for different scenarios (use cases) to take into consideration all the parameters and configurations of the hypervisors' schedulers. Finally, this benchmark can be used as a reference for choosing the best hypervisor-application combination.
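A guest-side probe of the kind such a latency benchmark builds on can be sketched by measuring sleep wake-up overshoot, which hypervisor scheduling typically inflates. This is a generic illustration, not the paper's actual test suite:

```python
import statistics
import time

def sleep_latency_samples(n=200, interval=0.001):
    """Measure wake-up overshoot of a short sleep: elapsed minus requested.
    Inside a VM, hypervisor scheduling decisions typically inflate this
    jitter compared to running on bare metal."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        time.sleep(interval)
        samples.append(time.perf_counter() - start - interval)
    return samples

lat = sleep_latency_samples()
print(f"mean overshoot: {statistics.mean(lat) * 1e6:.1f} us, "
      f"max: {max(lat) * 1e6:.1f} us")
```

Running the same probe on bare metal and inside each hypervisor, across the different scheduler configurations, yields the kind of comparative latency figures the paper reports.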