A Functional Classification Based Inter-VM Communication Mechanism on Multi-core Platforms
ABSTRACT With the resurgence of virtualization technologies and the development of multi-core technologies, combining the two has become a trend. Inter-VM communication is therefore a key factor in improving the performance of virtual machines (VMs) on multi-core platforms. In this paper, we first analyze the characteristics of multi-core tasks and the properties of the virtual machine environment, and then classify processor cores into two categories based on their different functions. Building on this classification, we design an inter-VM communication mechanism for multi-core platforms. It discards the traditional communication path between VMs, which must pass through a trusted VM, and instead sets up communication channels between virtual CPUs in different VMs, using shared memory to implement high-throughput inter-VM communication. Experimental results demonstrate its efficiency.
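The contrast the abstract draws can be illustrated with a small model. The sketch below is hypothetical (not the paper's implementation): it compares the traditional path, where every message is relayed by a trusted VM (e.g. Xen's Dom0) and copied twice, against a direct channel where the sender writes straight into memory the receiver also maps, costing a single copy. The function names and the copy-count bookkeeping are assumptions made for illustration.

```python
# Illustrative model of the two inter-VM paths described in the abstract.
# NOT the paper's actual mechanism; names and structure are hypothetical.

def send_via_trusted_vm(msg: bytes) -> tuple[bytes, int]:
    """Sender -> trusted VM -> receiver: the message is copied twice."""
    copies = 0
    trusted_vm_buf = bytes(msg)           # copy 1: sender page into the trusted VM
    copies += 1
    receiver_buf = bytes(trusted_vm_buf)  # copy 2: trusted VM into the receiver
    copies += 1
    return receiver_buf, copies

def send_via_shared_memory(msg: bytes, channel: bytearray) -> tuple[bytes, int]:
    """Sender writes directly into a region both VMs map: one copy."""
    channel[:len(msg)] = msg              # copy 1: straight into the shared region
    return bytes(channel[:len(msg)]), 1

shared = bytearray(16)  # stands in for a page shared between two vCPUs
m1, c1 = send_via_trusted_vm(b"hello")
m2, c2 = send_via_shared_memory(b"hello", shared)
assert m1 == m2 == b"hello"
print(c1, c2)  # the direct channel halves the number of data copies
```

The point of the model is only the copy count: bypassing the trusted VM removes one full copy (and, on real hardware, the associated VM switches) from every message.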
ABSTRACT: The goal of Denali is to safely execute many independent, untrusted server applications on a single physical machine. This would enable any developer to inject a new service into third-party Internet infrastructure; for example, dynamic content generation code could be introduced into content-delivery networks or caching systems. We believe that virtual machine monitors (VMMs) are ideally suited to this application domain. A VMM provides strong isolation by default, since one virtual machine cannot directly name a resource in another. In addition, VMMs defer the implementation of high-level abstractions to guest OSs, which greatly simplifies the kernel and avoids "layer-below" attacks. The main challenge in using a VMM for this application domain is in scaling the number of concurrent virtual machines that can simultaneously execute on it. 03/2002
Conference Proceeding: Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor.
ABSTRACT: Virtual machines were developed by IBM in the 1960's to provide concurrent, interactive access to a mainframe computer. Each virtual machine is a replica of the underlying physical machine and users are given the illusion of running directly on the physical machine. Virtual machines also provide benefits like isolation and resource sharing, and the ability to run multiple flavors and configurations of operating systems. VMware Workstation brings such mainframe-class virtual machine technology to PC-based desktop and workstation computers. This paper focuses on VMware Workstation's approach to virtualizing I/O devices. PCs have a staggering variety of hardware, and are usually pre-installed with an operating system. Instead of replacing the pre-installed OS, VMware Workstation uses it to host a user-level application (VMApp) component, as well as to schedule a privileged virtual machine monitor (VMM) component. The VMM directly provides high-performance CPU virtualization while the VMApp uses the host OS to virtualize I/O devices and shield the VMM from the variety of devices. A crucial question is whether virtualizing devices via such a hosted architecture can meet the performance required of high throughput, low latency devices. To this end, this paper studies the virtualization and performance of an Ethernet adapter on VMware Workstation. Results indicate that with optimizations, VMware Workstation's hosted virtualization architecture can match native I/O throughput on standard PCs. Although a straightforward hosted implementation is CPU-limited due to virtualization overhead on a 733 MHz Pentium III system on a 100 Mb/s Ethernet, a series of optimizations targeted at reducing CPU utilization allows the system to match native network throughput.
Further optimizations are discussed both within and outside a hosted architecture. Proceedings of the General Track: 2001 USENIX Annual Technical Conference, June 25-30, 2001, Boston, Massachusetts, USA; 01/2001
Conference Proceeding: XenSocket: A High-Throughput Interdomain Transport for Virtual Machines.
ABSTRACT: This paper presents the design and implementation of XenSocket, a UNIX-domain-socket-like construct for high-throughput interdomain (VM-to-VM) communication on the same system. The design of XenSocket replaces the Xen page-flipping mechanism with a static circular memory buffer shared between two domains, wherein information is written by one domain and read asynchronously by the other domain. XenSocket draws on best-practice work in this field and avoids incurring the overhead of multiple hypercalls and memory page table updates by aggregating what were previously multiple operations on multiple network packets into one or more large operations on the shared buffer. While the reference implementation (and name) of XenSocket is written against the Xen virtual machine monitor, the principle behind XenSocket applies broadly across the field of virtual machines. Middleware 2007, ACM/IFIP/USENIX 8th International Middleware Conference, Newport Beach, CA, USA, November 26-30, 2007, Proceedings; 01/2007
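The static circular buffer that XenSocket describes can be sketched in miniature. The model below is an assumption-laden illustration, not the actual XenSocket code: a single-producer/single-consumer ring where the writer domain copies records into the shared region and the reader drains them asynchronously, so several small "packets" become a few bulk operations on one buffer instead of one hypercall and page-table update per packet.

```python
# Sketch of a single-producer/single-consumer circular buffer, modelling
# XenSocket's static shared-memory region between two domains.
# Illustrative only; the class and its methods are hypothetical.

class RingBuffer:
    def __init__(self, size: int):
        self.buf = bytearray(size)  # stands in for the shared memory page(s)
        self.size = size
        self.head = 0               # next byte the reader consumes
        self.tail = 0               # next byte the writer fills

    def _free(self) -> int:
        # One byte is left unused so that full and empty are distinguishable.
        return (self.head - self.tail - 1) % self.size

    def write(self, data: bytes) -> bool:
        """Writer domain: copy data into the shared region (one copy)."""
        if len(data) > self._free():
            return False            # buffer full; real code would block or signal
        for b in data:
            self.buf[self.tail] = b
            self.tail = (self.tail + 1) % self.size
        return True

    def read(self, n: int) -> bytes:
        """Reader domain: asynchronously drain up to n available bytes."""
        avail = (self.tail - self.head) % self.size
        out = bytearray()
        for _ in range(min(n, avail)):
            out.append(self.buf[self.head])
            self.head = (self.head + 1) % self.size
        return bytes(out)


ring = RingBuffer(64)
# Aggregate three small "packets" into writes on one shared buffer,
# rather than paying a page flip or hypercall per packet.
for pkt in (b"pkt1", b"pkt2", b"pkt3"):
    assert ring.write(pkt)
print(ring.read(64))  # the reader sees the concatenated stream: b'pkt1pkt2pkt3'
```

Because head and tail advance independently, the writer and reader need no lock in the one-producer/one-consumer case, which is what lets the real mechanism avoid per-message synchronization with the hypervisor.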