Conference Paper

SVL: Storage virtualization engine leveraging DBMS technology

IBM Silicon Valley Lab, San Jose, CA, USA
DOI: 10.1109/ICDE.2005.138 Conference: Data Engineering, 2005. ICDE 2005. Proceedings. 21st International Conference on
Source: DBLP


Storage systems face increasing demands for expressiveness, fault tolerance, security, distribution, and the like. Such functionality has traditionally been provided by DBMSs. We propose SVL, a storage management system that leverages DBMS technology. The primary problem in block storage management is block virtualization, an abstraction layer that separates the user's view of storage from the implementation of storage. Storage virtualization standardizes storage management in heterogeneous storage and/or host environments and plays a crucial role in enhancing storage functionality and utilization. Today, block storage management systems, commonly referred to as disk controllers, are typically implemented with specialized hardware or microcode-based solutions. We demonstrate how a general-purpose commercial RDBMS, rather than a specialized solution, can support block storage management. We exploit the simple semantics of storage management systems to streamline database performance and thus achieve performance that is acceptable from a storage point of view. This work promises to pave the way for diverse and innovative industrial applications of database management systems.
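The abstract's core idea, a virtualization layer mapping the host's view of blocks to their physical locations through an RDBMS, can be sketched as a single relational table. A minimal illustration using Python's sqlite3 module (SQLite stands in here for the commercial RDBMS; the schema and function names are hypothetical, not taken from the paper):

```python
import sqlite3

# Hypothetical sketch of a block-virtualization mapping table kept in an
# RDBMS. Each row maps a (volume, logical block) pair seen by the host
# to a (device, physical block) pair on a backing disk.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE block_map (
        volume_id      INTEGER NOT NULL,
        logical_block  INTEGER NOT NULL,
        device_id      INTEGER NOT NULL,
        physical_block INTEGER NOT NULL,
        PRIMARY KEY (volume_id, logical_block)
    )
""")

def map_block(volume_id, logical_block, device_id, physical_block):
    """Install or update one virtual-to-physical block mapping."""
    conn.execute(
        "INSERT OR REPLACE INTO block_map VALUES (?, ?, ?, ?)",
        (volume_id, logical_block, device_id, physical_block),
    )

def resolve(volume_id, logical_block):
    """Translate a host I/O address to its physical location."""
    row = conn.execute(
        "SELECT device_id, physical_block FROM block_map "
        "WHERE volume_id = ? AND logical_block = ?",
        (volume_id, logical_block),
    ).fetchone()
    return row  # None means the block is unmapped

# Migrating a block to another device is a single-row update, done
# transactionally, without the host's view of the volume changing.
map_block(1, 0, device_id=7, physical_block=4096)
map_block(1, 0, device_id=9, physical_block=512)  # remap after migration
print(resolve(1, 0))
```

This also hints at why the abstraction helps utilization: relocating data across heterogeneous devices becomes a transactional metadata update rather than a change visible to the host.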

Available from:
  • ABSTRACT: Many existing systems are written in C and are neither re-entrant nor thread-safe. Sometimes these systems are needed in a context for which they were not originally designed, which may require them to be re-entrant. This article introduces a program that filters C source code, transforming the shared resources (the global variables) so that the code becomes re-entrant: virtualizing the code. The code is then compiled as normal. This approach allows programmatic virtualization with little cost at runtime. Copyright © 2007 John Wiley & Sons, Ltd.
    No preview · Article · Apr 2008 · Software Practice and Experience
  • ABSTRACT: Advances in data storage technologies such as storage area networks (SANs), virtualization of servers and storage, and cloud computing have revolutionized the way data is stored. Many business organizations, universities, hospitals, and research organizations now deploy SANs not as a luxury but as a necessity. Scientific research organizations such as NASA process terabytes of data every day, and accurate analysis and processing of experimental data call for efficiently storing and retrieving the data to and from storage media. Similarly, social websites such as YouTube and Facebook handle large amounts of data every minute, so the robust performance of any computing and retrieval application demands a reduction in the latency of data access. Hidden Markov models (HMMs) have been successfully used by researchers to predict data patterns in areas such as speech recognition, gene prediction, and cryptanalysis. The goal of this research is to reduce the scheduling delay in hypervisors and the latency of reading blocks of data from the disk array using HMMs in a server-virtualized environment. An HMM was implemented to identify patterns in the read requests issued and exploited to reduce the overall read response time of a server: a Gaussian HMM is used to reduce the scheduling delay, and a discrete HMM is used to reduce the read response time. Results observed using HMMs were very promising, compared to results without them, in decreasing the overall latency of data access.
    Preview · Conference Paper · Nov 2010
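The read-pattern prediction described in the last abstract can be illustrated with a much simpler stand-in model: a first-order Markov chain over observed block addresses (the paper itself trains HMMs; the class and method names below are hypothetical). The most frequent successor of the last block read is the candidate to prefetch:

```python
from collections import defaultdict, Counter

# Hypothetical sketch: predicting the next block read so it can be
# prefetched. A first-order Markov chain over block addresses stands in
# for the discrete HMM used in the paper.
class ReadPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # block -> successor counts
        self.prev = None

    def observe(self, block):
        """Record one read request and update transition counts."""
        if self.prev is not None:
            self.transitions[self.prev][block] += 1
        self.prev = block

    def predict_next(self):
        """Most frequently observed successor of the last block, if any."""
        counts = self.transitions.get(self.prev)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Train on a trace with a repeating pattern, e.g. a short scan that
# wraps around: 10 -> 11 -> 12 -> 10 -> ...
predictor = ReadPredictor()
for block in [10, 11, 12, 10, 11, 12, 10, 11]:
    predictor.observe(block)
print(predictor.predict_next())  # block to prefetch
```

Issuing the predicted read ahead of time is what shrinks the observed response time when the prediction is right; a real system would also track hit rate and fall back to no prefetching when the pattern breaks.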