Conference Paper

Buffer management in multimedia database systems

Dept. of Comput. Sci., State Univ. of New York, Buffalo, NY
DOI: 10.1109/MMCS.1996.534973 · Conference: Proceedings of the Third IEEE International Conference on Multimedia Computing and Systems, 1996
Source: IEEE Xplore

ABSTRACT

This paper investigates the principles of buffer management for multimedia data presentations in object-oriented database environments. The primary goal is to minimize the response time of multimedia presentations while ensuring that all continuity and synchronization requirements are satisfied. Minimum buffering requirements that guarantee the continuity and synchrony of multimedia data presentation are proposed. These principles provide users with the full range of information required to develop a database environment for multimedia presentations.
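As a rough illustration of the minimum-buffering idea in the abstract, a lower bound on the buffer needed for hiccup-free playback can be sketched as follows (the function name and the simple constant-rate model are illustrative assumptions, not the paper's actual formulation):

```python
import math

def min_buffer_blocks(consume_rate, cycle_len, block_size):
    # During one service cycle the stream consumes consume_rate * cycle_len
    # bytes; at least that much must already be buffered when the cycle
    # starts, or playback stalls (a "hiccup"). Round up to whole blocks.
    return math.ceil(consume_rate * cycle_len / block_size)

# e.g. a 1.5 MB/s stream served in 1-second cycles with 64 KiB blocks
blocks = min_buffer_blocks(1.5e6, 1.0, 65536)
```

Longer service cycles amortize disk overhead but, as this bound shows, demand proportionally more buffer space per stream.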

Full-text preview

Available from: psu.edu
  • Source
    • "Neogi et al. [15] proposed prebuffering data blocks when a fraction of a cycle is unused. There have been a number of proposals for prefetching the leading portion of the requested video file to minimize startup latency [10] [16]. These works are based on a best-effort approach and provide neither scheduling nor cycle-length allocation for pre-buffering."
    ABSTRACT: The objective of this study is to determine the right cycle management policy for servicing periodic soft real-time disk retrieval. Cycle-based disk scheduling provides an effective way of exploiting disk bandwidth while meeting the soft real-time requirements of individual I/O requests, and is widely used in real-time retrieval of multimedia data blocks. Interestingly, the issue of cycle management under dynamically changing workloads has not received proper attention despite its significant engineering implications for system behavior. When the cycle length remains constant regardless of varying I/O workload intensity, it may cause under-utilization of disk bandwidth or unnecessarily long service startup latency. In this work, we present a novel cycle management policy which dynamically adapts to the varying workload. We develop a pre-buffering policy which makes the adaptive cycle management policy robust against starvation. The proposed approach carefully determines the cycle length and the respective buffer size for pre-buffering. The performance study reveals a number of valuable observations. Adaptive cycle length management with incremental pre-buffering exhibits superior performance to the other cycle management policies in startup latency, jitter, and buffer requirement. We also find that servicing low-playback-rate content, such as video for 3G cellular networks, requires rather different treatment in disk subsystem capacity planning and call admission criteria, because a relatively significant fraction of I/O latency is taken up by plain disk overhead.
    Preview · Article · Dec 2006 · Information Systems
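The constant-versus-adaptive cycle-length trade-off described in this abstract can be illustrated with the classic feasibility bound for cycle-based disk scheduling (a sketch under assumed parameters, not the paper's actual policy): with n streams of playback rates r_i, disk transfer rate R, and per-request overhead t_o, a cycle of length L is feasible when n*t_o + L*sum(r_i)/R <= L.

```python
def min_cycle_length(rates, disk_rate, seek_overhead):
    # Smallest cycle length L serving all streams: transfer time
    # L * sum(rates) / disk_rate plus one seek per stream must fit in L.
    # Solving n*t_o + L*u <= L with u = sum(rates)/disk_rate gives
    # L >= n*t_o / (1 - u). Returns None if the disk is saturated.
    util = sum(rates) / disk_rate
    if util >= 1.0:
        return None
    return len(rates) * seek_overhead / (1.0 - util)

# e.g. four 1 MB/s streams on a 20 MB/s disk with 15 ms overhead each
cycle = min_cycle_length([1e6] * 4, 20e6, 0.015)
```

Recomputing this bound as streams arrive and depart is the kind of adaptation the abstract argues for: a fixed L sized for the worst case wastes bandwidth at light load and inflates startup latency.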
  • Source
    • "Neogi et al. [15] proposed prebuffering data blocks when a fraction of a cycle is unused. There have been a number of proposals for prefetching the leading portion of the requested video file to minimize startup latency [10] [16]. These works are based on a best-effort approach and provide neither scheduling nor cycle-length allocation for pre-buffering."

    Preview · Article · Jan 2006
  • Source
    • "Storage systems can reduce the number of disk I/O operations by sharing data already retrieved from disk among all of the clients, using a buffer cache [13]. For the purpose of reducing disk I/O, recent works [7] [11] have proposed the use of a global buffer cache similar to the buffer cache in traditional storage systems. Owing to the access pattern for CM objects, however, LRU and MRU, which are regarded as good replacement algorithms, do not yield a high cache hit ratio in continuous media servers [2]."
    ABSTRACT: In continuous media servers, disk load can be reduced by using a buffer cache. In order to utilize the disk bandwidth saved by caching, a continuous media server must employ an admission control scheme to decide whether a new client can be admitted for service without violating the requirements of the clients already being serviced. A scheme providing deterministic QoS guarantees in servers using caching has already been proposed. Since deterministic admission control is based on worst-case assumptions, however, it wastes system resources. If we could exactly predict the future available disk bandwidth, both high disk utilization and hiccup-free service would be achievable. However, as the caching effect is not analytically determined, it is difficult to predict the disk load without substantial computation overhead. In this paper, we propose a statistical admission control scheme for continuous media servers in which caching is used to reduce disk load. This scheme improves disk utilization and allows more streams to be serviced while maintaining near-deterministic service. The scheme, called Shortsighted Prediction Admission Control (SPAC), combines exact prediction through on-line simulation with statistical estimation using a probabilistic model of future disk load, in order to reduce computation overhead. It thereby exploits the variation in disk load induced by VBR-encoded objects and the decrease in client load due to caching. Through trace-driven simulations, we demonstrate that the scheme provides near-deterministic QoS and keeps disk utilization high.
    Full-text · Article · Feb 2003 · Multimedia Tools and Applications
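A minimal sketch of the statistical admission test described in the abstract above (illustrative only: SPAC's on-line simulation component is omitted, and modeling each stream's per-cycle disk load as an independent random variable summarized by mean and standard deviation is an assumption of this sketch):

```python
import math

def admit(new_mean, new_std, cur_means, cur_stds, capacity, z=3.0):
    # Treat each stream's per-cycle disk load as independent, so means
    # and variances add. Admit the new stream only if the combined mean
    # plus z standard deviations still fits in the cycle's disk capacity;
    # larger z trades utilization for a lower hiccup probability.
    mean = sum(cur_means) + new_mean
    var = sum(s * s for s in cur_stds) + new_std * new_std
    return mean + z * math.sqrt(var) <= capacity
```

Compared with worst-case (deterministic) admission, which would sum peak loads, this probabilistic bound admits more VBR streams at the cost of a small, tunable overload risk.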