On the relative dominance of paging algorithms.

Theor. Comput. Sci. 01/2009; 410:3694–3701.
Source: DBLP


Available from: J. Ian Munro, Jan 04, 2016
  • "In addition to probabilistic considerations, they analyzed deterministic algorithms using competitive analysis. We analyze the frequent items problem using relative interval analysis [13] and relative worst order analysis [4]. In addition, we tighten the competitive analysis [16] [15] results from [14]."
    ABSTRACT: In this paper, we strengthen the competitive analysis results obtained for a fundamental online streaming problem, the Frequent Items Problem. Additionally, we contribute a more detailed analysis of this problem, using alternative performance measures, supplementing the insight gained from competitive analysis. The results also contribute to the general study of performance measures for online algorithms. It has long been known that competitive analysis suffers from drawbacks in certain situations, and many alternative measures have been proposed. However, more systematic comparative studies of performance measures have been initiated recently, and we continue this work, using competitive analysis, relative interval analysis, and relative worst order analysis on the Frequent Items Problem.
    Article · Jun 2013 · International Journal of Foundations of Computer Science
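The Frequent Items Problem discussed in the abstract above can be illustrated with the classical Misra-Gries summary, which finds all candidates occurring more than n/k times in a stream using k-1 counters. This is a generic sketch for illustration only; the paper's specific online model and the measures it applies (relative interval and relative worst order analysis) are not reflected here.

```python
def misra_gries(stream, k):
    """Misra-Gries summary with at most k-1 counters.

    Any item occurring more than len(stream)/k times is guaranteed
    to be among the returned candidates (a second pass can verify
    exact counts).
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)

stream = list("abracadabra")
print(misra_gries(stream, 3))  # → {'a'} ('a' occurs 5 times in 11 items)
```

With k = 3, only items occurring more than 11/3 ≈ 3.7 times are guaranteed to survive, so 'a' (5 occurrences) is reported.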
  • "(See e.g. [11], and [9] [10] for a survey of performance measures for online algorithms.)"
    ABSTRACT: In the last few years, multicore processors have become the dominant processor architecture. While cache eviction policies have been widely studied both in theory and practice for sequential processors, in the case in which various simultaneous processes use a shared cache the performance of even the most common eviction policies is not yet fully understood, nor do we know if current best strategies in practice are optimal. In particular, there is almost no theoretical backing for the use of current eviction policies in multicore processors. Recently, a work by Hassidim [14] initiated the theoretical study of cache eviction policies for multicore processors under the traditional competitive analysis, showing that LRU is not competitive against an offline policy that has the power of arbitrarily delaying request sequences to its advantage. In this paper we study caching under the more conservative model in which requests must be served as they arrive. We perform a thorough all-to-all comparison of strategies, providing lower and upper bounds for the ratios between their performances. We show that if the cache is partitioned, the partition policy has a greater influence on performance than the eviction policy. On the other hand, we show that sharing the cache among cores is in general better than partitioning the cache, unless the partition is dynamic and changes frequently, in which case shared cache strategies and dynamic partitions are essentially equivalent when serving disjoint requests.
    Article · Sep 2010
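The shared-versus-partitioned comparison in the abstract above can be made concrete with a small fault-counting experiment. The sketch below is an illustration under simplifying assumptions (two cores, disjoint pages, a fixed round-robin interleaving, static partition), not the paper's model: it counts LRU faults for one shared cache of size 4 against two static halves of size 2.

```python
from collections import OrderedDict

def lru_faults(requests, capacity):
    """Serve requests with LRU eviction; return the number of cache faults."""
    cache = OrderedDict()
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = None
    return faults

# Two cores issue disjoint requests, interleaved round-robin.
core_a = [1, 2, 3, 1, 2, 3]   # working set of 3 pages
core_b = [7, 7, 7, 7, 7, 7]   # working set of 1 page
merged = [p for pair in zip(core_a, core_b) for p in pair]

shared = lru_faults(merged, 4)                         # one cache of size 4
split = lru_faults(core_a, 2) + lru_faults(core_b, 2)  # static halves
print(shared, split)  # → 4 7
```

Here the shared cache adapts to the cores' uneven working sets (4 faults), while the static half-and-half partition thrashes on core_a (7 faults), matching the abstract's observation that sharing generally beats a static partition.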
  •
    ABSTRACT: It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning's working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We obtain similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
    Article · Sep 2009 · Algorithmica
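A working-set-style locality measure like the one the abstract above builds on can be sketched as the average number of distinct pages seen in each window of the last tau requests: the smaller the average, the more locality the sequence exhibits. This is an illustrative proxy only; the paper's exact parameterization of Denning's model may differ.

```python
def avg_working_set_size(requests, tau):
    """Average number of distinct pages over all windows of the last tau requests."""
    sizes = []
    for i in range(1, len(requests) + 1):
        window = requests[max(0, i - tau):i]
        sizes.append(len(set(window)))
    return sum(sizes) / len(sizes)

local = [1, 1, 2, 2, 1, 1, 2, 2]      # high locality: 2 pages reused
scattered = [1, 2, 3, 4, 5, 6, 7, 8]  # no locality: every page fresh
print(avg_working_set_size(local, 4), avg_working_set_size(scattered, 4))
# → 1.75 3.25
```

A sequence with strong locality keeps its working set small relative to the cache, which is exactly the regime where the parameterized analysis predicts good paging performance.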