Article

On the relative dominance of paging algorithms.

Theor. Comput. Sci. 01/2009; 410:3694-3701.
Source: DBLP

ABSTRACT: The paging algorithm Least Recently Used Second Last Request (LRU-2) was proposed for use in database disk buffering and shown experimentally to perform better than Least Recently Used (LRU). We compare LRU-2 and LRU theoretically, using both the standard competitive analysis and the newer relative worst order analysis. The competitive ratio for LRU-2 is shown to be 2k for cache size k, which is worse than LRU’s competitive ratio of k. However, using relative worst order analysis, we show that LRU-2 and LRU are comparable in LRU-2’s favor, giving a theoretical justification for the experimental results. Many of our results for LRU-2 also apply to its generalization, Least Recently Used Kth Last Request.
Acta Informatica 01/2010; 47:359-374.
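
To make the eviction rule concrete: on a fault with a full cache, LRU-2 evicts the page whose second-most-recent request lies furthest in the past. The following is a minimal Python sketch of a fault counter, assuming the common convention that pages requested fewer than twice get second-last time -inf and that ties among such pages are broken by plain LRU; the paper pins down the exact variant being analyzed.

    from math import inf

    def lru2_faults(requests, k):
        """Count faults of LRU-2 with cache size k on a request sequence.

        Convention (an assumption of this sketch): pages requested fewer
        than twice have second-last request time -inf, and ties among
        them are broken by plain LRU.
        """
        cache = set()
        last = {}    # page -> time of its most recent request
        second = {}  # page -> time of its second most recent request
        faults = 0
        for t, p in enumerate(requests):
            if p not in cache:
                faults += 1
                if len(cache) == k:
                    # Evict the page with the oldest second-last request,
                    # falling back to LRU order among tied pages.
                    victim = min(cache, key=lambda q: (second.get(q, -inf), last[q]))
                    cache.remove(victim)
                cache.add(p)
            second[p] = last.get(p, -inf)
            last[p] = t
        return faults

Note the design consequence: a page referenced twice in quick succession keeps a recent second-last timestamp and so survives a one-shot scan of many distinct pages, which is the behavior that made LRU-2 attractive for database disk buffering.
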
ABSTRACT: It is well established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning's working set model and express the performance of well-known algorithms in terms of this parameter. This introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their intuitive relative strengths. It also reflects the intuition that a larger cache leads to better performance. We obtain a similar separation for list update algorithms. Lastly, we show that, surprisingly, certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.
Algorithmica 01/2009.
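
The paper defines its locality parameter precisely; purely to illustrate the Denning-style idea it builds on, the sketch below computes the average number of distinct pages among the last w requests of a sequence. The window size w and this particular averaging are assumptions of the example, not the paper's definition; the point is only that sequences with strong locality of reference yield small working sets.

    from collections import deque

    def avg_working_set_size(requests, w):
        """Average, over all positions, of the number of distinct pages
        among the last w requests: a Denning-style locality statistic.
        Strong locality of reference -> small average working sets."""
        window = deque()
        counts = {}
        total = 0
        for p in requests:
            window.append(p)
            counts[p] = counts.get(p, 0) + 1
            if len(window) > w:            # slide the window forward
                old = window.popleft()
                counts[old] -= 1
                if counts[old] == 0:
                    del counts[old]
            total += len(counts)           # current working set size
        return total / len(requests) if requests else 0.0

A parameterized bound in the abstract's sense then expresses an algorithm's fault rate directly as a function of the cache size k and such a locality parameter, rather than as a ratio against the optimal offline cost.
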
ABSTRACT: In the last few years, multicore processors have become the dominant processor architecture. While cache eviction policies have been widely studied both in theory and practice for sequential processors, in the case in which various simultaneous processes use a shared cache the performance of even the most common eviction policies is not yet fully understood, nor do we know if current best strategies in practice are optimal. In particular, there is almost no theoretical backing for the use of current eviction policies in multicore processors. Recently, a work by Hassidim [14] initiated the theoretical study of cache eviction policies for multicore processors under the traditional competitive analysis, showing that LRU is not competitive against an offline policy that has the power of arbitrarily delaying request sequences to its advantage. In this paper we study caching under the more conservative model in which requests must be served as they arrive. We perform a thorough all-to-all comparison of strategies, providing lower and upper bounds for the ratios between their performances. We show that if the cache is partitioned, the partition policy has a greater influence on performance than the eviction policy. On the other hand, we show that sharing the cache among cores is in general better than partitioning the cache, unless the partition is dynamic and changes frequently, in which case shared cache strategies and dynamic partitions are essentially equivalent when serving disjoint requests.
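
The shared-versus-partitioned comparison is easy to experiment with. Below is a minimal Python sketch, not the paper's model: it assumes a round-robin interleaving of equal-length per-core streams over disjoint page sets, LRU as the eviction policy throughout, and a static equal split of the k slots for the partitioned case.

    from collections import OrderedDict

    def lru_faults(requests, k):
        """Fault count of plain LRU with capacity k on one request stream."""
        cache = OrderedDict()
        faults = 0
        for p in requests:
            if p in cache:
                cache.move_to_end(p)           # refresh recency on a hit
            else:
                faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)  # evict least recently used
                cache[p] = True
        return faults

    def shared_vs_partitioned(streams, k):
        """Compare one shared LRU of size k on a round-robin interleaving
        against a static partition giving each core k // len(streams) slots.
        Assumes equal-length streams over disjoint page sets."""
        interleaved = [p for step in zip(*streams) for p in step]
        shared = lru_faults(interleaved, k)
        per_core = k // len(streams)
        partitioned = sum(lru_faults(s, per_core) for s in streams)
        return shared, partitioned

In this toy setting, a core cycling through three pages alongside a core reusing a single page favors the shared cache: the shared policy in effect grants the busier core more slots than a static equal split would, echoing the abstract's finding that the choice of partition matters more than the choice of eviction policy.
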
