Table 3
Different strategies for pivot selection for basic QuickHeapsort tested on 10^4 and 10^6 elements. The standard deviation of our experiments is given in percent of the average number of comparisons.


Source publication
Conference Paper
Full-text available
We present a new analysis for QuickHeapsort, splitting it into the analysis of the partition phases and the analysis of the heap phases. This enables us to consider samples of non-constant size for the pivot selection and leads to better theoretical bounds for the algorithm. Furthermore, we introduce some modifications of QuickHeapsort, both in-place...

Contexts in source publication

Context 1
... the heap construction we implemented the normal algorithm due to Floyd [9] as well as the algorithm using the extra bit-array (which is the same as in MDR-Heapsort). More results with other pivot selection strategies are in Table 2 and Table 3 in App. B, confirming that a sample size of √n is optimal for pivot selection with respect to the number of comparisons and also that the o(n)-terms in Thm. ...
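Floyd's algorithm referenced here builds a heap bottom-up by sifting down every internal node, from the last one to the root. The following is a minimal C++ sketch of that construction (our own illustration with hypothetical names siftDown/buildHeap, not the implementation benchmarked in the paper):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Sift the element at index i down until the max-heap property holds
    // in the subtree rooted at i (0-based array layout, heap size n).
    void siftDown(std::vector<int>& a, std::size_t i, std::size_t n) {
        while (2 * i + 1 < n) {
            std::size_t child = 2 * i + 1;                // left child
            if (child + 1 < n && a[child + 1] > a[child])
                ++child;                                  // larger of the two children
            if (a[i] >= a[child]) break;                  // heap order restored
            std::swap(a[i], a[child]);
            i = child;
        }
    }

    // Floyd's bottom-up construction: sift down all internal nodes,
    // starting from the last internal node and moving towards the root.
    void buildHeap(std::vector<int>& a) {
        for (std::size_t i = a.size() / 2; i-- > 0; )
            siftDown(a, i, a.size());
    }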
Context 2
... test sample of sizes of one, three, approximately lg n, n^{1/4}, n/lg n, √n, and n^{3/4} for the pivot selection. In Table 3 the average number of comparisons and the standard deviations are listed. We ran the algorithms on arrays of length 10000 and one million. ...
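All of the strategies above follow the same pattern: draw a sample of k elements at random and use its median as the pivot. A hedged C++ sketch for the k ≈ √n case (function and variable names are ours; the benchmarked code may differ in detail):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Pick a pivot as the median of about sqrt(n) randomly sampled elements
    // (assumes n >= 1). Replacing the sample-size formula by lg n, n^{1/4},
    // n/lg n, or n^{3/4} reproduces the other strategies listed above.
    int choosePivot(const std::vector<int>& a, std::mt19937& rng) {
        const std::size_t n = a.size();
        const std::size_t k = std::max<std::size_t>(
            1, static_cast<std::size_t>(std::sqrt(static_cast<double>(n))));
        std::vector<int> sample(k);
        std::uniform_int_distribution<std::size_t> idx(0, n - 1);
        for (std::size_t i = 0; i < k; ++i)
            sample[i] = a[idx(rng)];
        // Median of the sample via selection; costs O(k) comparisons on average.
        std::nth_element(sample.begin(), sample.begin() + k / 2, sample.end());
        return sample[k / 2];
    }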

Similar publications

Conference Paper
Full-text available
Numerous approximation algorithms for unit disk graphs have been proposed in the literature, exhibiting sharp trade-offs between running times and approximation ratios. We propose a method to obtain linear-time approximation algorithms for unit disk graph problems. Our method yields linear-time (4+eps)-approximations to the maximum-weight independe...
Conference Paper
Full-text available
We present a general transformation for combining a constant number of binary search tree data structures (BSTs) into a single BST whose running time is within a constant factor of the minimum of any "well-behaved" bound on the running time of the given BSTs, for any online access sequence. (A BST has a well behaved bound with $f(n)$ overhead if it...
Conference Paper
Full-text available
We prove that in an n-vertex graph, induced chordal and interval subgraphs with the maximum number of vertices can be found in time $O(2^{\lambda n})$ for some $\lambda<1$. These are the first algorithms breaking the trivial $2^n n^{O(1)}$ bound of the brute-force search for these problems.
Article
Full-text available
This report has been written by members of the consortium from the LIRIS partner.

Citations

... The sorting problem has been studied thoroughly, and many research papers have focused on designing fast and optimal algorithms [9][10][11][12][13][14][15][16]. Also, some studies have focused on implementing these algorithms to obtain an efficient sorting algorithm on different platforms [15][16][17]. ...
Article
Full-text available
Sorting an array of n elements represents one of the leading problems in different fields of computer science such as databases, graphs, computational geometry, and bioinformatics. A large number of sorting algorithms have been proposed based on different strategies. Recently, a sequential algorithm, called the double hashing sort (DHS) algorithm, has been shown to exceed the quicksort algorithm in performance by 10–25%. In this paper, we study this technique from the standpoints of complexity analysis and the algorithm's practical performance. We propose a new complexity analysis for the DHS algorithm based on the relation between the size of the input and the domain of the input elements. Our results reveal that the previous complexity analysis was not accurate. We also show experimentally that the counting sort algorithm performs significantly better than the DHS algorithm. Our experimental studies are based on six benchmarks; the improvement was roughly 46% on average across all cases studied.
... Other examples for QuickXsort are QuickHeapsort [5,9], QuickWeakheapsort [10,11] and UltimateHeapsort [21]. QuickXsort with median-of-√n pivot selection uses at most n log n + cn + o(n) comparisons on average to sort n elements, given that X also uses at most n log n + cn + o(n) comparisons on average [11]. ...
... UltimateHeapsort is inferior to QuickHeapsort in terms of the average case number of comparisons, although, unlike QuickHeapsort, it allows an n lg n + O(n) bound for the worst case number of comparisons. Diekert and Weiß [4] analyzed QuickHeapsort more thoroughly and described some improvements requiring less than n lg n − 0.99n + o(n) comparisons on average (choosing the pivot as median of √n elements). However, both the original analysis of Cantone and Cincotti and the improved analysis could not give tight bounds for the average case of median-of-k QuickMergesort. ...
... We consider both the case where k is a fixed constant and where k = k(n) is an increasing function of the (sub)problem size. Previous results in [4,35] for Quicksort suggest that sample sizes k(n) = Θ(√n) are likely to be optimal asymptotically, but most of the relative savings for the expected case are already realized for k ≤ 10. It is quite natural to expect similar behavior in QuickXsort, and it will be one goal of this article to precisely quantify these statements. ...
... We use a symmetric variant (with a min-oriented heap) if the left segment is to be sorted by X. For detailed code for the above procedure, we refer to [3] or [4]. ...
Preprint
Full-text available
QuickXsort is a highly efficient in-place sequential sorting scheme that mixes Hoare's Quicksort algorithm with X, where X can be chosen from a wider range of other known sorting algorithms, like Heapsort, Insertionsort and Mergesort. Its major advantage is that QuickXsort can be in-place even if X is not. In this work we provide general transfer theorems expressing the number of comparisons of QuickXsort in terms of the number of comparisons of X. More specifically, if pivots are chosen as medians of (not too fast) growing size samples, the average number of comparisons of QuickXsort and X differ only by $o(n)$-terms. For median-of-$k$ pivot selection for some constant $k$, the difference is a linear term whose coefficient we compute precisely. For instance, median-of-three QuickMergesort uses at most $n \lg n - 0.8358n + O(\log n)$ comparisons. Furthermore, we examine the possibility of sorting base cases with some other algorithm using even less comparisons. By doing so the average-case number of comparisons can be reduced down to $n \lg n- 1.4106n + o(n)$ for a remaining gap of only $0.0321n$ comparisons to the known lower bound (while using only $O(\log n)$ additional space and $O(n \log n)$ time overall). Implementations of these sorting strategies show that the algorithms challenge well-established library implementations like Musser's Introsort.
... Other examples for QuickXsort are QuickHeapsort [5,9], QuickWeakheapsort [10,11] and UltimateHeapsort [21]. QuickXsort with median-of-√n pivot selection uses at most n log n + cn + o(n) comparisons on average to sort n elements, given that X also uses at most n log n + cn + o(n) comparisons on average [11]. ...
Preprint
Full-text available
The two most prominent solutions for the sorting problem are Quicksort and Mergesort. While Quicksort is very fast on average, Mergesort additionally gives worst-case guarantees, but needs extra space for a linear number of elements. Worst-case efficient in-place sorting, however, remains a challenge: the standard solution, Heapsort, suffers from a bad cache behavior and is also not overly fast for in-cache instances. In this work we present median-of-medians QuickMergesort (MoMQuickMergesort), a new variant of QuickMergesort, which combines Quicksort with Mergesort allowing the latter to be implemented in place. Our new variant applies the median-of-medians algorithm for selecting pivots in order to circumvent the quadratic worst case. Indeed, we show that it uses at most $n \log n + 1.6n$ comparisons for $n$ large enough. We experimentally confirm the theoretical estimates and show that the new algorithm outperforms Heapsort by far and is only around 10% slower than Introsort (std::sort implementation of stdlibc++), which has a rather poor guarantee for the worst case. We also simulate the worst case, which is only around 10% slower than the average case. In particular, the new algorithm is a natural candidate to replace Heapsort as a worst-case stopper in Introsort.
... Based on QuickHeapsort [5,7], Edelkamp and Weiß [9] developed the concept of QuickXsort and applied it to X = WeakHeapsort [8] and X = Mergesort. The idea, going back to UltimateHeapsort [17], is very simple: as in Quicksort the array is partitioned into the elements greater and less than some pivot element, respectively. ...
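A schematic C++ sketch of that idea (our simplification: std::sort stands in for the external algorithm X, and the buffer mechanics that make QuickXsort in-place are only indicated in comments):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Skeleton of QuickXsort: partition as in Quicksort, sort one segment
    // with the external algorithm X (which in the real scheme uses the other
    // segment as its temporary buffer), then recurse on the remaining segment.
    void quickXsort(std::vector<int>& a, std::ptrdiff_t lo, std::ptrdiff_t hi) {
        if (hi - lo <= 1) return;
        const int pivot = a[lo + (hi - lo) / 2];          // pivot choice simplified
        const std::ptrdiff_t mid =
            std::partition(a.begin() + lo, a.begin() + hi,
                           [pivot](int x) { return x < pivot; }) - a.begin();
        // Here X (external Heapsort, Mergesort, ...) would sort [mid, hi)
        // while using [lo, mid) as buffer space; std::sort is a placeholder
        // that keeps the sketch self-contained.
        std::sort(a.begin() + mid, a.begin() + hi);
        quickXsort(a, lo, mid);                           // only one recursive call
    }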
... Then, QuickXsort with a median-of-√n pivot selection also needs at most n log n + cn + o(n) comparisons on average [9]. Sample sizes of approximately √n are likely to be optimal [7,22]. ...
... state-of-the-art library implementations in C++ and Java on basic data types is surprisingly high. For example, all Heapsort variants we are aware of fail this test; we checked refined implementations of Binary Heapsort [12,28], Bottom-Up Heapsort [26], MDR Heapsort [25], QuickHeapsort [7], and Weak-Heapsort [8]. Some of these algorithms even use extra space. ...
Preprint
Full-text available
We consider the fundamental problem of internally sorting a sequence of $n$ elements. In its best theoretical setting QuickMergesort, a combination of Quicksort and Mergesort with a median-of-$\sqrt{n}$ pivot selection, requires at most $n \log n - 1.3999n + o(n)$ element comparisons on average. The question addressed in this paper is how to make this algorithm practical. As refined pivot selection usually adds much overhead, we show that the median-of-3 pivot selection of QuickMergesort leads to at most $n \log n - 0.75n + o(n)$ element comparisons on average, while running fast on elementary data. The experiments show that QuickMergesort outperforms state-of-the-art library implementations, including C++'s Introsort and Java's Dual-Pivot Quicksort. Further trade-offs between a low running time and a low number of comparisons are studied. Moreover, we describe a practically efficient version with $n \log n + O(n)$ comparisons in the worst case.
... The idea to combine Quicksort and a secondary sorting method was suggested by Cantone and Cincotti [2,1]. They study Heapsort with an output buffer (external Heapsort) and combine it with Quicksort to QuickHeapsort. They analyze the average costs for external Heapsort in isolation and use a differencing trick for dealing with the QuickXSort recurrence; however, this technique is hard to generalize to median-of-k pivots. ...
... Diekert and Weiß [3] suggest optimizations for QuickHeapsort (some of which need extra space again), and they give better upper bounds for QuickHeapsort with random pivots and median-of-3. Their results are still not tight, since they upper bound the total cost of all Heapsort calls together (using ad hoc arguments on the form of the costs for one Heapsort round), without taking into account the actual subproblem sizes on which Heapsort is used. ...
... In this case the behavior coincides with the simpler strategy to always sort the smaller segment by Mergesort, since the segments are of almost equal size with high probability. (Footnote 3: Not having to store the heap in a consecutive prefix of the array allows one to save comparisons over classic in-place Heapsort: after a delete-max operation, we can fill the gap at the root of the heap by promoting the larger child and recursively moving the gap down the heap. We then fill the final gap with a −∞ sentinel value.) ...
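A minimal C++ sketch of that footnote's gap-promotion step (our formulation; INT_MIN plays the role of the −∞ sentinel, and the extracted maximum is assumed to go to an external output buffer):

    #include <climits>
    #include <cstddef>
    #include <vector>

    // Delete-max for external Heapsort (assumes n >= 1): instead of moving
    // the last heap element to the root (two comparisons per level to sift
    // it down), promote the larger child into the gap all the way to a leaf,
    // then plug the final gap with a sentinel. Costs one comparison per level.
    int deleteMax(std::vector<int>& heap, std::size_t n) {
        const int max = heap[0];
        std::size_t gap = 0;
        while (2 * gap + 1 < n) {
            std::size_t child = 2 * gap + 1;
            if (child + 1 < n && heap[child + 1] > heap[child])
                ++child;                    // larger child moves up
            heap[gap] = heap[child];
            gap = child;
        }
        heap[gap] = INT_MIN;                // the -infinity sentinel
        return max;
    }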
Article
Full-text available
QuickXSort is a strategy to combine Quicksort with another sorting method X, so that the result has essentially the same comparison cost as X in isolation, but sorts in place even when X requires a linear-size buffer. We solve the recurrence for QuickXSort precisely up to the linear term, including the optimization to choose pivots from a sample of k elements. This allows one to immediately obtain overall average costs using only the average costs of sorting method X (as if run in isolation). We thereby extend and greatly simplify the analysis of QuickHeapsort and QuickMergesort with practically efficient pivot selection, and give the first tight upper bounds including the linear term for such methods.
... UltimateHeapsort is inferior to QuickHeapsort in terms of the average-case number of comparisons, although, unlike QuickHeapsort, it allows an n log n + O(n) bound for the worst-case number of comparisons. Diekert and Weiß [3] analyzed QuickHeapsort more thoroughly and described some improvements requiring less than n log n − 0.99n + o(n) comparisons on average. Edelkamp and Stiegeler [5] applied the idea of QuickXsort to WeakHeapsort (which was first described by Dutton [4]), introducing QuickWeakHeapsort. ...
... For the rest of the paper, we assume that the pivot is selected as the median of approximately √n randomly chosen elements. Sample sizes of approximately √n are likely to be optimal, as the results in [3,13] suggest. ...
... First, we compare the different algorithms we use as base cases, i.e., MergeInsertion, its improved variant, and Insertionsort. The results can be seen in Fig. 4. Depending on the size of the arrays, the displayed numbers are averages over 10 to 10000 runs. The data elements we sorted were randomly chosen 32-bit integers. ...
Conference Paper
Full-text available
In this paper we generalize the idea of QuickHeapsort leading to the notion of QuickXsort. Given some external sorting algorithm X, QuickXsort yields an internal sorting algorithm if X satisfies certain natural conditions. We show that up to o(n) terms the average number of comparisons incurred by QuickXsort is equal to the average number of comparisons of X. We also describe a new variant of WeakHeapsort. With QuickWeakHeapsort and QuickMergesort we present two examples for the QuickXsort construction. Both are efficient algorithms that perform approximately n log n − 1.26n + o(n) comparisons on average. Moreover, we show that this bound also holds for a slight modification which guarantees an \(n \log n + \mathcal{O}(n)\) bound for the worst case number of comparisons. Finally, we describe an implementation of MergeInsertion and analyze its average case behavior. Taking MergeInsertion as a base case for QuickMergesort, we establish an efficient internal sorting algorithm calling for at most n log n − 1.3999n + o(n) comparisons on average. QuickMergesort with constant size base cases shows the best performance on practical inputs and is competitive with STL-Introsort.
... UltimateHeapsort is inferior to QuickHeapsort in terms of average case running time, although, unlike QuickHeapsort, it allows an n log n + O(n) bound for the worst case number of comparisons. Diekert and Weiß [2] analyzed QuickHeapsort more thoroughly and showed that it needs less than n log n − 0.99n + o(n) comparisons in the average case when implemented with approximately √n elements as sample for pivot selection and some other improvements. Edelkamp and Stiegeler [4] applied the idea of QuickXsort to WeakHeapsort (which was first described by Dutton [3,5]). [Residue of a comparison table from the source; its legend reads: # = in this paper, MI = MergeInsertion, - = not analyzed, * = for n = 2^k, w = computer word width in bits; we assume log n ∈ O(n/w).] ...
... For the rest of the paper, we assume that the pivot is selected as the median of approximately √ n randomly chosen elements. Sample sizes of approximately √ n are likely to be optimal as the results in [2,11] suggest. ...
... For the normal QuickMergesort we used base cases of size ≤ 9. We also implemented QuickMergesort with median of three for pivot selection, which turns out to be practically efficient, although it needs slightly more comparisons than QuickMergesort with median of √n. However, since the larger half of the partitioned array can also be sorted with Mergesort, the difference from the median-of-√n version is not as big as in QuickHeapsort [2]. As suggested by the theory, we see that our improved QuickMergesort implementation with growing-size base cases sorted by MergeInsertion yields a result for the constant in the linear term that is in the range [−1.41, −1.40], close to the lower bound. ...
Preprint
Full-text available
In this paper we generalize the idea of QuickHeapsort leading to the notion of QuickXsort. Given some external sorting algorithm X, QuickXsort yields an internal sorting algorithm if X satisfies certain natural conditions. With QuickWeakHeapsort and QuickMergesort we present two examples for the QuickXsort construction. Both are efficient algorithms that incur approximately n log n - 1.26n + o(n) comparisons on average. A worst case of n log n + O(n) comparisons can be achieved without significantly affecting the average case. Furthermore, we describe an implementation of MergeInsertion for small n. Taking MergeInsertion as a base case for QuickMergesort, we establish a worst-case efficient sorting algorithm calling for n log n - 1.3999n + o(n) comparisons on average. QuickMergesort with constant size base cases shows the best performance on practical inputs: when sorting integers it is only 15% slower than STL-Introsort.
... Recently, in [16], the idea of QuickHeapsort [2,5] was generalized to the notion of QuickXsort: Given some black-box sorting algorithm X, QuickXsort can be used to speed X up provided that X satisfies certain natural conditions. QuickWeakHeapsort and QuickMergesort were described as two examples of this construction. ...
Conference Paper
Full-text available
A weak heap is a variant of a binary heap where, for each node, the heap ordering is enforced only for one of its two children. In 1993, Dutton showed that this data structure yields a simple worst-case-efficient sorting algorithm. In this paper we review the refinements proposed to the basic data structure that improve the efficiency even further. Ultimately, minimum and insert operations are supported in O(1) worst-case time and the extract-min operation in O(lg n) worst-case time involving at most lg n + O(1) element comparisons. In addition, we look at several applications of weak heaps. This encompasses the creation of a sorting index and the use of a weak heap as a tournament tree leading to a sorting algorithm that is close to optimal in terms of the number of element comparisons performed. By supporting the insert operation in O(1) amortized time, the weak-heap data structure becomes a valuable tool in adaptive sorting, leading to an algorithm that is constant-factor optimal with respect to several measures of disorder. Also, a weak heap can be used as an intermediate step in an efficient construction of binary heaps. For graph search and network optimization, a weak-heap variant, which allows some of the nodes to violate the weak-heap ordering, is known to be provably better than a Fibonacci heap.
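To make the relaxed ordering concrete: in the standard array encoding a weak heap stores, besides the elements, one "reverse" bit r[i] per node, so that the children of node i sit at positions 2i + r[i] and 2i + 1 - r[i], and the heap order is enforced only between a node and its so-called distinguished ancestor (the first ancestor whose right subtree contains it). A hedged C++ sketch of a checker for this invariant, reflecting our reading of the definition (names are ours):

    #include <cstddef>
    #include <vector>

    // Distinguished ancestor of node j (j >= 1): climb while j is a left
    // child of its parent p, i.e. while j == 2p + r[p]; return the parent
    // at which j hangs in the right subtree.
    std::size_t dAncestor(std::size_t j, const std::vector<int>& r) {
        while ((j & 1) == static_cast<std::size_t>(r[j >> 1]))
            j >>= 1;
        return j >> 1;
    }

    // Min-oriented weak-heap invariant: no element is smaller than the one
    // stored at its distinguished ancestor.
    bool isWeakHeap(const std::vector<int>& a, const std::vector<int>& r) {
        for (std::size_t j = 1; j < a.size(); ++j)
            if (a[dAncestor(j, r)] > a[j]) return false;
        return true;
    }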