Conference Paper

Optimistic Sorting and Information Theoretic Complexity.

Authors:
Peter McIlroy

Abstract

Entropy considerations provide a natural estimate of the number of comparisons to sort incompletely shuffled data, which subsumes most previous measures of configurational complexity. Simple modifications to insertion sort and merge sort improve their performance on such data. The modified merge sort proves efficient both in theory and in practice.
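The abstract does not spell out the estimate, but a common way to instantiate the idea is to take the entropy of the ascending-run-length distribution: if the input splits into runs of lengths l_1, ..., l_k, merging them optimally costs roughly n·H comparisons, where H = -Σ (l_i/n) lg(l_i/n). A minimal sketch of that estimate (the function name and the exact measure are illustrative, not necessarily McIlroy's):

```python
import math

def run_length_entropy_estimate(xs):
    """Estimate the comparisons needed to sort partially ordered data from the
    entropy of its ascending-run lengths (an illustration of the entropy idea,
    not McIlroy's exact measure)."""
    n = len(xs)
    if n <= 1:
        return 0.0
    # Split into maximal non-decreasing runs.
    runs, start = [], 0
    for i in range(1, n):
        if xs[i] < xs[i - 1]:
            runs.append(i - start)
            start = i
    runs.append(n - start)
    # Entropy (in bits) of the run-length distribution; n * H is roughly the
    # number of comparisons an optimal merge of these runs needs.
    h = -sum((l / n) * math.log2(l / n) for l in runs)
    return n * h

print(run_length_entropy_estimate([1, 2, 3, 7, 8, 4, 5, 6]))  # mildly shuffled
print(run_length_entropy_estimate(list(range(100))))          # sorted input -> 0.0
```

For an already sorted input there is a single run, H = 0, and the estimate correctly predicts that no comparisons are needed beyond the initial scan.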


... A concurrent line of work in adaptive sorting initiated by McIlroy concerns itself with the information-theoretic properties of a measure of disorder rather than its strict comparison with other measures [32]. ...
... A Note on Independent Work. McIlroy's paper on information-theoretic properties of sorting algorithms [32] proposes a sequential sorting algorithm called mergesort with exponential search, which performs the same number of comparisons as our adaptive mergesort. However, since his algorithm uses a merge step which takes O(n) time, the overall cost of his algorithm is still O(n lg n). ...
... McIlroy's Properties. Our work is also influenced by McIlroy's perspective on the desirable properties of an adaptive sorting algorithm. Here we describe the six properties he proposes in his 1993 paper [32]. For a permutation π, let C(π) denote the number of comparisons required to sort π. ...
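The "exponential search" named in these excerpts is the familiar doubling-then-binary-search step (also called galloping), which finds an insertion point d positions away using O(lg d) comparisons; it is what lets a merge of pre-existing runs stay close to the information-theoretic bound. A generic sketch of the search step, not McIlroy's code:

```python
from bisect import bisect_left

def exponential_search(a, x, lo=0):
    """Leftmost insertion point for x in the sorted list a[lo:], found with
    O(lg d) comparisons when the answer lies d positions past lo.
    Generic illustration of exponential (galloping) search."""
    if lo >= len(a) or a[lo] >= x:
        return lo
    step, prev = 1, lo
    # Double the step until we overshoot the insertion point.
    while lo + step < len(a) and a[lo + step] < x:
        prev = lo + step
        step *= 2
    hi = min(lo + step, len(a))
    # Finish with a binary search inside the bracketed window.
    return bisect_left(a, x, prev + 1, hi)

print(exponential_search([1, 3, 5, 7, 9], 6))   # 3
print(exponential_search([1, 3, 5, 7, 9], 10))  # 5
```

A merge routine can call this repeatedly to copy whole blocks of one run at a time instead of advancing one element per comparison.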
Preprint
Full-text available
We study the connections between sorting and the binary search tree model, with an aim towards showing that the fields are connected more deeply than is currently known. The main vehicle of our study is the log-interleave bound, a measure of the information-theoretic complexity of a permutation $\pi$. When viewed through the lens of adaptive sorting -- the study of lists which are nearly sorted according to some measure of disorder -- the log-interleave bound is comparable to the most powerful known measure of disorder. Many of these measures of disorder are themselves virtually identical to well-known upper bounds in the BST model, such as the working set bound or the dynamic finger bound, suggesting a connection between BSTs and sorting. We present three results about the log-interleave bound which solidify the aforementioned connections. The first is a proof that the log-interleave bound is always within a $\lg \lg n$ multiplicative factor of a known lower bound in the BST model, meaning that an online BST algorithm matching the log-interleave bound would perform within the same bounds as the state-of-the-art $\lg \lg n$-competitive BST. The second result is an offline algorithm in the BST model which uses $O(\text{LIB}(\pi))$ accesses to search for any permutation $\pi$. The technique used to design this algorithm also serves as a general way to show whether a sorting algorithm can be transformed into an offline BST algorithm. The final result is a mergesort algorithm which performs work within the log-interleave bound of a permutation $\pi$. This mergesort also happens to be highly parallel, adding to a line of work in parallel BST operations.
... The main drawback of this algorithm is that it has no adaptivity and can take quadratic time on some inputs [19]. Both Splaysort [21] and McIlroy's mergesort [20] seem adaptive with respect to almost all accepted measures of presortedness. However, they are not practical. ...
... Let C(X) denote the number of comparisons needed to sort the sequence X. According to the theory of optimal adaptivity [20][21][22], a sorting algorithm is Rem-optimal if ...
... where m = Rem(X). Based on McIlroy's analysis, his mergesort is Rem-optimal [20]. Psort [5], which is a library sort function based on PEsort, has something of a Rem-optimal flavor, while its improved version [6] is close to Rem-optimal. ...
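For reference, Rem(X) is conventionally the minimum number of elements whose removal leaves X sorted, i.e. n minus the length of a longest non-decreasing subsequence. Assuming that definition, it can be computed with a patience-style scan (an illustrative helper, not code from the cited papers):

```python
from bisect import bisect_right

def rem(xs):
    """Rem(X): minimum number of elements to delete so the rest is sorted,
    i.e. len(xs) minus the length of a longest non-decreasing subsequence.
    Assumes the usual definition from the adaptive-sorting literature."""
    tails = []  # tails[k] = smallest possible tail of a non-decreasing subsequence of length k+1
    for x in xs:
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(xs) - len(tails)

print(rem([1, 2, 3, 4, 5]))     # 0: already sorted
print(rem([1, 9, 2, 3, 4, 5]))  # 1: removing the 9 leaves a sorted sequence
```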
Article
The purpose of the paper is twofold. First, we want to search for a more efficient sample sort. Secondly, by analyzing a variant of Samplesort, we want to settle an open problem: the average-case analysis of Proportion Extend Sort (PEsort for short). An efficient variant of Samplesort given in the paper is called full sample sort. This algorithm is simple. It has a shorter object code and is almost as fast as PEsort. Theoretically, we show that full sample sort with a linear sampling size performs at most n log n + O(n) comparisons and O(n log n) exchanges on the average, though more comparisons in the worst case. This is an improvement on the original Samplesort by Frazer and McKellar, which requires more comparisons on the average and O(n^2) comparisons in the worst case. On the other hand, we use the same analyzing approach to show that PEsort, with any p > 0, also performs at most n log n + O(n) comparisons on the average. Notice that Cole and Kandathil analyzed only the case p = 1 of PEsort; for any p > 0, they did not. Namely, their approach is suitable only for a special case such as p = 1, while our approach is suitable for the generalized case.
... Quicksort should not be favored by giving it two places. Thus, we have five different sorting methods as participants in this race: insertion sort, shellsort [Knuth, D.E. 1997], mergesort [McIlroy, P. 1993], heapsort [Williams, J. W. J. 1964], and quicksort [Hoare, C.A.R. 1961]. ...
... This qsort is interesting in that it shuns quicksort in favor of mergesort, due to the popularity of timsort [Peters, T. 2002]. Tim Peters created timsort based on Peter McIlroy's idea of natural mergesort [McIlroy, P. 1993], which consists of two processes: in process 1, sorted subarrays (called runs) are identified from the input; in process 2, two runs are merged into one run, until only one run remains. The two processes can be interleaved. ...
... As the first goal of this article, we will try to identify which techniques are more effective and which are less effective through extensive experiments. From now on, by mergesort, we mean natural mergesort [McIlroy, P. 1993]. ...
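The two processes described in these excerpts are easy to see in a stripped-down natural mergesort. The sketch below identifies maximal non-decreasing runs and then merges them pairwise; it deliberately omits timsort's minimum run length, galloping merge, and run-stack invariant:

```python
def natural_mergesort(xs):
    """Process 1: identify maximal non-decreasing runs.
    Process 2: repeatedly merge pairs of runs until one remains.
    Bare-bones illustration of natural mergesort, not timsort itself."""
    if not xs:
        return []
    runs, start = [], 0
    for i in range(1, len(xs)):
        if xs[i] < xs[i - 1]:
            runs.append(xs[start:i])
            start = i
    runs.append(xs[start:])

    def merge(a, b):
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if b[j] < a[i]:          # strict comparison keeps the sort stable
                out.append(b[j]); j += 1
            else:
                out.append(a[i]); i += 1
        return out + a[i:] + b[j:]

    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]

print(natural_mergesort([3, 4, 5, 1, 2, 8, 9, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```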
Article
Sorting is one of the oldest computing problems and is still very important in the age of big data. Various algorithms and implementation techniques have been proposed. In this study, we focus on comparison-based, internal sorting algorithms. We created 12 data types of various sizes for experiments and extensively tested various implementations in a single setting. Using some effective techniques, we discovered that quicksort is adaptive to nearly sorted inputs and is still the best overall sorting algorithm. We also identified which techniques are effective in timsort, one of the most popular and efficient sorting methods based on natural mergesort, and created our own version of mergesort, which runs faster than timsort on nearly sorted instances. Our implementations of quicksort and mergesort differ from the implementations reported in textbooks and research articles, and are faster than any version of the C library qsort functions, not only for randomly generated data but also for various types of nearly sorted data. This experiment can help the user choose the best sorting algorithm for the hard sorting job at hand. This work provides a platform for anyone to test their own sorting algorithm against the best in the field.
... There is extensive literature on adaptive sorts: e.g., for theoretical foundations see [14,7,16,15] and for more applied investigations see [5,9,4,21,18]. The present paper will consider only stable, natural merge sorts. ...
... Note that in this case, if ℓ ≥ 2, |Z| < |X| since (B-1) and (B-2) imply that |X| ≥ α|Y| ≥ α(α − 1)|Z| and α^2 − α > 1 since ϕ < α. This is the reason why Algorithm 6 does not check for the condition |X| < |Z| on lines 14-16 (unlike what is done on line 7). ...
Preprint
Full-text available
We introduce new stable, natural merge sort algorithms, called 2-merge sort and $\alpha$-merge sort. We prove upper and lower bounds for several merge sort algorithms, including Timsort, Shiver's sort, $\alpha$-stack sorts, and our new 2-merge and $\alpha$-merge sorts. The upper and lower bounds have the forms $c \cdot n \log m$ and $c \cdot n \log n$ for inputs of length $n$ comprising $m$ runs. For Timsort, we prove a lower bound of $ (1.5 - o(1)) n \log n $. For 2-merge sort, we prove optimal upper and lower bounds of approximately $ (1.089 \pm o(1))n \log m $. We prove similar asymptotically matching upper and lower bounds for $\alpha$-merge sort, when $\varphi < \alpha < 2$, where $\varphi$ is the golden ratio. These merge strategies can be used for any stable merge sort, not just natural merge sorts. The new 2-merge and $\alpha$-merge sorts have better worst-case merge cost upper bounds and are slightly simpler to implement than the widely-used Timsort; they also perform better in experiments.
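As a rough illustration of how such stack-based run-merging policies behave, the sketch below simulates a simple α-stack-style rule on a list of run lengths: after pushing a run, merge the top two runs while the lower one is at most α times the upper one, and report the total merge cost. This is a simplified stand-in for illustration, not the 2-merge or α-merge rule analysed in the paper:

```python
def alpha_stack_merge_cost(run_lengths, alpha=2.0):
    """Simulate an alpha-stack-style merge policy on run lengths and return
    the total merge cost (sum of the sizes of all merged runs).
    Simplified illustration; the 2-merge and alpha-merge rules are more refined."""
    stack, cost = [], 0
    for r in run_lengths:
        stack.append(r)
        # Restore the invariant: each run is more than alpha times the one above it.
        while len(stack) >= 2 and stack[-2] <= alpha * stack[-1]:
            top = stack.pop()
            stack[-1] += top
            cost += stack[-1]
    # Final clean-up: merge whatever runs remain on the stack.
    while len(stack) >= 2:
        top = stack.pop()
        stack[-1] += top
        cost += stack[-1]
    return cost

print(alpha_stack_merge_cost([8, 4, 2, 2, 16]))  # total merge cost for these runs
```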
... As discussed earlier, for the network model described in Section 3, the query-processing load in the network can be estimated by d^τ when the average search distance is τ. Hence, using the results in Section 5, we obtain Theorem 3: the query-processing load for a flooding search in the clustered peer-to-peer network defined in Section 3 is given by equation (18) for searches initiated in the high-density cluster ...
... Some other places where the entropy expression shows up are: (i) the average number of steps to find a key (i.e. the average depth to which one has to go to find a key in the search tree) in the optimal binary balanced search tree [17], and (ii) as the lower bound on the minimum number of comparisons needed to sort a set of keys (where multiple items in the set may have the same value) [18] where the entropy is on the frequency of occurrence of different keys in the entire set. To the best of our knowledge, no precise discussion of the relevance of entropy to these problems exists in the literature. ...
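The multiset lower bound in item (ii) above is easy to evaluate: if key k occurs n_k times out of n, any comparison sort needs roughly n·H comparisons, where H = -Σ (n_k/n) lg(n_k/n). A small sketch (illustrative only, not code from either paper):

```python
import math
from collections import Counter

def multiset_sort_lower_bound(keys):
    """Approximate information-theoretic lower bound (in comparisons) for
    sorting a multiset: n * H of the key-frequency distribution.
    Illustrative; ignores lower-order terms."""
    n = len(keys)
    counts = Counter(keys)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * h

print(multiset_sort_lower_bound(list("mississippi")))
```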
Article
Full-text available
This paper derives the optimal search time and the optimal search cost that can be achieved in unstructured peer-to-peer networks when the demand pattern exhibits clustering (i.e. file popularities vary across the set of nodes in the network). Clustering in file popularity patterns is evident from measurements on deployed peer-to-peer file sharing networks. In this paper, we provide mechanisms for modeling clustering in file popularity distributions and the consequent non-uniform distribution of file replicas. We derive relations that show the effect of the number of replicas of a file on the search time and on the search cost for a search for that file for the clustered demands case in such networks for both random walk and flooding search mechanisms. The derived relations are used to obtain the optimal search performance for the case of flooding search mechanisms. The potential performance benefit that clustering in demand patterns affords is captured by our results. Interestingly, the performance gains are shown to be independent of whether the search network topology reflects the clustering in file popularity (the optimal file replica distribution to obtain these performance gains, however, does depend on the search network topology).
... In terms of Rem-adaptivity (its definition will appear below), it has good behavior. In some cases, it can even achieve the adaptive performance of McIlroy's Mergesort with Exponential Search (MSES for short) [13]. Like PEsort, it can be guaranteed theoretically to require O(n log n) comparisons in the worst case, and n log n + O(n) comparisons on average. ...
... For small values of log Rem(X), Adp SymPsort achieves the same Rem-adaptivity as Merge Sort with Exponential Search (MSES for short) due to McIlroy [13]. For large values of log Rem(X), the situation is a little different. ...
Article
In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement is called Symmetry Partition Sort, which has essentially the same principle as Proportion Extend Sort. The main difference between them is that the new algorithm always places already partially sorted inputs (used as a basis for the proportional extension) at both ends when entering the partition routine. This is advantageous for speeding up the partition routine. The library function based on the new algorithm is more attractive than Psort, a library function introduced in 2004. Its implementation mechanism is simple. The source code is clearer. The speed is faster, with an O(n log n) performance guarantee. Both the robustness and adaptivity are better. As a library function, it is competitive.
... There is extensive literature on adaptive sorts: e.g., for theoretical foundations see [21,9,26,23] and for more applied investigations see [6,12,5,31,27]. The present paper will consider only stable, natural merge sorts. ...
... and java.util.Collection.sort() methods is an ideal candidate: it is based on a complex combination of merge sort and insertion sort [15,19]. It had a bug history (see www.bugs.java.com/view_bug.do?bug_id=8011944), but was reported as fixed as of Java 8. We decided to verify the actual implementation with only two minor modifications: we stripped the code of generics and we modified one execution path that is irrelevant to the sorting result (see Sect. 7 for details). ...
Article
Full-text available
TimSort is the main sorting algorithm provided by the Java standard library and many other programming frameworks. Our original goal was functional verification of TimSort with mechanical proofs. However, during our verification attempt we discovered a bug which causes the implementation to crash by an uncaught exception. In this paper, we identify conditions under which the bug occurs, and from this we derive a bug-free version that does not compromise performance. We formally specify the new version and verify termination and the absence of exceptions including the bug. This verification is carried out mechanically with KeY, a state-of-the-art interactive verification tool for Java. We provide a detailed description and analysis of the proofs. The complexity of the proofs required extensions and new capabilities in KeY, including symbolic state merging.
... The standard Java sort algorithm [49] for non-primitive types is used for sorting the array. The sort algorithm, called timsort, is a stable, adaptive, iterative mergesort whose implementation is adapted from Tim Peters's list sort for Python [50], which uses techniques from [51]. Writing output: writing the bucket data to a single output file in the correct order is the last step. This is the most sequential part of the application. ...
Article
Full-text available
Sorting algorithms are among the most commonly used algorithms in computer science and modern software. Having an efficient implementation of sorting is necessary for a wide spectrum of scientific applications. This paper describes the sorting algorithm written using the partitioned global address space (PGAS) model, implemented using the Parallel Computing in Java (PCJ) library. The iterative implementation description is used to outline the possible performance issues and provide means to resolve them. The key idea of the implementation is to have an efficient building block that can be easily integrated into many application codes. This paper also presents the performance comparison of the PCJ implementation with the MapReduce approach, using the Apache Hadoop TeraSort implementation. The comparison serves to show that the performance of the implementation is good enough, as the PCJ implementation shows similar efficiency to the Hadoop implementation.
... Some other places where the entropy expression shows up are: (i) the average number of steps to find a key (i.e. the average depth to which one has to go to find a key in the search tree) in the optimal binary balanced search tree [17], and (ii) as the lower bound on the minimum number of comparisons needed to sort a set of keys (where multiple items in the set may have the same value) [18] where the entropy is on the frequency of occurrence of different keys in the entire set. To the best of our knowledge, no precise discussion of the relevance of entropy to these problems exists in the literature. ...
Conference Paper
This paper derives the optimal search time and the optimal search cost that can be achieved in unstructured peer-to-peer networks when the demand pattern exhibits clustering (i.e. file popularities vary from region to region in the network). Previous work in this area had assumed a uniform distribution of file replicas throughout the network with an implicit or explicit assumption of uniform file popularity distribution whereas in reality, there is clear evidence of clustering in file popularity patterns. The potential performance benefit that the clustering in demand patterns affords is captured by our results. Interestingly, the performance gains are shown to be independent of whether the search network topology reflects the clustering in file popularity. We also provide the relation between the query-processing load and the number of replicas of each file for the clustered demands case showing that flooding searches may have lower query-processing load than random walk searches in the clustered demands case.
... In fact, one can efficiently sort all distinct states in C. The standard Python implementation is timsort [21]. It was developed by Tim Peters based on McIlroy's techniques in [22]. In the worst case, its space and time complexities are O(N) and O(N log N), respectively. ...
Preprint
Full-text available
We put forward new general criteria to design successor rules that generate binary de Bruijn sequences. Prior fast algorithms based on successor rules in the literature are then shown to be special instances. We implement the criteria to join the cycles generated by a number of simple feedback shift registers (FSRs). These include the pure cycling register (PCR) and the pure summing register (PSR). For PCR, we define a preorder on its cycles, based on their weights. For PSR, we define two orders, namely a necklace order on its cycles and a mixed order on the cycles based on both the weight and the necklace orders. Using the new orders, we propose numerous classes of successor rules that can efficiently generate binary de Bruijn sequences. Producing the next bit takes no more than $O(n)$ memory and between $O(n)$ and $O(n \, \log n)$ time. We implemented computational routines to confirm the claims.
... In fact, among a dozen different Unix libraries we found no qsort that could not easily be driven to quadratic behavior; all were derived from the Seventh Edition or from the 1983 Berkeley function. The Seventh Edition qsort and several others had yet another problem. ...
Article
SUMMARY We recount the history of a new qsort function for a C library. Our function is clearer, faster and more robust than existing sorts. It chooses partitioning elements by a new sampling scheme; it partitions by a novel solution to Dijkstra's Dutch National Flag problem; and it swaps efficiently. Its behavior was assessed with timing and debugging testbeds, and with a program to certify performance. The design techniques apply in domains beyond sorting.
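For reference, the Dutch National Flag idea the summary mentions is a single-pass three-way split around a pivot. Bentley and McIlroy's actual qsort uses a refined "split-end" variant; the sketch below is a minimal textbook version, not the library code:

```python
def three_way_partition(a, lo, hi, pivot):
    """Partition a[lo:hi] in place into < pivot, == pivot, > pivot regions in
    one pass (Dijkstra's Dutch National Flag scheme).  Returns (lt, gt): the
    equal region occupies a[lt:gt].  Illustration only, not BM qsort."""
    lt, i, gt = lo, lo, hi
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1; i += 1
        elif a[i] > pivot:
            gt -= 1
            a[gt], a[i] = a[i], a[gt]
        else:
            i += 1
    return lt, gt

xs = [5, 1, 5, 7, 2, 5, 9, 0]
print(three_way_partition(xs, 0, len(xs), 5), xs)  # (3, 6) [1, 0, 2, 5, 5, 5, 9, 7]
```

Grouping all keys equal to the pivot in one pass is what makes the function robust on inputs with many duplicates.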
... This complexity is much bigger than that of the baseline approach and corresponds to a long scheduling time. The complexity of SNO and RNO depends on the sorting algorithm, which can be O(n log n) using TimSort [34], [35]. This complexity is much smaller than that of Adapted A-Star and slightly bigger than that of the baseline, while it corresponds to a small total cost, as presented in Section V. ...
Article
Full-text available
Cooperative Intelligent Transport Systems (C-ITS) is a promising technology to make transportation safer and more efficient. Ridesharing for long distances is becoming a key means of transportation in C-ITS. In this paper, we focus on private long-distance ridesharing, which reduces the total cost of vehicle utilization for long-distance journeys. In this context, we investigate the journey scheduling problem with shared vehicles to reduce the total cost of vehicle utilization. Most of the existing works directly schedule journeys to vehicles with long scheduling times and only consider the cost of driving travellers instead of the total cost. In contrast, to reduce the total cost and scheduling time, we propose a comprehensive cost model and a two-phase journey scheduling approach, which includes path generation and path scheduling. On this basis, we propose two path generation methods, a simple near-optimal method and a reset near-optimal method, as well as a greedy-based path scheduling method. Finally, we present an experimental evaluation with different path generation and path scheduling methods on synthetic data generated from real-world data. The results reveal that the proposed scheduling approach significantly outperforms baseline methods in terms of total cost (up to 69.8%) and scheduling time (up to 84.0%), and that the scheduling time is reasonable (up to 0.16 s). The results also show that our approach has higher efficiency (up to 141.7%) than baseline methods.
... and java.util.Collection.sort() method is an ideal candidate: it is based on a complex combination of merge sort and insertion sort [12, 15]. It had a bug history, but was reported as fixed as of Java version 8. We decided to verify the implementation, stripped of generics, but otherwise completely unchanged and fully executable. ...
Conference Paper
Full-text available
We investigate the correctness of TimSort, which is the main sorting algorithm provided by the Java standard library. The goal is functional verification with mechanical proofs. During our verification attempt we discovered a bug which causes the implementation to crash. We characterize the conditions under which the bug occurs, and from this we derive a bug-free version that does not compromise the performance. We formally specify the new version and mechanically verify the absence of this bug with KeY, a state-of-the-art verification tool for Java.
Chapter
Group signatures are considered one of the most prominent cryptographic primitives for ensuring privacy. In essence, group signatures ensure the authenticity of messages while the author of the message remains anonymous. In this study, we propose a dynamic post-quantum group signature (GS) extending the static G-Merkle group signature (PQCRYPTO 2018). In particular, our dynamic G-Merkle (DGM) allows new users to join the group at any time. Similar to the G-Merkle scheme, our DGM only involves symmetric primitives and makes use of a One-Time Signature scheme (OTS). Each member of the group receives a certain amount of OTS key pairs and can ask the Manager $\mathcal{M}$ for more if needed. Our DGM also provides an innovative way of signing revocation by employing Symmetric Puncturable Encryption (SPE), which recently appeared in (ACM CCS 2018). DGM provides a significantly smaller signature size than other GSs based on symmetric primitives and also reduces the influence of the number of group members on the signature size and on the limitations of the application of G-Merkle.
Thesis
Full-text available
This thesis deals with the design of algorithms in computational geometry whose complexity depends on the output size, the so-called output-sensitive algorithms. We first describe the main paradigms that allow algorithms to be output-sensitive. Then, we give a near-optimal output-sensitive algorithm to compute the convex hull of general planar objects such that the output size of the convex hull of any pair of objects is bounded. We extend the results to the case of envelopes and the partial decomposition of convex and maximal layers. Finally, we consider the pierceability problem for families of convex objects, which has been proven NP-hard. We first study the case of isothetic boxes and give an output-sensitive heuristic that is precision-sensitive. Then, we consider the combinatorial properties of convex objects from the pierceability point of view. We obtain a collection of algorithms for various classes of objects, some of them implying Helly-type theorems.
Article
Full-text available
This thesis deals with the design of algorithms in computational geometry whose complexity depends on the output size, the so-called output-sensitive algorithms. We first describe the main paradigms that allow algorithms to be output-sensitive. Then, we give a near-optimal output-sensitive algorithm to compute the convex hull of general planar objects such that the output size of the convex hull of any pair of objects is bounded. We extend the results to the case of envelopes and the partial decomposition of convex and maximal layers. Finally, we consider the pierceability problem for families of convex objects, which has been proven NP-hard. We first study the case of isothetic boxes and give an output-sensitive heuristic that is precision-sensitive. Then, we consider the combinatorial properties of convex objects from the pierceability point of view. We obtain a collection of algorithms for various classes of objects, some of them implying Helly-type theorems.
Article
Adaptivity in sorting algorithms is sometimes gained at the expense of practicality. We give experimental results showing that Splaysort — sorting by repeated insertion into a Splay tree — is a surprisingly efficient method for in-memory sorting. Splaysort appears to be adaptive with respect to all accepted measures of presortedness, and it outperforms Quicksort for sequences with modest amounts of existing order. Although Splaysort has a linear space overhead, there are many applications for which this is reasonable. In these situations Splaysort is an attractive alternative to traditional comparison-based sorting algorithms such as Heapsort, Mergesort, and Quicksort.
Article
Up to now, the most efficient sort function for a C library was a variant of Hoare's Quicksort (qsort, for short), which was proposed by Bentley and McIlroy in the early 1990s. Here we call this library function BM qsort. This paper introduces a library function which is based on Chen's Proportion Extend Sort (psort). This work is inspired by the fact that psort is generally faster than qsort, and in the worst case, qsort requires O(n^2) comparisons, while psort requires O(n log n). In our library function, many tricks used in BM qsort have been adopted, such as sorting a small array by an insertion sort and the handling of equal elements. Some new tricks are also proposed, however, such as the adaptive partitioning scheme. To assess the behavior of our function more effectively, the test cases are enhanced. The empirical results show that our function is robust and faster than BM qsort. On particular classes of inputs, such as 'already sorted', it is linear time.
Conference Paper
Studies have indicated that sorting comprises about 20% of all computing on mainframes. Perhaps the largest use of sorting in computing (particularly business computing) is the sort required for large database operations (e.g. required by join operations). In these applications the keys are many words long. Since our sorting algorithm hashes the key (rather than comparing entire keys as in comparison sorts such as quicksort), our algorithm is even more advantageous in the case of large key lengths; in that case the cutoff is much lower. In the case that the compression ratio is high, which can be determined after building the dictionary, we simply fall back to a conventional sorting algorithm, e.g. quicksort. The same techniques can be extended to other problems (e.g. computational geometry problems) to decrease computation by learning the distribution of the inputs.
Article
There has recently been an upsurge of interest in the Markov model, and more generally in stationary ergodic stochastic distributions, in the theoretical computer science community (e.g. see [Vitter,Krishnan91], [Karlin,Philips,Raghavan92], [Raghavan92] for use of Markov models for on-line algorithms, e.g., caching and prefetching). Their results used the fact that compressible sources are predictable (and vice versa), and showed that on-line algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we will approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, this is the first case of a computational prob...
Conference Paper
Full-text available
A new sorting algorithm is presented. Its running time is O(n(1 + log(F/n))), where F = |{(i,j); i < j and x_i < x_j}| is the total number of inversions in the input sequence x_n x_{n-1} x_{n-2} ... x_2 x_1. In other words, presorted sequences are sorted quickly, and completely unsorted sequences are sorted in O(n log n) steps. Note that F <= n^2/2 always. Furthermore, the constant of proportionality is fairly small and hence the sorting method is competitive with existing methods.
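The bound is easy to evaluate on concrete inputs: F can be counted with a merge-sort pass, and n(1 + log(F/n)) then gives the predicted comparison count. A small sketch (the function name and the clamping of small F/n are illustrative additions, not from the paper):

```python
import math

def count_inversions(xs):
    """Count out-of-order pairs F using a merge-sort pass (O(n log n))."""
    def sort(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, fl = sort(a[:mid])
        right, fr = sort(a[mid:])
        merged, i, j, f = [], 0, 0, fl + fr
        while i < len(left) and j < len(right):
            if right[j] < left[i]:
                merged.append(right[j]); j += 1
                f += len(left) - i  # right[j] is inverted with every remaining left element
            else:
                merged.append(left[i]); i += 1
        return merged + left[i:] + right[j:], f

    return sort(list(xs))[1]

xs = [2, 1, 4, 3, 6, 5]
F, n = count_inversions(xs), len(xs)
# Evaluate the O(n(1 + log(F/n))) bound; max(...) keeps the log argument >= 1.
print(F, n * (1 + math.log2(max(F / n, 1.0))))
```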
Article
Encroaching lists are a generalization of monotone sequences in permutations. Since ordered permutations contain fewer encroaching lists than random ones, the number of such lists, m, provides a measure of presortedness with advantages over others in the literature. Experimental and analytic results are presented to cast light on the properties of encroaching lists. Also, we describe a new sorting algorithm, melsort, with complexity O(n log m). Thus it is linear for well-ordered sets and reduces to mergesort and O(n log n) in the worst case.
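Assuming Skiena's usual greedy construction, the encroaching lists (and hence m) of a sequence can be built in a single pass: each element extends the first existing list it fits at either end, or starts a new one; melsort then merges the resulting sorted lists. A sketch of the construction (illustrative only):

```python
from collections import deque

def encroaching_lists(xs):
    """Build encroaching lists: each element is added to the first existing
    list it can extend at either end, otherwise it starts a new list.
    The number of lists, m, is the presortedness measure used by melsort.
    Sketch assuming Skiena's greedy construction."""
    lists = []
    for x in xs:
        for d in lists:
            if x <= d[0]:
                d.appendleft(x)   # extend at the head
                break
            if x >= d[-1]:
                d.append(x)       # extend at the tail
                break
        else:
            lists.append(deque([x]))  # no list fits: start a new one
    return lists

print(len(encroaching_lists([1, 2, 3, 4, 5])))           # 1 for sorted input
print(len(encroaching_lists([3, 1, 4, 1, 5, 9, 2, 6])))  # small m for mildly shuffled input
```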