We present an efficient and practical algorithm for the internal sorting problem. Our algorithm works in-place and, on the average, has a running time of O(n log n) in the size n of the input. More specifically, the algorithm performs n log n + O(n) comparisons and element moves on the average. An experimental comparison of our proposed algorithm with the most efficient variants of Quicksort and Heapsort is carried out and its results are discussed.

... QuickHeapsort is a combination of Quicksort and Heapsort which was first described by Cantone and Cincotti [2]. It is based on Katajainen's idea for Ultimate Heapsort [11]. ...

... In [2] the basic version of QuickHeapsort with a fixed position in the array as pivot is analyzed, but no other method for pivot selection is considered. However, the authors also compare an implementation of the median-of-three version with other Quick- and Heapsort variants. ...

... Two-layer heaps were defined in [11]. In [2] a different terminology is used for the same concept (they describe the algorithm in terms of External Heapsort). Now, we are ready to describe the QuickHeapsort algorithm as it has been proposed in [2]. ...

QuickHeapsort is a combination of Quicksort and Heapsort. We show that the expected number of comparisons for QuickHeapsort is always better than for Quicksort if a usual median-of-constant strategy is used for choosing pivot elements. In order to obtain this result we present a new analysis for QuickHeapsort, splitting it into the analysis of the partition phases and the analysis of the heap phases. This enables us to consider samples of non-constant size for the pivot selection and leads to better theoretical bounds for the algorithm. Furthermore, we introduce some modifications of QuickHeapsort. We show that for every input the expected number of comparisons is at most $n\log_{2}n - 0.03n + o(n)$ for the in-place variant. If we allow n extra bits, then we can lower the bound to $n\log_{2}n - 0.997n + o(n)$. Thus, spending n extra bits we can save more than 0.96n comparisons if n is large enough. Both estimates improve the previously known results. Moreover, our non-in-place variant uses essentially the same number of comparisons as index-based Heapsort variants and Relaxed-Weak-Heapsort, which need $n\log_{2}n - 0.9n + o(n)$ comparisons in the worst case. However, index-based Heapsort variants and Relaxed-Weak-Heapsort require $\Theta(n\log n)$ extra bits, whereas we need only n bits. Our theoretical results are upper bounds and valid for every input. Our computer experiments show that the gap between our bounds and the actual values on random inputs is small. Moreover, the computer experiments establish QuickHeapsort as competitive with Quicksort in terms of running time.

... This new upper bound also allows us to strengthen Iwama and Teruyama's bound for their combined algorithm to n lg n − 1.4112n. [3] were the first to explicitly give a name to the mixture of Quicksort with another sorting method; they proposed QuickHeapsort. However, the concept of QuickXsort (without calling it like that) was first used in UltimateHeapsort by Katajainen [28]. ...

... Also InSituMergesort only uses an expected-case linear-time algorithm for the median computation. In the conference paper [10], the first and second authors introduced the name QuickXsort and first considered QuickMergesort as an application (including weaker forms of the results in Sect. 4.2 and Sect. ...

... 2. Under reasonable assumptions, sample sizes of Θ(√n) are optimal among all polynomial size sample sizes. 3. The probability that median-of-√n QuickXsort needs more than x_wc(n) + 6n comparisons decreases exponentially in n^{1/4} (Proposition 4.5). ...

QuickXsort is a highly efficient in-place sequential sorting scheme that mixes Hoare’s Quicksort algorithm with X, where X can be chosen from a wider range of other known sorting algorithms, like Heapsort, Insertionsort and Mergesort. Its major advantage is that QuickXsort can be in-place even if X is not. In this work we provide general transfer theorems expressing the number of comparisons of QuickXsort in terms of the number of comparisons of X. More specifically, if pivots are chosen as medians of (not too fast) growing size samples, the average number of comparisons of QuickXsort and X differ only by o(n)-terms. For median-of-k pivot selection for some constant k, the difference is a linear term whose coefficient we compute precisely. For instance, median-of-three QuickMergesort uses at most n lg n − 0.8358n + O(log n) comparisons. Furthermore, we examine the possibility of sorting base cases with some other algorithm using even less comparisons. By doing so the average-case number of comparisons can be reduced down to n lg n − 1.4112n + o(n) for a remaining gap of only 0.0315n comparisons to the known lower bound (while using only O(log n) additional space and O(n log n) time overall). Implementations of these sorting strategies show that the algorithms challenge well-established library implementations like Musser’s Introsort.
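The recursion structure described above can be illustrated with a short Python sketch. This is my own simplification, not the authors' implementation: the helper names are mine, "X" is played by a heap-based sort on a copy of the segment, whereas QuickXsort proper runs X in place using the other segment as its buffer.

```python
import heapq

def partition(a, lo, hi):
    """Lomuto partition of a[lo:hi] with median-of-three pivot;
    returns the final index of the pivot."""
    mid = (lo + hi - 1) // 2
    # move the median of a[lo], a[mid], a[hi-1] to position hi-1
    _, m = sorted([(a[lo], lo), (a[mid], mid), (a[hi - 1], hi - 1)])[1]
    a[m], a[hi - 1] = a[hi - 1], a[m]
    pivot = a[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    return i

def x_sort(a, lo, hi):
    """Stand-in for 'X': Heapsort via heapq on a copy of the segment.
    (QuickXsort proper runs X in place, using the other segment as a
    buffer; that machinery is omitted in this sketch.)"""
    seg = a[lo:hi]
    heapq.heapify(seg)
    a[lo:hi] = [heapq.heappop(seg) for _ in range(hi - lo)]

def quick_x_sort(a, lo=0, hi=None):
    """Sort a[lo:hi]: sort the smaller side of each partition with X,
    iterate (tail-recursion eliminated) on the larger side."""
    if hi is None:
        hi = len(a)
    while hi - lo > 1:
        p = partition(a, lo, hi)
        if p - lo <= hi - (p + 1):
            x_sort(a, lo, p)
            lo = p + 1
        else:
            x_sort(a, p + 1, hi)
            hi = p
```

Sorting the smaller segment with X and iterating on the larger one keeps the total work done by X at most a constant fraction per level, which is the root of the o(n)-overhead transfer results quoted above.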

... Previous work on QuickXsort. Cantone and Cincotti [3] were the first to explicitly name the mixture of Quicksort with another sorting method; they proposed QuickHeapsort. However, the concept of QuickXsort (without calling it like that) was first used in UltimateHeapsort by Katajainen [28]. ...

... We use a symmetric variant (with a min-oriented heap) if the left segment shall be sorted by X. For detailed code for the above procedure, we refer to [3] or [4]. ...

QuickXsort is a highly efficient in-place sequential sorting scheme that mixes Hoare's Quicksort algorithm with X, where X can be chosen from a wider range of other known sorting algorithms, like Heapsort, Insertionsort and Mergesort. Its major advantage is that QuickXsort can be in-place even if X is not. In this work we provide general transfer theorems expressing the number of comparisons of QuickXsort in terms of the number of comparisons of X. More specifically, if pivots are chosen as medians of (not too fast) growing size samples, the average number of comparisons of QuickXsort and X differ only by $o(n)$-terms. For median-of-$k$ pivot selection for some constant $k$, the difference is a linear term whose coefficient we compute precisely. For instance, median-of-three QuickMergesort uses at most $n \lg n - 0.8358n + O(\log n)$ comparisons. Furthermore, we examine the possibility of sorting base cases with some other algorithm using even less comparisons. By doing so the average-case number of comparisons can be reduced down to $n \lg n- 1.4106n + o(n)$ for a remaining gap of only $0.0321n$ comparisons to the known lower bound (while using only $O(\log n)$ additional space and $O(n \log n)$ time overall). Implementations of these sorting strategies show that the algorithms challenge well-established library implementations like Musser's Introsort.

... Our transfer theorem covers this refined version of QuickMergesort, as well, which had not been analyzed before. 2 The rest of the paper is structured as follows: In Section 2, we summarize previous work on QuickXSort with a focus on contributions to its analysis. Section 3 collects mathematical facts and notations used later. ...

... The idea to combine Quicksort and a secondary sorting method was suggested by Cantone and Cincotti [2,1]. They study Heapsort with an output buffer (external Heapsort) and combine it with Quicksort to obtain QuickHeapsort. ...

... By combining QuickMergesort with Ford and Johnson's MergeInsertion [7] for subproblems of logarithmic size, Edelkamp and Weiß obtained an in-place sorting method that uses on the average a close to minimal number of comparisons, n lg n − 1.3999n + o(n). Edelkamp and Weiß do consider this version of QuickMergesort [5], but only analyze it for median-of-√n pivots. In this case the behavior coincides with the simpler strategy of always sorting the smaller segment by Mergesort, since the segments are of almost equal size with high probability. ...

QuickXSort is a strategy to combine Quicksort with another sorting method X, so that the result has essentially the same comparison cost as X in isolation, but sorts in place even when X requires a linear-size buffer. We solve the recurrence for QuickXSort precisely up to the linear term, including the optimization of choosing pivots from a sample of k elements. This allows us to immediately obtain overall average costs using only the average costs of sorting method X (as if run in isolation). We thereby extend and greatly simplify the analysis of QuickHeapsort and QuickMergesort with practically efficient pivot selection, and give the first tight upper bounds including the linear term for such methods.

... Therefore the top-k sort algorithm is introduced in this paper to help minimize the time complexity of the knee point search. There have been many efforts to bound and evaluate the time and space complexity of sorting algorithms [7][8][9][10][11][12]. These works provide component algorithms for our work. ...

... Once the pivot is in place, different recursive steps are taken depending on the position of the pivot. If the pivot falls after position k, it is pushed onto the stack and sorting continues on the original list minus the pivot, aiding subsequent steps (see Lines 9, 11). If the pivot lands exactly at position k, the pivot itself is the last element of the output, and only top-(k−1) sorting on the original list minus the pivot is needed (see Line 15). ...

... [Pseudocode of Algorithm 3, FindKnee: in each round, either sort the list fully or run QuickSortTopK, then search the sorted list for the knee point e; if e is found, return it; otherwise update the posterior probability and the other parameters for the next round according to (5), exclude the top items from the list, and call FindKnee again.] ...

Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity, using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a Quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the application of the proposed algorithm.

... QuickHeapsort is a combination of Quicksort and Heapsort which was first described by Cantone and Cincotti [2]. It is based on Katajainen's idea for Ultimate Heapsort [12]. ...

... In [2] the basic version with a fixed index as pivot is analyzed and, together with the median-of-three version, implemented and compared with other Quick- and Heapsort variants. In [8] Edelkamp and Stiegeler compare these variants with the so-called Weak-Heapsort [7] and some modifications of it (e.g. ...

... Two-layer heaps were defined in [12]. In [2] a different terminology is used for the same concept (they describe the algorithm in terms of External Heapsort). Now we are ready to describe the QuickHeapsort algorithm as it has been proposed in [2]. ...

We present a new analysis for QuickHeapsort, splitting it into the analysis of the partition phases and the analysis of the heap phases. This enables us to consider samples of non-constant size for the pivot selection and leads to better theoretical bounds for the algorithm. Furthermore, we introduce some modifications of QuickHeapsort, both in-place and using n extra bits. We show that on every input the expected number of comparisons is n lg n − 0.03n + o(n) (in-place) and n lg n − 0.997n + o(n) (with n extra bits), respectively. Both estimates improve the previously known best results. (It is conjectured in Wegener93 that the in-place algorithm Bottom-Up-Heapsort uses at most n lg n + 0.4n comparisons on average, and for Weak-Heapsort, which uses n extra bits, the average number of comparisons is at most n lg n − 0.42n in EdelkampS02.) Moreover, our non-in-place variant can even compete with index-based Heapsort variants (e.g. Rank-Heapsort in WangW07) and Relaxed-Weak-Heapsort (n lg n − 0.9n + o(n) comparisons in the worst case), for which no O(n) bound on the number of extra bits is known.

... Based on QuickHeapsort [2], we develop the concept of QuickXsort in this paper and apply it to Mergesort and WeakHeapsort, which yields efficient internal sorting algorithms. The idea is very simple: as in Quicksort, the array is partitioned into the elements greater and less than some pivot element. ...

... Finding the median means significant additional effort. Cantone and Cincotti [2] weakened the requirement for the pivot and designed QuickHeapsort which uses only a sample of smaller size to select the pivot for partitioning. UltimateHeapsort is inferior to QuickHeapsort in terms of average case number of comparisons, although, unlike QuickHeapsort, it allows an n log n + O(n) bound for the worst case number of comparisons. ...

In this paper we generalize the idea of QuickHeapsort leading to the notion of QuickXsort. Given some external sorting algorithm X, QuickXsort yields an internal sorting algorithm if X satisfies certain natural conditions. We show that up to o(n) terms the average number of comparisons incurred by QuickXsort is equal to the average number of comparisons of X.
We also describe a new variant of WeakHeapsort. With QuickWeakHeapsort and QuickMergesort we present two examples for the QuickXsort construction. Both are efficient algorithms that perform approximately n log n − 1.26n + o(n) comparisons on average. Moreover, we show that this bound also holds for a slight modification which guarantees an \(n \log n + \mathcal{O}(n)\) bound for the worst case number of comparisons.
Finally, we describe an implementation of MergeInsertion and analyze its average case behavior. Taking MergeInsertion as a base case for QuickMergesort, we establish an efficient internal sorting algorithm calling for at most n log n − 1.3999n + o(n) comparisons on average. QuickMergesort with constant size base cases shows the best performance on practical inputs and is competitive to STL-Introsort.
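MergeInsertion (Ford–Johnson) itself is intricate. As a simpler stand-in for a comparison-efficient base case (my substitution, not the algorithm analyzed above), binary insertion sort locates each insertion point by binary search, so small subarrays are sorted with close to n lg n comparisons; its cost is dominated by element moves, which is acceptable for constant-size base cases.

```python
def binary_insertion_sort(a, lo=0, hi=None):
    """Sort a[lo:hi] in place.  Each element is inserted into the
    already-sorted prefix at a position found by binary search, so the
    comparison count stays near the information-theoretic minimum."""
    if hi is None:
        hi = len(a)
    for i in range(lo + 1, hi):
        x = a[i]
        left, right = lo, i
        while left < right:              # binary search for insertion point
            mid = (left + right) // 2
            if a[mid] <= x:              # '<=' keeps the sort stable
                left = mid + 1
            else:
                right = mid
        a[left + 1:i + 1] = a[left:i]    # shift the tail of the prefix right
        a[left] = x
```

The same interface (sorting a subrange in place) is what a QuickMergesort-style driver would call for segments below the cut-off size.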

... These modifications attempt to address cases in which the performance of a sorting algorithm degrades, such as inputs with many equal keys [38,44]. Some algorithms use two sorting algorithms together [5][6][7][33]. However, these sorting algorithms do not constitute a dynamic algorithm selection model, so their capabilities are limited. ...

... heapsort, merge sort) can be preferred. Selecting the switch point (called the cut-off) somewhere around 10, and choosing heapsort or insertion sort as the second algorithm, often increases the performance further [5][6][7]. ...

... Based on QuickHeapsort [1], in this paper we develop the concept of QuickXsort and apply it to other sorting algorithms such as Mergesort or WeakHeapsort. This yields efficient internal sorting algorithms. ...

... Finding the median means significant additional effort. Cantone and Cincotti [1] weakened the requirement for the pivot and designed QuickHeapsort which uses only a sample of smaller size to select the pivot for partitioning. UltimateHeapsort is inferior to QuickHeapsort in terms of average case running time, although, unlike QuickHeapsort, it allows an n log n + O(n) bound for the worst case number of comparisons. ...

In this paper we generalize the idea of QuickHeapsort leading to the notion of QuickXsort. Given some external sorting algorithm X, QuickXsort yields an internal sorting algorithm if X satisfies certain natural conditions.
With QuickWeakHeapsort and QuickMergesort we present two examples for the QuickXsort construction. Both are efficient algorithms that incur approximately n log n − 1.26n + o(n) comparisons on the average. A worst case of n log n + O(n) comparisons can be achieved without significantly affecting the average case.
Furthermore, we describe an implementation of MergeInsertion for small n. Taking MergeInsertion as a base case for QuickMergesort, we establish a worst-case efficient sorting algorithm calling for n log n − 1.3999n + o(n) comparisons on average. QuickMergesort with constant size base cases shows the best performance on practical inputs: when sorting integers it is only 15% slower than STL-Introsort.

... This idea is not original, since it has been successfully used in the literature. The first sorting algorithm to introduce such a technique was the Quick-Heap-Sort algorithm by Cantone and Cincotti [3]. Recently it has also been adopted to obtain fast internal variants of many recursive external sorting algorithms [5] ...

In this paper we present Fast-Insertion-Sort, a sequence of efficient external variants of the well known Insertion-Sort algorithm which, by nesting, achieve an O(n^{1+ε}) worst-case time complexity, where ε = 1/h for h ∈ ℕ. Our new solutions can be seen as a generalization of Insertion-Sort to the insertion of blocks of multiple elements and, like the original algorithm, they are stable, adaptive and very simple to translate into programming code. They can also be easily modified to obtain in-place variants at the cost of a constant factor. Moreover, by further generalizing our approach we obtain a representative recursive algorithm achieving O(n log n) worst-case time complexity. Our experimental results show that the new variants of the Insertion-Sort algorithm are very competitive with the most effective sorting algorithms known in the literature, outperforming fast implementations of Hoare's Quick-Sort algorithm in many practical cases, and exhibiting an O(n log n) behaviour in practice.
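The block-insertion idea can be sketched as follows. This is an out-of-place simplification under my own naming, showing a single level of the nesting with blocks of about √n elements (the paper's algorithm is recursive, can be made in-place, and tunes the block size via ε = 1/h); here each presorted block is merged into the sorted prefix, giving O(n^{1.5}) element moves in the worst case for this one level.

```python
from math import isqrt

def merge(xs, ys):
    """Standard two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    out.extend(xs[i:])
    out.extend(ys[j:])
    return out

def block_insertion_sort(a):
    """One level of block insertion: presort blocks of ~sqrt(n)
    elements, then insert (merge) each block into the sorted prefix."""
    n = len(a)
    b = max(1, isqrt(n))
    prefix = []
    for start in range(0, n, b):
        # in the full algorithm each block is sorted by recursing on the
        # same scheme; sorted() stands in for that recursion here
        prefix = merge(prefix, sorted(a[start:start + b]))
    return prefix
```

Nesting this construction h times is what drives the exponent from 3/2 down toward 1 + 1/h in the paper's family of variants.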

... Other examples for QuickXsort are QuickHeapsort [5,9], QuickWeakheapsort [10,11] and Ultimate Heapsort [21]. QuickXsort with median-of-√n pivot selection uses at most n log n + cn + o(n) comparisons on average to sort n elements given that X also uses at most n log n + cn + o(n) comparisons on average [11]. ...

The two most prominent solutions for the sorting problem are Quicksort and Mergesort. While Quicksort is very fast on average, Mergesort additionally gives worst-case guarantees, but needs extra space for a linear number of elements. Worst-case efficient in-place sorting, however, remains a challenge: the standard solution, Heapsort, suffers from a bad cache behavior and is also not overly fast for in-cache instances. In this work we present median-of-medians QuickMergesort (MoMQuickMergesort), a new variant of QuickMergesort, which combines Quicksort with Mergesort, allowing the latter to be implemented in place. Our new variant applies the median-of-medians algorithm for selecting pivots in order to circumvent the quadratic worst case. Indeed, we show that it uses at most $n \log n + 1.6n$ comparisons for $n$ large enough. We experimentally confirm the theoretical estimates and show that the new algorithm outperforms Heapsort by far and is only around 10% slower than Introsort (the std::sort implementation of libstdc++), which has a rather poor guarantee for the worst case. We also simulate the worst case, which is only around 10% slower than the average case. In particular, the new algorithm is a natural candidate to replace Heapsort as a worst-case stopper in Introsort.
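The pivot selection behind the worst-case guarantee can be sketched as the classic groups-of-five construction. This is a sketch of the textbook method, not the paper's tuned implementation; the returned value is guaranteed to leave a constant fraction of the elements on each side, which is what rules out the quadratic worst case.

```python
def median_of_medians(a):
    """Return an approximate median of the list a: split it into groups
    of five, take each group's exact median, and recurse on the list of
    group medians.  The result is guaranteed to rank well away from the
    extremes, bounding the Quicksort-style recursion depth."""
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    return median_of_medians(medians)
```

Note this repeated-medians form trades a slightly weaker rank guarantee for simplicity; the exact SELECT variant recurses with a full selection call on the medians instead.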

... Based on QuickHeapsort [5,7], Edelkamp and Weiß [9] developed the concept of QuickXsort and applied it to X = WeakHeapsort [8] and X = Mergesort. The idea -going back to UltimateHeapsort [17] -is very simple: as in Quicksort the array is partitioned into the elements greater and less than some pivot element, respectively. ...

We consider the fundamental problem of internally sorting a sequence of $n$ elements. In its best theoretical setting QuickMergesort, a combination of Quicksort and Mergesort with median-of-$\sqrt{n}$ pivot selection, requires at most $n \log n - 1.3999n + o(n)$ element comparisons on the average. The question addressed in this paper is how to make this algorithm practical. As refined pivot selection usually adds much overhead, we show that the median-of-3 pivot selection of QuickMergesort leads to at most $n \log n - 0.75n + o(n)$ element comparisons on average, while running fast on elementary data. The experiments show that QuickMergesort outperforms state-of-the-art library implementations, including C++'s Introsort and Java's Dual-Pivot Quicksort. Further trade-offs between a low running time and a low number of comparisons are studied. Moreover, we describe a practically efficient version with $n \log n + O(n)$ comparisons in the worst case.

... Recently, in [16], the idea of QuickHeapsort [2,5] was generalized to the notion of QuickXsort: Given some black-box sorting algorithm X, QuickXsort can be used to speed X up provided that X satisfies certain natural conditions. QuickWeakHeapsort and QuickMergesort were described as two examples of this construction. ...

A weak heap is a variant of a binary heap where, for each node, the heap ordering is enforced only for one of its two children. In 1993, Dutton showed that this data structure yields a simple worst-case-efficient sorting algorithm. In this paper we review the refinements proposed to the basic data structure that improve the efficiency even further. Ultimately, minimum and insert operations are supported in O(1) worst-case time and the extract-min operation in O(lg n) worst-case time involving at most lg n + O(1) element comparisons. In addition, we look at several applications of weak heaps. This encompasses the creation of a sorting index and the use of a weak heap as a tournament tree leading to a sorting algorithm that is close to optimal in terms of the number of element comparisons performed. By supporting the insert operation in O(1) amortized time, the weak-heap data structure becomes a valuable tool in adaptive sorting leading to an algorithm that is constant-factor optimal with respect to several measures of disorder. Also, a weak heap can be used as an intermediate step in an efficient construction of binary heaps. For graph search and network optimization, a weak-heap variant, which allows some of the nodes to violate the weak-heap ordering, is known to be provably better than a Fibonacci heap.

... anna.fi.muni.cz/divine We have not included the original MP5 Sort algorithm in our data, as it performs worse than GPU Quicksort or CPU Quicksort ...

In this paper we improve large-scale disk-based model checking by shifting complex numerical operations to the graphics card, exploiting the fact that during the last decade graphics processing units (GPUs) have become very powerful. For disk-based graph search, the delayed elimination of duplicates is the performance bottleneck, as it amounts to sorting large state vector sets. We perform parallel processing on the GPU to improve the sorting speed significantly. Since existing GPU sorting solutions like Bitonic Sort and Quicksort do not achieve any speed-up on state vectors, we propose a refined GPU-based Bucket Sort algorithm. Alternatively, we study sorting a compressed state vector and obtain speed-ups for delayed duplicate detection of more than one order of magnitude with a single GPU, located on an ordinary graphics card.
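For reference, plain bucket sort on integer keys can be sketched as below. This is a sequential CPU sketch under my own assumptions (non-negative integer keys, a fixed bucket count); the paper's contribution is a refined, GPU-parallel variant for state vectors, which this does not reproduce.

```python
def bucket_sort(keys, num_buckets=16):
    """Distribute keys into buckets by value range, sort each bucket,
    and concatenate the buckets.  Assumes non-negative integer keys."""
    if not keys:
        return []
    hi = max(keys) + 1
    buckets = [[] for _ in range(num_buckets)]
    for k in keys:
        # k < hi, so the bucket index is always in range
        buckets[k * num_buckets // hi].append(k)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # per-bucket sort; on a GPU each bucket
                               # can be handled by a separate thread block
    return out
```

The appeal for parallel hardware is that the distribution pass and the per-bucket sorts are independent and can run concurrently.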

... The MacDiarmid and Reed variant of the BOTTOM-UP-HEAPSORT algorithm [13,16,17,18,19] uses, on average, about 1.52n comparisons to build a heap. Reinhard [20] shows that MERGESORT can be designed in place with n log n − 1.3n + O(log n) comparisons in the worst case, but the algorithm is too slow for practical purposes. ...

Abstract—In the fields of Computer Science and Mathematics, a sorting algorithm is an algorithm that puts the elements of a list in a certain order, i.e. ascending or descending. Sorting is perhaps the most widely studied problem in computer science and is frequently used as a benchmark of a system's performance. This paper presents a comparative performance study of four sorting algorithms on different platforms. For each machine, it is found that the choice of algorithm depends on the number of elements to be sorted. In addition, as expected, the results show that the relative performance of the algorithms differed on the various machines. Thus, algorithm performance depends on data size, and hardware also has an impact. Keywords—Algorithm, Analysis, Complexity, Sorting. I. INTRODUCTION Sorting algorithms are classified by several criteria, such as computational complexity, where the worst, average and best numbers of comparisons for several typical test cases are computed in terms of the size of the list; stability; memory usage and use of other computer resources; the difference between worst-case and average behavior; and the behavior on practically important data sets. Such data sets could be completely sorted, inversely sorted or almost sorted. Many algorithms are available for sorting, which makes it necessary to compare them so that the better one can be chosen for a given data structure. The analysis of an algorithm is based on time complexity and space complexity. The amount of memory needed by a program to run to completion is referred to as space complexity; the amount of time needed by an algorithm to run to completion is referred to as time complexity. For an algorithm, the time complexity depends on the size of the input. In this paper, a comparative performance evaluation of an improved heap sort algorithm is done with three traditional

In industrial measuring applications, common point pairs must be manually selected for coordinate system transformation when comparing a measured point set with a designed point set. Aiming at this problem, a comparative analysis algorithm is proposed on the basis of random-order point set matching and the principle of least squares, in order to realize automatic matching and comparative analysis of precision in this kind of application and achieve optimum precision control. First, two incompletely corresponding unordered point sets in different coordinate systems are taken as the processed objects, and point pair matching between the measurement point set and the design point set is automated through analysis of their relative spatial topological relationships, so as to realize the coordinate system transformation computation. Then all point pairs are applied and filtered according to the elementary matching deviation, the transformation parameters are recalculated to improve the precision of the overall coordinate transformation, and measurement points with errors are identified. Experiments and analysis prove the validity of the algorithm, and its application conditions are discussed.

Search has been vital to artificial intelligence from the very beginning as a core technique in problem solving. The authors present a thorough overview of heuristic search with a balance of discussion between theoretical analysis and efficient implementation and application to real-world problems. Current developments in search such as pattern databases and search with efficient use of external memory and parallel processing units on main boards and graphics cards are detailed. Heuristic search as a problem solving tool is demonstrated in applications for puzzle solving, game playing, constraint satisfaction and machine learning. While no previous familiarity with heuristic search is necessary the reader should have a basic knowledge of algorithms, data structures, and calculus. Real-world case studies and chapter ending exercises help to create a full and realized picture of how search fits into the world of artificial intelligence and the one around us. The content is organized into five parts as follows: Search Primer: State-Space Search, Basic Search Algorithms, Dictionary Data Structures, and Automatically Created Heuristics Search under Memory Constraints: Linear-Space Search, Memory-Restricted Search, Symbolic Search, External Search Search Under Time Constraints: Distributed Search, State-Space Pruning, and Real-Time Search Search Variants: Adversary Search, Constraint Satisfaction Search, and Local Search Search Applications: Robotics, Automated System Verification, Action Planning, Vehicle Navigation, and Computational Biology.

It is known in the education sciences literature that attitudes influence several factors; opinions, feelings and behavior are assumed to be among these factors. The purpose of this study is to survey undergraduate and graduate students' perceptions, opinions and attitudes about their departments at the Faculty of Forestry of Kahramanmaraş Sütçü İmam University. In this research, 247 students filled out the questionnaire. Descriptive statistics and crosstabs of the data collected through the survey were analyzed in SPSS. According to the results of the study, a great majority of students do not have adequate knowledge about their departments. This finding is at the level of 77% for students of the department of forest industry engineering and 52% for students of the department of forest engineering. Moreover, 40% of the students participating in this study state that their departments cannot meet their expectations. The results are expected to contribute to short- and long-term planning efforts of forestry and other faculties. Additionally, the programs of the departments can be adjusted according to students' expectations and requirements.

Packaging is an industrial product that carries out the wrapping, storing, stocking, transporting and selling functions for manufactured products in a sustainable manner. Moreover, it plays an important role in the marketing of multifarious agricultural, food and non-food products, especially imported products. The packaging industry therefore acts as a locomotive for the whole manufacturing industry. The aim of this study is to examine the paper-carton and wooden packaging industries together with the other packaging industries (plastic, metal and glass) that constitute the packaging sector, along its production, consumption, import and export dimensions. Turkey's packaging sector, approximating $12 billion in magnitude, grew 15% in 2011 compared with the prior year. Factors such as the increasing urbanization ratio, population growth, improvements in living standards, the increasing role of women in business life and changes in consumption patterns show that Turkey's packaging sector will continue its growth in the coming years.

We present an analysis of the remedian, an efficient and easy-to-implement known algorithm for the approximate median selection problem. The algorithm can be used for data in an array as well as for streaming data. In an array it performs in-place, recursively dividing the candidate values into sets of size b, from which exact medians are selected for the next phase. On streaming data it performs a filter operation, requiring, by the time n items are processed, the storage of log_b n candidate entries. The contribution of the article is a precise characterization, combinatorial and asymptotic, of the accuracy of the algorithm, showing explicitly the role of the critical design parameter b. In addition, we compute the time and space costs of the algorithm, and present experimental illustrations of its accuracy.
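The buffer scheme described above lends itself to a short sketch. The following Python code is an illustrative reconstruction under our own naming, not the authors' implementation; the parameter b is the critical design parameter from the abstract:

```python
def remedian(stream, b=11):
    """Approximate median in the remedian style: keep about log_b(n)
    buffers of size b; a full buffer emits its exact median one level
    up, so only O(b log_b n) candidates are ever stored."""
    buffers = [[]]                     # buffers[k] holds medians of b**k items
    for x in stream:
        buffers[0].append(x)
        k = 0
        while len(buffers[k]) == b:    # a full buffer collapses upwards
            med = sorted(buffers[k])[b // 2]
            buffers[k] = []
            k += 1
            if k == len(buffers):
                buffers.append([])
            buffers[k].append(med)
    # final estimate: median of the highest non-empty buffer
    for buf in reversed(buffers):
        if buf:
            return sorted(buf)[len(buf) // 2]
    raise ValueError("empty stream")
```

For n = b^3 items the estimate is the median of medians of medians; for instance, remedian(range(1331)) with b = 11 returns 665, the exact median of 0..1330.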

A new variant of HEAPSORT is presented in this paper. The algorithm is not an internal sorting algorithm in the strong sense, since extra storage for n integers is necessary. The basic idea is similar to the classical sorting algorithm HEAPSORT, but the new algorithm rebuilds the heap in another way, using only one comparison at each node: the sift procedure walks down a path in the heap until a leaf is reached, relaxing the requirement that the element placed at the root be moved immediately to its destination. The new algorithm requires about n log n − 0.788928n comparisons in the worst case and n log n − n comparisons on average, which is only about 0.4n more than necessary. It beats on average even the clever variants of QUICKSORT, if n is not very small. The difference between the worst case and the best case indicates that there is still room for improvement by constructing the heap more carefully.
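The descent with one comparison per node can be sketched as follows. This is a generic Python illustration of the walk-to-leaf idea (without the extra n integers the paper uses), with our own function names:

```python
def sift_to_leaf(a, i, n):
    """Walk down from node i along the path of larger children,
    spending one comparison per visited node, until a leaf of the
    heap a[0..n-1] is reached; return the leaf index (0-based)."""
    j = i
    while 2 * j + 2 < n:          # both children exist
        j = 2 * j + 1 if a[2 * j + 1] >= a[2 * j + 2] else 2 * j + 2
    if 2 * j + 1 < n:             # only a left child
        j = 2 * j + 1
    return j

def sift_down(a, i, n):
    """Reinsert a[i]: find the special leaf first, then climb back
    up to a[i]'s correct position and shift the path elements up."""
    x = a[i]
    leaf = sift_to_leaf(a, i, n)
    j = leaf
    while a[j] < x:               # climb until an element >= x is met
        j = (j - 1) // 2
    while j > i:                  # rotate x in, pushing path elements up
        a[j], x = x, a[j]
        j = (j - 1) // 2
    a[i] = x
```

A heapsort step then moves the last leaf to the root and calls sift_down(a, 0, n - 1); climbing back from the leaf costs only a few extra comparisons on average.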

As part of a study of the general issue of complexity of comparison based problems, as well as interest in the specific problem, we consider the task of performing the basic priority queue operations on a heap. We show that in the worst case:
lg lg n ± O(1) comparisons are necessary and sufficient to insert an element into a heap. (This improves the previous upper and lower bounds of lg n and O(1).)
lg n + lg* n ± O(1) comparisons are necessary and sufficient to replace the maximum in a heap. (This improves the previous upper and lower bounds of 2 lg n and lg n.)
1.625n + O(lg n · lg* n) comparisons are sufficient to create a heap; 1.37…n comparisons are necessary, not only in the worst case but also on the average.
Here lg indicates the logarithm base 2 and lg* denotes the iterated logarithm, the number of times the logarithm base 2 may be taken before the quantity is at most 0.
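The lg lg n insertion bound rests on the observation that the keys along a root-to-leaf path of a heap are sorted, so the insertion point for a new element can be located by binary search among its prospective ancestors. A minimal Python sketch of this idea (our own naming, not the paper's code; only the binary search spends key comparisons):

```python
def heap_insert(a, x):
    """Insert x into the max-heap a (0-based array) using binary
    search on the root-to-leaf path, so only about lg lg n key
    comparisons are needed."""
    a.append(x)
    pos = [len(a) - 1]                   # pos[0] is the new leaf
    while pos[-1] > 0:
        pos.append((pos[-1] - 1) // 2)   # ancestors, up to the root
    # keys a[pos[1]], a[pos[2]], ... are non-decreasing towards the
    # root; binary-search the lowest ancestor whose key is >= x
    lo, hi = 1, len(pos)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[pos[mid]] >= x:
            hi = mid
        else:
            lo = mid + 1
    # shift the smaller ancestors down one level, drop x in place
    for i in range(1, lo):
        a[pos[i - 1]] = a[pos[i]]
    a[pos[lo - 1]] = x
```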

An abstract is not available.

The expected numbers of interchanges and comparisons in Floyd's well-known algorithm to construct heaps are considered, and the probability generating functions for these quantities are derived. From these functions the corresponding expected values are computed.
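The algorithm under analysis is Floyd's classical bottom-up construction, which sifts every internal node down, starting from the last one. A compact Python rendering (0-based indexing, our own naming, not the paper's notation):

```python
def build_heap(a):
    """Floyd's bottom-up heap construction for a max-heap stored
    in a[0..n-1]: sift each internal node down, last one first."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        j = i
        while 2 * j + 1 < n:
            c = 2 * j + 1                       # left child
            if c + 1 < n and a[c + 1] > a[c]:   # pick the larger child
                c += 1
            if a[j] >= a[c]:                    # heap condition holds
                break
            a[j], a[c] = a[c], a[j]             # interchange and descend
            j = c
```

The interchanges and comparisons counted in the analysis are exactly the swaps and the child/parent key comparisons in the inner loop above.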

We present an algorithm to construct a heap which uses on average (α + o(1))n comparisons to build a heap on n elements, where α ≈ 1.52. Indeed on the overwhelming proportion of inputs our algorithm uses this many comparisons. This average complexity is better than that known for any other algorithm. We conjecture that it is optimal. Our method is a natural variant of the standard heap construction method due to Floyd.

A variant of HEAPSORT, called BOTTOM-UP-HEAPSORT, is presented. It is based on a new reheap procedure. This sequential sorting algorithm is easy to implement and beats, on average, QUICKSORT if n⩾400 and a clever version of QUICKSORT (where the split object is the median of 3 randomly chosen objects) if n⩾16000. The worst-case number of comparisons is bounded by 1.5n log n + O(n). Moreover, the new reheap procedure improves the delete procedure for the heap data structure for all n.

An algorithm, which asymptotically halves the number of comparisons made by the common Heapsort, is presented and analysed in the worst case. The number of comparisons is shown to be (n+1)(log(n+1)+log log(n+1)+1.82)+O(log n) in the worst case to sort n elements, without using any extra space. Quicksort, which usually is referred to as the fastest in-place sorting method, uses 1.38n log n − O(n) in the average case (see Gonnet (1984)).

The standard Quicksort algorithm requires a stack of size O(log₂ n) to sort a set of n elements. We introduce a simple nonrecursive version of Quicksort which requires only constant, O(1), additional space, because the unsorted subsets are searched instead of stacking their boundaries as in the standard Quicksort. Our O(1)-space Quicksort is probably the most efficient of all sorting algorithms that need only a constant workspace.

A table of formulas for certain integrals involving Legendre functions has been constructed mechanically by a program which performed algebraic operations. The formulas are all rational algebraic expressions in a single variable and were constructed ...

This paper is a practical study of how to implement the Quicksort sorting algorithm and its best variants on real computers, including how to apply various code optimization techniques. A detailed implementation combining the most effective improvements to Quicksort is given, along with a discussion of how to implement it in assembly language. Analytic results describing the performance of the programs are summarized. A variety of special situations are considered from a practical standpoint to illustrate Quicksort's wide applicability as an internal sorting method which requires negligible extra storage.
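Two of the improvements such implementation studies combine, median-of-three pivot selection and handling small subfiles by insertion sort, can be sketched in Python as follows (an illustrative reconstruction under our own naming, not the paper's program; the cutoff value is an assumption):

```python
def quicksort(a, lo=0, hi=None, cutoff=10):
    """Quicksort with median-of-three pivoting, a small-subfile
    cutoff finished by insertion sort, and recursion into the
    smaller half only (keeps the stack logarithmic)."""
    if hi is None:
        hi = len(a) - 1
    while hi - lo > cutoff:
        # median-of-three: order a[lo], a[mid], a[hi]; pivot to hi-1
        mid = (lo + hi) // 2
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]: a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        a[mid], a[hi - 1] = a[hi - 1], a[mid]
        p = a[hi - 1]
        i, j = lo, hi - 1
        while True:                       # a[lo] and a[hi-1] are sentinels
            i += 1
            while a[i] < p: i += 1
            j -= 1
            while a[j] > p: j -= 1
            if i >= j: break
            a[i], a[j] = a[j], a[i]
        a[i], a[hi - 1] = a[hi - 1], a[i]  # pivot to its final place
        if i - lo < hi - i:                # recurse on the smaller half
            quicksort(a, lo, i - 1, cutoff)
            lo = i + 1
        else:
            quicksort(a, i + 1, hi, cutoff)
            hi = i - 1
    # insertion sort cleans up the remaining small subfile
    for k in range(lo + 1, hi + 1):
        x, m = a[k], k
        while m > lo and a[m - 1] > x:
            a[m] = a[m - 1]
            m -= 1
        a[m] = x
```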

BOTTOM-UP HEAPSORT is a variant of HEAPSORT which beats on average even the clever variants of QUICKSORT, if n is not very small. Up to now, the worst case complexity of BOTTOM-UP HEAPSORT could only be estimated as 1.5n log n. McDiarmid and Reed (1989) have presented a variant of BOTTOM-UP HEAPSORT which needs extra storage for n bits. The worst case number of comparisons of this (almost internal) sorting algorithm is bounded by n log n + 1.1n. It is discussed how many comparisons can be saved on average.

Many algebraic translators provide the programmer with a limited ability to allocate storage. Of course one of the most desirable features of these translators is the extent to which they remove the burden of storage allocation from the programmer. Nevertheless, ...

A variant of Heapsort, named Ultimate Heapsort, is presented that sorts n elements in-place in Θ(n log₂(n+1)) worst-case time by performing at most n log₂ n + Θ(n) key comparisons and n log₂ n + Θ(n) element moves. The secret behind Ultimate Heapsort is that it occasionally transforms the heap it operates with into a two-layer heap which keeps small elements at the leaves. Basically, Ultimate Heapsort is like Bottom-Up Heapsort but, due to the two-layer heap property, an element taken from a leaf has to be moved towards the root only O(1) levels, on average. Let a[1..n] be an array of n elements, each consisting of a key and some information associated with this key. This array is a (maximum) heap if, for all i ∈ {2, …, n}, the key of element a[⌊i/2⌋] is larger than or equal to that of element a[i]. That is, a heap is a pointer-free representation of a left-complete binary tree, where the elements stored are partially ordered according to their key...
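The heap condition stated above translates directly into executable form. A small Python checker using the same 1-based convention (a[0] is an unused slot here, purely for illustration):

```python
def is_max_heap(a):
    """Heap condition from the definition: for all i in {2, ..., n},
    a[i // 2] >= a[i], with elements stored in a[1..n] (a[0] unused)."""
    n = len(a) - 1
    return all(a[i // 2] >= a[i] for i in range(2, n + 1))
```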

Library of Efficient Data structures and Algorithms

- LEDA

LEDA, Library of Efficient Data structures and Algorithms, http://www.mpi-sb.mpg.de/LEDA/leda.html.

Top-down not-up heapsort

- J Katajainen
- T Pasanen
- J Teuhola

J. Katajainen, T. Pasanen, J. Teuhola, Top-down not-up heapsort, Proc. The Algorithm Day in
Copenhagen, Department of Computer Science, University of Copenhagen, 1997, pp. 7-9.

- B M E Moret
- H D Shapiro

B.M.E. Moret, H.D. Shapiro, Algorithms from P to NP, Vol. 1: Design and Efficiency, The Benjamin
Cummings Publishing Company, Menlo Park, CA, 1990.

The ultimate heapsort, DIKU Report 96/42

- J Katajainen

J. Katajainen, The ultimate heapsort, DIKU Report 96/42, Department of Computer Science, University
of Copenhagen, 1996.

Sorting and Searching

- D E Knuth

D.E. Knuth, The Art of Computer Programming, Vol. 3: Sorting and Searching, Addison-Wesley,
Reading, MA, 1973.

Improving Katajainen's ultimate heapsort

- L Rosaz

L. Rosaz, Improving Katajainen's ultimate heapsort, Technical Report No. 1115, Laboratoire de
Recherche en Informatique, Université de Paris Sud, Orsay, 1997.

Treesort 3 (alg. 245)

- R W Floyd

R.W. Floyd, Algorithm 245: Treesort 3, Communications of the ACM 7 (1964) 701.