Conference Paper

A New Friends Sort Algorithm


Abstract

Sorting algorithms and the sorting itself is an important concept in computational field. In this research study, we are proposing a unique sorting algorithm, based on assuming the first value as smallest and comparing it with the rest of the list and assuming the last value as biggest and comparing it with the rest of the list. Running cost analysis and its results obtained after implementation are provided in graphical form with an objective to compare the efficiency of the proposed technique with some existing well known techniques of sorting.
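
Read literally, each pass of the proposed technique places one element at the front (the smallest of the remaining range) and one at the back (the biggest). Since the page carries no pseudocode, the following is a minimal Python sketch of that idea, assuming a bidirectional selection-style pass; the function name and all details are illustrative, not the authors' implementation.

    def friends_sort(a):
        # Sketch of the abstract's idea: assume the first remaining element
        # is the smallest and the last is the biggest, correct both by
        # scanning the rest of the range, then shrink the range at both ends.
        lo, hi = 0, len(a) - 1
        while lo < hi:
            for i in range(lo + 1, hi + 1):
                if a[i] < a[lo]:               # smaller value found for the front
                    a[lo], a[i] = a[i], a[lo]
            for i in range(lo + 1, hi):
                if a[i] > a[hi]:               # bigger value found for the back
                    a[hi], a[i] = a[i], a[hi]
            lo += 1
            hi -= 1
        return a

For example, friends_sort([4, 1, 9, 2]) yields [1, 2, 4, 9] in two passes, since every pass fixes two positions.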


... Thabit and Bawazir (2013) presented a new sorting algorithm called MMBPSS, an improvement on bidirectional selection sort [21] and Friends sort [22]. The MMBPSS algorithm was presented and its efficiency was shown to be better than that of the traditional bidirectional selection sort [21]. ...
... There are several studies that provide a detailed review of published sorting methods, e.g. [70]. A few prominent sorting methods are Bubble, Selection, and Insertion sort [22]; these are comparison-based sorting methods [1]. ...
... Many computer scientists have made numerous efforts and contributed a lot to boosting the performance of sorting processes in terms of efficiency and effectiveness. For example, the new Friends Sort [22] algorithm was compared with bubble, cocktail, selection, and insertion sort and performs best only when the input data is small; as the amount of input data rises, its performance starts decreasing. In comparison with selection sort, Friends sort performance starts declining when there are more than 35,000 input numbers, and in comparison with insertion sort, when there are more than 25,000 [22]. ...
Article
In computation, sorting is highly essential to access and manipulate data efficiently. High-performance sorting algorithms have always been in demand so that computers may process data more quickly. Since the computer has become a vital tool in various domains of human life, researchers have investigated and presented numerous sorting algorithms to sort the elements of a list with minimal execution time and space. As the volume of data increases, the urgency for efficient data processing algorithms also increases. In this work, we present a comparison-based sorting algorithm titled "2mm", a modification of the sorting algorithm Min-Max Bidirectional Parallel Selection Sort (MMBPSS). The proposed algorithm follows the divide-and-conquer method and uses O(n) space. Although 2mm sort differs only slightly from MMBPSS sort, its computational cost is very low and it reduces the comparison cost by more than 50%, hence presenting a more efficient solution to the sorting problems of the modern era. For performance evaluation and comparison, extensive experimentation has been performed, showing the better performance of the proposed method.
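
The min-max selection idea behind MMBPSS and 2mm can be illustrated generically: one scan of the unsorted range locates both the minimum and the maximum, so each pass places two elements and roughly halves the number of passes. The Python sketch below is such a generic min-max selection sort, not the authors' 2mm algorithm.

    def minmax_selection_sort(a):
        # Each pass: a single scan finds both extremes of the unsorted range,
        # then the minimum goes to the front and the maximum to the back.
        lo, hi = 0, len(a) - 1
        while lo < hi:
            i_min = i_max = lo
            for i in range(lo + 1, hi + 1):
                if a[i] < a[i_min]:
                    i_min = i
                elif a[i] > a[i_max]:
                    i_max = i
            a[lo], a[i_min] = a[i_min], a[lo]   # place the minimum
            if i_max == lo:                     # the maximum was just moved away
                i_max = i_min
            a[hi], a[i_max] = a[i_max], a[hi]   # place the maximum
            lo += 1
            hi -= 1
        return a

Scanning the range this way places two elements per pass; further comparison savings (such as the more-than-50% reduction claimed above) depend on details of the authors' algorithm not shown here.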
... The Selection Sort algorithm is inefficient on large lists, because it has O(n²) complexity and generally performs worse than the similar Insertion Sort. Many research works have been conducted to find enhancements of Selection Sort [1], [3]-[5] that speed up the sorting process, such as the bidirectional Selection Sort algorithm, which can position two items in each pass, thus reducing the number of loops required for sorting. This algorithm is also called "Friends Sort" [3]. ...
... Many research works have been conducted to find enhancements of Selection Sort [1], [3]-[5] that speed up the sorting process, such as the bidirectional Selection Sort algorithm, which can position two items in each pass, thus reducing the number of loops required for sorting. This algorithm is also called "Friends Sort" [3]. Lakra and Divy [4] suggested "Double Selection Sort", which makes sorting efficient and convenient for larger data sets by saving almost 25% to 35% compared with the classic Selection Sort algorithm. ...
... We suggest a new third algorithm called Min-Max Bidirectional Parallel Dynamic Selection Sort ("MMBPDSS"), which combines DSS and MMBPSS. Our hypothesis is that MMBPDSS makes sorting efficient and convenient for both smaller and larger data sets, saving almost 50% compared with the classic Selection Sort and Friends Sort algorithms [3] due to the parallel implementation of the algorithm. ...
... Computational problems have always had a heavy effect on researchers on one hand and opened opportunities for them on the other. The ultimate intention of so many sorting techniques is to reduce the cost and complexity of the algorithms [5]. II. RESEARCH OBJECTIVES: The ultimate objective of this research is to propose a new sorting algorithm that is more efficient in terms of time and space complexity than previously proposed sorting algorithms. ...
... The complexity of bubble sort in the average and worst cases is O(n²). When we apply bubble sort to an already sorted list, it shows O(n) behavior, its best-case complexity [5]. ...
... The average- and worst-case complexity of cocktail sort is equal to that of bubble sort, i.e. O(n²) [5]. ...
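
For reference, the O(n) best case comes from the standard early-exit refinement of bubble sort: if a full pass makes no swap, the list is already sorted. A textbook Python version:

    def bubble_sort(a):
        # Early-exit bubble sort: average/worst case O(n^2); a single
        # swap-free pass over sorted input gives the O(n) best case.
        for end in range(len(a) - 1, 0, -1):
            swapped = False
            for i in range(end):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            if not swapped:
                break
        return a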
Article
Full-text available
After the emergence of computer systems, numbers and objects need to be arranged in a particular order, either ascending or descending. The ordering of these numbers is generally referred to as sorting. Studies show that more than 50% of computing is based on sorting. Sorting has many applications in computer systems, such as file management and memory management. A sorting algorithm is an algorithm by which elements are arranged in a particular order following some characteristic or law. Sorting has been very important from the dawn of computing until now, and many sorting algorithms have been proposed with different time and space complexities. In this research the authors develop a new sorting technique, keeping in view the existing techniques. The authors propose the algorithm, Relative Split and Concatenate Sort V2, implement it, and then compare the results with some existing sorting algorithms. It turns out that the algorithm proposed in this research is simpler and more efficient than some well-known sorting algorithms, i.e. bubble sort, insertion sort, and selection sort. A simulator was designed to compare the Relative Split and Concatenate Sort V2 algorithm with other sorting algorithms. Simulation results are summarized as graphs with the number of elements on the x-axis and the time taken in milliseconds on the y-axis. Relative Split and Concatenate Sort V2 (RSCS-V2) is a successor to Relative Split and Concatenate Sort V1 (RSCS-V1), which was already presented at a conference; RSCS-V2 shows much better results than RSCS-V1.
... Computational problems have always had a cumbersome effect on researchers on one hand and opened opportunities for them on the other. The ultimate intention of so many sorting techniques is to reduce the cost and complexity of the algorithms [5]. ...
... The complexity of bubble sort in the average and worst cases is O(n²). When we apply bubble sort to an already sorted list, it shows O(n) behavior, its best-case complexity [5]. Bubble sort is also advantageous in terms of memory, as it sorts in place and takes little extra memory. ...
... The average- and worst-case complexity of cocktail sort is equal to that of bubble sort, i.e. O(n²) [5]. 3) Friends Sort: Friends sort is a previous effort by the authors of this paper. ...
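
Cocktail sort, mentioned in the excerpt above, is the bidirectional variant of bubble sort: it alternates forward passes (bubbling the maximum right) with backward passes (bubbling the minimum left). A textbook Python sketch:

    def cocktail_sort(a):
        # Bidirectional bubble sort; same O(n^2) average/worst case as
        # bubble sort, with the early-exit flag giving an O(n) best case.
        lo, hi = 0, len(a) - 1
        swapped = True
        while swapped and lo < hi:
            swapped = False
            for i in range(lo, hi):             # forward pass
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            hi -= 1
            for i in range(hi, lo, -1):         # backward pass
                if a[i - 1] > a[i]:
                    a[i - 1], a[i] = a[i], a[i - 1]
                    swapped = True
            lo += 1
        return a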
Article
Full-text available
Computational problems have had significance since the early civilizations. These problems and their solutions are used for the study of the universe. Numbers and symbols have been used for mathematics and statistics. After the emergence of computers, numbers and objects need to be arranged in a particular order, either ascending or descending. The ordering of these numbers is generally referred to as sorting. Sorting has many applications in computer systems, such as file management and memory management. A sorting algorithm is an algorithm by which elements are arranged in a particular order following some characteristic or law. A number of sorting algorithms have been proposed with different time and space complexities. In this research the author develops a new sorting technique, keeping in view the existing techniques. The author also proposes the algorithm, Relative Split and Concatenate Sort, implements it, and then compares the results with some existing sorting algorithms. The algorithm's time and space complexity is also part of this paper. With respect to complexity, sorting algorithms can mainly be divided into two categories: O(n²) and O(n log n). The proposed Relative Split and Concatenate Sort falls into the O(n²) category and is more efficient, in terms of time complexity, than existing algorithms in this category. It turns out that the algorithm proposed in this research is simpler and more efficient than some well-known sorting algorithms, i.e. bubble sort, insertion sort, and selection sort.
... Therefore, the minimum computational complexity is O(n²). This computational complexity is the same as that of bubble sort and insertion sort [10]. These methods are fast enough; however, as we know, the fastest algorithm is merge sort, whose computational complexity reaches O(n log₂ n) [10]. ...
... This computational complexity is the same as that of bubble sort and insertion sort [10]. These methods are fast enough; however, as we know, the fastest algorithm is merge sort, whose computational complexity reaches O(n log₂ n) [10]. O(n log₂ n) is much smaller than O(n²); this indicates that O(n²) is not the minimum computational complexity and that O(n log₂ n) takes its place. ...
... $\log_2(n!) \approx n\log_2 n - n\log_2 e \approx n\log_2 n$. Therefore, the minimum computational complexity is O(n log₂ n). The result is the same as that of merge sort [10]. This fact confirms the correctness of the theory again. ...
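
The argument in these excerpts is the standard information-theoretic lower bound for comparison sorting; a reconstruction in LaTeX (the textbook derivation, not quoted from the paper):

    % n! possible orderings => the entropy of the sorting problem is log2(n!),
    % and each binary comparison yields at most one bit of information.
    \[
      \log_2(n!) \;=\; \sum_{k=1}^{n} \log_2 k
      \;\approx\; n\log_2 n - n\log_2 e
      \;\approx\; n\log_2 n ,
    \]
    % so any comparison sort needs \(\Omega(n\log_2 n)\) comparisons in the
    % worst case, a bound that merge sort attains.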
Article
In order to find the limiting speed of solving a specific problem using a computer, this essay provides a method based on information entropy. The relationship between the minimum computational complexity and the change in information entropy is illustrated. A few examples serve as evidence of this connection. Meanwhile, some basic rules for modeling problems are established. Finally, the nature of solving problems with computer programs is disclosed to support this theory, and a redefinition of information entropy in this field is proposed. This will develop a new field of science.
... Algorithms that rely on comparisons for sorting are considered comparison-based sorting methods, and those that do not use comparisons are known as non-comparison-based sorts. Many researchers have worked on existing sorting methods to improve their efficiency and reduce their complexity [19]. Different types of sorting methods perform differently on different types of input [20], and there is no particular standard sorting method that is appropriate for every type of problem; instead, every method is problem-specific [20]. ...
Article
Full-text available
Data can be processed quickly if it is in some order, whereas unsequenced data can take more time to yield results. Sorting is used for data arrangement. It is also an essential requirement for most applications, and this step helps to boost performance. Sorting is also a prerequisite in several computer applications such as databases. Over time, computer scientists have not only introduced new sorting techniques considering various factors to be improved, but have also presented enhanced variants of existing sorting methods. The main objective has always been to reduce the execution time and space of the sorting algorithms. With every passing day, digital content is growing rapidly, which is a significant cause that encourages researchers to design new time- and space-efficient sorting algorithms. This paper presents some preprocessing strategies for quicksort and insertion sort to improve their performance. The main idea of these preprocessing steps is to make the input data more suitable for the sorting algorithm, as most sorting functions perform extraordinarily well on a specific type of input; for example, insertion sort works better on nearly sorted data. To validate the efficiency of the existing sorting algorithms, they have been compared with the proposed preprocessing strategies. The results with the proposed techniques outperform the results of the original sorting methods. The approach also helps to convert worst cases into average cases; by using it, the complexity of many algorithms can be reduced, which makes it very important.
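
The nearly-sorted case the abstract mentions is easy to see in code: in insertion sort the inner shifting loop exits almost immediately when each element is close to its final position, so the running time approaches O(n). A textbook Python version (not the paper's preprocessing itself):

    def insertion_sort(a):
        # On nearly sorted input the while loop rarely iterates,
        # which is exactly the property preprocessing tries to induce.
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            while i >= 0 and a[i] > key:   # shift larger elements right
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a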
Article
Full-text available
Designing an efficient data sorting algorithm that requires less time and space complexity is essential for computer science, different engineering disciplines, data mining systems, wireless networks, and the Internet of Things. This paper proposes a general low-complexity data sorting framework that distinguishes sorted or similar data, builds independent subarrays of approximately equal length, and sorts the subarrays' data using one of the popular comparison-based sorting algorithms. Two frameworks, one for serial realization and another for parallel realization, are proposed. The time complexity analyses of the proposed framework demonstrate an improvement compared to the conventional Merge and Quick sorting algorithms. Following the complexity analysis, the simulation results indicate slight improvements in the elapsed time and the number of swaps of the proposed serial Merge-based and Quick-based frameworks compared to the conventional ones for low/high-variance integer/non-integer data sets, across different data sizes and numbers of divisions: about 1-1.6% to 3.5-4% (Merge-based) and 0.3-1.8% to 2-4% (Quick-based) improvement in elapsed time for 1, 2, 3, and 4 divisions, respectively, for small and very large data sets. Although these improvements in serial realization are minor, making independent low-variance subarrays allows the sorted components to be extracted sequentially and gradually before the end of the sorting process. The paper also proposes a general framework for parallelizing conventional sorting algorithms using non-connected (independent) or connected (dependent) multi-core structures. As the second experiment, numerical analyses comparing the parallel realization of the proposed framework to the serial one with 1, 2, 3, and 4 divisions show a speedup factor of 2-4 for small to 2-16 for very large data sets. The third experiment shows the effectiveness of the proposed parallel framework compared to parallel sorting based on the random-access machine model. Finally, we prove that the mean-based pivot is as efficient as the median-based pivot and much better than the random pivot for making subarrays of approximately equal length.
Article
Full-text available
Computer and communication systems and networks deal with many cases that require rearrangement of data in either descending or ascending order. This operation is called sorting, and the purpose of an efficient sorting algorithm is to reduce the computational complexity and the time taken to perform the comparison, swapping, and assignment operations. In this paper, we propose an efficient mean-based sorting algorithm that sorts integer/non-integer data by making independent quasi-sorted subarrays of approximately the same length. It gradually finds sorted data and checks whether elements are partially sorted or have similar values. The elapsed time, the number of divisions and swaps, and the difference between the locations of the sorted and unsorted data in different samples demonstrate the superiority of the proposed algorithm over the Merge, Quick, Heap, and conventional mean-based sorts for both integer and non-integer large data sets that are random or partially/entirely sorted. Numerical analyses indicate that the mean-based pivot is appropriate for making subarrays of approximately similar lengths. The complexity study shows that the proposed mean-based sorting algorithm offers the same memory complexity as Quick sort and a better best-case time complexity than the Merge, Heap, and Quick sorts. Its worst-case time complexity is similar to that of the Merge and Heap sorts and much better than that of Quick sort, while all these algorithms have identical average-case complexity. In addition to yielding incrementally (or decrementally) sorted data part by part before reaching the end, the algorithm can be implemented with parallel processing of the sections running at the same time, faster than the other conventional algorithms, because its independent subarrays have similar lengths.
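
Both abstracts rest on splitting the data around the arithmetic mean so that the resulting independent subarrays have roughly equal lengths. A generic Python sketch of that divide step (an illustration of the mean-based pivot, not the papers' exact algorithms):

    import statistics

    def mean_pivot_sort(a):
        # Split around the arithmetic mean; the maximum is strictly greater
        # than the mean unless all elements are equal, so both parts shrink.
        if len(a) <= 1:
            return a
        pivot = statistics.fmean(a)
        left = [x for x in a if x <= pivot]
        right = [x for x in a if x > pivot]
        if not right:                      # all elements equal: done
            return left
        return mean_pivot_sort(left) + mean_pivot_sort(right)

Because the two parts are independent, they can be handed to separate cores, which is the basis of the parallel realization described above.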
Article
Full-text available
Sorting is one of the fundamental issues in computer science. The sorting problem has gained popularity because efficient sorting is important for optimizing other algorithms, e.g. searching algorithms. A number of sorting algorithms have been proposed with different constraints, e.g. the number of iterations (inner loop, outer loop), complexity, and CPU consumption. This paper presents a comparison of different sorting algorithms (Sort, Optimized Sort, Selection Sort, Quick Sort, and Merge Sort) on different data sets (small, medium, and large) under best-case, average-case, and worst-case constraints. All six algorithms are analyzed, implemented, tested, and compared, and it is concluded which algorithm is best for small, average, and large data sets under all three constraints.
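
Comparisons of this kind can be reproduced with a small timing harness; the sketch below is illustrative only (the best/worst labels match the bubble/insertion family of sorts, and sort_fn is any function taking a list):

    import random, time

    def benchmark(sort_fn, sizes=(1_000, 10_000, 100_000)):
        # Time a sort on random (average), ascending (best for many simple
        # sorts) and descending (worst) inputs of several sizes.
        for n in sizes:
            base = [random.randint(0, n) for _ in range(n)]
            for label, data in (("average", base),
                                ("best", sorted(base)),
                                ("worst", sorted(base, reverse=True))):
                t0 = time.perf_counter()
                sort_fn(list(data))        # copy so each run is independent
                print(f"n={n:>7} {label:>7}: {time.perf_counter() - t0:.4f}s")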
Conference Paper
Sorting is a technique by which elements are arranged in a particular order following some characteristic or law [1]. In this paper we present an algorithm called Relative Concatenate Sort, which is based on the idea of Selection Sort but divides the list into two halves. After dividing the list, it takes the average of both halves. Comparing this average with the elements, it sorts both subarrays and then puts them together to obtain the final sorted list.
Article
Full-text available
This paper proposes and evaluates an approach to facilitate semantic interoperability between ontologies built in the SHIQ description logic language, in an attempt to overcome the heterogeneity problem of ontologies. The structural definition of ontologies is used as a key point to predict their similarities. Based on SHIQ General Concept Inclusion, the ontologies to be mapped are translated into hierarchical trees, and a graph matching technique is used to find similarities between the trees. Similarity between concepts is predicted based on their level in the hierarchy and their logical definition. Semantic similarities between concepts are evaluated by putting more emphasis on the logical operators used in defining concepts, with less reference to syntactic similarity analysis. The obtained results show that a pure structural comparison, based mainly on the logical operators used in defining ontology concepts, provides a better approximation than a comparison combining the logical and syntactic similarity analyses evaluated with the edit distance function.
Conference Paper
Full-text available
In this article we propose a novel sorting algorithm based on comparing the arithmetic mean with each item in the list. Running cost analysis and results obtained after various implementations are also provided with the intention of comparing the efficiency of the proposed mechanism with other existing sorting methodologies.
Hoare, C. A. R. "Partition: Algorithm 63," "Quicksort: Algorithm 64," and "Find: Algorithm 65." Comm. ACM 4(7), 321-322, 1961.
Seymour Lipschutz. Theory and Problems of Data Structures, Schaum's Outline Series: International Edition. McGraw-Hill, 1986. ISBN 0-07-099130-8. pp. 322-323, Section 9.3: Insertion Sort.
R. Sedgewick, Algorithms in C++. Reading, Massachusetts: Addison-Wesley, 1992.
Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. pp. 138-141, Section 5.2.3: Sorting by Selection.