Content uploaded by Narasimha Karumanchi

Author content

All content in this area was uploaded by Narasimha Karumanchi on Nov 30, 2021

Content may be subject to copyright.

A preview of the PDF is not available


... This is due to the wide variety of storage devices and the significant increase in the volume of data generated today from different sources such as social networks, business transactions, and many other areas. The main factor affecting how efficiently information can be searched and retrieved is the way the data is arranged [4]. Hashing is one of the efficient data retrieval algorithms for searching for an element among a collection of elements in a hash table T. Hashing arranges keys in the hash table T using a hash function h(key), which defines both the location at which to insert a key and the location at which to look for it in T. This is done by mapping an item to a bucket or slot in the hash table [1,2,3], achieving constant time O(1) complexity to find and insert an item. ...
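The bucket-mapping idea above can be sketched as a minimal hash table with chaining; the bucket count and the use of Python's built-in hash() as h(key) are illustrative choices, not taken from the cited papers.

```python
# Minimal hash table with chaining: h(key) maps a key to a bucket in T,
# giving expected O(1) insert and search.

class HashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _h(self, key):
        # h(key): maps a key to a bucket index in T
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._h(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update an existing key in place
                return
        bucket.append((key, value))       # expected O(1) on average

    def search(self, key):
        for k, v in self.buckets[self._h(key)]:
            if k == key:
                return v
        return None
```

Colliding keys land in the same bucket's list, so the worst case degrades toward O(n); the average case stays O(1) under a well-distributing hash function.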

... However, if an algorithm contains neither iteration nor recursion, then its running time does not depend on the input size. Therefore, the running time of the algorithm is constant, O(1) [4]. In this research we were able to design an algorithm that has neither iteration nor recursion. ...
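As an illustration of this point, a function whose body is a single closed-form expression, with no loop or recursion, runs in constant time regardless of n:

```python
def sum_first_n(n):
    # Closed-form formula: replaces an O(n) accumulation loop
    # with a single O(1) arithmetic expression.
    return n * (n + 1) // 2
```

The running time is the same whether n is 10 or 10 billion, which is exactly the "no dependency on input size" property described above.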

The rapid development of various applications in networking, business, medicine, education, and other domains that rely on basic data access operations such as insert, edit, delete, and search makes data structures vital in providing efficient methods for the day-to-day operations of those numerous applications. One of the major problems in those applications is achieving constant time to search for a key in a collection. Researchers have discovered a number of methods over the years that attempt to achieve this, with differing performance behaviors. This work evaluated these methods and found that almost all of the existing methods take non-constant time for adding and searching a key. In this work, we designed a multi-index hashing algorithm that handles collisions in a hash table T efficiently and achieves constant time O(1) for searching and adding a key. Our method employs two levels of hashing using pattern extraction, h1(key) and h2(key). The second hash function, h2(key), is used for handling collisions in T. We also eliminated the wasted slots in the search space T, which is another problem associated with the existing methods.
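The two-level scheme can be sketched as follows; the concrete functions h1 and h2 here are simple placeholder hashes for illustration, not the pattern-extraction functions defined in the paper, and a full scheme would also need to handle collisions in the secondary table.

```python
# Sketch of two-level hashing: h1(key) picks a primary slot; on collision,
# h2(key) picks a slot in a secondary table instead of probing or chaining.

SIZE = 16

def h1(key):
    # placeholder primary hash (not the paper's pattern extraction)
    return hash(key) % SIZE

def h2(key):
    # placeholder secondary hash, independent of h1 for most keys
    return (hash(key) // SIZE) % SIZE

primary = [None] * SIZE
secondary = [None] * SIZE

def add(key):
    i = h1(key)
    if primary[i] is None or primary[i] == key:
        primary[i] = key
    else:
        secondary[h2(key)] = key   # collision handled by the second hash

def search(key):
    # at most two O(1) probes: one per level
    if primary[h1(key)] == key:
        return True
    return secondary[h2(key)] == key
```

Both add and search touch at most two fixed slots, which is the sense in which the lookup cost stays O(1) rather than growing with the number of colliding keys.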

... respectively, by using the openpyxl package of Python [11]. The common feature is extracted from the features stored in 'DictionaryFeatures.xlsx' by using the Longest Common Subsequence (LCS) algorithm [12,13,14,15,16], and the common feature is written to the Excel file 'CommonFeature.xlsx' using the openpyxl package of Python. ...
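The LCS step referenced above is the standard dynamic-programming algorithm; a minimal string-valued version (independent of the Excel plumbing) looks like this:

```python
# Longest Common Subsequence via dynamic programming: dp[i][j] holds an
# LCS of the first i characters of a and the first j characters of b.

def lcs(a, b):
    m, n = len(a), len(b)
    dp = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + a[i]   # extend the diagonal LCS
            else:
                # keep the longer of the two adjacent solutions
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]
```

Storing strings in the table keeps the sketch short; a production version would store lengths and backtrack, using O(mn) time either way.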

... For basic terminology related to algorithms and their time complexity, we refer the reader to [6]. Recall that the edge-connectivity κ′(G) of a graph G is the size of a minimum cut. ...

A signed graph $\Sigma=(G,\sigma)$ is said to be parity signed if there exists a bijection $f : V(G) \rightarrow \{1,2,...,|V(G)|\}$ such that $\sigma(uv)=+$ if and only if $f(u)$ and $f(v)$ are of the same parity, where $uv$ is an edge of $G$. The rna number of a graph $G$, denoted $\sigma^{-}(G)$, is the minimum number of negative edges among all possible parity signed graphs over $G$. The rna number is also equal to the minimum size of a cut whose two sides are nearly equal. In this paper, for the generalized Petersen graph $P(n,k)$, we prove that $3 \leq \sigma^{-}(P(n,k)) \leq n$ and that these bounds are sharp. The exact value of $\sigma^{-}(P(n,k))$ is determined for $k=1,2$. Some famous generalized Petersen graphs, namely the Petersen graph $P(5,2)$, the Dürer graph $P(6,2)$, the Möbius–Kantor graph $P(8,3)$, the Dodecahedron $P(10,2)$, the Desargues graph $P(10,3)$, and the Nauru graph $P(12,5)$, are also treated. We show that the minimum order of a $(4n-1)$-regular graph having rna number one is bounded above by $12n-2$. The sharpness of this upper bound is also shown for $n=1$. We also show that the minimum order of a $(4n+1)$-regular graph having rna number one is $8n+6$. Finally, for any simple connected graph of order $n$, we propose an $O(2^n + n^{\lfloor \frac{n}{2} \rfloor})$ time algorithm for computing its rna number.
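Since only the parity of the labels $f(v)$ matters, the rna number can be brute-forced by minimizing the cut over all nearly balanced vertex bipartitions, per the characterization above. A small exponential-time sketch (not the paper's proposed algorithm):

```python
# Brute-force rna number: choose which ceil(n/2) vertices receive odd
# labels, count edges whose endpoints have different parity (negative
# edges), and take the minimum over all such nearly balanced cuts.
from itertools import combinations

def rna_number(vertices, edges):
    n = len(vertices)
    best = len(edges)
    for odd_side in combinations(vertices, (n + 1) // 2):
        odd = set(odd_side)
        cut = sum(1 for u, v in edges if (u in odd) != (v in odd))
        best = min(best, cut)
    return best
```

This enumerates $\binom{n}{\lceil n/2 \rceil}$ bipartitions, so it is only feasible for small graphs, but it matches the "minimum cut with nearly equal sides" definition directly.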

... Merge sort, like quick sort, is also based on the algorithmic technique known as divide and conquer: divide the list into two halves, recursively sort both halves, and then merge the two sorted sublists [12]. Merge sort splits the list into two halves, then each half is conquered separately [13]. The advantages of merge sort are that it can be used for both internal and external sorting and that it is a stable algorithm. ...
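The split–sort–merge steps described above can be written directly:

```python
# Merge sort by divide and conquer: split the list, recursively sort
# each half, then merge the two sorted halves.

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge step: taking ties from the left half keeps the sort stable
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The merge also makes the external-sorting use case visible: it only ever consumes the two halves front-to-back, so the "halves" can just as well be sorted runs streamed from disk.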

The analysis of algorithms is a subject that has always aroused enormous inquisitiveness. It helps us determine the most efficient algorithm in terms of the time and space consumed. There are various methods of calculating the complexity of an algorithm; in general, a suitable approach is to carry out a run-time analysis of the algorithm. The present study documents a comparative analysis of seven different sorting algorithms, viz. bubble sort, selection sort, insertion sort, shell sort, heap sort, quick sort, and merge sort. The implementation is carried out in Visual Studio C# by creating a graphical user interface to measure the running time of these seven algorithms.
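The paper's measurements use a C# GUI; the same kind of run-time comparison can be sketched in a few lines of Python with a wall-clock timer, here contrasting an O(n²) bubble sort with the built-in O(n log n) sort:

```python
# Run-time comparison of two sorts on the same random input,
# timed with time.perf_counter.
import random
import time

def bubble_sort(a):
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.randint(0, 10_000) for _ in range(2_000)]

t0 = time.perf_counter()
bubble_result = bubble_sort(data)
t_bubble = time.perf_counter() - t0

t0 = time.perf_counter()
builtin_result = sorted(data)        # Timsort, O(n log n)
t_builtin = time.perf_counter() - t0

print(f"bubble: {t_bubble:.4f}s, built-in: {t_builtin:.4f}s")
```

Absolute timings depend on the machine, but the gap between the quadratic and the n log n algorithm widens rapidly as the input grows, which is the behavior such a comparative study visualizes.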

... Bubble sort [22]: α_Tesla = cn², β_Tesla = cn², O(2n²) ...

This paper presents and puts forward an execution technique that could potentially address the needs of Centralized Self-Organizing Network (SON) use cases, considering the high data load and the need for quick processing of network-wide data. The key challenges faced in Centralized SON use cases are processing Key Performance Indicators (KPIs) of the network quickly and catering to the needs of an evolving network topology. KPIs are generally derived from network events and performance counters that are periodically collected from the multi-technology, multi-vendor, and multi-layer heterogeneous network. The needs of the SON use cases are addressed by applying the well-known Map-Reduce [1] programming model together with newly emerging container-based virtualization [2, 3] techniques. To demonstrate the validity of the proposed execution technique, the performance of generic algorithms used by SON use cases is evaluated. The evaluation results illustrate that these execution techniques can achieve significantly higher performance with commodity hardware.
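The Map-Reduce pattern applied to KPI derivation can be sketched in miniature; the record format and the "drop_rate" KPI below are illustrative, not taken from the paper, and a real deployment would shard the map and reduce phases across workers.

```python
# Miniature map-reduce for KPI aggregation: map emits (key, value) pairs
# per raw counter record, shuffle groups by key, reduce aggregates.
from collections import defaultdict

records = [
    {"cell": "A", "kpi": "drop_rate", "value": 0.02},
    {"cell": "B", "kpi": "drop_rate", "value": 0.05},
    {"cell": "A", "kpi": "drop_rate", "value": 0.04},
]

def map_phase(record):
    # emit one (key, value) pair per record; key = (cell, KPI name)
    yield (record["cell"], record["kpi"]), record["value"]

def reduce_phase(values):
    # aggregate all values collected for one key (here: the mean)
    return sum(values) / len(values)

grouped = defaultdict(list)          # the "shuffle" step
for rec in records:
    for key, val in map_phase(rec):
        grouped[key].append(val)

result = {key: reduce_phase(vals) for key, vals in grouped.items()}
```

Because each key's values reduce independently, the reduce phase parallelizes trivially, which is what makes the model fit the high-volume, periodically collected counter data described above.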

As one of the authors (VC) noted a few days ago, we have just published a book with a publisher in Jakarta, entitled Koinomics: Relational Economics to Bring Pancasila to Life (Jakarta: Bina Warga, 2022). At the end of the book, a senior writer, who is also a lecturer and practitioner at a school of management studies in Jakarta, wrote a reflective closing as follows: "We have long observed that the various fields of science are each engrossed in playing with their own methods, techniques, and logic. What they focus on is rarely enriched by the focus and findings of other areas entirely beyond their concerns. It is as if they are imprisoned in the strict scientific rules of their own fields, such as their methods of theology, their diction, and the traditions of their respective logics and basic assumptions, which are rarely explored anew. This book is different. To use the metaphor of a river, the flow of the water carries with it gravel, leaves, roots, stones, and sand. As a result, going with the flow is not always easy, especially when the author is someone who bravely tries to find the relationship between one thing in a field of science and other things outside that field, which at first glance have absolutely nothing to do with each other, and makes such leaps. Moreover, linking theology or spirituality with economics, mysticism, the workings of the brain, mathematics, and so on will indeed stretch the active reader's capacity for appreciation." We can then ask: how do we approach a creative process? Judging from the latest articles, it seems that a "sane-acceptable level of insanity" is needed, or perhaps it can be called, within the framework of Fuzzy Logic or Neutrosophic Logic, a "Neutrosophic Degree of Madness in Creativity Theoretical Development."

Relational database management systems and the SQL language itself have no built-in mechanisms for storing and managing hierarchical structures. There are several different ways to represent trees in relational databases. This paper considers methods of modeling hierarchical data structures in the form of Adjacency Lists and Closure Tables. For each method, there are examples of queries that solve typical problems encountered when working with tree structures: finding all descendant leaves, all descendants and ancestors of a given node, moving a node under another parent node, and deleting a node with all of its descendants. The possibility of using recursive queries to display the entire tree in the Adjacency List model is considered. If the depth of the tree is unknown, or it is not known at what level a specified element resides, the query cannot be built by standard means of the SELECT statement; instead, one must create a recursive procedure or write a recursive query. To avoid recursion when outputting the whole tree, all nodes of a subtree, or the path from a given node to the root, hierarchical data structures can instead be modeled in the form of a connection table (Closure Table). This complicates the process of adding a new node and of moving a node under another parent; in this case, to simplify the writing of queries, it is suggested to create triggers that build or rebuild the links. Given that there is sometimes a need to store dependent, and in particular hierarchical, structures in a relational database, one must be able to choose a model for storing such data. The choice of method for a specific problem is influenced by the speed of the basic operations on trees. Exploring the different options for organizing tree structures in SQL makes it possible to understand and choose the best way to build such a structure in a relational database for a specific task.
All SQL queries in this paper were created and tested for Oracle relational databases.
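The Adjacency List model and its recursive query can be sketched end to end with Python's built-in sqlite3 (the paper targets Oracle, whose recursive-CTE and CONNECT BY syntax differs slightly; the table name and sample data here are illustrative):

```python
# Adjacency List model: each row stores its parent's id; a recursive CTE
# walks the tree to any depth, which a plain SELECT cannot do.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tree (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
INSERT INTO tree VALUES (1, NULL, 'root'),
                        (2, 1,    'child A'),
                        (3, 1,    'child B'),
                        (4, 2,    'grandchild');
""")

# all descendants of node 1 (including itself), with their depth
rows = conn.execute("""
WITH RECURSIVE descendants(id, name, depth) AS (
    SELECT id, name, 0 FROM tree WHERE id = 1
    UNION ALL
    SELECT t.id, t.name, d.depth + 1
    FROM tree t JOIN descendants d ON t.parent_id = d.id
)
SELECT id, name, depth FROM descendants
""").fetchall()
```

Moving a subtree in this model is a single `UPDATE tree SET parent_id = ? WHERE id = ?`, which is exactly the operation the Closure Table model makes more expensive in exchange for recursion-free reads.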
