Book
PDF Available

Fundamentals of Data Structures in C++

Authors: Ellis Horowitz, Sartaj Sahni, and Dinesh P. Mehta
... The remaining voxels are moved to FAR and their distances are set to a value much larger than the extents of the bounding box (BB) along all axes. To speed up the computation, CLOSE is implemented as a priority queue [22] so that the voxel in CLOSE with the smallest distance is always at the top of CLOSE. ...
... Proof: Without loss of generality, we assume that the 1D problem domain is the x-axis and that the distance field propagates from −∞ to ∞, as shown in part (a) of Fig. 18. Hence, the governing equation (the Eikonal equation of a distance field) simplifies to (∂d/∂x)² = 1, ...
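The CLOSE/FAR bookkeeping described in these snippets matches the narrow-band scheme of the fast marching method: FAR voxels carry a sentinel distance, and CLOSE is a min-priority queue keyed on tentative distance. Below is a minimal C++ sketch of that scheme on a 1D grid; the grid size, seed placement, and simple d_j = d_i + h update are illustrative assumptions, not details taken from the cited paper.

```cpp
#include <cstdio>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

int main() {
    const int n = 11;                        // number of voxels (illustrative)
    const double h = 1.0;                    // grid spacing
    const double FAR = std::numeric_limits<double>::max();  // sentinel distance

    std::vector<double> dist(n, FAR);        // all voxels start in FAR
    std::vector<bool> known(n, false);       // voxels whose distance is final

    // (distance, index) min-heap: the voxel with the smallest tentative
    // distance is always at the top of CLOSE.
    using Entry = std::pair<double, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> close;

    dist[5] = 0.0;                           // seed voxel (assumed)
    close.push({0.0, 5});

    while (!close.empty()) {
        auto [d, i] = close.top();
        close.pop();
        if (known[i]) continue;              // skip stale heap entries
        known[i] = true;
        for (int j : {i - 1, i + 1}) {       // 1D neighbors
            if (j < 0 || j >= n || known[j]) continue;
            double cand = d + h;             // 1D update: d_j = d_i + h
            if (cand < dist[j]) {
                dist[j] = cand;
                close.push({cand, j});       // move voxel from FAR into CLOSE
            }
        }
    }
    for (int i = 0; i < n; ++i) std::printf("%d: %.1f\n", i, dist[i]);
    return 0;
}
```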
Article
Full-text available
Layered Manufacturing (LM) techniques have been successfully employed to construct scanned objects from 3D medical image data sets. The printed physical models are useful tools for anatomical exploration, surgical planning, teaching, and related medical applications. Before fabricating scanned objects, we first have to build watertight geometrical representations of the target objects from the medical image data sets. Many algorithms have been developed for this task. However, some of these methods require extra effort to resolve ambiguity problems and to fix broken surfaces, while others cannot generate legitimate models for LM. To alleviate these problems, this article presents a modeling procedure that efficiently creates geometrical representations of objects from CT-scan and MRI data sets. The proposed procedure first extracts the iso-surface of the target object from the input data set. It then converts the iso-surface into a 3D image and filters this 3D image with morphological operators to remove dangling parts and noise. Next, a distance field is computed in the 3D image space to approximate the surface of the target object, and the distance field is smoothed to soften sharp corners and edges. Finally, a Boundary Representation (B-rep) is built from the distance field to model the target object. Compared with conventional modeling techniques, the proposed method possesses the following advantages: (1) it reduces the human effort involved in the geometrical modeling process; (2) it can construct both solid and hollow models of the target object, and the wall thickness of hollow models is adjustable; (3) the resultant B-rep is guaranteed to form a watertight solid geometry, which is printable with 3D printers; and (4) the procedure allows users to tune the precision of the geometrical model to fit the available computational resources.
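The morphological clean-up step in this pipeline can be illustrated with a small sketch: a binary opening (erosion followed by dilation) removes thin dangling parts and isolated noise. The 2D slice, the 3×3 structuring element, and the image representation below are illustrative assumptions; the article itself operates on full 3D images.

```cpp
#include <vector>

using Image = std::vector<std::vector<int>>;  // 1 = object, 0 = background

// 3x3 erosion: a pixel survives only if its whole neighborhood is object.
Image erode(const Image& in) {
    const int h = (int)in.size(), w = (int)in[0].size();
    Image out(h, std::vector<int>(w, 0));
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x) {
            int all = 1;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    all &= in[y + dy][x + dx];
            out[y][x] = all;
        }
    return out;
}

// 3x3 dilation: a pixel is set if any neighbor is object.
Image dilate(const Image& in) {
    const int h = (int)in.size(), w = (int)in[0].size();
    Image out(h, std::vector<int>(w, 0));
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x) {
            int any = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    any |= in[y + dy][x + dx];
            out[y][x] = any;
        }
    return out;
}

// Morphological opening: erosion followed by dilation removes small
// protrusions and isolated noise while roughly preserving large regions.
Image open(const Image& in) { return dilate(erode(in)); }
```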
... Various traditional texts [1][2][3][4][5] have introduced the list, map, tree and graph as fundamental data structures in computer science. These data structures form the foundation for many curricula around the world and they are well studied by students of computer science, engineering, and information systems. ...
... Interestingly, both Knuth [1] and Wirth [4] provide the fundamental building blocks for creating map-like structures (known as dictionaries, hash tables, key-value pairs, or indexes), but neither lists them as first-class data structures. A similar approach is evident in the early Horowitz and Sahni text "Fundamentals of Data Structures" [5]. Later work by Horowitz and Sahni also doesn't explicitly mention maps, but it does provide a thorough introduction to graphs. ...
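As the snippet notes, map-like structures can be assembled from the building blocks those texts do cover, namely arrays and linked lists. A minimal separate-chaining hash table in C++ might look like the following sketch; the fixed bucket count and the string-to-int mapping are illustrative assumptions.

```cpp
#include <functional>
#include <list>
#include <optional>
#include <string>
#include <vector>

// A minimal map built from an array of linked lists (separate chaining).
class ChainedMap {
    static const std::size_t kBuckets = 64;  // fixed table size (assumed)
    std::vector<std::list<std::pair<std::string, int>>> table_;

    std::size_t bucket(const std::string& key) const {
        return std::hash<std::string>{}(key) % kBuckets;
    }

public:
    ChainedMap() : table_(kBuckets) {}

    void put(const std::string& key, int value) {
        auto& chain = table_[bucket(key)];
        for (auto& kv : chain)
            if (kv.first == key) { kv.second = value; return; }  // update
        chain.push_back({key, value});                           // insert
    }

    std::optional<int> get(const std::string& key) const {
        for (const auto& kv : table_[bucket(key)])
            if (kv.first == key) return kv.second;
        return std::nullopt;                                     // absent
    }
};
```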
Preprint
A new data structure called the Nano Version Control (NanoVC) repo is shown to emerge from computer science and the software industry. This data structure is used to effectively encode entities at the nano scale of the modelling spectrum, and it gives us a natural place to encode the provenance (data lineage) of each entity. The nature of the repo provides an intuitive representation of the history of the entity, and the benefits of the repo are transferable to the provenance data, thus simplifying certain provenance information that can be computed on demand. A simple algorithm for preserving manual changes in light of new data and changing algorithms utilizes the commit history in the repo to give us a sustainable way to merge information while keeping the provenance intact. This approach has been demonstrated successfully in the field and is an exciting new avenue for further research in the area of data quality.
... The desired value of each nodal attribute is a minimum value that is retrieved and stored in P_idmin, P_ijtsmin, and P_ifcmin for the distance, joining-timestamp, and failure-count nodal attributes, respectively. The root of a min-heap can be retrieved in constant time, O(1); hence the selection of candidate nodes takes constant time [14]. In the first phase of the election process, a set of candidate nodes C is generated. ...
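The constant-time claim follows directly from the heap property: the minimum element of a min-heap always sits at the root, so reading it requires no traversal. A small C++ illustration with std::priority_queue follows; the node fields are assumptions made for the sketch, not the cited paper's actual attributes.

```cpp
#include <cstdio>
#include <queue>
#include <vector>

struct Node {
    int id;
    double distance;   // the attribute being minimized (illustrative)
};

// Comparator so that the node with the smallest distance is on top
// (std::priority_queue is a max-heap by default, so we invert the order).
struct ByDistance {
    bool operator()(const Node& a, const Node& b) const {
        return a.distance > b.distance;
    }
};

int main() {
    std::priority_queue<Node, std::vector<Node>, ByDistance> heap;
    heap.push({1, 4.2});
    heap.push({2, 1.7});
    heap.push({3, 3.9});

    // top() is an O(1) read of the root; no search is needed.
    std::printf("candidate: node %d (distance %.1f)\n",
                heap.top().id, heap.top().distance);
    return 0;
}
```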
... Greedy algorithms are a family of algorithms that employ practical methods that are not guaranteed to be optimal. Hierarchical clustering algorithms are part of that family (Horowitz and Sahni 1983). The steps of the algorithm are as follows. ...
... C. Clustering algorithm 1) Literature Review: Greedy algorithms are a family of algorithms that employ practical methods that are not guaranteed to be optimal. Hierarchical clustering algorithms are part of that family [22]. The steps of the algorithm are as follows. ...
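The greedy step in agglomerative hierarchical clustering is the repeated merging of the currently closest pair of clusters, a locally optimal choice that is never revisited. A minimal single-linkage sketch in C++ follows; the 1D points, the target cluster count, and the absolute-difference distance are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

int main() {
    // Each cluster starts as a single point (illustrative 1D data).
    std::vector<std::vector<double>> clusters = {{0.0}, {0.4}, {5.0}, {5.3}, {9.0}};
    const std::size_t target = 2;          // stop at this many clusters

    while (clusters.size() > target) {
        // Greedy step: find the pair of clusters with the smallest
        // single-linkage distance (closest pair of members).
        std::size_t bi = 0, bj = 1;
        double best = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < clusters.size(); ++i)
            for (std::size_t j = i + 1; j < clusters.size(); ++j)
                for (double a : clusters[i])
                    for (double b : clusters[j]) {
                        double d = std::fabs(a - b);
                        if (d < best) { best = d; bi = i; bj = j; }
                    }
        // Merge the closest pair; this local decision is final, which is
        // what makes the method greedy and not guaranteed to be optimal.
        clusters[bi].insert(clusters[bi].end(),
                            clusters[bj].begin(), clusters[bj].end());
        clusters.erase(clusters.begin() + bj);
    }
    for (std::size_t i = 0; i < clusters.size(); ++i) {
        std::printf("cluster %zu:", i);
        for (double v : clusters[i]) std::printf(" %.1f", v);
        std::printf("\n");
    }
    return 0;
}
```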
Preprint
Full-text available
Though it is presented as a hypothesis, we discuss the historical events that have led to the rise of cryptocurrencies as a legitimate new asset class. We also discuss issues around cryptocurrency fundamentals as a means of explaining the lack of sectors, which exist for other asset classes such as equities or commodities. To address this issue, we propose a new methodology based on a hybrid approach combining k-Means and Hierarchical Clustering (HC) with alternative data gathered from web scraping. We then reintroduce two mathematical models, namely Risk Parity (RP) and Momentum. We finally test our geopolitical hypothesis through a long-only strategy using RP, and test our abstract sectorisation through a Long/Short (L/S) strategy.
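One common way to implement Risk Parity is the naive inverse-volatility weighting, where each asset's weight is proportional to the reciprocal of its volatility so that each asset contributes roughly equal risk. Whether the preprint uses this exact variant is an assumption of the sketch below, as are the asset volatilities.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Annualized volatilities of four hypothetical crypto assets.
    std::vector<double> vol = {0.80, 0.95, 1.20, 0.60};

    // Naive risk parity: w_i = (1 / vol_i) / sum_j (1 / vol_j).
    double norm = 0.0;
    for (double v : vol) norm += 1.0 / v;

    for (std::size_t i = 0; i < vol.size(); ++i)
        std::printf("asset %zu: weight %.3f\n", i, (1.0 / vol[i]) / norm);
    return 0;
}
```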
... In other words, a sorting method that generates the permutation σs is called stable [16]. ...
Conference Paper
Time complexity and memory complexity are significant for all algorithms, especially sorting algorithms. Using the right sorting algorithm for our data can decrease time and memory usage. The sorting problem has attracted a great deal of attention because efficient sorting is essential for optimizing other algorithms as well. Most of the time, a sorting algorithm consists of two nested loops, which can determine the complexity of the algorithm; however, other factors, such as the amount of data and the data types, play an important role as well. Thus, by using the right sorting algorithm, we can make more efficient use of time and memory. In this paper, we apply different sorting algorithms to different data types in order to determine the optimum use of time and memory for these algorithms. If we know what kind of dataset we have, this knowledge can help us choose a more efficient algorithm, sometimes approaching linear time. One of the interesting results concerns shell sort. By checking each type of data, such as prime, Fibonacci, odd, and even datasets, we can learn more about each sorting algorithm, and we can determine which algorithms need additional memory for sorting and which do not. To study time and memory allocation, we use varied data, such as numbers in random or reverse order, and both large and small datasets. By comparing the results, the optimal algorithm in various scenarios may be recognized.
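The stability property quoted above is easy to observe directly: std::stable_sort preserves the input order of records with equal keys, whereas std::sort may not. A small C++ check follows; the record fields are illustrative.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Record {
    int key;      // sort key
    char tag;     // marker recording the original input order
};

int main() {
    std::vector<Record> v = {{2, 'a'}, {1, 'b'}, {2, 'c'}, {1, 'd'}};

    // Stable sort: equal keys keep their original relative order, so
    // the permutation it produces is the "stable" one of [16].
    std::stable_sort(v.begin(), v.end(),
                     [](const Record& x, const Record& y) { return x.key < y.key; });

    for (const Record& r : v) std::printf("(%d,%c) ", r.key, r.tag);
    std::printf("\n");   // prints: (1,b) (1,d) (2,a) (2,c)
    return 0;
}
```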
... • clustering radius, i.e. the maximum distance at which a PS can be considered part of a cluster. The clustering approach implemented in the software relies on a depth-first search (Horowitz and Sahni, 1976) to identify the connected components among PS points. The clustering radius here equals 28 m, i.e. twice the ground resolution of Sentinel-1. ...
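Clustering by connected components under a distance threshold can be expressed as a depth-first search over an implicit graph whose edges link points closer than the radius. A compact C++ sketch follows; the 2D coordinates are illustrative, and the 28 m radius is taken from the snippet.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Depth-first search from point i, labeling every point reachable
// through chains of neighbors closer than `radius`.
void dfs(std::size_t i, const std::vector<Point>& pts,
         std::vector<int>& label, int cluster, double radius) {
    label[i] = cluster;
    for (std::size_t j = 0; j < pts.size(); ++j) {
        if (label[j] != -1) continue;   // already assigned to a cluster
        double d = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y);
        if (d <= radius) dfs(j, pts, label, cluster, radius);
    }
}

int main() {
    std::vector<Point> pts = {{0, 0}, {10, 5}, {100, 100}, {110, 95}, {400, 0}};
    std::vector<int> label(pts.size(), -1);   // -1 = unvisited
    const double radius = 28.0;               // two Sentinel-1 ground pixels

    int clusters = 0;
    for (std::size_t i = 0; i < pts.size(); ++i)
        if (label[i] == -1) dfs(i, pts, label, clusters++, radius);

    for (std::size_t i = 0; i < pts.size(); ++i)
        std::printf("point %zu -> cluster %d\n", i, label[i]);
    return 0;
}
```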
Article
Full-text available
Multi-Temporal Interferometric Synthetic Aperture Radar (MTInSAR) data offer valuable support to landslide mapping and landslide activity estimation in mountain environments, where in situ measurements are sometimes difficult to gather. Nowadays, the interferometric approach is increasingly used for wide-area analyses, providing useful information for risk-management actors but at the same time requiring considerable effort to correctly interpret what the satellite data are telling us. In this context, hot-spot-like analyses, which select and highlight the fastest-moving areas in a region of interest, are a good operative solution for reducing the time needed to inspect a whole interferometric dataset composed of thousands or millions of points. In this work, we go beyond the concept of MTInSAR data as simple mapping tools by proposing an approach whose final goal is to quantify the potential loss experienced by an element at risk hit by a potential landslide. To do so, it is mandatory to evaluate landslide intensity. Here, we estimate intensity using Active Deformation Areas (ADAs) extracted from Sentinel-1 MTInSAR data. Depending on the localization of each ADA with respect to the urban areas, intensity is derived in two different ways. Once the exposure and vulnerability of the elements at risk are estimated, the potential loss due to a landslide of a given intensity is calculated. We tested our methodology in the eastern Valle d'Aosta (north-western Italy), along four lateral valleys of the Dora Baltea Valley. This territory is characterized by steep slopes and numerous active and dormant landslides. The goal of this work is to develop a regional-scale methodology based on satellite radar interferometry to assess the potential impact of landslides on the urban fabric.
Article
Full-text available
Generation of rooted trees is a major area of research in graph theory. The number of rooted trees grows rapidly with the order of the graph. Many researchers around the world have proposed algorithms of varying efficiency for generating rooted trees. Generating all rooted trees of a given order can also be used to solve other combinatorial optimization problems. In this article, we propose an algorithm, Rooted-Trees, that generates all labelled rooted trees of a given order n and up to a given height h (where 1 ≤ h ≤ n − 1). The algorithm generates the desired rooted trees, which are the spanning trees of a complete graph of order n with one of its vertices taken as the root.
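By Cayley's formula there are n^(n-2) labelled trees on n vertices, hence n^(n-1) labelled rooted trees (n choices of root), which gives a simple sanity check on the output size of any such generator when the height bound h is not restrictive. A short C++ check of that count:

```cpp
#include <cstdint>
#include <cstdio>

// Number of labelled rooted trees on n vertices: n^(n-1), i.e. Cayley's
// n^(n-2) labelled trees times n choices of root. Overflows quickly,
// so keep n small.
std::uint64_t rooted_tree_count(unsigned n) {
    std::uint64_t r = 1;
    for (unsigned i = 0; i + 1 < n; ++i) r *= n;   // multiply n-1 times
    return r;
}

int main() {
    for (unsigned n = 1; n <= 8; ++n)
        std::printf("n = %u: %llu labelled rooted trees\n",
                    n, static_cast<unsigned long long>(rooted_tree_count(n)));
    return 0;
}
```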