## No full-text available

To read the full-text of this research,

you can request a copy directly from the authors.

We use polyharmonic Radial Basis Functions (RBFs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. An object's surface is defined implicitly as the zero set of an RBF fitted to the given surface data. Fast methods for fitting and evaluating RBFs allow us to model large data sets, consisting of millions of surface points, by a single RBF---previously an impossible task. A greedy algorithm in the fitting process reduces the number of RBF centers required to represent a surface and results in significant compression and further computational advantages. The energy-minimisation characterisation of polyharmonic splines results in a "smoothest" interpolant. This scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. Holes are smoothly filled and surfaces smoothly extrapolated. We use a non-interpolating approximation when the data is noisy. The functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. This helps generate uniform meshes, and we show that the RBF representation has advantages for mesh simplification and remeshing applications. Results are presented for real-world rangefinder data.

... regression, classification, and time series prediction; Orr, 1996), is being increasingly used because of its network simplicity and training efficiency. (ii) The RBFN offers the advantages of fast learning and the ability to detect outliers during estimation, and implements an input-output mapping using a linear combination of radially symmetric functions (Carr et al., 2001). (iii) Contrary to traditional control-point-based models, the RBFN-based method does not require a tedious and challenging parametrization process. ...
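As a concrete illustration of point (iii)'s "linear combination of radially symmetric functions", a minimal interpolating Gaussian RBF network can be written as follows; one basis function per sample and a fixed kernel width `sigma` are simplifying assumptions of this sketch:

```python
import numpy as np

def rbfn_fit(centers, y, sigma=1.0):
    """Interpolating Gaussian RBF network: one radially symmetric basis
    function per center, with weights solved from the square
    interpolation system Phi @ w = y."""
    r2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-r2 / (2.0 * sigma ** 2))
    w = np.linalg.solve(Phi, y)

    def predict(x):
        # Linear combination of the radially symmetric basis functions.
        phi = np.exp(-np.sum((centers - x) ** 2, axis=-1) / (2.0 * sigma ** 2))
        return float(phi @ w)
    return predict
```

By construction the network reproduces the training values exactly at the centers; between centers it blends them smoothly according to the kernel width.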

... The parameters w_ik and v_k are obtained by solving equation (12) using the methods of Carr et al. (2001) and Hryniowski and Wong (2019). ...

With the rapid advancement of multimaterial additive manufacturing (AM) technology, heterogeneous lattice structures (HLSs) comprising multiphase materials with gradual variations have become feasible and accessible to industry. However, multimaterial AM capabilities have far outpaced the capability of design systems to model, and thus design, novel HLSs. To further expand the design space for the utilization of AM technology, this paper proposes a method for modeling HLSs with complex geometries and smooth material transitions. The geometric modeling and material modeling problems are formulated in a rigorous and computationally effective manner. The geometric complexity of an HLS is significantly reduced by a semi-analytical unit cell decomposition strategy that splits the HLS into material units: struts and connectors. The smooth material transitions of the connectors associated with multimaterial struts are realized by interpolating the discrete material property values defined at control points using a multiquadric radial basis function network. © The Author(s) 2021. Published by Oxford University Press on behalf of the Society for Computational Design and Engineering.

... The 3D model was created using the software Leapfrog Geo (Seequent Limited), in which the lithological contact surfaces are defined using polyharmonic Radial Basis Functions (RBF; Carr et al., 2001). RBFs are a class of implicit functions used to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes (Carr et al., 2001;Alcaraz et al., 2011). ...

... Implicit (3D) geological modeling (see Carr et al., 2001; Wellmann and Caumon, 2018; Stewart et al., 2014) has been mostly used in mineral research since the early 2000s (Cowan et al., 2003; Jessell et al., 2014), and allows a large data set to be used in a more efficient and dynamic way, providing continuous improvement of the model. ...

The Pitangui Greenstone Belt (PGB) is a Meso- to Neoarchean metavolcano-sedimentary sequence within the Pitangui Synclinorium, located in the northwestern portion of the Archean-Paleoproterozoic metallogenic province of the Quadrilátero Ferrífero, southern São Francisco craton (Brazil). The PGB hosts important gold mines, such as the Turmalina Complex, as well as recently discovered deposits, such as São Sebastião. The PGB has been mapped at a detailed scale, but there is still little information about its subsurface configuration. This paper presents gravity surveys, constrained by geological, structural, borehole, aeromagnetic and seismic data, aiming to provide a better understanding of the 3D geometry of this granite-greenstone terrane. The treatment of the available data resulted in Bouguer anomaly maps, 2D gravity modelling and a 3D integrated geological-geophysical model. The Bouguer gravity anomalies of the region present high amplitudes associated with the PGB, alternating with low zones related to the surrounding granite-gneiss rocks. The profiles and the model suggest that the maximum depth of the greenstone belt is approximately 5 km in its central portion, and that it is strangled by granitic intrusions in its southernmost region. The PGB is about 100 km long, but most of it is covered by Neoproterozoic sediments of the Bambuí Group and Quaternary deposits, which significantly increase its metallogenic potential, especially in regions with few mineral exploration works. The 3D model contributes to the understanding of the tectonic framework and evolution of the PGB, showing the spatial relation with the surrounding granite-gneiss basement, intrusions, and younger sedimentary covers, providing valuable insights about their depth and vertical and lateral geometry. Additionally, these new data can become a valuable tool for developing strategies to select deep mineral targets, expanding the exploratory potential of this mineral province.

... To avoid the meaningless trivial solution of Eq. (10), additional points outside and inside the closed wildfire boundary are appended. A common practice for extending the points (t) is to generate off-surface points in their normalized external and internal normal directions (Carr et al. 2001), which are defined as ...

... δ is a small step size, and its value can be selected following the rules in Zhu and Wathen (2015) and Cuomo et al. (2017). To make the reconstructed boundary relatively insensitive to the projection distance mentioned in Eq. (6), care must be taken when projecting the off-surface points x_i^+ and x_i^- along the normals to ensure that they do not intersect other parts of the surface. Thus, the closest points to these newly constructed points must be the corresponding base points that generated them (Carr et al. 2001). The set of points ...
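The construction quoted above can be sketched directly: each boundary point is offset by ±δ along its unit normal, and an off-surface point is kept only when its nearest boundary point is the base point that generated it. The function names and the brute-force nearest-neighbor check below are illustrative:

```python
import numpy as np

def off_surface_points(X, N, delta):
    """Generate off-surface constraint points X[i] +/- delta * n_i and
    keep only those whose nearest boundary point is their own base
    point, guarding against normals crossing other parts of the curve."""
    Nn = N / np.linalg.norm(N, axis=1, keepdims=True)  # unit normals
    outside, inside = [], []
    for i, (x, n) in enumerate(zip(X, Nn)):
        for sign, bucket in ((+1, outside), (-1, inside)):
            p = x + sign * delta * n
            # Validity check: nearest boundary point must be the base point.
            j = int(np.argmin(np.linalg.norm(X - p, axis=1)))
            if j == i:
                bucket.append(p)
    return np.array(outside), np.array(inside)
```

For large point sets the linear scan would normally be replaced by a spatial index (e.g. a k-d tree), but the acceptance rule is the same.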

This paper tackles the problem of dynamic wildfire boundary tracking with UAVs. Wildfire boundary is treated as the zero-level set curve of an implicit function and is approximated with radial basis functions. Its propagation is modeled with the Hamilton–Jacobi equation with an arbitrary initial boundary as the input. To navigate UAVs to the wildfire boundary, an analytical velocity vector field, whose integral curves converge to the wildfire boundary, is constructed on the basis of the typical radial basis function thin-plate spline. Computer simulations with a single UAV and multiple UAVs have been conducted for the evaluation of the proposed solution, and numerical results show that the proposed algorithm can ensure the successful tracking of an arbitrarily shaped wildfire boundary.

... Furthermore, different types of functions are used to approximate distance functions for extracting zero-level sets. Carr et al. [17] used radial basis functions to model large data sets. Compactly supported radial basis functions were used in [18] to reduce computational effort. ...

Porous structures are widely used in various industries because of their excellent properties. Porous surfaces have no thickness and should be thickened into sheet structures for further fabrication. However, conventional methods for generating sheet structures are inefficient for porous surfaces because of the complexity of the internal structures. In this study, we propose a novel method for generating porous sheet structures directly from point clouds sampled on a porous surface. The generated sheet structure is represented by an implicit B-spline function, which ensures smoothness and closure. Moreover, based on persistent homology theory, the topology of the generated porous sheet structure can be controlled, and a reasonable range for the uniform thickness of the sheet structure can be calculated to ensure manufacturability and pore existence. Finally, the implicit B-spline-represented sheet structures are sliced directly with the marching squares algorithm, and the contours can be used for 3D printing. Experimental results show the superiority of the developed method in efficiency over traditional methods.

... After rigidly aligning the source vertices to the target mesh, non-rigid registration of the source mesh onto the target mesh is applied [38]. First, we mapped source vertices to the target mesh using a non-rigid transformation, modeled as a sum of Gaussian Radial Basis Functions with a variable defining the width of the Gaussian function [39][40][41][42]. Second, the previous mapping was refined by a local elastic deformation, defined as a weighted locally rigid transformation aligning source and target mesh [38]. ...

Objectives

The statistical shape model (SSM) is a model of the geometric properties of a set of shapes based on statistical shape analysis. The SSM develops an average model of several objects using an automated algorithm that excludes the operator's subjectivity. The aim of this study was to develop a three-dimensional (3D) SSM of normal dentition to provide virtual templates for efficient treatment.

Materials and methods

Dental casts were obtained from participants with normal dentition. After acquiring the 3D models, the SSMs of the individual teeth and the whole dental arch were generated by iterative closest point (ICP)-based rigid registration and point correspondences, respectively. Then, the individual tooth SSMs were aligned to the whole dental arch SSM using ICP-based registration to generate an average model of normal dentition.

Results

The generated 3D SSM showed specific morphological features of normal dentition similar to those previously reported. Moreover, on measuring the arch dimensions, all values in this study were similar to those previously reported for normal dentition.

Conclusions

The 3D SSM of normal dentition may increase the diagnostic efficiency of orthodontic treatments by providing a visual objective. It can also be used as a 3D template in various fields of dentistry.

Clinical relevance

Our SSM of normal dentition provides both quantitative and qualitative information on the 3D morphology of teeth and dental arches, which may provide valuable information for 3D virtual setup, bracket fabrication, and aligner treatment.

... When considering the interpolation method, Yang et al. proposed a Kriging approach based on a divide-and-conquer strategy, linearly combining multiple prediction results to improve on the classical Co-Kriging method, in which all the input data are fitted at once [29]. Moreover, Carr et al. proposed an effective method to recreate uniform meshes from point cloud data through improved interpolation based on the Radial Basis Function (RBF) [30], which is similar to Dual Kriging [27]. These 3D interpolation methods provide modeling results that better reflect the real world but are limited in reflecting borehole data, which have a vertically dense distribution. ...

Traditional inverse distance weighting (IDW) interpolation is a process employed to estimate unknown values based on neighborhoods in 2D space. Proposed in this study is an improved IDW interpolation method that uses 3D search neighborhoods for effective interpolation of vertically connected observation data, such as water level, depth, and altitude. Borehole data are collected by subsurface boring activities and exhibit heterogeneous spatial distribution, as they are densely populated near civil engineering or construction sites. In addition, they are 3D spatial data that show different subsurface characteristics by depth. The subsurface characteristics observed in this way are used as core data in spatial modeling in fields such as geology modeling, estimation of groundwater table distribution, global warming assessment, and seismic liquefaction assessment, among others. Therefore, this study proposed a seismic liquefaction assessment and mapping workflow using an improved IDW application by combining a geographic information system (GIS) (ArcGIS (Esri, Redlands, CA, USA)), a NURBS-based 3D CAD system (Rhino/Grasshopper (Robert McNeel & Associates, Seattle, WA, USA)), and a numerical analysis system (MATLAB (MathWorks, Natick, MA, USA)). The 3D neighborhood search was conducted by B-rep-based 3D topology analysis, and the mapping was done in a 2.5D environment by combining the voxel layer, DEM, and aerial images. The experiment was performed by collecting data in Songpa-gu, Seoul, which has the highest population density among the OECD countries. The results of the experiment showed between 7 and 105 areas with liquefaction potential, depending on the search distance and the method of approach. Finally, this study improved users' accessibility to the interpolation results by producing a 3D web app that uses a REST API based on the OGC I3S Standards. Such an approach can be applied effectively in spatial modeling that uses 3D observation data and, in the future, can contribute to the expansion of 3D GIS applications.

... Traditional Methods. Traditional point cloud completion methods are usually divided into those based on geometric structure information [27] and those based on template retrieval [28]. The method based on geometric structure usually reconstructs the surface of the point cloud manifold to repair the incomplete mesh or fills the hole by using the neighborhood information of the missing point cloud [29]. ...

The point cloud data from actual measurements are often sparse and incomplete, making it difficult to apply them directly to visual processing and 3D reconstruction. The point cloud completion task predicts missing parts based on a sparse and incomplete point cloud model. However, the disordered and unstructured characteristics of point clouds make it difficult for neural networks to capture detailed spatial structures and topological relationships, making point cloud completion a challenging task. Existing point cloud completion methods can only predict the rough geometry of the point cloud, but cannot accurately predict the local details. To address the shortcomings of existing point cloud completion methods, this paper describes a novel network for adaptive point cloud growth, MAPGNet, which generates a sparse skeletal point cloud using the skeletal features in the composite encoder, and then adaptively grows the local point cloud in the spherical neighborhood of each point using the growth features to complete the details of the point cloud in two steps. In this paper, the Offset Transformer module is added to the completion process to enhance the contextual connection between point clouds. As a result, MAPGNet improves the quality of the generated point clouds and recovers more local detail information. Comparing our algorithm with other state-of-the-art algorithms on different datasets, experimental results show that our algorithm has advantages in dense point cloud completion.

... In the shader art community, analytic implicit representations have been used to render everything from simple primitives to complex scenes and fractal objects [14]. On the other hand, in the machine learning community, earlier approaches relied upon radial basis functions (RBFs) [15] and octrees [16] to express SDFs. The authors of [17] use points sampled on an implicit surface to control its shape. ...

In recent years, implicit surface representations through neural networks that encode the signed distance have gained popularity and have achieved state-of-the-art results in various tasks (e.g. shape representation, shape reconstruction, and learning shape priors). However, in contrast to conventional shape representations such as polygon meshes, the implicit representations cannot be easily edited and existing works that attempt to address this problem are extremely limited. In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks, allowing free-form editing. Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists. In order to localize the desired surface deformations, we regulate the network by using a copy of it to sample the previously expressed surface. We introduce a novel framework for simulating sculpting-style surface edits, in conjunction with interactive surface sampling and efficient adaptation of network weights. We qualitatively and quantitatively evaluate our method in various different 3D objects and under many different edits. The reported results clearly show that our method yields high accuracy, in terms of achieving the desired edits, while at the same time preserving the geometry outside the interaction areas.

... Reconstruction-based implicit conversion. By treating a B-Rep model as a collection of densely sampled points, many implicit-based reconstruction techniques [Berger et al. 2017; Carr et al. 2001; Ohtake et al. 2003] can be used to convert point clouds into implicit representations. For recovering shape features, Kazhdan et al. [2013] use an indicator field to represent a 3D shape and approximate features by penalizing the difference between the surface gradients and the oriented point normals. ...

We present a novel implicit representation -- neural halfspace representation (NH-Rep), to convert manifold B-Rep solids to implicit representations. NH-Rep is a Boolean tree built on a set of implicit functions represented by the neural network, and the composite Boolean function is capable of representing solid geometry while preserving sharp features. We propose an efficient algorithm to extract the Boolean tree from a manifold B-Rep solid and devise a neural network-based optimization approach to compute the implicit functions. We demonstrate the high quality offered by our conversion algorithm on ten thousand manifold B-Rep CAD models that contain various curved patches including NURBS, and the superiority of our learning approach over other representative implicit conversion algorithms in terms of surface reconstruction, sharp feature preservation, signed distance field approximation, and robustness to various surface geometry, as well as a set of applications supported by NH-Rep.

... Recently, shallow neural networks, such as radial basis function (RBF) Networks, have been used to solve a range of applications in the creation, morphing, and repair of geometrical meshes [7], [11], [12]. We propose a novel algorithm based on RBF Networks to remove initial overclosures and achieve a user-desired minimum gap between the meshes while ensuring smooth surfaces for 2D and 3D mesh types. ...

In biomechanics, geometries representing complicated organic structures are routinely segmented from sparse volumetric data or morphed from template geometries, resulting in initial overclosure between adjacent geometries. In FEA, these overclosures cause numerical instability and inaccuracy in contact analysis. Several techniques exist to fix overclosures, but most suffer from several drawbacks. This work introduces a novel automated, iterative algorithm to remove overclosure and create a desired minimum gap for 2D and 3D finite element models. The RBF Network algorithm is presented through its four major steps for removing the initial overclosure. Additionally, the algorithm was validated against conventional nodal adjustment using two test cases. The first case compared the ability of each algorithm to remove differing levels of overclosure between two deformable muscles and the effects on mesh quality. The second case used a non-deformable femur and a deformable distal femoral cartilage geometry with initial overclosure to test both algorithms and observe the effects on the resulting contact FEA. In the first case study, the RBF Network successfully removed all overclosures. In the second case, the nodal adjustment method failed to create a usable FEA model, while the RBF Network had no such issue. This work proposed an algorithm for removing initial overclosures prior to FEA with improved performance over conventional nodal adjustment, especially in complicated situations and those involving 3D elements. The work can be included in existing FEA modeling workflows to improve FEA results in situations involving sparse volumetric segmentation and mesh morphing. The algorithm has been implemented in MATLAB, and the source code is publicly available at the following GitHub repository: https://github.com/thor-andreassen/femors

... In addition to DSI, the potential field method (PFM) is another class of implicit approaches (Lajaunie et al., 1997;Jessell, 2001;McInerney et al., 2007;Phillips et al., 2007). PFM typically formulates structural modeling as a dual co-kriging interpolation (Chiles et al., 2004;Calcagno et al., 2008) or as a radial basis function interpolation (Carr et al., 2001). In comparison to DSI, although the models are evaluated on a volumetric mesh for a visual purpose, PFM does not use any mesh grids when computing the scalar function. ...

Implicit structural modeling using sparse and unevenly distributed data is essential for various scientific and societal purposes, ranging from natural source exploration to geological hazard forecasts. Most advanced implicit approaches formulate structural modeling as least squares minimization or spatial interpolation, using various mathematical methods to solve for a scalar field that optimally fits all the inputs under an assumption of smooth regularization. However, these approaches may not reasonably represent complex geometries and relationships of structures and may fail to fit a global structural trend when the known data are too sparse or unevenly distributed. Additionally, solving a large system of mathematical equations with iterative optimization solvers could be computationally expensive in 3-D. To deal with these issues, we propose an efficient deep learning method using a convolution neural network to create a full structural model from the sparse interpretations of stratigraphic interfaces and faults. The network is beneficial for the flexible incorporation of geological empirical knowledge when trained by numerous synthetic models with realistic structures that are automatically generated from a data simulation workflow. It also presents an impressive characteristic of integrating various types of geological constraints by optimally minimizing a hybrid loss function in training, thus opening new opportunities for further improving the structural modeling performance. Moreover, the deep neural network, after training, is highly efficient for the generation of structural models in many geological applications. The capacity of our approach for modeling complexly deformed structures is demonstrated by using both synthetic and field datasets in which the produced models can be geologically reasonable and structurally consistent with the inputs.

... Another popular implicit algorithm is Moving Least Squares (MLS) [59][60][61][62], which is also widely used to de-noise point clouds and perform locally weighted least-squares fitting of pointsets. Other possible types of implicit functions use B-Splines [63,64], radial basis functions [65,66], wavelets [67] or trigonometric polynomials [68]. A drawback of implementing implicit reconstruction methods is that surface details of the original model may get lost [52]. ...

Raytracing-based methods are widely used for quantifying irradiation on building surfaces. Urban 3D surface models are necessary input for raytracing simulations, which can be generated from open-source point cloud data with the help of surface reconstruction algorithms. In research and engineering practice, various algorithms are being used for this purpose; each leading to different mesh topologies and corresponding performance. This paper compares the impacts of four different reconstruction algorithms by investigating their performance using DAYSIM raytracing simulations. The analysis is carried out for five configurations with various urban morphologies. Results show that the reconstructed models consistently underestimate the shading influence due to geometrical shrinkages that emerge from the various model generation procedures. The explicit algorithms, with Generic Delaunay a notable example, have better performance with less embedded error than the implicit algorithms in both daily and annual simulations. Results also show that diffuse irradiance is responsible for larger contributions to the overall error than direct components. This effect becomes more prominent when modeling reflected irradiation in urban environments. Additionally, the work shows that solar elevation and shading geometry types also affect the error magnitude. The paper concludes by providing reconstruction algorithm selection criteria for photovoltaic practitioners and urban energy planners.

... For parametric modeling and force density computations on these immersed elastic structures, we utilize meshless interpolation based on radial basis functions (RBFs), which have been used for generating differentiation matrices for the solution of PDEs [30], surface reconstruction [31][32][33][34], and in the context of regularized Stokeslets to represent interfaces and approximate their geometries [35]. More relevantly, RBFs have been used in the context of the IB method to reconstruct platelet surfaces from point clouds and to compute Lagrangian force densities in 2D simulations [36], where the authors find that the RBF-IB method generates smoother Eulerian forces with a smaller set of points than traditional IB methods. ...

We present a new method for the geometric reconstruction of elastic surfaces simulated by the immersed boundary method with the goal of simulating the motion and interactions of cells in whole blood. Our method uses parameter-free radial basis functions for high-order meshless parametric reconstruction of point clouds and the elastic force computations required by the immersed boundary method. This numerical framework allows us to consider the effect of endothelial geometry and red blood cell motion on the motion of platelets. We find red blood cells to be crucial for understanding the motion of platelets, to the point that the geometry of the vessel wall has a negligible effect in the presence of RBCs. We describe certain interactions that force the platelets to remain near the endothelium for extended periods, including a novel platelet motion that can be seen only in 3-dimensional simulations that we term “unicycling.” We also observe red blood cell-mediated interactions between platelets and the endothelium for which the platelet has reduced speed. We suggest that these behaviors serve as mechanisms that allow platelets to better maintain vascular integrity.

... Because the radial basis function (RBF) interpolation method [39] has a better fitting effect on scattered data, this method has been widely used in geological modeling, shape design, and other fields for the interpolation or approximation of scattered data. In this study, for the implicit surface interpolation process of an orebody, we adopted the radial basis function interpolation method. ...

The 3D refinement modeling of the orebody provides an important guarantee for the estimation of the resources and reserves of an ore deposit. Implicit modeling techniques can effectively improve the efficiency of orebody modeling and facilitate the dynamic updating of the model. However, due to the problems of ambiguity and missing features during implicit surface interpolation and implicit surface reconstruction, the mesh models of orebodies obtained by means of implicit modeling techniques do not easily snap to the geological feature points and feature polylines obtained based on geological sampling data. In essence, all models are inaccurate, but geological sampling data are very useful and valuable, which should be accurately and effectively involved in the orebody modeling process. This would help to improve the reliability of resource estimation and mining design. The main contribution of this paper is the proposal of a method for accurately snapping orebody features after implicit modeling. This method enables the orebody model to snap accurately to the geological feature points and feature polylines and realizes the accurate clipping of the model boundary. We tested the method with real geological datasets. The results showed that the method is applicable and effective when the geological feature points and feature polylines are close to those of the orebody mesh model and the shape trend changes little, and the model can thus meet the practical application requirements of mines.

... This representation has been used for interpolation and approximation in various practical applications. Examples include but are not limited to neural networks (Park and Sandberg 1991), object recognition (Pauli, Benkwitz, and Sommer 1995), computer graphics (Carr et al. 2001), and medical imaging (Carr, Fright, and Beatson 1997). ...

Optimization via continuation method is a widely used approach for solving nonconvex minimization problems. While this method generally does not provide a global minimum, empirically it often achieves a superior local minimum compared to alternative approaches such as gradient descent. However, theoretical analysis of this method is largely unavailable. Here, we provide a theoretical analysis that provides a bound on the endpoint solution of the continuation method. The derived bound depends on a problem specific characteristic that we refer to as optimization complexity. We show that this characteristic can be analytically computed when the objective function is expressed in some suitable basis functions. Our analysis combines elements of scale-space theory, regularization and differential equations.

... On the liver, boundary contour points are placed and interpolated using cubic splines. The radial basis function [61] is used to generate the smooth surface when the user sets 6 to 8 points. This smooth surface passes through all contours and interpolates across all the images. ...

Segmentation of a liver in computed tomography (CT) images is an important step toward quantitative biomarkers for a computer-aided decision support system and precise medical diagnosis. To overcome the difficulties that come across the liver segmentation that are affected by fuzzy boundaries, stacked autoencoder (SAE) is applied to learn the most discriminative features of the liver among other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for the segmentation of a liver from CT images using SAE. Unlike the traditional machine learning methods, instead of anticipating pixel by pixel learning, our algorithm utilizes the patches to learn the representations and identify the liver area. We preprocessed the whole dataset to get the enhanced images and converted each image into many overlapping patches. These patches are given as input to SAE for unsupervised feature learning. Finally, the learned features with labels of the images are fine tuned, and the classification is performed to develop the probability map in a supervised way. Experimental results demonstrate that our proposed algorithm shows satisfactory results on test images. Our method achieved a 96.47% dice similarity coefficient (DSC), which is better than other methods in the same domain.

... Following early approaches such as the local fitting of tangent planes [14] or using radial basis functions [15], respective developments particularly focused on projection-based methods, sparsity-based methods and non-local methods. Projection-based Methods. ...

We present incomplete gamma kernels, a generalization of Locally Optimal Projection (LOP) operators. In particular, we reveal the relation of the classical localized $ L_1 $ estimator, used in the LOP operator for surface reconstruction from noisy point clouds, to the common Mean Shift framework via a novel kernel. Furthermore, we generalize this result to a whole family of kernels that are built upon the incomplete gamma function and each represents a localized $ L_p $ estimator. By deriving various properties of the kernel family concerning distributional, Mean Shift induced, and other aspects such as strict positive definiteness, we obtain a deeper understanding of the operator's projection behavior. From these theoretical insights, we illustrate several applications ranging from an improved Weighted LOP (WLOP) density weighting scheme and a more accurate Continuous LOP (CLOP) kernel approximation to the definition of a novel set of robust loss functions. These incomplete gamma losses include the Gaussian and LOP loss as special cases and can be applied for reconstruction tasks such as normal filtering. We demonstrate the effects of each application in a range of quantitative and qualitative experiments that highlight the benefits induced by our modifications.

... There is a vast range of algorithms for creating implicit surfaces interpolating or approximating point data. These can be categorized into several classes, such as methods based on local approximations [6,14], algebraic splines [7,4], radial basis functions [2,15], moving least squares [17], Poisson surface reconstruction [8], or even neural networks [19]. All of these methods are very general, and can be used on unorganized point sets. ...

The I-patch is a multi-sided surface representation, defined as a combination of implicit ribbon and bounding surfaces, whose pairwise intersections determine the natural boundaries of the patch. Our goal is to show how a collection of smoothly connected I-patches can be used to approximate triangular meshes. We start from a coarse, user-defined vertex graph which specifies an initial subdivision of the surface. Based on this, we create ribbons that tightly fit the mesh along its edges in both positional and tangential sense, then we optimize the free parameters of the patch to better approximate the interior. If the surfaces are not sufficiently accurate, the network needs to be refined; here we exploit that the I-patch construction naturally supports T-nodes. We also describe a normalization method that nicely approximates the Euclidean distance field, and can be efficiently evaluated. The capabilities and limitations of the approach are analyzed through several examples.

The world-class Jiaodong gold province, located in the Jiaodong Peninsula, eastern North China Craton, hosts >5,000 t of gold resources. The major gold mineralization in this area is mainly controlled by NE-NNE faults, such as the Zhaoping, Jiaojia, and Sanshandao faults. While intensive efforts have been devoted to investigating the architecture of the three detachment faults, their geometry at depth remains highly uncertain and hotly debated due to the scarcity of observations, the non-uniqueness of geophysical inversion, and the underutilization of geological knowledge. In this work, the deep three-dimensional (3D) geometry of the detachment faults is reconstructed from a Bayesian inference perspective. In the Bayesian framework, multi-source information and prior geological knowledge are combined into an integrated probabilistic model, and the posterior probability distribution of the detachment faults is inferred. The resulting posterior distribution permits an estimation of the uncertainty (information entropy) of the 3D fault geometry and of the expected shape undulations relative to a prior reference model. More specifically, it is demonstrated that specific and detailed 3D fault models can be inferred by tracking surfaces with maximum information entropy of the geological units. The reconstructed models exhibit detailed 3D geometry of the fault surfaces, and their reliability is validated by a recent drill hole at ∼2,300 m depth. Based on the reconstructed 3D models, the dip variations are analyzed and the possible stress directions at deep zones of the three faults are discussed. The shape features of the reconstructed faults are also quantified, which indicates that fault sections with a steep-to-gentle transition in dip angle are closely associated with the gold mineralization.
The 3D gold prospectivity mapping, using the extracted spatial features as predictive indicators, highlights seven deep prospecting targets, which are generally consistent with the gently dipping zones and located beneath the known orebodies.

We present a novel implicit representation --- neural halfspace representation (NH-Rep), to convert manifold B-Rep solids to implicit representations. NH-Rep is a Boolean tree built on a set of implicit functions represented by the neural network, and the composite Boolean function is capable of representing solid geometry while preserving sharp features. We propose an efficient algorithm to extract the Boolean tree from a manifold B-Rep solid and devise a neural network-based optimization approach to compute the implicit functions. We demonstrate the high quality offered by our conversion algorithm on ten thousand manifold B-Rep CAD models that contain various curved patches including NURBS, and the superiority of our learning approach over other representative implicit conversion algorithms in terms of surface reconstruction, sharp feature preservation, signed distance field approximation, and robustness to various surface geometry, as well as a set of applications supported by NH-Rep.

To reconstruct meshes from the widely available 3D point cloud data, implicit shape representation is among the primary choices as an intermediate form due to its superior representation power and robustness in topological optimizations. Although different parameterizations of the implicit fields have been explored to model the underlying geometry, there is no explicit mechanism to ensure the fitting tightness of the surface to the input. In response, we present NeuralGalerkin, a neural Galerkin-method-based solver designed for reconstructing highly accurate surfaces from input point clouds. NeuralGalerkin internally discretizes the target implicit field as a linear combination of a set of spatially-varying basis functions inferred by an adaptive sparse convolution neural network. It then differentiably solves, in closed form within a single forward pass, a variational problem that incorporates both positional and normal constraints from the data, highly respecting the raw input points. The reconstructed surface extracted from the implicit interpolants is hence very accurate and incorporates useful inductive biases benefiting from the training data. Extensive evaluations on various datasets demonstrate our method's promising reconstruction performance and scalability.

We explore a new idea for learning based shape reconstruction from a point cloud, based on the recently popularized implicit neural shape representations. We cast the problem as a few-shot learning of implicit neural signed distance functions in feature space, which we approach using gradient based meta-learning. We use a convolutional encoder to build a feature space given the input point cloud. An implicit decoder learns to predict signed distance values given points represented in this feature space. Setting the input point cloud, i.e. samples from the target shape function's zero level set, as the support (i.e. context) in few-shot learning terms, we train the decoder such that it can adapt its weights to the underlying shape of this context with a few (5) tuning steps. We thus combine two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning. Our numerical and qualitative evaluation shows that in the context of implicit reconstruction from a sparse point cloud, our proposed strategy, i.e. meta-learning in feature space, outperforms existing alternatives, namely standard supervised learning in feature space, and meta-learning in Euclidean space, while still providing fast inference.

We present PriFit, a semi‐supervised approach for label‐efficient learning of 3D point cloud segmentation networks. PriFit combines geometric primitive fitting with point‐based representation learning. Its key idea is to learn point representations whose clustering reveals shape regions that can be approximated well by basic geometric primitives, such as cuboids and ellipsoids. The learned point representations can then be re‐used in existing network architectures for 3D point cloud segmentation, and improves their performance in the few‐shot setting. According to our experiments on the widely used ShapeNet and PartNet benchmarks, PriFit outperforms several state‐of‐the‐art methods in this setting, suggesting that decomposability into primitives is a useful prior for learning representations predictive of semantic parts. We present a number of ablative experiments varying the choice of geometric primitives and downstream tasks to demonstrate the effectiveness of the method.

The generation of triangle meshes from point clouds, i.e. meshing, is a core task in computer graphics and computer vision. Traditional techniques directly construct a surface mesh using local decision heuristics, while some recent methods based on neural implicit representations try to leverage data-driven approaches for this meshing process. However, it is challenging to define a learnable representation for triangle meshes of unknown topology and size and for this reason, neural implicit representations rely on non-differentiable post-processing in order to extract the final triangle mesh. In this work, we propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations. Our method produces the mesh in an iterative fashion, which makes it applicable to shapes of various scales and adaptive to the local curvature of the shape. Furthermore, our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods. Experiments demonstrate the comparable reconstruction performance and favorable mesh properties over baselines.

Given a dataset of 3D points in which there is a hole, i.e., a region with a lack of information, we develop a method providing a surface that fits the dataset and fills the hole. The filling patch is required to fulfill a prescribed volume condition. The fitting-filling function consists of radial basis functions that minimize an energy functional involving the fitting of the dataset, the volume constraint of the filling patch, and the fairness of the function. We give a convergence result and present some graphical and numerical examples.
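As a minimal illustration of RBF-based fitting and hole filling (omitting the paper's volume constraint and fairness functional, and using SciPy's stock thin-plate-spline interpolant rather than the authors' formulation): sample a smooth bivariate function on scattered points, cut out a circular hole, and let the energy-minimising spline extrapolate across it.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Scattered samples of a smooth test function on [-2, 2]^2 ...
pts = rng.uniform(-2.0, 2.0, size=(400, 2))
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])

# ... with a circular "hole": drop all samples within radius 0.5 of the origin.
keep = np.linalg.norm(pts, axis=1) > 0.5
pts, vals = pts[keep], vals[keep]

# Polyharmonic (thin-plate spline) RBF interpolant; smoothing > 0 would give a
# non-interpolating approximation, the usual choice for noisy data.
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=0.0)

# The energy-minimising spline extrapolates smoothly across the hole.
pred = rbf(np.array([[0.0, 0.0]]))[0]
true = np.sin(0.0) + np.cos(0.0)  # = 1.0
```

The interpolant passes through every retained sample, and the value recovered at the hole's centre stays close to the underlying function, which is the qualitative behaviour the paper builds on before adding its volume constraint.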

We introduce a neural implicit framework that exploits the differentiable properties of neural networks and the discrete geometry of point-sampled surfaces to approximate them as the level sets of neural implicit functions.
To train a neural implicit function, we propose a loss functional that approximates a signed distance function, and allows terms with high-order derivatives, such as the alignment between the principal directions of curvature, to learn more geometric details. During training, we consider a non-uniform sampling strategy based on the curvatures of the point-sampled surface to prioritize points with more geometric details. This sampling implies faster learning while preserving geometric accuracy when compared with previous approaches.
We also use the analytical derivatives of a neural implicit function to estimate the differential measures of the underlying point-sampled surface.

The structure from motion (SfM) algorithm is widely used for point cloud reconstruction. However, one drawback of conventional SfM-based methods is that the final point sets may contain holes and noise, despite their rich color information. At the same time, point clouds collected with coded structured light are dense and smooth but lack color information. To obtain better point clouds by combining the two, this paper proposes a bidirectional point cloud complementation method for three-dimensional (3D) hole repair that combines coded structured light and SfM. First, a point feature histogram, used to automatically extract point cloud features and determine the initial control point set, is constructed according to the surface characteristics. Then, principal component analysis (PCA) is used for rough registration, followed by an improved iterative closest point algorithm for accurate registration. Second, a nearest-point search is used to fuse the color information of the SfM point cloud with the coordinates of the coded structured-light point cloud. Finally, using the reverse engineering software Geomagic Studio, the repaired point cloud model is subdivided into triangular meshes to reconstruct the 3D face surface. Experimental results and data analysis show that this method can accurately perform two-way repair and fusion of the structured-light and SfM point clouds, and achieves hole repair of the face model with complete surface reconstruction.

In this paper, we propose the implicit randomized progressive-iterative approximation (IR-PIA) method for curve and surface reconstruction. By introducing an effective probability criterion, the IR-PIA method selects the working elements of the collocation matrix to adjust the control coefficients. It is proved that the sequence of curves and surfaces generated by the control coefficients converges to the least-norm results in expectation. The IR-PIA method reduces the computational complexity and speeds up curve and surface reconstruction compared with the implicit progressive-iterative approximation (I-PIA) method (Hamza et al., 2020). Numerical examples show that the IR-PIA method can be more efficient than the I-PIA method.

We propose Parametric Gauss Reconstruction (PGR) for surface reconstruction from point clouds without normals. Our insight builds on the Gauss formula in potential theory, which represents the indicator function of a region as an integral over its boundary. By viewing surface normals and surface element areas as unknown parameters, the Gauss formula interprets the indicator as a member of some parametric function spaces. We can solve for the unknown parameters using the Gauss formula and simultaneously obtain the indicator function. Our method bypasses the need for accurate input normals as required by most existing non-data-driven methods, while also exhibiting superiority over data-driven methods since no training is needed. Moreover, by modifying the Gauss formula and employing regularization, PGR also adapts to difficult cases such as noisy inputs, thin structures, sparse or nonuniform points, for which accurate normal estimation becomes quite difficult. Our code is publicly available at https://github.com/jsnln/ParametricGaussRecon.

We present a simple, fast, and smooth scheme to approximate Algebraic Point Set Surfaces using non-compact kernels, which is particularly suited for filtering and reconstructing point sets presenting large missing parts. Our key idea is to consider a moving level-of-detail of the input point set that is adaptive w.r.t. the evaluation location, just as the sample weights are output-sensitive in the traditional moving least squares scheme. We also introduce an adaptive progressive octree refinement scheme, driven by the resulting implicit surface, to properly capture the modeled geometry even far away from the input samples. Similarly to typical compactly-supported approximations, our operator runs in logarithmic time while defining high-quality surfaces even on challenging inputs for which only global optimizations achieve reasonable results. We demonstrate our technique on a variety of point sets featuring geometric noise as well as large holes.

We present a new method for computing a smooth minimum distance function based on the LogSumExp function for point clouds, edge meshes, triangle meshes, and combinations of all three. We derive blending weights and a modified Barnes-Hut acceleration approach that ensure our method approximates the true distance, and is conservative (points outside the zero isosurface are guaranteed to be outside the surface) and efficient to evaluate for all the above data types. This, in combination with its ability to smooth sparsely sampled and noisy data, like point clouds, shortens the gap between data acquisition and simulation, and thereby enables new applications such as direct, co-dimensional rigid body simulation using unprocessed lidar data.
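The core LogSumExp smooth minimum can be sketched in a few lines for the point-cloud case (the paper's blending weights and Barnes-Hut acceleration are omitted; the bare estimator below is a conservative lower bound on the true minimum distance, so a positive value guarantees the query lies off the surface):

```python
import numpy as np

def smooth_min_distance(q, points, alpha=50.0):
    """Smooth minimum distance from query q to a point cloud via LogSumExp:
        d_smooth(q) = -(1/alpha) * log( sum_i exp(-alpha * ||q - p_i||) ).
    As alpha -> infinity this converges to the exact minimum distance; since
    the sum exceeds its largest term, it always underestimates (conservative)."""
    d = np.linalg.norm(points - q, axis=1)
    dmin = d.min()
    # subtract the min before exponentiating for numerical stability
    return dmin - np.log(np.exp(-alpha * (d - dmin)).sum()) / alpha

# Example: 100 points on the unit circle (true distance is 1 from both queries)
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)

d_center = smooth_min_distance(np.array([0.0, 0.0]), circle)
d_out = smooth_min_distance(np.array([2.0, 0.0]), circle, alpha=200.0)
```

From the circle's centre all 100 points are equidistant, so the estimate dips noticeably below the true distance (by log(100)/alpha); from an outside query dominated by one nearest point, it is nearly exact. Increasing `alpha` tightens the bound at the cost of a sharper, less smooth field.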

Neural implicit fields are quickly emerging as an attractive representation for learning based techniques. However, adopting them for 3D shape modeling and editing is challenging. We introduce a method for Editing Implicit Shapes Through Part-Aware Generation, permuted in short as SPAGHETTI. Our architecture allows for manipulation of implicit shapes by means of transforming, interpolating and combining shape segments together, without requiring explicit part supervision. SPAGHETTI disentangles shape part representation into extrinsic and intrinsic geometric information. This characteristic enables a generative framework with part-level control. The modeling capabilities of SPAGHETTI are demonstrated using an interactive graphical interface, where users can directly edit neural implicit shapes. Our code, editing user interface demo and pre-trained models are available at github.com/amirhertz/spaghetti.

Synthesizing photo‐realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanied textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real‐world observations. Neural rendering is a leap forward towards the goal of synthesizing photo‐realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state‐of‐the‐art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D‐consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non‐rigidly deforming objects and scene editing and composition. 
While most of these approaches are scene‐specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state‐of‐the‐art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.

Ceramics analysis, classification, and reconstruction are essential to understanding an archaeological site's history, economy, and art. Traditional methods used by archaeologists are time-consuming and neither reproducible nor repeatable. The results depend on the operator's subjectivity, specialization, personal skills, and professional experience. Consequently, only a few indicative samples with characteristic components are studied, with wide uncertainties. Several automatic methods for analysing sherds have been published in recent years to overcome these limitations. To help all the involved researchers, this paper provides a complete and critical analysis of the state of the art, up to the end of 2021, of the most important published methods on pottery analysis, classification, and reconstruction from a 3D discrete manifold model. To this end, papers in English indexed by the Scopus database are selected using the following keywords: "computer methods in archaeology", "3D archaeology", "3D reconstruction", "3D puzzling", "automatic feature recognition and reconstruction". Additional references complete the list found through the reading of the selected papers. The 125 selected papers, referring only to archaeological potteries, are divided into six groups: 3D digitalization, virtual prototyping, fragment feature processing, geometric model processing of whole-shape pottery, 3D vessel reconstruction from its fragments, classification, and 3D information systems for archaeological pottery visualization and documentation.
In the present review, the techniques considered for these issues are critically analysed to highlight their pros and cons and provide recommendations for future research.

In geometric design, reconstructing an implicit surface with high-quality geometry and expected topology from point clouds has been a challenging problem. However, traditional topological invariants, such as Betti numbers, are not easily used in controlling topology in implicit surface reconstruction. Persistent homology provides a quantitative measurement, i.e., persistence diagram (PD), and allows tracking of pairs of key points that generate and destroy topological holes. In this study, we propose a topology-controllable implicit surface reconstruction method from point clouds based on persistent homology. Specifically, given a point cloud with normals, the signed distance field is first constructed, and then a B-spline function represented distance function is generated by fitting the signed distance values through progressive iterative approximation. By designing a topological target function using the persistent pairs in PDs according to topological priors, the control coefficients of the B-spline function are optimized to extract an implicit surface, i.e., an iso-surface of the B-spline function, with expected topology. Experiments show that the proposed method can reconstruct surfaces with higher topological quality than other reconstruction methods. Moreover, the proposed method can be used to edit the topology of an implicit surface.

In recent years, collaborative human-robot applications have become more and more appealing thanks to the ease of programming cobots and the promise of increased precision and safety. However, combining the two resources (the cobot and the human operator) raises a safety problem, since cobot and human operator must work in the same workspace. To ensure human safety, the distance between robot and operator must be assessed, and the robot must adapt accordingly, either by reducing its velocity or by modifying its trajectory. In this paper, we propose a new online method to adapt the robot's trajectory to human movements using a single depth camera. The algorithm eliminates the robot from the scene using a simple calibration process. It then interpolates the shared workspace, captured by the depth camera, using Radial Basis Functions (RBFs). The result is a continuous function representative of the risk of collision with obstacles on the plane. Its gradient is used as a repulsive potential in the Artificial Potential Field (APF) method to generate the path. This method eliminates the need to calculate the distance between operator and robot, since it is intrinsically considered in the potentials. Results show the validity of the method.

The task of explicit surface reconstruction is to generate a surface mesh by interpolating a given point cloud. Explicit surface reconstruction is necessary when the point cloud is required to appear exactly on the surface. However, for non-perfect inputs, e.g., lacking normals, low density, irregular distribution, thin and tiny parts, high genus, etc., a robust explicit reconstruction method that can generate a high-quality manifold triangulation is missing.
We propose a robust explicit surface reconstruction method that starts from an initial simple surface mesh, alternately performs a Filmsticking step and a Sculpting step, and converges when the surface mesh interpolates all input points (except outliers) and remains stable. The Filmsticking step minimizes the geometric distance between the surface mesh and the point cloud by iteratively applying a restricted Voronoi diagram technique to the surface mesh, while the Sculpting step bootstraps the Filmsticking iteration out of local minima by applying appropriate geometric and topological changes to the surface mesh.
Our algorithm is fully automatic and produces high-quality surface meshes for non-perfect inputs that are typically considered to be challenging for prior state-of-the-art. We conducted extensive experiments on simulated scans and real scans to validate the effectiveness of our approach.

In this paper, we implement an automatic modeling method for narrow-vein-type ore bodies based on Boolean combination constraints. Unlike the direct interpolation approach, we construct the implicit functions of the hanging-wall and footwall surfaces separately; the combined implicit function representing the complete ore body model is then formed using the Boolean combination constraints. Finally, the complete ore body is obtained by a Boolean operation on the hanging-wall and footwall surfaces. To model complex vein surfaces, modeling rules are developed that allow geological engineers to specify vein thickness constraints and vein boundary constraints. The method works for narrow-vein-type ore bodies (e.g., vein gold deposits and mineral sand deposits), which are large in two dimensions and narrow in the third. Taking radial basis function interpolation as the implicit function, several experiments are carried out using real geological sampling data from mines. The experimental results show that the method is suitable for modeling narrow-vein-type ore bodies.

Axisymmetric disk structures with complex contour curves are widely used in aero-engines, and shape optimization is generally carried out to reduce their stress level. In this paper, a shape optimization method for axisymmetric disks based on radial basis function (RBF) mesh deformation and Laplace smoothing is proposed. The method uses a greedy algorithm to select a reduced set of control points for mesh deformation within the design space. RBF mesh deformation is used to change the axisymmetric contour shape, and after deformation the local mesh quality is monitored and improved by Laplace smoothing. Two illustrative aero-engine examples are carried out to validate the effectiveness of the proposed method: an independent optimization of a turbine disk, and a collaborative optimization of a turbine disk with a deflector, both minimizing the maximum equivalent stress. To improve computational efficiency, a two-dimensional (2D) axisymmetric FE model is established. Compared with the initial designs, the optimized results in the two examples reduce the maximum von Mises stress by 8.02% and 9.25%, respectively. It can be concluded that the proposed method has significant potential in the shape optimization design of axisymmetric disks.

When obtaining three-dimensional (3D) face point cloud data based on structured light, factors related to the environment, occlusion, and illumination intensity lead to holes in the collected data, which affect subsequent recognition. In this study, we propose a hole-filling method based on stereo-matching technology combined with B-splines. The algorithm uses phase information acquired during raster projection to locate holes in the point cloud, simultaneously extracting boundary point sets. By registering the face point cloud obtained with the stereo-matching algorithm against the data collected by raster projection, supplementary points can be obtained at the holes. The shape of a B-spline curve can then be roughly described by a few key points, and control points are placed in the hole area as key points for the iterative surface reconstruction. Simulations using smooth ceramic cups and human face models showed that our model can accurately reproduce details and restore complex shapes on the test surfaces. The results indicate the robustness of the method, which is able to fill holes in complex areas, such as the inner side of the nose, without a prior model. The approach also effectively supplements the hole information, and the patched point cloud is closer to the original data. This method could be used across a wide range of applications requiring accurate facial recognition.

The article describes research on the use of local linear smoothing methods to remove artefacts resulting from the lossy compression of digital terrain models (DTMs) of the seabed. In practice, when creating seabed models, a DTM based on a regular grid is most often used. When recording larger surfaces, the amount of data collected in the structure can be very large (millions or even hundreds of millions of points), as discussed by Maleika et al. (2011). In such a case, it is possible to significantly reduce the amount of this data by using lossy compression methods. The vast majority of these methods divide the entire surface into small blocks and compress each of them independently. In the process of reconstruction (decompression), clearly visible distortions called artefacts form at the boundaries (edges) of these blocks. In the study, the author described methods of linear data approximation that enable the removal of distortions at block boundaries in the lossy compression/reconstruction process, while maintaining high model accuracy and compliance with International Hydrographic Organization (IHO) standards. During the research, methods based on polynomials (from the 1st to the 9th degree), linear approximation, cubic approximation, and smoothing spline interpolation were tested. The developed smoothing method was then modified to work locally in places where compression artefacts occur. In the next stage, distortion-dependent smoothing was additionally developed so that the strength of the smoothing would depend on the amount of distortion present. All tests were carried out using three different test surfaces, assessing the obtained results both objectively (calculating the model error at the 95% confidence level) and subjectively (visually assessing the distortions at the boundaries of the compression blocks). The results obtained were presented in numerous figures and tables and interpreted.
Finally, test plots after applying the developed distortion-dependent local smoothing method were shown in order to assess the obtained effects. The experiments presented in the paper and the results obtained show the true potential of linear smoothing methods in removing distortions resulting from the lossy compression of seabed DTMs.
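The abstract above does not give the author's algorithm in detail, so the following is only an illustrative sketch of the general idea: a local least-squares line fit applied in a window straddling a compression-block boundary, replacing the samples there so the step artefact is smoothed while data far from the edge is untouched. The profile values, window size, and step height are invented for the example.

```python
# Illustrative sketch (not the paper's exact method): local linear smoothing
# applied only near a compression-block boundary of a 1-D depth profile.
# The 0.5 m step at the block edge stands in for a decompression artefact.

def local_linear_smooth(depths, centre, half_width):
    """Replace samples within the window by a least-squares line fit."""
    lo, hi = max(0, centre - half_width), min(len(depths), centre + half_width + 1)
    xs = list(range(lo, hi))
    ys = [depths[i] for i in xs]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    out = list(depths)
    for i in xs:
        out[i] = intercept + slope * i
    return out

# Flat seabed profile with a step artefact at the block edge (index 10).
profile = [20.0] * 10 + [20.5] * 10
smoothed = local_linear_smooth(profile, centre=10, half_width=4)

# The jump across the boundary shrinks; samples far from the edge are untouched.
jump_before = abs(profile[10] - profile[9])
jump_after = abs(smoothed[10] - smoothed[9])
print(jump_before, jump_after)
```

A distortion-dependent variant, as described in the abstract, would scale the window width or blend factor by the measured jump rather than using a fixed window.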

In many applications one encounters the problem of approximating surfaces from data given on a set of scattered points in a two-dimensional domain. The global interpolation methods with Duchon's "thin plate splines" and Hardy's multiquadrics are considered to be of high quality; however, their application is limited, due to computational difficulties, to roughly 150 data points. In this work we develop some efficient iterative schemes for computing global approximation surfaces interpolating given smooth data. The suggested iterative procedures can, in principle, handle any number of data points, according to computer capacity. These procedures are extensions of a previous work by Dyn and Levin on iterative methods for computing thin-plate spline interpolants for data given on a square grid. Here the procedures are improved significantly and generalized to the case of data given in a general configuration.
The major theme of this work is the development of an iterative scheme for the construction of a smooth surface, represented by global basis functions, which approximates only the smooth components of a set of scattered noisy data. The novelty in the suggested method is in the construction of an iterative procedure for low-pass filtering based on detailed spectral properties of a preconditioned matrix. The general concepts of this approach can also be used in designing iterative computation procedures for many other problems.
The interpolation and smoothing procedures are tested, and the theoretical results are verified, by many numerical experiments.

In this paper we consider domain decomposition methods for solving the radial basis function interpolation equations. There are three interwoven threads to the paper. The first thread provides good ways of setting up and solving small- to medium-sized radial basis function interpolation problems. These may occur as subproblems in a domain decomposition solution of a larger interpolation problem. The usual formulation of such a problem can suffer from an unfortunate scale dependence not intrinsic in the problem itself. This scale dependence occurs, for instance, when fitting polyharmonic splines in even dimensions. We present and analyze an alternative formulation, available for all strictly conditionally positive definite basic functions, which does not suffer from this drawback, at least for the very important example previously mentioned. This formulation changes the problem into one involving a strictly positive definite symmetric system, which can be easily and efficiently solved by Cholesky factorization. The second section considers a natural domain decomposition method for the interpolation equations and views it as an instance of von Neumann's alternating projection algorithm. Here the underlying Hilbert space is the reproducing kernel Hilbert space induced by the strictly conditionally positive definite basic function. We show that the domain decomposition method presented converges linearly under very weak nondegeneracy conditions on the possibly overlapping subdomains. The last section presents some algorithmic details and numerical results of a domain decomposition interpolatory code for polyharmonic splines in 2 and 3 dimensions. This code has solved problems with 5 million centers and can fit splines with 10,000 centers in approximately 7 seconds on very modest hardware.

An algorithm is presented for the rapid evaluation of the potential and force fields in systems involving large numbers of particles whose interactions are Coulombic or gravitational in nature. For a system of N particles, an amount of work of the order O(N²) has traditionally been required to evaluate all pairwise interactions, unless some approximation or truncation method is used. The algorithm of the present paper requires an amount of work proportional to N to evaluate all interactions to within roundoff error, making it considerably more practical for large-scale problems encountered in plasma physics, fluid dynamics, molecular dynamics, and celestial mechanics.

We present a sufficient criterion for the Bernstein-Bezier (BB) form of a trivariate polynomial within a tetrahedron, such that the real zero contour of the polynomial defines a smooth, connected and single sheeted algebraic surface patch. We call this an A-patch. We present algorithms to build a mesh of cubic A-patches to interpolate a given set of scattered point data in three dimensions, respecting the topology of any surface triangulation T of the given point set. In these algorithms we first specify "normals" on the data points, then build a simplicial hull consisting of tetrahedra surrounding the surface triangulation T and finally construct cubic A-patches within each tetrahedron. The resulting surface constructed is C¹ (tangent plane) continuous and single sheeted in each of the tetrahedra. We also show how to adjust the free parameters of the A-patches to achieve both local and global shape control.

We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.
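The vertex-calculation step described above (linear interpolation of the zero crossing along each cell edge) can be illustrated in two dimensions, where the analogue is marching squares. This sketch is not the full algorithm — it omits the case table that connects crossings into triangles — and shows only the edge-interpolation step on an invented implicit function (the unit circle), restricted to horizontal edges for brevity.

```python
# Simplified 2-D analogue of the marching-cubes vertex step: locate the zero
# crossing of an implicit function along each grid edge by linear
# interpolation (the full algorithm then connects crossings via a case table).

def f(x, y):                        # implicit function: unit circle f = 0
    return x * x + y * y - 1.0

def edge_crossings(n=8, lo=-1.5, hi=1.5):
    h = (hi - lo) / n
    pts = []
    for i in range(n + 1):          # grid rows
        for j in range(n):          # horizontal edges (j, j+1) in row i
            x0, x1, y = lo + j * h, lo + (j + 1) * h, lo + i * h
            a, b = f(x0, y), f(x1, y)
            if a * b < 0:           # sign change => contour crosses this edge
                t = a / (a - b)     # linear interpolation parameter
                pts.append((x0 + t * (x1 - x0), y))
    return pts

crossings = edge_crossings()
# every interpolated crossing should lie close to the true circle of radius 1
radii = [(x * x + y * y) ** 0.5 for x, y in crossings]
print(len(crossings), min(radii), max(radii))
```

In the 3-D algorithm the same interpolation runs over cube edges, and the per-cube case table (256 sign configurations, reduced by symmetry to 15) dictates how the resulting vertices are joined into triangles.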

We present an approach for the reconstruction and approximation of 3D CAD models from an unorganized collection of points. Applications include rapid reverse engineering of existing objects for use in a virtual prototyping environment, including computer aided design and manufacturing. Our reconstruction approach is flexible enough to permit interpolation of both smooth surfaces and sharp features, while placing few restrictions on the geometry or topology of the object.
Our algorithm is based on alpha-shapes to compute an initial triangle mesh approximating the surface of the object. A mesh reduction technique is applied to the dense triangle mesh to build a simplified approximation, while retaining important topological and geometric characteristics of the model. The reduced mesh is interpolated with piecewise algebraic surface patches which approximate the original points.
The process is fully automatic, and the reconstruction is guaranteed to be homeomorphic and error bounded with respect to the original model when certain sampling requirements are satisfied. The resulting model is suitable for typical CAD modeling and analysis applications.

This paper describes an algorithm for the rapid evaluation of the potential and force fields in systems involving large numbers of particles whose interactions are described by Coulomb's law. Unlike previously published schemes, the algorithm of this paper has an asymptotic CPU time estimate of O(N), where N is the number of particles in the simulation, and does not depend on the statistics of the distribution for its efficient performance. The numerical examples the authors present indicate that it should be an algorithm of choice in many situations of practical interest.
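The following toy example illustrates the far-field principle underlying the O(N) algorithm, not the algorithm itself (which organizes space hierarchically and uses high-order expansions with rigorous error bounds). Here a distant cluster's potential is summarized by just two moments — monopole and dipole — so each far evaluation costs O(1) instead of an O(N) direct sum. The charges, positions, and evaluation points are invented for the demonstration.

```python
# Toy illustration of the far-field idea behind fast multipole summation:
# the 1-D potential sum q_i / |x - y_i| over a distant cluster is
# approximated by moments about the cluster centre.
import random

random.seed(1)
N = 200
# (charge q_i, position y_i) pairs; all positions lie in one small cluster
sources = [(random.uniform(0.5, 1.5), random.uniform(-0.5, 0.5)) for _ in range(N)]

def direct(x):
    """Exact potential at x: O(N) work per evaluation point."""
    return sum(q / abs(x - y) for q, y in sources)

centre = sum(y for _, y in sources) / N
Q = sum(q for q, _ in sources)                   # monopole moment
D = sum(q * (y - centre) for q, y in sources)    # dipole moment

def far_field(x):
    """Two-term multipole approximation: O(1) work per evaluation point."""
    r = x - centre
    return Q / abs(r) + D / (r * abs(r))

for x in (10.0, -8.0):
    print(x, abs(far_field(x) - direct(x)) / direct(x))
```

The approximation error decays with the ratio of cluster radius to evaluation distance; the full method keeps enough expansion terms, per level of a spatial hierarchy, to meet any prescribed accuracy.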

As is now well known for some basic functions φ, hierarchical and fast multipole-like methods can greatly reduce the storage and operation counts for fitting and evaluating radial basis functions. In particular, for spline functions of the form s(x) = p(x) + Σ_{j=1}^{N} d_j φ(|x − x_j|), where p is a low degree polynomial, and with certain choices of φ, the cost of a single extra evaluation can be reduced from O(N) to O(log N), or even O(1), operations, and the cost of a matrix-vector product (i.e., evaluation at all centers) can be decreased from O(N²) to O(N log N), or even O(N), operations. This paper develops the mathematics required by methods of these types for polyharmonic splines in R⁴. That is, for splines s built from a basic function from the list φ(r) = r⁻² or φ(r) = r^{2n} ln(r), n = 0, 1, .... We give appropriate far and near field expansions, together with corresponding error estimates, uniqueness theorems, and translation formulae. A significant new feature of the current work is the use of arguments based on the action of the group of nonzero quaternions, realized as 2×2 complex matrices acting on C² = R⁴. Use of this perspective allows us to give a relatively efficient development of the relevant spherical harmonics and their properties.

This monograph is based on a series of 10 lectures at Ohio State University at Columbus, March 23–27, 1987, sponsored by the Conference Board of the Mathematical Sciences and the National Science Foundation. The selection of topics is quite personal and, together with the talks of the other speakers, the lectures represent a story, as I saw it in March 1987, of many of the interesting things that statisticians can do with splines. I told the audience that the priority order for topic selection was, first, obscure work of my own and collaborators, second, other work by myself and students, with important work by other speakers deliberately omitted in the hope that they would mention it themselves. This monograph will more or less follow that outline, so that it is very much slanted toward work I had some hand in, although I will try to mention at least by reference important work by the other speakers and some of the attendees. The other speakers were (in alphabetical order), Dennis Cox, Randy Eubank, Ker-Chau Li, Douglas Nychka, David Scott, Bernard Silverman, Paul Speckman, and James Wendelberger. The work of Finbarr O'Sullivan, who was unable to attend, in extending the developing theory to the non-Gaussian and nonlinear case will also play a central role, as will the work of Florencio Utreras.

For practical problems of data fitting in two dimensions two methods seem to be most popular: Thin Plate Splines (TPS) Duchon (1976) and Hardy's Multiquadric Surfaces (MQS) Hardy (1971), (1982), see also Franke (1982). The theory for TPS has been developed in a series of papers (see Duchon (1976), Meinguet (1979)). However, beyond its numerical performance little seems to be known about MQS. For instance, in his lecture notes for a recent meeting, Franke (1983) raised (based on extensive numerical experience) the following conjecture.

Solving large radial basis function (RBF) interpolation problems with non-customised methods is computationally expensive and the matrices that occur are typically badly conditioned. For example, using the usual direct methods to fit an RBF with N centres requires O(N²) storage and O(N³) flops. Thus such an approach is not viable for large problems with N ≥ 10,000.
In this paper we present preconditioning strategies which, in combination with fast matrix–vector multiplication and GMRES iteration, make the solution of large RBF interpolation problems orders of magnitude less expensive in storage and operations. In numerical experiments with thin-plate spline and multiquadric RBFs the preconditioning typically results in dramatic clustering of eigenvalues and improves the condition numbers of the interpolation problem by several orders of magnitude. As a result of the eigenvalue clustering the number of GMRES iterations required to solve the preconditioned problem is of the order of 10–20. Taken together, the combination of a suitable approximate cardinal function preconditioner, the GMRES iterative method, and existing fast matrix–vector algorithms for RBFs [4,5] reduce the computational cost of solving an RBF interpolation problem to O(N) storage and O(N log N) operations.
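For context, here is a minimal sketch of the direct dense approach whose O(N³) cost motivates the preconditioned iterative methods above: fitting a 1-D polyharmonic spline s(x) = Σ_j d_j |x − x_j|³ + a + bx, with the side conditions Σ_j d_j = 0 and Σ_j d_j x_j = 0, by solving the full (N+2)×(N+2) system. The centres, data, and solver are illustrative; this is exactly the approach that becomes infeasible for large N.

```python
# Direct dense fit of a tiny 1-D polyharmonic spline: O(N^2) storage and
# O(N^3) flops, fine only for small N.

def solve(A, b):
    """Gaussian elimination with partial pivoting on a dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

centres = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [t * t for t in centres]           # interpolate f(x) = x^2
N = len(centres)

# interpolation rows, then the two polynomial side-condition rows
A = [[abs(xi - xj) ** 3 for xj in centres] + [1.0, xi] for xi in centres]
A += [[1.0] * N + [0.0, 0.0], list(centres) + [0.0, 0.0]]
coef = solve(A, values + [0.0, 0.0])
d, (a, b) = coef[:N], coef[N:]

def s(x):
    return sum(dj * abs(x - xj) ** 3 for dj, xj in zip(d, centres)) + a + b * x

print([round(s(t), 6) for t in centres])    # reproduces the data at the centres
```

The paper's contribution replaces this dense solve by approximate cardinal-function preconditioning plus GMRES, with fast multipole-style matrix–vector products supplying each iteration.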

We define a family of semi-norms ‖u‖_{m,s} = ( ∫_{ℝⁿ} |τ|^{2s} |ℱD^m u(τ)|² dτ )^{1/2}. Minimizing such semi-norms, subject to some interpolating conditions, leads to functions of very simple forms, providing interpolation methods that: 1°) preserve polynomials of degree ≤ m−1; 2°) commute with similarities as well as translations and rotations of ℝⁿ; and 3°) converge in Sobolev spaces H^{m+s}(Ω).
Typical examples of such splines are: "thin plate" functions (Σ λ_a |t−a|² Log|t−a| plus an affine term, with Σ λ_a = 0, Σ λ_a a = 0), "multi-conic" functions (Σ λ_a |t−a| + C with Σ λ_a = 0), pseudo-cubic splines (Σ λ_a |t−a|³ + α·t + β with Σ λ_a = 0, Σ λ_a a = 0), as well as usual polynomial splines in one dimension. In general, data functionals are only supposed to be distributions with compact supports, belonging to H^{−m−s}(ℝⁿ); there may be infinitely many of them. Splines are then expressed as convolutions μ ∗ |t|^{2m+2s−n} (or μ ∗ |t|^{2m+2s−n} Log|t|) + polynomials.

Marching cubes is a simple and popular method for extracting iso-surfaces from implicit functions or discrete three-dimensional (3-D) data. However, it does not guarantee the surface to be topologically consistent with the data, and it creates triangulations which contain many triangles of poor aspect ratio. Marching tetrahedra is a variation of marching cubes, which overcomes this topological problem, but further degrades the triangle aspect ratios. Improvement in triangle aspect ratio has generally been achieved by mesh simplification, a group of algorithms designed mainly to reduce the triangle count. Vertex clustering is one of the simplest, but does not necessarily maintain the topology of the original mesh. We present a new algorithm, regularised marching tetrahedra (RMT), which combines marching tetrahedra and vertex clustering to generate iso-surfaces which are topologically consistent with the data and contain a number of triangles appropriate to the sampling resolution (typically 70% fewer than marching tetrahedra) with significantly improved aspect ratios. This improvement in aspect ratio greatly enhances smooth shaded displays of the surface. Surface triangulations are shown for implicit surfaces, thresholded medical data, and surfaces created from object cross-sections — implementations of RMT appropriate to each of these situations are available. The application to data sampled on non-parallel planes is also considered.

The paper deals with the registration of images with non-linear local geometric distortions. It describes a new approach to the determination of a mapping function from given coordinates of control points. The processed image is recursively divided into subregions of various size and shape according to the distortion character. Each subregion is then transformed by a simple local transformation. The described method enables the registration with optional accuracy. A practical use of this method is presented and a comparison of its accuracy and computing complexity with previously published methods is given.

We describe and demonstrate an algorithm that takes as input an unorganized set of points {x1, …, xn} ⊂ ℝ³ on or near an unknown manifold M, and produces as output a simplicial surface that approximates M. Neither the topology, the presence of boundaries, nor the geometry of M are assumed to be known in advance — all are inferred automatically from the data. This problem naturally arises in a variety of practical situations such as range scanning an object from multiple view points, recovery of biological shapes from two-dimensional slices, and interactive surface sketching.

This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. It results in the representation of a solid by the inequality f(x, y, z) ≥ 0. The volume spline is based on use of the Green's function for interpolation of scalar function values of a chosen "carrier" solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is discussed too. Potential applications of this algorithm are in tomography, image processing, animation and CAD for bodies with complex surfaces. There are a number of applied problems that require interpolation or smoothing of large arrays of randomly measured points of a surface. The main sources of such data are physical measurements taken by scanning an object from different viewing directions. Scattered points arise also in mathematical simulation, for example...

This paper concerns the fast evaluation of radial basis functions. It describes the mathematics of a method for splines of the form s(x) = p(x) + Σ_{j=1}^{N} d_j |x − x_j|^{2m} log|x − x_j|, x ∈ ℝ², where p is a low-degree polynomial. Such functions are very useful for the interpolation of scattered data, but can be computationally expensive to use when N is large. The method described is a generalization of the fast multipole method of Greengard and Rokhlin for the potential case (m = 0), and reduces the incremental cost of a single extra evaluation from O(N) operations to O(1) operations. The paper develops the required series expansions and uniqueness results. It pays particular attention to the rate of convergence of the series approximations involved, obtaining improved estimates which explain why numerical experiments reveal faster convergence than predicted by previous work for the potential (m = 0) and thin-plate spline (m = 1) cases.

We introduce a new method of creating smooth implicit surfaces of arbitrary manifold topology. These surfaces are described by specifying locations in 3D through which the surface should pass, and also identifying locations that are interior or exterior to the surface. A 3D implicit function is created from these constraints using a variational scattered data interpolation approach. We call the iso-surface of this function a "variational implicit surface". Like other implicit surface descriptions, these surfaces can be used for CSG and interference detection, may be interactively manipulated, are readily approximated by polygonal tilings, and are easy to ray trace. A key strength is that variational implicit surfaces allow the direct specification of both the location of points on the surface and surface normals. These are two important manipulation techniques that are difficult to achieve using other implicit surface representations such as sums of spherical or ellipsoidal Gaussian functions ("blobbies"). We show that these properties make variational implicit surfaces particularly attractive for interactive sculpting using the particle sampling technique introduced by Witkin and Heckbert. Our formulation also yields a simple method for converting a polygonal model to a smooth implicit model.
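The constraint scheme described above can be sketched in two dimensions, where the variational interpolant uses the thin-plate kernel φ(r) = r² log r: zero-valued constraints are placed on the desired curve (here a unit circle) and positive-valued constraints are offset inward along the normals. All point placements, the kernel choice for 2-D, and the dense solver are illustrative assumptions, not the paper's implementation.

```python
# 2-D sketch of variational implicit curves: fit a thin-plate RBF with
# zero constraints on the curve and positive constraints inside it.
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting on a dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def phi(r):                            # thin-plate kernel, phi(0) = 0
    return r * r * math.log(r) if r > 0 else 0.0

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# zero-valued constraints on the unit circle, positive-valued constraints
# offset inward along the normals (half-way to the centre)
angles = [2 * math.pi * k / 8 for k in range(8)]
pts = [(math.cos(t), math.sin(t)) for t in angles] + \
      [(0.5 * math.cos(t), 0.5 * math.sin(t)) for t in angles]
vals = [0.0] * 8 + [1.0] * 8
N = len(pts)

A = [[phi(dist(p, q)) for q in pts] + [1.0, p[0], p[1]] for p in pts]
A += [[1.0] * N + [0.0] * 3,
      [p[0] for p in pts] + [0.0] * 3,
      [p[1] for p in pts] + [0.0] * 3]
coef = solve(A, vals + [0.0] * 3)
w, (a, b, c) = coef[:N], coef[N:]

def s(x, y):
    return sum(wj * phi(dist((x, y), p)) for wj, p in zip(w, pts)) + a + b * x + c * y

# s = 0 along the sampled curve; s > 0 inside it
print(round(s(0.0, 0.0), 3), round(s(1.0, 0.0), 3))
```

In 3-D the construction is identical with φ replaced by the biharmonic kernel and an extra polynomial term, and the zero set is polygonized (e.g. by marching cubes) for display.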

This paper presents a new method for the fast evaluation of univariate radial basis functions of the form s(x) = Σ_{n=1}^{N} d_n φ(|x − x_n|) to within accuracy ε. The method can be viewed as a generalization of the fast multipole method in which calculations with far field expansions are replaced by calculations involving moments of the data. The method has the advantage of being adaptive to changes in φ. That is, with this method changing to a new φ requires only coding a one- or two-line function for the (slow) evaluation of φ. In contrast, adapting the usual fast multipole method to a new φ involves much mathematical analysis of appropriate series expansions and corresponding translation operators, followed by a substantial amount of work expressing this mathematics in code.

Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where the interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (< 300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill.

Traditionally, shape transformation using implicit functions is performed in two distinct steps: 1) creating two implicit functions, and 2) interpolating between these two functions. We present a new shape transformation method that combines these two tasks into a single step. We create a transformation between two N-dimensional objects by casting this as a scattered data interpolation problem in N+1 dimensions. For the case of 2D shapes, we place all of our data constraints within two planes, one for each shape. These planes are placed parallel to one another in 3D. Zero-valued constraints specify the locations of shape boundaries and positive-valued constraints are placed along the normal direction, inward toward the center of the shape. We then invoke a variational interpolation technique (the 3D generalization of thin-plate interpolation), and this yields a single implicit function in 3D. Intermediate shapes are simply the zero-valued contours of 2D slices through this 3D function. Shape transformation between 3D shapes can be performed similarly by solving a 4D interpolation problem. To our knowledge, ours is the first shape transformation method to unify the tasks of implicit function creation and interpolation. The transformations produced by this method appear smooth and natural, even between objects of differing topologies. If desired, one or more additional shapes may be introduced that influence the intermediate shapes in a sequence. Our method can also reconstruct surfaces from multiple slices that are not restricted to being parallel to one another.

Implicit surfaces have long been used for a myriad of tasks in computer graphics, including modeling soft or organic objects, morphing, and constructive solid geometry. Although operating on implicit surfaces is usually straightforward, creating them is not — interactive techniques are impractical for complex models, and automatic techniques have been largely unexplored. We introduce a practical method for creating implicit surfaces from polygonal models that produces high-quality results for complex models. Whereas much previous work has been done with primitives such as "blobbies," we use surfaces based on a variational interpolation technique (the 3D generalization of thin-plate interpolation). Given a polygonal mesh, we convert the data to a volumetric representation and use this as a guide to create the implicit surface iteratively. Carefully chosen metrics evaluate each intermediate surface and control further refinement. We have applied this method successfully to a variety o...

H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. Computer Graphics (SIGGRAPH '92 Proceedings), 26(2):71–78, July 1992.

R. K. Beatson, A. M. Tan, and M. J. D. Powell. Fast evaluation of radial basis functions: Methods for 3-dimensional polyharmonic splines. In preparation.