Article

Guaranteed-quality triangular meshes

... It is the principle that TetGen follows. Ruppert [1995] proposed a Delaunay-based algorithm, extending an algorithm of Chew [1989b], that provably generates an optimal Delaunay triangular mesh of a 2D polygonal domain with a bounded smallest angle. Shewchuk [1996b] provided a robust and efficient implementation of this algorithm in the freely available program Triangle. ...
... The simple idea of Delaunay refinement leads to a group of methods that aim at generating meshes with a theoretical guarantee on the output mesh quality [Chew 1989b;Ruppert 1995;Shewchuk 1998b]. For this purpose, the input boundary mesh needs to be modified. ...
... A central question in quality mesh generation is how to efficiently place an appropriate number of Steiner points into the mesh domain such that they form a good quality tetrahedral mesh. Delaunay refinement [Chew 1989b; Ruppert 1995; Shewchuk 1998b] is one of the few theoretical schemes that provide guarantees on mesh quality and mesh element size simultaneously. However, it may produce some very badly shaped tetrahedra, so-called slivers, which contain dihedral angles near 0° or 180°. ...
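To make the sliver description above concrete, here is a small sketch (mine, not code from the cited works) that flags tetrahedra with near-0° or near-180° dihedral angles; the 5°/175° cutoffs are illustrative assumptions.

```python
import itertools
import numpy as np

def dihedral_angles(a, b, c, d):
    """All six dihedral angles (in degrees) of the tetrahedron abcd."""
    pts = [np.asarray(p, dtype=float) for p in (a, b, c, d)]
    angles = []
    for i, j in itertools.combinations(range(4), 2):
        k, l = (m for m in range(4) if m not in (i, j))
        e = pts[j] - pts[i]
        e /= np.linalg.norm(e)
        # Project the two opposite vertices into the plane orthogonal to edge ij;
        # the dihedral angle along ij is the angle between the projections.
        u = pts[k] - pts[i]
        u = u - np.dot(u, e) * e
        v = pts[l] - pts[i]
        v = v - np.dot(v, e) * e
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return angles

def is_sliver(a, b, c, d, lo=5.0, hi=175.0):
    """Flag near-degenerate dihedral angles; lo/hi are illustrative thresholds."""
    return any(t < lo or t > hi for t in dihedral_angles(a, b, c, d))
```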
Technical Report
Full-text available
TetGen® is a C++ program for generating good-quality tetrahedral meshes, aimed at supporting numerical methods and scientific computing [Hang Si]. Quality tetrahedral mesh generation is challenged by many theoretical and practical issues. TetGen uses Delaunay-based algorithms, which come with theoretical guarantees of correctness. It robustly handles arbitrarily complex 3D geometries and is fast in practice. The source code of TetGen is freely available.
... The first Delaunay mesh refinement algorithm was developed by Chew [5] to obtain constrained Delaunay meshes. Chew first splits the segments of the input PSLG such that the length of each subsegment is between h and √3h, where h is small enough that such a division is possible. ...
... Instead of splitting a triangle based on the radius of its circumcircle, the radius-edge ratio became the criterion. Ruppert showed that his algorithm can generate meshes with the ratio greater than √2. (In his report [5], Chew provides more details about how to find h and how to split the PSLG.) ...
... My technique is a generalization of a technique by Chew [5]. In my technique, I generate well-graded meshes by refining the PSLG such that the lengths of the split segments are asymptotically proportional to the local feature size (formally defined in Section 2) at the end points of the split segments. ...
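A minimal sketch of the uniform segment split that Chew's technique starts from, under the stated assumption that h is small enough relative to the segment length for a split into lengths in [h, √3h] to exist; the function name and NumPy dependency are my choices.

```python
import math
import numpy as np

def split_segment(p, q, h):
    """Split pq into equal subsegments with lengths in [h, sqrt(3)*h].

    Assumes (as Chew does) that h is small enough relative to |pq|
    for such a split to exist; otherwise the assertion fails.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    length = float(np.linalg.norm(q - p))
    n = math.ceil(length / (math.sqrt(3.0) * h))  # fewest pieces of length <= sqrt(3)*h
    assert n * h <= length, "h too large: no uniform split into [h, sqrt(3)h] exists"
    return [p + (q - p) * (i / n) for i in range(n + 1)]
```

For example, split_segment((0, 0), (1, 0), 0.2) returns four points, i.e. three subsegments of length 1/3, which indeed lies in [0.2, 0.346].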
Article
Full-text available
I present a generalization of Chew's first algorithm for Delaunay mesh refinement. I split the line segments of an input planar straight line graph (PSLG) such that the lengths of split segments are asymptotically proportional to the local feature size at their endpoints. By employing prior algorithms, I then refine the truly or constrained Delaunay triangulation of the PSLG by inserting off-center Steiner vertices of “skinny” triangles while prioritizing triangles with shortest edges first. This technique inserts Steiner vertices in an advancing front manner such that we obtain a size-optimal, truly or constrained Delaunay mesh if the desired minimum angle is less than 30° (in the absence of small input angles). This is an improvement over prior algorithms that produce size-optimal meshes with minimum angles of about 26.4° and 28.6° for truly and constrained Delaunay meshes, respectively. Even in the presence of small input angles, the upper bound on the maximum angle is an angle strictly greater than 120° (an improvement from about 137°). The lower bound on the minimum angle in the presence of small angles is identical to prior bounds.
... [4] and the references therein). Among these, Chew's is the most useful for our purposes ( [5]). It is based on Delaunay triangulations of what we have decided to call Gromov sets. ...
... The algorithm on Pages 7-8 of [5] combined with the Theorem on Page 9 and the Corollary on Page 10 of [5] give the following. ...
Preprint
Let $Y$ be a subset of a metric space $X.$ We say that $Y$ is $\eta$-Gromov provided $Y$ is $\eta$-separated and not properly contained in any other $\eta$-separated subset of $X.$ In this paper, we review a result of Chew which says that any $\eta$-Gromov subset of $\mathbb{R}^{2}$ admits a triangulation $\mathcal{T}$ whose smallest angle is at least $\pi/6$ and whose edges have length between $\eta$ and $2\eta.$ We then show that given any $k = 1,2,3,\ldots$, there is a subdivision $\mathcal{T}_{k}$ of $\mathcal{T}$ whose edges have length in $\left[\frac{\eta}{10k},\frac{2\eta}{10k}\right]$ and whose minimum angle is also $\pi/6$. These results are used in the proof of the following theorem in [10]: For any $k\in \mathbb{R}, v>0,$ and $D>0,$ the class of closed Riemannian $4$-manifolds with sectional curvature $\geq k,$ volume $\geq v,$ and diameter $\leq D$ contains at most finitely many diffeomorphism types. Additionally, these results imply that for any $\varepsilon >0$, if $\eta >0$ is sufficiently small, any $\eta$-Gromov subset of a compact Riemannian $2$-manifold admits a geodesic triangulation $\mathcal{T}$ for which all side lengths are in $\left[\eta\left(1-\varepsilon\right), 2\eta\left(1+\varepsilon\right)\right]$ and all angles are $\geq \frac{\pi}{6}-\varepsilon$.
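As a finite-sample illustration of the $\eta$-Gromov definition ($\eta$-separated and maximal with respect to inclusion), the following greedy sketch is maximal only within the given candidate set, a simplification of the paper's metric-space setting; names are mine.

```python
import numpy as np

def maximal_separated_subset(candidates, eta):
    """Greedily keep each candidate that is >= eta away from all kept points.

    The result is eta-separated and maximal *within the candidate set* --
    a finite-sample stand-in for the eta-Gromov sets defined above.
    """
    kept = []
    for p in np.asarray(candidates, dtype=float):
        if all(np.linalg.norm(p - q) >= eta for q in kept):
            kept.append(p)
    return np.array(kept)
```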
... The Delaunay refinement algorithms were originally designed for meshing planar domains, and were later generalized for meshing surfaces and volumes. Chew's first algorithm [26] splits any triangle whose circumradius is greater than the prescribed shortest-edge-length parameter e, and hence generates a triangulation of uniform density with no angle smaller than 30°. But the number of triangles produced is not optimal. ...
... [29] unified the pioneering mesh generation algorithms of L. Paul Chew [26] and Jim Ruppert [28], improved the algorithms in several minor ways, and helped to solve the difficult problem of meshing non-manifold domains with small angles. Dey et al. ...
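As a concrete rendering of the rule just described (split any triangle whose circumradius exceeds the shortest-edge parameter e), here is a sketch using the standard closed-form 2D circumcenter; the function names are mine, not from the cited works.

```python
import numpy as np

def circumcircle(a, b, c):
    """Circumcenter and circumradius of the 2D triangle abc."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(center - np.asarray(a, dtype=float)))

def needs_split(tri, e):
    """Chew-style test: refine a triangle whose circumradius exceeds e."""
    _, r = circumcircle(*tri)
    return r > e
```

For instance, circumcircle((0, 0), (1, 0), (0, 1)) returns center (0.5, 0.5) and radius ≈ 0.707.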
...a neighborhood of the right eye and a neighborhood of the nose tip are selected. The discrete Gaussian curvature measure of each region is calculated using Eqn. 26. Frame (b) shows the total curvature errors in the eye region on the original surface and the reconstructed mesh with different sampling rates based on APP. ...
Article
Surface meshing plays a fundamentally important role in visualization and computer graphics, producing discrete meshes that approximate a smooth surface. Many geometric processing tasks depend heavily on the quality of the meshes, especially convergence in terms of topology, position, Riemannian metric, differential operators and curvature measures. Normal cycle theory points out that in order to guarantee the convergence of curvature measures, the discrete meshes are required to approximate not only the smooth surface itself, but also the normal cycle of the surface. This theory inspired the development of a remeshing method based on conformal parameterization and planar Delaunay refinement, which uniformly samples the smooth surface and produces Delaunay triangulations with bounded minimal corner angles. This method ensures that the Hausdorff distances between the normal cycles of the resulting meshes and the smooth normal cycle converge to 0, and that the discrete Gaussian curvature and mean curvature measures of the resulting meshes converge to their counterparts on the smooth surface. In the current work, the conformal parameterization based remeshing algorithm is further improved to speed up the curvature convergence. Instead of uniformly sampling the surface itself, the novel algorithm samples the normal cycle of the surface. The algorithm pipeline is as follows: first, two parameterizations are constructed, one the surface conformal parameterization based on dynamic Ricci flow, the other the normal cycle area-preserving parameterization based on optimal mass transportation; second, the normal cycle parameterization is uniformly sampled; third, Delaunay refinement mesh generation is carried out on the surface conformal parameterization. The produced meshes can be proven to converge to the smooth surface in terms of curvature measures. Experimental results demonstrate the efficiency and efficacy of the proposed algorithm; the convergence speeds of the curvatures are prominently faster than those of conventional methods.
... My technique is a generalization of a technique by Chew [5]. Chew splits the input segments in the PSLG into subsegments whose lengths are nearly identical. ...
... The first Delaunay mesh refinement algorithm was developed by Chew [5] to obtain constrained Delaunay meshes. Chew first splits the segments of the input PSLG such that the length of each subsegment is between h and √3h, where h is small enough that such a division is possible. ...
... Miller, Pav, and Walkington [13,15] showed that in a modified version of Ruppert's technique (for truly Delaunay triangulation), at least three circumcenters have to be inserted between two refinements of a PSLG segment/subsegment. Thus, in the worst case and when the ... (In his report [5], Chew provides more details about how to find h and how to split the PSLG.) ...
Preprint
Full-text available
I present a generalization of Chew's first algorithm for Delaunay mesh refinement. In his algorithm, Chew splits the line segments of the input planar straight line graph (PSLG) into shorter subsegments whose lengths are nearly identical. The constrained Delaunay triangulation of the subsegments is refined based on the length of the radii of the circumcircles of the triangles. This algorithm produces a uniform mesh, whose minimum angle can be at most $\pi/6$. My algorithm generates both truly Delaunay and constrained Delaunay size-optimal meshes. In my algorithm, I split the line segments of the input PSLG such that their lengths are asymptotically proportional to the local feature size (LFS) by solving ordinary differential equations (ODEs) that map points from a closed 1D interval to points on the input line segments in the PSLG. I then refine the Delaunay triangulation (truly or constrained) of the PSLG by inserting off-center Steiner vertices of "skinny" triangles while prioritizing such triangles with shortest edges first. As in Chew's algorithm, I show that the Steiner vertices do not encroach upon any subsegment of the PSLG. The off-center insertion algorithm places Steiner vertices in an advancing front manner such that we obtain a size-optimal Delaunay mesh (truly or constrained) if the desired minimum angle is less than $\pi/6$. In addition, even in the presence of a small angle $\phi < \pi/2$ in the PSLG, the bound on the minimum angle "across" the small angle tends to $\arctan((\sin\phi)/(2-\cos\phi))$ as the PSLG is progressively refined. Also, the bound on the maximum angle across any small input angle tends to $\pi/2 + \phi/2$ as the PSLG is progressively refined.
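For reference, "encroachment" in the abstract above is the standard diametral-circle test; a minimal sketch (my illustration, not code from the paper):

```python
import numpy as np

def encroaches(p, a, b):
    """True if p lies strictly inside the diametral circle of segment ab."""
    a, b, p = (np.asarray(x, dtype=float) for x in (a, b, p))
    midpoint = 0.5 * (a + b)
    return np.linalg.norm(p - midpoint) < 0.5 * np.linalg.norm(b - a)
```

In Delaunay refinement, a candidate Steiner vertex that would encroach upon a subsegment is typically rejected, and the subsegment midpoint is inserted instead.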
... This means that there is a maximum bound on the domain-point to nearest disk-center distance, a minimum bound on the distance between nearby disk-centers, and a maximum bound on the ratio of the two; further these all hold locally. Well-spaced points have Delaunay triangulations with good angles [11][12][13]. Traditional Delaunay refinement algorithms [12] flip the cause and effect, using the angles of an intermediate Delaunay triangulation to guide the generation of a well-spaced point set. Maximal Poisson-disk Sampling (MPS) is defined in terms of a process that sequentially creates disks and keeps a disk only if it is outside the prior disks. ...
... These are summarized in the table in Fig. 6c, and their lengthy derivations are in Appendix A. We provide some intuition here about why they hold. The initial triangles from the packing are well-shaped for the standard reasons behind (two-radii) maximal Poisson-disk packing [18] and Delaunay refinement [12] algorithms. Incircle refinement halves some triangle angles, but these angles are always combined with another one in a quad. ...
... The optimal position for the newly inserted node depends on the specific algorithm and can be the circumcenter or the center of the inscribed circle of a triangle, or a weighted position in between (Frey, 1987). In three-dimensional space, the position could be the gravity center of the tetrahedron (Weatherill and Hassan, 1994) or its circumcenter (Chew, 1989; Ruppert, 1994; Shewchuk, 1997). Other approaches suggest a frontal method for node placement (Frey et al., 1996). ...
... Water is injected from a well with a 0.2 m radius at a rate of 10 m³/day, while the bottomhole pressure in the production well is fixed at 200 bars. Our software's implementation of the constrained Delaunay refinement technique builds upon a series of advancements made by various researchers, such as Chew (1989), Ruppert (1994), Shewchuk (1997), Cheng et al. (2004), and Cheng et al. (2005). Other tetrahedral faces are omitted for clarity. On the right, we can see the hierarchical structure of all defined geometrical feature objects within the PLC, along with the graphical scene graph. ...
Preprint
Full-text available
In numerous environmental groundwater and reservoir modeling applications, the complexities of intricate geological features and underground engineered structures pose a formidable challenge. The accurate representation of such systems presents difficulties for many numerical methods and their associated grid generation techniques. Indeed, the prevailing approach in industry-standard flow simulators often introduces unwelcome geometric and numerical errors. Recognizing this gap, we have developed a novel, graphically driven software package designed to streamline the generation of three-dimensional meshes, encompassing both tetrahedral and all-hexahedral elements, to serve a wide range of applications, with a specific focus on finite element/volume methods. Our software package incorporates a computer implementation of a constrained Delaunay refinement algorithm, renowned for its efficiency in handling meshes of standard sizes. Through this innovation, we aim to address the challenges of modeling complex geological and engineering structures more effectively. To illustrate the capabilities and versatility of our software, we provide example problems that demonstrate its proficiency in seamlessly integrating with dedicated groundwater flow and reservoir two-phase flow simulators.
... I first present the algorithm to construct constrained Delaunay meshes and then extend it for truly Delaunay meshes. The algorithm is an extension of my recently published algorithm for 2D meshes [32], which may be viewed as a generalization of Chew's first algorithm [8] for Delaunay mesh refinement. Dey et al. [11] extended Chew's algorithm for 3D constrained Delaunay meshes. ...
... Pioneered by Frey [16], Delaunay mesh refinement is typically carried out by adding Steiner vertices at a poor-quality triangle's circumcenter. The first provably good 2D constrained Delaunay mesh refinement algorithm was developed by Chew [8]. In his algorithm, Chew first refines the input line segments of a PSLG into subsegments such that their lengths are between some h and √3h. ...
Preprint
Full-text available
I present a 3D advancing-front mesh refinement algorithm that generates a constrained Delaunay mesh for any piecewise linear complex (PLC) and extend this algorithm to produce truly Delaunay meshes for any PLC. First, as in my recently published 2D algorithm, I split the input line segments such that the length of the subsegments is asymptotically proportional to the local feature size (LFS). For each facet, I refine the mesh such that the edge lengths and the radius of the circumcircle of every triangular element are asymptotically proportional to the LFS. Finally, I refine the volume mesh to produce a constrained Delaunay mesh whose tetrahedral elements are well graded and have a radius-edge ratio less than some $\omega^* > 2/\sqrt{3}$ (except ``near'' small input angles). I extend this algorithm to generate truly Delaunay meshes by ensuring that every triangular element on a facet satisfies Gabriel's condition, i.e., its diametral sphere is empty. On an ``apex'' vertex where multiple facets intersect, Gabriel's condition is satisfied by a modified split-on-a-sphere (SOS) technique. On a line where multiple facets intersect, Gabriel's condition is satisfied by mirroring meshes near the line of intersection. The SOS technique ensures that the triangles on a facet near the apex vertex have angles that are proportional to the angular feature size (AFS), a term I define in the paper. All tetrahedra (except ``near'' small input angles) are well graded and have a radius-edge ratio less than $\omega^* > \sqrt{2}$ for a truly Delaunay mesh. The upper bounds for the radius-edge ratio are an improvement by a factor of $\sqrt{2}$ over current state-of-the-art algorithms.
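The radius-edge ratio thresholds quoted above ($\omega^* > 2/\sqrt{3}$ and $\omega^* > \sqrt{2}$) can be checked per element; a small sketch, assuming Euclidean coordinates for the four vertices (my illustration, not the paper's code):

```python
import numpy as np

def radius_edge_ratio(a, b, c, d):
    """Circumradius-to-shortest-edge ratio of the tetrahedron abcd."""
    pts = np.array([a, b, c, d], dtype=float)
    # Circumcenter x solves 2*(p_i - p_0) . x = |p_i|^2 - |p_0|^2 for i = 1..3.
    A = 2.0 * (pts[1:] - pts[0])
    rhs = (pts[1:] ** 2).sum(axis=1) - (pts[0] ** 2).sum()
    center = np.linalg.solve(A, rhs)
    circumradius = np.linalg.norm(center - pts[0])
    shortest = min(np.linalg.norm(pts[i] - pts[j])
                   for i in range(4) for j in range(i + 1, 4))
    return circumradius / shortest
```

For a regular tetrahedron the ratio is $\sqrt{3/8} \approx 0.61$, comfortably below both thresholds.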
... Given points in the plane, the Delaunay graph of this set of points is defined as follows: two points are considered to be adjacent if there exists a disk containing these two points but no other point in its interior (see Figure 5a for an example). Delaunay graphs have plenty of geometric properties which make them of particular interest for algorithms generating triangular meshes [42]. The class of TD-Delaunay graphs is defined in a similar way by replacing disks with equilateral triangles in the previous definition (see Figure 5b for an example). ...
... TD-Delaunay graphs are variants of Delaunay graphs. Given points in the plane, the Delaunay graph of this set of points is defined as follows: two points are considered adjacent if there exists a disk containing these two points but no other point in its interior (see the example in Figure 10a). Delaunay graphs have many geometric properties that make them of particular interest for algorithms generating triangular meshes [42]. The class of TD-Delaunay graphs is defined similarly by replacing the disks in the preceding definition with equilateral triangles (see the example in Figure 10b). ...
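For the Delaunay-graph definition in the two excerpts above, one quick way to enumerate adjacencies for points in general position is to read the edges off a Delaunay triangulation; a sketch using SciPy (an implementation choice of mine, not of the cited thesis):

```python
from itertools import combinations

import numpy as np
from scipy.spatial import Delaunay

def delaunay_graph_edges(points):
    """Index pairs adjacent in the Delaunay graph of a 2D point set."""
    tri = Delaunay(np.asarray(points, dtype=float))
    edges = set()
    for simplex in tri.simplices:
        edges.update(tuple(sorted(e)) for e in combinations(simplex, 2))
    return sorted(edges)
```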
Thesis
In this thesis we look for generalizations of some properties of planar graphs to higher dimensions by replacing graphs with simplicial complexes. In particular we study the Dushnik-Miller dimension, which measures how far a partial order is from being a linear order. When applied to simplicial complexes, this dimension seems to capture some geometric properties. In this vein, we disprove a conjecture asserting that any simplicial complex of Dushnik-Miller dimension at most d+1 can be represented as a TD-Delaunay complex in $\mathbb{R}^d$, a variant of the well-known Delaunay graphs in the plane. We show that any supremum section, a particular simplicial complex related to the Dushnik-Miller dimension, is collapsible, which means that it is possible to reach a single point by removing the faces of the complex in a certain order. We introduce the notion of stair packings and prove that the Dushnik-Miller dimension is connected to contact complexes of such packings. We also prove new results on planar graphs. The following two theorems about representations of planar graphs are proved: any planar graph is an $\llcorner$-intersection graph, and any triangle-free planar graph is a $\{\llcorner, |, -\}$-contact graph. We introduce and study a new notion on planar graphs called Möbius stanchion systems, which is related to questions about unicellular embeddings of planar graphs.
... The algorithm on Pages 7-8 of [9] combined with the theorem on Page 9 and the corollary on Page 10 of [9] give the following. (See [36] for an alternative exposition.) ...
Preprint
We prove that for any $k\in \mathbb{R},$ $v>0,$ and $D>0$ there are only finitely many diffeomorphism types of closed Riemannian $4$-manifolds with sectional curvature $\geq k,$ volume $\geq v,$ and diameter $\leq D$.
... In 2D, Chew [6], [7] and Ruppert [8] proposed algorithms that output triangulations whose triangles have good attributes (such as no small angles) for an input of points and segments. On the same problem, Chen et al. [9] developed a GPU algorithm that unifies both Chew's and Ruppert's approaches to achieve a good speedup of over an order of magnitude. ...
... Output: Quality Mesh T, which is also a CDT. (Refinement steps on the CPU: 1 SplitEncSubsegments_CPU(T, B); 2 SplitEncSubfaces_CPU(T, B); 3 Copy T from CPU to GPU. Refinement steps on the GPU: 4 Lines 1 to 9 as in Algorithm 1; 5 SplitBadElements(T, B); 6 End.) The differences are that (1) it performs SplitEncSubsegments_CPU and SplitEncSubfaces_CPU in the CPU, which correspond to Lines 10 and 11 of gQM3D, respectively; and (2) it replaces SplitBadTets (Line 12) of gQM3D by SplitBadElements (Line 5), which processes all elements of different types in one single routine, and still respects the priority of processing elements from lower to higher dimensions. In this way, the leftover capacity of the GPU during the splitting of subsegments and subfaces can be utilized to eliminate bad tetrahedra and gain good speedup. ...
Conference Paper
Full-text available
We propose the first GPU algorithm for the 3D constrained Delaunay refinement problem. For an input of a piecewise linear complex $\mathcal{G}$ and a constant $B$, it produces, by adding Steiner points, a constrained Delaunay triangulation conforming to $\mathcal{G}$ and containing tetrahedra mostly with radius-edge ratios smaller than $B$. Our implementation of the algorithm shows that it can be an order of magnitude faster than the best CPU software while using similar quantities of Steiner points to produce triangulations of comparable qualities. It thus reduces the computing time of mesh refinement from possibly an hour to a few seconds or minutes for possible use in interactive applications.
... Automatic high-quality unstructured mesh generation is essential to scientific computing, ranging from fluid-flow simulations to optimization procedures [1]. The objective of mesh generation is to generate a mesh distribution that requires a minimum amount of mesh cells to achieve maximum global accuracy of the discrete approximation. ...
... Moreover, the turbulence modelling in Large Eddy Simulation (LES) is closely related to the mesh properties, such as the cell size, thus requiring much higher levels of mesh quality than classical RANS methods [5][4]. Developments over the past decades can be roughly classified as advancing front/layers [6][7], refinement-based Delaunay triangulation [8][1], centroidal Voronoi tessellations [9][10] and particle-based mesh generation [11][12]. ...
Article
Full-text available
In this paper, we propose an unstructured mesh generation method based on Lagrangian-particle fluid relaxation, imposing a global optimization strategy. With the presumption that the geometry can be described as a zero level set, an adaptive isotropic mesh is generated by three steps. First, three characteristic fields based on three modeling equations are computed to define the target mesh-vertex distribution, i.e. target feature-size function and density function. The modeling solutions are computed on a multi-resolution Cartesian background mesh. Second, with a target particle density and a local smoothing-length interpolated from the target field on the background mesh, a set of physically-motivated model equations is developed and solved by an adaptive-smoothing-length Smoothed Particle Hydrodynamics (SPH) method. The relaxed particle distribution conforms well with the target functions while maintaining isotropy and smoothness inherently. Third, a parallel fast Delaunay triangulation method is developed based on the observation that a set of neighboring particles generates a locally valid Voronoi diagram at the interior of the domain. The incompleteness near domain boundaries is handled by enforcing a symmetry boundary condition. A set of two-dimensional test cases shows the feasibility of the method. Numerical results demonstrate that the proposed method produces high-quality globally optimized adaptive isotropic meshes even for high geometric complexity.
... The aspect ratio of a tetrahedron is the ratio between its volume and the cube of its diameter. Bounded aspect ratio of simplices is a common criterion in mesh generation [BEG94, Che89, MV92, Rup93]. ...
... For a subset S ⊂ R 3 , we define the diameter of S to be the maximum Euclidean distance between any pair of points in S. We define the aspect ratio of S to be the ratio between the radius of the smallest ball containing S and the radius of the largest ball inscribed in S. In mesh generation, aspect ratio is a common criterion for individual elements (see for example [BEG94,Che89,MT90,MV92,Rup93]). ...
Conference Paper
We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems, and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC'07, Shklarski-Toledo SIMAX'08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves the running time O(n²) by Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.
... Yan and Wonka [16] studied the generation of maximal Poisson-disk sets with varying radii, and applied the corresponding algorithm to surface remeshing, which empirically makes the triangle angles lie between 32° and 120°. Chew presented an efficient technique based on Delaunay triangulation [17], which guarantees that the angles in the resulting triangles are all between 30° and 120°. Sieger et al. [18] noticed that short edges in Voronoi diagrams are harmful for simulation with polygonal meshes. ...
... We analyzed the meshing quality under uniform as well as adaptive density. For comparison, we implemented some typical methods for 2D mesh generation and optimization, including CDT [17], CVT [11], OVD [18] and ODT [13]. Since CDT is non-iterative in nature, we executed it only once; the other methods were executed with the same or a greater number of iterations than our method. ...
Article
In this paper, we present an efficient method to eliminate obtuse triangles for high-quality 2D mesh generation. Given an initialization (e.g., from a Centroidal Voronoi Tessellation, CVT), a limited number of point insertions and removals are performed to eliminate obtuse or small-angle triangles. A mesh smoothing and optimization step is then applied. These steps are repeated until a mesh of the desired quality is reached. We tested our algorithm on various 2D polygonal domains and verified that it always converges after inserting a small number of new points, and generates high-quality triangulations with no obtuse triangles.
... tally adding new samples to the surface. To that end, one could rely on an algorithm for constructing provably good quality Delaunay triangulations of surfaces (e.g., [4], [9], [10]). However, since the coarse mesh may not be topologically equivalent to the input surface, the refinement process must handle topological changes. ...
... The reason is that the points of R are generated as locally farthest points. This selection strategy tends to produce well-spaced samples, which in turn yield triangles with good quality [10], [14], [15], [36]. We are currently working on a sampling condition that enables us to define, without computing the additional set R, a LRDT homeomorphic to the surface |M|. ...
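The "locally farthest point" selection mentioned above is, in its simplest form, greedy farthest-point sampling; here is a sketch over a finite candidate set (the cited work operates on surfaces, so this planar version is only an analogy, and the names are mine):

```python
import numpy as np

def farthest_point_sample(points, k, start=0):
    """Greedily pick k indices, each farthest from those already picked."""
    pts = np.asarray(points, dtype=float)
    chosen = [start]
    # dist[i] tracks the distance from point i to the nearest chosen point.
    dist = np.linalg.norm(pts - pts[start], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen
```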
Article
Full-text available
We introduce the Hierarchical Poisson Disk Sampling Multi-Triangulation (HPDS-MT) of surfaces, a novel structure that combines the power of multi-triangulation (MT) with the benefits of Hierarchical Poisson Disk Sampling (HPDS). MT is a general framework for representing surfaces through variable resolution triangle meshes, while HPDS is a well-spaced random distribution with blue noise characteristics. The distinguishing feature of the HPDS-MT is its ability to extract adaptive meshes whose triangles are guaranteed to have good shape quality. The key idea behind the HPDS-MT is a preprocessed hierarchy of points, which is used in the construction of a MT via incremental simplification. In addition to proving theoretical properties on the shape quality of the triangle meshes extracted by the HPDS-MT, we provide an implementation that computes the HPDS-MT with high accuracy. Our results confirm the theoretical guarantees and outperform similar methods. We also prove that the Hausdorff distance between the original surface and any (extracted) adaptive mesh is bounded by the sampling distribution of the radii of Poisson-disks over the surface. Finally, we illustrate the advantages of the HPDS-MT in some typical problems of geometry processing.
... Three-dimensional mesh generation is a broad and evolving area of research. Many successful algorithms employ Delaunay-based strategies [5,6,7,8,9,10,11,12,13,14], based on the progressive refinement of a coarse initial Delaunay triangulation. Delaunay-refinement schemes are top-down algorithms, based on the incremental refinement of a bounding Delaunay tessellation. ...
... Elimination is achieved through the insertion of additional Steiner-vertices located at the so-called refinement-points associated with the elements in question. Delaunay-refinement algorithms have been developed for planar [5,6,7], surface [12,13] and volumetric domains [9,15,16]. The reader is referred to, for instance, [4] for additional information and summary. ...
Article
Full-text available
A Frontal-Delaunay refinement algorithm for mesh generation in piecewise smooth domains is described. Built using a restricted Delaunay framework, this new algorithm combines a number of novel features, including: (i) a consistent, conforming restricted Delaunay representation for domains specified as a (non-manifold) collection of piecewise smooth surface patches and curve constraints, (ii) a `protection' strategy for domains containing 1-dimensional features that meet at sharply acute angles, and (iii) a new class of `off-centre' refinement rules designed to achieve high-quality point-placement along embedded 1-dimensional constraints. Experimental comparisons show that the new method outperforms a classical (statically weighted) restricted Delaunay-refinement technique for a number of three-dimensional benchmark problems.
... Nevertheless, the Delaunay refinement method is arguably the most popular due to its theoretical guarantees and performance in practice. Many versions of Delaunay refinement have been suggested in the literature [Che89b, EG01, Mil04, MPW03, Rup93, She97, Üng04]. ...
... Each new vertex is chosen from a set of candidates -the circumcenters of bad triangles (to improve mesh quality) and the mid-points of input segments (to conform to the domain boundary). Chew [Che89b] showed that Delaunay refinement can be used to produce quality-guaranteed triangulations in two dimensions. Ruppert [Rup93] extended the technique for computing not only quality-guaranteed but also size-optimal triangulations. ...
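These guarantees hinge on the link between the circumradius-to-shortest-edge ratio and the minimum angle: by the law of sines the shortest edge subtends the smallest angle and has length $2R\sin\theta_{\min}$, so a ratio bound $B$ gives $\theta_{\min} \geq \arcsin(1/(2B))$. A small helper for the angle side of that check (my sketch, not code from the cited works):

```python
import numpy as np

def min_angle_deg(a, b, c):
    """Smallest interior angle of the triangle abc, in degrees."""
    pts = [np.asarray(p, dtype=float) for p in (a, b, c)]
    angles = []
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return min(angles)
```

For example, min_angle_deg((0, 0), (1, 0), (0, 1)) returns 45.0.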
Article
We propose a new refinement algorithm to generate size-optimal quality-guaranteed Delaunay triangulations in the plane. The algorithm takes $O(n \log n + m)$ time, where $n$ is the input size and $m$ is the output size. This is the first time-optimal Delaunay refinement algorithm.
... However, relying solely on this approach can lead to a mesh of suboptimal quality. To address this issue, the researchers in [4] and [14] proposed a technique to enhance the quality of triangles within a mesh by adding new nodes at the centers of the circumcircles or circumspheres of the elements. This method ensures a lower bound on every angle present in the mesh, consequently boosting the quality of the triangles. ...
Article
Full-text available
In this work, we present improvements to the meshing algorithm, Distmesh, specifically regarding creating a nonuniform triangular mesh. We introduce a novel strategy for determining the placement of nodes in the domain targeted to be meshed, to generate an initial distribution based on a user-defined mesh size function. In our algorithm, the creation of well-shaped triangles, similar to the Distmesh algorithm, can be achieved by connecting the nodes using Delaunay triangulation and applying in the smoothing process a new internode force with an attractive effect. Finally, through a series of tests, we validate the benefits of our proposed enhancements with regard to computational efficiency and robustness while globally preserving mesh quality.
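Distmesh-style smoothing uses internode forces; as a generic stand-in for the smoothing step mentioned in the abstract above, here is plain Laplacian smoothing (my simplification, not the paper's new attractive-force scheme):

```python
import numpy as np

def laplacian_smooth(points, triangles, fixed, n_iter=10):
    """Repeatedly move each free node to the centroid of its mesh neighbors.

    `fixed` should contain all boundary-node indices so the domain shape
    is preserved; Distmesh itself uses a force-based update instead.
    """
    pts = np.asarray(points, dtype=float).copy()
    nbrs = {i: set() for i in range(len(pts))}
    for t in triangles:
        for i in t:
            nbrs[i].update(j for j in t if j != i)
    for _ in range(n_iter):
        for i, ns in nbrs.items():
            if i not in fixed and ns:
                pts[i] = pts[list(ns)].mean(axis=0)
    return pts
```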
... Qing, W. and Angelo, L.D. combined the partition method, the incremental inserting algorithm, and the triangular mesh growth algorithm to generate improved high-quality triangular meshes [18,19]. Chew, L.P. proposed an effective automatic generation of ideal triangular divisions, ensuring the generated triangles have a bounded range of angles [20]. In recent years, most innovative research on Delaunay triangular meshes has been directed towards the point localization problem for insertion nodes and improving the triangular mesh algorithm. ...
Article
Full-text available
Wind energy resources in complex terrain are abundant. However, the default mesh division of various terrains is often insufficiently specific, particularly for wind resource analysis, and the mesh division method can diminish computational efficiency and quality under intricate topographical conditions. This article presents a combined algorithm for generating Delaunay triangular meshes in mountainous terrains with significant variations in terrain. The algorithm considers the uncertainty of inner nodes and mesh quality, addressing both the advantages and drawbacks of the Delaunay triangular mesh. The proposed method combines the triangular center-of-gravity insertion algorithm with an incremental inserting algorithm. Its main goal is to enhance the quality and efficiency of mesh generation, specifically tailored for this type of complex terrain. The process involves discretizing boundary edges and contour lines to obtain point sets, screening boundary triangles, comparing the triangle area to the average boundary triangle area, and combining with the incremental inserting algorithm to generate a triangular mesh of complex terrain. After an initial debugging of the mesh, it is determined whether additional internal nodes need to be inserted at the triangle centers of gravity. Upon implementing actual mountainous terrain in the simulation software, a comparison of the resulting meshing demonstrates that the proposed method is highly suitable for complex mountainous terrain with significant variations in elevation. Additionally, it effectively improves the quality of the Delaunay triangular mesh and reduces the occurrence of deformed cells during the meshing process.
... An earlier and alternative version of the triangle and edge splitting technique was presented by Ruppert [34] and Chew [41], [42]. These approaches were then combined and modified by Shewchuk [3]. ...
Article
Full-text available
This work is a comparative study of the effect of Delaunay triangulation algorithms on discretization error for conduction-convection conservation problems. A structured triangulation and many unstructured Delaunay triangulations using three popular algorithms for node placement strategies are used. The numerical method employed is the vertex-centered finite volume method. It is found that when the computational domain can be meshed using a structured triangulation, the discretization error is lower for structured triangulations compared to unstructured ones only for low Peclet number values, i.e., when conduction is dominant. However, as the Peclet number is increased and convection becomes more significant, the unstructured triangulations reduce the discretization error. Also, no statistical correlation between triangulation angle extremums and the discretization error is found using 200 samples of randomly generated Delaunay and non-Delaunay triangulations. Thus, the angle extremums cannot be an indicator of the discretization error on their own and need to be combined with other triangulation quality measures, which is the subject of further studies. https://publications.waset.org/10013433/delaunay-triangulations-efficiency-for-conduction-convection-problems
... Concerning high-quality mesh generation, various mesh generation techniques have also been developed, e.g., advancing front/layer methods [253][254], which initiate meshing from the boundary to the domain interior; refinement-based Delaunay triangulation [255][256], which inserts new Steiner points into a Delaunay mesh; centroidal Voronoi tessellations (CVT) [257][258][259][260]; and particle-based methods [261][262][263][264][265], in which a relaxation strategy based on the physical analogy between a simple mesh and a truss structure is applied. Among them, the particle-based mesh generation method has been widely studied due to its efficiency and versatility. ...
Article
Full-text available
Since its inception, the fully Lagrangian meshless smoothed particle hydrodynamics (SPH) method has experienced a tremendous enhancement in methodology and impacted a range of multi-physics applications in science and engineering. This review presents a concise survey of the latest developments and achievements of the SPH method, including: (1) a brief review of theory and fundamentals with kernel corrections, (2) the Riemann-based SPH method with dissipation limiting and high-order data reconstruction using MUSCL, WENO and MOOD schemes, (3) particle neighbor searching with particle sorting and efficient dual-criteria time stepping schemes, (4) total Lagrangian formulation with stabilized, dynamic relaxation and hourglass control schemes, (5) fluid-structure interaction schemes with interface treatments and multi-resolution discretizations, (6) novel applications of particle relaxation in SPH methodology for mesh and particle generation. Last but not least, benchmark tests for validating computational accuracy, convergence, robustness and efficiency are also supplied accordingly.
... Two popular techniques for generating unstructured meshes are those based on the advancing front technique [14] or on Delaunay mesh refinement [8,17,19]. In addition, there exist several lesser-known techniques such as quadtree meshing [23], bubble packing [20], and hybrid techniques combining some of the above [15]. ...
Article
Full-text available
This work describes a concise algorithm for the generation of triangular meshes with the help of standard adaptive finite element methods. We demonstrate that a generic adaptive finite element solver can be repurposed into a triangular mesh generator if a robust mesh smoothing algorithm is applied between the mesh refinement steps. We present an implementation of the mesh generator and demonstrate the resulting meshes via examples.
... Geißler [40] chooses the Delaunay triangulation to introduce new breakpoints to the grid, an approach that is also widespread in other domains; see e.g. [18,32,50,83]. In contrast, Rebennack et al. [79] introduce their own refinement algorithm to construct triangulations satisfying a given error bound for arbitrary, indefinite, bivariate functions. ...
Thesis
This thesis is about the development and implementation of piecewise linear approximation techniques especially adapted to bilinear constraints of the form f(x, y) = xy. Those bilinear functions are a special class of nonlinearities arising in nonconvex MINLPs. Whereas previous work on piecewise linear approximation techniques predominantly considers the general case, we concentrate on bilinear functions. In doing so, we want to improve the corresponding tessellation of a piecewise linear approximation with regard to the number of polytopes used. The focus on bilinear functions is driven by their widespread occurrence in the optimization of energy systems, for example in gas networks, water supply networks, or power systems. We are especially interested in the optimization of off-grid hybrid energy systems, which are a promising way to electrify off-grid rural areas. To this end we deduce an MINLP formulation of an autarkic mini-grid of households comprising local solar panels, energy storage devices and diesel generators, as well as different kinds of consumption loads, with the objective of maximizing the global welfare of the whole community. Additionally, we include transmission losses occurring between the households, and consumption loads that are deferrable in time. As the resulting model does not constitute a competitive equilibrium, due to the nonconvexities caused by quasi-fixed cost structures and temporal coupling, we also present a suitable pricing scheme based on existing work. The main focus of this thesis is the development of a proper tessellation well suited to bilinear functions. To this end, we review and expand existing studies concerning the linearization error over simplicial domains, and upon this we derive an optimal triangulation and tessellation, respectively, on R^2 for bilinear functions. Afterwards, we restrict our considerations to compact domains. To the best of our knowledge, we derive the first MINLP formulation of an optimal triangulation problem for bilinear functions with the objective of minimizing the number of simplices necessary to satisfy a given approximation accuracy. As the corresponding problem is too hard for up-to-date MINLP solvers, except for some very small instances, we also introduce some proper uniform tessellations of rectangular domains especially adapted to the characteristics of bilinear functions. For one of these, we are also able to give an MIP model which introduces only a number of binaries and extra constraints logarithmic in the number of simplices. The developed concepts are implemented within a new bilinear constraint handler integrated into the non-commercial solver SCIP. Finally, we compare well-established triangulations with our newly derived tessellations with respect to the number of elements needed to reach the predefined error bound. Afterwards, we evaluate the performance and behavior of the developed computational framework on the welfare maximization problem, using a set of test instances of off-grid hybrid energy systems supplying a small community of 3 to 15 households over a time horizon ranging from 24 to 96 hours with a time discretization of one hour.
... This triangulation has the property that the circumcircle of any triangle is empty. The first provable Delaunay refinement algorithm was given by Chew (1989); it takes a polygonal domain as input and generates a triangular mesh whose angles are all between 30° and 120°. In 1992, Ruppert (1993) gave a Delaunay refinement algorithm with guarantees on good grading and size optimality. ...
Article
Full-text available
This paper describes a framework to generate an unstructured Delaunay mesh of a two-dimensional domain whose boundary is specified by point cloud data (PCD). The assumption is that the PCD is sampled from a smooth 1-manifold without a boundary definition and is significantly dense (at least ∊-sampled where ∊<1). Presently, meshing such a domain requires two explicit steps, namely the extraction of a model definition from the PCD and the use of that model definition to guide the unstructured mesh generation. For a densely sampled PCD, the curve reconstruction process depends on the size of the input PCD and can become a time-consuming overhead. We propose an optimized technique that bypasses the explicit step of curve reconstruction by implicit access to the model information from a well-sampled PCD. A mesh thus generated is optimal, as the fineness of the mesh is not dictated by the sampling of the PCD, but only by the geometric complexity of the underlying curve. The implementation and experiments of the proposed framework show significant improvement in expense over the traditional methodology. The main contribution of this paper is the circumvention of the explicit, time-consuming step of boundary computation, which is a function of the PCD sampling size, and the direct generation of a mesh whose complexity is dictated by the geometry of the domain. Key points: The algorithm gives a size-optimal triangular mesh directly from point cloud data. The intermediate step of model definition can be skipped completely. The generated mesh is independent of the number of points in the data. Mesh size and computational time depend on the geometric complexity of the curve. For dense samples, this method is very efficient compared to traditional methods.
... After partitioning the domain, several robust algorithms can be utilized to build a massive conforming mesh in parallel, among which we can mention Delaunay triangulation based techniques [17], advancing front [19][20][21], octree/quadtree based [22][23][24][25] and edge subdivision methods [26]. In the Delaunay triangulation algorithm, a nonlinear system of equations is iteratively solved and new mesh nodes are created/relocated until a set of constraints on the mesh quality and element size are satisfied [27,28]. A parallel 3D implementation of this algorithm is introduced in [17], which achieves a linear speedup using data-parallel architectures and by expanding open faces via a bucketing technique. ...
Article
Full-text available
We present the parallel implementation of a non-iterative mesh generation algorithm, named Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR). The partitioning phase is tightly integrated with a microstructure reconstruction algorithm to determine the optimized arrangement of partitions based on shapes/sizes of particles. Each processor then creates a structured sub-mesh with one layer of ghost elements for its designated partition. h-adaptivity and r-adaptivity phases of the CISAMR algorithm are also carried out independently in each sub-mesh. Processors then communicate to merge mesh/hanging nodes along faces shared between neighboring partitions. The final mesh is constructed by performing face-swapping and element subdivision phases, after which a minimal communication phase is required in 3D CISAMR to merge new nodes created on partition boundaries. Several example problems, together with scalability tests demonstrating a super-linear speedup, are provided to show the application of parallel CISAMR for generating massive conforming meshes.
... (3) Delaunay Triangulation ensures that the circumcircle/circumsphere associated to each triangle/tetrahedron does not contain any other point in its interior. This feature makes Delaunay-based methods [Chew, 1989;Ruppert, 1995;Chew, 1997;Shewchuk, 1998] robust and efficient. However, in 3-D they can generate very poorly shaped 'sliver' tetrahedra with four almost coplanar vertex nodes and a near zero volume. ...
Article
Full-text available
We present 2-D, 3-D, and spherical mesh generators for triangular and tetrahedral elements. The mesh nodes are treated as if they were linked by virtual springs that obey Hooke's law. Given the desired lengths for the springs, a finite element problem is solved for optimal (static equilibrium) nodal positions. A 'guide-mesh' approach allows the user to define embedded high-resolution sub-regions within a coarser mesh. The method converges rapidly. For example, the algorithm is able to refine within a few iterations a specific region embedded in an unstructured tetrahedral spherical shell so that the edge-length factor $l_{0r}/l_{0c} = 1/33$ where $l_{0r}$ and $l_{0c}$ are the desired spring length for elements inside the refined and coarse regions respectively. The algorithm also includes routines to locally improve the quality of the mesh and to avoid ill-shaped 'sliver-like' tetrahedra. We include a geodynamic modelling example as a direct application of the mesh generator.
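A minimal sketch of one explicit relaxation step for the virtual-spring idea in the abstract above; the edge list, rest lengths L0, fixed-node set, and step size dt are illustrative assumptions (the authors instead solve a finite element problem for the static equilibrium directly):

```python
import numpy as np

def spring_step(points, edges, L0, fixed, dt=0.2):
    """One explicit Hooke's-law relaxation step on a spring network.

    Each edge pulls or pushes its endpoints toward its desired length
    L0[e]; nodes listed in `fixed` (e.g. boundary nodes) do not move.
    """
    pts = np.asarray(points, dtype=float).copy()
    force = np.zeros_like(pts)
    for e, (i, j) in enumerate(edges):
        d = pts[j] - pts[i]
        L = np.linalg.norm(d)
        f = (L - L0[e]) * d / L  # positive -> edge too long -> pull together
        force[i] += f
        force[j] -= f
    force[list(fixed)] = 0.0
    return pts + dt * force
```

Iterating such steps until the forces vanish approximates the static-equilibrium nodal positions the abstract describes.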
... Blue-noise sampling is a traditional research direction in graphics, playing an important role in realistic rendering, antialiasing, and point-based rendering. Its main goal is to generate a uniformly distributed, unbiased point set: in an ideal sampling point set, adjacent samples maintain a minimum distance r, and the disks of radius r/2 centered at the samples cover the entire sampling domain. Such a point set is called a Maximal Poisson-disk Sampling (MPS). The quality of a sampling point set is mainly evaluated through its spectral properties; for more details on blue-noise sampling, see the survey by Lagae [81]. Here we focus on the application of blue-noise sampling to remeshing. Recent research has found that blue-noise point sets can be used to generate high-quality mesh models. Ebeida et al. [82] first showed that, in the 2D uniform-radius case, the Delaunay triangulation of an MPS point set yields a high-quality triangular mesh whose triangle angles are bounded within [30°, 120°]; these bounds are very important for many applications that solve finite element problems [4]. In fact, a similar proof of these bounds was already given in Chew's 1989 paper [83]. Subsequently, Yan and Wonka [84,85] first proposed varying-radius (adaptive) maximal Poisson-disk sampling. In the 2D case, the Delaunay triangulation is replaced by a regular triangulation in which the weight of each vertex equals the square of its sampling radius. Yan and Wonka then generalized 2D varying-radius sampling to mesh surfaces, performing adaptive maximal Poisson-disk sampling on a mesh surface by computing its restricted regular triangulation, and proposed an MPS-based remeshing method. Experiments show that although the triangular meshes generated from blue-noise point sets lack a regular mesh structure, the triangle quality is still very high, and they can be used in physical applications such as crack simulation. Later, Guo et al. [86] further improved the efficiency of MPS on mesh surfaces: after simple preprocessing, conflict detection is performed locally around sample points using Dijkstra's method, and discrete clustering [56] is then used to extract the dual triangular mesh for remeshing. Other recent blue-noise meshing work includes the following: Yan et al. [87] extended the 2D uniform-radius blue-noise sampling method based on farthest-point optimization to mesh surfaces and adaptive sampling, proposing a remeshing method based on farthest-point optimization. Meanwhile, Yan et al. [88] applied adaptive maximal Poisson-disk sampling to isosurface extraction, directly extracting high-quality Poisson-disk-sampled meshes from isosurfaces or implicit functions and avoiding the aliasing artifacts of traditional Marching Cubes. Other blue-noise sampling methods [89,90,91,92,93,94,95,96] do not directly address the remeshing of mesh surfaces; however, in recent work we have evaluated and compared the meshing results of the various methods in detail; for specifics see the survey [97]. Figure 7 shows several blue-noise meshing results. For reasons of space, we do not discuss them in detail here. ...
... Many other methods were proposed to reduce mesh errors through refinements that guarantee good elements. Some of these techniques can be seen in the works of Chew [1], [2], Frey [4], and Shewchuk [11]. Most mesh generation techniques that have emerged in the literature derive from the Delaunay algorithm [3], [8], [9], [10]. ...
... By itself, the Delaunay criterion is only a method for connecting existing points; a point insertion method is also required. Point insertion can be accomplished using a regular distribution of points, or by recursively adding points on the edges [1], at the centroids [2], at circumcentres [3][4][5], or at points along the Voronoi segments [6] of triangles from the initial triangulation. ...
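For the circumcentre option, the inserted point is the centre of the triangle's circumcircle; a minimal Python sketch follows (degenerate, collinear triangles would make the denominator vanish and are not handled here):

def circumcentre(a, b, c):
    # Centre of the circle through the 2-D points a, b, c: the point
    # equidistant from all three vertices, which Delaunay refinement inserts
    # when the triangle is poorly shaped.
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    a2 = a[0] ** 2 + a[1] ** 2
    b2 = b[0] ** 2 + b[1] ** 2
    c2 = c[0] ** 2 + c[1] ** 2
    ux = (a2 * (b[1] - c[1]) + b2 * (c[1] - a[1]) + c2 * (a[1] - b[1])) / d
    uy = (a2 * (c[0] - b[0]) + b2 * (a[0] - c[0]) + c2 * (b[0] - a[0])) / d
    return (ux, uy)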
Article
Full-text available
There are now many successful applications of 2D finite difference flood simulations on rectangular grids. However, finite volume algorithms are now starting to take over as the preferred choice for simulating flood inundation. The use of finite volume algorithms allows the simulation to be based on irregular meshes. Yet many users are not convinced of the benefits of irregular meshing. This is partly because meshing adds a rather unwelcome extra process to the modelling, but more likely because the process of generating irregular meshes has been problematic and time-consuming. In the rush to develop excellent finite volume flow engines, meshing has been rather left behind. Yet the quality of the flow modelling depends on the quality of the meshing. The authors have been working on innovative mesh generation techniques. The aim has been to come up with a faster, more reliable mesh generation process, but also to improve the resulting mesh in order to speed up the hydraulic simulation and to generate better quality flood inundation results. This paper discusses and illustrates these advances in meshing. Results are presented to show how well the meshing techniques deal with problematic boundaries. The resulting meshes are applied to test cases to show the impact of either grids or different qualities of mesh on flood inundation results for a variety of circumstances.
Thesis
The aim of this work is to propose a practical and general stopping criterion using an a posteriori approach that relies on the error estimates available from the mesh adaptation procedure. This stopping criterion has to be robust and applicable to the different types of equations used to describe the complex physics involved in a conjugate heat transfer problem. The final goal is to prove that, with such a stopping criterion, it is possible to drastically reduce the CPU time required for the solution of the linear system that stems from the finite element discretization.
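The general shape of such a criterion can be sketched as follows. The Python fragment is schematic only, with assumed names (step, residual, eta_h and the safety factor gamma are placeholders, not the thesis's exact criterion): iterating the linear solver far below the a posteriori estimate of the discretization error buys no additional accuracy, so the algebraic residual only needs to be driven a safety factor below that estimate.

def solve_with_adaptive_stop(step, residual, eta_h, gamma=0.1, max_iter=1000):
    # step(): one iteration of the linear solver; residual(): current
    # algebraic residual norm; eta_h: a posteriori discretization error
    # estimate supplied by the mesh adaptation procedure.
    for k in range(max_iter):
        if residual() <= gamma * eta_h:   # algebraic error now negligible
            return k                      # stop early and save CPU time
        step()
    return max_iter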
Article
We describe a method to simplify a 2D triangle mesh through decimation while preserving its quality by means of maintaining a strict lower bound on inner angles. Conformance to boundaries, interfaces, and feature constraints is preserved as well. Multiple options and strategies for the choice of local decimation operators and for the settling of geometric degrees of freedom are proposed and explored. A systematic evaluation leads to a final recommendation for the most beneficial approach. The resulting method enables the efficient generation of more parsimonious meshes than those obtained from existing methods, while exhibiting the same quality in terms of worst element shape, as is relevant, for instance, in finite element analysis and numerical simulation.
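The guard at the heart of such quality-preserving decimation can be sketched directly; the following Python fragment is an illustrative check, not the paper's exact procedure: a local operation is committed only if every triangle it creates keeps its smallest inner angle above the bound alpha_min (in radians).

import math

def min_angle(a, b, c):
    # Smallest inner angle of the 2-D triangle (a, b, c).
    def ang(p, q, r):  # angle at vertex p
        u = (q[0] - p[0], q[1] - p[1])
        v = (r[0] - p[0], r[1] - p[1])
        cosv = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        return math.acos(max(-1.0, min(1.0, cosv)))
    return min(ang(a, b, c), ang(b, c, a), ang(c, a, b))

def decimation_allowed(new_triangles, alpha_min):
    # Accept the decimation operation only if the strict lower angle bound
    # survives in every triangle the operation would create.
    return all(min_angle(*t) >= alpha_min for t in new_triangles)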
Article
Optimal transportation plays a fundamental role in many fields in engineering and medicine, including surface parameterization in graphics, registration in computer vision, and generative models in deep learning. For a quadratic distance cost, the optimal transportation map is the gradient of the Brenier potential, which can be obtained by solving the Monge-Ampère equation; the problem in turn reduces to a geometric convex optimization problem. The Monge-Ampère equation is highly non-linear, and during the solving process the intermediate solutions have to remain strictly convex. Moreover, the accuracy of the discrete solution heavily depends on the sampling pattern of the target measure. In this work, we propose a self-adaptive sampling algorithm which greatly reduces the sampling bias and improves the accuracy and robustness of the discrete solutions. Experimental results demonstrate the efficiency and efficacy of our method.
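For orientation, the quadratic-cost setting above can be stated in standard notation (this is the textbook formulation, not text taken from the abstract): the optimal map is the gradient of a convex Brenier potential $u$, that is $T(x) = \nabla u(x)$, and conservation of mass yields the Monge-Ampère equation $\det\big(D^2 u(x)\big) = f(x) / g(\nabla u(x))$, where $f$ and $g$ are the source and target densities; the requirement that $u$ stay strictly convex throughout the solve is the constraint the abstract refers to.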
Article
Full-text available
We describe a new method for approximating an implicit surface F by a piecewise‐flat triangulated surface whose triangles are as close as possible to equilateral. The main advantage is improved mesh quality which is guaranteed for smooth surfaces. The GradNormal algorithm generates a triangular mesh that gives a piecewise‐differentiable approximation of F, with angles between 35.2 and 101.5 degrees. As the mesh size approaches 0, the mesh converges to F through surfaces that are isotopic to F.
Article
This work introduces a parallel software platform we developed, 3Ddevice, which is suitable for quantitative simulation of three-dimensional semiconductor devices and their radiation effects. The software is jointly developed by the Academy of Mathematics and Systems Science of the Chinese Academy of Sciences and the Microsystem and Terahertz Research Center of the China Academy of Engineering Physics. It can directly calculate a device's electrical response, the accumulation processes of charged oxide traps and interface traps, and the shift of the electrical response after irradiation damage. We have simulated the total ionizing dose effect and the low-dose-rate enhancement effect, and the simulation results are quantitatively in good agreement with experimental data. The software adopts a client/server (C/S) architecture and is divided into two major subsystems: a local client and a remote computing end. The client part is composed of pre-processing, post-processing, control, and communication modules. The main functions of the control module are the mounting of the solvers and the construction and management of the numerical simulation process. The pre-processing module is primarily used for geometric modelling and mesh generation. The communication module is used to initialize solver parameters and to monitor the status of the hardware system. The post-processing module is used for analysis and visualization of the simulation results from the solvers. The solver module includes two solvers: DevSim, for general semiconductor device simulation based on the drift-diffusion (DD) model, and TIDSim, for simulation of radiation effects. The solvers are built on the three-dimensional parallel adaptive finite element platform PHG [1]. They use MPI communication to support massive distributed parallelism and can already simulate the ionization damage effect and the electrical response of a device with a billion-scale mesh. The software system will continue to be developed and improved; for detailed and up-to-date usage, please refer to its manual.
Chapter
In the context of three-dimensional acquisition and elaboration, it is essential to maintain a balanced approach between model accuracy and required resources. As a possible solution to this problem, the present paper proposes a method to obtain accurate and lightweight meshes of a real environment using the Microsoft® HoloLens™ as a device for point cloud acquisition. Firstly, we describe an empirical procedure to improve 3D models, with optimal parameters found by means of a genetic algorithm. Then, a systematic review of the indices for evaluating mesh quality is developed, in order to quantify and compare the quality of the obtained outputs. Finally, to check the quality of the proposed approach, a reconstructed model is tested in a virtual scenario implemented in the Unity® 3D game engine.
Thesis
Large-scale 3D numerical simulation of complex flows, such as turbulence in aeronautics, involves considerable computation time to reach industrial accuracy. To reduce this time, the discretization of the domain (the mesh) can be iteratively adapted to the error of the solution so as to reduce the number of points required, and parallelism can be used to absorb the computational load. Nevertheless, it is not trivial to adapt this type of algorithm to take advantage of the specific features of recent computers. Here, we aim to provide kernels for anisotropic adaptation of surface meshes that are suited to machines with low-clocked manycore processors and/or asymmetric memory latency. The difficulty is to expose strong locality (a static and minimal impacted neighbourhood, no dynamic interleaving of operations, no local embedding of the surface) in order to maximize the throughput of the cores, while remaining effective (fast convergence in error and element quality, minimal deformation of the surface) so as to stay in line with the state of the art. The work undertaken lies at the interface between computer science and applied mathematics.
Article
Full-text available
Surface remeshing plays a significant role in computer graphics and visualization. Numerous surface remeshing methods have been developed to produce high quality meshes. Generally, mesh quality is improved in terms of vertex sampling, regularity, triangle size and triangle shape. Many such surface remeshing methods are based on Delaunay refinement. In particular, some surface remeshing methods generate high quality meshes by performing planar Delaunay refinement on the conformal uniformization domain. However, most of these methods can only handle topological disks. Even though some methods can cope with high-genus surfaces, they require partitioning a high-genus surface into multiple simply connected segments and remeshing each segment in the parameterized domain. In this work, we propose a novel surface remeshing method based on the uniformization theorem, using dynamic discrete Yamabe flow and Delaunay refinement, which is capable of handling surfaces with complicated topologies without the need for partitioning. The proposed method has two main merits: dimension reduction, since it converts all 3D surface remeshing to 2D planar meshing; and theoretical rigor, since the existence of the constant curvature measures and a lower bound on the corner angles of the generated meshes can be proven. Experimental results demonstrate the efficiency and efficacy of our proposed method.
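In the standard formulation of discrete Yamabe flow (stated here for context, with notation not taken from the abstract), each vertex $i$ carries a logarithmic conformal factor $u_i$ and a discrete Gaussian curvature $K_i$, and the flow $du_i/dt = \bar{K}_i - K_i$ drives the curvatures toward prescribed targets $\bar{K}_i$; at convergence the mesh carries a constant-curvature metric, on which the planar Delaunay refinement of such a method can then operate.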
Thesis
Full-text available
(http://hdl.handle.net/2123/13148) The field of mesh generation concerns the development of efficient algorithmic techniques to construct high-quality tessellations of complex geometrical objects. In this thesis, I investigate the problem of unstructured simplicial mesh generation for problems in two- and three-dimensional spaces, in which meshes consist of collections of triangular and tetrahedral elements. I focus on the development of efficient algorithms and computer programs to produce high-quality meshes for planar, surface and volumetric objects of arbitrary complexity. I develop and implement a number of new algorithms for mesh construction based on the Frontal-Delaunay paradigm - a hybridisation of conventional Delaunay-refinement and advancing-front techniques. I show that the proposed algorithms are a significant improvement on existing approaches, typically outperforming the Delaunay-refinement technique in terms of both element shape- and size-quality, while offering significantly improved theoretical robustness compared to advancing-front techniques. I verify experimentally that the proposed methods achieve the same element shape- and size-guarantees that are typically associated with conventional Delaunay-refinement techniques. In addition to mesh construction, methods for mesh improvement are also investigated. I develop and implement a family of techniques designed to improve the element shape quality of existing simplicial meshes, using a combination of optimisation-based vertex smoothing, local topological transformation and vertex insertion techniques. These operations are interleaved according to a new priority-based schedule, and I show that the resulting algorithms are competitive with existing state-of-the-art approaches in terms of mesh quality, while offering significant improvements in computational efficiency. Optimised C++ implementations for the proposed mesh generation and mesh optimisation algorithms are provided in the JIGSAW and JITTERBUG software libraries.
Conference Paper
Full-text available
The meshing process has been automated through several algorithms under various computational systems in an attempt to keep up with the increasing "push" of meshing technology. In fact, human analysts expect to mesh complex domains constituted of thousands or even millions of elements with a low level of interaction. In spite of this high transparency, one difficulty arises: how to develop the necessary sensitivity to analyse the relationship between the mesh quality, in a global sense, and the element quality, in a local sense, with a minimum number of interactions? The objective of this paper is to provide a new quality measure, called the 'perimetral ratio' ("Relação Perimetral – RP"), to evaluate and compare both unstructured 2D meshes, in a global sense, and their triangular elements, in a local sense. This concept introduces five propositions and a virtual element of comparison called the 'equivalent ideal triangle' ("Triângulo Ideal Equivalente – TIE"). Its functionality becomes clear through a simple and didactic two-dimensional meshing web application that permits the evaluation of a given mesh, or a set of meshes, offering a measure of global quality and a corresponding local metric for each element. The web application was designed in an object-oriented fashion using the Java language, in order to broaden access and to facilitate mesh construction and analysis. To demonstrate the process and the application's capabilities, a set of meshes of Lake Superior, generated by the Delaunay refinement process, is measured.
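Since the abstract does not reproduce the exact definition of the perimetral ratio, the Python sketch below is only a plausible illustration of a perimeter-based quality measure: each triangle is compared with the equilateral triangle of equal perimeter (one natural reading of the 'equivalent ideal triangle'), giving 1 for an equilateral element and values tending to 0 as the element degenerates; the global measure shown is simply the mean.

import math

def perimetral_quality(a, b, c):
    # Hypothetical local metric (TIE-like comparison): area of the triangle
    # divided by the area of the equilateral triangle with the same perimeter.
    sides = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
    p = sum(sides)
    s = p / 2.0  # semi-perimeter for Heron's formula
    area = math.sqrt(max(0.0, s * (s - sides[0]) * (s - sides[1]) * (s - sides[2])))
    ideal = (math.sqrt(3.0) / 4.0) * (p / 3.0) ** 2  # equilateral, same perimeter
    return area / ideal if ideal > 0.0 else 0.0

def global_quality(triangles):
    # One possible global measure: the mean of the local qualities.
    q = [perimetral_quality(*t) for t in triangles]
    return sum(q) / len(q)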