Conference Paper


# A Remeshing Approach to Multiresolution Modeling


## Abstract

Providing a thorough mathematical foundation, multiresolution modeling is the standard approach for global surface deformations that preserve fine surface details in an intuitive and plausible manner. A given shape is decomposed into a smooth low-frequency base surface and high-frequency detail information. Adding these details back onto a deformed version of the base surface results in the desired modification. Using a suitable detail encoding, the connectivity of the base surface is not restricted to be the same as that of the original surface. We propose to exploit this degree of freedom to improve both robustness and efficiency of multiresolution shape editing. In several approaches the modified base surface is computed by solving a linear system of discretized Laplacians. By remeshing the base surface such that the Voronoi areas of its vertices are equalized, we turn the unsymmetric surface-related linear system into a symmetric one, such that simpler, more robust, and more efficient solvers can be applied. The high regularity of the remeshed base surface further removes numerical problems caused by mesh degeneracies and results in a better discretization of the Laplacian operator. The remeshing is performed on the low-frequency base surface only, while the connectivity of the original surface is kept fixed. Hence, this functionality can be encapsulated inside a multiresolution kernel and is thus completely hidden from the user.
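The decompose-deform-reconstruct pipeline the abstract describes can be sketched in a few lines. This is a toy illustration only, assuming a uniform ("umbrella") graph Laplacian in place of the cotangent discretization the paper discusses, and world-space detail offsets instead of a local-frame detail encoding; all function names are hypothetical.

```python
import numpy as np

def umbrella_laplacian(n, edges):
    """Uniform ("umbrella") graph Laplacian for n vertices — a crude
    stand-in for the discretized Laplacian mentioned in the abstract."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

def multires_edit(V, edges, deform, smooth_steps=50, step=0.1):
    """Decompose V into a smooth base surface plus detail offsets,
    deform the base, then add the details back onto the result."""
    L = umbrella_laplacian(len(V), edges)
    base = V.copy()
    for _ in range(smooth_steps):      # explicit Laplacian smoothing
        base -= step * (L @ base)      # produces the low-frequency base
    detail = V - base                  # high-frequency detail offsets
    return deform(base) + detail       # re-apply details after editing
```

For a pure translation the details cancel exactly, so the edited surface is the translated original — a quick sanity check on the decomposition.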

## No full-text available

... Isotropic remeshing is used to optimize a mesh into an isotropic one. It is constructed from three basic steps: split, collapse, and flip [47], which respectively cut long edges, delete short edges, and optimize the valence of the mesh after splits and collapses. Compared to the original isotropic remeshing, we change the implementation to achieve better results. ...
... If the collapse number is equal to i_s, then stop. The details of valence optimization and tangent space smoothing are introduced in [47]. Based on this modification, the point number can be controlled and the generation of obtuse triangles is limited, which improves the quality of the mesh. ...
... Based on this modification, the point number can be controlled and the generation of obtuse triangles is limited, which improves the quality of the mesh. Compared with the original isotropic remeshing method [47], our mesh optimization improves the quality of meshes. In Figure 13, we compare the reconstructed meshes. ...
Article
Full-text available
Mesh reconstruction from a 3D point cloud is an important topic in the fields of computer graphics, computer vision, and multimedia analysis. In this paper, we propose a voxel structure-based mesh reconstruction framework. It provides the intrinsic metric to improve the accuracy of local region detection. Based on the detected local regions, an initial reconstructed mesh can be obtained. With the mesh optimization in our framework, the initial reconstructed mesh is optimized into an isotropic one with the important geometric features such as external and internal edges. The experimental results indicate that our framework shows great advantages over peer ones in terms of mesh quality, geometric feature keeping, and processing speed. The source code of the proposed method is publicly available at: https://github.com/vvvwo/Parallel-Structure-for-Meshing.
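The split/collapse loop that the citing contexts above describe can be illustrated on a 1D analogue. The sketch below remeshes a polyline toward a target segment length using the standard 4/3 and 4/5 thresholds from incremental isotropic remeshing; it omits edge flips and tangential smoothing, which have no 1D counterpart, and the function name is hypothetical.

```python
def remesh_polyline(pts, target, iters=5):
    """1D analogue of incremental isotropic remeshing: split segments
    longer than 4/3 * target at their midpoint, then collapse segments
    shorter than 4/5 * target by dropping an interior endpoint."""
    for _ in range(iters):
        # split pass: insert midpoints into overly long segments
        out = [pts[0]]
        for a, b in zip(pts, pts[1:]):
            if abs(b - a) > 4.0 / 3.0 * target:
                out.append((a + b) / 2.0)
            out.append(b)
        pts = out
        # collapse pass: drop interior points that are too close
        out = [pts[0]]
        for p in pts[1:-1]:
            if abs(p - out[-1]) >= 4.0 / 5.0 * target:
                out.append(p)
        out.append(pts[-1])
        pts = out
    return pts
```

Starting from a single segment of length 10 with target length 1, the loop converges to eight uniform segments of length 1.25 — within the split/collapse band around the target.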
... We then rigidly aligned all shapes of the sequence to avoid rotational and translational misalignments between time points. To achieve a high-quality surface approximation between time points, we used a remeshing technique based on ref. 46 to obtain a common triangulation that is suited for all keyframe shapes. We computed a target edge length field on each shape of the sequence (ref. 47). ...
... In this way, if a shape in the sequence requires a high mesh resolution in some area, this demand will be translated to the whole collection in the common triangulation. We then iteratively improved the common triangulation by a series of edge splits, collapses, flips and tangential Laplacian smoothing (ref. 46) and simultaneously considered the reference mesh embedded on all shapes of the sequence (ref. 48). Finally, to approximate a continuous morphological evolution, we uniformly interpolated 30 new time points between each pair of consecutive shapes in the sequence. ...
Article
Full-text available
Understanding organ morphogenesis requires a precise geometrical description of the tissues involved in the process. The high morphological variability in mammalian embryos hinders the quantitative analysis of organogenesis. In particular, the study of early heart development in mammals remains a challenging problem due to imaging limitations and complexity. Here, we provide a complete morphological description of mammalian heart tube formation based on detailed imaging of a temporally dense collection of mouse embryonic hearts. We develop strategies for morphometric staging and quantification of local morphological variations between specimens. We identify hot spots of regionalized variability and identify Nodal-controlled left–right asymmetry of the inflow tracts as the earliest signs of organ left–right asymmetry in the mammalian embryo. Finally, we generate a three-dimensional+t digital model that allows co-representation of data from different sources and provides a framework for the computer modeling of heart tube formation. Using high-resolution confocal images and computational surface mapping, Esteban et al. provide a detailed pseudodynamic atlas of early heart tube development (E7.5–E8.5), develop a morphometric staging system based on landmark curves and distances on the surface of the tissues, and identify parameters that can be used for precise embryo staging across different labs. This morphometric analysis reveals early signs of left–right asymmetry, before the cardiac looping stage, which is regulated by the Nodal signaling pathway.
... For each point, the neighbor structure has been improved by the efficient intrinsic control. An isotropic remeshing [57] is proposed to optimize Equation 6 on meshes. It can be regarded as a pure geometric update process. ...
... The convergence reports (A: θ(t) and B: Q(t)) of different mesh reconstruction methods (ISO: isotropic remeshing [57]). ...
Article
Full-text available
With the rapid development of 3D scanning technology, 3D point cloud based research and applications are becoming more popular. However, major difficulties still exist that affect the performance of point cloud utilization. Such difficulties include the lack of local adjacency information, non-uniform point density, and control of point numbers. In this paper, we propose a two-step intrinsic and isotropic (I&I) resampling framework to address these three major difficulties. The efficient intrinsic control provides geodesic measurement for a point cloud to improve local region detection and avoids redundant geodesic calculation. Then the geometrically-optimized resampling uses a geometric update process to optimize a point cloud into an isotropic or adaptively-isotropic one. The point cloud density can be adjusted to be globally uniform (isotropic) or locally uniform with geometric feature keeping (adaptively isotropic). The point cloud number can be controlled based on application requirements or user specification. Experiments show that our point cloud resampling framework achieves outstanding performance in different applications: point cloud simplification, mesh reconstruction and shape registration. We provide the implementation codes of our resampling method at https://github.com/vvvwo/II-resampling.
... Garland and Heckbert [3] also proposed a quadric error metrics (QEM) simplification method, which is also regarded as the general model simplification algorithm. As for optimization algorithms, Botsch and Kobbelt [4] first introduced the Laplace-Beltrami (LB) operator to improve the quality of irregular triangular meshes. Based on the LB operator, they were able to construct area-equalizing triangulations with a vertex smoothing operation and optimize the original mesh, which is regarded as the basic optimization algorithm. ...
... In addition, we can find from the geometric feature parameter r that the algorithm proposed in this paper better preserves the geometric feature information of the mesh. In order to demonstrate the performance of the proposed method, the results of the classical area-equalizing triangulation method [4] and the Delaunay mesh construction algorithm [13] are computed on the same model and shown in Figure 9c and Figure 9d, respectively. The selected area-equalizing triangulation method is a basic algorithm for model meshing in the finite element method (FEM), and Liu's method [13] is a typical representative for the construction of Delaunay meshes. ...
Article
Full-text available
Triangular meshes play critical roles in many applications, such as numerical simulation and additive manufacturing. However, the triangular meshes transformed from computer-aided design models using common algorithms may have many undesirable narrow triangles, which tend to affect downstream applications. In this paper, we propose two algorithms for Delaunay mesh construction and simplification to improve the quality of triangular meshes. Two improved mesh operations for inserting and collapsing vertices, based on the principle of minimum volume destruction, were designed. The improved vertex inserting operation is able to modify the local mesh so that it conforms to the local Delaunay property. The improved vertex collapsing operation can simplify the original mesh while maintaining the local Delaunay property. The results of visualized rendering and thermal diffusion simulations verify the improvement of the proposed algorithms in terms of the quantity and quality of the meshes.
... The generation of this graph happens by refining the object's surface, defining a new triangulated structure of lower resolution, but which faithfully describes the original surface. In the proposed system, the Incremental Triangle-based Isotropic Remeshing Algorithm [75] was implemented to generate the deformation graph. This algorithm performs simple operations incrementally on the surface edges and vertices; based on a predefined edge size, splitting, collapsing, inversion, as well as Laplacian smoothing (tangential projection) operations are performed. ...
Article
Full-text available
Technology has been promoting a great transformation in farming. The introduction of robotics, the use of sensors in the field, and advances in computer vision allow new systems to be developed to assist processes of the crop's life cycle monitoring, such as phenotyping. This work presents, for what we believe to be the first time, a system capable of generating 3D models of non-rigid corn plants, which can be used as a tool in the phenotyping process. The system is composed of two modules: a terrestrial acquisition module and a processing module. The terrestrial acquisition module is composed of a robot, equipped with an RGB-D camera and three sets of temperature, humidity, and luminosity sensors, that collects data in the field. The processing module conducts the non-rigid 3D plant reconstruction and merges the sensor data into these models. The work presented here also shows a novel technique for background removal in depth images, as well as efficient techniques for processing these images and the sensor data. Experiments have shown that, from the models generated and the data collected, plant structural measurements can be performed accurately and the plant's environment can be mapped, allowing the plant's health to be evaluated and providing greater crop efficiency.
... The initial reference triangulation (transferred from the first shape to the rest of the sequence) does not provide a high-quality surface approximation at every stage of the morphological process. To this end, we employed a remeshing technique based on (Botsch and Kobbelt, 2004) to obtain a common triangulation that is well suited for all keyframe shapes. We computed a target edge length field on each shape of the sequence (Dunyach et al., 2013). ...
Preprint
Full-text available
Understanding organ morphogenesis requires a precise geometrical description of the tissues involved in the process. In highly regulative embryos, like those of mammals, morphological variability hinders the quantitative analysis of morphogenesis. In particular, the study of early heart development in mammals remains a challenging problem, due to imaging limitations and innate complexity. Around embryonic day 7.5 (E7.5), the cardiac crescent folds in an intricate and coordinated manner to produce a pumping linear heart tube at E8.25, followed by heart looping at E8.5. In this work we provide a complete morphological description of this process based on detailed imaging of a temporally dense collection of embryonic heart morphologies. We apply new approaches for morphometric staging and quantification of local morphological variations between specimens at the same stage. We identify hot spots of regionalized variability and identify left-right asymmetry in the inflow region starting at the late cardiac crescent stage, which represents the earliest signs of organ left-right asymmetry in the mammalian embryo. Finally, we generate a 3D+t digital model that provides a framework suitable for co-representation of data from different sources and for the computer modelling of the process. Summary statement: We provide the first complete atlas for morphometric analysis and visualization of heart tube morphogenesis, reporting morphological variability and early emergence of left-right asymmetry patterns.
... We therefore also generated a second augmented dataset by remeshing all meshes in AUGMENTED_Best, creating the AUGMENTED_Rem query dataset. In particular, we have used the remeshing algorithm proposed by Botsch et al. [9], as it works on non-watertight meshes, which has been made available as part of the PMP library [49]. An example of the effect of this remeshing step can be seen in Figure 9. ...
Preprint
Full-text available
A complete pipeline is presented for accurate and efficient partial 3D object retrieval based on Quick Intersection Count Change Image (QUICCI) binary local descriptors and a novel indexing tree. It is shown how a modification to the QUICCI query descriptor makes it ideal for partial retrieval. An indexing structure called Dissimilarity Tree is proposed which can significantly accelerate searching the large space of local descriptors; this is applicable to QUICCI and other binary descriptors. The index exploits the distribution of bits within descriptors for efficient retrieval. The retrieval pipeline is tested on the artificial part of SHREC'16 dataset with near-ideal retrieval results.
... The remeshing and optimization strategies are adopted from (Botsch and Kobbelt, 2004) to obtain uniform triangular meshes with a target length of 2.45 Å. ...
Article
Full-text available
Carbon nanotube based multi-terminal junction configurations are of great interest because of the potential aerospace and electronic applications. A multi-terminal carbon nanotube junction has more than one carbon nanotube meeting at a point to create a 2D or 3D structure. Accurate atomistic models of such junctions are essential for characterizing their thermal, mechanical and electronic properties via computational studies. In this work, computational methodologies that use innovative Computer-Aided Design (CAD) based optimization strategies and remeshing techniques are presented for generating such topologically reliable and accurate models of complex multi-terminal junctions (called 3-, 4-, and 6-junctions). This is followed by the prediction of structure-property relationships via study of thermal conductivity and mechanical strength using molecular dynamics simulations. We observed high degradation in the thermal and mechanical properties of the junctions compared to pristine structures, which is attributed to the high concentration of non-hexagonal defects in the junction. Junctions with fewer defects have better thermal transport capabilities and higher mechanical strengths, suggesting that controlling the number of defects can significantly improve inherent features of the nanostructures.
... For one thing, unlike Yu et al. [2021], our preconditioning strategy cannot easily accommodate dense constraints (e.g., preservation of each triangle area), which would require a prohibitive number of iterative solves. Here one can instead use a stiff penalty; revisiting the multigrid approach via hierarchical coarsening [Botsch and Kobbelt 2004; Shi et al. 2006] may also prove fruitful. Our approximation of tangent-point energy becomes inaccurate in situations of very tight contact (à la Sections 9.2.1 and 9.2.2), since we effectively have few quadrature points per unit surface area; adding additional quadrature points (or adaptive refinement) to elements in near-contact may help to achieve tighter fits. ...
Preprint
Functionals that penalize bending or stretching of a surface play a key role in geometric and scientific computing, but to date have ignored a very basic requirement: in many situations, surfaces must not pass through themselves or each other. This paper develops a numerical framework for optimization of surface geometry while avoiding (self-)collision. The starting point is the tangent-point energy, which effectively pushes apart pairs of points that are close in space but distant along the surface. We develop a discretization of this energy for triangle meshes, and introduce a novel acceleration scheme based on a fractional Sobolev inner product. In contrast to similar schemes developed for curves, we avoid the complexity of building a multiresolution mesh hierarchy by decomposing our preconditioner into two ordinary Poisson equations, plus forward application of a fractional differential operator. We further accelerate this scheme via hierarchical approximation, and describe how to incorporate a variety of constraints (on area, volume, etc.). Finally, we explore how this machinery might be applied to problems in mathematical visualization, geometric modeling, and geometry processing.
... In contrast, our mesh vertices are linked to the individual, fixed-size sets of nearest samples, easily updated with sets from their 1-ring neighbor vertices, respectively. Mesh integrity in CE is, as in CF, maintained with dedicated global remeshing [8]. Further, CE is invariant to the point cloud orientation and thus relies solely on the distance fields. ...
Article
Full-text available
Inspired by the ability of water to assimilate any shape, if being poured into it, regardless if flat, round, sharp, or pointy, we present a novel, high-quality meshing method. Our algorithm creates a triangulated mesh, which automatically refines where necessary and accurately aligns to any target, given as mesh, point cloud, or volumetric function. Our core optimization iterates over steps for mesh uniformity, point cloud projection, and mesh topology corrections, always guaranteeing mesh integrity and ϵ-close surface reconstructions. In contrast with similar approaches, our simple algorithm operates on an individual vertex basis. This allows for automated and seamless transitions between the optimization phases for rough shape approximation and fine detail reconstruction. Therefore, our proposed algorithm equals established techniques in terms of accuracy and robustness but supersedes them in terms of simplicity and better feature reconstruction, all controlled by a single parameter, the intended edge length. Due to the overall increased versatility of input scenarios and robustness of the assimilation, our technique furthermore generalizes multiple established approaches such as ballooning or shrink wrapping.
... The simplified meshes can be improved with typical post-processing like smoothing and edge flipping. We apply isotropic remeshing as described in [BK04]. First, we compute the average of the relative edge lengths r_e. ...
Thesis
Full-text available
... Hence, first of all, the function isotropic_remeshing from the PMP package is applied, which makes the triangulation close to regular by reconstructing some edges and vertices with maximal preservation of geometry. This function is a program realization of the algorithm proposed in [17]. It takes only two arguments: the number of iterations and the edge length toward which the algorithm will drive the edge lengths of the triangulation. ...
... In such a case, we can perform a preprocessing step to subdivide the target region before the parameterization step. For this, we apply an incremental remeshing technique [27][28][29] to the target region. As shown in Figure 8c, incremental remeshing increases the resolution of the target region to be similar to that of the source region and improves the result of transferring the shape details. ...
Article
Full-text available
A shape detail transfer is the process of extracting the geometric details of a source region and transferring it onto a target region. In this paper, we present a simple and effective method, called GeoStamp, for transferring shape details using a Poisson equation. First, the mean curvature field on a source region is computed by using the Laplace–Beltrami operator and is defined as the shape details of the source region. Subsequently, the source and target regions are parameterized on a common 2D domain, and a mean curvature field on the target region is interpolated by the correspondence between two regions. Finally, we solve the Poisson equation using the interpolated mean curvature field and the Laplacian matrix of the target region. Consequently, the mean curvature field of the target region is replaced with that of the source region, which results in the transfer of shape details from the source region to the target region. We demonstrate the effectiveness of our technique by showing several examples and also show that our method is quite useful for adding shape details to a surface patch filling a hole in a triangular mesh.
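The Poisson step described in the GeoStamp abstract — solving for a patch whose Laplacian matches a borrowed curvature field — can be illustrated with a toy 1D graph Poisson problem. This is a hedged sketch, not the paper's implementation: it uses a uniform 1D Laplacian with Dirichlet endpoint values folded into the right-hand side, and the function name is hypothetical.

```python
import numpy as np

def poisson_patch(boundary, divergence):
    """Solve L x = b on a path graph with fixed endpoints — a toy
    analogue of reconstructing a target patch whose Laplacian matches
    a curvature field transferred from the source region."""
    n = len(divergence)
    # 1D uniform Laplacian (tridiagonal: 2 on diagonal, -1 off-diagonal)
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.asarray(divergence, float).copy()
    b[0] += boundary[0]     # fold Dirichlet boundary values into rhs
    b[-1] += boundary[1]
    return np.linalg.solve(L, b)
```

With a zero right-hand side the solution is simply the harmonic (linear) interpolant of the boundary values; a nonzero "curvature" field bends the patch away from that baseline, which is the essence of the detail transfer.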
... Therefore, one of our future endeavors is to perform the subdivision processing of the 3D model sequentially according to the viewpoint direction of the camera. Currently, we are studying the application of the local partitioning method of Botsch et al. [2004]. ...
Conference Paper
Full-text available
We present a method to add distorted perspective effects for scenes with a forward camera dolly (the camera moves in the depth direction). Our target scene is a first-person view of the main character traversing a long path. A distorted perspective has been applied to such scenes in hand-drawn animation films to create more dynamic and dramatic motions. Unfortunately, it is difficult to create a perspective for the camera direction in 3D animation. Therefore, we created an interactive tool to design cartoon-like perspectives for 3D computer-generated animations. Users can control the affine transformation, including translation, rotation, and scaling, in the depth direction of the camera coordinate system on a 2D screen. We implemented the proposed deformer on a vertex shader to ensure the real-time performance of our system.
... As the ribs we use are relatively simple, the meshing result are relatively uniform and no undersampled regions are observed. The remeshing technique [63] can be utilized if the given geometries are complicated. ...
Article
In this paper, we propose a topology optimization method for aircraft fuel tank structural design to reduce the fuel sloshing effect, which is beneficial for aircraft fuel gauge accuracy and flight safety. The method is applied to optimize the hole layouts in the tank rib to reduce fuel sloshing time, increase stiffness, and guarantee lightweight requirements. Specifically, we introduce a hybrid fluid–solid particle based simulator, modeling the impact and elastic forces between fluid (fuel) and solid (ribs, tank wall), to analyze violent fuel sloshing inside complicated internal tanks caused by large-angle maneuvers and accelerations of an aircraft. Meanwhile, an effective optimizer is built for the rib design, in which the explicitly controlled hole layouts are projected onto the modeled solid particles, such that the process of finite element meshing and remeshing is no longer needed during each optimization iteration. In this way, our proposed method is able to work with complicated aircraft fuel tanks under violent fuel sloshing conditions. We then use a real aircraft wing tank as a design case to validate the proposed framework. The result shows that with the generated layout of rib holes, the fuel sloshing time is shortened by 46%, the maximal fuel gauge error is reduced by 14%, and the maximal center of gravity (CG) shift is reduced by 0.15 m.
... Béchet et al. [2] combined the Delaunay criterion and edge splitting to remesh surface meshes. Wang et al. [9] and Botsch et al. [10] combined edge splitting, edge swapping, edge collapsing, and vertex relocating to remesh surface meshes. Although these methods are efficient, they all perform locally to change and optimize the original meshes. ...
Article
Triangular meshes have been a prevalent form of 3D model representation in various areas ranging from modeling to finite element analysis due to their simplicity and flexibility. In this paper, we present a triangular mesh remeshing method based on a sphere packing method and a node insertion/deletion method for surface meshes. First, a new set of nodes is generated on the surface mesh via a sphere packing method and added to the original surface mesh. Then, original nodes are deleted through some basic operations. Finally, the mesh is optimized by edge flipping. To regenerate an adaptive mesh, we consider some geometric features to calculate a size field and record and smooth it with an octree background grid. The proposed method remeshes the surface mesh without projection of local areas, intersection of fronts, Lloyd relaxation, or other complicated calculations, and it can generate a high-quality mesh without dependence on the quality of the original mesh, which makes the method efficient and effective.
... For this task, we set λ reg = 0.001 and λ dir = 0.2. Finally, for DASM+R and DALS we post process the results using five iterations of Botsch-Kobbelt remeshing [5]. ...
Preprint
Full-text available
Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data. When using a deep-learning based shape representation, this often involves learning a latent representation, which can be either in the form of a single global vector or of multiple local ones. The latter allows more flexibility but is prone to overfitting. In this paper, we advocate a hybrid approach representing shapes in terms of 3D meshes with a separate latent vector at each vertex. During training the latent vectors are constrained to have the same value, which avoids overfitting. For inference, the latent vectors are updated independently while imposing spatial regularization constraints. We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks.
... In fact, as the mesh scaffold is created based on the SDF from teacher NeuS, we can handily guarantee a uniform distribution of vertices with off-the-shelf mesh regularization algorithms (e.g., isotropic remeshing by Botsch et al. [3]). ...
Preprint
Full-text available
Very recently, neural implicit rendering techniques have rapidly evolved and shown great advantages in novel view synthesis and 3D scene reconstruction. However, existing neural rendering methods for editing purposes offer limited functionality, e.g., rigid transformation, or are not applicable for fine-grained editing of general objects from daily life. In this paper, we present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices, which facilitates a set of editing functionalities, including mesh-guided geometry editing, designated texture editing with texture swapping, filling and painting operations. To this end, we develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation, distillation and fine-tuning mechanisms to ensure a steady convergence, and a spatial-aware optimization strategy to realize precise texture editing. Extensive experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability. Code is available on the project webpage: https://zju3dv.github.io/neumesh/.
... This operation produces a new triangular mesh with an adaptive vertex distribution, isotropic sampling and almost equilateral triangles. The solution relies on the extension by Botsch and Kobbelt [100] of the technique described by Dunyach et al. [101]. The algorithm takes a target edge length as an input and then repeatedly splits long edges, collapses short edges and relocates vertices until all edges approximately reach the desired target length. ...
Article
Full-text available
Nowadays digital replicas of artefacts belonging to the Cultural Heritage (CH) are one of the most promising innovations for museums exhibitions, since they foster new forms of interaction with collections, at different scales. However, practical digitization is still a complex task dedicated to specialized operators. Due to these premises, this paper introduces a novel approach to support non-experts working in museums with robust, easy-to-use workflows based on low-cost widespread devices, aimed at the study, classification, preservation, communication and restoration of CH artefacts. The proposed methodology introduces an automated combination of acquisition, based on mobile equipment and visualization, based on Real-Time Rendering. After the description of devices used along the workflow, the paper focuses on image pre-processing and geometry processing techniques adopted to generate accurate 3D models from photographs. Assessment criteria for the developed process evaluation are illustrated. Tests of the methodology on some effective museum case studies are presented and discussed.
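The vertex relocation step mentioned in the remeshing passages above — moving vertices toward uniformity while preserving the shape — can be sketched as tangential smoothing. This is a minimal illustration assuming per-vertex normals and adjacency are given, not the cited papers' exact implementation; the function name is hypothetical.

```python
import numpy as np

def tangential_smooth(V, neighbors, normals, lam=1.0):
    """One Jacobi-style pass of tangential relaxation: pull each vertex
    toward the centroid of its neighbors, keeping only the displacement
    component tangential to the vertex normal so the shape is preserved."""
    out = V.copy()
    for i, nb in enumerate(neighbors):
        c = V[nb].mean(axis=0)        # centroid of 1-ring neighbors
        d = lam * (c - V[i])          # full umbrella displacement
        n = normals[i]
        d -= np.dot(d, n) * n         # project out the normal component
        out[i] = V[i] + d
    return out
```

On a planar patch with normals along z, the pass equalizes the in-plane vertex spacing while leaving every vertex on the plane, which is the intended "relocate without changing geometry" behavior.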
Article
State of the art quadrangulation methods are able to reliably and robustly convert triangle meshes into quad meshes. Most of these methods rely on a dense direction field that is used to align a parametrization from which a quad mesh can be extracted. In this context, the aforementioned direction field is of particular importance, as it plays a key role in determining the structure of the generated quad mesh. If there are no user-provided directions available, the direction field is usually interpolated from a subset of principal curvature directions. To this end, a number of heuristics that aim to identify significant surface regions have been proposed. Unfortunately, the resulting fields often fail to capture the structure found in meshes created by human experts. This is due to the fact that experienced designers can leverage their domain knowledge in order to optimize a mesh for a specific application. In the context of physics simulation, for example, a designer might prefer an alignment and local refinement that facilitates a more accurate numerical simulation. Similarly, a character artist may prefer an alignment that makes the resulting mesh easier to animate. Crucially, this higher level domain knowledge cannot be easily extracted from local curvature information alone. Motivated by this issue, we propose a data-driven approach to the computation of direction fields that allows us to mimic the structure found in existing meshes, which could originate from human experts or other sources. More specifically, we make use of a neural network that aggregates global and local shape information in order to compute a direction field that can be used to guide a parametrization-based quad meshing method. Our approach is a first step towards addressing this challenging problem with a fully automatic learning-based method. 
We show that compared to classical techniques our data-driven approach combined with a robust model-driven method, is able to produce results that more closely exhibit the ground truth structure of a synthetic dataset (i.e. a manually designed quad mesh template fitted to a variety of human body types in a set of different poses).
Article
We present a robust method to generate high-quality high-order tetrahedral meshes with bounded approximation errors and low mesh complexity. The success of our method relies on two key components. The first is a set of three novel local operations that robustly modify the topology of the high-order tetrahedral mesh while avoiding invalid (flipped or degenerate) elements. In practice, our meshing algorithm follows the edge-based remeshing paradigm, iteratively applying these local topological operations together with a geometric optimization operation to improve mesh quality. The second is a new containment check procedure that robustly judges whether the approximation error between the input mesh and the high-order mesh exceeds the user-specified bound. If an operation would violate the error-bound constraint, we reject it to guarantee a bounded approximation error. In addition, the number of tetrahedra in the high-order mesh is reduced by progressively increasing the target edge length in the edge-based remeshing algorithm. Extensive experimental results demonstrate the capability and feasibility of our method. Compared to other state-of-the-art methods, our method achieves higher robustness and quality.
Article
A complete pipeline is presented for accurate and efficient partial 3D object retrieval based on Quick Intersection Count Change Image (QUICCI) binary local descriptors and a novel indexing tree. It is shown how a modification to the QUICCI query descriptor makes it ideal for partial retrieval. An indexing structure called Dissimilarity Tree is proposed which can significantly accelerate searching the large space of local descriptors; this is applicable to QUICCI and other binary descriptors. The index exploits the distribution of bits within descriptors for efficient retrieval. The retrieval pipeline is tested on the artificial part of SHREC’16 dataset with near-ideal retrieval results.
Article
Direct triangular surface remeshing algorithms work directly on surface meshes without involving any complex parameterization techniques. The typical goals of surface remeshing are to find a mesh that is a faithful approximation of the input, maintaining geometric fidelity between input and output meshes; to achieve a lower bound on the minimal angles for high element quality; and to maintain vertex regularity and preserve the sharp features of the input mesh. The goal of each surface remeshing algorithm varies with the requirements of the end application, and thus achieving a balance among all common goals is a challenging task in remeshing. In this paper, a survey of numerous direct surface remeshing algorithms developed over the past decade is presented. A framework of direct surface remeshing is defined and the various phases of the framework are discussed in detail to give new insights into both existing methods and open issues. A detailed analysis of how significant challenges have already been overcome, together with promising perspectives of various methods, is presented.
Article
In this paper, we propose a phase-field model to partition a curved surface into path-connected segments with minimal boundary length. Phase-fields offer a powerful tool to represent diffuse interfaces with controlled width and to optimize them in a variational framework. We demonstrate how the multiplicative combination of phase-field functions can be used to effectively compute a hierarchical partition of unity. This induces an associated hierarchy of atlases, whose charts naturally overlap and thus are well-suited for applications such as texture mapping. Furthermore, we obtain distortion minimizing segmentations via a PDE-constraint optimization approach where the phase-field model allows direct use of Lagrangian calculus. Following Sharp and Crane (2018), the Yamabe equation, which allows computing the distortion induced by segment flattening, is considered as the constraint. This way, we obtain end-to-end diffuse formulations of variational problems in surface segmentation that are straightforward to treat computationally. Various examples illustrate the flexibility and robustness of this approach.
Article
We propose a novel method to model and fabricate shapes using a small set of specified discrete equivalence classes of triangles. The core of our modeling technique is a fabrication-error-driven remeshing algorithm. Given a triangle and a template triangle, which are coplanar and have one-to-one corresponding vertices, we define their similarity error, from a manufacturing point of view, as the minimum over rigid transformations of the maximum of the three distances between corresponding vertex pairs. To compute the similarity error, we convert it into an easy-to-compute form. A greedy remeshing method is then developed to optimize the topology and geometry of the input mesh to minimize the fabrication error, defined as the maximum similarity error over all triangles. In addition, constraints are enforced to ensure the similarity between input and output shapes and the smoothness of the resulting shapes. Since the fabrication error is considered during the modeling process, the fabrication process is easy to carry out. To assist users in performing fabrication manually with common materials and tools, we present a straightforward manufacturing solution. The feasibility and practicability of our method are demonstrated over various examples, including seven physically manufactured models using only nine template triangles.
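The min-max similarity error between two triangles can be approximated, in 2D, by aligning centroids and scanning rotation angles. This brute-force sketch is our own illustration and only an upper bound on the true minimizer, not the authors' easy-to-compute form:

```python
import math

def rotate(p, theta):
    """Rotate a 2D point about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def similarity_error(tri, template, steps=3600):
    """Approximate min-max vertex distance between a triangle and a
    template under rigid motion: align centroids, then scan rotation
    angles (a brute-force stand-in for an exact computation)."""
    cx = sum(p[0] for p in tri) / 3.0
    cy = sum(p[1] for p in tri) / 3.0
    tx = sum(p[0] for p in template) / 3.0
    ty = sum(p[1] for p in template) / 3.0
    a = [(p[0] - cx, p[1] - cy) for p in tri]
    b = [(p[0] - tx, p[1] - ty) for p in template]
    best = math.inf
    for k in range(steps):
        theta = 2.0 * math.pi * k / steps
        err = max(math.dist(rotate(p, theta), q) for p, q in zip(a, b))
        best = min(best, err)
    return best

t1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
t2 = [(2.0, 2.0), (2.0, 3.0), (1.0, 2.0)]   # t1 rotated by 90 degrees, then translated
e = similarity_error(t1, t2)
```

For congruent triangles the approximate error drops to numerical zero; for a scaled copy it stays bounded away from zero, which is the signal the remeshing objective penalizes.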
Article
We propose a simple yet effective method to perform surface remeshing with hard constraints, such as bounding approximation errors and ensuring Delaunay conditions. The remeshing is formulated as a constrained optimization problem, where the variables contain the mesh connectivity and the mesh geometry. To solve it effectively, we adopt traditional local operations, including edge split, edge collapse, edge flip, and vertex relocation, to update the variables. Central to our method is an evolutionary vertex optimization algorithm, which is derivative‐free and robust. The feasibility and practicability of our method are demonstrated in two applications, including error‐bounded Delaunay mesh simplification and error‐bounded angle improvement with a given number of vertices, over many models. Compared to state‐of‐the‐art methods, our method achieves higher remeshing quality.
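The derivative-free vertex relocation idea can be sketched with a (1+1) random search that maximizes the minimal angle of a 2D triangle fan. This is a toy stand-in for the evolutionary vertex optimization described above; all names and constants are our assumptions.

```python
import math, random

def min_angle(v, ring):
    """Smallest corner angle (radians) over the fan triangles (v, ring[i], ring[i+1])."""
    best = math.inf
    for a, b in zip(ring, ring[1:] + ring[:1]):
        for p, q, r in ((v, a, b), (a, b, v), (b, v, a)):
            ux, uy = q[0] - p[0], q[1] - p[1]
            wx, wy = r[0] - p[0], r[1] - p[1]
            nu, nw = math.hypot(ux, uy), math.hypot(wx, wy)
            if nu * nw < 1e-18:      # degenerate triangle: worst possible score
                return 0.0
            cosang = max(-1.0, min(1.0, (ux * wx + uy * wy) / (nu * nw)))
            best = min(best, math.acos(cosang))
    return best

def relocate(v, ring, steps=2000, sigma=0.2, seed=1):
    """(1+1) random search: accept a mutated position only if it improves."""
    rng = random.Random(seed)
    score = min_angle(v, ring)
    for _ in range(steps):
        cand = (v[0] + rng.gauss(0, sigma), v[1] + rng.gauss(0, sigma))
        s = min_angle(cand, ring)
        if s > score:
            v, score = cand, s
    return v, score

# regular hexagonal one-ring; the optimum vertex position is the center
ring = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
        for k in range(6)]
v, score = relocate((0.7, 0.4), ring)
```

Because every accepted mutation strictly improves the minimal angle, the search is monotone; on this hexagon it drives the vertex toward the center, where all fan triangles become equilateral.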
Article
Full-text available
Topology optimization techniques are typically performed on a design domain discretized with finite element meshes to generate efficient and innovative structural designs. The optimized structural topologies usually exhibit zig-zag boundaries formed from straight element edges. Existing techniques to obtain smooth structural topologies are limited. Most methods are computationally expensive, as they are performed iteratively with topology optimization. Other methods, such as post-processing methods, are applied after topology optimization, but they cannot guarantee to obtain equivalent structural designs, as the volume and geometric features may be changed. This study presents a new method that uses pre-built lookup tables to transform the shape of boundary elements obtained from topology optimization to create smoothed structural topologies. The new method is developed based on the combination of the bi-directional evolutionary structural optimization (BESO) technique and marching geometries to determine structural topologies and lookup tables, respectively. An additional step is used to ensure that the generated result meets a target volume. A variety of 2D and 3D examples are presented to demonstrate the effectiveness of the new method. This research shows that the new method is highly efficient, as it can be directly added to the last step of topology optimization with a low computational cost, and the volume and geometric features can be preserved in smoothed topologies. Finite element models are also created for original and smoothed structural topologies to show that the structural stiffness can be significantly enhanced after smoothing.
Article
We present a simple, fast, and smooth scheme to approximate Algebraic Point Set Surfaces using non-compact kernels, which is particularly suited for filtering and reconstructing point sets with large missing parts. Our key idea is to consider a moving level-of-detail of the input point set that is adaptive w.r.t. the evaluation location, just as the sample weights are output-sensitive in the traditional moving least squares scheme. We also introduce an adaptive progressive octree refinement scheme, driven by the resulting implicit surface, to properly capture the modeled geometry even far away from the input samples. Similarly to typical compactly supported approximations, our operator runs in logarithmic time while defining high-quality surfaces even on challenging inputs for which only global optimizations achieve reasonable results. We demonstrate our technique on a variety of point sets featuring geometric noise as well as large holes.
Article
We present a novel method for joint optimization and remeshing and apply it to inverse rendering. Rapid advances in differentiable rendering over the last years have paved the way for fast inverse rendering of complex scenes. But serious problems with gradient-based optimization of triangle meshes remain. Applying gradient steps to the vertices can lead to mesh defects, such as flipped triangles, crumpled regions, and self-intersections. Choosing a good vertex count is crucial for the optimization quality and performance but is usually done by hand. Moreover, meshes with fixed triangulation struggle to adapt to complex geometry. Our novel method tackles all these problems by applying an adaptive remeshing step in each iteration of the optimization loop. By immediately collapsing suspicious triangles, we avoid and heal mesh defects. We use a closed-loop-controlled, location-dependent edge length. We compare our solution to state-of-the-art methods and find that it is faster and more accurate: it produces finer meshes with fewer defects, requires less parameter tuning, and can reconstruct more complex objects.
Article
The parameterization of open and closed anatomical surfaces is of fundamental importance in many biomedical applications. Spherical harmonics, a set of basis functions defined on the unit sphere, are widely used for anatomical shape description. However, establishing a one-to-one correspondence between the object surface and the entire unit sphere may induce a large geometric distortion in case the shape of the surface is too different from a perfect sphere. In this work, we propose adaptive area-preserving parameterization methods for simply-connected open and closed surfaces with the target of the parameterization being a spherical cap. Our methods optimize the shape of the parameter domain along with the mapping from the object surface to the parameter domain. The object surface will be globally mapped to an optimal spherical cap region of the unit sphere in an area-preserving manner while also exhibiting low conformal distortion. We further develop a set of spherical harmonics-like basis functions defined over the adaptive spherical cap domain, which we call the adaptive harmonics. Experimental results show that the proposed parameterization methods outperform the existing methods for both open and closed anatomical surfaces in terms of area and angle distortion. Surface description of the object surfaces can be effectively achieved using a novel combination of the adaptive parameterization and the adaptive harmonics. Our work provides a novel way of mapping anatomical surfaces with improved accuracy and greater flexibility. More broadly, the idea of using an adaptive parameter domain allows easy handling of a wide range of biomedical shapes.
Article
Wire sculptures are objects sculpted with wires. In this article, we propose practical methods to create 3D virtual wire sculptural art from a given 3D model; in contrast, most previous 3D wire art results are reconstructed from input 2D wire art images. Artists usually design their wire art with a single wire if possible; if not, they try to create it with the least number of wires. To follow this general design trend, our proposed method generates 3D virtual wire art with the minimum number of continuous wire lines. To achieve this goal, we first adopt a greedy approach to extract important edges of a given 3D model. These extracted edges become the basis for the subsequent lines that roughly represent the shape of the input model. We then connect them with the minimum number of continuous wire lines, in an order obtained by optimally solving a traveling salesman problem with some constraints. Finally, we smooth the obtained 3D wires to approximate real wire results by artists. In addition, we provide a user interface that lets users control the winding of wires according to their design preference. We experimentally show and evaluate the created 3D virtual wire results. The proposed method runs efficiently and interactively, and the results are appealing and comparable to real 3D wire artwork.
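Ordering extracted edge segments into few continuous lines is a TSP-style problem; a greedy nearest-neighbor heuristic, which also tries both traversal directions of each segment, can serve as a first sketch (segment data and helper names are ours, not the article's solver):

```python
import math

def greedy_order(segments):
    """Order line segments so each one starts near the previous one's end:
    a nearest-neighbor heuristic for the TSP-style ordering. Segments may
    be traversed in either direction."""
    todo = list(segments)
    path = [todo.pop(0)]
    while todo:
        end = path[-1][1]
        # pick the segment (and orientation) whose start point is closest
        best = min(
            ((math.dist(end, s[i]), s if i == 0 else (s[1], s[0]), s)
             for s in todo for i in (0, 1)),
            key=lambda t: t[0],
        )
        path.append(best[1])
        todo.remove(best[2])
    return path

segs = [((0, 0), (1, 0)), ((5, 0), (6, 0)), ((1.1, 0), (2, 0))]
ordered = greedy_order(segs)
```

The heuristic gives only an approximate tour; an exact TSP solve with constraints, as in the article, would replace the `min` selection with a global optimization.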
Chapter
Head-related transfer functions (HRTFs) describe the spatial filtering of acoustic signals by a listener’s anatomy. With the increase of computational power, HRTFs are nowadays more and more used for the spatialised headphone playback of 3D sounds, thus enabling personalised binaural audio playback. HRTFs are traditionally measured acoustically and various measurement systems have been set up worldwide. Despite the trend to develop more user-friendly systems and as an alternative to the most expensive and rather elaborate measurements, HRTFs can also be numerically calculated, provided an accurate representation of the 3D geometry of head and ears exists. While under optimal conditions, it is possible to generate said 3D geometries even from 2D photos of a listener, the geometry acquisition is still a subject of research. In this chapter, we review the requirements and state-of-the-art methods for obtaining personalised HRTFs, focusing on the recent advances in numerical HRTF calculation.
Article
Harmonic measured foliations have demonstrated their usefulness for many geometric problems, including conformal parameterization and mesh quadrangulation. Due to the non-linearity of and hard constraints on their computation, existing iterative solvers converge very slowly and are thus impractical for large meshes. Though the multigrid approach is well known for speeding up iterative solvers, a general multigrid solver cannot be applied here in a plug-and-play fashion, because the constraints for computing the harmonic measured foliations would be broken. In this article, we design a novel multigrid solver for this problem, proposing specific multi-resolution mesh hierarchies and interpolation schemes that fulfill the requirements of the harmonic measured foliation. Experimental results show that our multigrid solver converges much faster than the original algorithm on meshes ranging from a few thousand to over one million edges, sometimes by more than a hundred times. This benefits scalable geometry processing using harmonic measured foliations.
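For readers unfamiliar with the multigrid machinery referenced above, a minimal two-grid correction cycle for a 1D Poisson problem illustrates the standard smoothing/restriction/correction pattern. This is our generic textbook example, unrelated to the foliation-specific hierarchies and interpolation schemes the article proposes.

```python
def jacobi(u, f, h, sweeps):
    """Weighted Jacobi smoothing for -u'' = f with zero boundary values."""
    n = len(u)
    for _ in range(sweeps):
        u = [0.0 if i in (0, n - 1) else
             0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1] + h * h * f[i])
             for i in range(n)]
    return u

def residual(u, f, h):
    n = len(u)
    return [0.0 if i in (0, n - 1) else
            f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
            for i in range(n)]

def two_grid(u, f, h):
    """Pre-smooth, solve the residual equation on a coarser grid (here by
    many Jacobi sweeps, standing in for a recursive call), then correct."""
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    rc = r[::2]                                   # restrict by injection
    ec = jacobi([0.0] * len(rc), rc, 2.0 * h, 500)
    e = [0.0] * len(u)                            # prolong: copy + interpolate
    for i, val in enumerate(ec):
        e[2 * i] = val
    for i in range(1, len(u) - 1, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, 3)                     # post-smooth

n = 65
h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
for _ in range(25):
    u = two_grid(u, f, h)
```

For -u'' = 1 on (0, 1) with zero boundary values, the discrete solution equals x(1-x)/2 at the nodes, so the midpoint value converges to 0.125.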
Article
Achieving a geometrically faithful, feature-preserving, highly regular remesh with low mesh complexity and high element quality is an ill-posed problem in surface remeshing research. Individual surface remeshing techniques differ based on their end goals while ignoring the other enhancements in the remesh. In this work, we present a surface remeshing framework that aims to balance the various crucial remeshing goals to enhance the remesh quality. In surface remeshing, the mesh quality comprises (i) mesh complexity, (ii) mesh element quality, (iii) vertex regularity, (iv) geometric fidelity, and (v) feature preservation. Our remeshing approach uses local edge operators to achieve mesh decimation, enhance element quality, and regularize the vertex valence of the remesh while conserving its features. During mesh decimation, we preserve features using an updated quadric error metric. The mesh element quality is enhanced by splitting maximal angles and raising minimal angles, using edge split and edge collapse respectively. We maintain dynamic priority queues of the maximal and minimal angles that require attention and improve them using local edge operators. High vertex regularity is achieved by valence optimization. The geometric faithfulness of the remesh to the original input mesh is maintained by constraining the bounds on the approximation error, computed by the two-sided Hausdorff distance. Combined with the other local edge operators, our algorithm can remesh low-quality mesh surfaces efficiently. The remeshes generated using our framework were compared with recent remeshing approaches using various performance metrics.
Article
Recent advances in the design and fabrication of personalized figurines have made the creation of high-quality figurines possible for ordinary users with the facilities of 3D printing techniques. Hair plays an important role in the realism of figurines. Existing hair reconstruction methods either demand specialized acquisition equipment or approximate the result very coarsely. Instead of creating hair for figurines with scanning devices, we present a novel surface reconstruction method that generates a 3D-printable hair model with geometric features from a strand-level hairstyle, thus converting existing digital hair databases into 3D-printable ones. Given a strand-level hair model, we filter the strands via bundle clustering, retain the main features, and reconstruct hair strands in two stages. First, our algorithm extracts the hair contour surface according to the structure of the strands and calculates the normal for each vertex. Next, a closed, manifold triangle mesh with geometric details and an embedded direction field is obtained with Poisson surface reconstruction. We obtain closed-manifold hairstyles without user interaction, benefiting personalized figurine fabrication. We verify the feasibility of our method on a wide range of examples.
Article
Inverse reconstruction from images is a central problem in many scientific and engineering disciplines. Recent progress on differentiable rendering has led to methods that can efficiently differentiate the full process of image formation with respect to millions of parameters to solve such problems via gradient-based optimization. At the same time, the availability of cheap derivatives does not necessarily make an inverse problem easy to solve. Mesh-based representations remain a particular source of irritation: an adverse gradient step involving vertex positions could turn parts of the mesh inside-out, introduce numerous local self-intersections, or lead to inadequate usage of the vertex budget due to distortion. These types of issues are often irrecoverable in the sense that subsequent optimization steps will further exacerbate them. In other words, the optimization lacks robustness due to an objective function with substantial non-convexity. Such robustness issues are commonly mitigated by imposing additional regularization, typically in the form of Laplacian energies that quantify and improve the smoothness of the current iterate. However, regularization introduces its own set of problems: solutions must now compromise between solving the problem and being smooth. Furthermore, gradient steps involving a Laplacian energy resemble Jacobi's iterative method for solving linear equations that is known for its exceptionally slow convergence. We propose a simple and practical alternative that casts differentiable rendering into the framework of preconditioned gradient descent. Our pre-conditioner biases gradient steps towards smooth solutions without requiring the final solution to be smooth. In contrast to Jacobi-style iteration, each gradient step propagates information among all variables, enabling convergence using fewer and larger steps. 
Our method is not restricted to meshes and can also accelerate the reconstruction of other representations, where smooth solutions are generally expected. We demonstrate its superior performance in the context of geometric optimization and texture reconstruction.
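The preconditioning idea can be sketched in 1D: stepping along (I + λL)⁻¹g instead of the raw gradient g spreads a localized gradient over neighboring vertices while preserving its total mass. The chain Laplacian and the Thomas tridiagonal solver below are our illustrative choices, not the paper's renderer or actual preconditioner implementation.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def precondition(grad, lam):
    """Apply (I + lam*L)^{-1} to a gradient on a 1D vertex chain, where L
    is the combinatorial chain Laplacian. Large lam biases steps toward
    smooth updates without forcing the final solution to be smooth."""
    n = len(grad)
    b = [1.0 + lam * (1 if i in (0, n - 1) else 2) for i in range(n)]
    a = [-lam] * n   # a[0] is unused
    c = [-lam] * n   # c[-1] is unused
    return thomas(a, b, c, grad)

# a spiky gradient concentrated at one vertex of a 9-vertex chain
g = [0.0] * 9
g[4] = 1.0
p = precondition(g, lam=10.0)
```

Because the Laplacian rows sum to zero, the preconditioned step conserves the total gradient mass while diffusing it to neighbors, which is exactly the bias toward smooth updates described above.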
Article
Functionals that penalize bending or stretching of a surface play a key role in geometric and scientific computing, but to date have ignored a very basic requirement: in many situations, surfaces must not pass through themselves or each other. This paper develops a numerical framework for optimization of surface geometry while avoiding (self-)collision. The starting point is the tangent-point energy, which effectively pushes apart pairs of points that are close in space but distant along the surface. We develop a discretization of this energy for triangle meshes, and introduce a novel acceleration scheme based on a fractional Sobolev inner product. In contrast to similar schemes developed for curves, we avoid the complexity of building a multiresolution mesh hierarchy by decomposing our preconditioner into two ordinary Poisson equations, plus forward application of a fractional differential operator. We further accelerate this scheme via hierarchical approximation, and describe how to incorporate a variety of constraints (on area, volume, etc.). Finally, we explore how this machinery might be applied to problems in mathematical visualization, geometric modeling, and geometry processing.
Article
Skeleton creation is an important phase in the character animation pipeline. However, handcrafting a skeleton takes extensive labor time and domain knowledge. Automatic skeletonization provides a solution; however, most current approaches are far from real-time and lack the flexibility to control the skeleton complexity. In this paper, we present an efficient skeletonization method, which can be seamlessly integrated into the sketch-based modeling process in real-time. The method contains three steps: (i) local sub-skeleton extraction; (ii) sub-skeleton connection; and (iii) global skeleton refinement. First, the local skeleton is extracted from the processed polygon stroke and forms a subpart along with the sub-mesh. Then, local sub-skeletons are connected according to the intersecting relationships and the modeling sequence of subparts. Last, a global refinement method is proposed to give users coarse-to-fine control over the connected skeleton. We demonstrate the effectiveness of our method on a variety of examples created by both novices and professionals.
Conference Paper
Full-text available
Head-related transfer functions (HRTFs) describe the direction-dependent free-field sound propagation from a point source to the listener's ears and are an important tool for audio in virtual and augmented reality. Traditionally, HRTFs are measured acoustically with the listener positioned in the center of a spherical loudspeaker array. Alternatively, they can be approximated by numerical simulation, for example by applying the boundary element method (BEM) to a piece-wise linear representation (a surface mesh) of the listener's head. Numerical approximation may be more economical, particularly in combination with methods for the synthesis of head geometry. To decrease the computation time of the BEM, it has been suggested to gradually decrease the resolution of the mesh with increasing distance from the ear for which the HRTF is approximated. We improve this approach by also considering the curvature of the geometry. The resulting graded meshes lead to faster simulation with the same or better accuracy in the HRTF compared to previous work. Our software is available at https://cg-tub.github.io/hrtf_mesh_grading.
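A graded sizing field of the kind described above can be sketched as a target edge length that grows with distance from the ear but is tightened wherever curvature is high. The function name and all constants below are illustrative assumptions, not the authors' actual grading criterion.

```python
def target_edge_length(dist_to_ear, curvature,
                       l_min=0.5, l_max=10.0, growth=0.3, kappa_scale=2.0):
    """Target edge length (mm) that grows linearly with distance from the
    ear (mm) but is capped by a curvature bound (curvature in 1/mm); the
    result is clamped to [l_min, l_max]."""
    graded = l_min + growth * dist_to_ear
    curvature_limit = kappa_scale / max(abs(curvature), 1e-9)
    return max(l_min, min(graded, curvature_limit, l_max))

near = target_edge_length(0.0, 0.01)        # at the ear: finest resolution
far_flat = target_edge_length(100.0, 0.01)  # far away, flat: coarsest
far_curved = target_edge_length(100.0, 5.0) # far away, highly curved: fine again
```

Feeding such a sizing field into an adaptive remesher yields meshes that are fine near the ear and in curved regions but coarse elsewhere, which is the source of the reported BEM speedup.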
Book
Full-text available
§1 Introductory Model Problem §2 General Two-Grid Method §3 General Multi-Grid Iteration §4 Nested Iteration Technique §5 Convergence of the Two-Grid Iteration §6 Convergence of the Multi-Grid Iteration §7 Fourier Analysis §8 Nonlinear Multi-Grid Methods §9 Singular Perturbation Problems §10 Elliptic Systems §11 Eigenvalue Problems and Singular Equations §12 Continuation Techniques §13 Extrapolation and Defect Correction Techniques §14 Local Techniques §15 The Multi-Grid Method of the Second Kind
Conference Paper
Full-text available
We present a new remeshing scheme based on the idea of improving mesh quality by a series of local modifications of the mesh geometry and connectivity. Our contribution to the family of local modification techniques is an area-based smoothing technique. Area-based smoothing allows the control of both triangle quality and vertex sampling over the mesh, as a function of some criteria, e.g. the mesh curvature. To perform local modifications of arbitrary-genus meshes we use dynamic patch-wise parameterization. The parameterization is constructed and updated on-the-fly as the algorithm progresses with local updates. As a post-processing stage, we introduce a new algorithm to improve the regularity of the mesh connectivity. The algorithm is able to create an unstructured mesh with a very small number of irregular vertices. Our remeshing scheme is robust, runs at interactive speeds and can be applied to arbitrarily complex meshes.
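One simple form of area-based smoothing relocates a vertex toward the area-weighted average of the barycenters of its incident triangles. The 2D fan below is our toy illustration of that idea, not the patch-wise parameterization pipeline of the paper.

```python
def tri_area(a, b, c):
    """Unsigned area of a 2D triangle."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def area_smooth(v, ring, damping=0.5):
    """Move v toward the area-weighted centroid of the barycenters of the
    fan triangles (v, ring[i], ring[i+1])."""
    wx = wy = wsum = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        w = tri_area(v, a, b)
        cx = (v[0] + a[0] + b[0]) / 3.0
        cy = (v[1] + a[1] + b[1]) / 3.0
        wx += w * cx
        wy += w * cy
        wsum += w
    if wsum == 0.0:
        return v
    gx, gy = wx / wsum, wy / wsum
    return (v[0] + damping * (gx - v[0]), v[1] + damping * (gy - v[1]))

ring = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
v = area_smooth((0.6, 0.2), ring)
```

Repeated application drives the vertex toward the position that equalizes the incident triangle areas, here the center of the symmetric one-ring.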
Book
Full-text available
We have divided this book into five main chapters. Chapter 1 gives the motivation for this book and the use of templates. Chapter 2 describes stationary and nonstationary iterative methods. In this chapter we present both historical development and state-of-the-art methods for solving some of the most challenging computational problems facing researchers. Chapter 3 focuses on preconditioners. Many iterative methods depend in part on preconditioners to improve performance and ensure fast convergence. Chapter 4 provides a glimpse of issues related to the use of iterative methods. This chapter, like the preceding, is especially recommended for the experienced user who wishes to have further guidelines for tailoring a specic code to a particular machine. It includes information on complex systems, stopping criteria, data storage formats, and parallelism. Chapter 5 includes overviews of related topics such as the close connection between the Lanczos algorithm and the Conjugate Gradient algorithm, block iterative methods, red/black orderings, domain decomposition methods, multigrid-likemethods, and rowprojection schemes. The Appendices contain information on how the templates and BLAS software can be obtained. A glossary of important terms used in the book is also provided.
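As an example of the nonstationary iterative methods covered in Chapter 2, here is a minimal conjugate gradient solver for symmetric positive definite systems; this is our pure-Python sketch of the textbook algorithm, relevant here because the remeshed base surface of the main paper yields exactly such symmetric systems.

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Conjugate gradients for a symmetric positive definite matrix A,
    given as a list of rows; tol bounds the squared residual norm."""
    n = len(b)
    x = [0.0] * n
    r = list(b)              # residual b - A x with x = 0
    p = list(r)              # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations; on this 2x2 system it recovers the solution (1/11, 7/11) after two steps.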
Article
Full-text available
Parameterization of unstructured surface meshes is of fundamental importance in many applications of digital geometry processing. Such parameterization approaches give rise to large and exceedingly ill-conditioned systems which are difficult or impossible to solve without the use of sophisticated multilevel preconditioning strategies. Since the underlying meshes are very fine to begin with, such multilevel preconditioners require mesh coarsening to build an appropriate hierarchy. In this paper we consider several strategies for the construction of hierarchies using ideas from mesh simplification algorithms used in the computer graphics literature. We introduce two novel hierarchy construction schemes and demonstrate their superior performance when used in conjunction with a multigrid preconditioner.
Conference Paper
Full-text available
This paper proposes a new method for isotropic remeshing of triangulated surface meshes. Given a triangulated surface mesh to be resampled and a user-specified density function defined over it, we first distribute the desired number of samples by generalizing error diffusion, commonly used in image halftoning, to work directly on mesh triangles and feature edges. We then use the resulting sampling as an initial configuration for building a weighted centroidal Voronoi tessellation in a conformal parameter space, where the specified density function is used for weighting. We finally create the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space. A precise control over the sampling is obtained through a flexible design of the density function, the latter being possibly low-pass filtered to obtain a smoother gradation. We demonstrate the versatility of our approach through various remeshing examples.
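The error-diffusion idea borrowed from image halftoning can be sketched in 1D: the quantization error of each cell is carried to the next, so sample counts follow the density on average. This is our toy analogue of the per-triangle diffusion in the paper, with illustrative names.

```python
def diffuse_samples(density, total_samples):
    """1D error diffusion: each cell receives round(ideal + carried error)
    samples and passes its rounding error on to the next cell, so the
    prescribed density is matched on average."""
    scale = total_samples / sum(density)
    counts = []
    err = 0.0
    for d in density:
        ideal = d * scale + err      # ideal count plus carried error
        k = max(0, round(ideal))     # quantize to a nonnegative integer
        err = ideal - k              # diffuse the rounding error forward
        counts.append(k)
    return counts

counts = diffuse_samples([1.0, 1.0, 4.0, 1.0, 1.0], 16)
```

The high-density cell receives proportionally more samples while the total budget is respected; on a mesh, the error would instead be diffused to neighboring triangles.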
Article
Full-text available
We consider the fitting of tensor product parametric spline surfaces to gridded data. The continuity of the surface is provided by the basis chosen. When tensor product splines are used with gridded data, the surface fitting problem decomposes into a sequence of curve fitting processes, making the computations particularly efficient. The use of a hierarchical representation for the surface adds further efficiency by adaptively decomposing the fitting process into subproblems involving only a portion of the data. Hierarchy also provides a means of storing the resulting surface in a compressed format. Our approach is compared to multiresolution analysis and the use of wavelets. In [9] an adaptive process was presented for fitting surface data with a geometrically continuous collection of rectangular Bézier patches. The adaptivity resulted from fitting a portion of the data with a patch, testing the fit for satisfaction within a given tolerance, and subdividing the patch if th...
Article
Full-text available
We present a new algorithm to compute stable discrete minimal surfaces bounded by a number of fixed or free boundary curves in R³, S³ and H³. The algorithm makes no restriction on the genus and can handle singular triangulations. Additionally, we present an algorithm that, starting from a discrete harmonic map, gives a conjugate harmonic map. This can be applied to the identity map on a minimal surface to produce its conjugate minimal surface, a procedure that often yields unstable solutions to a free boundary value problem for minimal surfaces. Symmetry properties of boundary curves are respected during conjugation.
Book
§1 Introductory Model Problem §2 General Two-Grid Method §3 General Multi-Grid Iteration §4 Nested Iteration Technique §5 Convergence of the Two-Grid Iteration §6 Convergence of the Multi-Grid Iteration §7 Fourier Analysis §8 Nonlinear Multi-Grid Methods §9 Singular Perturbation Problems §10 Elliptic Systems §11 Eigenvalue Problems and Singular Equations §12 Continuation Techniques §13 Extrapolation and Defect Correction Techniques §14 Local Techniques §15 The Multi-Grid Method of the Second Kind
Article
Multiresolution shape representation is a very effective way to decompose surface geometry into several levels of detail. Geometric modeling with such representations enables flexible modifications of the global shape while preserving the detail information. Many schemes for modeling with multiresolution decompositions based on splines, polygonal meshes and subdivision surfaces have been proposed recently. In this paper we modify the classical concept of multiresolution representation by no longer requiring a global hierarchical structure that links the different levels of detail. Instead we represent the detail information implicitly by the geometric difference between independent meshes. The detail function is evaluated by shooting rays in normal direction from one surface to the other without assuming a consistent tessellation. In the context of multiresolution shape deformation, we propose a dynamic mesh representation which adapts the connectivity during the modification in order to maintain a prescribed mesh quality. Combining the two techniques leads to an efficient mechanism which enables extreme deformations of the global shape while preventing the mesh from degenerating. During the deformation, the detail is reconstructed in a natural and robust way. The key to the intuitive detail preservation is a transformation map which associates points on the original and the modified geometry with minimum distortion. We show several examples which demonstrate the effectiveness and robustness of our approach including the editing of multiresolution models and models with texture.
Conference Paper
We describe a multiresolution representation for meshes based on subdivision, which is a natural extension of the existing patch-based surface representations. Combining subdivision and the smoothing algorithms of Taubin [26] allows us to construct a set of algorithms for interactive multiresolution editing of complex hierarchical meshes of arbitrary topology. The simplicity of the underlying algorithms for refinement and coarsification enables us to make them local and adaptive, thereby considerably improving their efficiency. We have built a scalable interactive multiresolution editing system based on such algorithms.
Conference Paper
Normal meshes are new fundamental surface descriptions inspired by differential geometry. A normal mesh is a multiresolution mesh where each level can be written as a normal offset from a coarser version. Hence the mesh can be stored with a single float per vertex. We present an algorithm to approximate any surface arbitrarily closely with a normal semi-regular mesh. Normal meshes can be useful in numerous applications such as compression, filtering, rendering, texturing, and modeling.
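The "single float per vertex" encoding can be sketched directly. The helper names below are illustrative, and the sketch assumes each fine vertex already lies on the line through its base vertex along the unit normal; constructing such a correspondence is the hard part of normal meshes and is omitted here:

```python
import numpy as np

def encode_normal_offsets(base_pts, base_normals, surface_pts):
    """Store each fine vertex as one scalar offset along its base
    normal (the normal-mesh idea). Assumes unit normals and that
    each surface point lies on its base point's normal line."""
    # row-wise dot product of (surface - base) with the normal
    return np.einsum('ij,ij->i', surface_pts - base_pts, base_normals)

def decode_normal_offsets(base_pts, base_normals, offsets):
    """Reconstruct fine vertices from base points and scalar offsets."""
    return base_pts + offsets[:, None] * base_normals

# Usage: a synthetic round trip with random base points and unit normals.
rng = np.random.default_rng(1)
base_pts = rng.standard_normal((10, 3))
normals = rng.standard_normal((10, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
offsets = rng.standard_normal(10)

surface = decode_normal_offsets(base_pts, normals, offsets)
recovered = encode_normal_offsets(base_pts, normals, surface)
```

The round trip recovers the offsets exactly, and the encoded detail is a single float per vertex, which is what makes the representation attractive for compression.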
Article
We propose a new representation for multiresolution models which uses volume elements enclosed between the different resolution levels to encode the detail information. Keeping these displacement volumes locally constant during a deformation of the base surface leads to a natural behaviour of the detail features. The corresponding reconstruction operator can be implemented efficiently by a hierarchical iterative relaxation scheme, providing close to interactive response times for moderately complex models. Based on this representation we implement a multiresolution editing tool for irregular polygon meshes that allows the designer to freely edit the base surface of a multiresolution model without having to care about self-intersections in the respective detailed surface. We demonstrate the effectiveness and robustness of the reconstruction by several examples with real-world data.
Article
Parameterization of discrete surfaces is a fundamental and widely-used operation in graphics, required, for instance, for texture mapping or remeshing. As 3D data becomes more and more detailed, there is an increased need for fast and robust techniques to automatically compute least-distorted parameterizations of large meshes. In this paper, we present new theoretical and practical results on the parameterization of triangulated surface patches. Given a few desirable properties such as rotation and translation invariance, we show that the only admissible parameterizations form a two-dimensional set and each parameterization in this set can be computed using a simple, sparse, linear system. Since these parameterizations minimize the distortion of different intrinsic measures of the original mesh, we call them Intrinsic Parameterizations. In addition to this partial theoretical analysis, we propose robust, efficient and tunable tools to obtain least-distorted parameterizations automatically. In particular, we give details on a novel, fast technique to provide an optimal mapping without fixing the boundary positions, thus providing a unique Natural Intrinsic Parameterization. Other techniques based on this parameterization family, designed to ease the rapid design of parameterizations, are also proposed.
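The "simple, sparse, linear system" can be illustrated with the most basic member of this family: a barycentric (Tutte-style) embedding with uniform weights. The function and mesh below are illustrative, a dense solve stands in for a sparse solver, and the paper's intrinsic weights would replace the uniform ones:

```python
import numpy as np

def tutte_embedding(n_vertices, edges, boundary):
    """Barycentric parameterization sketch: boundary vertices are
    pinned (Dirichlet rows), each interior vertex is placed at the
    average of its neighbors -- one linear solve per coordinate."""
    nbrs = {i: [] for i in range(n_vertices)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    A = np.zeros((n_vertices, n_vertices))
    b = np.zeros((n_vertices, 2))
    for i in range(n_vertices):
        if i in boundary:                 # pinned boundary vertex
            A[i, i] = 1.0
            b[i] = boundary[i]
        else:                             # uniform Laplacian row
            A[i, i] = len(nbrs[i])
            for j in nbrs[i]:
                A[i, j] = -1.0
    return np.linalg.solve(A, b)

# Usage: a 3x3 grid patch (vertices 0..8, row-major); the 8 rim
# vertices are pinned to a unit circle, vertex 4 is solved for.
edges = [(i, i + 1) for i in (0, 1, 3, 4, 6, 7)] + [(i, i + 3) for i in range(6)]
rim = [0, 1, 2, 5, 8, 7, 6, 3]            # rim in circular order
boundary = {v: (np.cos(2 * np.pi * k / 8), np.sin(2 * np.pi * k / 8))
            for k, v in enumerate(rim)}
uv = tutte_embedding(9, edges, boundary)
```

With a convex boundary and positive weights, such embeddings are injective; the intrinsic weights of the paper trade this uniform choice for ones that minimize distortion measures of the original mesh.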
Article
Shape optimization and surface fairing for polygon meshes have been active research areas for the last few years. Existing approaches either require the border of the surface to be fixed, or are only applicable to closed surfaces. In this paper, we propose a new approach, that computes natural boundaries. This makes it possible not only to smooth an existing geometry, but also to extrapolate its shape beyond the existing border. Our approach is based on a global parameterization of the surface and on a minimization of the squared curvatures, discretized on the edges of the surface. The so-constructed surface is an approximation of a minimal energy surface (MES). Using a global parameterization makes it possible to completely decouple the outer fairness (surface smoothness) from the inner fairness (mesh quality). In addition, the parameter space provides the user with a new means of controlling the shape of the surface. When used as a geometry filter, our approach computes a smoothed mesh that is discrete conformal to the original one. This allows smoothing textured meshes without introducing distortions.
Conference Paper
In this paper, we develop methods to rapidly remove rough features from irregularly triangulated data intended to portray a smooth surface. The main task is to remove undesirable noise and uneven edges while retaining desirable geometric features. The problem arises mainly when creating high-fidelity computer graphics objects using imperfectly-measured data from the real world. Our approach contains three novel features: an implicit integration method to achieve efficiency, stability, and large time-steps; a scale-dependent Laplacian operator to improve the diffusion process; and finally, a robust curvature flow operator that achieves a smoothing of the shape itself, distinct from any parameterization. Additional features of the algorithm include automatic exact volume preservation, and hard and soft constraints on the positions of the points in the mesh. We compare our method to previous operators and related algorithms, and prove that our curvature and Laplacian operators have several mathematically-desirable qualities that improve the appearance of the resulting surface. In consequence, the user can easily select the appropriate operator according to the desired type of fairing. Finally, we provide a series of examples to graphically and numerically demonstrate the quality of our results.
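The implicit-integration idea can be sketched in a single step: instead of an explicit update, solve (I + dt·λ·L) x' = x, which is unconditionally stable and so permits large time steps. The sketch below uses a uniform umbrella Laplacian on a closed polygon rather than the paper's scale-dependent and curvature operators; all names are illustrative:

```python
import numpy as np

def implicit_smooth(points, dt=1.0, lam=1.0):
    """One backward-Euler smoothing step on a closed polygon:
    solve (I + dt*lam*L) x' = x with the uniform graph Laplacian L
    of the cycle. Because the row sums of L are zero and L is
    symmetric, the centroid of the polygon is preserved exactly."""
    n = len(points)
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] = -1.0
        L[i, (i + 1) % n] = -1.0
    A = np.eye(n) + dt * lam * L
    return np.linalg.solve(A, np.asarray(points, dtype=float))

# Usage: a noisy circle; even a large dt only smooths, never blows up.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
noisy = np.stack([np.cos(t), np.sin(t)], axis=1) \
        + 0.05 * rng.standard_normal((64, 2))
smoothed = implicit_smooth(noisy, dt=10.0)
```

In the eigenbasis of L every frequency component is damped by 1/(1 + dt·λ·μ) ≤ 1, so roughness decreases monotonically; the paper additionally rescales the result to preserve volume, which this sketch omits.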
Article
Triangle meshes are a flexible and generally accepted boundary representation for complex geometric shapes. In addition to their geometric qualities or topological simplicity, intrinsic qualities such as the shape of the triangles, their distribution on the surface and the connectivity are essential for many algorithms working on them. In this paper we present a flexible and efficient remeshing framework that improves these intrinsic properties while keeping the mesh geometrically close to the original surface. We use a particle system approach and combine it with an incremental connectivity optimization process to trim the mesh towards the requirements imposed by the user. The particle system uniformly distributes the vertices on the mesh, whereas the connectivity optimization is done by means of Dynamic Connectivity Meshes, a combination of local topological operators that lead to a fairly regular connectivity. A dynamic skeleton ensures that our approach is able to preserve surface features, which are particularly important for the visual quality of the mesh. None of the algorithms requires a global parameterization or patch layouting in a preprocessing step but uses local parameterizations only. In particular we will sketch several application scenarios of our general framework and we will show how the users can adapt the involved algorithms in a way that the resulting remesh meets their personal requirements.
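A low-dimensional analogy of the local topological operators mentioned above is split and collapse on a closed polyline, driven by a target edge length. The thresholds 4/3 and 4/5 of the target are assumptions taken from common remeshing practice, and edge flips have no polyline counterpart:

```python
import math

def remesh_polyline(points, target, passes=5):
    """Split long and collapse short segments of a closed 2D polyline:
    a 1D analogy of incremental isotropic remeshing. Edges longer
    than 4/3 * target get a midpoint; a vertex whose edge to the
    previously kept vertex is shorter than 4/5 * target is dropped."""
    pts = list(points)
    for _ in range(passes):
        # split pass: insert midpoints into overly long edges
        out = []
        n = len(pts)
        for i in range(n):
            a, b = pts[i], pts[(i + 1) % n]
            out.append(a)
            if math.dist(a, b) > 4.0 / 3.0 * target:
                out.append(((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0))
        # collapse pass: merge overly short edges
        kept = [out[0]]
        for p in out[1:]:
            if math.dist(kept[-1], p) >= 0.8 * target:
                kept.append(p)
        pts = kept
    return pts

# Usage: an unevenly sampled unit square becomes near-uniform.
square = [(0.0, 0.0), (0.05, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
resampled = remesh_polyline(square, target=0.25)
```

On meshes the same split/collapse budget is complemented by edge flips to optimize vertex valences, which is the part the polyline analogy cannot show.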
Article
During the last years the concept of multi-resolution modeling has gained special attention in many fields of computer graphics and geometric modeling. In this paper we generalize powerful multiresolution techniques to arbitrary triangle meshes without requiring subdivision connectivity. Our major observation is that the hierarchy of nested spaces which is the structural core element of most multi-resolution algorithms can be replaced by the sequence of intermediate meshes emerging from the application of incremental mesh decimation. Performing such schemes with local frame coding of the detail coefficients already provides effective and efficient algorithms to extract multi-resolution information from unstructured meshes. In combination with discrete fairing techniques, i.e., the constrained minimization of discrete energy functionals, we obtain very fast mesh smoothing algorithms which are able to reduce noise from a geometrically specified frequency band in a multiresolution decomposition. Putting mesh hierarchies, local frame coding and multi-level smoothing together allows us to propose a flexible and intuitive paradigm for interactive detail-preserving mesh modification. We show examples generated by our mesh modeling tool implementation to demonstrate its functionality.
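The decompose/edit/reconstruct cycle at the heart of this paradigm can be sketched on a 1D periodic signal, with simple binomial smoothing standing in for incremental decimation and discrete fairing, and a plain global offset instead of local frame coding (all names illustrative):

```python
import numpy as np

def decompose(signal, passes=50):
    """Split a closed 1D signal into a smooth base and a detail
    vector: repeated [1/4, 1/2, 1/4] smoothing damps the high
    frequencies, and the detail is simply the residual."""
    s = np.asarray(signal, dtype=float)
    base = s.copy()
    for _ in range(passes):
        base = 0.5 * base + 0.25 * (np.roll(base, 1) + np.roll(base, -1))
    return base, s - base

# Editing workflow: deform the base globally, then add the detail back.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
signal = np.sin(x) + 0.1 * np.sin(20 * x)   # low frequency + detail
base, detail = decompose(signal)
edited_base = 2.0 * base                    # a global deformation
result = edited_base + detail               # detail-preserving edit
```

The decomposition is exact by construction (base + detail reproduces the input), so all approximation error lives in how plausibly the detail rides along with the edited base; local frame coding, as in the paper, is what makes that behaviour rotation-aware.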