
Constrained Elastic Surface Nets: Generating smooth surfaces from binary segmented data


Abstract and Figures

This paper describes a method for creating object surfaces from binary-segmented data that are free from aliasing and terracing artifacts.
[Figures: panels a), b), c); cube face labels right, left, bottom, top, front, back; labels A–D.]
... Nevertheless, mesh smoothing approaches that do not take into account the underlying data cannot ensure an accurate representation of the original volume. Consequently, approaches to smooth the meshes whilst constraining the mesh nodes to remain near the boundary surface defined in the segmented data have been proposed [1,2]. However, due to this latter constraint, the artifacts are not entirely removed. ...
... In this way, a first implicit function f1(x) is created to represent material 1. This function takes positive values in the region with label 1 and negative values outside. A second function f2(x) is then defined to represent the second material region, labelled 2. As a result, the boundaries B1,0 and B2,1 are represented by the zero level sets of the MPU functions f1(x) = 0 and f2(x) = 0, respectively. ...
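To make that construction concrete, the following is a minimal sketch (not the cited MPU implementation) of building one signed implicit function per material label from a segmented volume, so that each label boundary appears as a zero level set; a signed Euclidean distance transform stands in for the smooth MPU fit, and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def label_to_implicit(labels, material_id, voxel_size=1.0):
    """Signed function: positive inside `material_id`, negative outside."""
    inside = labels == material_id
    # Distance to the label boundary, signed by membership; a crude stand-in
    # for the smooth MPU fit used in the cited paper.
    d_in = ndimage.distance_transform_edt(inside, sampling=voxel_size)
    d_out = ndimage.distance_transform_edt(~inside, sampling=voxel_size)
    return np.where(inside, d_in, -d_out)

# Toy 3-label volume (0 = background, 1 and 2 = materials).
labels = np.zeros((32, 32, 32), dtype=np.uint8)
labels[4:20, 4:28, 4:28] = 1
labels[20:28, 4:28, 4:28] = 2

f1 = label_to_implicit(labels, 1)  # f1 = 0 on the boundary of material 1
f2 = label_to_implicit(labels, 2)  # f2 = 0 on the boundary of material 2
```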
Article
Thanks to advances in medical imaging technologies and numerical methods, patient‐specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. However, extracting smooth multi‐material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi‐region mesh extraction scheme. The surface reconstruction algorithm is based on multi‐level partition of unity implicit surfaces, which we extended to the multi‐material case. The mesh generation algorithm consists in a novel multi‐domain version of the marching tetrahedra. It generates multi‐region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi‐region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high‐quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations. Copyright © 2011 John Wiley & Sons, Ltd.
... Surface nets [22] is another method for triangle mesh generation. It also employs a 3D grid formed by cubes, but it generates one vertex per cube that intersects the surface, unlike Marching Cubes, which generates one vertex per edge intersected by the surface. ...
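As a hedged illustration of that contrast, the sketch below places one vertex per grid cell whose corner samples straddle the isovalue, initially at the cell centre, which is the starting point that surface-net relaxation would later refine; it is not the cited implementation.

```python
import numpy as np

def surface_net_vertices(volume, iso=0.0):
    """One vertex per cell of the cube grid whose 8 corner samples straddle `iso`."""
    nx, ny, nz = volume.shape
    verts = {}
    for i in range(nx - 1):
        for j in range(ny - 1):
            for k in range(nz - 1):
                corners = volume[i:i + 2, j:j + 2, k:k + 2]
                if corners.min() < iso <= corners.max():  # the surface crosses this cell
                    verts[(i, j, k)] = np.array([i + 0.5, j + 0.5, k + 0.5])
    return verts
```

Marching Cubes, by contrast, would emit a vertex on every cell edge where the isovalue is crossed.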
Chapter
The development of techniques and equipment for medical imaging has provided physicians with efficient, fast, and reliable resources for diagnostic tasks. Large volumes of three-dimensional (3D) data can be stored, analyzed, and visualized through non-invasive procedures, whose interpretation has led to advances in disease assessment, surgical planning, and treatment monitoring, among other benefits in healthcare. This chapter describes relevant concepts related to medical imaging techniques for the construction of three-dimensional models for visualization and biofabrication.
... Thresholding and morphological operations, followed by a level-set segmentation [72], were used to locate the surfaces of the outermost layer of cells. Triangulated bladder surfaces were extracted from these binary masks using a basic 3D surface net [73], positioning vertices at the voxel centroids. Bilaplacian smoothing was used to give a smoother surface. ...
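The bilaplacian step mentioned above can be sketched as a squared umbrella operator applied to the surface-net vertices; `neighbors[i]` is assumed to list the vertex indices adjacent to vertex i, and the step size and iteration count are illustrative rather than taken from the cited work.

```python
import numpy as np

def umbrella(field, neighbors):
    """Discrete (umbrella) Laplacian of a per-vertex vector field."""
    return np.array([field[nbrs].mean(axis=0) - field[i]
                     for i, nbrs in enumerate(neighbors)])

def bilaplacian_smooth(verts, neighbors, step=0.1, iters=20):
    """Smooth vertex positions by repeatedly moving against the bilaplacian."""
    v = verts.astype(float)
    for _ in range(iters):
        L = umbrella(v, neighbors)   # first Laplacian of the positions
        L2 = umbrella(L, neighbors)  # Laplacian of the Laplacian
        v = v - step * L2
    return v
```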
Article
Leaves display a remarkable range of forms, from flat sheets with simple outlines to cup-shaped traps. Although much progress has been made in understanding the mechanisms of planar leaf development, it is unclear whether similar or distinctive mechanisms underlie shape transformations during development of more complex curved forms. Here, we use 3D imaging and cellular and clonal analysis, combined with computational modelling, to analyse the development of cup-shaped traps of the carnivorous plant Utricularia gibba. We show that the transformation from a near-spherical form at early developmental stages to an oblate spheroid with a straightened ventral midline in the mature form can be accounted for by spatial variations in rates and orientations of growth. Different hypotheses regarding spatiotemporal control predict distinct patterns of cell shape and size, which were tested experimentally by quantifying cellular and clonal anisotropy. We propose that orientations of growth are specified by a proximodistal polarity field, similar to that hypothesised to account for Arabidopsis leaf development, except that in Utricularia, the field propagates through a highly curved tissue sheet. Independent evidence for the polarity field is provided by the orientation of glandular hairs on the inner surface of the trap. Taken together, our results show that morphogenesis of complex 3D leaf shapes can be accounted for by similar mechanisms to those for planar leaves, suggesting that simple modulations of a common growth framework underlie the shaping of a diverse range of morphologies.
... These models satisfy the requirements, but they are not sufficiently smooth; the sharp edges of the voxels have been removed, but the surfaces of the polygonal models form terraces, i.e. stepped formations on the surface of the model. To reduce this negative effect, classical smoothing methods can be applied [5]. These methods smooth out the projecting edges within one voxel and reduce the terracing effect but do not completely remove it. ...
Conference Paper
Although a voxel model is an independent form of 3D object representation, in some cases it should be converted into a polygonal model for further processing and visualization. As a result of this conversion, the obtained polygonal model can look rough because of the original shape of the voxels. The paper presents a method for converting a voxel model into a polygonal model that produces a smooth polygonal model.
... Our method, however, applies, as it does not rely on precomputation, but we do need to rasterize the object. To do this, we compute a triangle mesh for an isosurface of Φ_t using dual contouring [16] implemented in a geometry shader. This is done in a pre-pass to each frame, where the geometry shader evaluates Φ_t and its gradient directly based on the current time. ...
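For context, the vertex placement at the heart of dual contouring can be sketched as a least-squares fit to the Hermite data of one cell (edge-intersection points p_i with normals n_i), minimising sum_i (n_i · (x − p_i))^2; the snippet below solves this with NumPy rather than in a geometry shader, and omits the regularisation a robust implementation would add.

```python
import numpy as np

def dual_contouring_vertex(points, normals):
    """Least-squares minimiser of sum_i (n_i . (x - p_i))^2 for one cell."""
    N = np.asarray(normals, dtype=float)               # one row per Hermite sample
    b = np.einsum('ij,ij->i', N, np.asarray(points, dtype=float))
    x, *_ = np.linalg.lstsq(N, b, rcond=None)
    return x
```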
Article
Existing techniques for interactive rendering of deformable translucent objects can accurately compute diffuse but not directional subsurface scattering effects. It is currently a common practice to gain efficiency by storing maps of transmitted irradiance. This is, however, not efficient if we need to store elements of irradiance from specific directions. To include changes in subsurface scattering due to changes in the direction of the incident light, we instead sample incident radiance and store scattered radiosity. This enables us to accommodate not only the common distance-based analytical models for subsurface scattering but also directional models. In addition, our method enables easy extraction of virtual point lights for transporting emergent light to the rest of the scene. Our method requires neither preprocessing nor texture parameterization of the translucent objects. To build our maps of scattered radiosity, we progressively render the model from different directions using an importance sampling pattern based on the optical properties of the material. We obtain interactive frame rates, our subsurface scattering results are close to ground truth, and our technique is the first to include interactive transport of emergent light from deformable translucent objects.
... Colon Data Preprocessing. The proposed algorithms are validated on real VC colon data from the public databases of NIBIB and NIH. In the data preprocessing, we perform digital cleansing, segmentation, denoising [4], and colon inner wall surface extraction [16] on raw CT scans, and then do smoothing, remeshing and simplification on triangular meshes (Fig. 1). ...
Conference Paper
In virtual colonoscopy, colon conformal flattening plays an important role, which unfolds the colon wall surface to a rectangle planar image and preserves local shapes by conformal mapping, so that the cancerous polyps and other abnormalities can be easily and thoroughly recognized and visualized without missing hidden areas. In such maps, the anatomical landmarks (taeniae coli, flexures, and haustral folds) are naturally mapped to convoluted curves on 2D domain, which poses difficulty for comparing shapes from geometric feature details. Understanding the nature of landmark curves to the whole surface structure is meaningful but it remains challenging and open. In this work, we present a novel and effective colon flattening method based on quasiconformal mapping, which straightens the main anatomical landmark curves with least conformality (angle) distortion. It provides a canonical and straightforward view of the long, convoluted and folded tubular colon surface. The computation is based on the holomorphic 1-form method with landmark straightening constraints and quasiconformal optimization, and has linear time complexity due to the linearity of 1-forms in each iteration. Experiments on various colon data demonstrate the efficiency and efficacy of our algorithm and its practicability for polyp detection and findings visualization; furthermore, the result reveals the geometric characteristics of anatomical landmarks on colon surfaces.
... Some issues, such as incomplete contrast agent dispersal or touching vessels, may also not be removed. There are several specific methods available, e.g., methods that detect features and adjust the sampling of the data [17], that apply an additional trilinear interpolation and subdivision to the surface elements (Precise MC [1]), or that relax an initial surface iteratively with additional position constraints (e.g., Dual MC [22], Constrained Elastic Surface Nets (CESN) [9, 14]). These methods are, however, only specific solutions, especially to the staircase problem, and cannot remove all potential artifacts. ...
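The constrained relaxation referred to above (CESN-style) can be sketched as follows: each node is pulled toward the average of its neighbours but clamped to the grid cell it originated from, which is what keeps the smoothed surface close to the segmentation. The cell half-size and iteration count below are assumptions for illustration, not values from the cited papers.

```python
import numpy as np

def constrained_relax(verts, neighbors, cell_centers, half_size=0.5, iters=50):
    """Iteratively relax surface-net nodes without letting them leave their cells."""
    v = verts.astype(float)
    lo, hi = cell_centers - half_size, cell_centers + half_size
    for _ in range(iters):
        target = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v = np.clip(target, lo, hi)  # constraint: stay inside the original surface cell
    return v
```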
Conference Paper
The generation of surface models for computational fluid dynamics and rapid prototyping implies several steps to remove artifacts caused by image acquisition, segmentation, and mesh extraction. Moreover, specific requirements, such as minimum diameters and distances to neighboring structures, are essential for rapid prototyping. For the simulation of blood flow, model accuracy and mesh quality are important. Medical expert knowledge is often required to reliably differentiate artifacts and pathological malformations. Currently, a number of software tools need to be employed to manually solve the different artifact removal and mesh editing tasks. Within this paper, we identify the related tasks and describe the procedure used to obtain artifact-free vascular surface models for these applications.
... In [20], Karatasheva et al. proposed to first extract a base mesh using [21], then apply an optimization similar to [16] to polygonize the implicit surface before generating a volume mesh suitable for finite element analysis. Some other methods use an octree-based decomposition of the domain [22], and/or add one vertex per cell that intersects the implicit surface [23]. Note that the location of this vertex can be optimized to effectively represent sharp edges [24]. ...
Article
In this paper, we propose a new algorithm to mesh implicit surfaces which produces meshes with both a good triangle aspect ratio and a good approximation quality. The number of vertices of the output mesh is defined by the end-user. To this end, we perform two-stage processing: an initialization step followed by an iterative optimization step. The initialization step consists in capturing the surface topology and allocating the vertex budget. The optimization algorithm is based on a variational vertex relaxation and triangulation update. In addition, a gradation parameter can be defined to adapt the mesh sampling to the curvature of the implicit surface. We demonstrate the efficiency of the approach on synthetic models as well as real-world acquired data, and provide comparisons with previous approaches.
Article
We propose a novel technique for the automatic design of molds to cast highly complex shapes. The technique generates composite, two-piece molds. Each mold piece is made up of a hard plastic shell and a flexible silicone part. Thanks to the thin, soft, and smartly shaped silicone part, which is kept in place by a hard plastic shell, we can cast objects of unprecedented complexity. An innovative algorithm based on a volumetric analysis defines the layout of the internal cuts in the silicone mold part. Our approach can robustly handle thin protruding features and intertwined topologies that have caused previous methods to fail. We compare our results with state of the art techniques, and we demonstrate the casting of shapes with extremely complex geometry.
Conference Paper
In this work we present separate procedural methods to generate features that are found in natural terrains and are difficult to reproduce with heightmap-based methods. We approximate overhangs, arches and caves using procedural functions and a reduced set of parameters. This produces visually plausible terrain feature topologies as well as a high degree of artistic control. Our approach is more intuitive and art-directable than other existing volumetric methods, which are either more complex to integrate into existing voxel engines, due to the framework changes required, or rely on automatic procedural generation, which reduces the ability to provide creative input.
Article
This paper describes different methods tested to haptically explore voxel-based data acquired through medical scanners. The goal of this project is a multi-modal virtual reality surgical simulator. This simulator will allow surgeons and surgical residents to practice and rehearse surgical procedures using patient-specific data. Several methods for calculating the displayed force were investigated and are presented. Additional medical dataset operations, graphical interfaces, and implementation issues are also presented.
Article
This paper discusses a linked volumetric representation for graphical objects that enables physics-based modeling of object interactions such as: collision detection, collision response, 3D object deformation, and interactive object modification by carving, cutting, tearing, and joining. Object manipulation algorithms are presented along with implementation details and the results of timing tests.

1 Introduction

In volume graphics, objects are represented as 3-dimensional arrays of sampled data elements. Unlike surface-based graphics, where the surfaces of graphical objects are represented by contiguous 2D polygons or curved 2D spline patches, volumetric models can represent both object surfaces and object interiors. A volumetric object representation is necessary for visualizing complex internal structure and for physically accurate modeling of interactions between solid objects with arbitrary shape and material composition. In this paper, algorithms for manipulating volumetric objects are presented. These algorithms include collision detection, collision response calculation, object deformation, and object modification by carving, cutting, tearing, and joining. These algorithms use a linked-element object representation for efficient implementation. Implementation details are presented along with the results of tests for algorithm timing and efficacy.

2 Motivation

Volumetric object representations are necessary whenever the internal object structure is important for the visual rendering of a graphical object or for the simulation of interactions between objects. For example, volumetric image data of human anatomy can be used to effectively visualize internal anatomical structure as illustrated in Figure 1. Because the physics of an arbitrary object cannot be determined from a hollow object model, the representation of internal structure is important for physically realistic modeling as well as visualization. The cut plane of the image in Figure 1 illustrates the detailed interior structure of human tissue. In order to accurately model the mechanical behavior of such tissue, we require a graphical representation that can incorporate this complex structure. Both the deformation of objects with complex geometry or heterogeneous material properties and the cutting or tearing of solid tissues require some representation of object interiors. Modeling the cutting or tearing of objects with complex structure is a challenging problem for surface-based object models, since object cutting requires the generation of new object surfaces along the cutting path. The problem of clipping a surface-based or geometric object model by an arbitrary 2D plane has been addressed in constructive solid geometry (CSG) (e.g. (32)) and polygon rendering. However, there has been little progress towards enabling cutting through surface-based graphical objects along an arbitrary curved path. Both determining the intersection of the cutting path with the object and constructing the new surface along the object's cut surface are challenging problems. Related work in CSG provides mathematical techniques for building new surfaces of intersecting solids (25), (26). However, applying these methods would require constructing a surface or solid representing the knife path, limiting interactivity. In addition, when the cut is made through a surface-based object model that does not contain information about interior structure, the color, texture, and other features of the cut surface must be fabricated in
Article
This paper addresses the problem of simulating deformations between objects and the hand of a synthetic character during a grasping process. A numerical method based on finite element theory allows us to take into account the active forces of the fingers on the object and the reactive forces of the object on the fingers. The method improves control of synthetic human behavior in a task-level animation system because it provides information about the environment of a synthetic human and so can be compared to the sense of touch. Finite element theory currently used in engineering seems to be one of the best approaches for modeling both elastic and plastic deformation of objects, as well as shocks with or without penetration between deformable objects. We show that intrinsic properties of the method based on composition/decomposition of elements have an impact in computer animation. We also state that the use of the same method for modeling both objects and human bodies improves the modeling of the contacts between them. Moreover, it allows a realistic envelope deformation of the human fingers comparable to existing methods. To show what we can expect from the method, we apply it to the grasping and pressing of a ball. Our solution to the grasping problem is based on displacement commands instead of force commands used in robotics and human behavior.
Article
3D Active Net, which is a 3D extension of Snakes, is an energy-minimizing surface model which can extract a volume of interest from 3D volume data. It is deformable and evolves in 3D space to be attracted to salient features, according to its internal and image energy. The net can be fitted to the contour of a target object by the definition of the image energy suitable for the contour property. It is an alternative way to the extraction of a desired 3D object by manual segmentation or by using a slice-by-slice approach. We present testing results of the extraction of a muscle from the Visible Human Data by two methods: manual segmentation and the application of 3D Active Net. We apply principal component analysis, which utilizes the color information of the 3D volume data to emphasize an ill-defined contour of the muscle, and then apply 3D Active Net. We recognize that the extracted object has a smooth and natural contour in contrast with a comparable manual segmentation, proving an advantage of our approach.