A SIMPLE AND EFFICIENT APPROACH FOR 3D MESH
APPROXIMATE CONVEX DECOMPOSITION
Khaled Mamou and Faouzi Ghorbel
Groupe de recherche GRIFT, Laboratoire CRISTAL, Ecole Nationale des Sciences de l’Informatique
University of Manouba, 2010 Manouba-Tunisia
khaled mamou@yahoo.fr, faouzi.ghorbel@ensi.rnu.tn
ABSTRACT
This paper presents an original approach for 3D mesh approx-
imate convex decomposition. The proposed algorithm com-
putes a hierarchical segmentation of the mesh triangles by ap-
plying a set of topological decimation operations to its dual
graph. The decimation strategy is guided by a cost function
describing the concavity and the shape of the detected clus-
ters. The generated segmentation is finally exploited to con-
struct a faithful approximation of the original mesh by a set
of convex surfaces. This new representation is particularly
adapted for collision detection. The experimental evaluation
we conducted shows that the proposed technique efficiently
decomposes a concave 3D mesh into a small set (with respect
to the number of its facets) of nearly convex surfaces. Fur-
thermore, it automatically detects the anatomical structure of
the analyzed 3D models, which makes it an ideal candidate
for skeleton extraction and patterns recognition applications.
Index Terms— Approximate convex decomposition, 3D mesh, collision detection, hierarchical segmentation
1. INTRODUCTION
Collision detection is essential for realistic physical interac-
tions in video games, computer animation and physically-
based modeling. In order to ensure real-time interactivity with
the player/user, video game and 3D modeling software de-
velopers usually approximate the 3D models composing the
scene (e.g. animated characters, static objects...) by a set of
simple convex shapes such as ellipsoids, capsules or convex-
hulls. In practice, these simple shapes provide poor approxi-
mations for concave surfaces and generate false collision de-
tections.
In this article, we present a simple and efficient approach
to decompose a 3D mesh into a set of nearly convex surfaces.
This decomposition is directly exploited to compute a faithful
approximation of the original 3D mesh, particularly adapted
to collision detection.
The remainder of this paper is structured as follows.
Section 2 introduces the approximate convex decomposition
problem. The proposed segmentation technique is described
in Section 3 and its performance is evaluated in Section 4.
Finally, Section 5 concludes the paper and suggests possibili-
ties for future work.
2. APPROXIMATE CONVEX DECOMPOSITION
Let S be a 3D mesh defined in R^3, ξ = {v_1, v_2, ..., v_V} the set of its vertices (V represents the number of vertices) and θ = {t_1, t_2, ..., t_T} the set of its triangles (T represents the number of triangles).
Computing an exact convex decomposition of S consists in partitioning it into a minimal set of convex sub-surfaces. In [1], the authors prove that computing such a decomposition is an NP-hard problem and propose different heuristics to resolve it. In [2, 3], Lien et al. point out that the proposed algorithms are impractical since they produce a high number of clusters. In order to provide a tractable solution, they propose to relax the exact convexity constraint and consider instead the problem of computing an approximate convex decomposition of S. Here, for a fixed parameter ε, the goal is to determine a partition Π = {π_1, π_2, ..., π_K} of θ with a minimal number of clusters K, each cluster having concavity lower than ε.
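The feasibility side of this definition is straightforward to state in code. The sketch below (function and argument names are illustrative, not from the paper) checks whether a candidate partition satisfies the concavity constraint; the hard part of the problem, minimizing K over all admissible partitions, is what the rest of the paper addresses.

```python
def is_valid_acd(partition, concavity, eps):
    """Feasibility test for the ACD problem of Section 2: a partition
    {pi_1, ..., pi_K} of the triangle set theta is admissible when every
    cluster's concavity is at most eps; the optimization then asks for
    an admissible partition minimizing K."""
    return all(concavity(pi) <= eps for pi in partition)
```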
To the best of our knowledge, the approximate convex decomposition problem was only addressed by [2, 3]. This latter technique exploits a divide-and-conquer strategy, which consists in iteratively dividing the mesh until the concavity of each sub-part is lower than the threshold ε. Here, at each step i, the vertex v_i with the highest concavity is selected and the cluster to which it belongs is divided into two sub-clusters by considering a bisection plane incident to v_i. The main limitation of this approach is related to the choice of the "best" cut plane, which requires a sophisticated analysis of the model features [2, 3]. Moreover, considering only plane-based bisections is in practice too restrictive and may lead to poor decompositions (cf. Figure 3).
In order to overcome such limitations, this paper intro-
duces a novel hierarchical segmentation approach for 3D
mesh approximate convex decomposition, described in the
next section.
978-1-4244-5654-3/09/$26.00 ©2009 IEEE — ICIP 2009
3. PROPOSED APPROACH
The proposed hierarchical segmentation approach proceeds as
follows. First, the dual graph of the mesh is computed. Then
its vertices are iteratively clustered by successively applying
topological decimation operations, while minimizing a cost
function related to the concavity and the aspect ratio of the
produced segmentation clusters.
Let’s first recall the definition of the dual graph of the
mesh.
3.1. Dual graph
The dual graph S* associated to the mesh S is defined as follows:
– each vertex of S* corresponds to a triangle of S,
– two vertices of S* are neighbours (i.e., connected by an edge of the dual graph) if and only if their corresponding triangles in S share an edge.
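A minimal sketch of this construction (names are ours, not the paper's): triangles are given as vertex-index triples, and two triangles become dual-graph neighbours exactly when they share a mesh edge.

```python
from collections import defaultdict

def dual_graph(triangles):
    """Dual graph of a triangle mesh: one node per triangle, one edge
    per pair of triangles sharing a mesh edge.  `triangles` is a list
    of (i, j, k) vertex-index triples."""
    # Map each undirected mesh edge to the triangles incident to it.
    edge_to_tris = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        for a, b in ((i, j), (j, k), (k, i)):
            edge_to_tris[(min(a, b), max(a, b))].append(t)
    # Triangles sharing an edge are adjacent in the dual graph.
    adj = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            adj[t].update(u for u in tris if u != t)
    return adj
```

On a manifold mesh each edge is shared by at most two triangles, so the dual graph has at most 3T/2 edges.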
3.2. Decimation operator
Once the dual graph S* is computed, the algorithm starts the decimation stage, which consists in successively applying half-edge collapse [4] decimation operations. Each half-edge collapse operation applied to an edge (v, w), denoted hecol(v, w), merges the two vertices v and w. The vertex w is removed and all its incident edges are connected to v (cf. Figure 1).
Fig. 1. Half-edge collapse decimation operation.
Let A(v) be the list of the ancestors of the vertex v. Initially, A(v) is empty. At each operation hecol(v, w) applied to the vertex v, the list A(v) is updated as follows:

A(v) ← A(v) ∪ A(w) ∪ {w}.   (1)
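A sketch of hecol(v, w) on a dict-of-sets adjacency, applying the ancestor update of Eq. (1); the data layout is our assumption, the paper does not prescribe one.

```python
def hecol(adj, ancestors, v, w):
    """Half-edge collapse hecol(v, w): merge w into v.

    `adj` maps each dual-graph vertex to its neighbour set and
    `ancestors` maps each vertex to its ancestor set A(.)."""
    # Eq. (1): A(v) <- A(v) U A(w) U {w}.
    ancestors[v] |= ancestors[w] | {w}
    # Remove w and reconnect its incident edges to v.
    for u in adj.pop(w):
        adj[u].discard(w)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    adj[v].discard(w)
```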
3.3. Simplification strategy
The decimation process described in the previous section is guided by a cost function describing the concavity and the aspect ratio [5] of the surface S(v, w) resulting from the unification of the vertices v and w and their ancestors:

S(v, w) = A(v) ∪ A(w) ∪ {w, v}.   (2)

As in [5], we define the aspect ratio E_shape(v, w) of the surface S(v, w) as follows:

E_shape(v, w) = ρ²(S(v, w)) / (4π × σ(S(v, w))),   (3)

where ρ(S(v, w)) and σ(S(v, w)) are respectively the perimeter and the area of S(v, w).
The cost E_shape(v, w) was introduced in order to favor the generation of compact clusters. In the case of a disk, this cost equals one. The more irregular a surface is, the higher its aspect ratio.
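Eq. (3) can be evaluated directly from a cluster's perimeter ρ and area σ; the disk case confirms the normalization.

```python
import math

def e_shape(perimeter, area):
    """Aspect-ratio cost of Eq. (3): rho^2 / (4 * pi * sigma)."""
    return perimeter ** 2 / (4.0 * math.pi * area)
```

For a disk of radius r, ρ = 2πr and σ = πr², so the cost is exactly 1; a unit square (ρ = 4, σ = 1) scores 4/π ≈ 1.27.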
The decimation cost E(v, w) associated to the edge (v, w) is given by:

E(v, w) = C(S(v, w))/D + α × E_shape(v, w),   (4)

where
– C(S(v, w)) is the concavity of S(v, w) (cf. Section 3.4),
– D is a normalization factor equal to the diagonal of the bounding box of S,
– α is a parameter controlling the contribution of the shape factor E_shape(v, w) with respect to the concavity cost (cf. Section 3.5).
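Putting Eqs. (3) and (4) together (a sketch; argument names are ours, and the concavity C is assumed already computed as in Section 3.4):

```python
import math

def decimation_cost(concavity, perimeter, area, diag, alpha):
    """Edge cost of Eq. (4): E = C/D + alpha * E_shape, with the
    aspect ratio E_shape = rho^2 / (4*pi*sigma) inlined from Eq. (3)."""
    e_shape = perimeter ** 2 / (4.0 * math.pi * area)
    return concavity / diag + alpha * e_shape
```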
At each step of the decimation process, the hecol operation with the lowest decimation cost is applied and a new partition Π(n) = {π_1^n, π_2^n, ..., π_K(n)^n} is computed as follows:

∀ k ∈ {1, ..., K(n)},  π_k^n = {p_k^n} ∪ A(p_k^n),   (5)

where (p_k^n)_{k∈{1,...,K(n)}} represent the vertices of the dual graph S* obtained after n half-edge collapse operations. This process is iterated until all the edges of S* generating clusters with concavities lower than ε are decimated.
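The loop above can be sketched as follows. This illustrative version rescans all edges at every step instead of maintaining a priority queue, and the two callbacks stand in for the cost and concavity evaluations of Eqs. (4) and (6); none of these names come from the paper.

```python
def decimate(adj, edge_cost, cluster_concavity, eps):
    """Greedy decimation loop of Section 3.3 (sketch).  At every step
    the collapse hecol(v, w) with the lowest cost is applied, restricted
    to edges whose merged cluster S(v, w) keeps a concavity <= eps; the
    loop stops when no such edge remains.  Returns the ancestor lists,
    from which the final partition is read off via Eq. (5)."""
    ancestors = {v: set() for v in adj}
    while True:
        # Find the cheapest admissible half-edge collapse.
        best = None
        for v in adj:
            for w in adj[v]:
                if cluster_concavity(v, w, ancestors) <= eps:
                    e = edge_cost(v, w, ancestors)
                    if best is None or e < best[0]:
                        best = (e, v, w)
        if best is None:
            return ancestors
        _, v, w = best
        ancestors[v] |= ancestors.pop(w) | {w}   # Eq. (1)
        for u in adj.pop(w):                     # reconnect w's edges to v
            adj[u].discard(w)
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
        adj[v].discard(w)
```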
3.4. Concavity measure
As discussed in [2, 3], there is no consensus in the literature on a quantitative concavity measure. In this work, we define the concavity C(S) of a 3D mesh S as follows (cf. Figure 2):

C(S) = max_{M∈S} ‖M − P(M)‖,   (6)

where P(M) represents the projection of the point M on the convex-hull CH(S) [6] of S, with respect to the half-ray with origin M and direction normal to the surface S at M. Let us note that the concavity of a convex surface is zero. Intuitively, the more concave a surface, the "farther" it is from its convex-hull. The definition (6) extends directly to open meshes once oriented normals are provided for each vertex.
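Eq. (6) can be sketched by ray-casting against the hull's facet planes. In this illustrative version the hull is assumed precomputed and passed in as plane equations a·x + d ≤ 0 (the format produced, e.g., by `scipy.spatial.ConvexHull(...).equations`), and the normals are assumed unit-length and outward-oriented.

```python
def concavity(points, normals, hull_planes):
    """Concavity measure of Eq. (6), sketched by ray-casting.  For each
    surface point M with unit outward normal n, the half-ray M + t*n
    leaves the convex hull through the facet giving the smallest t >= 0
    among the facets it faces; C(S) is the largest such distance."""
    worst = 0.0
    for m, n in zip(points, normals):
        t_exit = float("inf")
        for a, b, c, d in hull_planes:        # a*x + b*y + c*z + d <= 0 inside
            denom = a * n[0] + b * n[1] + c * n[2]
            if denom > 1e-12:                 # ray heads out through this facet
                t = -(a * m[0] + b * m[1] + c * m[2] + d) / denom
                if 0.0 <= t < t_exit:
                    t_exit = t
        if t_exit != float("inf"):
            worst = max(worst, t_exit)
    return worst
```

A point lying on the hull yields t = 0, which matches the remark that a convex surface has zero concavity.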
3.5. Choice of the parameter α
Fig. 2. Concavity measure for a 3D mesh.

The clusters detected during the early stages of the algorithm are composed of a low number of adjacent triangles with a concavity almost equal to zero. Therefore, the decimation cost E is dominated by the aspect ratio cost E_shape, which favors the generation of compact surfaces. This behavior is progressively inverted during the decimation process, since the clusters become more and more concave. In order to ensure that the cost (α × E_shape) has no influence on the choice of the last decimation operations, we have set the parameter α as follows:

α = ε / (10 × D).   (7)

This choice guarantees, for disk-shaped clusters, that the cost (α × E_shape) is ten times lower than the concavity-related cost C(S(v, w))/D.
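A quick numeric check of the design intent behind Eq. (7): with α = ε/(10 × D), a disk-shaped cluster (E_shape = 1) contributes a shape term exactly ten times smaller than a concavity term at the threshold ε.

```python
def alpha(eps, diag):
    """Shape-term weight of Eq. (7): alpha = eps / (10 * D)."""
    return eps / (10.0 * diag)

# Disk-shaped cluster: E_shape = 1, so the shape term is alpha * 1,
# while the concavity term at the threshold is eps / D.
eps, diag = 0.03, 2.0
shape_term = alpha(eps, diag) * 1.0
concavity_term = eps / diag
ratio = concavity_term / shape_term
```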
4. EXPERIMENTAL RESULTS
In order to validate our approach, we have first compared its segmentation results to those of the ACD technique¹ introduced in [2, 3]. Figure 3 compares the convex-hulls generated by our approach to those of [2, 3]. The reported results show that our approach provides better convex approximations while detecting a lower number of clusters.
Figure 4 presents the segmentation results and the convex-hull approximations generated by our approach for different 3D meshes. Let us note first that, for all the models, the generated decompositions ensure a concavity lower than ε while generating a small number of clusters, which provides an efficient representation for collision detection. Moreover, the proposed technique successfully detects the convex parts (e.g. the four spheres of Figure 4.p) and the anatomical structure of the analyzed 3D models.
5. CONCLUSION
We have presented a hierarchical segmentation approach for
approximate convex decomposition of 3D meshes. The gen-
erated segmentations are exploited to construct faithful ap-
proximations of the original mesh by a set of convex surfaces.
This new representation is particularly adapted for collision
detection. The experimental evaluation we conducted shows
that the proposed technique efficiently decomposes a concave
3D model into a small set of nearly convex surfaces, while
automatically detecting its anatomical structure, which makes it an ideal candidate for skeleton extraction and pattern recognition applications.

¹We have considered the only public implementation of the algorithm available at http://codesuppository.blogspot.com/2006/04/approximate-convex-decomposition.html
(a) T = 5804  (b) K = 28  (c) K = 23
(d) T = 39689  (e) K = 12  (f) K = 12
(g) T = 19563  (h) K = 26  (i) K = 22
Fig. 3. Comparative evaluation: (a,d,g) original meshes, (b,e,h) convex-hulls generated by [2, 3] and (c,f,i) convex-hulls generated by our approach (K number of clusters and T number of triangles).
6. REFERENCES
[1] B. Chazelle, D.P. Dobkin, N. Shouraboura, and A. Tal, “Strategies for polyhedral surface decomposition: an experimental study,” in Symposium on Computational Geometry, 1995, pp. 297–305.
[2] J.M. Lien and N.M. Amato, “Approximate convex decomposition,” in Symposium on Computational Geometry, 2004, pp. 457–458.
[3] J.M. Lien and N.M. Amato, “Approximate convex decomposition of polyhedra and its applications,” Computer Aided Geometric Design, pp. 503–522, 2008.
[4] H. Hoppe, “Progressive meshes,” in International Conference on Computer Graphics and Interactive Techniques, 1996, pp. 99–108.
[5] A.K. Jain, “Fundamentals of digital image processing,” Prentice-Hall International, 1989.
[6] F.P. Preparata and S.J. Hong, “Convex hulls of finite sets of points in two and three dimensions,” Communications of the ACM, vol. 20(2), pp. 87–93, 1977.
Fig. 4. Segmentation results and generated convex-hulls (ε = 0.03 × D, D length of the bounding box diagonal, K number of clusters and T number of triangles).