Silhouette Smoothing for Real-time Rendering of
Mesh Surfaces
Lu Wang, Changhe Tu*, Wenping Wang, Xiangxu Meng, Bin Chan and Dongming Yan
Abstract: Coarse piecewise linear approximation of surfaces causes undesirable polygonal appearance of silhouettes. We present an efficient method for smoothing the silhouettes of coarse triangle meshes using efficient 3D curve reconstruction and simple local re-meshing. It does not assume the availability of a fine mesh and generates only a moderate amount of additional data at run time. Furthermore, polygonal feature edges are also smoothed in a unified framework. Our method is based on a novel interpolation scheme over silhouette triangles, which ensures that smooth silhouettes are faithfully reconstructed and always change continuously with respect to continuous movement of the viewpoint or objects. We speed up computation with GPU assistance to achieve real-time rendering of coarse meshes with the smoothed silhouettes. Experiments show that this method outperforms previous methods for silhouette smoothing.

Index Terms: Silhouette Smoothing, Polygonal Mesh, Interpolation, Hermite Curves.
I. INTRODUCTION
Polygonal meshes are widely used for representing 3D shapes
in computer graphics. Simplification methods produce meshes of
small triangle count that are needed in many applications where
there is considerable resource limitation, such as 3D real time
graphics on mobile devices. The resulting coarse meshes have
conspicuous polygonal silhouettes or polygonal feature edges,
giving an impression of low visual quality, since human vision is particularly sensitive to silhouettes and feature curves and to their non-smoothness. (A feature curve is a smooth curve defined by the transversal intersection, i.e., non-tangent intersection, of two smooth surfaces. See Fig. 2.)
The challenge of rendering a coarse mesh with smoothed sil-
houettes is well recognized by the computer graphics community
[5], [11]. A small triangle count, which is required by efficient
storage and transmission, and faithful smooth appearance are two
conflicting requirements. Simply refining a coarse mesh overall
to increase the number of triangles would be inefficient for the
purpose of silhouette smoothing, since considerable extra resources would then be wasted in creating, storing and rendering an increased number of triangles in the interior (i.e., the non-silhouette region) of a mesh.
A. Overview and contributions
We present an efficient method that fixes polygonal silhouettes
and feature edges into smooth curves for real-time rendering of
coarse meshes (Fig. 1). The method requires only a coarse triangle
*Corresponding author. Lu Wang, Changhe Tu and Xiangxu Meng are with
the School of Computer Science and Technology, Shandong University, Jinan,
China, E-mail: luwang@mail.sdu.edu.cn, chtu@sdu.edu.cn, mxx@sdu.edu.cn.
Wenping Wang, Bin Chan and Dongming Yan are with the Department
of Computer Science, the University of Hong Kong, Pokfulam Road, Hong
Kong, China. E-mail: {wenping, bchan, dmyan}@cs.hku.hk.
Fig. 1. An example of silhouette smoothing by our proposed method. (a)
A coarse mesh model (500 triangles); (b) The shaded coarse mesh; (c) The coarse mesh is refined near the silhouette by our method (904 triangles); (d) Phong shading of the mesh in (c), with smoothed silhouette.
mesh as input. It performs view-dependent 3D curve reconstruc-
tion and simple local re-meshing to generate smooth silhouettes at
run time; feature edges are smoothed in preprocessing. Unlike pre-
vious approaches, our method does not require an LOD model or
a fine mesh and avoids global smooth surface reconstruction. The
local re-meshing is based on the notion of silhouette triangles,
which ensures that smooth silhouettes are faithfully reconstructed
and coherent (i.e., free of visual discontinuity) when rendered with respect to a moving viewpoint or under continuous object movement. Note that our method focuses only on smoothing silhouette and feature edges; it does not attempt to alter the appearance (e.g., shading and texture) of a coarse mesh in regions away from its silhouettes or feature curves.
The major contribution of this paper is a new method for
computing smooth silhouettes of coarse triangle mesh surfaces.
The smooth silhouette curves are computed using an interpolation
scheme over silhouette triangles, a concept that we will introduce
later. As a result, the silhouette curves thus computed are more accurate than those produced by previous methods, and they possess visual coherence for moving objects or with respect to a moving viewpoint.
Fig. 2. Feature curves (in red) are local transversal intersections of two surfaces.
Some other advantages of our method are as follows. Polygonal
feature edges are smoothed in the same way as polygonal silhou-
ette edges. We will see (Section III-D) that this not only makes feature edges look smooth but also ensures that these smooth feature curves possess visual coherence during continuous motion.
We also apply an effective vertex perturbation scheme to saddle
regions to prevent the smoothed silhouette from being blocked by
the neighboring faces of the original mesh, which is a critical issue
that has not been addressed by previous silhouette smoothing
methods. The method runs efficiently with GPU assistance.
After reviewing related works in the rest of this section, we
will present an outline of our method in Section II. Curve
approximation to silhouettes and feature curves, and the re-
meshing scheme are described in detail in Section III. GPU speed-
up is discussed in Section IV. We present experimental results in
Section V and conclude the paper in Section VI.
B. Related work
The silhouette rendering problem has extensively been studied
for various geometric models (e.g. [4], [22]). Here we focus on
polygonal meshes. Many methods for real-time mesh rendering
require a fine mesh to start with. For better rendering efficiency,
a progressively simplified mesh is usually displayed in a view-
dependent manner to ensure smooth appearance of silhouettes.
These methods include techniques based on multiple (or even
continuous) levels of detail [21], [13], [8], [9], [6], [15], [1]. The
silhouette clipping technique [17] also needs a fine mesh for
assistance. However, in many applications only relatively coarse
meshes are available; very fine meshes cannot be used, either
because they are not available or because of limited bandwidth
or memory.
The PN-triangle method [19] and its variant [3] do not assume
a fine mesh as input; rather, they construct a smooth cubic surface
patch for each triangle of a coarse triangle mesh. These methods
use global surface patch reconstruction to achieve the overall
smoothness of a coarse mesh for rendering, thus improving the
silhouette smoothness as well as the shading quality of interior
region. Normally the level of subdivision of cubic patches is uni-
form across the model and xed before rendering, in a viewpoint
independent manner. Therefore the polygonal appearance (i.e., the
nonsmoothness) of silhouettes still becomes apparent when one
zooms in on a silhouette. (See the comparison of our method with
the PN-triangle method in Fig. 24, Section V).
For fast silhouette smoothing and rendering, the method in [20]
takes a local and view-dependent approach that does not assume
the availability of fine meshes and does not involve global geom-
etry reconstruction. It performs local 2D curve approximation to
the projected boundary of a mesh surface on the view plane. This
boundary curve corresponds to the silhouette edges of the mesh;
an edge of a mesh surface is called a silhouette edge if one of its
two incident triangles is visible and the other is invisible, i.e., a
back-facing triangle.
Many existing methods extract silhouette edges from polygonal
meshes [2], [10], [14], [16]. However, there are some fundamental
difficulties in computing smooth silhouettes from silhouette edges
of a mesh model. Consider the mesh approximating a sphere
in Fig. 3. Suppose that all the mesh vertices are on the sphere.
Fig. 3(b) shows that, in one view, the silhouette curve (i.e., the
circle in red) does not pass through the vertices of the silhouette
polygon. The side view in Fig. 3(c) shows that the silhouette curve in (b) passes over a series of mesh triangles, which are marked in thick lines, rather than corresponding to any edges of the mesh.
This observation suggests that it is inherently inaccurate and
essentially incorrect to use silhouette edges for reconstructing
smooth silhouette curves, because a smooth silhouette curve
does not necessarily correspond to silhouette edges. Furthermore,
besides the accuracy consideration, the smooth silhouette curves
thus computed are not visually coherent with respect to a moving
viewpoint or if the mesh model moves continuously, since they
depend on the silhouette edges that appear or disappear abruptly
for a rotating mesh model.
Fig. 3. (a) A sphere; (b) A coarse mesh sampled from the sphere and
silhouette curve (in red); (c) The coarse mesh and silhouette curve in (b)
viewed from another angle.
Hertzmann et al. [7] noticed the visual coherence issue and proposed the following method for extracting silhouette curves. Suppose that a smooth surface is represented by a triangular mesh. Let E be the viewpoint and let n(p) be the estimated unit normal vector at a mesh vertex p. Consider the function $g(p) = n(p) \cdot (p - E)$ defined at all mesh vertices p, and extend the domain of g to each triangle by linear interpolation of its values at the three vertices of the triangle. The silhouette curve is then defined to consist of all points p on the mesh surface satisfying $g(p) = 0$ (e.g., the line segment MN in Fig. 4). The silhouette thus computed possesses visual coherence since it depends continuously on the viewpoint E. However, the smoothness issue is still not addressed, as the silhouette curve thus constructed is a polyline lying on the faces of the mesh.
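To make the construction of [7] concrete, here is a minimal sketch (Python/NumPy; the function name and variable layout are our own illustrative choices, not code from the paper) that evaluates g at the three vertices of one triangle and returns the linearly interpolated zero-crossing segment MN, if any.

```python
import numpy as np

def hertzmann_segment(P, N, eye):
    """Return the two endpoints of the linearly interpolated silhouette
    segment on one triangle, or None if g does not change sign.
    P: (3,3) vertex positions, N: (3,3) unit vertex normals, eye: viewpoint E."""
    g = np.einsum('ij,ij->i', N, P - eye)        # g(p) = n(p) . (p - E) at the vertices
    pts = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        if g[i] * g[j] < 0:                      # sign change => zero crossing on edge ij
            t = g[i] / (g[i] - g[j])             # linear interpolation parameter
            pts.append(P[i] + t * (P[j] - P[i]))
    return tuple(pts) if len(pts) == 2 else None
```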
In the present paper we focus on local silhouette processing,
without using a fine mesh or global surface reconstruction. Rather
than using the silhouette edges of a mesh as in [2], [4], [10], [16],
[20], we compute silhouette curves based on silhouette triangles.
The idea here of associating a silhouette segment with a triangle bears similarity to the treatment by Hertzmann et al. [7]. We aim at faithful smooth silhouette reconstruction as well as visual coherence of the silhouettes of a moving mesh model, so as to support real-time rendering of a coarse mesh with smooth silhouettes.

Fig. 4. Silhouette line MN as defined in [7].
II. OUTLINE OF PROPOSED METHOD
We consider triangle meshes that approximate a piecewise
smooth surface. There are two types of edges in such mesh
surfaces: feature edges and non-feature edges. Feature edges
are those that approximate smooth feature curves (or creases)
defined by transversal intersection of two smooth surfaces, while
non-feature edges are those mesh edges located in regions ap-
proximating smooth parts of the original surface.
To circumvent the problem pointed out in Section I with using silhouette edges for silhouette curve reconstruction, we propose the use of silhouette triangles for computing smooth silhouette curves of a mesh. Let M be a mesh approximating a smooth surface $\Gamma$. A triangle T of the mesh M is then an approximation to a triangular surface patch $T_p$ on the surface $\Gamma$. For some viewing directions, the silhouette curve of $\Gamma$ can lie on the patch $T_p$, and in this case we need to compute a smooth curve on the triangle T to approximate that part of the smooth silhouette of $\Gamma$. This naturally gives rise to the notion of the silhouette triangle, which is the triangle T in this case. The silhouette triangles are viewpoint dependent and are exactly those mesh faces that contain the silhouette curves given by $g(p) = 0$ in [7] (see Section I-B and Fig. 4).
Let $N_V$ be a shading normal defined at a mesh vertex V, and let $D_V$ denote the viewing direction vector from the vertex V to the viewpoint (i.e., the eye). Then V is said to be visible if the inner product $(N_V, D_V) \ge 0$ and invisible otherwise. Note that the notion of visibility here is a local one; it is different from occlusion, where a front-facing triangle may be blocked by another object in front of it. We label a visible vertex by "+" and an invisible one by "$-$". A mesh triangle face is called a silhouette triangle if its three vertices do not all have the same visibility status. For example, among the four triangles with visibility labels in Fig. 5, the triangles of types 1 and 2 are silhouette triangles, while the other two are not.
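The vertex visibility test and the silhouette-triangle test reduce to a few dot products per vertex. A minimal sketch, assuming a triangle-indexed mesh stored in NumPy arrays (function and variable names are our own, not from the paper):

```python
import numpy as np

def vertex_visibility(vertices, normals, eye):
    """Label each vertex visible (True) if (N_V, D_V) >= 0, where D_V points
    from the vertex toward the viewpoint."""
    view_dirs = eye - vertices                      # D_V for every vertex
    return np.einsum('ij,ij->i', normals, view_dirs) >= 0.0

def silhouette_triangles(triangles, visible):
    """A face is a silhouette triangle if its three vertices do not all share
    the same visibility status (types 1 and 2 in Fig. 5)."""
    v = visible[triangles]                          # (m, 3) boolean array
    return np.where(~(v.all(axis=1) | (~v).all(axis=1)))[0]

# Minimal usage example with a single triangle straddling the silhouette.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 0.0, -0.2]])
norms /= np.linalg.norm(norms, axis=1, keepdims=True)
tris = np.array([[0, 1, 2]])
vis = vertex_visibility(verts, norms, eye=np.array([0.0, 0.0, 5.0]))
print(silhouette_triangles(tris, vis))              # -> [0]
```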
Smooth silhouette curves of a mesh can be computed using the property that the surface normal at any point on the silhouette is perpendicular to the viewing direction. First, all silhouette triangles are identified by checking the visibility of all mesh vertices. Each silhouette triangle has exactly two edges labeled "$(-, +)$", such as $V_1V_2$ and $V_1V_3$ in Fig. 6. Using the endpoint information of these two edges, we compute two cubic Hermite interpolation curves: $S_1(u)$ connecting $V_1$ and $V_2$, and $S_2(v)$ connecting $V_1$ and $V_3$. These curves are called silhouette bridges (see Fig. 6).

Fig. 5. Different types of triangles in a mesh. Only triangles of types 1 and 2 are silhouette triangles.

Fig. 6. Construction of silhouette bridges $S_1(u)$, $S_2(v)$, and the silhouette segment S(t).
Then, as an approximation, assuming a linear change of normal vectors along the silhouette bridges, it is easy to compute a silhouette point on each silhouette bridge such that the interpolated surface normal vector at that point is perpendicular to the viewing direction. Thus we get two silhouette points, $S_1(u_0)$ and $S_2(v_0)$, as shown in Fig. 6. Finally, the silhouette curve segment, or simply silhouette segment, connecting the two silhouette points is given by a cubic Hermite interpolation curve S(t), as shown in Fig. 6. The silhouette segment S(t) is finally sampled for local re-meshing, so that the mesh can be rendered with the smoothed silhouettes.
Another goal of our method is to make polygonal feature lines smooth in a coarse mesh. (Two meshes with such polygonal feature lines are shown in Fig. 20(a) and Fig. 21(b).) Feature edges are connected to form polygonal feature lines that approximate the original smooth feature curves. We suppose that feature edges are already labeled as such in the input mesh. Based on the two endpoints of each feature edge and the estimated tangent vectors at those endpoints, we use a cubic Hermite curve to generate a smooth feature curve segment (or simply feature segment) to replace the straight feature edge. Because of their shared tangents, consecutive feature segments are joined with $G^1$ continuity and thus form a smooth piecewise cubic curve approximating the original feature curve.
Clearly, feature segments need to be rendered whenever visible. To speed up processing, they are pre-computed and stored for easy retrieval through quick lookup during run-time processing. The extra space requirement of this preprocessing scheme is justified because the number of feature edges is usually small compared with the number of non-feature edges, so storing a fixed set of feature segments normally does not cause a significant increase in memory consumption.

Fig. 7. Flowchart of our silhouette smoothing method.
Below is the main flow of our method, as illustrated in Fig. 7.
Details about the main steps will be presented in Section III.
1) Feature edge smoothing: For each feature edge, compute its corresponding feature curve segment by Hermite curve interpolation in preprocessing. This step is view independent.
2) Finding silhouette triangles: Given a viewing direction D
for a parallel projection or a viewpoint E for a perspective
projection, locate all silhouette triangles with respect to D
or E.
3) Computing silhouette bridges: Compute two silhouette
bridges for each silhouette triangle using Hermite interpo-
lation.
4) Computing silhouette segments: Compute a silhouette point on each silhouette bridge obtained in Step 3. Use a Hermite interpolation curve again to construct a smooth silhouette segment connecting the two silhouette points of each silhouette triangle. (Note that each silhouette bridge, as well as the silhouette point on it, is computed only once in each frame, since it is shared by two adjacent silhouette triangles.)
5) Local re-meshing: Sample points on silhouette segments
and visible feature segments adaptively, according to their
curvature and perceived size. Use these sample points to
perform local re-meshing for rendering.
We use the GPU to perform local re-meshing and render the
re-meshed surface. Details about the implementation are given in
Section IV.
III. SMOOTH CURVE CONSTRUCTION
In this section we explain the five main steps of our method, as outlined in the preceding algorithm flow.
A. Feature edge smoothing
The key to computing a feature segment via Hermite interpo-
lation is providing properly estimated tangent vectors at the two
endpoints of its corresponding feature edge. Feature edges can
be grouped into maximal polylines, called feature polylines. A
feature polyline either terminates at a non-feature vertex or meets
other feature polylines at a feature vertex where more than two
smooth surfaces intersect (See Fig. 2). Note that no smoothing
is needed when a feature polyline contains only a single feature
edge.
Suppose that a feature polyline contains at least three consecutive vertices $(V_0, V_1, \ldots, V_k)$, $k \ge 2$. For an internal vertex $V_i$, $i = 1, 2, \ldots, k-1$, the tangent direction $\hat{T}_i$ at $V_i$ is set to be the tangent to the circle uniquely determined by the three consecutive points $V_{i-1}$, $V_i$, and $V_{i+1}$. Let $W_0 = V_i - V_{i-1}$ and $W_1 = V_{i+1} - V_i$. Then it is an elementary exercise to show that

$$\hat{T}_i = |W_1| \cdot W_0 + |W_0| \cdot W_1. \qquad (1)$$

The estimated tangent vector $T_i$ at $V_i$ to be used for Hermite interpolation has the same direction as $\hat{T}_i$, but its length is determined in such a way that the resulting Hermite curve is a good approximation to a circular arc if the Hermite data (i.e., the endpoints and their tangents) are sampled from a circle (see details in Section III-C).

For the end vertex $V_0$ of the feature polyline, its tangent vector is set to be the tangent to the circle passing through $V_0$, $V_1$ and $V_2$. The tangent vector at the other end vertex $V_k$ is computed similarly. Given $V_i$, $V_{i+1}$ and their tangent vectors, their cubic Hermite interpolation curve is then uniquely determined. (The expression is given in Section III-C.)
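For illustration, a small sketch of the tangent estimation along a feature polyline (Python/NumPy; function names are ours). Internal vertices use Eq. (1); the two end vertices use the tangent of the circle through the first (respectively last) three vertices, computed here via the circumcenter, which is one straightforward way to realize the construction described above.

```python
import numpy as np

def circle_tangent(A, B, C):
    """Unit tangent at A of the circle through A, B, C, oriented toward B.
    Falls back to the chord direction if the points are (nearly) collinear."""
    a, b = B - A, C - A
    n = np.cross(a, b)
    n2 = np.dot(n, n)
    if n2 < 1e-12:                                   # collinear: tangent is the chord itself
        return a / np.linalg.norm(a)
    center = A + np.cross(np.dot(a, a) * b - np.dot(b, b) * a, n) / (2.0 * n2)
    t = np.cross(n, A - center)                      # in-plane direction perpendicular to the radius
    if np.dot(t, a) < 0:
        t = -t
    return t / np.linalg.norm(t)

def polyline_tangent_dirs(V):
    """Tangent directions along a feature polyline V[0..k] (k >= 2):
    Eq. (1) at internal vertices, circle tangents at the two end vertices."""
    V = np.asarray(V, dtype=float)
    dirs = [circle_tangent(V[0], V[1], V[2])]
    for i in range(1, len(V) - 1):
        W0, W1 = V[i] - V[i - 1], V[i + 1] - V[i]
        T = np.linalg.norm(W1) * W0 + np.linalg.norm(W0) * W1   # Eq. (1)
        dirs.append(T / np.linalg.norm(T))
    # Negated so the end tangent points forward along the polyline direction.
    dirs.append(-circle_tangent(V[-1], V[-2], V[-3]))
    return np.array(dirs)
```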
B. Finding silhouette triangles

Silhouette triangles are determined by the visibility status of their vertices, which in turn depends on the angles between the shading normal vectors at the vertices and the viewing direction vector. For a non-feature mesh vertex V, its shading normal vector is computed as the angle-weighted average of the normal vectors of the triangles incident to V, as in [18] (see Fig. 8(a)).
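As a concrete reference, a sketch of the angle-weighted vertex normal as we understand the scheme of [18] (each incident face normal weighted by the face's interior angle at the vertex); Python/NumPy, names are ours.

```python
import numpy as np

def angle_weighted_normals(vertices, triangles):
    """Per-vertex shading normals: each incident triangle's unit face normal is
    weighted by its interior angle at that vertex, then the sum is normalized."""
    V, T = np.asarray(vertices, float), np.asarray(triangles)
    normals = np.zeros_like(V)
    for tri in T:
        for k in range(3):
            v, a, b = V[tri[k]], V[tri[(k + 1) % 3]], V[tri[(k + 2) % 3]]
            e1, e2 = a - v, b - v
            face_n = np.cross(e1, e2)                 # same orientation for all three corners
            face_n /= np.linalg.norm(face_n)
            cos_angle = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
            normals[tri[k]] += angle * face_n
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)
```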
A feature vertex V has multiple shading normal vectors, each associated with one of the surfaces incident to V, since V lies at the intersection of multiple smooth surfaces. Specifically, the triangles adjacent to V are divided into several groups, with each group belonging to a smooth surface passing through V (see Fig. 8(b) for the case of two groups). The angle-weighted average normal vector over the triangles in the same group is assigned as a shading normal to V. Therefore, the vertex V has multiple shading normals, each contributed by a group of triangles incident to V. When testing whether a triangle incident to a feature vertex V is a silhouette triangle, we use the normal vector associated with the group containing that triangle.

Fig. 8. (a) One shading normal is associated with a non-feature vertex; (b) Two normals are assigned to a feature vertex lying at the intersection of two smooth surface patches, represented by two groups of triangles.

Fig. 9. Hierarchical clustering structure on the Gaussian sphere.
To find silhouette triangles, we compute the visibility status of all mesh vertices by hierarchical clustering, based on the normals of the vertices in the case of parallel projection, or on the normals and positions of the vertices in the case of perspective projection. For parallel projection, we follow the method in [2]. Specifically, let D be the constant viewing direction vector and let N be the normal of the vertex V. The vertex V is visible if $N \cdot D \ge 0$, and invisible otherwise. The normalized normal vectors N of all mesh vertices are mapped to points on the Gaussian sphere, which is subdivided into hierarchical cells along longitudes and latitudes. Each cell is a spherical convex region with four corner points, and each parent cell has four child cells, as shown in Fig. 9. The cells induce a hierarchical clustering of mesh vertices via their normal vectors.
For a cluster C of mesh vertices, if all the four corners of its
containing cell are visible (or invisible), then all the vertices in
C are visible (or invisible). If the four corners of the cell have
different visibility status, each child of the cell will be checked;
this is done recursively until the visibility of each mesh vertex is
resolved.
For perspective projection, the basic idea is the same, but a 3D Gaussian sphere in 4D space is used instead, since the viewing direction is no longer a constant vector. Let $V = (V_x, V_y, V_z)$ be a mesh vertex, let $N = (N_x, N_y, N_z)$ be the unit normal vector of the vertex V, and let $E = (E_x, E_y, E_z)$ be the viewpoint. The plane passing through V and having N as its normal vector has the equation $F(X; V) \equiv N \cdot (X - V) = 0$. Clearly, the vertex V is visible if and only if the viewpoint E is above the plane $F(X; V) = 0$, that is, $F(E; V) \ge 0$, which is the condition we use for determining the visibility of all mesh vertices V. Note that

$$F(E; V) = E_x N_x + E_y N_y + E_z N_z - (N_x V_x + N_y V_y + N_z V_z) \equiv E' \cdot V',$$

where $E' = (E_x, E_y, E_z, 1)$ and $V' = (N_x, N_y, N_z, -(N_x V_x + N_y V_y + N_z V_z))$ are 4D vectors. Therefore we normalize the vectors $V'$ and map them onto the 3D Gaussian sphere $S^3$ in 4D space, and subdivide $S^3$ for hierarchical clustering. In this case, each cell has eight corner points and eight child cells. This hierarchical structure is then used for fast visibility determination for all mesh vertices. The visibility check for a point $V'$ on $S^3$ is done simply by the sign of the inner product $E' \cdot V'$.
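A sketch of the 4D lifting and sign test described above (Python/NumPy; names are ours). The per-vertex test is exact; the clustering on $S^3$ mentioned in the text is only indicated in a comment.

```python
import numpy as np

def lift_vertex(V, N):
    """Map a vertex/normal pair to the 4D vector V' = (Nx, Ny, Nz, -(N . V))."""
    return np.array([N[0], N[1], N[2], -np.dot(N, V)])

def visible_perspective(V, N, E):
    """Visibility test F(E; V) = N . (E - V) >= 0, written as E' . V' >= 0
    with E' = (Ex, Ey, Ez, 1)."""
    E_lift = np.append(E, 1.0)
    return np.dot(E_lift, lift_vertex(V, N)) >= 0.0

# The lifted vectors V' can be normalized and clustered on the sphere S^3 in 4D,
# so that whole clusters are accepted or rejected against E' at once.
```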
C. Computing silhouette bridges

By definition, there are two sides labeled $(+, -)$ in a silhouette triangle, such as $V_1V_2$ and $V_1V_3$ in the silhouette triangle $V_1V_2V_3$ in Fig. 6. Below we use the side $V_1V_2$ to explain the procedure for computing a silhouette bridge by Hermite interpolation.
First suppose that $V_1V_2$ is not a feature edge. Let $N_1$ and $N_2$ be the estimated normal vectors at $V_1$ and $V_2$. To compute tangent vectors at $V_1$ and $V_2$, which are needed for Hermite interpolation, we take the tangent direction $\hat{T}_1$ at the vertex $V_1$ to be the projection of $\overrightarrow{V_1V_2}$ onto the "tangent" plane at $V_1$, which is the plane passing through $V_1$ and having $N_1$ as its normal vector. The vector $\hat{T}_1$ is then given by

$$\hat{T}_1 = (V_2 - V_1) - [(V_2 - V_1) \cdot N_1] \cdot N_1. \qquad (2)$$

Similarly, the tangent direction $\hat{T}_2$ at $V_2$ is

$$\hat{T}_2 = (V_2 - V_1) - [(V_2 - V_1) \cdot N_2] \cdot N_2. \qquad (3)$$

The tangent vectors $T_1$ and $T_2$ we use in Hermite interpolation have the same directions as $\hat{T}_1$ and $\hat{T}_2$. Obviously, different lengths of the end tangent vectors affect the shape of the resulting cubic Hermite interpolation curve. We choose the lengths of $T_1$ and $T_2$ in such a way that if the end data (i.e., $V_i$ and $T_i$, $i = 1, 2$) are extracted from a circular arc, then the resulting cubic Hermite curve gives a good approximation of the circular arc. It can be shown that this requirement is met if the lengths of $T_1$ and $T_2$ are set to

$$L_1 = \frac{2|V_2 - V_1|}{1 + \cos\theta_1}, \qquad L_2 = \frac{2|V_2 - V_1|}{1 + \cos\theta_2}, \qquad (4)$$

where $\theta_i$ is the angle between $\overrightarrow{V_1V_2}$ and $\hat{T}_i$, $i = 1, 2$. A circular arc is used here as the target shape for approximation because it has constant curvature; hence, the interpolating cubic curve is expected to have small curvature variation when the endpoint data do not deviate much from a circular arc.

With all the end data determined, the silhouette bridge $S_1(u)$ over the side $V_1V_2$ is given by the cubic Hermite curve

$$S_1(u) = (2V_1 - 2V_2 + T_1 + T_2)u^3 - (3V_1 - 3V_2 + 2T_1 + T_2)u^2 + T_1 u + V_1, \quad u \in [0, 1]. \qquad (5)$$
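A compact sketch of the bridge construction over a non-feature edge $V_1V_2$, following Eqs. (2)-(5) (Python/NumPy; the function names are ours).

```python
import numpy as np

def hermite(P0, P1, T0, T1, u):
    """Cubic Hermite curve of Eq. (5) for endpoints P0, P1 and end tangents T0, T1."""
    return ((2 * P0 - 2 * P1 + T0 + T1) * u**3
            - (3 * P0 - 3 * P1 + 2 * T0 + T1) * u**2
            + T0 * u + P0)

def silhouette_bridge(V1, V2, N1, N2):
    """Return the end tangents T1, T2 of the bridge over edge V1V2.
    Directions follow Eqs. (2)-(3); lengths follow Eq. (4)."""
    chord = V2 - V1
    T1_hat = chord - np.dot(chord, N1) * N1          # Eq. (2): project chord onto tangent plane at V1
    T2_hat = chord - np.dot(chord, N2) * N2          # Eq. (3)
    T1_hat /= np.linalg.norm(T1_hat)
    T2_hat /= np.linalg.norm(T2_hat)
    c = np.linalg.norm(chord)
    cos1 = np.dot(chord, T1_hat) / c                 # cos(theta_i) between the chord and T_i
    cos2 = np.dot(chord, T2_hat) / c
    T1 = 2 * c / (1 + cos1) * T1_hat                 # Eq. (4)
    T2 = 2 * c / (1 + cos2) * T2_hat
    return T1, T2

# Usage: T1, T2 = silhouette_bridge(V1, V2, N1, N2); point = hermite(V1, V2, T1, T2, 0.5)
```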
D. Computing silhouette segments
A silhouette point is a point on a silhouette bridge whose surface normal vector is perpendicular to the viewing direction. Here we assume that the normal vector along the silhouette bridge $S_1(u)$ over $V_1V_2$ is linearly interpolated from the normal vectors at $V_1$ and $V_2$. Thus, we assign

$$\tilde{N}_1(u) = (1 - u) \cdot N_1 + u \cdot N_2, \quad u \in [0, 1], \qquad (6)$$

as the normal vector at the point $S_1(u)$. First consider the case of parallel projection, with the constant viewing direction vector denoted by D. The silhouette point on the silhouette bridge $S_1(u)$ is $S_1(u_0)$, where the parameter value $u_0$ is easily obtained as the solution of the linear equation $D \cdot \tilde{N}_1(u) = 0$. The other silhouette point $S_2(v_0)$ on the silhouette bridge $S_2(v)$ can be computed similarly.
For perspective projection, the viewing direction at the point $S_1(u)$ is $D(u) = S_1(u) - E$, which is no longer a constant vector. That means we would have to solve the quartic equation $D(u) \cdot \tilde{N}_1(u) = 0$ to locate the silhouette point $S_1(u_0)$, since $S_1(u)$ is cubic. To obtain a simple approximate solution, we use

$$\hat{D}(u) = (1 - u) \cdot D_1 + u \cdot D_2, \quad u \in [0, 1], \qquad (7)$$

where $D_1$ and $D_2$ are the viewing directions at $V_1$ and $V_2$, to approximate the true viewing direction D(u). This approximation $\hat{D}(u)$ makes sense because it agrees with the true viewing direction D(u) at the endpoints $V_1$ and $V_2$, and thus gives the correct visibility status at $V_1$ and $V_2$. With this approximation, we only need to solve the quadratic equation $h(u) \equiv \hat{D}(u) \cdot \tilde{N}_1(u) = 0$ to get $u_0$ and thereby determine the silhouette point $S_1(u_0)$. Note that the quadratic equation $h(u) = 0$ has a unique solution in [0, 1], since $h(u)$ has opposite signs at $u = 0$ and $u = 1$.
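A sketch of the silhouette-point search on one bridge for both projection types (Python/NumPy; names are ours). The quadratic coefficients below are obtained by expanding h(u) from Eqs. (6)-(7).

```python
import numpy as np

def silhouette_param_parallel(N1, N2, D):
    """Solve D . ((1-u) N1 + u N2) = 0 for u."""
    a, b = np.dot(D, N1), np.dot(D, N2)
    return a / (a - b)                       # well defined when the endpoints differ in visibility

def silhouette_param_perspective(V1, V2, N1, N2, E):
    """Solve h(u) = ((1-u) D1 + u D2) . ((1-u) N1 + u N2) = 0 on [0, 1],
    with D_i = V_i - E the approximated viewing directions at the endpoints."""
    D1, D2 = V1 - E, V2 - E
    c00, c01 = np.dot(D1, N1), np.dot(D1, N2)
    c10, c11 = np.dot(D2, N1), np.dot(D2, N2)
    A = c00 - c01 - c10 + c11                # h(u) = A u^2 + B u + C
    B = -2 * c00 + c01 + c10
    C = c00
    if abs(A) < 1e-12:                       # degenerates to a linear equation
        return -C / B
    roots = np.roots([A, B, C])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= 1)][0]   # unique root in [0, 1] since h changes sign
```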
Once the two silhouette points $S_1(u_0)$ and $S_2(v_0)$ are available, together with their normal vectors $\tilde{N}_1(u_0)$ and $\tilde{N}_2(v_0)$, we compute the silhouette segment using Hermite interpolation in the same way as when computing silhouette bridges. Thus we get the silhouette segment S(t) associated with the silhouette triangle $V_1V_2V_3$ (see Fig. 6).
Special consideration is needed when a feature edge is one of the sides of a silhouette triangle. There are two cases: (a) the feature edge is a side whose endpoints have different visibility status (e.g., the side $V_1V_2$ or $V_1V_3$ in Fig. 6); or (b) the feature edge is a side whose endpoints have the same visibility status (e.g., the side $V_2V_3$ in Fig. 6). In case (a) the silhouette bridge over the feature edge is simply its corresponding pre-computed feature segment. (Fig. 10 illustrates an example of case (a), where a feature edge is a side of a silhouette triangle.)
In case (b) no special treatment is actually needed. In this case, assuming that the feature edge is the side $V_2V_3$ in Fig. 6, if a changing viewing direction makes the normal vectors at the two endpoints of $V_2V_3$ get closer and closer to being perpendicular to the viewing direction, then the silhouette points $S_1(u_0)$ and $S_2(v_0)$ on the two silhouette bridges will approach $V_2$ and $V_3$, respectively, which ensures that the reconstructed smooth silhouette segment S(t) approaches the feature segment associated with the feature edge $V_2V_3$. This is important for the visual coherence of silhouette curves with respect to a moving viewpoint when feature segments are involved, and it is achieved by smoothing feature edges using the same curve interpolation scheme as for silhouettes.

Fig. 10. An example of a silhouette triangle containing a feature edge (the close-up is a side view). The red curves are smooth feature segments and the blue one is a silhouette segment.
E. Local re-meshing
In this section we shall discuss how to re-mesh silhouette
regions and how to adaptively sample silhouette segments, and
then address the issue of silhouette blocking when concave
silhouette segments are involved.
Remeshing: To render a mesh surface with smooth feature
segments and silhouette segments that we have constructed via
Hermite interpolation, these curve segments need to be sampled
for local re-meshing. Fig. 11 shows the different cases in which re-meshing should be performed, and Fig. 12 shows how the re-meshing should be done when feature curves are involved. Note that when the silhouette curve is concave, it may be blocked by a neighboring triangle of the original mesh. We will address this issue later in this section.
Adaptive sampling: The reconstructed smooth silhouette curve
will be rendered as a polygon connecting sample points on the
silhouette. To ensure that the silhouette curve appears smooth after sampling while keeping the number of sample points low, we determine the number of sample points on the silhouette segment adaptively, based on the curvature and the projected length of the silhouette segment. Since the silhouette segment is intended as an approximation to a circular arc (cf. Section III-C), using the
simple geometry of a circular arc, we derive an estimate on the
number of sample points as follows.
Fig. 13. Adaptive sampling of a silhouette curve segment.
SilhouetteSegment ( )S t SilhouetteSegment ( )S t
SilhouetteSegment ( )S t SilhouetteSegment ( )S t
(a) (b)
Fig. 11. Local remeshing. (a) (left) The silhouette triangle is V
1
V
2
V
3
and the silhouette segment S(t) is further back than the edge V
1
V
2
. To attain local
convexity, the sample points on the segment S(t) and the edge V
1
V
2
are connected to form local triangulation; (a)(right) The silhouette triangle is V
2
V
3
V
4
and S(t) is in front of the edge V
2
V
3
. In this case, again, to attain local convexity, the sample points on the segment S(t) are connected to the vertex V
4
;
(b) The two cases shown here involve concave silhouette curve segments, and are processed similarly; local vertex perturbation is needed in these cases to
rectify the blocking problem.
FeatureSegment
V
1
FeatureSegment
V
2
V
3
V
4
V
1
V
2
V
3
V
4
V
5
C
(a) (b)
Fig. 12. Re-meshing when feature segments are involved. (a) One feature segment: the sample points on the feature segment are connected to the opposite
vertex; (b) Two feature segments: the sample points on the two feature segments are connected to the center C of the triangle V
1
V
2
V
3
.
Let A and B denote the endpoints of the silhouette segment S(t), $t \in [0, 1]$ (refer to Fig. 13). Let h be the height of the silhouette segment in the projection plane, approximated by the distance from the middle point S(1/2) of the silhouette segment to the middle point of the line segment AB. Let $r = \sin(\alpha/2) = \sqrt{(1 - \cos\alpha)/2}$, where $\alpha$ is the central angle of the arc subtended by the chord AB, and $\cos\alpha$ can be approximated by the inner product of the two unit normal vectors at the endpoints A and B. Let e be the allowed tolerance, in pixels. Then the bound $n_s$ on the number of sample points is given by

$$n_s = \frac{1}{2} \cdot \sqrt{h \cdot r / e}. \qquad (8)$$

The interpretation of this bound $n_s$ is that if the number of sample points is at least $n_s$, then the polygon connecting the sample points approximates the smooth silhouette curve within the error e. Here the value of e should be in the range of 0.5 to 2 pixels to ensure a smooth appearance of the rendered silhouette.
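A sketch of the sample-count estimate (Python/NumPy; names are ours). The grouping inside the square root follows our reconstruction of Eq. (8), and the projection function passed in is an assumed helper that maps a 3D point to pixel coordinates.

```python
import numpy as np

def sample_count(S_half, A, B, NA, NB, project, e_pixels=1.0):
    """Estimate the number of samples for one silhouette segment.
    S_half: S(1/2) on the segment; A, B: segment endpoints; NA, NB: unit normals
    at A and B; project: maps a 3D point to 2D pixel coordinates; e_pixels: tolerance."""
    # Projected height h: distance from the projected curve midpoint to the
    # projected midpoint of the chord AB.
    h = np.linalg.norm(project(S_half) - project(0.5 * (A + B)))
    # r = sin(alpha/2), with cos(alpha) approximated by the normals' inner product.
    cos_alpha = np.clip(np.dot(NA, NB), -1.0, 1.0)
    r = np.sqrt((1.0 - cos_alpha) / 2.0)
    n_s = 0.5 * np.sqrt(h * r / e_pixels)            # Eq. (8), as reconstructed
    return max(2, int(np.ceil(n_s)))                 # at least the two endpoints
```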
Local perturbation: In a saddle-shaped region the reconstructed silhouette curve may be blocked by its neighboring triangles, so that the original polygonal silhouette still persists, as already seen in Fig. 11(b). Fig. 14(a) provides a convincing example, where the inner circle of the torus does not appear to have been smoothed because of this blocking problem. This occurs because the reconstructed silhouette curve moves into the mesh surface and so may be blocked by adjacent mesh triangles, an issue that has not been addressed by previous view-dependent local refinement methods (e.g., [20] and [4]). Below we propose a simple and effective perturbation technique to rectify this problem.
First we need to detect concave silhouette segments. Let $\tilde{N}_1$ and $\tilde{N}_2$ be the surface normal vectors at two consecutive silhouette points $S_1$ and $S_2$, respectively, and denote $F = S_2 - S_1$. Then the silhouette segment connecting $S_1$ and $S_2$ is concave if $\tilde{N}_1 \cdot F > 0$ and $\tilde{N}_2 \cdot F < 0$, i.e., the angle between $\tilde{N}_1$ and F is less than $\pi/2$ and the angle between $\tilde{N}_2$ and F is greater than $\pi/2$. After all concave silhouette segments are found, the following perturbation technique is applied.
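The concavity test amounts to two dot products; a minimal sketch (names are ours):

```python
import numpy as np

def is_concave_segment(S1, S2, N1, N2):
    """A segment from S1 to S2 with end normals N1, N2 is flagged concave when
    N1 . F > 0 and N2 . F < 0, where F = S2 - S1."""
    F = S2 - S1
    return np.dot(N1, F) > 0.0 and np.dot(N2, F) < 0.0
```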
The basic idea is to perturb the positions of the neighboring mesh vertices of a concave silhouette segment so that they no longer block the silhouette segment. Let T denote the silhouette triangle containing the concave silhouette segment under consideration. By neighboring mesh vertices we mean the vertices of the triangle T and the vertices connected to T through a mesh edge. For example, in Fig. 15, as far as the silhouette triangle $V_1V_2V_3$ is concerned, all $V_i$, $i = 1, 2, 3, 4, 5$, are neighboring vertices, so they will be subject to perturbation.

For simplicity of discussion, we will use the vertex $V_1$ to explain the perturbation procedure in the case of parallel projection; the idea is similar for perspective projection. Refer to Fig. 15. Suppose that $S_i(t)$, $i = 1, 2, 3$, $t \in [0, 1]$, are all the concave silhouette segments that involve $V_1$ as a neighboring point. Take the middle points $M_i = S_i(1/2)$ of the silhouette segments $S_i(t)$, and let $N_i$ denote the interpolated normal vector at $M_i$. Then each silhouette segment $S_i(t)$ is associated with a plane $P_i$ that passes through $M_i$ and has $N_i$ as its normal vector. It can be shown that the plane $P_i$ contains the viewing direction. The upper side of the plane $P_i$ is defined by the inequality $N_i \cdot (X - M_i) > 0$. Clearly, the silhouette segment $S_i(t)$, $t \in [0, 1]$, lies entirely above the plane $P_i$.

Now we determine a displacement vector for $V_1$ such that, after perturbation, $V_1$ lies below each plane $P_i$, $i = 1, 2, 3$. The direction of the displacement is derived from the average $R = N_1 + N_2 + N_3$ of the normal vectors $N_i$, whose normalized vector is denoted $\hat{R} = R/\|R\|$. The required length of the displacement is

$$\Delta = \max_{i=1,2,3} \frac{(V_1 - M_i) \cdot N_i}{\hat{R} \cdot N_i}.$$

Hence, $V_1$ is displaced to

$$V_1' = V_1 - \Delta \cdot \hat{R}.$$

Fig. 14. An example of silhouette smoothing in a saddle-shaped region. (a) The result without fixing by vertex perturbation; (b) The corresponding mesh model of (a) and a close-up view, where the red curve is the smoothed silhouette blocked by the original mesh; (c) The result after fixing by vertex perturbation.

Fig. 15. Vertex perturbation to remove silhouette blocking. The dashed blue curve is the smoothed silhouette, which is blocked by three adjacent triangles. By lowering the vertex $V_1$ to $V_1'$, the smoothed silhouette curve is displayed properly.
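A sketch of the displacement computation for one neighboring vertex (Python/NumPy; names are ours). The sign convention follows our reading above: $\Delta$ is taken non-negative and the vertex is moved by $-\Delta \cdot \hat{R}$, i.e., lowered below every plane $P_i$.

```python
import numpy as np

def perturb_vertex(V, midpoints, normals):
    """Move vertex V below every plane P_i through M_i with unit normal N_i.
    midpoints[i] = M_i = S_i(1/2); normals[i] = interpolated normal at M_i."""
    M = np.asarray(midpoints, dtype=float)
    N = np.asarray(normals, dtype=float)
    R = N.sum(axis=0)                                # direction derived from the averaged normals
    R_hat = R / np.linalg.norm(R)
    # Delta = max_i ((V - M_i) . N_i) / (R_hat . N_i); moving V by -Delta * R_hat
    # makes N_i . (V' - M_i) <= 0 for every i (assuming R_hat . N_i > 0).
    num = np.einsum('ij,ij->i', V - M, N)
    den = N @ R_hat
    delta = np.max(num / den)
    if delta <= 0.0:                                 # already below all planes: no perturbation
        return V
    return V - delta * R_hat
```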
Similarly, we perturb all the other neighboring vertices of concave silhouette segments. Since, for each silhouette segment $S_i(t)$, all vertices of its neighboring triangles are below the plane $P_i$, these triangles will not block $S_i(t)$ with respect to the viewpoint, which is on the plane $P_i$.
The above simple scheme for blocking removal has proven very
effective. Fig. 14 shows a coarse mesh with a close-up view to
reveal the blocking phenomenon. Fig. 14(c) shows the unblocked
smooth silhouette after applying the above vertex perturbation.
The example shown in Fig. 22 also demonstrates the effectiveness of this perturbation scheme.
In the above only those mesh triangles within a small neigh-
borhood of concave silhouette segments are considered for per-
turbation to avoid silhouette blocking. While it can be imagined that triangles slightly beyond this neighborhood may also block the concave silhouette segments, we do not consider this extension, in order to minimize implementation complexity. Moreover, such potential blocking by relatively distant neighboring triangles must occur very rarely, since it has not been visually discernible in our extensive experiments.
IV. SPEEDUP WITH GPU
To speed up the smoothing computation and the rendering of the refined mesh, we use the GPU as well as the CPU for the computation described in Section III. We balance the loads of the GPU and the CPU as shown in Fig. 16 to parallelize the computation as much as possible.

After the CPU computes the silhouette segments and completes re-meshing, the GPU is fed with the mesh vertices accompanied by re-meshing information. Vertex perturbation and the locations of sample points on silhouette segments are computed on the GPU. After the vertices are filtered by the vertex processors, the positions of all vertices are transformed to homogeneous clip space by the GPU vertex program, and all triangles are then rendered with Phong shading by the GPU fragment processors.
Fig. 16. The graphics pipeline.
V. EXPERIMENTAL RESULTS
In this section we shall present experimental results of testing
our method, and compare our method with the methods in [20]
and [19] in terms of rendering quality and efficiency.
Fig. 1, 17, 18, 19, 22, and 23 show examples generated
by our method. The fine meshes shown there are used only
as reference for comparison to the re-meshed coarse meshes
with reconstructed smooth silhouette. For the textured mesh in
Fig. 19, linear interpolation of texture coordinates is used to
assign texture coordinates to the sample points on silhouette
curves. Fig. 22 shows how the silhouette blocking problem in
a saddle-shaped region (cf. Section III-E) is fixed by our vertex
perturbation technique. Fig. 23 provides an example showing that our method also works well for meshes with non-uniformly sampled data points. Real-time rendering of these models is demonstrated in the submitted video.
Fig. 20 and 21 show two mesh surfaces with prominent feature
edges. Our method constructs satisfactory smooth feature curves
from the polygonal feature edges. Although a similar treatment of feature curve smoothing has been studied before [4], we have shown here how it can be integrated with our silhouette smoothing technique in a unified framework (e.g., see the mesh model of a pawn in Fig. 10).
Fig. 22 compares our method with the method in [20]. The re-
sult by the method in [20] is not as good, because the interpolated
silhouette curve used there corresponds to some mesh edges and
always passes through mesh vertices, leading to visual artifacts. In
contrast, the method of the present paper generates a more faithful smooth silhouette that is visually coherent under a moving viewpoint or a moving object, as can be seen in the accompanying video.
Compared with the PN-triangle method in [19], besides mem-
ory saving due to local re-meshing, our method supports smooth
rendering of silhouette with arbitrary “zoom-in”, an operation
used frequently in applications. Fig. 24(b) shows a refined mesh after three levels of subdivision by the PN-triangle method. The silhouette produced by the PN-triangle method looks smooth in the current view, but its polygonal appearance is revealed when zooming in for a close-up view. To keep the silhouettes smooth, the whole model would need to be subdivided again, and the number of triangle faces would increase significantly, since the PN-triangle method operates in a viewpoint-independent manner, insensitive to the changing viewing distance and the changing location of silhouettes.
In contrast, the number of sample points is determined adap-
tively in our method according to the curvature and the projected
height of silhouette segments. Therefore, our method ensures that
the silhouette curve always looks smooth via simple run-time
adaptive local re-meshing, when the model is scaled larger during
zoom-in (see Fig. 24(c)).
As shown in Fig. 24, although the PN-triangle method uses global patch reconstruction, the quality of its interior shading is the same as that of the shaded mesh refined by our method or of the shaded coarse mesh. That is because the Phong shading model used in our experiments is based on normal interpolation, and the normal computed by the Phong shading model at interior points of a triangle of the coarse mesh is almost the same as the normal produced by the PN-triangle method at those points. Hence, the interior shading quality is improved little by the PN-triangle method despite the increased number of triangles it uses for rendering.
We tested our silhouette smoothing method on a PC workstation with one 3.0 GHz CPU, 512 MB of memory and an NVIDIA GeForce 7800 GT graphics card. The window size was set to 1280 by 800. The test objects covered about 40%-60% of the window.
Table I compares the frame rates obtained with the method of [20], the PN-triangle method [19] and our method. The column "Direct Drawing" is the control, which shows the frame rates when drawing the objects without silhouette smoothing. All the methods use vertex arrays to obtain the best frame rates possible. For the PN-triangle method, the frame rate depends on the subdivision level; in this test, the subdivision level varies from 2 to 3 so that the models' silhouettes look smooth enough to give comparable visual quality. Since, unlike the PN-triangle method, the number of triangles rendered by our method in each frame varies with the viewing parameters, only typical triangle counts for our method are listed in Table I for reference.
Since the three methods are designed for enhancing visual
quality of rendered coarse meshes, in the comparisons we use only
coarse meshes with no more than 600 triangles except the last one,
the “hippo” model. Note that rendering of such simple models
using modern GPUs is very fast. The frame rate for rendering a single model of that size is usually in the range of thousands of fps (frames per second), which is easily subject to noise caused by timing inaccuracy, differences in GPU design and display driver tweaks. Therefore, 60 copies of the same model were drawn to keep the rendering frame rates in the normal range of less than 300 fps, except for the "hippo" model, which was drawn only once.
As shown in Table 1, our method is typically 2-3 times faster
than the method in [20] for different models. Compared with the PN-triangle method in [19], our method also shows a 30% to over 200% speedup for all tested models. From our experiments
Fig. 17. Bunny: (a) shaded fine mesh (30,000 triangles); (b) coarse mesh (500 triangles); (c) shaded coarse mesh; (d) re-meshing by our method (873
triangles); (e) Phong shading of the mesh in (d).
Fig. 18. Fish: (a) shaded fine mesh (5000 triangles); (b) coarse mesh (500 triangles); (c) shaded coarse mesh; (d) re-meshing by our method (1018 triangles);
(e) Phong shading of the mesh in (d).
Fig. 19. Texture head: (a) shaded fine mesh (43,151 triangles); (b) coarse mesh (4,778 triangles); (c) shaded coarse mesh; (d) re-meshing by our method
(6858 triangles); (e) Phong shading of the mesh in (d).
Fig. 20. Cone mesh: (a) coarse mesh (70 triangles); (b) shaded coarse model; (c) re-meshing by our method (576 triangles); (d) Phong shading of the mesh
in (c); (e) another view.
Fig. 21. Fandisk: (a) fine mesh (1,000 triangles); (b) coarse mesh (128 triangles); (c) shaded coarse mesh; (d) re-meshing by our method (1899 triangles);
(e) Phong shading of the mesh in (d).
Fig. 22. Knot: (a) shaded fine mesh (15,000 triangles); (b) shaded coarse mesh (480 triangles); (c) result by the method in [20]; (d) shaded refined mesh by
our method without local perturbation (843 triangles); and (e) shaded refined mesh by our method with local perturbation.
Fig. 23. Hippo, an example of smoothing the silhouettes of a non-uniformly sampled model: (a) shaded refined model; (b) the refined mesh model by our
method and a close-up view.
Fig. 24. Bear: (a) A coarse mesh (500 triangles); (b) the result by the PN-triangle method and a zoom-in view, showing non-smoothness; (c) the result by
our method and the zoom-in view of the same part.
TABLE I
Performance comparison of our method with the other methods. Wang's method refers to [20]; the PN-triangle method refers to [19]. Triangle counts are listed for the original mesh, the PN-triangle method (subdivision level in parentheses), and our method (typical count). Frame rates are in fps.

Mesh | Original tri. count | PN method tri. count (LOD) | Our method tri. count | Direct Drawing | Wang's method | PN method | Our method (Ortho) | Our method (Perspective)
Sphere | 200 | 3200 (3) | 332 | 573 | 60 | 115 | 352 | 347
Knot | 480 | 7680 (3) | 884 | 605 | 56 | 53 | 152 | 147
Bunny | 500 | 8000 (3) | 800 | 442 | 53 | 50 | 75 | 68
Fish | 500 | 4500 (2) | 886 | 620 | 54 | 77 | 114 | 114
Bear | 500 | 8000 (3) | 893 | 633 | 54 | 52 | 117 | 112
Head | 500 | 8000 (3) | 904 | 448 | 52 | 54 | 78 | 75
Venus | 500 | 4500 (2) | 862 | 655 | 56 | 76 | 155 | 153
Cat | 500 | 8000 (3) | 842 | 372 | 53 | 52 | 74 | 74
Cow | 500 | 8000 (3) | 983 | 560 | 52 | 51 | 82 | 80
Torus | 576 | 5184 (2) | 960 | 592 | 57 | 67 | 172 | 167
Hippo (rendered once) | 43288 | 389592 (2) | 51842 | 462 | 43 | 60 | 99 | 96
it has been observed that extracting silhouette triangles and performing local perturbation each take only a small part of the overall computation and rendering time (less than 4% each), while computing the smoothed silhouettes and sampling points on them take the major portion of the total time. Furthermore, with our scheme we have found that the tasks for the CPU and those for the GPU take about equal amounts of time. It is, however, possible that a better scheme exists that is more suitable for GPU computation and would further speed up the method.
VI. CONCLUSIONS
We have presented a method for silhouette smoothing of coarse
meshes through 3D curve interpolation and local re-meshing
based on the notion of silhouette triangles. Both smooth silhouette
curves and feature curves are computed in a unified framework.
We have demonstrated the effectiveness of the method using a
number of examples. This method is a promising alternative to
LOD-based view-dependent methods, especially when a fine mesh
or an LOD model is not available.
VII. ACKNOWLEDGEMENT
This work is supported by the National Natural Science Foundation of China (60473103, 60473127) and the National Key Basic Research and Development Plan (2006CB303102, 2004CB318006).
REFERENCES
[1] D.I. Azuma, D. N. Wood, B. Curless, T. Duchamp, D. H. Salesin and
W. Stuetzle, View-dependent refinement of multiresolution meshes with
subdivision connectivity, Proceedings of AFRIGRAPH 2003, 2003, pp.
69-78.
[2] F. Benichou and G. Elber, Output sensitive extraction of silhouettes from
polygonal geometry, Proceedings of 7th Pacific Graphics Conference,
1999, pp. 60-69.
[3] T. Boubekeur, P. Reuter, C. Schlick, Scalar tagged PN triangles, Pro-
ceedings of Eurographics 2005, pp. 17-20.
[4] C. Dyken and M. Reimers, Real-time linear silhouette enhancement,
Proceedings of Mathematical Methods for Curves and Surfaces 2004,
July 2004, Tromso, Norway, pp. 135-143.
[5] J. D. Foley, A. van Dam, S. Feiner and J. Hughes, Computer Graphics:
Principles and Practice, Addison-Wesley, 1990.
[6] P. Heckbert and M. Garland, Survey of polygonal surface simplification
algorithms, SIGGRAPH’97 Course Notes #25, 1997.
[7] A. Hertzmann and D. Zorin, Illustrating smooth surfaces, Proceedings of SIGGRAPH 2000, pp. 517-526.
[8] H. Hoppe, View-dependent refinement of progressive meshes, Proceed-
ings of SIGGRAPH’97, pp. 189-198.
[9] H. Hoppe, Smooth view-dependent level-of-detail control and its appli-
cation to terrain rendering, Proceedings of Visualization’98, pp. 35-42.
[10] D. Kirsanov, P.V. Sander and S.J. Gortler, Simple silhouettes for complex surfaces, Proceedings of Eurographics Symposium on Geometry Processing, 2003, pp. 102-106.
[11] J.J. Koenderink, What does the occluding contour tell us about solid
shape, Perception, Vol. 13, 1984, pp. 321-330.
[12] S. Kumar, D. Manocha, B. Garrett and M. Lin, Hierarchical back-face
culling, Proceedings of the 7-th Eurographics Workshop on Rendering,
1996, pp. 231-240.
[13] D. P. Luebke and C. Erikson, View-dependent simplification of arbitrary
polygonal environments, Proceedings of SIGGRAPH’97, pp. 199-208.
[14] Matt Olson and Hao Zhang, Silhouette Extraction in Hough Space,
Proceedings of Eurographics 2006, pp. 273-282.
[15] R. Pajarola, FastMesh: efficient view-dependent meshing, Proceedings
Pacific Graphics 2001, pp. 22-30.
[16] M. Pop, C. Duncan, G. Barequet, M. Goodrich, W. Huang, S. Kumar:
Efficient perspective-accurate silhouette computation and applications,
in Proceedings of Annual Symp. on Computational Geometry, 2001, pp.
60-68.
[17] P. V. Sander, X. Gu, S. J. Gortler, H. Hoppe and J. Snyder: Silhouette
clipping. Proceedings of SIGGRAPH 2000, pp. 327-334.
[18] G. Thürmer and C.A. Wüthrich, Computing vertex normals from polygonal facets, Journal of Graphics Tools, 1998, vol. 3, no. 1, pp. 43-46.
[19] A. Vlachos, J. Peters, C. Boyd, J. Mitchell, Curved PN triangles,
Proceedings of I3D’01, pp. 159-166.
[20] B. Wang, W. Wang, J. Wu and J. Sun, Silhouette smoothing by
boundary curve interpolation, Proceedings of CAD/Graphics’03, Oct.
2003, Macau, pp. 197-202.
[21] J. Xia and A. Varshney, Dynamic view dependent simplification for
polygonal models, Proceedings of Visualization’96, 1996, pp. 327-334.
[22] H. Xu, M. X. Nguyen, X. Yuan, B. Chen, Interactive silhouette rendering
for point-based models, Proceedings of Eurographics Symposium on
Point-Based Graphics’04, pp. 13-18.
Lu Wang is currently pursuing the Ph.D degree in
the School of Computer Science and Technology,
Shandong University, Jinan, China. She received her
B.S. degree in Department of Computer Science
from Shandong Normal University, Jinan, China,
in 2003. Her research interests include computer
graphics, real-time rendering, non-photo realistic
rendering, and geometric modeling.
Changhe Tu received his B.Sc., M.Eng. and Ph.D.
in Shandong University in 1990, 1993 and 2003
respectively. He is now a professor in the School
of Computer Science and Technology at Shandong
University. His research interests include computer
graphics, geometric modeling and processing, non-
photorealistic rendering and virtual reality.
Wenping Wang is Associate Pro-
fessor of Computer Science at University of Hong
Kong (HKU). His research covers computer graph-
ics, geometric computing and visualization, and has
published over 100 papers in these fields. He got
his B.Sc. (1983) and M.Eng. (1986) at Shandong
University, and Ph.D. (1992) at University of Al-
berta, all in computer science. He is associate editor
of the Springer journal Computer Aided Geometric
Design, and has been program chair of several inter-
national conferences, including Geometric Modeling
and Processing (GMP 2000), Pacific Graphics (PG 2000 and PG 2003),
ACM Symposium on Virtual Reality Software and Technology (VRST 2001),
ACM Symposium on Physical and Solid Modeling (SPM 2006). He received
the Teaching Excellence Award of The Department of Computer Science at
HKU in 2006 and the HKU Research Output Prize in 2007.
Xiangxu Meng received his B.Sc and M.Eng
Degrees in Dept. of Computer Science of Shan-
dong University in 1982 and 1985 respectively.
He received his Ph.D. degree from the Institute
of Computing Technology, Chinese Academy of
Science in 1998. He is a Professor in the School
of Computer Science and Technology at Shandong
University. His current research interests include
Human-Computer interaction, virtual reality, com-
puter graphics, CAD/CAM/CIMS, grid computing,
visualization and scientific computing.
Bin Chan received his B.Eng., M.Phil. and Ph.D.
in the University of Hong Kong in 1995, 1998 and
2005 respectively. He is now a research associate in
the Department of Computer Science at the Univer-
sity of Hong Kong. His research interests include
real-time rendering, virtual reality, visualization and
global illumination.
Dongming Yan is a PhD student of Computer
Science at the University of Hong Kong. He re-
ceived his BSc and Master degrees in Computer
Science from Tsinghua University, China, in 2002
and 2005, respectively. His current research interests
include geometric modeling, computer-aided design
and real-time rendering.
... Clearly, surfaces with rational support functions have rational PN parameterizations, since their inverse Gauss maps Γ are again rational. This is true both for odd rational support functions, where Γ takes the form (27), and for general rational support functions, see [10]. As observed in Section 2.2, this case includes quadratic polynomial surfaces and rational ruled surfaces with polynomial direction vectors. ...
... -For non-developable surface patches p * i we compute the rational support function H i , see Section 2.2, and the inverse Gauss map Γ i = ∇H i , see Eq. (27). H i and Γ i are rational functions of degree 3/2 and 4/4, respectively. ...
... Another possible application of our results is mesh contour smoothing, see [27]. This motivates the further investigation of surfaces with simple contours, such as quadratic polynomial surfaces. ...
Conference Paper
Full-text available
The support function of a free-form-surface is closely related to the implicit equation of the dual surface, and the process of computing both the dual surface and the support function can be seen as dual implicitization. The support function can be used to parameterize a surface by its inverse Gauss map. This map makes it relatively simple to study isophotes (which are simply images of spherical circles) and offset surfaces (which are obtained by adding the offsetting distance to the support function). We present several classes of surfaces which admit a particularly simple computation of the dual surfaces and of the support function. These include quadratic polynomial surfaces, ruled surfaces with direction vectors of low degree and polynomial translational surfaces of bidegree (3,2). In addition, we use a quasi-interpolation scheme for bivariate quadratic splines over criss-cross triangulations in order to formulate a method for approximating the support function. The inverse Gauss maps of the bivariate quadratic spline surfaces are computed and used for approximate isophote computation. The approximation order of the isophote approximation is shown to be 2.
... In order to deal with these problems, our method uses a contour smoothing method [23] which interpolates over contour triangles to generate long, smooth, and coherent silhouette curves with 3D connectivity. Taking into account the screen projection of those curves, we construct brush paths to approximate the smooth shapes of 3D feature lines which generate and receive geometric information through the correspondence between curve points and brush particles. ...
... Our method requires triangle meshes as input. In Ref. [23], a method was proposed to smooth the silhouettes of coarse triangle meshes. We use their method to reconstruct curves in 3D and introduce a simple local remeshing procedure to compute 3D silhouette chains for non-uniform meshes based on contour triangles. ...
... After the two silhouette points S(u_0) and S(v_0) have been found together with their normal vectors N(u_0) and Ñ(v_0), smooth silhouette curves S(t) can be computed using Hermite interpolation as in Eq. (3) [23]. ...
Article
Full-text available
Temporal coherence is one of the central challenges for rendering a stylized line. It is especially difficult for stylized contours of coarse meshes or non-uniformly sampled models, because those contours are polygonal feature edges on the models with no continuous correspondences between frames. We describe a novel and simple technique for constructing a 2D brush path along a 3D contour. We also introduce a 3D parameter propagation and re-parameterization procedure to construct stroke paths along the 2D brush path to draw coherently stylized feature lines with a wide range of styles. Our method runs in real-time for coarse or non-uniformly sampled models, making it suitable for interactive applications needing temporal coherence.
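The Hermite construction referred to in the citation context above (smooth silhouette curves S(t) from two silhouette points and their normals) can be sketched as follows. This is a hedged illustration, not the exact Eq. (3) of [23]: the end tangents are taken as the chord projected into each point's tangent plane, which is one common choice.

```python
import numpy as np

def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite curve S(t) on [0, 1] with endpoints p0, p1 and end tangents m0, m1."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def silhouette_arc(s0, n0, s1, n1, samples=8):
    """Sample a smooth arc between two silhouette points with normals n0, n1.
    End tangents: the chord projected into each tangent plane (one common choice)."""
    n0 = n0 / np.linalg.norm(n0)
    n1 = n1 / np.linalg.norm(n1)
    chord = s1 - s0
    m0 = chord - (chord @ n0) * n0
    m1 = chord - (chord @ n1) * n1
    return np.array([hermite(s0, s1, m0, m1, t) for t in np.linspace(0.0, 1.0, samples)])

# Hypothetical silhouette points and normals found on two mesh edges.
arc = silhouette_arc(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                     np.array([1.0, 0.0, 0.2]), np.array([0.3, 0.0, 1.0]))
```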
... Figure ii: An example of silhouette smoothing (Wang et al., 2010). We take the boundary edge case in the Zorin and Hertzmann modified subdivision scheme, which uses the weights of 4 points to generate new vertices and triangles, and generalize that to our subdivision algorithm. ...
Article
Our project aims to detect and subdivide the silhouette edges of triangle polygonal meshes. Such silhouetting is useful for character and object prototyping, and may be used to give 3D objects a sketched or cartoony feel. Furthermore, subdividing the entire mesh may be a very expensive operation, and the silhouette provides a good target for subdivision without dramatically increasing polygon count.
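A typical first step for such silhouette subdivision is to collect the view-dependent silhouette edges, i.e. edges shared by one front-facing and one back-facing triangle. The following sketch assumes a hypothetical indexed triangle mesh (vertices, faces) and a view point eye; it only identifies the edges that a subdivision scheme like the one described above would then refine.

```python
import numpy as np

def silhouette_edges(vertices, faces, eye):
    """Edges shared by a front-facing and a back-facing triangle w.r.t. the view point."""
    edge_faces = {}
    front = []
    for fi, f in enumerate(faces):
        p = vertices[list(f)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        front.append(n @ (eye - p[0]) > 0.0)
        for k in range(3):
            e = tuple(sorted((f[k], f[(k + 1) % 3])))
            edge_faces.setdefault(e, []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
```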
... A feasible way to render meshes with high speed as well as visually smooth quality is to define the geometry and the surface normals separately [7,8]. The sharp contour of a triangular mesh can be made smooth by local interpolation of low-order surface patches [8,9] or by tessellating triangles adaptively with local interpolating curves or surfaces [10,11]. These local operations have the advantages of simplicity and low computational cost for contour smoothing, but adjacent quadric or cubic surface patches may not be tangent-plane continuous across the joining edges. ...
Article
Full-text available
Independent interpolation of local surface patches and local normal patches is an efficient way for fast rendering of smooth curved surfaces from rough polyhedral meshes. However, the independently interpolated normals may deviate greatly from the analytical normals of the local interpolating surfaces, and this normal deviation may cause severe rendering defects when the surface is shaded using the interpolated normals. In this paper we propose two novel normal interpolation schemes, along with interpolation of cubic Bézier triangles, for rendering curved surfaces from rough triangular meshes. Firstly, the interpolating normal is computed by a Gregory normal patch attached to each Bézier triangle, based on a new definition of quadratic normal functions along cubic space curves. Secondly, the interpolating normal is obtained by blending side-vertex normal functions along side-vertex parametric curves of the interpolating Bézier surface. The normal patches produced by these two methods not only interpolate the given normals at the vertices or boundaries of a triangle but also match the shape of the local interpolating surface very well. As a result, more realistic shading results are obtained by either of the two new normal interpolation schemes than by the traditional quadratic normal interpolation method for rendering rough triangular meshes.
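For reference, the "traditional quadratic normal interpolation" that this abstract compares against is commonly formulated as in curved PN triangles: mid-edge normal control vectors are obtained by mirroring the averaged end normals about the plane perpendicular to the edge, and the three vertex normals are then blended quadratically. The sketch below assumes that formulation; it is not code from the cited paper.

```python
import numpy as np

def edge_normal(pi, pj, ni, nj):
    """Mid-edge normal control vector: averaged end normals mirrored about the
    plane perpendicular to the edge (curved PN-triangle construction)."""
    d = pj - pi
    v = 2.0 * (d @ (ni + nj)) / (d @ d)
    h = ni + nj - v * d
    return h / np.linalg.norm(h)

def quadratic_normal(P, N, b):
    """Quadratic blend of the three vertex normals N over the triangle with
    vertices P, evaluated at barycentric coordinates b = (b0, b1, b2)."""
    n01 = edge_normal(P[0], P[1], N[0], N[1])
    n12 = edge_normal(P[1], P[2], N[1], N[2])
    n20 = edge_normal(P[2], P[0], N[2], N[0])
    b0, b1, b2 = b
    n = (N[0]*b0*b0 + N[1]*b1*b1 + N[2]*b2*b2
         + 2.0*(n01*b0*b1 + n12*b1*b2 + n20*b2*b0))
    return n / np.linalg.norm(n)
```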
Article
Notebook scribbles, art or technical illustrations: line drawings are a simple way to visually communicate information. Automated line drawings often originate from virtual 3D models, but one cannot trivially experience their three-dimensionality. This paper introduces a novel concept to produce stereo-consistent line drawings of virtual 3D objects. Some contour lines do not only depend on an object's geometry, but also on the position of the observer. To accomplish consistency between multiple view positions, our approach exploits geometrical characteristics of 3D surfaces in object space. Established techniques for stereo-consistent line drawings operate on rendered pixel images. In contrast, our pipeline operates in object space using vector geometry, which yields many advantages: the position of the final viewpoint(s) is flexible within a certain window even after contour generation, e.g. a stereoscopic image pair is only one possible application. Such windows can be concatenated to simulate contours observed from an arbitrary camera path. Various types of popular contour generators can be handled equivalently, occlusions are natively supported, and stylization based on geometric characteristics is also easily possible.
Article
In order to draw smooth shadows and enhance realism in virtual scenes, we present a GPU-based real-time method to realize shadow smoothing for coarse meshes. We compute an accurate silhouette of the mesh by redefining the shadow silhouette. We then apply the shadow volume method to obtain a shadow region with a smooth contour. We utilize the GPU to accelerate both the silhouette computation and the rendering of the shadow. The experimental results show that our method produces better visual results and its rendering speed is superior to existing methods.
Article
We propose an efficient and light-weight solution for rendering smooth shadow boundaries that do not reveal the tessellation of the shadow-casting geometry. Our algorithm reconstructs the smooth contours of the underlying mesh and then extrudes shadow volumes from the smooth silhouettes to render the shadows. For this purpose we propose an improved silhouette reconstruction using the vertex normals of the underlying smooth mesh. Then our method subdivides the silhouette loops until the contours are sufficiently smooth and project to smooth shadow boundaries. This approach decouples the shadow smoothness from the tessellation of the geometry and can be used to maintain equally high shadow quality for multiple LOD levels. It causes only a minimal change to the fill rate, which is the well-known bottleneck of shadow volumes, and hence has only small overhead.
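The extrusion step of such a shadow-volume approach (building side quads from a silhouette polyline away from a point light) can be sketched as below. The silhouette reconstruction and subdivision are the cited paper's contribution and are not reproduced here; silhouette, light and length are hypothetical inputs.

```python
import numpy as np

def extrude_shadow_quads(silhouette, light, length=1000.0):
    """Extrude each segment of a silhouette polyline away from a point light,
    producing one quad of the shadow volume's side surface per segment."""
    quads = []
    for a, b in zip(silhouette[:-1], silhouette[1:]):
        da = a - light; da = da / np.linalg.norm(da)
        db = b - light; db = db / np.linalg.norm(db)
        quads.append((a, b, b + length * db, a + length * da))
    return quads
```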
Article
We present a new method for rendering silhouettes of point-based models. Due to the lack of connectivity information, most existing polygon-based silhouette generation algorithms cannot be applied to point-based models. Our method not only bypasses this connectivity requirement, but also accommodates point-based models with sparse non-uniform sampling and inaccurate or no normal information. Like conventional point-based rendering, we render a model in two passes. The points are rendered as enlarged opaque disks in the first pass to obtain a visibility mask, while being rendered as regular-size splats/disks in the second pass. In this way, edges are automatically depicted at depth discontinuities, usually at the silhouette boundaries. The silhouette color is the disk color used in the first-pass rendering. The silhouette thickness can be controlled by changing the disk size difference between the two passes. We demonstrate our method on different types of point-based models from various sources. The simplicity of our method allows it to be easily integrated with other rendering techniques to cater to many applications. Our method is capable of rendering large scenes of millions of points at interactive rates using modern graphics hardware.
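A flat 2D sketch of the two-pass splatting idea, ignoring depth and visibility entirely (the actual method relies on the depth buffer), might look like this; points_2d, radius and thickness are illustrative parameters.

```python
import numpy as np

def splat_silhouette(points_2d, radius, thickness, size=256):
    """Pass 1 draws enlarged dark disks, pass 2 draws regular bright disks on top;
    pixels covered only by pass 1 remain dark and outline the point set."""
    img = np.ones((size, size))                      # white background
    yy, xx = np.mgrid[0:size, 0:size]
    for r, shade in ((radius + thickness, 0.0),      # pass 1: silhouette colour
                     (radius, 0.7)):                 # pass 2: model colour
        for cx, cy in points_2d:
            img[(xx - cx)**2 + (yy - cy)**2 <= r*r] = shade
    return img
```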
Article
Rapid and precise interpolation of curves has been a key issue in the development of computer numerical control (CNC) systems. With the development of firmware, performing the interpolation in firmware is a new approach. This paper presents a method for developing a rapid and precise interpolator that produces a smooth interpolating curve. An algorithm is first given for constructing a smooth interpolating curve, and a way is put forward to evaluate the constructed curve in firmware; details of the design and implementation of the firmware units that perform the rapid and precise calculations are then introduced. Factors related to computational process control and precision control are also investigated and clarified in the paper.
Article
The method most commonly used to estimate the normal vector at a vertex of a polygonal surface averages the normal vectors of the facets incident to the vertex considered. The vertex normal obtained in this way may vary depending on the tessellation of the polygonal surface since the normal of each incident facet contributes equally to the normal in the vertex. To overcome this drawback, we extend the method so that it also incorporates the geometric contribution of each facet, considering the angle under which a facet is incident to the vertex in question.
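The angle-weighted extension described above can be sketched directly: each incident facet contributes its normal scaled by the angle it subtends at the vertex. The function below assumes a hypothetical indexed triangle mesh (vertices, faces).

```python
import numpy as np

def angle_weighted_vertex_normals(vertices, faces):
    """Vertex normals as angle-weighted averages of incident facet normals, so the
    estimate does not depend on how the surface happens to be triangulated."""
    normals = np.zeros_like(vertices, dtype=float)
    for f in faces:
        p = vertices[list(f)]
        fn = np.cross(p[1] - p[0], p[2] - p[0])
        fn = fn / np.linalg.norm(fn)
        for k in range(3):
            a = p[(k + 1) % 3] - p[k]
            b = p[(k + 2) % 3] - p[k]
            cos_a = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            normals[f[k]] += np.arccos(np.clip(cos_a, -1.0, 1.0)) * fn
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0.0, lengths, 1.0)
```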
Article
In this paper we utilize hybrid rendering techniques combining image-based, point-based and geometry-based methods for the modeling and display of immersive virtual heritage environments (VHEs). The paper also uses a generic computation method for indirect illumination based on Monte Carlo sampling and sequential analysis theory, which is faster and more automatic than classical methods. The resulting VHEs aim to realistically recreate an archaeological site that can be displayed in a range of immersive VR systems using a high-quality, view-dependent algorithm; the observer can act like the ancients by interacting with objects in the scenes, for example by lumbering or fishing in the VHEs. We implement a collaborative multi-PC system platform called Multipro, which provides a platform for displaying immersive applications and can be extended to urban planning/architecture and cultural heritage education. To display a photo-realistic virtual environment, we implement a prototype system on Multipro. To enhance the realism of the virtual environment, we adopt a view-dependent texture mapping technique to render the heritage model; interaction and animation are also included in the Hemudu virtual scenes, and lighting, vegetation rendering and hybrid rendering are addressed to enhance the sense of realism. Examples of the prototype system in use are described and demonstrated in this paper.
Article
Conspicuous and visually unpleasant polygonal silhouettes are often produced when rendering a polyhedral approximation (i.e. a polygonal mesh) of a smooth surface model, regardless of what shading method is used to improve the appearance of the interior region. We present in this paper a technique for displaying a coarse polygonal mesh with a smooth silhouette. Unlike other existing methods that assume the availability of a fine model, our method uses only a coarse mesh to generate a smooth silhouette through efficient boundary curve interpolation and local re-meshing near the reconstructed smooth silhouette. The time used by our method to render a polygonal mesh with a smooth silhouette is about 80% more than rendering the original mesh without any fix to the polygonal silhouette.
Article
We present a simple method for improving the rendered appearance of coarse triangular meshes. We refine and modify the geometry along silhouette edges in real-time, and thus increase the geometric complexity only where needed. We address how to define the improved silhouette and a method to blend the modified and the original geometry continuously.