Silhouette Smoothing for Realtime Rendering of
Mesh Surfaces
Lu Wang, Changhe Tu*, Wenping Wang, Xiangxu Meng, Bin Chan and Dongming Yan
Abstract—Coarse piecewise linear approximation of surfaces
causes undesirable polygonal appearance of silhouettes. We
present an efﬁcient method for smoothing the silhouettes of
coarse triangle meshes using efﬁcient 3D curve reconstruction
and simple local remeshing. It does not assume the availability
of a fine mesh and generates only a moderate amount of additional
data at run time. Furthermore, polygonal feature edges are also
smoothed in a uniﬁed framework. Our method is based on a novel
interpolation scheme over silhouette triangles and this ensures
that smooth silhouettes are faithfully reconstructed and always
change continuously with respect to continuous movement of
the view point or objects. We speed up computation with GPU
assistance to achieve realtime rendering of coarse meshes with
the smoothed silhouettes. Experiments show that this method
outperforms previous methods for silhouette smoothing.
Index Terms—Silhouette Smoothing, Polygonal Mesh, Interpolation, Hermite Curves.
I. INTRODUCTION
Polygonal meshes are widely used for representing 3D shapes
in computer graphics. Simpliﬁcation methods produce meshes of
small triangle count that are needed in many applications where
there is considerable resource limitation, such as 3D real time
graphics on mobile devices. The resulting coarse meshes have
conspicuous polygonal silhouettes or polygonal feature edges,
causing an impression of low visual quality, since human vision
is particularly sensitive to silhouettes and feature curves and their
nonsmoothness. (A feature curve is a smooth curve defined
by transversal intersection, i.e., nontangent intersection, of two
smooth surfaces. See Fig. 2.)
The challenge of rendering a coarse mesh with smoothed silhouettes
is well recognized by the computer graphics community
[5], [11]. A small triangle count, which is required by efﬁcient
storage and transmission, and faithful smooth appearance are two
conﬂicting requirements. Simply reﬁning a coarse mesh overall
to increase the number of triangles would be inefﬁcient for the
purpose of silhouette smoothing, since considerable extra resources
would then be wasted in creating, storing, and rendering an
increased number of triangles in the interior (i.e., nonsilhouette
region) of a mesh.
A. Overview and contributions
We present an efﬁcient method that ﬁxes polygonal silhouettes
and feature edges into smooth curves for realtime rendering of
coarse meshes (Fig. 1). The method requires only a coarse triangle
*Corresponding author. Lu Wang, Changhe Tu and Xiangxu Meng are with
the School of Computer Science and Technology, Shandong University, Jinan,
China, Email: luwang@mail.sdu.edu.cn, chtu@sdu.edu.cn, mxx@sdu.edu.cn.
Wenping Wang, Bin Chan and Dongming Yan are with the Department
of Computer Science, the University of Hong Kong, Pokfulam Road, Hong
Kong, China. Email: {wenping, bchan, dmyan}@cs.hku.hk.
Fig. 1. An example of silhouette smoothing by our proposed method. (a)
A coarse mesh model (500 triangles); (b) The shaded coarse mesh; (c) The
coarse mesh is refined near the silhouette by our method (904 triangles); (d)
Phong shading of the mesh in (c), with smoothed silhouette.
mesh as input. It performs view-dependent 3D curve reconstruction
and simple local remeshing to generate smooth silhouettes at
run time; feature edges are smoothed in preprocessing. Unlike previous
approaches, our method does not require an LOD model or
a fine mesh and avoids global smooth surface reconstruction. The
local remeshing is based on the notion of silhouette triangles,
which ensures that smooth silhouettes are faithfully reconstructed
and coherent (i.e., free of visual discontinuity) when rendered
with respect to a moving view point or with continuous object
movement. Note that our method focuses only on smoothing
silhouette and feature edges; it does not attempt to alter the
appearance (e.g., shading and texture) of a coarse mesh in regions
away from its silhouettes or feature curves.
The major contribution of this paper is a new method for
computing smooth silhouettes of coarse triangle mesh surfaces.
The smooth silhouette curves are computed using an interpolation
scheme over silhouette triangles, a concept that we will introduce
later. As a result, the silhouette curves thus computed are more
accurate than by previous methods and possess visual coherence
for moving objects or with respect to a moving viewpoint.
Fig. 2. Feature curves (in red) are local transversal intersections of two surfaces.
Some other advantages of our method are as follows. Polygonal
feature edges are smoothed in the same way as polygonal silhouette
edges. We will see (Section III-D) that this not only makes
feature edges look smooth but also ensures that these smooth
feature curves possess visual coherence during continuous motion.
We also apply an effective vertex perturbation scheme to saddle
regions to prevent the smoothed silhouette from being blocked by
the neighboring faces of the original mesh, which is a critical issue
that has not been addressed by previous silhouette smoothing
methods. The method runs efﬁciently with GPU assistance.
After reviewing related works in the rest of this section, we
will present an outline of our method in Section II. Curve
approximation to silhouettes and feature curves, and the re
meshing scheme are described in detail in Section III. GPU speed
up is discussed in Section IV. We present experimental results in
Section V and conclude the paper in Section VI.
B. Related work
The silhouette rendering problem has been studied extensively
for various geometric models (e.g., [4], [22]). Here we focus on
polygonal meshes. Many methods for realtime mesh rendering
require a ﬁne mesh to start with. For better rendering efﬁciency,
a progressively simplified mesh is usually displayed in a view-dependent
manner to ensure smooth appearance of silhouettes.
These methods include techniques based on multiple (or even
continuous) levels of detail [21], [13], [8], [9], [6], [15], [1]. The
silhouette clipping technique [17] also needs a ﬁne mesh for
assistance. However, in many applications only relatively coarse
meshes are available; very ﬁne meshes cannot be used, either
because they are not available or because of limited bandwidth
or memory.
The PN-triangle method [19] and its variant [3] do not assume
a ﬁne mesh as input; rather, they construct a smooth cubic surface
patch for each triangle of a coarse triangle mesh. These methods
use global surface patch reconstruction to achieve the overall
smoothness of a coarse mesh for rendering, thus improving the
silhouette smoothness as well as the shading quality of interior
region. Normally the level of subdivision of cubic patches is uniform
across the model and fixed before rendering, in a viewpoint-independent
manner. Therefore the polygonal appearance (i.e., the
nonsmoothness) of silhouettes still becomes apparent when one
zooms in on a silhouette. (See the comparison of our method with
the PN-triangle method in Fig. 24, Section V.)
For fast silhouette smoothing and rendering, the method in [20]
takes a local and view-dependent approach that does not assume
the availability of fine meshes and does not involve global geometry
reconstruction. It performs local 2D curve approximation to
the projected boundary of a mesh surface on the view plane. This
boundary curve corresponds to the silhouette edges of the mesh;
an edge of a mesh surface is called a silhouette edge if one of its
two incident triangles is visible and the other is invisible, i.e., a
backfacing triangle.
Many existing methods extract silhouette edges from polygonal
meshes [2], [10], [14], [16]. However, there are some fundamental
difﬁculties in computing smooth silhouettes from silhouette edges
of a mesh model. Consider the mesh approximating a sphere
in Fig. 3. Suppose that all the mesh vertices are on the sphere.
Fig. 3(b) shows that, in one view, the silhouette curve (i.e., the
circle in red) does not pass through the vertices of the silhouette
polygon. The side view in Fig. 3(c) shows that the silhouette curve
in (b) passes over a series of mesh triangles, which are marked
in thick lines, rather than corresponding to any edges of the mesh.
This observation suggests that it is inherently inaccurate and
essentially incorrect to use silhouette edges for reconstructing
smooth silhouette curves, because a smooth silhouette curve
does not necessarily correspond to silhouette edges. Furthermore,
besides the accuracy consideration, the smooth silhouette curves
thus computed are not visually coherent with respect to a moving
viewpoint or if the mesh model moves continuously, since they
depend on the silhouette edges that appear or disappear abruptly
for a rotating mesh model.
Fig. 3. (a) A sphere; (b) A coarse mesh sampled from the sphere and the
silhouette curve (in red); (c) The coarse mesh and silhouette curve in (b)
viewed from another angle.
Hertzmann et al. [7] noticed the visual coherence issue and
proposed the following method for extracting silhouette curves.
Suppose that a smooth surface is represented by a triangular mesh.
Let E be the view point. Let n(p) be the estimated unit normal
vector at a mesh vertex p. Then consider the function g(p) =
n(p) · (p − E) deﬁned at all mesh vertices p. Now extend the
domain of g(p) to be over each triangle by linear interpolation of
its values at the three vertices of the triangle. Then the silhouette
curve is deﬁned to consist of all those points p on the mesh
surface satisfying g(p) = 0 (e.g., the line segment MN in Fig.
4). The silhouette thus computed possesses visual coherence since
it is dependent on the view point E continuously. However, the
smoothness issue is still not addressed, as the silhouette curve
thus constructed is a polyline lying on the faces of the mesh.
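The zero-crossing construction above is straightforward to implement. The following sketch (an illustration with our own naming, not the authors' code) evaluates g at the three vertices of one triangle and returns the endpoints of the zero set of its linear interpolant:

```python
import numpy as np

def g_zero_segment(verts, normals, eye):
    """For one triangle, evaluate g(p) = n(p) . (p - E) at the three
    vertices and return the two endpoints of the zero set of the
    linear interpolant of g (the silhouette line MN), or None if g
    has the same sign at all three vertices."""
    g = [float(np.dot(n, v - eye)) for v, n in zip(verts, normals)]
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        if g[i] * g[j] < 0:              # sign change on edge (i, j)
            t = g[i] / (g[i] - g[j])     # linear zero crossing
            pts.append(verts[i] + t * (verts[j] - verts[i]))
    return (pts[0], pts[1]) if len(pts) == 2 else None
```

The degenerate case g = 0 exactly at a vertex is ignored in this sketch; a robust implementation would need to handle it explicitly.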
In the present paper we focus on local silhouette processing,
without using a ﬁne mesh or global surface reconstruction. Rather
than using the silhouette edges of a mesh as in [2], [4], [10], [16],
[20], we compute silhouette curves based on silhouette triangles.
The idea here of associating a silhouette segment with a triangle
bears similarity to the treatment by Hertzmann et al. [7]. We
aim at faithful smooth silhouette reconstruction as well as visual
coherence of the silhouettes of a moving mesh model to support
realtime rendering of a coarse mesh with smooth silhouettes.
Fig. 4. Silhouette line MN as defined in [7]: g(A) > 0 and g(B) > 0 while
g(C) < 0, and the zero set g = 0 of the linear interpolant crosses the triangle
along the segment MN.
II. OUTLINE OF PROPOSED METHOD
We consider triangle meshes that approximate a piecewise
smooth surface. There are two types of edges in such mesh
surfaces: feature edges and nonfeature edges. Feature edges
are those that approximate smooth feature curves (or creases)
defined by transversal intersection of two smooth surfaces, while
nonfeature edges are those mesh edges located in regions
approximating smooth parts of the original surface.
To circumvent the problem, pointed out in Section I, of using
silhouette edges for silhouette curve reconstruction, we propose
the use of silhouette triangles for computing smooth silhouette
curves of a mesh. Let M be a mesh approximating a smooth
surface Γ. A triangle T of the mesh M is then an approximation to
a triangular surface patch T^p on the surface Γ. For some viewing
direction, the silhouette curve of Γ can lie on the patch T^p, and
in this case we need to compute a smooth curve on the triangle
T to approximate that part of the smooth silhouette of Γ. This
naturally gives rise to the notion of the silhouette triangle, which
is the triangle T in this case. The silhouette triangles are viewpoint-dependent
and are exactly those mesh faces that contain the silhouette
curves given by g(p) = 0 in [7] (see Section I-B and Fig. 4).
Let N_V be a shading normal defined at mesh vertex V. Let
D_V denote the viewing direction vector from the vertex V to
the viewpoint (i.e., the eye). Then V is said to be visible if the
inner product (N_V, D_V) ≥ 0 and invisible otherwise. Note that
the notion of visibility here is a local one; it is different from
occlusion where a frontfacing triangle may be blocked by another
object in front of it. We label a visible vertex by “+” and an
invisible one by “−”. A mesh triangle face is called a silhouette
triangle if its three vertices do not have the same visibility status.
For example, among the four triangles with visibility labels in
Fig. 5, the triangles of types 1 and 2 are silhouette triangles,
while the other two are not.
Smooth silhouette curves of a mesh can be computed using
the property that the surface normal of any point on silhouette
is perpendicular to the viewing direction. First, all silhouette
triangles are identiﬁed by checking the visibility of all mesh
vertices. Each silhouette triangle has exactly two edges labeled as
"(−, +)", such as V_1V_2 and V_1V_3 shown in Fig. 6. Using the endpoint
information of these two edges, we compute two cubic Hermite
interpolation curves S_1(u) to connect V_1 and V_2, and S_2(v)
Fig. 5. Different types of triangles in a mesh. Only triangles of types 1 and
2 are silhouette triangles.
Fig. 6. Construction of silhouette bridges S_1(u), S_2(v), and silhouette
segment S(t) in the silhouette triangle △V_1V_2V_3, with silhouette points
S_1(u_0) and S_2(v_0) and the eye's direction indicated.
to connect V_1 and V_3, which are called silhouette bridges (see
Fig. 6).
Then, as an approximation, assuming a linear change of normal
vectors along the silhouette bridges, it is easy to compute a
silhouette point on each silhouette bridge such that the interpolated
surface normal vector at that point is perpendicular to the
viewing direction. Thus, we get two silhouette points, S_1(u_0)
and S_2(v_0), as shown in Fig. 6. Finally, the silhouette
curve segment, or simply silhouette segment, connecting the two
silhouette points is given by a cubic Hermite interpolation curve
S(t), as shown in Fig. 6. The silhouette segment S(t) is finally
sampled for local remeshing for rendering the mesh with the
smoothed silhouettes.
Another goal of our method is to make polygonal feature lines
smooth in a coarse mesh. (Two meshes with such polygonal feature
lines are shown in Fig. 20(a) and Fig. 21(b).) Feature edges
are connected to form polygonal feature lines that approximate the
original smooth feature curves. We suppose that feature edges
are already labeled as such in the input mesh. Based on the two
endpoints of each feature edge and the estimated tangent vectors
at those endpoints, we use a cubic Hermite curve to generate
a smooth feature curve segment (or simply feature segment) to
replace the straight feature edge. Because of their shared tangents,
consecutive feature segments are joined with G^1 continuity and
thus form a smooth piecewise cubic curve approximating the
original feature curve.
Clearly, feature segments need to be rendered whenever visible.
To speed up processing, they are precomputed and stored for
quick lookup at run time. The
extra space requirement of this preprocessing scheme is justified
Fig. 7. Flowchart of our silhouette smoothing method.
by the fact that the number of feature edges is usually small compared
to the number of nonfeature edges, so storing a fixed set of
feature segments normally does not cause a significant increase
in memory consumption.
Below is the main ﬂow of our method, as illustrated in Fig. 7.
Details about the main steps will be presented in Section III.
1) Feature edge smoothing: For each feature edge, compute
its corresponding feature curve segment by Hermite curve
interpolation in preprocessing. This step is view-independent.
2) Finding silhouette triangles: Given a viewing direction D
for a parallel projection or a viewpoint E for a perspective
projection, locate all silhouette triangles with respect to D
or E.
3) Computing silhouette bridges: Compute two silhouette
bridges for each silhouette triangle using Hermite interpolation.
4) Computing silhouette segments: Compute a silhouette
point on each silhouette bridge obtained in Step 3. Use
a Hermite interpolation curve again to construct a smooth
silhouette segment connecting the two silhouette points of
each silhouette triangle. (Note that each silhouette bridge –
as well as the silhouette point on it – is computed only once
in each frame, since it is shared by two adjacent silhouette
triangles.)
5) Local remeshing: Sample points on silhouette segments
and visible feature segments adaptively, according to their
curvature and perceived size. Use these sample points to
perform local remeshing for rendering.
We use the GPU to perform local remeshing and render the
remeshed surface. Details about the implementation are given in
Section IV.
III. SMOOTH CURVE CONSTRUCTION
In this section we explain the five main steps of our method
as outlined in the algorithm flow above.
A. Feature edge smoothing
The key to computing a feature segment via Hermite interpolation
is providing properly estimated tangent vectors at the two
endpoints of its corresponding feature edge. Feature edges can
be grouped into maximal polylines, called feature polylines. A
feature polyline either terminates at a nonfeature vertex or meets
other feature polylines at a feature vertex where more than two
smooth surfaces intersect (See Fig. 2). Note that no smoothing
is needed when a feature polyline contains only a single feature
edge.
Suppose that a feature polyline contains at least three consecutive
vertices (V_0, V_1, ..., V_k), k ≥ 2. For an internal vertex
V_i, i = 1, 2, ..., k−1, the tangent direction T̂_i at V_i is set to
be the tangent to the circle uniquely determined by the three
consecutive points V_{i−1}, V_i, and V_{i+1}. Let W_0 = V_i − V_{i−1} and
W_1 = V_{i+1} − V_i. Then it is an elementary exercise to show

T̂_i = ‖W_1‖ · W_0 + ‖W_0‖ · W_1.   (1)
The estimated tangent vector T_i at V_i to be used for Hermite
interpolation has the same direction as T̂_i but has its length
determined in such a way that the resulting Hermite curve is
a good approximation to a circular arc if the Hermite data (i.e.,
the endpoints and their tangents) are sampled from a circle (see
details in Section III-C).
For the end vertex V_0 of the feature polyline, its tangent vector
is set to be the tangent to the circle passing through V_0, V_1 and
V_2. The tangent vector at the other end vertex V_k is similarly
computed. Given V_i, V_{i+1} and their tangent vectors, their cubic
Hermite interpolation curve is then uniquely determined. (The
expression is given in Section III-C.)
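As a concrete illustration, Eq. (1) can be coded directly (a sketch with our own naming, not the authors' implementation):

```python
import numpy as np

def interior_tangent(V_prev, V_i, V_next):
    """Tangent direction at an interior polyline vertex V_i per Eq. (1):
    T_i = ||W_1|| * W_0 + ||W_0|| * W_1, which is tangent to the circle
    through the three consecutive points V_prev, V_i, V_next."""
    W0 = np.asarray(V_i, float) - np.asarray(V_prev, float)
    W1 = np.asarray(V_next, float) - np.asarray(V_i, float)
    return np.linalg.norm(W1) * W0 + np.linalg.norm(W0) * W1
```

For example, for three points on the unit circle at angles 0, 90 and 180 degrees, the formula returns a vector parallel to (−1, 0, 0), the circle tangent at the middle point.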
B. Finding silhouette triangles
Silhouette triangles are determined by the visibility status of their
vertices, which in turn depends on the angles between the shading
normal vectors at the vertices and the viewing direction vector.
For a nonfeature mesh vertex V, its shading normal vector is
computed as the angle-weighted average of the normal vectors of
the triangles incident to V, as in [18] (see Fig. 8(a)).
A feature vertex V has multiple shading normal vectors, each
associated with a surface incident to V , since V is at the
intersection of multiple smooth surfaces. Speciﬁcally, the triangles
adjacent to V are divided into several groups, with each group
belonging to a smooth surface passing through V (See Fig. 8(b)
for the case of two groups). The angle-weighted average normal
vector over the triangles in the same group is assigned as a
shading vector to V . Therefore, the vertex V has multiple shading
Fig. 8. (a) One shading normal is associated with a nonfeature vertex; (b)
Two normals are assigned to a feature vertex lying at the intersection of two
smooth surface patches, represented by two groups of triangles.
Fig. 9. Hierarchical clustering structure on the Gaussian sphere.
vectors, each contributed by a group of triangles incident to V .
When testing whether a triangle incident to a feature vertex V is
a silhouette triangle, we use the normal vector associated with the
group containing that triangle.
To find silhouette triangles, we compute the visibility status of
all mesh vertices by hierarchical clustering based on the normals
of the vertices in the case of parallel projection or the normals
and positions of the vertices in the case of perspective projection.
For parallel projection, we follow the method in [2]. Speciﬁcally,
let D be the constant viewing direction vector and let N be the
normal of the vertex V . The vertex V is visible if N · D ≥ 0, and
invisible otherwise. The normalized normal vectors N of all mesh
vertices are mapped to points on the Gaussian sphere, which is
subdivided into hierarchical cells along longitudes and latitudes.
Each cell is a spherical convex region and has four corner points,
and each parent cell has four child cells, as shown in Fig. 9. The
cells induce a hierarchical clustering of mesh vertices via their
normal vectors.
For a cluster C of mesh vertices, if all the four corners of its
containing cell are visible (or invisible), then all the vertices in
C are visible (or invisible). If the four corners of the cell have
different visibility status, each child of the cell will be checked;
this is done recursively until the visibility of each mesh vertex is
resolved.
For perspective projection, the basic idea is the same, but a
3D Gaussian sphere in 4D space is used instead, since
the viewing direction is no longer a constant vector. Let V =
(V_x, V_y, V_z) be a mesh vertex. Let N = (N_x, N_y, N_z) be the
unit normal vector of the vertex V and let E = (E_x, E_y, E_z) be
the viewpoint. The plane passing through V and having N as its
normal vector has the equation F(X; V) ≡ N · (X − V) = 0.
Clearly, the vertex V is visible if and only if the viewpoint E
is above the plane F(X; V) = 0, that is, F(E; V) ≥ 0, which is
the condition we use for determining the visibility of all mesh
vertices V. Note that

F(E; V) = E_x · N_x + E_y · N_y + E_z · N_z − (N_x · V_x + N_y · V_y + N_z · V_z) ≡ E′ · V′,

where E′ = (E_x, E_y, E_z, 1) and V′ = (N_x, N_y, N_z, −(N_x · V_x + N_y · V_y + N_z · V_z))
are 4D vectors. Therefore we normalize the vectors V′
and map them onto the 3D Gaussian sphere S^3 in 4D space,
and subdivide S^3 for hierarchical clustering. In this case, each
cell has eight corner points and eight child cells. This
hierarchical structure is then used for fast visibility determination for
all mesh vertices. The visibility check for a point V′ on S^3 is
done simply by the sign of the inner product E′ · V′.
C. Computing silhouette bridges
By definition, there are two sides labeled (+, −) in a silhouette
triangle, such as V_1V_2 and V_1V_3 in the silhouette triangle
△V_1V_2V_3 in Fig. 6. Below we use the side V_1V_2 to explain
the procedure for computing a silhouette bridge by Hermite
interpolation.
First suppose that V_1V_2 is not a feature edge. Let N_1 and N_2
be the estimated normal vectors at V_1 and V_2. To compute tangent
vectors at V_1 and V_2, which are needed by Hermite interpolation,
we take the tangent direction T̂_1 at the vertex V_1 to be the
projection of V_1V_2 onto the "tangent" plane at V_1, which is the
plane passing through V_1 and having N_1 as the normal vector.
Then the vector T̂_1 is given by

T̂_1 = (V_2 − V_1) − [(V_2 − V_1) · N_1] · N_1.   (2)

Similarly, the tangent direction T̂_2 at V_2 is

T̂_2 = (V_2 − V_1) − [(V_2 − V_1) · N_2] · N_2.   (3)
The tangent vectors T_1 and T_2 we use in Hermite interpolation
have the same directions as T̂_1 and T̂_2. Obviously, different lengths
of the end tangent vectors affect the shape of the resulting cubic
Hermite interpolation curve. We choose the lengths of T_1 and T_2
in such a way that if the end data points (i.e., V_i and T_i, i = 1, 2)
are extracted from a circular arc, then the resulting Hermite cubic
curve gives a good approximation of the circular arc. It can be
shown that this requirement is met if the lengths of T_1 and T_2
are set to

L_1 = 2‖V_2 − V_1‖ / (1 + cos θ_1),   L_2 = 2‖V_2 − V_1‖ / (1 + cos θ_2),   (4)

where θ_i is the angle between V_1V_2 and T̂_i, i = 1, 2. Here a
circular arc is used as the target shape for approximation because it
has constant curvature; hence, the interpolating cubic curve is
expected to have small curvature variation when the endpoint
data do not deviate much from a circular arc.
With all the end data points determined, the silhouette bridge
S_1(u) over the side V_1V_2 is given by the cubic Hermite curve

S_1(u) = (2V_1 − 2V_2 + T_1 + T_2)u³ − (3V_1 − 3V_2 + 2T_1 + T_2)u² + T_1·u + V_1,   u ∈ [0, 1].   (5)
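Putting Eqs. (2)–(5) together, the bridge over one edge can be sketched as follows (illustrative Python with our own naming; the paper's implementation runs on the GPU). For the symmetric quarter-circle data in the usage below, the tangent lengths of Eq. (4) make the cubic reproduce the arc midpoint exactly:

```python
import numpy as np

def silhouette_bridge(V1, V2, N1, N2):
    """Build the cubic Hermite silhouette bridge S1(u) over the edge
    V1V2 from Eqs. (2)-(5). N1, N2 are the unit vertex normals."""
    V1, V2 = np.asarray(V1, float), np.asarray(V2, float)
    E = V2 - V1
    def tangent(N):
        N = np.asarray(N, float)
        That = E - np.dot(E, N) * N                      # Eqs. (2)/(3)
        cos_theta = np.dot(E, That) / (np.linalg.norm(E) * np.linalg.norm(That))
        L = 2.0 * np.linalg.norm(E) / (1.0 + cos_theta)  # Eq. (4)
        return L * That / np.linalg.norm(That)
    T1, T2 = tangent(N1), tangent(N2)
    def S1(u):                                           # Eq. (5)
        return ((2*V1 - 2*V2 + T1 + T2) * u**3
                - (3*V1 - 3*V2 + 2*T1 + T2) * u**2
                + T1 * u + V1)
    return S1
```

With V_1 = (1, 0, 0), V_2 = (0, 1, 0) and normals (1, 0, 0), (0, 1, 0) sampled from the unit circle, S_1(1/2) evaluates to (√2/2, √2/2, 0), the midpoint of the arc.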
D. Computing silhouette segments
A silhouette point is a point on a silhouette bridge whose
surface normal vector is perpendicular to the viewing direction.
Here we assume that the normal vector along the silhouette bridge
S_1(u) over V_1V_2 is linearly interpolated from the normal vectors
at V_1 and V_2. Thus, we assign

Ñ_1(u) = (1 − u) · N_1 + u · N_2,   u ∈ [0, 1],   (6)

to be the normal vector at the point S_1(u). First consider the case
of parallel projection, with the constant viewing direction vector
denoted by D. The silhouette point on the silhouette bridge S_1(u)
is S_1(u_0), where the parameter value u_0 is easily obtained as the
solution of the linear equation D · Ñ_1(u) = 0. The other silhouette
point S_2(v_0) on the silhouette bridge S_2(v) can be computed
similarly.
For perspective projection, the viewing direction at the point
S_1(u) is D(u) = S_1(u) − E, which is no longer a constant vector.
This means that we would have to solve the quartic equation
D(u) · Ñ_1(u) = 0 to locate the silhouette point S_1(u_0), since
S_1(u) is cubic. To obtain a simple approximate solution, we use

D̂(u) = (1 − u) · D_1 + u · D_2,   u ∈ [0, 1],   (7)

where D_1 and D_2 are the viewing directions at V_1 and V_2, to
approximate the true viewing direction D(u). This approximation
D̂(u) makes sense because it agrees with the true viewing direction
D(u) at the endpoints V_1 and V_2, and thus gives the correct visibility
status at V_1 and V_2. With this approximation, we only need to
solve the quadratic equation h(u) ≡ D̂(u) · Ñ_1(u) = 0 to get
u_0 and thereby determine the silhouette point S_1(u_0). Note that the
quadratic equation h(u) = 0 has a unique solution in [0, 1], since
h(u) has opposite signs at u = 0 and u = 1.
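A small sketch of the perspective case (our naming; the paper solves the quadratic h(u) = 0 in closed form, whereas for brevity this sketch brackets the root by bisection, which is valid precisely because h changes sign on [0, 1]):

```python
import numpy as np

def silhouette_u0(V1, V2, N1, N2, E, iters=60):
    """Find u0 in [0, 1] with h(u) = Dhat(u) . Ntilde(u) = 0, where
    the normal (Eq. 6) and the viewing direction (Eq. 7) are both
    interpolated linearly between their endpoint values."""
    D1 = np.asarray(E, float) - np.asarray(V1, float)  # viewing direction at V1
    D2 = np.asarray(E, float) - np.asarray(V2, float)  # viewing direction at V2
    def h(u):
        Nt = (1 - u) * np.asarray(N1, float) + u * np.asarray(N2, float)
        Dh = (1 - u) * D1 + u * D2
        return float(np.dot(Dh, Nt))
    lo, hi = 0.0, 1.0            # h(0) and h(1) have opposite signs
    for _ in range(iters):       # bisection to machine-level accuracy
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

(The sketch uses the vertex-to-eye direction E − V; the sign convention does not affect the zero of h.)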
Once the two silhouette points S_1(u_0) and S_2(v_0) are available,
together with their normal vectors Ñ_1(u_0) and Ñ_2(v_0), we
compute the silhouette segment using Hermite interpolation
in the same way as in computing silhouette bridges. Thus we get
the silhouette segment S(t) associated with the silhouette triangle
△V_1V_2V_3 (see Fig. 6).
Special consideration is needed when a feature edge is one
of the sides of a silhouette triangle. There are two cases: (a) the
feature edge is a side whose endpoints have different visibility
status (e.g., the side V_1V_2 or V_1V_3 in Fig. 6); or (b) the feature
edge is the side whose endpoints have the same visibility status
(e.g., the side V_2V_3 in Fig. 6). In case (a) the silhouette bridge
over the feature edge is simply its corresponding precomputed
feature segment. (Fig. 10 illustrates an example of case (a), where
a feature edge is a side of a silhouette triangle.)
In case (b) no special treatment is actually needed. In this case,
assuming that the feature edge is the side V_2V_3 in Fig. 6, if a
changing viewing direction makes the normal vectors at the two
endpoints of V_2V_3 get closer and closer to being perpendicular
to the viewing direction, then the silhouette points S_1(u_0) and
S_2(v_0) on the two silhouette bridges will approach V_2 and V_3,
respectively, which ensures that the reconstructed smooth silhouette
segment S(t) approaches the feature segment associated with
the feature edge V_2V_3. This is important for the visual coherence
of silhouette curves with respect to a moving viewpoint when
feature segments are involved, and it is achieved by smoothing
feature edges using the same curve interpolation scheme as for
silhouettes.
Fig. 10. An example of a silhouette triangle containing a feature edge (the
close-up is a side view). The red curves are smooth feature segments and the
blue one is a silhouette segment.
E. Local remeshing
In this section we discuss how to remesh silhouette
regions and how to adaptively sample silhouette segments, and
then address the issue of silhouette blocking when concave
silhouette segments are involved.
Remeshing: To render a mesh surface with the smooth feature
segments and silhouette segments that we have constructed via
Hermite interpolation, these curve segments need to be sampled
for local remeshing. Fig. 11 shows the different cases in which the
silhouette region should be remeshed, and Fig. 12 shows how the
remeshing should be done when feature curves are involved. Note
that when the silhouette curve is concave it may be blocked by a
neighboring triangle of the original mesh. We will address this
issue later in this section.
Adaptive sampling: The reconstructed smooth silhouette curve
is rendered as a polygon connecting sample points on the
silhouette. To ensure that the silhouette curve appears smooth after
sampling while keeping the number of sample points low, we
determine the number of sample points on a silhouette segment
adaptively, based on the curvature and the projected length of the
segment. Since the silhouette segment is intended as
an approximation to a circular arc (cf. Section III-C), using the
simple geometry of a circular arc, we derive an estimate of the
number of sample points as follows.
Fig. 13. Adaptive sampling of a silhouette curve segment: A and B are the
segment endpoints, h its height, α the central angle at the center O, E the
eye, and S(1/2) the curve midpoint.
Fig. 11. Local remeshing. (a) Left: the silhouette triangle is △V_1V_2V_3
and the silhouette segment S(t) is further back than the edge V_1V_2; to attain
local convexity, the sample points on the segment S(t) and the edge V_1V_2
are connected to form a local triangulation. Right: the silhouette triangle is
△V_2V_3V_4 and S(t) is in front of the edge V_2V_3; in this case, again to
attain local convexity, the sample points on the segment S(t) are connected
to the vertex V_4. (b) The two cases shown here involve concave silhouette
curve segments and are processed similarly; local vertex perturbation is needed
in these cases to rectify the blocking problem.
Fig. 12. Remeshing when feature segments are involved. (a) One feature
segment: the sample points on the feature segment are connected to the
opposite vertex; (b) Two feature segments: the sample points on the two
feature segments are connected to the center C of the triangle △V_1V_2V_3.
Let A and B denote the endpoints of the silhouette segment
S(t), t ∈ [0, 1]. (Refer to Fig. 13.) Let h be the height of
the silhouette segment in the projection plane, approximated by
the distance from the middle point S(1/2) of the silhouette
segment to the middle point of the line segment AB. Let r =
sin(α/2) = √((1 − cos α)/2), where α is the central angle
of the arc subtended by the chord AB; cos α can therefore be
approximated by the inner product of the two unit normal vectors
at the endpoints A and B. Let e be the allowed tolerance, in pixels.
Then the bound n_s on the number of sample points is given by

n_s = (1/2) · h · r / e.    (8)

The interpretation of this bound n_s is that if the number of sample
points is at least n_s, then the polygon connecting the sample
points approximates the smooth silhouette curve within the error
e. Here the value of e should be in the range of 0.5 to 2 to ensure
the smooth appearance of the rendered silhouette.
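As a concrete illustration, the estimate of Eq. (8) can be sketched as follows; `sample_count` is a hypothetical helper name (not from the paper's implementation), taking the projected height h in pixels, the unit normals at the two endpoints, and the tolerance e in pixels. Rounding up to a whole count is our addition.

```python
import math

def sample_count(h, n_a, n_b, e):
    """Adaptive sample-point bound n_s for one silhouette segment (Eq. 8).

    h   : projected height of the segment, in pixels
    n_a : unit surface normal at endpoint A
    n_b : unit surface normal at endpoint B
    e   : allowed screen-space tolerance, in pixels (typically 0.5 to 2)
    """
    # cos(alpha) is approximated by the inner product of the unit normals
    cos_alpha = sum(a * b for a, b in zip(n_a, n_b))
    # half-angle identity: r = sin(alpha/2) = sqrt((1 - cos(alpha)) / 2)
    r = math.sqrt(max(0.0, (1.0 - cos_alpha) / 2.0))
    # Eq. (8): n_s = (1/2) * h * r / e, rounded up to a whole count
    return math.ceil(0.5 * h * r / e)
```

For example, with perpendicular endpoint normals (α = π/2), h = 10 pixels and e = 1 pixel, the bound evaluates to four sample points.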
Local perturbation: In a saddle-shaped region the reconstructed
silhouette curve may be blocked by its neighboring triangles, so
the original polygonal silhouette still persists, as already seen in
Fig. 11(b). Fig. 14(a) provides a convincing example, with detailed
explanation, where the inner circle of the torus does not appear
to have been smoothed due to this blocking problem. This occurs
because the reconstructed silhouette curve moves into the mesh
surface and so may be blocked by adjacent mesh triangles, an
issue that has not been addressed by previous view-dependent
local refinement methods (e.g., [20] and [4]). We propose below
a simple and effective perturbation technique to rectify this problem.
First we need to detect concave silhouette segments. Let Ñ_1 and
Ñ_2 be the surface normal vectors at two consecutive silhouette
points S_1 and S_2, respectively. Denote F = S_2 − S_1. Then the
silhouette segment connecting S_1 and S_2 is concave if Ñ_1 · F > 0
and Ñ_2 · F < 0, i.e., the angle between Ñ_1 and F is less than π/2
and the angle between Ñ_2 and F is greater than π/2. After all
concave silhouette segments are found, the following perturbation
technique is applied.
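The concavity test above amounts to two dot-product sign checks. A minimal sketch, using plain tuples for points and normals (`is_concave` is our hypothetical name, not an identifier from the paper):

```python
def is_concave(s1, s2, n1, n2):
    """Test whether the silhouette segment from S1 to S2 is concave.

    n1, n2 : surface normals at the consecutive silhouette points S1, S2
    The segment is concave iff N1 . F > 0 and N2 . F < 0, where F = S2 - S1.
    """
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    f = tuple(b - a for a, b in zip(s1, s2))  # F = S2 - S1
    return dot(n1, f) > 0 and dot(n2, f) < 0
```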
The basic idea is to perturb the positions of the neighboring mesh
vertices of a concave silhouette segment to keep them from blocking
the silhouette segment. Let T denote the silhouette triangle
containing the concave silhouette segment under consideration.
By neighboring mesh vertices we mean the vertices of the
triangle T and the vertices connected to T through a mesh
edge. For example, in Fig. 15, as far as the silhouette triangle
△V_1V_2V_3 is concerned, all V_i, i = 1, 2, 3, 4, 5, are neighboring
vertices, so they will be subject to perturbation.
For simplicity of discussion, we will use the vertex V_1
to explain the perturbation procedure in the case of parallel
projection; the idea is similar for the case of a perspective
projection. Refer to Fig. 15. Suppose that S_i(t), i = 1, 2, 3,
t ∈ [0, 1], are all the concave silhouette segments that involve
V_1 as a neighboring point. Take the middle point M_i = S_i(1/2)
of each silhouette segment S_i(t). Let N_i denote the interpolated
normal vector at M_i. Then each silhouette segment S_i(t) is
associated with a plane P_i which passes through M_i and has N_i
as its normal vector. It can be shown that the plane P_i contains
the viewing direction. The upper side of the plane P_i is defined by
the inequality N_i · (X − M_i) > 0. Clearly, the silhouette segment
S_i(t), t ∈ [0, 1], lies entirely above the plane P_i.

Fig. 14. An example of silhouette smoothing in a saddle-shaped region. (a) The result without fixing by vertex perturbation; (b) the corresponding mesh model of (a) and a close-up view, where the red curve is the smoothed silhouette blocked by the original mesh; (c) the result after fixing by vertex perturbation.

Fig. 15. Vertex perturbation to remove silhouette blocking. The dashed blue curve is the smoothed silhouette, which is blocked by three adjacent triangles. By lowering the vertex V_1 to V_1′, the smoothed silhouette curve is displayed properly.
Now we shall determine a displacement vector for V_1 such that
after perturbation V_1 will be below each plane P_i, i = 1, 2, 3. The
direction R of this displacement vector is the average of all the
normal vectors N_i, i.e., R = N_1 + N_2 + N_3, whose normalized
vector is denoted by R̂ = R/‖R‖. It follows that the length of
this displacement vector is given by

ℓ = max_{i=1,2,3} [ −(V_1 − M_i) · N_i / (R̂ · N_i) ].

Hence, V_1 is displaced to V_1′ by V_1′ = V_1 + ℓ R̂.
Similarly, we perturb all the other neighboring vertices of
concave silhouette segments. Since, for each silhouette segment
S_i(t), all vertices of its neighboring triangles are below the plane
P_i, these triangles will not block S_i(t) with respect to the
viewpoint, which lies on the plane P_i.
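The perturbation step can be sketched as below, assuming parallel projection and plain 3-tuples for points and vectors; `perturb_vertex` is a hypothetical helper mirroring the formulas above, not the paper's actual implementation.

```python
import math

def perturb_vertex(v, mids, normals):
    """Displace a neighboring vertex V so it no longer blocks the silhouette.

    mids[i]    : midpoint M_i = S_i(1/2) of the i-th concave segment
    normals[i] : interpolated normal N_i at M_i
    Implements R = sum_i N_i, R_hat = R / ||R||,
    l = max_i [ -((V - M_i) . N_i) / (R_hat . N_i) ], and V' = V + l * R_hat.
    """
    dot = lambda u, w: sum(x * y for x, y in zip(u, w))
    # direction: sum (average up to scale) of the segment normals, normalized
    r = tuple(sum(n[k] for n in normals) for k in range(3))
    r_hat = tuple(x / math.sqrt(dot(r, r)) for x in r)
    # signed length placing V' on or below every plane P_i
    l = max(-dot(tuple(a - b for a, b in zip(v, m)), n) / dot(r_hat, n)
            for m, n in zip(mids, normals))
    return tuple(vi + l * ri for vi, ri in zip(v, r_hat))
```

For a single blocking plane through the origin with normal (0, 0, 1) and a vertex at height 1, the vertex is lowered onto the plane.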
The above simple scheme for blocking removal has proven very
effective. Fig. 14 shows a coarse mesh with a close-up view to
reveal the blocking phenomenon, and Fig. 14(c) shows the unblocked
smooth silhouette after applying the above vertex perturbation.
The example in Fig. 22 also demonstrates the effectiveness of
this perturbation scheme.

In the above, only mesh triangles within a small neighborhood
of concave silhouette segments are considered for perturbation
to avoid silhouette blocking. Although triangles slightly beyond
this neighborhood may also block the concave silhouette segments,
we do not consider this extension, in order to minimize
implementation complexity. Moreover, such potential blocking by
relatively far triangles must occur very rarely, since it has not
been visually discernible in our extensive experiments.
IV. SPEEDUP WITH GPU
To speed up the smoothing and the rendering of the
refined mesh, we use the GPU as well as the CPU for the
computation described in Section III. We balance the loads of the
GPU and CPU as shown in Fig. 16 to parallelize the computation
as much as possible.

After the CPU computes the silhouette segments and completes
the remeshing, the GPU is fed with the mesh vertices together with
the remeshing information. Vertex perturbation and the locations of
sample points on silhouette segments are computed on the GPU.
After the vertices are filtered by the vertex processors, the positions
of all vertices are transformed to homogeneous clip space by the
vertex program, and all triangles are then rendered with Phong
shading by the fragment processors.
Fig. 16. The graphics pipeline.
V. EXPERIMENTAL RESULTS
In this section we present experimental results of our method
and compare it with the methods in [20] and [19] in terms of
rendering quality and efficiency.

Figs. 1, 17, 18, 19, 22, and 23 show examples generated
by our method. The fine meshes shown there are used only
as references for comparison with the remeshed coarse meshes
with reconstructed smooth silhouettes. For the textured mesh in
Fig. 19, linear interpolation of texture coordinates is used to
assign texture coordinates to the sample points on silhouette
curves. Fig. 22 shows how the silhouette blocking problem in
a saddle-shaped region (cf. Section III-E) is fixed by our vertex
perturbation technique. Fig. 23 provides an example showing that our
method also works well for meshes with non-uniformly sampled
data points. Real-time rendering of these models is demonstrated
in the submitted video.
Figs. 20 and 21 show two mesh surfaces with prominent feature
edges. Our method constructs satisfactory smooth feature curves
from the polygonal feature edges. Although a similar treatment of
feature curve smoothing has been studied before [4], we have
shown here how it can be integrated with our silhouette smoothing
technique in a unified framework (e.g., see the mesh model of a
pawn in Fig. 10).
Fig. 22 compares our method with the method in [20]. The result
by the method in [20] is not as good, because the interpolated
silhouette curve used there corresponds to certain mesh edges and
always passes through mesh vertices, leading to visual artifacts. In
contrast, the method of the present paper generates a more faithful
smooth silhouette that is visually coherent with a moving viewpoint
or a moving object, as can be seen in the accompanying video.
Compared with the PN-triangle method in [19], besides the memory
saving due to local remeshing, our method supports smooth
rendering of silhouettes under arbitrary "zoom-in", an operation
used frequently in applications. Fig. 24(b) shows a refined mesh
after three levels of subdivision by the PN-triangle method. The
silhouette by the PN-triangle method looks smooth in the current
view, but its polygonal appearance is revealed when zooming
in for a close-up view. To make the silhouettes smooth, the whole
model would need to be subdivided again, and the number of
triangle faces would increase significantly, since the PN-triangle
method operates in a viewpoint-independent manner, insensitive
to the changing viewing distance and the changing location of
silhouettes.
In contrast, the number of sample points is determined adaptively
in our method according to the curvature and the projected
height of silhouette segments. Therefore, our method ensures that
the silhouette curve always looks smooth via simple run-time
adaptive local remeshing when the model is scaled up during
zoom-in (see Fig. 24(c)).
As shown in Fig. 24, although the PN-triangle method uses
global patch reconstruction, the quality of its interior shading
is the same as that of the shaded mesh refined by our method or
the shaded coarse mesh. That is because the Phong shading
model used in our experiments is based on normal
interpolation, and the normal computed by Phong shading
at interior points of a triangle of the coarse mesh is almost the
same as the normal produced by the PN-triangle method at
those points. Hence, the interior shading quality is improved
little by the PN-triangle method despite the increased number
of triangles it uses for rendering.
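This observation follows from Phong-style normal interpolation itself: the shading normal at an interior point is the normalized barycentric blend of the vertex normals, independent of how finely the triangle is tessellated. A generic sketch (illustrative only, not code from either method):

```python
import math

def phong_normal(n0, n1, n2, u, v):
    """Unit shading normal at barycentric coordinates (u, v, 1-u-v)."""
    w = 1.0 - u - v
    # blend the three vertex normals barycentrically ...
    n = tuple(u * a + v * b + w * c for a, b, c in zip(n0, n1, n2))
    # ... and renormalize to unit length
    length = math.sqrt(sum(x * x for x in n))
    return tuple(x / length for x in n)
```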
We tested our silhouette smoothing method on a PC workstation
with a 3.0 GHz CPU, 512 MB of memory and an Nvidia GeForce
7800 GT graphics card. The window size was set to 1280 by 800
pixels. The test objects covered about 40%-60% of the window.
Table I compares the frame rates when applying the method in
[20], the PN-triangle method [19] and our method. The column
"Direct Drawing" is the control, showing the frame rates when
drawing objects without silhouette smoothing. All the methods
use vertex arrays to obtain the best frame rates possible. For
the PN-triangle method, the frame rates depend on the subdivision
level; in this test, the subdivision level varies from 2 to 3 to make
the models' silhouettes look smooth, giving comparable visual
quality. Since, unlike the PN-triangle method, the number of
triangles rendered by our method in each frame varies with the
viewing parameters, only typical triangle counts for our method
are listed in Table I for reference.
Since the three methods are designed to enhance the visual
quality of rendered coarse meshes, in the comparisons we use only
coarse meshes with no more than 600 triangles, except for the last
one, the "hippo" model. Note that rendering of such simple models
on a modern GPU is very fast. The frame rate for rendering a
single model of that size is usually in the range of thousands of
fps (frames per second), which is easily subject to noise caused by
timing inaccuracy, differences in GPU design and display driver
tweaks. Therefore, 60 copies of the same model were drawn to
keep the rendering frame rates in the normal range of less than
300 fps, except for the "hippo" model, which was drawn only
once.
As shown in Table I, our method is typically 2-3 times faster
than the method in [20] for different models. Compared with
the PN-triangle method in [19], our method also shows a 30% to
over 200% speedup for all tested models. From our experiments
(a) (b) (c) (d) (e)
Fig. 17. Bunny: (a) shaded ﬁne mesh (30,000 triangles); (b) coarse mesh (500 triangles); (c) shaded coarse mesh; (d) remeshing by our method (873
triangles); (e) Phong shading of the mesh in (d).
(a) (b) (c) (d) (e)
Fig. 18. Fish: (a) shaded ﬁne mesh (5000 triangles); (b) coarse mesh (500 triangles); (c) shaded coarse mesh; (d) remeshing by our method (1018 triangles);
(e) Phong shading of the mesh in (d).
(a) (b) (c) (d) (e)
Fig. 19. Texture head: (a) shaded ﬁne mesh (43,151 triangles); (b) coarse mesh (4,778 triangles); (c) shaded coarse mesh; (d) remeshing by our method
(6858 triangles); (e) Phong shading of the mesh in (d).
(a) (b) (c) (d) (e)
Fig. 20. Cone mesh: (a) coarse mesh (70 triangles); (b) shaded coarse model; (c) remeshing by our method (576 triangles); (d) Phong shading of the mesh
in (c); (e) another view.
(a) (b) (c) (d) (e)
Fig. 21. Fandisk: (a) ﬁne mesh (1,000 triangles); (b) coarse mesh (128 triangles); (c) shaded coarse mesh; (d) remeshing by our method (1899 triangles);
(e) Phong shading of the mesh in (d).
(a) (b) (c) (d) (e)
Fig. 22. Knot: (a) shaded ﬁne mesh (15,000 triangles); (b) shaded coarse mesh (480 triangles); (c) result by the method in [20]; (d) shaded reﬁned mesh by
our method without local perturbation (843 triangles); and (e) shaded reﬁned mesh by our method with local perturbation.
(a) (b)
Fig. 23. Hippo, an example of smoothing the silhouettes of a non-uniformly sampled model: (a) shaded refined model; (b) the refined mesh model by our method and a close-up view.
(a) (b) (c)
Fig. 24. Bear: (a) a coarse mesh (500 triangles); (b) the result by the PN-triangle method and a zoom-in view, showing non-smoothness; (c) the result by our method and the zoom-in view of the same part.
Mesh | Triangles: Original / PN (LOD) / Ours (typical) | Phong shading frame rates (fps): Direct Drawing / Wang's* / PN** / Ours (ortho) / Ours (persp.)
Sphere | 200 / 3200 (3) / 332 | 573 / 60 / 115 / 352 / 347
Knot | 480 / 7680 (3) / 884 | 605 / 56 / 53 / 152 / 147
Bunny | 500 / 8000 (3) / 800 | 442 / 53 / 50 / 75 / 68
Fish | 500 / 4500 (2) / 886 | 620 / 54 / 77 / 114 / 114
Bear | 500 / 8000 (3) / 893 | 633 / 54 / 52 / 117 / 112
Head | 500 / 8000 (3) / 904 | 448 / 52 / 54 / 78 / 75
Venus | 500 / 4500 (2) / 862 | 655 / 56 / 76 / 155 / 153
Cat | 500 / 8000 (3) / 842 | 372 / 53 / 52 / 74 / 74
Cow | 500 / 8000 (3) / 983 | 560 / 52 / 51 / 82 / 80
Torus | 576 / 5184 (2) / 960 | 592 / 57 / 67 / 172 / 167
Hippo (rendered once) | 43288 / 389592 (2) / 51842 | 462 / 43 / 60 / 99 / 96

TABLE I
PERFORMANCE COMPARISON OF OUR METHOD AND THE OTHER METHODS.
* Wang's method refers to the one in [20]. ** The PN-triangle method refers to the one in [19].
it has been observed that extracting silhouette triangles and
performing local perturbation each take only a small fraction of
the overall computation and rendering time (less than 4% each),
while computing the smoothed silhouettes and sampling points
on them takes the major portion of the total time. Furthermore,
with our scheme the tasks assigned to the CPU and those assigned
to the GPU take about equal amounts of time. It is, however,
possible that a scheme better suited to GPU computation exists,
which could further speed up the method.
VI. CONCLUSIONS
We have presented a method for silhouette smoothing of coarse
meshes through 3D curve interpolation and local remeshing
based on the notion of silhouette triangles. Both smooth silhouette
curves and feature curves are computed in a unified framework.
We have demonstrated the effectiveness of the method on a
number of examples. This method is a promising alternative to
LOD-based view-dependent methods, especially when a fine mesh
or an LOD model is not available.
VII. ACKNOWLEDGEMENT
This work is supported by the National Natural Science Foundation
(60473103, 60473127) and the National Key Basic Research and
Development Plan (2006CB303102, 2004CB318006).
REFERENCES
[1] D. I. Azuma, D. N. Wood, B. Curless, T. Duchamp, D. H. Salesin and W. Stuetzle, View-dependent refinement of multiresolution meshes with subdivision connectivity, Proceedings of AFRIGRAPH 2003, 2003, pp. 69-78.
[2] F. Benichou and G. Elber, Output sensitive extraction of silhouettes from polygonal geometry, Proceedings of the 7th Pacific Graphics Conference, 1999, pp. 60-69.
[3] T. Boubekeur, P. Reuter and C. Schlick, Scalar tagged PN triangles, Proceedings of Eurographics 2005, pp. 17-20.
[4] C. Dyken and M. Reimers, Real-time linear silhouette enhancement, Proceedings of Mathematical Methods for Curves and Surfaces 2004, July 2004, Tromso, Norway, pp. 135-143.
[5] J. D. Foley, A. van Dam, S. Feiner and J. Hughes, Computer Graphics: Principles and Practice, Addison-Wesley, 1990.
[6] P. Heckbert and M. Garland, Survey of polygonal surface simplification algorithms, SIGGRAPH'97 Course Notes #25, 1997.
[7] A. Hertzmann and D. Zorin, Illustrating smooth surfaces, Proceedings of SIGGRAPH'00, 2000, pp. 517-526.
[8] H. Hoppe, View-dependent refinement of progressive meshes, Proceedings of SIGGRAPH'97, pp. 189-198.
[9] H. Hoppe, Smooth view-dependent level-of-detail control and its application to terrain rendering, Proceedings of Visualization'98, pp. 35-42.
[10] D. Kirsanov, P. V. Sander and S. J. Gortler, Simple silhouettes for complex surfaces, Proceedings of the Eurographics Symposium on Geometry Processing, 2003, pp. 102-106.
[11] J. J. Koenderink, What does the occluding contour tell us about solid shape, Perception, Vol. 13, 1984, pp. 321-330.
[12] S. Kumar, D. Manocha, B. Garrett and M. Lin, Hierarchical back-face culling, Proceedings of the 7th Eurographics Workshop on Rendering, 1996, pp. 231-240.
[13] D. P. Luebke and C. Erikson, View-dependent simplification of arbitrary polygonal environments, Proceedings of SIGGRAPH'97, pp. 199-208.
[14] M. Olson and H. Zhang, Silhouette extraction in Hough space, Proceedings of Eurographics 2006, pp. 273-282.
[15] R. Pajarola, FastMesh: efficient view-dependent meshing, Proceedings of Pacific Graphics 2001, pp. 22-30.
[16] M. Pop, C. Duncan, G. Barequet, M. Goodrich, W. Huang and S. Kumar, Efficient perspective-accurate silhouette computation and applications, Proceedings of the Annual Symposium on Computational Geometry, 2001, pp. 60-68.
[17] P. V. Sander, X. Gu, S. J. Gortler, H. Hoppe and J. Snyder, Silhouette clipping, Proceedings of SIGGRAPH 2000, pp. 327-334.
[18] G. Thürmer and C. A. Wüthrich, Computing vertex normals from polygonal facets, Journal of Graphics Tools, 1998, vol. 3, no. 1, pp. 43-46.
[19] A. Vlachos, J. Peters, C. Boyd and J. Mitchell, Curved PN triangles, Proceedings of I3D'01, pp. 159-166.
[20] B. Wang, W. Wang, J. Wu and J. Sun, Silhouette smoothing by boundary curve interpolation, Proceedings of CAD/Graphics'03, Oct. 2003, Macau, pp. 197-202.
[21] J. Xia and A. Varshney, Dynamic view dependent simplification for polygonal models, Proceedings of Visualization'96, 1996, pp. 327-334.
[22] H. Xu, M. X. Nguyen, X. Yuan and B. Chen, Interactive silhouette rendering for point-based models, Proceedings of the Eurographics Symposium on Point-Based Graphics'04, pp. 13-18.
Lu Wang is currently pursuing the Ph.D. degree in the School of Computer Science and Technology, Shandong University, Jinan, China. She received her B.S. degree from the Department of Computer Science of Shandong Normal University, Jinan, China, in 2003. Her research interests include computer graphics, real-time rendering, non-photorealistic rendering, and geometric modeling.

Changhe Tu received his B.Sc., M.Eng. and Ph.D. from Shandong University in 1990, 1993 and 2003, respectively. He is now a professor in the School of Computer Science and Technology at Shandong University. His research interests include computer graphics, geometric modeling and processing, non-photorealistic rendering and virtual reality.

Wenping Wang is Associate Professor of Computer Science at the University of Hong Kong (HKU). His research covers computer graphics, geometric computing and visualization, and he has published over 100 papers in these fields. He received his B.Sc. (1983) and M.Eng. (1986) from Shandong University, and his Ph.D. (1992) from the University of Alberta, all in computer science. He is an associate editor of the Springer journal Computer Aided Geometric Design, and has been program chair of several international conferences, including Geometric Modeling and Processing (GMP 2000), Pacific Graphics (PG 2000 and PG 2003), the ACM Symposium on Virtual Reality Software and Technology (VRST 2001), and the ACM Symposium on Physical and Solid Modeling (SPM 2006). He received the Teaching Excellence Award of the Department of Computer Science at HKU in 2006 and the HKU Research Output Prize in 2007.

Xiangxu Meng received his B.Sc. and M.Eng. degrees from the Department of Computer Science of Shandong University in 1982 and 1985, respectively. He received his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, in 1998. He is a Professor in the School of Computer Science and Technology at Shandong University. His current research interests include human-computer interaction, virtual reality, computer graphics, CAD/CAM/CIMS, grid computing, visualization and scientific computing.

Bin Chan received his B.Eng., M.Phil. and Ph.D. from the University of Hong Kong in 1995, 1998 and 2005, respectively. He is now a research associate in the Department of Computer Science at the University of Hong Kong. His research interests include real-time rendering, virtual reality, visualization and global illumination.

Dongming Yan is a Ph.D. student of Computer Science at the University of Hong Kong. He received his B.Sc. and Master degrees in Computer Science from Tsinghua University, China, in 2002 and 2005, respectively. His current research interests include geometric modeling, computer-aided design and real-time rendering.