Real-Time Rendering of Water Surfaces with Cartography-Oriented Design
Amir Semmo
Jan Eric Kyprianidis
Matthias Trapp
Jürgen Döllner
Hasso-Plattner-Institut, Germany
TU Berlin, Germany
Figure 1: Illustrative rendering techniques implemented in our system: waterlining, contour-hatching, water stippling, and labeling.
Abstract

More than 70% of the Earth's surface is covered by oceans, seas, and lakes, making water surfaces one of the primary elements in geospatial visualization. Traditional approaches in computer graphics simulate and animate water surfaces in the most realistic ways. However, to improve orientation, navigation, and analysis tasks within 3D virtual environments, these surfaces need to be carefully designed to enhance shape perception and land-water distinction. We present an interactive system that renders water surfaces with cartography-oriented design using the conventions of mapmakers. Our approach is based on the observation that hand-drawn maps utilize and align texture features to shorelines with non-linear distance to improve figure-ground perception and express motion. To obtain local orientation and principal curvature directions, our system first computes distance and feature-aligned distance maps. Given these maps, waterlining, water stippling, contour-hatching, and labeling are applied in real-time with spatial and temporal coherence. The presented methods can be useful for map exploration, landscaping, urban planning, and disaster management, which is demonstrated by various real-world virtual 3D city and landscape models.
CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display Algorithms

Keywords: water surfaces, illustrative rendering, cartography-oriented design, contour-hatching, waterlining, water stippling
1 Introduction
Water surfaces represent key elements in mapmaking and surveying because they shape our world and convey important information to a number of domains, including hydrology, urban planning, and environmental sciences. Hand-drawn illustrations of such surfaces are often carefully designed to help a viewer explore the geospatial environment. Usually, these designs effectively establish land-water distinction to facilitate orientation, navigation, or analysis tasks. Yet the many different shapes of water surfaces pose a number of challenges that have required contemporary craftsmanship and design skills. Typical challenges include (1) the symbolization of the land-water interface using (2) design elements (e.g., strokes) that exactly align with the shorelines to effectively provide figure-ground perception and express motion. Over centuries, cartographers have developed illustration techniques and design principles that address these challenges [Imhof 1972; Robinson et al. 1995; Merian 2005]. Certain techniques have become abundantly used in modern cartography, such as waterlining, hatching, or water stippling; most of them express aesthetic appeal, provide excellent figure-ground balance, and establish a sense of motion [Christensen 2008; Huffman 2010]. For instance, fine solid lines are placed parallel to shorelines to effectively communicate shoreline distances (Figure 1 left).
To date, computer-generated illustrations of water surfaces are mainly based on photorealistic rendering techniques [Darles et al. 2011]. Approaches in illustrative rendering have been subject to cartoon-like water effects [Eden et al. 2007; Yu et al. 2007] or other primary cartographic elements, such as terrain [Bratkova et al. 2009; Buchin et al. 2004], vegetation [Deussen and Strothotte 2000; Coconu et al. 2006], or buildings [Döllner and Walther 2003], but have neglected the many challenges water exhibits for map design.
This paper presents an interactive system for rendering water surfaces with cartography-oriented design that addresses the aforementioned challenges. For this, our work makes the following contributions. First, we identify and describe design principles from traditional and modern cartography to create more effective illustrations of water surfaces. Based on these findings, Euclidean distance maps are computed using a novel feature-aligned distance transform to derive principal curvature directions of complex water shapes. Using this information, we contribute real-time rendering techniques for waterlining, contour-hatching, water stippling, and labeling (Figure 1) that facilitate a view-dependent level-of-abstraction [Semmo et al. 2012]. Finally, we tested these rendering techniques with various real-world virtual 3D city and landscape models. Our results reveal potential applications within 3D geovirtual environments for map exploration, landscaping, urban planning, and flooding simulation.

© ACM, 2013. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version will be published in Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe'13). CAe'13, July 19-21, 2013, Anaheim, CA, USA.
The remainder of this paper is structured as follows. Section 2
summarizes design principles derived from hand-drawn maps and
textbooks on map design. Section 3 reviews related works on texture
synthesis and illustrative rendering. Section 4 presents our methods
used for rendering water surfaces according to the identified design
principles, whose implementation details and results are presented
in Section 5. Finally, Section 6 concludes this paper.
2 Design Principles from Cartography
Well-designed illustrations of water surfaces provide figure-ground perception, establish a sense of motion, and communicate meta-information (e.g., water names). A variety of illustration techniques have been developed by cartographers whose design principles address feature-aligned texturing and symbolization. However, most cartographers develop and vary their own illustration styles. Therefore, we analyzed the work of famous cartographers like Matthäus Merian and Jouvin de Rochefort printed in map collections (e.g., [Merian 2005]), and textbooks on map design and thematic cartography [French 1918; Imhof 1975; MacEachren 1995; Kraak and Ormeling 2003; Tyner 2010]. From this analysis, we extracted the following design principles, which we used for our illustrative rendering techniques:
Waterlining. Waterlining became popular in the first half of the 20th century for lithographed maps, because areas of solid color tones could not be produced at that time. With this technique, (P1) fine solid lines are drawn parallel to shorelines, and the spacing between succeeding lines gradually increases [French 1918]. A common mistake is to "make the lines excessively wavy or rippled" [French 1918] or to space the lines with insufficient continuity. If drawn with care, waterlining provides dynamism and effectively propagates distance information [Christensen 2008; Huffman 2010].
Water Stippling. Another conventional technique for hand-drawn black-and-white maps is water stippling [Tyner 2010]. Similar to waterlining, distance information is propagated by aligning small dots with non-linear distances to shorelines. Compared to stippling in traditional artwork, water surfaces are depicted by (P2) stipples with varying density that irregularly overlap along streamlines to establish a sense of motion. In perspective views, some cartographers draw (P3) stipples with higher density at occluded areas (e.g., near bridges) to improve depth perception, or with varying density to symbolize flow velocity.
Contour-Hatching and Vignetting. Contour-hatching has been widely used to balance the accurate propagation of distance information and the establishment of motion: (P4) individual strokes are placed with high density near shorelines and complemented by loose lines placed with increasing irregularity towards the middle stream. In contrast to waterlines, (P5) excessively wavy strokes are drawn to express motion. Alternative illustrations use non-feature-aligned cross-hatches for land-water distinction. These methods have been replaced in the second half of the 20th century by color tones and drop shadows [Tyner 2010]. Today, coastal vignettes with solid color tones are used to establish figure-ground but, in contrast to contour-hatching, fail to express water movements.
Labeling. Labels are design elements used in cartography to enrich maps with meta-information. By convention, (P6) cartographers depict names of water features with italic (slanted) letters to distinguish them from land features, for which upright letters are used [Tyner 2010]. Typically, (P7) names follow principal curvature directions and are placed within water surfaces [Imhof 1975] to ensure legibility.
Symbolization. Symbolization is a common practice in cartography to reflect data and phenomena [Imhof 1972]. To date, a standardized symbolization for water surfaces has not been established. However, certain conventions have been used over the years, including (P8) the irregular placement of signatures with area-wide coverage to communicate water features (e.g., wetland, saltwater vs. freshwater), or the placement of glyphs along streamlines to symbolize the flow direction of rivers.

In map design, blue is established as the conventional color tone for water surfaces; our work considers all of the identified design principles.
3 Related Work
Our work is related to previous works on texture synthesis, illustrative rendering, and cartography-oriented design.
Feature-guided Texture Synthesis. From our analysis in Section 2, we observed that texture features are aligned with shorelines to symbolize the land-water interface. Geometric properties of complex shapes can be reconstructed quite effectively using distance fields [Frisken et al. 2000]. Our work uses distance maps to synthesize waterlines and align stipples with shorelines in real-time. Most algorithms use vector propagation to compute these maps by an approximate Euclidean distance transform [Danielsson 1980] (e.g., jump-flooding [Rong and Tan 2006]). We use the fast, workload-efficient Parallel Banding Algorithm (PBA) to compute exact distance maps on the GPU [Cao et al. 2010].
Feature-guided texturing based on principal curvature directions can significantly improve shape recognition [Girshick et al. 2000]. For an overview of this topic we refer to the survey by Wei et al. [2009]. Recent approaches use normal-ray differential geometry [Kim et al. 2008b] or diffusion techniques [Xu et al. 2009] to derive principal curvature directions, or use learning-based approaches [Kalogerakis et al. 2012; Gerl and Isenberg 2013] to align textures to salient feature curves. However, these methods either require significant (pre-)processing time or do not provide spatial and temporal coherence. By contrast, we present the concept of a feature-aligned distance transform that provides continuous Euclidean distance values in a tangential direction to the shorelines. We use a GPU-based flooding algorithm to compute a feature-aligned distance map in real-time. Together with bilinear texture interpolation [Green 2007], this map can be used to parameterize and place texture features along shorelines with frame-to-frame coherence.
Non-Photorealistic Rendering.
Coastal vignettes and waterlines
are used for cartographic line generalization [Christensen 1999] and
in geoinformation systems to improve figure-ground perception.
Figure 2: Schematic overview of our system, which implements cartography-oriented shading using the results of quantitative surface analysis.
Stippling is a well-studied field in illustrative rendering for digital half-toning. Conventional approaches represent local tone by a well-spaced placement of small dots [Kyprianidis et al. 2012]. Previous work proposed feature-guided image stippling [Kim et al. 2008a; Kim et al. 2010] that is adapted to the gradient direction of distance maps. Water stippling works similarly, but does not match density distributions to local tones (e.g., by blue noise) because dots irregularly overlap with varying density (P2). For this, we propose an enhancement to Glanville's [2004] texture bombing algorithm that aligns water stipples with waterlines and renders them in real-time.
Feature-guided hatching has received significant attention in previous works and comprises user-defined [Salisbury et al. 1997], patch-based [Praun et al. 2000; Webb et al. 2002], shading-based [Praun et al. 2001], or learning-based [Kalogerakis et al. 2012; Gerl and Isenberg 2013] algorithms. The main difference to our approach is that texture coordinates are obtained by Euclidean and feature-aligned distance maps, which gives us more artistic control over parameterizing individual strokes within real-time shaders. For cross-hatching, tonal art maps [Praun et al. 2001] are aligned to Euclidean distance maps according to the view distance. This way, a continuous level-of-abstraction [Semmo et al. 2012] can be achieved that significantly reduces visual clutter at high view distances. A level-of-abstraction may also be combined with shape simplifications to produce aesthetic renditions of digital maps [Isenberg 2013].
Cartography-Oriented Design. Illustrative rendering of water surfaces is not a new approach. Previous works used colorization, edge enhancement, and texturing for cartoon-like water effects (e.g., ripples) and to emphasize liquid movement [Eden et al. 2007; Yu et al. 2007]. However, these rendering styles do not relate to traditional map design in terms of figure-ground organization for map exploration, navigation, or landscaping. Rendering with cartography-oriented design has been subject to other primary map elements, such as terrain [Bratkova et al. 2009], trees [Deussen and Strothotte 2000], and buildings [Döllner and Walther 2003], as well as landscape [Coconu et al. 2006] and city models [Jobst and Döllner 2008]. We complement these techniques by our work on water surfaces and exemplify how cartography-oriented visualization of 3D geovirtual environments can be achieved by combining our results with traditional relief presentations of landscapes [Buchin et al. 2004].

Internal labeling has been addressed in the visualization of virtual 3D scenes. Previous works compute geometric hulls [Maass and Döllner 2008] or derive medial axes based on a distance transform [Götzelmann et al. 2005; Ropinski et al. 2007; Cipriano and Gleicher 2008] for shape-aligned labeling. Our work also uses distance maps to align font glyphs with the shoreline distance and orientation.
4 Method
An overview of our system is shown in Figure 2. The input data consists of a set of 2D or 3D water surfaces that are typically defined as triangular irregular networks. Using orthographic projections, the models' shapes are captured in 2D binary masks to facilitate a quantitative surface analysis that works in image-space. This analysis includes the computation of Euclidean and feature-aligned distance maps by iteratively propagating distance information in the normal and tangent directions of shorelines. This information is used for cartography-oriented shading, including waterlining, contour-hatching, water stippling, and labeling. To enable a continuous level-of-abstraction [Semmo et al. 2012], the shading results are parameterized, blended, and mapped onto the surfaces according to the view distance. Because water surfaces are processed separately, our system can be seamlessly embedded into existing rendering systems, or combined with rendering techniques that pre-process the 3D virtual environment (e.g., terrain [Buchin et al. 2004]).
4.1 Quantitative Surface Analysis
Our goal is to provide high-quality, interactive illustrations of water surfaces that comply with the identified design principles (Section 2). This section provides background on how geometric properties of water surfaces can be derived using distance transforms. It includes a novel feature-aligned distance transform that is used to align individual strokes with the orientation of shorelines.
4.1.1 Euclidean Distance Transform
Euclidean distance information is computed to determine parts of a water surface with equal shoreline distance. Let I : R² → {0, 1} denote a water surface captured as a binary image, with I(p) = 0 marking water areas and I(p) = 1 marking land areas by pixels p ∈ I (see Figure 2 top left). A distance transform of I defined as

D(p) = min_{q ∈ I} ( ||p − q|| + χ(q) ), with χ(q) = 0 if I(q) = 1 and χ(q) = ∞ otherwise,

obtains the minimum Euclidean distance of each pixel to a shoreline. A fast, parallel implementation to compute this information as a distance map D (Figure 3a) is based on the PBA [Cao et al. 2010]. Iteratively propagating distance information with the PBA is also performed to obtain the nearest shoreline position as directional information (Figure 3b). Subsequently, D is used as a lookup function for shoreline distances d ∈ R⁺, and the directional map is used to look up the nearest shoreline position b ∈ I.
Figure 3: Exemplary visualization of (a) normalized shoreline distances and (b) shoreline directions for a given water surface.
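For illustration, the transform above can be sketched in a few lines. This brute-force version is only meant to make the formula concrete; the system itself uses the parallel PBA on the GPU:

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform of a binary mask.

    mask[p] == 1 marks land, mask[p] == 0 marks water; the result stores,
    for every pixel, the distance to the nearest land (shoreline) pixel
    and that pixel's position. This is a direct reading of
    D(p) = min_q (||p - q|| + chi(q)), not a production algorithm.
    """
    land = np.argwhere(mask == 1)          # pixels q with chi(q) = 0
    h, w = mask.shape
    D = np.empty((h, w))
    near = np.empty((h, w, 2), dtype=int)  # nearest shoreline position b
    for y in range(h):
        for x in range(w):
            d2 = ((land - (y, x)) ** 2).sum(axis=1)   # squared distances
            i = int(np.argmin(d2))
            D[y, x] = np.sqrt(d2[i])
            near[y, x] = land[i]
    return D, near

# A 1x5 strip with land at both ends: water distances are 1, 2, 1.
mask = np.array([[1, 0, 0, 0, 1]])
D, near = distance_transform(mask)
print(D[0].tolist())  # [0.0, 1.0, 2.0, 1.0, 0.0]
```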
4.1.2 Local Orientation Estimation
The estimation of local orientation is based on the image gradients of the distance map D. A popular choice to approximate the directional derivatives in the x- and y-directions is the Sobel filter which, however, yields non-smooth tangent information on the medial axes because opposite gradients cancel out (Figure 4 left). A simple alternative is to use the smoothed structure tensor [Brox et al. 2006] and perform an eigenanalysis to obtain gradient and tangent information. This leads to more stable estimates of the local orientation (Figure 4 right).

Figure 4: Tangential field obtained with the Sobel filter (left) and the smoothed structure tensor (right).
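The structure-tensor estimate can be sketched as follows; the box smoothing is a simplification of the Gaussian smoothing in [Brox et al. 2006], and the closed-form angle replaces an explicit eigenanalysis:

```python
import numpy as np

def local_orientation(D, smoothing_passes=2):
    """Estimate a smooth tangent field from distance map D via the
    smoothed structure tensor (a sketch). Returns per-pixel tangent
    angles in radians."""
    gy, gx = np.gradient(D)                 # directional derivatives
    Jxx, Jxy, Jyy = gx * gx, gx * gy, gy * gy   # structure tensor entries
    def smooth(a):                          # crude box smoothing (assumption)
        for _ in range(smoothing_passes):
            a = (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
                   + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5.0
        return a
    Jxx, Jxy, Jyy = smooth(Jxx), smooth(Jxy), smooth(Jyy)
    # major-eigenvector angle of a symmetric 2x2 tensor (the gradient
    # orientation); the tangent is orthogonal to it
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0

# For a linear ramp D(x, y) = x the gradient points in +x, so tangents
# are vertical (angle pi/2).
D = np.tile(np.arange(8.0), (8, 1))
theta = local_orientation(D)
print(round(float(theta[4, 4]), 3))  # 1.571
```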
4.1.3 Medial Axes Computation
The medial axes are derived from the distance map D [Cao et al. 2010] to align design elements (e.g., labels) along the middle stream of water surfaces. Basically, the medial axes are obtained by comparing and thresholding the directions to the nearest shorelines in the local neighborhood of each p ∈ I. For this, the unsigned gradient orientation n ∈ R² of the smoothed structure tensor is used to probe the nearest shoreline positions on either side of p:

b₊ = p − B(p + n), b₋ = p − B(p − n),

where B(q) denotes the nearest shoreline position of pixel q. Thresholding the angle between the normalized directions b̂₊ and b̂₋ then yields an approximation of the medial axes:

arccos(b̂₊ · b̂₋) > θ, θ ∈ [0, π].

In addition, the shoreline distance is thresholded by δ ∈ [0, 1] to avoid placing design elements too close to shorelines. For all the examples in this paper, we use θ = 0.75π and δ = 0.8 (Figure 5).

Figure 5: Medial axes for θ = 0.75π with δ = 0.0 (left) and δ = 0.8 (right).
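A minimal sketch of this angle thresholding, assuming a precomputed nearest-shoreline map and probing with axis-aligned offsets instead of the structure-tensor orientation n (a simplification):

```python
import numpy as np

def medial_axis(D_near, theta=0.75 * np.pi):
    """Approximate medial axes by thresholding the angle between the
    directions to the nearest shoreline at two probes around each pixel.
    D_near[y, x] holds the nearest shoreline position of pixel (y, x)."""
    h, w, _ = D_near.shape
    axis = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            b1 = D_near[y, x + 1] - (y, x)   # direction at the right probe
            b2 = D_near[y, x - 1] - (y, x)   # direction at the left probe
            n1, n2 = np.linalg.norm(b1), np.linalg.norm(b2)
            if n1 == 0 or n2 == 0:
                continue                     # probe lies on a shoreline
            cosang = float(b1 @ b2) / (n1 * n2)
            if np.arccos(np.clip(cosang, -1.0, 1.0)) > theta:
                axis[y, x] = True
    return axis

# Strip of water between land columns 0 and 6: the medial axis is column 3,
# where the two probes point to opposite shorelines.
D_near = np.array([[(y, 0) if x <= 3 else (y, 6) for x in range(7)]
                   for y in range(3)])
axis = medial_axis(D_near)
print(bool(axis[1, 3]))  # True
```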
4.1.4 Feature-aligned Distance Transform
Contour-hatching for water surfaces is a complex problem, since the properties of individual strokes (e.g., length, spacing) vary with the shoreline distance (P4). A typical approach is to define orientation fields on a surface to guide an example-based texture synthesis to salient feature curves [Wei et al. 2009]. Yet animating individual strokes on water surfaces requires fine control over the parameterization and placement per rendering pass, in particular to simulate water movements. Our approach parameterizes the level-set curves of distance map D to obtain Euclidean distance values along its tangential field. By parameterizing these feature-aligned distances (v-coordinate) and the shoreline distances (u-coordinate), they are directly used as texture coordinates within real-time shaders (Figure 6).

Figure 6: Exemplary distance maps computed by our system: surface mask, shoreline distance, feature-aligned distance, and resulting texture coordinates.
For computing a feature-aligned distance map T, we use an approach similar to vector propagation [Danielsson 1980]. Using the non-normalized distance map D, level sets correspond to the integral part of the shoreline distances (e.g., ⌊d⌋ = 0 for the zero level set). Starting with the shorelines, random pixels are selected as seed points from which (1) Euclidean distances are propagated along the level sets, and (2) seed point information is propagated to the inner level sets. These two steps are repeated for each level set until no more pixels are available for processing. We implemented a parallel algorithm by iteratively flooding distance information within the local neighborhood (e.g., 3 × 3) of a pixel, which dynamically propagates seed points during distance map construction (Figure 7). An efficient parallel algorithm for the normalization of this map is based on a reduction [Nehab et al. 2011].
Figure 7: Intermediate results of our algorithm that iteratively computes a feature-aligned distance map on a GPU (256 × 256 pixels, 503 iterations in total).
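The propagation along level sets can be sketched sequentially as follows; this BFS-style toy version replaces the parallel GPU flooding and omits the inward seed propagation:

```python
import numpy as np
from collections import deque

def feature_aligned_distance(D, seeds):
    """Toy feature-aligned distance transform: propagate, per level set of
    floor(D), the arc length from the nearest seed pixel through 3x3
    neighborhoods. A sequential stand-in for the paper's GPU flooding."""
    level = np.floor(D).astype(int)
    T = np.full(D.shape, np.inf)
    q = deque()
    for (y, x) in seeds:
        T[y, x] = 0.0
        q.append((y, x))
    h, w = D.shape
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                # only flood within the same level set
                if 0 <= ny < h and 0 <= nx < w and level[ny, nx] == level[y, x]:
                    step = (dy * dy + dx * dx) ** 0.5
                    if T[y, x] + step < T[ny, nx]:
                        T[ny, nx] = T[y, x] + step
                        q.append((ny, nx))
    return T

# Demo: D increases with x, so level sets are the image columns; with one
# seed per column in row 0, T grows along each column.
D = np.tile(np.arange(5.0), (5, 1))
T = feature_aligned_distance(D, seeds=[(0, x) for x in range(5)])
print(T[4, 1])  # 4.0
```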
Exemplary results show that our approach provides continuous feature-aligned distance values (Figures 6 and 7). Contrary to texture synthesis based on energy minimization (e.g., [Xu et al. 2009]), ours does not provide continuity across the level sets of a surface. But since individual texture features (e.g., hatches) are aligned with the level-set curves, no such constraint is required. This allows us to perform an exact distance transform, since no compression or stretching is required to meet continuity in all major directions of a surface. Because texture features can only be placed on the level sets, choosing an adequate distance map resolution is important to balance rendering quality and performance. On the one hand, vector propagation along the level sets performs non-linearly with the map resolution, and optimization techniques known from jump flooding [Rong and Tan 2006] are less helpful since highly curved sections require small step widths. On the other hand, too low map resolutions insufficiently approximate the shorelines' directions. As a compromise, we utilized bilinear sampling for a piecewise-linear approximation of shoreline and feature-aligned distances. This approach has proven effective for the magnification of glyph contours, even with low-resolution distance maps [Green 2007]. Because bilinear sampling is able to accurately reconstruct distance information, even low-resolution feature-aligned distance maps can be used for rendering and computed in real-time using our GPU-based flooding algorithm (Section 5).

Figure 8: Bilinear filtering of a feature-aligned distance map: (a) discrete input with distance lines, (b) continuous output.
Bilinear Sampling. In contrast to signed distance maps, the accurate reconstruction of feature-aligned distance values requires a modified version of bilinear sampling to avoid filtering across the level sets of D. For this, the shoreline distance for a point p ∈ I is determined and compared to the distance information of the four nearest samples p₁ to p₄ of T. A mask g(q) = 0 for q ∈ {p₁, p₂, p₃, p₄} omits a sample from interpolation if p and q correspond to different level sets, otherwise g(q) = 1. The interpolants A and B of the two sample rows and the final value T(p) are then computed as in regular bilinear sampling, but with the weights of masked samples redistributed to the remaining samples. As is shown in Figure 8, this modified version provides continuous feature-aligned distance values on the level sets.
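A sketch of such masked bilinear sampling, with the assumption that the weights of omitted samples are simply renormalized:

```python
import numpy as np

def masked_bilinear(T, level, y, x):
    """Sample T at (y, x) with bilinear weights, omitting any of the four
    neighbors whose level set differs from the queried pixel's. A sketch of
    the modified bilinear sampling; weight renormalization is an assumption
    about how masked samples are redistributed."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    uy, ux = y - y0, x - x0
    lev = level[int(round(y)), int(round(x))]   # level set of the query
    acc = wsum = 0.0
    for dy, dx, w in ((0, 0, (1 - uy) * (1 - ux)), (0, 1, (1 - uy) * ux),
                      (1, 0, uy * (1 - ux)), (1, 1, uy * ux)):
        sy, sx = y0 + dy, x0 + dx
        if level[sy, sx] == lev:                # g(q) = 1
            acc += w * T[sy, sx]
            wsum += w
    return acc / wsum if wsum > 0 else T[int(round(y)), int(round(x))]

# Two level sets side by side: sampling halfway down the left column
# interpolates only the left samples, ignoring the right level set.
T = np.array([[0.0, 10.0], [1.0, 11.0]])
level = np.array([[0, 1], [0, 1]])
print(masked_bilinear(T, level, 0.5, 0.0))  # 0.5
```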
4.2 Shading Techniques
The results of the quantitative surface analysis are utilized for
cartography-oriented shading (Section 2). The following techniques
utilize pixel shaders and texturing combined with bilinear sampling
to accurately reconstruct distance information. These techniques
can be parameterized in terms of tone and density to provide a
view-dependent level-of-abstraction (see the Appendix).
Figure 10: Waterlining parameterized by shoreline distance, fading between the waterline color and the water body color.
4.2.1 Waterlining and Water Stippling
In the following, a non-normalized version of the distance map D is used to apply waterlining and stippling independently of a water surface's scale (e.g., oceans vs. lakes). To comply with non-equidistant interspaces (P1), target distance values are computed using a non-linear step function:

φ(d) = (⌊(s · d)^e + h⌋ − h)^{1/e} / s.

The spacing of φ can be parameterized by e, s ∈ R⁺ to define a corresponding number of steps in the interval of [0, 1] (Figure 9). In addition, these steps can be shifted along d using h ∈ [0, 1].

Figure 9: Non-linear step function φ(d) for s = 40, e = 0.6, h = 0.
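The step function can be sketched as follows; the grouping of s, e, and h is a reconstruction, so treat the exact form as an approximation of the published one:

```python
import math

def phi(d, s=40.0, e=0.6, h=0.0):
    """Non-linear step function mapping a shoreline distance d to the next
    lower waterline distance. With e < 1, the spacing between successive
    waterlines grows with d, per design principle P1."""
    return (math.floor((s * d) ** e + h) - h) ** (1.0 / e) / s

# Waterlines sit at the fixed points d_k = k^(1/e) / s, whose spacing
# increases with d:
lines = [(k ** (1.0 / 0.6)) / 40.0 for k in range(1, 5)]
gaps = [b - a for a, b in zip(lines, lines[1:])]
print(all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:])))  # True
```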
Waterlining. Waterlines correspond to shaded areas of a water surface with equal shoreline distance. To render waterlines, the distances τ ∈ R⁺ between the positions obtained by φ are thresholded by a corresponding width ψ ∈ R⁺, and padded by fade-in and fade-out intervals (Figure 10) for antialiasing and to provide smooth transitions. For a continuous level-of-abstraction, the step function is parameterized by the view distance. At high view distances, this significantly reduces the number of rendered waterlines and provides a smooth transition while zooming in (see the accompanying video).
Water Stippling. Water stippling refers to placing small dots with irregular distribution along waterlines to convey shape and motion. Our algorithm uses an enhanced variant of Glanville's [2004] texture bombing to place water stipples with feature-aligned distribution and irregular density. The basic idea of texture bombing is to randomly place glyphs in regularly distributed grid cells. We extend this algorithm by three phases: stipple selection, stipple displacement, and stipple filtering (Figure 11). Instead of using a random placement of stipples, offsets are computed that align them with the waterlines obtained by φ. Our algorithm starts with stipples that are centered in regularly distributed grid cells and mapped onto a water surface (Figure 11a).

1. Stipple Selection: Stipples within grid cells that cross a waterline are selected for further processing (Figure 11a). For this, the distance to the next waterline (d − φ(d)) is thresholded.

2. Stipple Displacement: The gradient direction of D is used to compute the approximate target position on the nearest waterline. If origin and target positions correspond to the same grid cell (first phase), a stipple is displaced towards the target position. This results in stipples lined up with the waterlines (Figure 11b).

3. Stipple Filtering: The displacement of stipples in the gradient direction of D increases the irregular distribution along waterlines. To render stipples with non-regular intervals, noise or pseudo-random numbers are used for additional filtering (Figure 11c). Figure 12 exemplifies that this improves randomness.

Up to this point, stipples might be rendered with low density because waterlines force them to split into multiple directions. To regularize the density near coastal areas, phases 1-3 are repeated to place additional layers of stipples within slightly shifted grid cells. We found that two layers were sufficient to meet this requirement (Figure 12c). We experimented with different parameterizations to locally vary the stipple density and tone, for example using tonal art maps to symbolize flow velocity (P3). In addition, the iterative application of our algorithm with shifted step functions can be used to indicate highlights or shadowed areas. For instance, a second pass shifted halfway by h = 0.5 is used to place additional stipples at occluded areas (e.g., near bridges, Figure 19) to improve the depth perception of a virtual 3D scene (P3).
Figure 11: Schematic overview of the water stippling phases: (a) selection, (b) displacement, (c) filtering. Waterlines are marked as red lines, rendered stipples as black dots.
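The three phases can be sketched on a discrete distance map as follows; the grid size, thresholds, and keep probability are illustrative, not the paper's parameters:

```python
import math, random

def stipple_points(D, phi, cell=4, thresh=0.5, keep=0.7, seed=1):
    """Sketch of the three stippling phases on a distance map D (a list of
    rows): select cell-centered stipples near a waterline, displace them
    toward it along the distance gradient, then randomly filter them."""
    rng = random.Random(seed)
    h, w = len(D), len(D[0])
    out = []
    for cy in range(cell // 2, h - 1, cell):
        for cx in range(cell // 2, w - 1, cell):
            d = D[cy][cx]
            if abs(d - phi(d)) > thresh:       # 1. selection
                continue
            gx = D[cy][cx + 1] - D[cy][cx]     # 2. displacement along the
            gy = D[cy + 1][cx] - D[cy][cx]     #    gradient of D
            n = math.hypot(gx, gy) or 1.0
            off = d - phi(d)
            x, y = cx - off * gx / n, cy - off * gy / n
            if rng.random() < keep:            # 3. filtering
                out.append((x, y))
    return out

# Demo: distance grows with x; waterlines every 5 pixels, so the surviving
# stipples line up on the waterline at x = 10.
D = [[float(x) for x in range(20)] for _ in range(20)]
pts = stipple_points(D, lambda d: math.floor(d / 5.0) * 5.0)
```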
4.2.2 Contour-Hatching
To symbolize water movements, we developed a novel contour-hatching technique. Once a feature-aligned distance map is computed, individual stroke maps are irregularly placed with non-linear distance to shorelines to express motion. Similar to Kalogerakis et al. [2012], parameters are defined to provide artistic control over this placement:

Length (l ∈ R⁺) defines the length of a stroke.
Thickness (t ∈ [0, 1]) defines the width of a stroke.
Spacing (s ∈ [0, 1]) controls the stroke density.
Randomness (r ∈ [0, 1]) controls the stroke irregularity.

The main idea is to derive texture coordinates (u, v) for each rendering fragment by bilinearly sampling the distance maps D and T. To obtain the u-coordinate, the waterline positions obtained by φ and the stroke width ψ are used to compute the fraction u = (d − φ(d)) / ψ. To obtain the corresponding v-coordinate, feature-aligned distance values of the sampled distance map T are scaled by l to match the desired stroke length. In addition, noise is used to clamp the v-coordinate for an irregular placement of individual strokes, and the s and r parameters are used for density control and filtering. To render contour-hatches excessively wavy (P5), texture maps digitized from hand-drawn strokes are used. Because the stroke placement works in object-space, it provides frame-to-frame coherence and avoids the shower door effect known from techniques that work in image-space (e.g., [Kim et al. 2008b]).
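The texture-coordinate derivation can be sketched as follows, assuming u is measured from the nearest lower waterline φ(d); ψ, l, and the noise term are illustrative values:

```python
import math

def hatch_uv(d, t_feat, phi, psi=0.05, length=0.2, noise=0.0):
    """Texture-coordinate derivation for a hatching fragment (a sketch):
    u spans the stroke width psi above the nearest waterline phi(d), and
    v runs along the feature-aligned distance t_feat, scaled by the
    stroke length."""
    u = (d - phi(d)) / psi                # across-stroke coordinate
    v = (t_feat / length + noise) % 1.0   # along-stroke coordinate, repeated
    return u, v

# With waterlines every 0.1 distance units, a fragment at d = 0.55 sits at
# the far edge of a stroke of width 0.05, halfway along its length.
u, v = hatch_uv(0.55, 0.1, lambda d: math.floor(d * 10.0) / 10.0)
```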
From our analysis in Section 2, we observed that stroke layers of varying tone and density are used based on the shoreline distance (P4). This observation can be modeled by our algorithm using three layers with different parameter sets: dense, solid strokes near shorelines, loose strokes with irregular density, and strokes with shorter lengths near the medial axes (see Figure 13). Because our technique is texture-based, water movements can be modeled quite easily by shifting individual strokes along the major directions of the distance maps D or T, for instance by a temporal displacement of the v-coordinate using a sine function to animate rivers.

Figure 13: Contour-hatching for water surfaces: (left) dense, wavy strokes near shorelines, (right) loose strokes near the medial axes.

Figure 12: A result of our water stippling technique showing a non-linear, feature-aligned, and irregular distribution of stipples: (a) overview, (b) middle stream, (c) coastal area.
4.2.3 Water Vignetting and Cross-Hatching
In modern cartography, water vignettes are based on color gradi-
ents. A simple approach is to threshold the shoreline distance and
interpolate between a shoreline and water body’s color. We used
this effect to complement our waterlining technique (Figure 1 left).
A similar effect can be achieved by cross-hatching the shoreline
areas with a tonal art map [Praun et al. 2001]. We used a map of five
varying levels of stroke size and density, which were blended and
mapped onto water surfaces according to the shoreline distance, and
parameterized according to the view distance to create a continuous
level-of-abstraction [Semmo et al. 2012]. In contrast to tone-based
shading, this approach does not affect the shading of landmass (Fig-
ure 15), which allows us to visualize additional geospatial features,
such as terrain.
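Selecting and blending between the five tonal-art-map levels by shoreline distance can be sketched as follows; the linear distance-to-level mapping and the names are assumptions, not the system's actual interface:

```cpp
#include <cassert>

// Sketch: map a normalized shoreline distance in [0, 1] to a pair of
// adjacent tonal-art-map levels plus a blend weight, analogous to how
// hardware blends mipmap levels. Five levels as described in the text.
struct TamSample { int level0, level1; float blend; };

TamSample tamLookup(float normalizedDistance, int numLevels = 5) {
    float x = normalizedDistance * (numLevels - 1);  // continuous level index
    int lo = static_cast<int>(x);
    if (lo > numLevels - 2) lo = numLevels - 2;      // clamp upper pair
    return { lo, lo + 1, x - lo };                   // blend weight in [0, 1]
}
```

In a fragment shader the two selected hatch textures would be sampled and mixed with the returned weight.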
4.2.4 Thematic Visualization

We used distance maps for an automated, internal labeling that
complies with the design principles of cartographers (P6/P7). The
main idea is to derive piecewise cubic Bézier curves from the medial
axes of the distance maps and warp text to these curves accord-
ingly [Ropinski et al. 2007; Cipriano and Gleicher 2008]. To obtain
the control points, pixels of the medial axes were traced and iteratively
downsampled in image-space by nearest-neighbor interpolation. Together
with the tangent information of the structure tensor, arc-length pa-
rameterization was used to warp text with the flow direction of water
surfaces, and orient it with the viewing direction (Figure 16).

Figure 14: …izing flooded areas.
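The arc-length parameterization used for glyph placement can be sketched as follows; this is a simplified illustration over a traced polyline (tangent and orientation handling omitted), and all names are our own:

```cpp
#include <vector>
#include <cmath>
#include <cassert>

struct P2 { float x, y; };

// Sketch: given a polyline traced from a medial axis, return numGlyphs
// anchor positions spaced at equal arc length, so that label glyphs are
// distributed evenly along the curve. Requires numGlyphs >= 2 and a
// polyline with at least two distinct points.
std::vector<P2> placeGlyphs(const std::vector<P2>& path, int numGlyphs) {
    std::vector<float> cum{0.0f};                  // cumulative arc length
    for (size_t i = 1; i < path.size(); ++i)
        cum.push_back(cum.back() + std::hypot(path[i].x - path[i - 1].x,
                                              path[i].y - path[i - 1].y));
    std::vector<P2> anchors;
    for (int g = 0; g < numGlyphs; ++g) {
        float s = cum.back() * g / (numGlyphs - 1);    // target arc length
        size_t i = 1;
        while (i + 1 < cum.size() && cum[i] < s) ++i;  // find segment
        float t = (s - cum[i - 1]) / (cum[i] - cum[i - 1]);
        anchors.push_back({ path[i - 1].x + t * (path[i].x - path[i - 1].x),
                            path[i - 1].y + t * (path[i].y - path[i - 1].y) });
    }
    return anchors;
}
```

In the full technique, each anchor would additionally carry the local tangent from the structure tensor to rotate its glyph.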
For symbolization, texture bombing was used and parameterized
so that signatures always face the view direction when viewed in
3D (Figure 14) and comply with (P8). Alternatively, an example-
based approach may be used to arrange signatures with more artistic
control, for which we refer to the work by Hurtut et al. [2009].
Figure 15: A globe shaded by crosshatched strokes: (left) binary
mask, (middle) result of [Webb et al. 2002], (right) our result.

Figure 16: Exemplary result of our labeling algorithm, which aligns
font glyphs according to the shoreline distance and orientation.
5 Results
We have implemented our system using C++ and OpenGL/GLSL.
OpenSceneGraph is used as the rendering engine to handle 3D data
sets. The operations of the quantitative surface analysis are designed
for parallel execution, and we have implemented them in CUDA to
significantly improve overall performance. In particular, the PBA al-
gorithm is used for the computation of Euclidean distance maps [Cao
et al. 2010], together with a reduction for normalization [Nehab et al.
2011]. For text rendering, NVidia's NV_path_rendering extension
is used to enable the rendering and transforming of high-quality,
instance-based text in a single pass. In the following, we demon-
strate the usefulness and flexibility of our rendering techniques for
different real-world data sets and potential usage scenarios.
5.1 Applications
Figure 17 shows a comparison of our rendering techniques using the
example of Spirit Lake (at Mount St. Helens, USA). We observed that
waterlining is a functional illustration technique that is able to com-
municate distance information quite effectively. The effects of water
stippling and contour-hatching are similar but add a sense of motion
and uncertainty. Coastal vignetting, by contrast, primarily focuses on
the land-water interface itself to improve figure-ground perception.
These techniques complement other cartography-oriented shading
techniques quite well. This is demonstrated by shading the terrain
in the environment with hachures of varying thickness according to
the slope steepness [Buchin et al. 2004]. Moreover, our rendering
techniques provide a level-of-abstraction (see the Appendix), which
is exemplarily shown in the right image of Figure 17, where the
number of depicted waterlines varies with the view distance to
avoid visual clutter. The accompanying video demonstrates that
this parameterization yields a smooth, continuous transition while
zooming in and out. This example also demonstrates the ability of
our system to handle 3D scenes and provide spatial and temporal
coherence. Because the rendering techniques are texture-based, they
are independent of a model's geometric complexity. Finally, we
experimented with using our rendering techniques concurrently. For
instance, water stipples can be seamlessly blended with waterlines
according to the view distance, or coastal vignetting can complement
waterlining (Figure 1).
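The view-distance parameterization could look like the following sketch; the two authored parameter sets, the linear falloff, and all names are our assumptions for illustration, not the system's actual interface:

```cpp
#include <cassert>

// Sketch: blend between two authored level-of-abstraction parameter sets
// by view distance, yielding a continuous transition while zooming.
struct LoaParams { float lineCount; float stippleDensity; };

LoaParams blendLoa(const LoaParams& nearSet, const LoaParams& farSet,
                   float viewDist, float nearDist, float farDist) {
    float t = (viewDist - nearDist) / (farDist - nearDist);
    if (t < 0.0f) t = 0.0f;                 // closer than the near range
    if (t > 1.0f) t = 1.0f;                 // beyond the far range
    return { nearSet.lineCount + t * (farSet.lineCount - nearSet.lineCount),
             nearSet.stippleDensity +
                 t * (farSet.stippleDensity - nearSet.stippleDensity) };
}
```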
Our waterlining technique may be useful in flooding simula-
tions, i.e., to assess distances to the nearest safety zones for evacua-
tion planning. When performed over time, waterlines dynamically
shift with the flood distribution to convey motion and enhance the
depiction of land cover. This effect is demonstrated in the supple-
mentary video and is exemplarily shown in Figure 18 for the city of
Boston (USA). Here, a plane is used and temporally shifted upwards
to represent the change in the mean sea level. Using this plane as a
clipping mask with an orthographic projection, the corresponding
flooded areas are obtained and shaded in real-time.
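The flooding test reduces to a height comparison against the shifted plane; the linear sea-level model and the names below are illustrative assumptions:

```cpp
#include <cassert>

// Sketch: a horizontal clipping plane at the current mean sea level
// classifies a terrain sample as flooded. The plane is shifted upwards
// over time; riseRate is a hypothetical parameter of the simulation.
bool isFlooded(float terrainHeight, float baseSeaLevel,
               float riseRate, float timeSec) {
    float seaLevel = baseSeaLevel + riseRate * timeSec;  // shifted plane
    return terrainHeight <= seaLevel;                    // below the plane
}
```

Rasterizing this predicate with an orthographic projection yields the flood mask that is then shaded by the waterlining technique.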
Table 1: Performance evaluation (in ms): distance transform (D), feature-aligned
distance transform (T), orientation and medial axis computation.
Image Res. D T Orient. M. Axes Total
128 × 128 1.6 26.6 0.1 0.7 29.0
256 × 256 2.2 96.3 0.2 0.8 99.5
512 × 512 4.5 371.6 0.6 0.9 377.6
Table 2: Performance evaluation of our illustrative rendering tech-
niques for different screen resolutions (in frames-per-second).
Screen Res. waterlining stippling contour-hatching
800 × 600 534 162 159
1280 × 720 523 88 84
1600 × 900 521 59 55
1920 × 1080 514 42 41
Figure 19 demonstrates the usefulness of our rendering techniques
in urban planning and cultural heritage. Within these domains, it is
often desired to avoid authentic impressions, in particular because
of missing evidence in the (re)construction or because construction
plans may be altered in the future. The top image shows a topograph-
ical reconstruction of ancient Cologne, in which contour-hatching
is used to express uncertainty. We animated the individual strokes
to express water movements. In addition, symbolization is used to
highlight those river areas that were flooded in ancient times. The
bottom image shows a bridge construction, where water stippling
is used to add expressiveness. As can be seen, the stipple density is
increased in the shadowed areas to improve depth perception.
5.2 Performance Evaluation
The performance tests of our system were conducted on an Intel
CPU running at 3.06 GHz with 6 GByte RAM and an NVidia GTX
660 Ti GPU with 2 GByte VRAM. We used Spirit Lake (Figure 17)
as a test model. The results in Table 1 show that the run-time of the
quantitative surface analysis scales with the resolution of a distance
map. The feature-aligned distance transform is shown to be a lim-
iting factor; however, its implementation is not heavily optimized
and we see potential to increase the performance. We compared
different sizes of distance maps for our illustrative rendering tech-
niques. Similar to signed distance maps [Green 2007], we achieve
stable results when bilinearly sampling a low-resolution feature-
aligned distance map (128 × 128 pixels). Note that the timings for
the distance maps include normalization. Table 2 shows that our
illustrative rendering techniques perform at real-time frame-rates
in HD resolution. During rendering, we observe that our shading
techniques are fill-limited and achieve, at SD resolution, twice the
performance of HD resolution. We conclude that our system
for feature-aligned waterlining, stippling, and hatching performs in
real-time, and therefore is applicable to render animated 3D scenes.
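The bilinear lookup into a low-resolution distance map can be sketched on the CPU as follows; the row-major layout and [0, 1] texture-coordinate convention are assumptions for illustration (on the GPU this is simply GL_LINEAR filtering):

```cpp
#include <vector>
#include <algorithm>
#include <cmath>
#include <cassert>

// Sketch: bilinear sampling of a w-by-h distance map stored row-major,
// with texture coordinates u, v in [0, 1]. Low-resolution maps sampled
// this way already give stable shading results, as noted above.
float sampleBilinear(const std::vector<float>& map, int w, int h,
                     float u, float v) {
    float x = u * (w - 1), y = v * (h - 1);
    int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;
    // Interpolate horizontally on both rows, then vertically between them.
    float top = map[y0 * w + x0] * (1 - fx) + map[y0 * w + x1] * fx;
    float bot = map[y1 * w + x0] * (1 - fx) + map[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}
```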
5.3 Limitations
Our shading techniques work in object-space and the computation
of distance maps requires closed polygons for processing. For large-
scale water surfaces with complex courses, distance maps of high
resolution are required to achieve quality shading results. Here,
memory resources limit the distance map sizes that can be processed
by a GPU; for our system, … Megapixels (MP) with 2 GByte
VRAM. Moreover, we observed that our feature-aligned distance
transform does not perform in real-time when computing distance
maps with > 0.25 MP. Nonetheless, we observed that distance maps
of 256 × 256 pixels are sufficient for most water surfaces when
using bilinear filtering.
Figure 17: Exemplary results of our rendering techniques for 3D mapping: (left) compared with each other within the environment of Mount St.
Helens, (right) waterlining applied to a globe that provides a continuous level-of-abstraction when zooming in and out.

Figure 18: Flooding simulation for the city of Boston enhanced by our waterlining technique and illustrative rendering to express uncertainty.

Figure 19: Further results of our rendering techniques for urban planning and landscaping: (top) contour-hatching used to express uncertainty
in a reconstructed topographical model of ancient Cologne (Germany), which served as the basis for the 3D reconstruction shown to the right,
(bottom) water stippling used to enhance the rendering of a bridge construction.
6 Conclusions and Future Work
We present a system for rendering water surfaces with cartography-
oriented design. Our real-time rendering techniques adopt design
principles from traditional cartography to improve figure-ground
perception and express a sense of motion. For contour-hatching,
we propose a novel feature-aligned distance transform to align indi-
vidual strokes with the shorelines of water surfaces. Results show
that our techniques provide temporal and spatial coherence, can be
parameterized for a view-dependent level-of-abstraction, and can be
useful within 3D geovirtual environments for map exploration, urban
planning, landscaping, and disaster management. Because of their
application to geospatial data, we plan to elaborate on the usefulness
of our techniques for geovisualization. Further, we plan to conduct
a user study to confirm significant effects in orientation, navigation,
and analysis tasks performed within 3D geovirtual environments.
Acknowledgments

The authors would like to thank the anonymous reviewers for their
valuable comments. This work was partly funded by the Federal
Ministry of Education and Research (BMBF), Germany, within the
InnoProfile Transfer research group “4DnDVis”, and was partly
supported by the ERC-2010-StG 259550 XSHAPE grant.
References

BRATKOVA, M., SHIRLEY, P., AND THOMPSON, W. B. 2009. Artistic rendering of mountainous terrain. ACM Trans. Graph. 28, 102:1–102:17.

BROX, T., VAN DEN BOOMGAARD, R., LAUZE, F., VAN DE WEIJER, J., WEICKERT, J., MRÁZEK, P., AND KORNPROBST, P. 2006. Adaptive Structure Tensors and their Applications. Visualization and Processing of Tensor Fields, 17–47.

BUCHIN, K., SOUSA, M. C., DÖLLNER, J., SAMAVATI, F., AND WALTHER, M. 2004. Illustrating Terrains using Direction of Slope and Lighting. In ICA Mountain Cartography Workshop.

CAO, T.-T., TANG, K., MOHAMED, A., AND TAN, T.-S. 2010. Parallel Banding Algorithm to compute exact distance transform with the GPU. In Proc. I3D, 83–90.

CHRISTENSEN, A. H. 1999. Cartographic Line Generalization with Waterlines and Medial-Axes. Cartography and Geographic Information Science 26, 1, 19–32.

CHRISTENSEN, A. H. 2008. A Reflection on the Waterlining Technique in Relation to the History of Map Ornamentation. The Cartographic Journal 45, 1, 68–78.

CIPRIANO, G., AND GLEICHER, M. 2008. Text Scaffolds for Effective Surface Labeling. IEEE Trans. Vis. Comput. Graphics 14, 6, 1675–1682.

COCONU, L., DEUSSEN, O., AND HEGE, H. 2006. Real-time pen-and-ink illustration of landscapes. In Proc. NPAR, 27–35.

DANIELSSON, P.-E. 1980. Euclidean distance mapping. Computer Graphics and Image Processing 14, 3, 227–248.

DARLES, E., CRESPIN, B., GHAZANFARPOUR, D., AND GONZATO, J. 2011. A Survey of Ocean Simulation and Rendering Techniques in Computer Graphics. Comput. Graph. Forum 30, 1.

DEUSSEN, O., AND STROTHOTTE, T. 2000. Computer-generated pen-and-ink illustration of trees. In Proc. ACM SIGGRAPH.

DÖLLNER, J., AND WALTHER, M. 2003. Real-time expressive rendering of city models. In Proc. IEEE IV, 245–250.

EDEN, A. M., BARGTEIL, A. W., GOKTEKIN, T. G., EISINGER, S. B., AND O'BRIEN, J. F. 2007. A Method for Cartoon-Style Rendering of Liquid Animations. In Proc. Graphics Interface, 51–55.

FRENCH, T. 1918. A manual of engineering drawing for students and draftsmen. McGraw-Hill Book Company.

FRISKEN, S. F., PERRY, R. N., ROCKWOOD, A. P., AND JONES, T. R. 2000. Adaptively sampled distance fields: a general representation of shape for computer graphics. In Proc. ACM SIGGRAPH, 249–254.

GERL, M., AND ISENBERG, T. 2013. Interactive example-based hatching. Computers & Graphics 37, 1-2, 65–80.

GIRSHICK, A., INTERRANTE, V., HAKER, S., AND LEMOINE, T. 2000. Line direction matters: an argument for the use of principal directions in 3D line drawings. In Proc. NPAR, 43–52.

GLANVILLE, R. S. 2004. Texture Bombing. In GPU Gems. Addison-Wesley, 323–338.

GÖTZELMANN, T., ALI, K., HARTMANN, K., AND STROTHOTTE, T. 2005. Form Follows Function: Aesthetic Interactive Labels. In Proc. CAe, 193–200.

GREEN, C. 2007. Improved alpha-tested magnification for vector textures and special effects. In ACM SIGGRAPH Courses, 9–18.

HUFFMAN, D. P. 2010. On Waterlines: Arguments for their Employment, Advice on their Generation. Cartographic Perspectives, NACIS 66, 23–30.

HURTUT, T., LANDES, P.-E., THOLLOT, J., GOUSSEAU, Y., DROUILLHET, R., AND COEURJOLLY, J.-F. 2009. Appearance-guided Synthesis of Element Arrangements by Example. In Proc. NPAR, 51–60.

IMHOF, E. 1972. Thematische Kartographie, vol. 10. Walter de Gruyter.

IMHOF, E. 1975. Positioning names on maps. The American Cartographer 2, 2, 128–144.

ISENBERG, T. 2013. Visual Abstraction and Stylisation of Maps. The Cartographic Journal 50, 1, 8–18.

JOBST, M., AND DÖLLNER, J. 2008. 3D City Model Visualization with Cartography-Oriented Design. In Proc. REAL CORP, 507–

KALOGERAKIS, E., NOWROUZEZAHRAI, D., BRESLAV, S., AND HERTZMANN, A. 2012. Learning hatching for pen-and-ink illustration of surfaces. ACM Trans. Graph. 31, 1, 1:1–1:17.

KIM, D., SON, M., LEE, Y., KANG, H., AND LEE, S. 2008. Feature-guided Image Stippling. Comput. Graph. Forum 27, 4.

KIM, Y., YU, J., YU, X., AND LEE, S. 2008. Line-art illustration of dynamic and specular surfaces. ACM Trans. Graph. 27, 5.

KIM, S., WOO, I., MACIEJEWSKI, R., AND EBERT, D. S. 2010. Automated Hedcut Illustration Using Isophotes. In Proc. Smart Graphics, 172–183.

KRAAK, M., AND ORMELING, F. 2003. Cartography: Visualization of Geospatial Data. Pearson Education.

KYPRIANIDIS, J. E., COLLOMOSSE, J., WANG, T., AND ISENBERG, T. 2012. State of the 'Art': A Taxonomy of Artistic Stylization Techniques for Images and Video. IEEE Trans. Vis. Comput. Graphics 19, 5, 866–885.

MAASS, S., AND DÖLLNER, J. 2008. Seamless Integration of Labels into Interactive Virtual 3D Environments Using Parameterized Hulls. In Proc. CAe, 33–40.

MACEACHREN, A. 1995. How Maps Work. Guilford Press.

MERIAN, M. 2005. Topographia Germaniae.

NEHAB, D., MAXIMO, A., LIMA, R. S., AND HOPPE, H. 2011. GPU-efficient recursive filtering and summed-area tables. ACM Trans. Graph. 30, 176:1–176:12.

PRAUN, E., FINKELSTEIN, A., AND HOPPE, H. 2000. Lapped textures. In Proc. ACM SIGGRAPH, 465–470.

PRAUN, E., HOPPE, H., WEBB, M., AND FINKELSTEIN, A. 2001. Real-time hatching. In Proc. ACM SIGGRAPH, 581–586.

ROBINSON, A. H., MORRISON, J. L., MUEHRCKE, P. C., KIMERLING, A. J., AND GUPTILL, S. C. 1995. Elements of cartography. New York: John Wiley & Sons.

RONG, G., AND TAN, T.-S. 2006. Jump flooding in GPU with applications to Voronoi diagram and distance transform. In Proc. ACM I3D, 109–116.

ROPINSKI, T., PRASSNI, J.-S., ROTERS, J., AND HINRICHS, K. H. 2007. Internal Labels as Shape Cues for Medical Illustration. In Proc. VMV, 203–212.

SALISBURY, M. P., WONG, M. T., HUGHES, J. F., AND SALESIN, D. H. 1997. Orientable textures for image-based pen-and-ink illustration. In Proc. ACM SIGGRAPH, 401–406.

SEMMO, A., TRAPP, M., KYPRIANIDIS, J. E., AND DÖLLNER, J. 2012. Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions. Comput. Graph. Forum 31, 885–894.

TYNER, J. 2010. Principles of map design. Guilford Press.

WEBB, M., PRAUN, E., FINKELSTEIN, A., AND HOPPE, H. 2002. Fine tone control in hardware hatching. In Proc. NPAR, 53–58.

WEI, L.-Y., LEFEBVRE, S., KWATRA, V., AND TURK, G. 2009. State of the Art in Example-based Texture Synthesis. In Eurographics 2009 State of the Art Reports, 93–117.

XU, K., COHEN-OR, D., JU, T., LIU, L., ZHANG, H., ZHOU, S., AND XIONG, Y. 2009. Feature-aligned shape texturing. ACM Trans. Graph. 28, 108:1–108:7.

YU, J., JIANG, X., CHEN, H., AND YAO, C. 2007. Real-time cartoon water animation. Computer Animation and Virtual Worlds 18, 4-5, 405–414.
Figure 20: Exemplary parameterization results of our rendering techniques (waterlining, water stippling, contour-hatching, and
cross-hatching) for a view-dependent level-of-abstraction (levels 0 to 2). In order to reduce visual clutter at high view distances
(bottom row), the number of rendered waterlines, stipples, and hatches is reduced significantly. Once these parameterizations
have been authored by our system, blending between them is performed in real-time using OpenGL fragment shaders.
... For instance, natural color maps are investigated (Patterson and Kelso, 2004), relief realism is based on its enhancement by illumination (Patterson, 2002) and by natural texturing (Jenny and Jenny, 2012). Map designers also explore the potential of texturing rendering techniques coming from graphic computers, by synthetic vectorial textures (Loi et al., 2013;Jenny et al., 2014), by watercolorization on oblique views (Jenny et al., 2015); water surfaces are rendered by realist textures (Patterson, 2002), animated textures (Yu et al., 2011), by expressive renderings (Semmo et al., 2013). ...
... Based on this definition, several research works aim at controlling the photo-realism and abstraction levels of a cartographic representation in so-called cartographic continuum. In order to make progressive transitions between various levels of abstraction, the parameterization of rendering methods is explored through various strategies to distribute the level of abstraction in the representation (Semmo et al., 2012(Semmo et al., , 2013Semmo and Döllner, 2014): according to the distance from the image center or the saliency of rendered objects (Semmo et al., 2012), river rendering according to more or less cartographic styles (Semmo et al., 2013), more or less complex textures according to scene depth and expected abstraction level (Semmo and Döllner, 2014). Metrics are also used to describe the level of detail in order to make discrete scales of this level of detail (Biljecki et al., 2014). ...
... Based on this definition, several research works aim at controlling the photo-realism and abstraction levels of a cartographic representation in so-called cartographic continuum. In order to make progressive transitions between various levels of abstraction, the parameterization of rendering methods is explored through various strategies to distribute the level of abstraction in the representation (Semmo et al., 2012(Semmo et al., , 2013Semmo and Döllner, 2014): according to the distance from the image center or the saliency of rendered objects (Semmo et al., 2012), river rendering according to more or less cartographic styles (Semmo et al., 2013), more or less complex textures according to scene depth and expected abstraction level (Semmo and Döllner, 2014). Metrics are also used to describe the level of detail in order to make discrete scales of this level of detail (Biljecki et al., 2014). ...
Full-text available
Graphic interfaces of geoportals allow visualizing and overlaying various (visually) heterogeneous geographical data, often by image blending: vector data, maps, aerial imagery, Digital Terrain Model, etc. Map design and geo-visualization may benefit from methods and tools to hybrid, i.e. visually integrate, heterogeneous geographical data and cartographic representations. In this paper, we aim at designing continuous hybrid visualizations between ortho-imagery and symbolized vector data, in order to control a particular visual property, i.e. the photo-realism perception. The natural appearance (colors, textures) and various texture effects are used to drive the control the photo-realism level of the visualization: color and texture interpolation blocks have been developed. We present a global design method that allows to manipulate the behavior of those interpolation blocks on each type of geographical layer, in various ways, in order to provide various cartographic continua.
... In practice, this is complex to achieve in a GIS because there is currently no formal way to describe an artistic (complex) style. Rendering techniques are more and more used in map design, to manage photo-or non-photo-realistic rendering and pseudo-natural effects or to mimic artistic and old practices in cartography (Patterson et al. 2004;Trapp et al. 2011;Jenny & Jenny 2012;Semmo et al. 2013;amongst others). However, there techniques cannot be easily used and con-trolled within a GIS to reproduce a given style. ...
Full-text available
In the context of custom map design, handling more artistic and expressive tools has been identified as a carto-graphic need, in order to design stylized and expressive maps. Based on previous works on style formalization, an approach for specifying the map style has been proposed and experimented for particular use cases. A first step deals with the analysis of inspiration sources, in order to extract ‘what does make the style of the source’, i.e. the salient visual characteristics to be automatically reproduced (textures, spatial arrangements, linear stylization, etc.). In a second step, in order to mimic and generate those visual characteristics, existing and innovative rendering techniques have been implemented in our GIS engine, thus extending the capabilities to generate expressive renderings. Therefore, an extension of the existing cartographic pipeline has been proposed based on the following aspects: 1- extension of the symbolization specifications OGC SLD/SE in order to provide a formalism to specify and reference expressive rendering methods; 2- separate the specification of each rendering method and its parameterization, as metadata. The main contribution has been described in (Christophe et al. 2016). In this paper, we focus firstly on the extension of the cartographic pipeline (SLD++ and metadata) and secondly on map design capabilities which have been experimented on various topographic styles: old cartographic styles (Cassini), artistic styles (watercolor, impressionism, Japanese print), hybrid topographic styles (ortho-imagery & vector data) and finally abstract and photo-realist styles for the geovisualization of costal area. The genericity and interoperability of our approach are promising and have already been tested for 3D visualization.
... In practice, this is complex to achieve in a GIS because there is currently no formal way to describe an artistic (complex) style. Rendering techniques are more and more used in map design, to manage photo-or non-photo-realistic rendering and pseudo-natural effects or to mimic artistic and old practices in cartography (Patterson et al. 2004;Trapp et al. 2011;Jenny & Jenny 2012;Semmo et al. 2013; amongst others). However, there techniques cannot be easily used and controlled within a GIS to reproduce a given style. ...
... La texturisation de la mer est un problème complexe de par la nature extrêmement variable de l'eau (marée, météo, etc.). Pour la zone littorale, l'utilisation des textures se limite aux objets dont les variations spatio-temporelles sont faibles, comme par exemple les rochers (Loi et al., 2013), la végétation (Hurtut et al., 2009) ou les dynamiques des courants marins (Semmo et al., 2013 ;Yu et al., 2011). Notons également les travaux sur la mise en valeur du relief qui permettent, par l'utilisation d'ombrages sous-marins, de donner une impression de relief pour une visualisation en deux dimensions (Jenny et al., 2015 ;Samsonov, 2011). ...
Full-text available
The coastal area is a multi-issue areaand its observationprovides heterogeneous geographical database. Challenges are located on the integration and the rendering of these data:(1) geographic data used to characterize the seamust be combinedto obtain water depth and sea/land interface(data heterogeneity problem) and (2) current visualization systems do not provide continuous rendering on both sides of the sea/land interface and/orare not coherent with styles of conventional renderings (representation heterogeneity problem). Moreover, the representation of spatiotemporal dynamics around this interface, particularly tidal cycles, addsanother difficulty to geovisualization conception.We propose to improve the coastal visualization by representing more realistically water level dynamics: (1) with the use of close-to-reality data such as LiDAR and thus enhance the perceived realism and (2) by manipulating the rendering engine withabstraction (maps) and realism (photorealism images). We propose two applications of mapping and photorealism.
... For instance, Patterson [Pat02] enhances relief realism with realistic illumination. Example-based texture synthesis for map design [JJC12,JJ13], animated waterlining and labeling [SKTD13] are also examples of computer graphics techniques transposed to map rendering. In the meanwhile, expressive maps are considered by other recent works in computer graphics, human computerinteraction and visualization [AS01,GASP08,AMJ * 12,KMM * 13]. ...
Conference Paper
Full-text available
Cartographic design requires controllable methods and tools to produce maps that are adapted to users’ needs and preferences. The formalized rules and constraints for cartographic representation come mainly from the conceptual framework of graphic semiology. Most current Geographical Information Systems (GIS) rely on the Styled Layer Descriptor and Semiology Encoding (SLD/SE) specifications which provide an XML schema describing the styling rules to be applied on geographic data to draw a map. Although this formalism is relevant for most usages in cartography, it fails to describe complex cartographic and artistic styles. In order to overcome these limitations, we propose an extension of the existing SLD/SE specifications to manage extended map stylizations, by the means of controllable expressive methods. Inspired by artistic and cartographic sources (Cassini maps, mountain maps, artistic movements, etc.), we propose to integrate into our system three main expressive methods: linear stylization, patch-based region filling and vector texture generation. We demonstrate how our pipeline allows to personalize map rendering with expressive methods in several examples.
... With the development of computational fluid dynamics (CFD) and of the power of computing hardware, the demand for visualization solutions has grown rapidly across a variety of applications, such as product design evaluation, flood control, and the analysis of water pollution diffu- sion [15][16][17][18]. Such visualization software requires the functionality to extract flow features and, increasingly, the need to integrate it within a 3D environment to improve decision making [19][20][21]. For instance, in a virtual wind tunnel application [22,23], the authors not only used a flow visualization module to show flow conditions, they also used a 3D airplane model to determine the spatial position of the flow. ...
Full-text available
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality, with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner. Approaches in non-photorealistic renderings particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units. This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. The techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics—unlike photorealistic rendering—to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. 
In addition, this thesis contributes a generic system that enables to integrate different graphic styles—photorealistic and non-photorealistic—and provide their seamless transition according to user tasks, camera view, and image resolution. Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Introducing motion into existing static paintings is becoming a field that is gaining momentum. This effort facilitates keeping artworks current and translating them to different forms for diverse audiences. Chinese ink paintings and Japanese Sumies are well recognized in Western cultures, yet not easily practiced due to the years of training required. We are motivated to develop an interactive system for artists, non-artists, Asians, and non-Asians to enjoy the unique style of Chinese paintings. In this paper, our focus is on replacing static water flow scenes with animations. We include flow patterns, surface ripples, and water wakes which are challenging not only artistically but also algorithmically. We develop a data-driven system that procedurally computes a flow field based on stroke properties extracted from the painting, and animate water flows artistically and stylishly. Technically, our system first extracts water-flow-portraying strokes using their locations, oscillation frequencies, brush patterns, and ink densities. We construct an initial flow pattern by analyzing stroke structures, ink dispersion densities, and placement densities. We cluster extracted strokes as stroke pattern groups to further convey the spirit of the original painting. Then, the system automatically computes a flow field according to the initial flow patterns, water boundaries, and flow obstacles. Finally, our system dynamically generates and animates extracted stroke pattern groups with the constructed field for controllable smoothness and temporal coherence. The users can interactively place the extracted stroke patterns through our adapted Poisson-based composition onto other paintings for water flow animation. In conclusion, our system can visually transform a static Chinese painting to an interactive walk-through with seamless and vivid stroke-based flow animations in its original dynamic spirits without flickering artifacts.
Conference Paper
Full-text available
This presentation paper resumes recent results on an extension of the cartographic pipeline in GIS to integrate expressive rendering techniques, in order to increase the aesthetics and expressivity of maps [2]. They are implemented in the form of shaders which can be called and parameterized from a style file following the OGC style specifications. We propose an extension of the OGC Symbology Encoding specification.
In general, geodata and georeferenced data are distributed, heterogeneous in content and form, comprise very large data volumes, and must be integrated into various IT information systems and used in diverse application contexts. This work therefore focuses on concepts and techniques for the integration, visualization, analysis, provision, and use of 2D and 3D geodata as well as georeferenced data. It follows an approach that, on the one hand, uses virtual 3D environments as a conceptual and technical framework and, on the other hand, builds on service-oriented software architectures and geo-standards. The presented concepts and methods thus constitute key building blocks for realizing novel IT solutions and applications for 3D geoinformation, e.g., as components of spatial data infrastructures. In the area of service-based 3D geovisualization, this work describes how virtual 3D city models can be used for the integration of heterogeneous and distributed geodata sources. To this end, requirements for integration are identified, a concept for integration at the data level and at the visualization level is designed, and its implementation is described and demonstrated using complex 3D building information models as an example. In the area of service-based, image-based 3D portrayal services, the Web View Service (WVS) is conceived and developed as a specialized software service for the visualization of, interaction with, and analysis of geovirtual 3D environments. The core concepts of this service are server-side data integration and management as well as server-side image generation. With this consistently server-side approach, very large amounts of 3D geodata can be delivered even to end devices that lack sufficient memory and computing power for storing, processing, and rendering 3D models.
The developed WVS thus enables interactive exploration and analysis of 3D geodata even on tablet PCs and in web browsers. The practical application of the WVS is demonstrated with a reference implementation of the WVS and a client application. In the area of composition of Web View Services, it is investigated how the WVS can be used as a building block of a complex, distributed visualization and rendering pipeline. By composing the WVS with other portrayal and processing services, complex rendering effects can be achieved or individual 3D objects can subsequently be integrated into a 3D view, for example. To this end, a concept for service-based, depth-image-based image composition is described, implemented, and demonstrated using the occlusion-free annotation of 3D views as an example. In the area of interaction with Web View Services, this work provides foundations, a concept, and an implementation for intelligent, assisting, and automated interaction and navigation techniques based on the affordances of 3D scene objects as well as on sketch- and gesture-based input of user intentions. These inputs are evaluated and interpreted with respect to their form and the semantics of the 3D scene objects, and are then translated into application-specific navigation commands from which semi-automatic camera paths are derived.
This paper investigates and discusses concepts and techniques to enhance spatial knowledge transmission of 3D city model representations based on cartography-oriented design. 3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, networks, and navigation. For example, planning tools analyze visibility characteristics of inner urban areas and allow planners to estimate whether a minimum amount of light is needed in intensely covered areas to avoid the "Gotham City effect", i.e., when these areas become too dark due to shadowing. For radio network planning, 3D city models are required to configure and optimize wireless network services, i.e., to calculate and analyze network coverage and connectivity features. 3D city model visualization often lacks effectiveness and expressiveness. For example, if we analyze common 3D views, large areas of the graphical presentations contain useless or even "misused" pixels with respect to information content and transfer (e.g., pixels that represent several hundreds of buildings at once, or pixels that show sky). Typical avatar perspectives frequently show too many details at once and do not distinguish between areas in focus and surrounding areas. In this case, the perceptual and cognitive quality of visualized virtual 3D city models could be enhanced by cartographic models and semiotic adaptations. For example, we can integrate strongly perceivable landmarks as referencing marks to the real world, which establish more effective presentations and improve efficient interaction.
Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparisons of different methods, and (3) sketch a vision for future work that can help motivate and guide readers that are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis.
We explore visual map abstraction for the generation of stylized renderings of 2D map data. We employ techniques centred around the concepts of shape simplification and graph layout that allow iterative abstraction of 2D maps. We use data from publicly available sources and show how we can iteratively generate aesthetic renditions of these maps. These renditions are not intended to support navigation tasks; instead, they show the map data in a distorted manner. The techniques used to create these images apply simplification, abstraction/generalisation, and displacement operations to the map elements in varying orders and add stylistic shading to produce aesthetic renditions for print or electronic displays. The degree of abstraction/generalisation can be individually chosen and determines the characteristics of the distorted map: whether components retain their shape, degenerate, or are processed in a manner such that the abstraction becomes the focus of the image rather than the underlying map data. The renditions can be further personalized by choosing the shading and its colours. Together, the presented techniques allow for playful and creative exploration of aesthetic renditions of 2D map data.
We present an approach for interactively generating pen-and-ink hatching renderings based on hand-drawn examples. We aim to overcome the regular and synthetic appearance of the results of existing methods by incorporating human virtuosity and illustration skills in the computer generation of such imagery. To achieve this goal, we propose to integrate an automatic style transfer with user interactions. This approach leverages the potential of example-based hatching while giving users the control and creative freedom to enhance the aesthetic appearance of the results. Using a scanned-in hatching illustration as input, we use image processing and machine learning methods to learn a model of the drawing style in the example illustration. We then apply this model to semi-automatically synthesize hatching illustrations of 3D meshes in the learned drawing style. In the learning stage, we first establish an analytical description of the hand-drawn example illustration using image processing. A 3D scene registered with the example drawing allows us to infer object-space information related to the 2D drawing elements. We employ a hierarchical style transfer model that captures drawing characteristics on four levels of abstraction, which are global, patch, stroke, and pixel levels. In the synthesis stage, an explicit representation of hatching strokes and hatching patches enables us to synthesize the learned hierarchical drawing characteristics. Our representation makes it possible to directly and intuitively interact with the hatching illustration. Amongst other interactions, users of our system can brush with patches of hatching strokes onto a 3D mesh. This interaction capability allows illustrators who are working with our system to make use of their artistic skills. Furthermore, the proposed interactions allow people without a background in hatching to interactively generate visually appealing hatching illustrations.
A simple and efficient method is presented which allows improved rendering of glyphs composed of curved and linear elements. A distance field is generated from a high resolution image, and then stored into a channel of a lower-resolution texture. In the simplest case, this texture can then be rendered simply by using the alpha-testing and alpha-thresholding feature of modern GPUs, without a custom shader. This allows the technique to be used on even the lowest-end 3D graphics hardware. With the use of programmable shading, the technique is extended to perform various special effect renderings, including soft edges, outlining, drop shadows, multi-colored images, and sharp corners.
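The core of the technique described above — storing a low-resolution distance field and recovering crisp edges at magnification by interpolating and thresholding it — can be illustrated without a GPU. The following pure-Python sketch builds a toy distance field for a circular "glyph" and renders it at a higher resolution by bilinear sampling and alpha-thresholding; the packing of signed distance into [0, 1] and all function names are illustrative assumptions, not the paper's implementation (which would use an alpha test or a fragment shader, optionally with a smoothstep for soft edges):

```python
import math

def circle_distance_field(size, radius):
    """Toy distance-field texture for a circle centered in a size x size grid.
    Signed distance (positive inside) is packed into [0, 1]; 0.5 marks the edge."""
    c = (size - 1) / 2.0
    return [[max(0.0, min(1.0, 0.5 + (radius - math.hypot(x - c, y - c)) / size))
             for x in range(size)]
            for y in range(size)]

def sample_bilinear(field, u, v):
    """Bilinearly sample the field at texture coordinates u, v in [0, 1]."""
    n = len(field) - 1
    fx, fy = u * n, v * n
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, n), min(y0 + 1, n)
    tx, ty = fx - x0, fy - y0
    top = field[y0][x0] * (1 - tx) + field[y0][x1] * tx
    bot = field[y1][x0] * (1 - tx) + field[y1][x1] * tx
    return top * (1 - ty) + bot * ty

def render(field, out_size, threshold=0.5):
    """Alpha-threshold rendering: magnify the low-res field and keep only
    pixels whose interpolated distance value exceeds the threshold."""
    img = []
    for y in range(out_size):
        row = ""
        for x in range(out_size):
            a = sample_bilinear(field, x / (out_size - 1), y / (out_size - 1))
            row += "#" if a >= threshold else "."
        img.append(row)
    return img
```

Because the interpolated distance varies smoothly across the edge, the thresholded boundary stays sharp even when the output resolution greatly exceeds the stored texture resolution, which is the point of the method.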
A line generalization solution is presented based on the operations known as waterlining and medial-axis transformation. Although the solution is of general application, this report focuses on shorelines. The method is shown to preserve the general shape of a line through very broad scale changes; it also makes it possible to perform feature aggregation and elimination where needed. Each scale change is shown to depend on the maximum distance spanned by the waterlining operation, which can be equated to the quantity known as ε in the generalization literature. The challenges encountered in the development of the generalization procedure are discussed; these lie less on the side of line simplification and more in the aggregation of features. Solutions are presented for broadening isthmuses, linking to streams and rivers, and collapsing straits into double and coincident lines. Particular shoreline configurations are shown to lead to ambiguities in feature aggregation and elimination that require user input to be resolved. Intermediate results are found to replicate those submitted 39 years ago by Julian Perkal in his proposal for an objective generalization.
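The waterlining operation itself — parallel lines drawn offshore at increasing distances from the shoreline — can be sketched on a raster as iso-distance bands. The following pure-Python snippet is a simplified illustration of that idea under stated assumptions (a polyline shoreline, a regular grid, brute-force point-to-segment distances), not the report's generalization algorithm:

```python
import math

def point_segment_dist(px, py, ax, ay, bx, by):
    """Euclidean distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    vx, vy = bx - ax, by - ay
    L2 = vx * vx + vy * vy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / L2))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def waterline_bands(shoreline, width, height, spacing, count, tol=0.5):
    """Mark grid cells lying on iso-distance bands parallel to a polyline
    shoreline, at distances spacing, 2*spacing, ..., count*spacing."""
    bands = set()
    for y in range(height):
        for x in range(width):
            d = min(point_segment_dist(x, y, *a, *b)
                    for a, b in zip(shoreline, shoreline[1:]))
            for k in range(1, count + 1):
                if abs(d - k * spacing) < tol:
                    bands.add((x, y))
    return bands
```

The maximum offset `count * spacing` plays the role of the distance that the report equates with ε: features narrower than that span collapse or merge when the bands are drawn.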
Based on a two-component descriptor, a distance label for each point, it is shown that Euclidean distance maps can be generated by efficient sequential algorithms. The map indicates, for each pixel in the objects (or the background) of the original binary picture, the shortest distance to the nearest pixel in the background (or the objects). A map with negligible errors can be produced in two picture scans, each of which includes a forward and a backward sweep over every line. Thus, for expanding/shrinking purposes it may compete very successfully with iterative parallel propagation in the binary picture itself. It is shown that skeletons can be produced by simple procedures, and since these are based on Euclidean distances they can be assumed superior to skeletons based on d4, d8, and even octagonal metrics.
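A minimal pure-Python rendition of the two-scan idea (in the spirit of Danielsson's sequential algorithm, with a simplified sweep order) is sketched below. Each pixel carries the two-component (dx, dy) label to its nearest background pixel, which the forward and backward scans progressively relax from already-visited neighbours:

```python
import math

INF = (10**6, 10**6)  # sentinel label: "no background pixel found yet"

def distance_map(grid):
    """Two-scan Euclidean distance map. grid: rows of 0/1, 0 = background,
    1 = object. Returns per-pixel distance to the nearest background pixel."""
    h, w = len(grid), len(grid[0])
    # vec[y][x] = (dx, dy) offset to the nearest background pixel found so far
    vec = [[(0, 0) if grid[y][x] == 0 else INF for x in range(w)]
           for y in range(h)]

    def relax(y, x, ny, nx):
        # Candidate label via neighbour (ny, nx): its label plus the step offset.
        if 0 <= ny < h and 0 <= nx < w:
            dx, dy = vec[ny][nx]
            cx, cy = dx + (nx - x), dy + (ny - y)
            bx, by = vec[y][x]
            if cx * cx + cy * cy < bx * bx + by * by:
                vec[y][x] = (cx, cy)

    # Forward scan: top-to-bottom, with a left-to-right and a
    # right-to-left sweep in every line.
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y - 1, x - 1), (y - 1, x), (y - 1, x + 1), (y, x - 1)):
                relax(y, x, ny, nx)
        for x in range(w - 1, -1, -1):
            relax(y, x, y, x + 1)
    # Backward scan: the mirror image of the forward scan.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for ny, nx in ((y + 1, x - 1), (y + 1, x), (y + 1, x + 1), (y, x + 1)):
                relax(y, x, y + (ny - y), nx)
        for x in range(w):
            relax(y, x, y, x - 1)

    return [[math.hypot(*vec[y][x]) for x in range(w)] for y in range(h)]
```

Because the labels are true 2D offsets rather than accumulated chamfer weights, the resulting distances are Euclidean (up to the small errors the abstract mentions), which is exactly why the derived skeletons beat d4/d8-based ones.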
Landscape illustrations and cartographic maps depict terrain surfaces in a qualitatively effective way. In this paper, we present a framework of line drawing techniques for automatically reproducing traditional illustrations of terrain by means of slope lines and tonal variations. Given a digital elevation model, surface measures are computed and slope lines of the terrain are hierarchically traced and stored. At run-time, slope lines are rendered by stylized procedural and texture-based strokes. The stroke density of the final image is determined according to the light intensities. Using a texture-based approach, the line drawing pipeline is decoupled from the rendering of the terrain geometry. Our system operates on terrain data at interactive rates while maintaining frame-to-frame coherence.
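The tracing step — following slope lines downhill across a digital elevation model — can be sketched as a discrete steepest-descent walk. This is an illustrative simplification (8-neighbour descent on an integer grid), not the paper's hierarchical tracing:

```python
def trace_slope_line(dem, start, max_steps=100):
    """Trace a discrete slope line: from `start` = (row, col), repeatedly
    step to the lowest of the 8 neighbours until a local minimum is reached.
    dem: list of rows of elevation values."""
    h, w = len(dem), len(dem[0])
    y, x = start
    path = [(y, x)]
    for _ in range(max_steps):
        # Lowest neighbour inside the grid (ties broken by coordinates).
        best = min(((dem[ny][nx], (ny, nx))
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx)
                    and 0 <= (ny := y + dy) < h
                    and 0 <= (nx := x + dx) < w),
                   default=None)
        if best is None or best[0] >= dem[y][x]:
            break  # local minimum: the slope line ends here
        y, x = best[1]
        path.append((y, x))
    return path
```

In a full system, many such paths would be seeded over the terrain, filtered hierarchically for even spacing, and then stylized as strokes whose density follows the lighting.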