Interactive Image Filtering for Level-of-Abstraction Texturing of Virtual 3D Scenes *
Amir Semmo, Jürgen Döllner
Hasso Plattner Institute, University of Potsdam, Germany
Texture mapping is a key technology in computer graphics. For the visual design of 3D scenes, in particular, effective texturing
depends significantly on how important contents are expressed, e.g., by preserving global salient structures, and how their depiction
is cognitively processed by the user in an application context. Edge-preserving image filtering is one key approach to address these
concerns. Much research has focused on applying image filters in a post-process stage to generate artistically stylized depictions.
However, these approaches generally do not preserve depth cues, which are important for the perception of 3D visualization
(e.g., texture gradient). To this end, filtering is required that processes texture data coherently with respect to linear perspective and
spatial relationships. In this work, we present an approach for texturing 3D scenes with perspective coherence by arbitrary image
filters. We propose decoupled deferred texturing with (1) caching strategies to interactively perform image filtering prior to texture
mapping and (2) for each mipmap level separately to enable a progressive level of abstraction, using (3) direct interaction interfaces
to parameterize the visualization according to spatial, semantic, and thematic data. We demonstrate the potentials of our method by
several applications using touch or natural language inputs to serve the different interests of users in specific information, including
illustrative visualization, focus+context visualization, geometric detail removal, and semantic depth of field. The approach supports
frame-to-frame coherence, order-independent transparency, multitexturing, and content-based filtering. In addition, it seamlessly
integrates into real-time rendering pipelines, and is extensible for custom interaction techniques.
Keywords: image filtering, level of abstraction, texturing, virtual 3D scenes, visualization, interaction, focus+context interfaces
1. Introduction
Common 3D scenes are characterized by their visual complexity. In particular, the appearance of most objects is defined by textures. Prominent examples of texture-based 3D scenes include virtual 3D building, city, and landscape models or environments used for gaming applications. Texture mapping is a key technology in today's graphics hardware for the visual design of 3D scenes [ ]. In particular, it provides important functionalities used in both photorealistic and non-photorealistic real-time image synthesis. Common texture maps encode color, diffuse, normal, or displacement information as surface properties to enrich shading and lighting effects (e.g., for the building façades in Figure 1). Rendering these properties in detail, however, does not automatically lead to high image quality, for instance with respect to the effectiveness of the information transfer to the user [2]. For example, approaches in non-photorealistic rendering (NPR) particularly take into account a user's background, task, and perspective view, facilitating the expression, recognition, and communication of important or prioritized information [3, 4, 5, 6].
To improve image quality of textured 3D scenes, important
contents of textures should be emphasized while less signifi-
cant details should be removed—a challenging task because
feature contours and global salient structures must be preserved.
(This is the authors' version of the work. The definitive version will be published in Computers & Graphics, 2015. doi: 10.1016/j.cag.2015.02.001.)
A promising approach to address this problem is edge-preserving
image filtering. Popular image filters serve as smoothing or enhancing operators such as the bilateral filter [ ] and difference of Gaussians [ ]. Previous works have focused on applying these
filters in a post-process stage on the rendered results to foster an
artistically stylized rendering [9]. These approaches are able to
smooth low-contrast regions and preserve high-contrast edges;
however, they are generally not able to preserve depth cues
important for perceiving model contents as three-dimensional
(e.g., occlusion, texture gradient). For instance, the fine granular
patterns on the ground and rooftops of the 3D scene shown in
Figure 1 are not preserved because spatial relationships and lin-
ear perspective are not considered, and fine objects may become
indistinguishable from the background. For effective visual information encoding, “good image filtering” needs to preserve
these cues to help perceive relative positions, sizes, distances,
and shapes more clearly [10, 11].
This work explores level-of-abstraction (LoA) texturing and
its interactive parameterization by means of image filtering,
which refers to adapting the spatial granularity at which 3D
scene contents should be represented [ ]. Our contributions are
based on the following key aspects:
(1) Filtering should be perspectively coherent.
Linear perspective, occlusion, and texture gradient are effective cues for humans to infer depth [ ]. Texture mapping considers these
cues in perspective projections by foreshortening and scaling.
Figure 1: Exemplary results of a flow-based difference-of-Gaussians filtering (FDoG) for a textured 3D scene, rendered in real-time using our framework. The closeups and scanline plots illustrate the high accuracy for texture gradients when using our proposed method (red line) instead of conventional filtering in a post-process stage on the rendered color image (green line).
Therefore, our approach performs image filtering prior to tex-
ture mapping via decoupled deferred texturing. It is a simple
yet effective method of preserving these cues without requiring
modifications of the original filter algorithms.
(2) Filtering should be interactively parameterized.
To serve
the different interests of users in prioritized information, the
LoA should be dynamically adapted to a user’s context, which
requires specialized interaction techniques for parameterization.
To this end, the proposed approach is coupled with an extensible
interaction framework to parameterize a context-aware image
filtering according to spatial, semantic, and thematic data. The
framework provides intuitive interfaces such as touch-based or
natural language inputs via textual descriptions. In addition,
interactive frame rates should be maintained to provide a re-
sponsive system during interaction and navigation in 3D space
(e.g., dynamic viewing situations, regions of interest). For this
reason, we contribute per-fragment and progressive filtering together with caching strategies to enable real-time frame rates for local image filters without requiring texture data to be pre-processed.
In image-based artistic rendering it is often desired to give the
impression that texture features have been applied (painted) on
a flat canvas [ ]. Instead, this paper draws upon the potentials
of image filtering for visualization purposes including cognitive
principles (here: psychological design aspects) and direct user
interaction for parameterization; two fields of research that pose
contemporary challenges for the NPR community [ ]. Accordingly, we want to preserve a perspective-coherent scale of texture
features, an approach also practiced by artists to enhance depth
sensation (Figure 2). The benefits are shown for applications
such as focus+context visualization, illustrative visualization,
and geometric detail removal. Because filtering is performed
in texture space, however, no explicit geometric abstraction is
applied, for which specialized rendering techniques are required.
This paper represents an extended journal version of the
Figure 2: The painting “Paris Street, Rainy Day” (1877) by Gustave Caillebotte
(source: Google Art Project). The artist carefully uses texture gradient with
linear perspective to enhance depth sensation. Notice how the cobblestones get
progressively smaller in depth until they are completely smooth.
CAe 2014 paper by Semmo and Döllner [ ]. Besides a significant revision of all sections, we particularly expand on the
fields of user interaction for NPR. With respect to this, the fol-
lowing three major revisions have been made: (1) Section 3
expands our review of related work, now including topics such
as interaction design and user interfaces for 3D virtual environ-
ments, (2) the new Section 5 proposes an integrated, extensible
interaction framework to intuitively parameterize our methods,
and (3) Section 6.2 expands our original applications by using
touch and natural language interfaces, where we demonstrate
the versatile application of our framework to geospatial tasks,
including exploration, navigation, and orientation. Further, we
expand Section 6.2 on visualization results and applications to
provide a more elaborate discussion of our techniques with re-
spect to an effective 3D information transfer. Furthermore, the new Section 6.3 provides a quantitative analysis of visual clutter reduction and visual saliency to evaluate the effect of our im-
age filtering methods for focus+context visualization. Finally,
Section 7 provides an updated prospect on future work.
Figure 3: Exemplary textured 3D scene for which the flow-based bilateral filter is applied: (1) on the original output, (2) on each mipmap level prior to texture mapping. The scanline plots illustrate the differences in regions of high texture LoD and occlusion, where the second approach clearly preserves texture gradients and object boundaries.
The remainder of this extended paper is structured as fol-
lows. Section 2 explains the relevance of perspective coherence
for image filtering in 3D perspective views. Section 3 reviews
related work on image filtering, non-photorealistic rendering,
effective 3D information transfer, and user interaction. Section 4
presents the technical approach to deferred texture filtering, pa-
rameterized via our extensible interaction framework presented
in Section 5. Applications for our methods together with in-
teraction techniques are presented, discussed, and evaluated in
Section 6. Finally, Section 7 concludes this paper.
2. Background – Why Perspective Coherence Matters
This section motivates perspective coherence for LoA texturing.
The example 3D scene shown in Figure 3 is based on a flow-
based bilateral filter [ ]. The input textures were converted to a low-pass filtered mipmap pyramid [ ] used for antialiasing. The filtering was performed (1) in a post-process stage on
the rendered results and (2) in texture space prior to perspec-
tive projection. Given a texture T within the RGB color space as input, the first approach is described by an image filtering F performed after sampling the convolution of T via mipmapping, with P being a projector function that maps texture information into the domain of the rendered image I : R² → R³, and K being the mipmap kernel:

    I1 = F( P( K ∗ T ) ).    (1)
While this approach works in a similar manner as pre-filtering a
texture in regions of high texture level of detail (LoD), it is not
able to preserve structures in regions of low texture LoD because
only compressed information serves as input for the image filtering kernel F. This effect can be observed in the intensity plots
along the yellow scanline shown in Figure 3 and in the concep-
tual overview depicted in Figure 4 (top row). To preserve depth
cues, instead, image filtering requires access to high-frequency
texture information prior to perspective projection.
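To make the distinction concrete, the two operator orders can be contrasted in a few lines (a minimal NumPy sketch; the box kernels merely stand in for the mipmap kernel K and an actual edge-preserving filter F, so all function names are illustrative, not part of our system):

```python
import numpy as np

def box_down(img, factor):
    """Mipmap-style minification: the convolution K followed by sampling."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def smooth(img, radius=1):
    """Stand-in for the image filter F (a small box blur with wrapped borders)."""
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

rng = np.random.default_rng(7)
T = rng.random((64, 64))          # high-frequency input texture

I1 = smooth(box_down(T, 8))       # order of Eq. (1): filter after projection
I2 = box_down(smooth(T), 8)       # order of Eq. (2): filter the full-resolution data

# The two orders do not commute: in (1) the filter only ever sees the
# already-compressed signal, while in (2) it has access to the original details.
assert not np.allclose(I1, I2)
```

Both orders preserve the global mean intensity, but only the second lets the filter operate on the uncompressed texture, which is exactly what the depth cues require.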
Figure 4: Comparison between post-process filtering (top row) and texture-space filtering (bottom row) using the FDoG filter [ ], with mipmapping enabled for both approaches.
In the second approach, mipmap levels are filtered separately prior to texture mapping to approximate the filtering effect of (1), with K′ being the mipmap kernel that uses the pre-filtered mipmap levels:

    I2 = P( K′ ∗ F(T) ).    (2)
Figure 3 (scanline plots) and Figure 4 (bottom row) demonstrate
that this approach naturally preserves texture gradients and ob-
ject boundaries when trilinearly filtering the respective mipmap
levels. While this is not a new approach in 3D computer graphics
(e.g., used for pre-filtering shadow maps [ ]), artists use similar
principles in their work to enhance the sensation of depth by
adapting the detail and size of texture features with the linear
perspective (Figure 2).
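In pseudocode terms, "filtering each mipmap level separately" amounts to building the pyramid first and only then applying F per level (a NumPy sketch; the 2×2 box reduction and the blur are placeholders for the renderer's actual kernels):

```python
import numpy as np

def half(img):
    """One 2x2 box-filtered mipmap reduction step."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def blur(img):
    """Placeholder for the edge-preserving filter F applied to one level."""
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def filtered_pyramid(T):
    """Filter every mipmap level Tm0 ... Tmn separately, prior to mapping."""
    levels = [T]
    while levels[-1].shape[0] > 1:
        levels.append(half(levels[-1]))
    return [blur(level) for level in levels]

pyr = filtered_pyramid(np.random.default_rng(1).random((32, 32)))
```

Trilinear interpolation between two such pre-filtered levels then yields the progressive level of abstraction described above.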
We contribute an interactive framework that implements
Equation 2 with optimization techniques to enable real-time
performance for local image filters. The framework uses con-
ventional mipmapping and projective texturing capabilities of
Figure 5: Image filters applied to a 3D scene using our system (filters shown: Domain Transform / Recursive Filter [Gastal and Oliveira 2011], Coherence-Enhancing Filter [Kyprianidis and Kang 2011], Real-Time Video Abstraction [Winnemöller et al. 2006], L0 Smoothing [Xu et al. 2011], Watercolor Rendering, and FDoG [Kyprianidis and Döllner 2008]). The top row shows the original rendering output and texture LoD, the first column the results of post-process filtering (pp), and the second column our proposed method (pm). The texture gradients on the floor, the textured banners, and the lion figure in the back are aggressively smoothed when filtered in a post-process stage, while our approach preserves their structures and overall object borders without compromising the filters' qualities. (Sponza Atrium scene ©Marko Dabrovic and Frank Meinl from Crytek. All rights reserved.)
the hardware-accelerated rendering pipeline to inherently take
the linear perspective, occlusions, and texture gradients into ac-
count. Using our approach with state-of-the-art edge-preserving
image filters, we performed a comparison between the filtering
approaches (1) and (2). Results are presented in Figure 5 and the
accompanying video. The comparison was carried out qualita-
tively on the basis that texture gradients and occlusions induced
by linear perspective are preserved. The results demonstrate
how each filter’s capability to preserve texture gradients and oc-
clusions is improved, while the filters’ qualities with respect to
image smoothing or enhancing remain unaffected. For instance, the ringing artifacts of the red banner in Figure 5 are reduced with the coherence-enhancing filter [ ], the texture gradient on the floor and its transition to the back wall are preserved with bilateral filtering [ ] and gradient minimization [ ], and the
overall structures are emphasized with much higher precision.
3. Related Work
Related work is found in the fields of edge-preserving image fil-
tering, texture mapping and filtering for 3D scenes, focus+context
visualization, and interaction in 3D virtual environments.
3.1. Edge-preserving Image Filtering
Edge-preserving filtering emerged as an essential building block
to reduce image details without loss of salient structures. Many
filters have been proposed and explored using image abstraction and highlighting [ ], and thus are of major interest for our work. Typical approaches operate in the spatial domain, use a kind of anisotropic diffusion [ ], and are designed for parallel
execution. We have implemented a range of local filters in our
prototype to demonstrate how they can be used for real-time
filtering of 3D scene contents for progressive LoA texturing.
A popular choice is the bilateral filter, which works by weighted averaging of pixel colors in a local neighborhood based on their distances in space and range [ ]. This method has been used for real-time image-based artistic rendering [ ] and enhanced by flow-based implementations [ ] adapted to the local image structure to provide smooth outputs at curved boundaries. Because the bilateral filter may fail when used in high-contrast images, we also explored the usage of the anisotropic Kuwahara filter [ ] to maintain a uniform LoA due to local area flattening, and coherence-enhancing filtering based on directional shock filtering [ ] (see Figure 5). Because our approach
is designed for generic application without requiring modifica-
tions of the original algorithms, future extensions are easy to
implement. Additional categories include mean-shift filtering
for discontinuity-preserving smoothing [ ] and saliency-guided image abstraction [ ], morphological filtering based on dilation and erosion (e.g., for watercolor rendering [ ]), and geodesic filtering using distance transforms [30, 31].
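For reference, the brute-force form of the bilateral filter named above follows directly from its definition (a CPU sketch; σd, σr, and the window radius are illustrative values, not parameters of our renderer):

```python
import numpy as np

def bilateral(img, sigma_d=2.0, sigma_r=0.2, radius=4):
    """Weights each neighbor by spatial distance AND range (intensity) distance."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_d ** 2))
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - img[y, x]) ** 2 / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[y, x] = (weights * win).sum() / weights.sum()
    return out

# A noisy step edge: the noise is smoothed while the 0 -> 1 edge survives,
# because neighbors across the edge receive a near-zero range weight.
x = np.linspace(0, 2 * np.pi, 16)
noise = 0.05 * np.sin(7 * x)[None, :].repeat(16, axis=0)
step = (np.arange(16)[None, :] >= 8).astype(float).repeat(16, axis=0)
result = bilateral(step + noise)
```

The flow-based and anisotropic variants cited above refine exactly this weighting by orienting the kernel along the local image structure.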
Edge detection is based on finding zero-crossings or thresh-
olding the gradient magnitude of an image. A popular choice
is the difference-of-Gaussians (DoG) filter [ ] and its enhanced flow-based variants [ ] to create smooth coherent outputs for line and curve segments. Complemented by an image-space edge detection using geometry information [ ], we used
the DoG filter to enhance important edges based on both geom-
etry and texture information. Our method produces accurate
filtering results with respect to linear perspective to preserve
texture gradients (Figure 1), performs in real-time, and provides
frame-to-frame coherence.
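As a point of reference, the isotropic DoG response underlying these flow-based variants is simply the difference of two Gaussian-blurred copies of the image (a sketch; σ and the 1.6 scale ratio follow the common Marr–Hildreth convention and are not parameters of our system):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with zero padding at the borders."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    conv = lambda row: np.convolve(row, k, mode='same')
    tmp = np.apply_along_axis(conv, 0, img)   # blur along columns
    return np.apply_along_axis(conv, 1, tmp)  # then along rows

def dog(img, sigma=1.0, ratio=1.6):
    """Difference-of-Gaussians: responds near intensity edges, ~0 elsewhere."""
    return gaussian_blur(img, sigma) - gaussian_blur(img, ratio * sigma)

# Vertical step edge between columns 15 and 16: the response concentrates there.
img = (np.arange(32)[None, :] >= 16).astype(float).repeat(32, axis=0)
response = dog(img)
```

Thresholding or zero-crossing detection on this response yields the line segments; the flow-based variants additionally integrate the response along the smoothed structure tensor field.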
Recent filters focused on image decompositions by using
optimization methods such as weighted least squares [ ], local extrema for edge-preserving smoothing [ ], locally weighted histograms [ ], domain transforms [ ], gradient minimizations [ ], guided filtering [ ], region covariances [ ], and, most lately, modified bilateral filtering [ ]. Although most
of them do not provide interactive frame rates, we have im-
plemented GPU versions of a variety of these algorithms to
demonstrate how they can serve perspective-coherent filtering
of diffuse, normal, and ambient occlusion maps for geometric
detail removal and illustrative visualization.
3.2. Texture Mapping and Filtering for 3D Scenes
Using texture-based methods for coherent stylization is not a
new approach. For an overview on this topic we refer to the
survey by Bénard et al. [ ]. Relevant object-space methods reduced perspective distortions and scale variations via mipmapping (i.e., art maps [ ]) and infinite zoom mechanisms [ ] to maintain a quasi-constant size of texture elements for arbitrary view distances. Conceptually, our approach behaves very
similar to the art maps approach but generalizes the idea of
using mipmapping for coherent stylization with respect to im-
age filtering in two ways: (1) instead of using pre-designed
mipmap levels or relying on procedural modeling, our approach
interactively filters high-detail texture information at run-time
(e.g., photorealistic imagery), (2) our approach enables a user-
defined LoA texturing (e.g., for focus+context visualization) that
is not limited to color maps but also copes with multiple layers
of textures used for visualization (e.g., normal maps, thematic
maps). Previous approaches used bilateral, DoG, and Kuwahara filtering with G-buffer information [ ] in a post-process stage to express uncertainty [ ], direct a viewer's gaze to certain image regions [ ], and convey different emotional and experiential representations [ ]. Our approach also uses G-buffer information for deferred rendering, but decouples filtering
from shading to preserve texture gradients and object boundaries
when mapping the filtered results.
Previous work has supported the assertion that perspective
is commonly used as an important source of depth information [ ]. In particular, surface textures are essential components of 3D scenes to judge distances, shapes, and spatial layouts [ ]. A large body of research is dedicated to the way human sensory systems process these pictorial depth cues [ ]; however, there are only a few works that reflect the importance of preserving them when filtering information. A recent study [ ] showed that using the results of a DoG filter on diffuse maps significantly improved depth perception in thematic 3D visualization. We demonstrate similar approaches to
edge enhancement, but with a filtering that can be interactively
parameterized at run-time.
To control the detail of 3D shapes via parameterized lighting, major related work is found in cartoon shading [ ] that supports view-dependent effects (e.g., LoA, depth of field). We propose LoA texturing of photorealistic diffuse maps as an orthogonal approach to these works. In addition, we demon-
strate how salient structures of textures can be preserved using
flow-based image filters for stylized shading and lighting.
3.3. Focus+Context Visualization
LoA texturing, as is proposed in this work, provides effective means for focus+context visualization. Focus+context describes the concept of visually distinguishing important or relevant information from closely related information [ ]. Focus+context visualization conforms with the visual information-seeking mantra [ ] by enabling users to interactively change the visual representation of data for points and regions of interest using highlighting techniques while maintaining a context for orientation guidance, i.e., to solve the problem of over-cluttered visual representations. It has the potential to improve the perception of prioritized information [ ] and direct a viewer's focus to certain image regions [ ]. A common method is to parameterize image filters according to view metrics (e.g., view distance [ ]) or by explicitly defined regions of interest [ ] to select and seamlessly combine different LoA representations of 3D scene contents [ ]. In this paper, we demonstrate our interactive approach on several focus+context examples. In particular, our methods are able to enhance several applications by incorporating texture information coherently with respect to linear perspective, including the stylized focus pull [ ] and semantic depth of field [ ] effects for information highlighting and abstraction [58].
3.4. Interaction in 3D Virtual Environments
Many systems use a classical mouse/keyboard setup with a graphical user interface to help inspect and parameterize a visualization of 3D scene contents [ ]. Direct manipulation is typically performed via ray casting to determine intersections of a pointing device with the visualized output (e.g., to specify regions of interest [ ]). Due to the increasing availability of ubiquitous mobile devices such as smartphones and tablets, visualization systems also increasingly make use of the opportunities of (multi-)touch interaction [ ]. Evaluations showed that these interfaces provide a quite natural, direct interaction within 3D virtual environments [ ], and may outperform mouse input for certain tasks in terms of completion times [ ]. However, touch user interfaces also pose certain challenges such as the intuitive mapping from 2D input to 3D manipulations [ ]. Our work provides an extensible interface for user interaction to parameterize image filtering on a spatial, thematic, and semantic basis of the visualized scene contents, e.g., via mouse, touch gestures, or textual input. The interaction framework integrated in our approach is practically used for exploration, navigation, and analysis tasks performed with virtual 3D city models. In particular, we provide a blueprint for decoupling interaction from concrete visualization techniques using concepts of image-based rendering, G-buffers [ ], and distance transforms for parameterization [ ], and thus provide an extensible system design for software developers.
Finally, we use our interaction framework to directly parameterize our approach according to explicit and implicit view metrics (e.g., region of interest, view distance). Our interaction framework explores interface schemes to allow users to attain both focused and contextual views of their information spaces. In particular, we use interactive lenses as established means to facilitate the exploration of complex 3D scenes [ ], which are quite versatile in their parameterization. Previous approaches have been provided via the magic lens metaphor [ ] and extended for 3D scenes [ ]. The concept has also been explored for [ ] and 3D virtual environments (e.g., for geospatial tasks, such as navigation, landscaping, and urban city planning [ ]) to reveal information that is hidden in high-dimensional data sets. We demonstrate how LoA texturing can be parameterized via the magic lens metaphor to provide context-based style variances for information highlighting and abstraction.
4. Method
An overview of our approach is shown in Figure 6. It is de-
signed for generic application, can be seamlessly integrated into
existing 3D rendering systems and pipelines, is extensible for
arbitrary 2D image filters, and has no particular requirements re-
garding the consistency of the input geometry (e.g., triangulated
3D meshes vs. 3D point clouds). Additional attributes (e.g., se-
mantics) and textures for appearance and thematic information
can be processed for content-based filtering and multitexturing
(e.g., using CityGML [71] for 3D geospatial models).
Our implementation performs filtering prior to texture map-
ping, for which decoupled deferred texturing is proposed (Sec-
tion 4.1). Filtering is performed using GPGPU separately on
each mipmap level for LoA texturing (Section 4.2). The imple-
mentation enables real-time performance for local image filters
via per-fragment and progressive filtering using visibility in-
formation, together with a caching mechanism to ensure that
image parts are only filtered once for a given configuration (Sec-
tion 4.3). Progressive filtering is based on a computational budget that is applied per rendered frame to ensure a responsive system during navigation or interaction. Parameter sets of image filters can be defined per texture channel (e.g., diffuse vs. normal maps) and may be used to define layers with different levels of abstraction. These are blended according to view metrics or regions of interest for focus+context visualization. All geometry and texture buffers are represented as stencil-routed A-buffers [ ] to support an order-independent image blending of filtering effects (e.g., blueprint rendering styles [ ]). Optional post-processing may be performed in screen space and combined with the filtering results (Section 4.4), such as for depth of field.
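The caching and budgeting scheme can be illustrated with a small host-side sketch (the class, field names, and LRU policy are illustrative; the actual implementation keeps the filtered mipmap levels in GPU texture memory):

```python
from collections import OrderedDict

class FilterCache:
    """Filter each (texture, mip level) at most once per parameter set,
    spending at most a fixed computational budget per rendered frame."""

    def __init__(self, budget_per_frame=2, capacity=64):
        self.store = OrderedDict()   # (tex_id, level, params) -> filtered data
        self.capacity = capacity
        self.budget = budget_per_frame
        self.spent = 0

    def begin_frame(self):
        self.spent = 0               # the budget is applied per rendered frame

    def request(self, tex_id, level, params, filter_fn, data):
        key = (tex_id, level, params)
        if key in self.store:        # already filtered for this configuration
            self.store.move_to_end(key)
            return self.store[key]
        if self.spent >= self.budget:
            return None              # progressive: caller falls back to the
        self.spent += 1              # unfiltered level until a later frame
        result = filter_fn(data, params)
        self.store[key] = result
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return result
```

Cache hits cost no budget, so once a view stabilizes, all visible levels are served from the cache and the per-frame filtering work drops to zero.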
The approach can be parameterized at multiple stages to
give explicit control over the filtering process, as well as im-
plicitly by view metrics (e.g., viewing distance). In addition, it integrates an interaction framework to parameterize the filtering effects more directly and concisely, e.g., using interaction techniques such as (multi-)touch gestures or a natural language interface to automatically parameterize magic lenses or highlighting techniques. Here, the main idea is to decouple the interaction
Figure 6: Schematic overview of our framework designed for interactive LoA texturing. The approach decouples the interaction interface and image filtering from
rendering to provide an extensible architecture. Further, it performs the filtering prior to texture mapping to process texture data coherently with respect to linear
perspective, and to preserve spatial relationships. (Jerry the Ogre ©Keenan Crane. All rights reserved.)
Figure 7: Layout of the virtual page table used for texturing of original and filtered texture data on the GPU (per texture: dimensions, wrap modes, number of mipmap levels, and a 64-bit virtual address per mipmap level).
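Mirrored on the host side, one entry of this page table could look as follows (a sketch with hypothetical field names; on the GPU the addresses are 64-bit bindless texture handles):

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    width: int
    height: int
    wrap_x: str            # wrap mode for filtering across borders, e.g. "clamp" or "mirror"
    wrap_y: str
    level_addresses: list  # one 64-bit virtual address per mipmap level

def register_texture(table, width, height, wrap_x, wrap_y, addresses):
    """Assigns the next unique texture identifier and stores the entry."""
    tex_id = len(table)
    table[tex_id] = PageTableEntry(width, height, wrap_x, wrap_y, addresses)
    return tex_id

table = {}
tid = register_texture(table, 512, 512, "clamp", "mirror",
                       [0x1000 + i for i in range(10)])  # 10 levels for a 512 texture
```

The identifier returned here is what the geometry stage later writes into the G-buffer, so the filtering stage can resolve any referenced mipmap level without binding textures.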
interface from rendering, and parameterize uniform buffers or
projected distance maps as an abstraction layer in-between. In-
teraction devices and techniques are interpreted according to a
pre-configured interaction mode, and mapped to a functional
description of focus+context definitions. These definitions are
evaluated in the deferred rendering stage to dynamically com-
pute importance masks and perform image filtering according to
a user-defined stylization preset.
In the following, we first focus on the technical aspects of the
filtering processes that are performed on the GPU. Afterwards,
the interaction framework used for parameterization is described
in more detail (Section 5).
4.1. Decoupled Deferred Texturing
Our goal is to provide high-quality, interactive LoA visualization
of general, textured 3D scenes by image filtering that complies
with the key aspects named in Section 1. To this end, texture
mapping is decoupled from the geometry pass and postponed by
deferred texturing to transfer the filtering to texture space (Sec-
tion 2).
First, in a pre-process stage, textures are assigned unique identifiers. After computing the mipmap pyramids, a set of levels (Tm0, Tm1, ..., Tmn) is defined per scene texture. The
mipmap levels are transferred to GPU texture memory and refer-
enced by their virtual address to enable bindless texturing with random read/write access.
Figure 8: Conventional deferred rendering performs texture mapping in the geometry stage (top). By contrast, we propose decoupled deferred texturing for perspective-coherent filtering (bottom).
In addition, these virtual addresses
are used for virtual texturing to enable a dynamic resolution of texture identifiers during processing. We use a page table with the memory layout shown in Figure 7. For each registered texture, this table references its dimensions, the wrap modes required to handle filtering across texture borders (e.g., clamped vs. mirrored), and the virtual address of each mipmap level. The total number of mipmaps is adapted to the maximum texture dimension that can be processed by the GPU. Afterwards, rendering is performed in a series of three stages:
R1 Geometry Stage: Conventional deferred rendering stores all processed surface information in a G-buffer [ ], including texture sampling to synthesize diffuse, normal, and thematic maps used for visualization. By contrast, our implementation computes additional buffers related to texture information: identifiers, coordinates, and mipmap LoD (Figure 8). Texture identifiers are assigned once in a pre-process stage and uniquely reference texture objects linked to the virtual address space.
R2 Filtering Stage: Texture parts required for rendering are sampled and filtered using the G-buffer information of the first stage. The results are written to separate texture buffers and used together with the original texture data as input for shading.
R3 Shading Stage: The results of the geometry and filtering stages are used for deferred texture mapping. At this stage, the texture information stored in the G-buffer is used to easily toggle between original and filtered texture variants. Optional filtering in screen space may be performed for post-processing effects, such as edge detection for contour enhancement or Gaussian filtering for depth of field.
The following sections describe the filtering and shading stages
(R2/R3) in more detail.
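As an illustration, the virtual page table of Figure 7 can be mirrored on the CPU side roughly as follows. This is a minimal sketch; the struct and function names are hypothetical and not the paper's API, and the actual table lives in GPU memory with 64-bit bindless handles as the per-level addresses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical CPU-side mirror of the virtual page table (Figure 7):
// per texture, its dimensions, wrap modes, one virtual (bindless) address
// per mipmap level, and one boolean processing flag per mipmap level.
enum class WrapMode : std::uint8_t { Clamp, Mirror, Repeat };

struct PageTableEntry {
    std::uint32_t width  = 0;
    std::uint32_t height = 0;
    WrapMode wrapX = WrapMode::Clamp;
    WrapMode wrapY = WrapMode::Clamp;
    std::vector<std::uint64_t> mipAddress;  // int64 virtual address per level
    std::vector<bool> processed;            // processing flag per level (F2/F3)
};

// Number of mipmap levels down to 1x1, adapted to the texture dimensions.
std::uint32_t mipLevelCount(std::uint32_t w, std::uint32_t h) {
    std::uint32_t levels = 1;
    while (w > 1 || h > 1) {
        w = std::max(1u, w / 2);
        h = std::max(1u, h / 2);
        ++levels;
    }
    return levels;
}
```

For instance, a 1024×1024 texture occupies eleven mipmap levels, so its entry would hold eleven virtual addresses and eleven processing flags.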
4.2. Image Filtering
Once the G-buffer has been computed, the following three basic stages are performed for each render pass:
F1 G-buffer mapping: The G-buffer is mapped as an arrayed 2D graphics resource, where each pixel (p) is mapped to the texturing information (TID, Tcoord, Tlod).
F2 Texture prefetch: Relevant textures required for rendering are determined. For each fragment in the G-buffer, the correspondent mipmap levels (T⌊lod⌋, T⌈lod⌉) are computed in parallel and stored in a global structure using atomic operations. To avoid redundant filtering, a processing flag is set for each mipmap level in the virtual page table (Figure 7).
F3 Image filtering: Each unique mipmap level is filtered with the correspondent image filter and configuration. Additional borders may be introduced according to the wrap modes to avoid visible seams when texturing. The results are written to the destination buffer and the processing flag is set.
Figure 9: Example of LoA texturing by means of trilinear and DoG filtering: (top) downsampled filtered input, (bottom) input downsampled and then filtered per mipmap level.
Figure 10: Local filtering is performed only for visible texture parts as a performance optimization, whereas unused texture parts remain uninitialized (red). Geometry outside the view frustum (bottom left) is only textured for reused parts.
Image filtering (F3) is performed separately on each mipmap level. Afterwards, the results are blended in the shading stage by trilinear filtering. Compared to filtering the highest mipmap level and downsampling the intermediate results, this approach enables a progressive LoA texturing (Figure 9). In this manner, visual clutter can be reduced significantly in areas of high perspective compression without requiring the adaptation of filter parameters such as kernel sizes. Special care is required when mipmapped texture atlases are used for filtering to avoid bleeding across image tiles. Typically, this problem is addressed by introducing additional space between tiles, but this does not ultimately solve the problem for wide filter kernels or global operations. Virtual and bindless texturing alleviate this problem but require using non-packed textures as input. An alternative approach may define tile masks per mipmap level for thresholding; however, this method increases the memory footprint.
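The reason per-level filtering differs from filtering once and downsampling is that non-linear filters do not commute with downsampling. A minimal 1D illustration, using a hypothetical hard-quantization "filter" and pair-averaging as a stand-in for one mipmap reduction step:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical non-linear "image filter": hard quantization to {0, 1}.
double quantize(double v) { return v > 0.5 ? 1.0 : 0.0; }

// One mipmap reduction step in 1D: average adjacent pairs.
std::vector<double> downsample(const std::vector<double>& in) {
    std::vector<double> out;
    for (std::size_t i = 0; i + 1 < in.size(); i += 2)
        out.push_back(0.5 * (in[i] + in[i + 1]));
    return out;
}

// Apply the filter to every sample of a level.
std::vector<double> filter(std::vector<double> level) {
    for (double& x : level) x = quantize(x);
    return level;
}
```

For the signal {0.25, 0.75}, filtering the finest level and then downsampling yields 0.5, which is no longer a valid quantized value, whereas downsampling first and then filtering the coarser level yields 0.0, so only the per-level variant keeps a consistently quantized look at every LoD.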
4.3. Optimization Techniques and Enhancements
Dependent on the computational complexity of an image filter,
the filtering stage (F3) could take significant processing time and
stall the rendering stage. To this end, we introduce per-fragment
filtering, caching, and progressive filtering for optimization.
Per-fragment Filtering. Because in most cases only small parts of textures are used for rendering, and a lower texture LoD is selected in background regions of 3D perspective views, visibility-driven filtering allows for significant performance improvements, i.e., the computational overhead is amortized compared with screen-space filtering. Therefore, per-fragment filtering is introduced by using the G-buffer to naturally process only the information required for rendering (Figure 10). This is achieved by integrating the texture prefetch into the filtering process, for which the texels required for trilinear filtering are determined for each fragment synthesized after rasterization. For trilinear filtering, this involves the four related texels used to perform bilinear filtering on each mipmap level (i.e., up to eight texels in total). The correspondent filtered values are then computed in parallel. Here, we use a top-down filtering approach, i.e., information required for filtering is recursively resolved on demand during processing.
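The up-to-eight texels referenced per fragment can be gathered as sketched below. This is a CPU-side illustration under simplifying assumptions (clamped addressing, square power-of-two textures); the names are hypothetical, not the paper's API.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Texel { std::uint32_t level, x, y; };

// Collect the up-to-eight texels touched by trilinear filtering at the
// normalized coordinate (u, v) and fractional mipmap LoD: the 2x2 bilinear
// footprint on level floor(lod) and on level ceil(lod).
std::vector<Texel> trilinearFootprint(double u, double v, double lod,
                                      std::uint32_t baseSize) {
    std::vector<Texel> texels;
    const std::uint32_t l0 = static_cast<std::uint32_t>(std::floor(lod));
    const std::uint32_t l1 = static_cast<std::uint32_t>(std::ceil(lod));
    for (std::uint32_t level : {l0, l1}) {
        const long size = std::max(1L, static_cast<long>(baseSize >> level));
        const double tx = u * size - 0.5, ty = v * size - 0.5;  // texel space
        const long x0 = static_cast<long>(std::floor(tx));
        const long y0 = static_cast<long>(std::floor(ty));
        for (long dy = 0; dy <= 1; ++dy)
            for (long dx = 0; dx <= 1; ++dx) {
                const long x = std::clamp(x0 + dx, 0L, size - 1);  // wrap: clamp
                const long y = std::clamp(y0 + dy, 0L, size - 1);
                texels.push_back({level, static_cast<std::uint32_t>(x),
                                         static_cast<std::uint32_t>(y)});
            }
        if (l0 == l1) break;  // integer LoD: only one 2x2 footprint
    }
    return texels;
}
```

An integer LoD thus yields four texels, while a fractional LoD yields the full eight-texel footprint that the prefetch has to mark in the page table.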
Figure 11: Filtered texture parts are cached (green), reused for subsequent frames, and combined with newly filtered parts (blue).

Figure 12: Example of a textured 3D scene that is progressively filtered using a computational filtering budget per render pass to maintain interactive frame rates. (Sponza Atrium scene © Marko Dabrovic and Frank Meinl from Crytek. All rights reserved.)
Caching. A caching mechanism is used so that texture parts are only filtered once for a given configuration and reused for subsequent render frames (Figure 11). The virtual page table (Figure 7) references additional image masks (Tp0, Tp1, ..., Tpn) per mipmap level that indicate whether a pixel has already been filtered.
Progressive Filtering. Some local and many global filters that solve an optimization problem do not perform at interactive frame rates. For progressive filtering, the highest mipmap levels are filtered first and used as a fallback. Our system uses a computational budget that is applied per rendered frame and can be interactively configured. Detail information is then progressively blended in subsequent render frames (Figure 12). This procedure also ensures a responsive system during interaction, e.g., when adapting filter parameters at run-time. In addition, it paves the way for an easy deployment on multi-GPU systems to decouple computationally expensive global filters from rendering, but this remains subject to future work (Section 7).
The filtering kernel for both optimization techniques is summarized in Algorithm 1. Using these techniques, the computational cost of processing each texel can be amortized over post-processing because, on the one hand, pre-filtered texture information serves as input via mipmapping and, on the other hand, the caching strategies avoid reprocessing. Our performance evaluation in Section 6.4 demonstrates that this enables local image filters to process textured 3D scenes at real-time frame rates.
Algorithm 1: Per-fragment filter kernel for texture data

Input: G-buffer G, texture page table P with color lookup PC and
       process-flag lookup PF, filtering budget B
 1  function local_image_filtering: begin
 2    k ← 0                               /* global number of texels filtered */
 3    for pixels p ∈ G do in parallel
 4      (ID, lod, u, v) ← G(p)            /* sample G-buffer */
 5      if ID = 0 then continue           /* early out */
 6      (T0, T1) ← P(ID, ⌊lod⌋ and ⌈lod⌉) /* mipmap LoDs */
 7      forall the (T, uS, vS) of textureGather(T0)
 8          and (T, uS, vS) of textureGather(T1) do   /* 8 texels */
 9        if PF(T, uS, vS) not marked as processed then
            /* start of critical section */
10          if k < B then                 /* threshold budget */
11            PF(T, uS, vS) ← mark as processed
12            PC(T, uS, vS) ← filtering(T, uS, vS)    /* filtered color */
13            k ← k + 1
14          else                          /* progressive filtering */
15            PC(T, uS, vS) ← lookup(ID, lod, u, v)
16          end
            /* end of critical section */
17        end
18      end
19    end
20  end
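A sequential CPU rendition of Algorithm 1 may look as follows. The GPU version processes fragments in parallel and guards the flag update with a critical section; here, the names and the map-based cache are illustrative stand-ins for PC and PF.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <tuple>
#include <vector>

// Simplified G-buffer sample: texture identifier plus the texel it resolves to.
struct Fragment { std::uint32_t texID, level, x, y; };

using TexelKey =
    std::tuple<std::uint32_t, std::uint32_t, std::uint32_t, std::uint32_t>;

// Illustrative stand-ins for the color lookup PC and process-flag lookup PF.
struct FilterCache {
    std::map<TexelKey, double> color;  // PC: filtered colors
    std::map<TexelKey, bool> done;     // PF: processing flags
};

// Filters at most `budget` texels this frame; remaining texels fall back to an
// unfiltered lookup and are refined in later frames (progressive filtering).
// Returns the number of texels newly filtered, i.e., the counter k.
std::size_t filterPass(const std::vector<Fragment>& gbuffer, FilterCache& cache,
                       std::size_t budget,
                       const std::function<double(const Fragment&)>& filter,
                       const std::function<double(const Fragment&)>& fallback) {
    std::size_t k = 0;
    for (const Fragment& f : gbuffer) {
        if (f.texID == 0) continue;               // early out
        const TexelKey key{f.texID, f.level, f.x, f.y};
        if (cache.done[key]) continue;            // cached from an earlier frame
        if (k < budget) {
            cache.done[key] = true;               // mark as processed
            cache.color[key] = filter(f);         // expensive image filter
            ++k;
        } else {
            cache.color[key] = fallback(f);       // progressive fallback
        }
    }
    return k;
}
```

Note that fallback texels are not marked as processed, so they are revisited and refined in the next frame until the cache converges.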
Multitexturing and Context-based Filtering.

Figure 13: Extended G-buffer used for depth buffering and multitexturing: two layers of texture coordinates (uv, 2× float32 each), texture identifiers (T, 4× uint16), and depth (1× float32 encoded as uint32).
To enable filtering of multitextured 3D scenes, the G-buffer is enhanced as follows (Figure 13). First, each fragment synthesized by the rasterization is assigned two sets of texture coordinates together with multiple texture identifiers defined per texture channel. Filter configurations can then be defined per texture channel to enable a content-based filtering that is adapted to 3D model semantics. Second, fragments are buffered in depth and a sorting is performed in a post-process stage to enable order-independent transparency effects. For this, the G-buffer is extended to a stencil-routed A-buffer [ ] encoded as a 3D texture for GPGPU processing. Depth sorting and image blending of the A-buffer is performed during shading. Third, different filters and configurations can be defined per input texture, for which the virtual texture table is extended by correspondent destination buffers. The filter results are then blended according to view metrics (e.g., viewing distance) or pre-defined regions of interest using image masks for focus+context visualization (e.g., stylized foci [ ]). Examples that use these enhancements are presented in Section 6.

The number of texture coordinates and channels is not ultimately defined but should be limited to bound memory consumption. Because all processes are designed for generic application, the G-buffer can be extended easily by additional attributes.
Figure 14: Example of post-processing effects used in our system. Edge detection is decoupled into DoG filtering of texture and geometry information. The results are combined with screen-space ambient occlusion (SSAO), unsharp masking the depth buffer (UMDB), and a background texture to compose the final images.
4.4. Deferred Shading and Composition
Once the G-buffer is computed and filtering is performed, the results are used as input for texture sampling, which is performed independently from shading and lighting. Optionally, screen-space ambient occlusion [ ] and unsharp masking the depth buffer [ ] can be performed using normal and depth information of the G-buffer to further improve depth perception. These effect layers can be individually amplified, colorized, and blended by regular image compositing [ ]. Results that include DoG filtering are combined with an image-based edge-enhancement technique [ ] to include silhouette, border, and crease edges according to depth, normal, and object identifier information [ ]. In contrast to traditional DoG filtering in screen space, our method is able to decouple edges based on texture and geometry information. Figure 14 and the accompanying video demonstrate that this approach produces much more accurate filtering results with respect to linear perspective. In addition, it provides real-time frame rates and frame-to-frame coherence without requiring specialized methods such as texture advection [15].
5. User Interaction
Our framework gives full control over the different parameters defined per image filter, including kernel size, quantization intensity, sensitivity of edge detection, adaptive smoothing in a post-process stage, and weights of flow fields (see Figure 15 for an excerpt). In addition, users are able to define parameters to control the composition of the filtering results:
- texture channel semantics with filters and configurations defined per channel for content-based LoA texturing;
- view metrics and region masks together with transition parameters used for blending layered filtering effects;
- colors for enhanced geometry edges, screen-space ambient occlusion, and unsharp masking effects;
- multi-sampling and order-independent transparency parameters used for image composition.

Figure 15: Excerpt of the graphical user interface of our framework for the parameterization of image filters (left) and effect layers (right), here exemplified for the FDoG filter [18].
For the practical usage of our framework in real-world scenarios, in particular for non-expert users, however, adjusting these parameters can be cumbersome, and such cases thus call for more high-level and direct interaction interfaces. For effective visualization of 3D scenes, direct user interaction is a critical design aspect to visualize as much information as needed for focus and as little as required for context. Four major interface schemes have been identified for focused and contextual views [ ]: zooming, focus+context, overview+detail, and cue techniques. Using these schemes to parameterize LoA texturing in 3D virtual environments, however, remains a challenging task, because these environments are often inherently complex with respect to appearance and thematic information. Here, our major goal is to provide an interaction framework that seamlessly integrates the proposed method for LoA texturing, is extensible for custom interaction devices and techniques, and considers the following challenges:
- The filtering and rendering stages should be decoupled from concrete interaction interfaces, and be parameterized via high-level operations to facilitate an easy deployment of new interaction devices and techniques.
- Interactive frame rates should be maintained to provide a responsive system to the user.
- Mapping of direct interaction from 2D into 3D space should be handled, e.g., with respect to occlusions [64].
In the following, we define a generic workflow on how user
interaction can be mapped to definitions of focus and context
and their graphical representations via LoA texturing.
5.1. Focus+Context Definition and Interaction Techniques
Figure 16: Overview of the interaction interface and tools implemented in our framework for parameterizing a focus+context visualization.

A concrete challenge for the design of an interaction framework is to strive for consistency [ ] while having the user in control of parameterizing the LoA texturing. Here, a key observation is that no constraints regarding the input device or technique should be made, i.e., users should be allowed to use the best direct interaction method for parameterizing our framework for a given task and environment (e.g., desktop vs. mobile devices). Our main idea is to decouple the functional descriptions of focus and context from the concrete interaction device and filter configurations, e.g., so that mouse/keyboard, touch-based, natural language, or implicit gaze-based interfaces can be used equally to define regions of interest. Technically, the interaction interpretation is formulated as mapping the user-defined input to a high-level functional description for focus and context definition. Interaction modes are required for disambiguation and to avoid redundant mappings, but should be made as concise as possible (e.g., using quasi-modes [ ]). Exemplary mappings include object selections by textual lookup using natural language inputs or point-and-click metaphors. Similarly, circular regions of interest may be defined via pinch-to-zoom metaphors or sketching (e.g., for illustrative cutaway rendering [ ]). Technically, high-level descriptions can be mapped from parameter space into model space using logical collections of 3D scene data as input, enriched with descriptive information that is stored as attributes (e.g., encoded with CityGML for geospatial models [71]).
Dealing with 3D virtual environments, we distinguish between six types of focus definition. Figure 17 gives a conceptual overview of these types in a geospatial context:
- Object selection: Highlighting single objects or groups of objects that are of major interest to a user.
- 2D region of interest: Highlighting objects that are located close to, or within, a 2D region of interest.
- 3D region of interest: Spatial highlighting of objects or components with additional constraints in height.
- Logical selection: Selection of objects or components with respect to semantic data, such as feature type (e.g., street networks in a routing scenario).
- Thematic selection: Selection of objects or components with respect to thematic data, such as solar potential, and according to a range of interest.
- View-dependent selection: Highlighting image regions according to view-based metrics (e.g., view distance or view angle).

Figure 17: Focus definition types exemplified for virtual 3D city models: (a) object selection, (b) 2D region of interest / route, (c) 3D region of interest, (d) logical selection, (e) thematic selection (e.g., "high solar potential"), (f) view-dependent selection (e.g., "close to view").

Direct interaction for these types should trigger immediate visual feedback to symbolize a correspondent mode, for which we provide specialized shading effects. For instance, the boundaries of a circular region of interest are visualized using projected lines as visual cues.
Figure 16 gives an overview of the interaction interface implemented in our framework, which resembles some of the focus+context definition types shown in Figure 17. Besides techniques for orbital, zoom, and pan navigation, this interface (1) is context-aware according to a user-defined application (e.g., highlighting regions of interest), (2) allows users to select filter configurations from pre-defined presets, and (3) provides techniques for interactive focus+context definition. The latter includes regional definitions via the pinch-to-zoom metaphor, highlighting of objects or regions of interest via a search bar, value-range sliders for numerical constraints, and on-screen pointing techniques for direct object selection. In addition, multiple interaction techniques may be used equivalently for parameterization, e.g., using on-screen pointing or textual input to locate and highlight 3D scene objects.
Figure 18: Focus definitions (e.g., a circular RoI with center and radius, a route with control and mid points, an object identifier, and associated distance maps) are mapped to uniform variables or are buffered as projected distance maps, which are evaluated during shading.
5.2. Shader Uniform Mapping
In our framework, focus and context definitions are either stored using GPU uniform buffers or mapped to distance maps (Figure 18), whose parameters are evaluated during shading. In the first case, these attributes are compared with the information synthesized in the G-buffer (e.g., object identifiers, 3D world positions) for focus detection. In the second case, a Euclidean distance transform is performed to buffer line segments or 2D regions of interest. The synthesized distance maps are then projected onto the scene geometry and evaluated during shading. This approach enables image-based operations to be efficiently implemented, such as fragment-based thresholding of the Euclidean distance between objects (e.g., distance to a route, Figure 18).
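The fragment-based thresholding can be sketched as follows. This is a brute-force CPU stand-in; the actual system precomputes a distance map with a GPU distance transform and samples it during shading, and the names here are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Euclidean distance from point p to the segment ab (one route edge).
double segmentDistance(Vec2 p, Vec2 a, Vec2 b) {
    const double vx = b.x - a.x, vy = b.y - a.y;
    const double wx = p.x - a.x, wy = p.y - a.y;
    const double len2 = vx * vx + vy * vy;
    const double t = len2 > 0.0
        ? std::max(0.0, std::min(1.0, (wx * vx + wy * vy) / len2)) : 0.0;
    const double dx = p.x - (a.x + t * vx), dy = p.y - (a.y + t * vy);
    return std::sqrt(dx * dx + dy * dy);
}

// Fragment-based focus test: a fragment is in focus if its projected distance
// to the route polyline falls below a user-defined threshold.
bool inFocus(Vec2 fragment, const std::vector<Vec2>& route, double threshold) {
    for (std::size_t i = 0; i + 1 < route.size(); ++i)
        if (segmentDistance(fragment, route[i], route[i + 1]) <= threshold)
            return true;
    return false;
}
```

Buffering the per-pixel distances in a map once and reusing them every frame is what makes the image-based variant cheap compared with this per-fragment loop over all route segments.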
In most interaction modes, a basic functionality is to map 2D inputs to 3D attributes via raycasting (e.g., object selection via touch interfaces [ ]). This is typically performed using intersection tests with the 3D scene geometry, but is often too complex to be performed interactively. Instead, we use an image-based approach that uses the synthesized G-buffer information to query 3D scene attributes for the visible scene geometry ("picking"). Here, we observed that the synthesized world position, texture coordinates, and identifier information (i.e., objects, textures) are sufficient to query arbitrary attributes stored in a database (Figure 20). For instance, texture identifiers and coordinates directly map into texture space for a fragment-based information lookup, whereas object identifiers can be mapped to object-specific attributes.
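A minimal sketch of such G-buffer picking, with illustrative field names:

```cpp
#include <cstdint>
#include <vector>

// Attributes synthesized per pixel in the G-buffer (field names illustrative).
struct GBufferSample {
    std::uint32_t objectID;   // 0 = background
    std::uint32_t textureID;
    float u, v;               // texture coordinates
    float wx, wy, wz;         // 3D world position
};

// Image-based "picking": instead of ray/geometry intersection tests, the
// attributes already synthesized in the G-buffer are read back at the pointer
// position in O(1); the returned identifiers then key a database query for
// object- or texel-specific attributes.
struct GBuffer {
    std::uint32_t width = 0, height = 0;
    std::vector<GBufferSample> samples;  // row-major, width * height entries

    const GBufferSample& pick(std::uint32_t x, std::uint32_t y) const {
        return samples[y * width + x];
    }
};
```

The lookup cost is independent of scene complexity, which is why it stays interactive where geometric intersection tests do not.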
5.3. Importance Mask Synthesis and Shading
Figure 19: Example of a synthesized importance mask for a circular region of interest.

The shader uniforms are evaluated using fragment shaders. For each definition type, a normalized importance mask is synthesized that indicates whether a fragment should be shaded for focus or context (Figure 19). Blend functions are utilized for image composition [ ] and to enable smooth transitions between focus and context regions (Figure 21), but may also be configured for hard transitions, e.g., to avoid distorted color tones when using heterogeneous image filters. Finally, the importance masks are blended to enable multivariate effects (e.g., a route with a circular region of interest at the destination).
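For a circular region of interest, the normalized importance with a smooth focus-to-context transition might be computed per fragment as sketched below; this uses a smoothstep blend as one hypothetical instance of the configurable transition functions.

```cpp
#include <cmath>

// Clamped Hermite interpolation between edges e0 and e1 (as in GLSL smoothstep).
double smoothstep(double e0, double e1, double x) {
    const double t = std::fmin(1.0, std::fmax(0.0, (x - e0) / (e1 - e0)));
    return t * t * (3.0 - 2.0 * t);
}

// Normalized importance for a circular region of interest: 1 inside the
// radius, 0 beyond radius + falloff, with a smooth transition in-between.
// (dx, dy) is the fragment's offset from the ROI center.
double circularImportance(double dx, double dy, double radius, double falloff) {
    const double d = std::sqrt(dx * dx + dy * dy);
    return 1.0 - smoothstep(radius, radius + falloff, d);
}
```

Setting the falloff to zero recovers the hard transition mentioned above, while larger values widen the blend band between focus and context filtering.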
Figure 20: Texture coordinates, identifiers, and 3D world positions (x, y, z) synthesized in the G-buffer are used to query additional attributes (e.g., textures for thematic data) stored in a database.

Figure 21: Transition functions ζ(b, u0, u1) used to blend focus and context regions.
For view-dependent visualization, view or user-defined metrics are evaluated during shading. Here, we supply presets of image filter configurations to enable an automated LoA texturing setup. For instance, a blueprint rendering may be automatically selected to represent construction sites in an urban planning scenario. Post-processing of the synthesized importance masks with a screen-space distance transform may also be used for highlighting (e.g., glow effects [ ]). Results using these interactive techniques for LoA texturing are presented in the next section.
6. Results and Discussion
This section presents implementation details, applications, evaluations, and limitations of our work.
6.1. Implementation
We have implemented our framework using C++, OpenGL, and GLSL, and the image filters on the GPU with CUDA. OpenSceneGraph was used as the rendering engine to handle 3D data sets. For global image processing, CUDA's fast Fourier transform (cuFFT) was used to implement the L0 gradient minimization [ ], the Deriche method [ ] to perform Gaussian smoothing in constant time per input pixel, and the recursive methods proposed by Nehab et al. for box filtering. All image filters operate in RGB or CIE-Lab color space. Our implementation uses the texture and surface objects introduced in CUDA 5 for virtual texturing, along with the OpenGL interop functionality for random read and write access. We assume that similar results should be achievable using OpenCL or compute shaders.

For bindless texturing, we used the OpenGL 4.4 extension GL_ARB_bindless_texture. The G-buffer is packed into RGBA or RGB texture channels and encoded as a stencil-routed A-buffer [ ] for order-independent image blending. The parallel-banding algorithm [ ] was used to perform workload-efficient distance transforms and compute importance masks for regions of interest. All results were rendered on an Intel CPU at 3.06 GHz with 6 GByte RAM and an NVidia GPU with 4 GByte VRAM. In addition, user interaction was tested with a 23.6″ Lenovo® L2461xwa multitouch monitor.
Figure 22: Edge-preserving denoising of texture-encoded thematic data via L0 gradient minimization (λ = 0.02, κ = 2), projected onto surface geometry.
6.2. Applications
The individual applications of the respective image filters—e.g., HDR tone mapping, detail exaggeration, edge adjustment, or colorization—can be maintained because they are merely transferred to the texture domain. For instance, Figure 22 shows a result of our method using L0 gradient minimization [ ] for perspective-coherent denoising of texture-encoded thematic data and overall contrast enhancement. Here, solar potential was computed from radiation summed over a whole year.
Focus+Context Visualization. Highlighting regions of interest while removing detail in context regions is a major goal of effective information transfer and of directing a viewer's gaze to important or prioritized information [ ]. Figure 25 shows a result of our framework implementing a stylized focus pull effect [ ] based on the view distance. Emphasis is drawn to the respective image regions using the FDoG filter coupled with image-based enhancements of geometry edges for context regions. Our method is able to preserve the overall structure of objects in the background and in the context regions (e.g., windows on the building façades) to emphasize spatial relationships. Because the image composition is performed in the deferred shading stage and multiple layer effects may be defined, our approach is generic with respect to filter combinations and extensible for further view metrics (e.g., region masks or view angles). Here, we coupled the pinch-to-zoom metaphor with our interaction framework to directly change the graphical representation in regions of interest. Based on a multitouch interface, the user starts pointing with two fingers and spans—via the zoom metaphor—the range of interest in projected world space. A user-defined preset for image filtering then automatically adjusts the LoA texturing in the focus region, providing optional smooth transitions controlled via a blend slider. Alternatively, a natural language interface may be used to locate and highlight regions of interest via a search bar. Figure 24 exemplifies a semantic lens that automatically blends a detailed 3D model for focus with a map for context. Here, the apparent greyscale algorithm by Smith et al. [84] was used for filtering the context regions.
Thematic Visualization. The multitexturing support was used to enhance depth cues important for visualizing color-encoded thematic data in virtual 3D scenes. Figure 26 shows a parameterization in which the FDoG filter was used to detect edges in the color textures and blend the result with the thematic information. Compared to an edge detection that only processes geometry information, our approach also enhances structural information that is not explicitly modeled as geometry, but is captured by aerial or terrestrial imaging. This way, the correlation between thematic data and surface structures is much more plausible. We also explored approaches to directly parameterize a thematic visualization for information highlighting. Here, the implemented slider interface for value-range thresholding was used to parameterize the range of interest for thematic data. Figure 27 shows a result where areas with a high solar potential are visualized with detailed graphics, and areas with a low solar potential are automatically filtered, providing smooth transitions in-between.

Figure 23: Gaussian smoothing for semantic depth of field. We used our LoA texturing method for perspective-coherent Gaussian smoothing of color maps (σ = 5) prior to additional smoothing in screen space (σ = 5). Compared to the traditional approach, which only filters in screen space (σ = 5), our method preserves scene structures in context regions better at a similar LoA.

Figure 24: Semantic highlighting (Grand Canyon National Park) using a 3D terrain model for focus and a 2D map for context. The map is filtered using the apparent greyscale algorithm by Smith et al. [84].
Semantic Depth of Field. Depth of field is known to direct the pre-attentive cognition of prioritized information. Using our method, we have implemented a variant of the semantic depth of field (SDoF) effect [ ], in which scene objects are selectively highlighted via image masks to direct a Gaussian smoothing in screen space. First, diffuse maps are filtered in texture space, i.e., to control the LoA of textured surfaces. Afterwards, regular Gaussian smoothing is performed in screen space to blur geometry edges. As shown in Figure 23, this "dual" filtering approach enables a clearer visualization of structures induced by geometry edges. By contrast, the regular screen-space approach requires wider filter kernels to achieve a similar LoA effect, but at the cost of a considerably higher degradation of scene structures. Our search interface was used to automatically parameterize the SDoF effect for object highlighting.

Figure 25: Example of a stylized focus pull where the distance-based focal plane is gradually directed from the foreground to the background. A watercolor and DoG filter were used to draw emphasis to regions of interest and preserve depth cues in context regions (e.g., building windows).

Figure 26: An example of thematic data visualization, where our framework was used to combine geometry edges and DoG-filtered color textures and, hence, significantly improve the visualization of color-encoded thematic data (here: solar potential).

Figure 27: Interactive selection of "high solar potential" data using our framework to direct the filtering process to context regions. The user is able to control the focus and context definition via a value-range slider together with a blend slider to provide smooth transitions between both definitions.

Figure 28: Cartography-oriented design of 3D geospatial information visualization for a virtual 3D city model (Nuremberg, Germany). Building façades (encoded with color textures) are filtered and colorized by their dominant colors per component (exterior walls vs. roofs), and blended with feature contours.

Figure 29: Smoothed bump mapping using a watercolor filter and domain transform (recursive mode, σr = 3) for color and normal maps.
Cartography-oriented Design. Graphical core variables in the cartographic design of 3D building and site models often involve reduced color palettes, e.g., distinct colors for roofs and façades, combined with contour lines and strokes indicating features such as windows (Figure 30). We used our method to extract structural elements from photorealistic color textures, together with an algorithm that automatically extracts dominant colors from regions with constant tone, which are weighted and aggregated via entropy-based metrics. Figure 28 shows a result of our approach applied to a virtual 3D city model. Compared to typical photorealistic depictions (e.g., as provided by 3D geovirtual environments such as Google Earth), our approach improves feature contrasts, provides a decluttered representation with inherent shadow removal, and expresses uncertainty. We believe that this kind of visualization can be of major interest for routing applications, where often only a few selected pieces of information are required for orientation.

Figure 30: Design of buildings and sites in a historic, hand-drawn map.
Geometric Detail Removal. Bump or displacement mapping is an essential process for enriching shading and lighting effects by geometric detail. We used our method to perform edge-preserving smoothing of normal maps to coarsen bump mapping with respect to the linear perspective. Figure 29 shows the results of a domain transform [ ] applied to normal maps, where mipmapping enables a smooth transition between the different levels of structural abstraction.

Figure 31: Stylization of Phong shading, ambient occlusion, and color maps. An oil paint filter was used for the normal and ambient occlusion maps, together with the abstraction filter by Winnemöller et al. [22] for color maps.
Stylized Shading. We experimented with filtering normal and ambient occlusion maps to achieve stylized shading and lighting effects. The rich parameterization options of our framework give artists creative control over this process. For instance, Figure 31 shows how an oil paint filter, based on a smoothed structure tensor, was used to apply Phong shading and ambient occlusion in a sketchy style. Similar directions were proposed by DeCoro et al. for stylized shadows and by Barla et al. for toon shading with general LoA; however, without the capability for flow-based image abstraction. By contrast, our approach provides stylized variants of texture maps that include salient structures, which are blended by conventional shading. Hence, it is especially useful for interactively stylizing photorealistic texture maps (e.g., captured by terrestrial photography).
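The smoothed-structure-tensor step that drives such flow-based filters can be sketched as follows. The box smoothing (a stand-in for the Gaussian used in practice), the central differences, and the function name `smoothed_flow_field` are assumptions for illustration only.

```python
import math

def smoothed_flow_field(gray, radius=1):
    """Per-pixel flow directions from a smoothed structure tensor (sketch).

    gray: 2D list of luminance values. The structure tensor entries
    (gx*gx, gx*gy, gy*gy) from central differences are box-smoothed;
    the eigenvector of the smaller eigenvalue is the local edge
    tangent that flow-based filters (e.g., oil paint) paint along.
    """
    h, w = len(gray), len(gray[0])

    def px(y, x):  # clamped pixel access
        return gray[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    a = [[0.0] * w for _ in range(h)]
    b = [[0.0] * w for _ in range(h)]
    c = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = (px(y, x + 1) - px(y, x - 1)) / 2.0
            gy = (px(y + 1, x) - px(y - 1, x)) / 2.0
            a[y][x], b[y][x], c[y][x] = gx * gx, gx * gy, gy * gy

    flow = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sa = sb = sc = n = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    sa += a[yy][xx]
                    sb += b[yy][xx]
                    sc += c[yy][xx]
                    n += 1
            sa, sb, sc = sa / n, sb / n, sc / n
            # minor eigenvector of [[sa, sb], [sb, sc]] = local edge tangent
            lam = 0.5 * (sa + sc) - math.sqrt(0.25 * (sa - sc) ** 2 + sb * sb)
            vx, vy = lam - sc, sb
            if abs(vx) + abs(vy) < 1e-12:  # axis-aligned tensor (sb == 0)
                vx, vy = sb, lam - sa
            norm = math.hypot(vx, vy)
            flow[y][x] = (vx / norm, vy / norm) if norm > 1e-12 else (1.0, 0.0)
    return flow
```

On a horizontal luminance ramp, the resulting flow points vertically, i.e., along the edge tangent rather than along the gradient.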
Blueprint Rendering. Finally, the capability to layer filtering in depth was demonstrated by a novel blueprint rendering approach. In contrast to image-based techniques that solely operate on G-buffer information [ ], we also used a DoG filter for perspective-coherent abstraction of diffuse maps that preserves texture gradients (Figure 32, left). The filtering results were colorized and blended by order-independent transparency. Using our interaction framework, interiors of 3D models can be made interactively explorable via the magic lens metaphor [ ]. Here, a user is able to span regions of interest via direct touch interaction and shift the lens to a desired location. The focus area may then be visualized with our blueprint rendering approach (Figure 32, right). The accompanying video demonstrates an interactive parameterization for a multitouch display. In addition, it demonstrates frame-to-frame coherence for all of our results. Here, the coherence primarily stems from processing the respective mipmap levels according to the perspective projection, prior to trilinearly filtering the results on the GPU for smooth interpolation (Equation 2).
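The per-pixel resolve of the filtered depth layers can be sketched as a straightforward back-to-front "over" composite. The fragment tuple layout and the black background are assumptions for illustration; in our pipeline the fragments come from the filtered and colorized depth layers.

```python
def composite_depth_layers(fragments):
    """Resolve the depth layers collected for one pixel (sketch).

    fragments: list of (depth, (r, g, b), alpha) tuples, e.g., one per
    depth layer after each layer's color map has been filtered and
    colorized. Fragments are sorted by depth and blended back-to-front
    with the standard 'over' operator against a black background.
    """
    rgb = (0.0, 0.0, 0.0)  # background color
    for depth, color, alpha in sorted(fragments, key=lambda f: f[0],
                                      reverse=True):
        rgb = tuple(alpha * c + (1 - alpha) * p for c, p in zip(color, rgb))
    return rgb
```

An opaque far fragment is fully overwritten only where nearer fragments are opaque; semi-transparent near fragments let the colorized structure of deeper layers shine through, which produces the layered blueprint look.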
Figure 32: Blueprint rendering based on geometry edges and FDoG-filtered color maps (panels: level of abstraction, magic lens, importance mask; α = 0.0 and α = 0.6). The filtering is performed on each depth layer; the results are then sorted in depth and blended in a post-process stage. Left: LoA capabilities of our approach, where the close-ups represent the first (visible) depth layer. Right: a magic lens interactively parameterized via a touch interface to direct a blueprint rendering to a region of interest, highlighting interior structures.
Figure 33: Visual clutter analysis (top; photorealistic textures vs. proposed method) for Figure 28 and visual saliency analysis (bottom; original, focus in front, focus in back) for Figure 25, each compared to the respective photorealistic version, using the algorithms by Rosenholtz et al. [86] and Harel et al. [87].
6.3. Visual Clutter and Saliency Evaluation
To demonstrate the benefits of LoA texturing for an effective information transfer, we compared visual clutter and saliency maps of our outputs with the original, high-detail renderings. Using the feature congestion measure proposed by Rosenholtz et al. [86], Figure 33 demonstrates how our cartography-oriented visualization approach reduces visual clutter, thus being able to reliably draw attention to important features of the virtual 3D scene. In addition, Figure 33 demonstrates how our stylized focus pull effect yields high saliency within the respective focus regions. Similar results have been qualitatively verified using an eye tracker [3, 4].
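The intuition behind such a clutter comparison can be illustrated with a deliberately crude proxy. The following sketch is not the feature congestion measure of Rosenholtz et al. [86]; it merely computes the mean gradient magnitude of a luminance image, a quantity that drops as abstraction removes texture detail.

```python
def feature_congestion_proxy(gray):
    """Crude clutter proxy: mean gradient magnitude of a luminance image.

    NOT the measure of Rosenholtz et al.; used here only to illustrate
    that abstraction (removing fine detail) lowers such a statistic.
    gray: 2D list of luminance values.
    """
    h, w = len(gray), len(gray[0])
    total, n = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = gray[y][x + 1] - gray[y][x]   # horizontal difference
            gy = gray[y + 1][x] - gray[y][x]   # vertical difference
            total += (gx * gx + gy * gy) ** 0.5
            n += 1
    return total / n
```

A flat (fully abstracted) region scores zero, while a high-frequency pattern scores high, mirroring the direction of the clutter reduction reported above.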
6.4. Performance Evaluation
Because our framework was designed for general-purpose application, its performance greatly depends on which image filters are used (we identified the filters as the bottleneck) and how many effect layers are defined. We rendered the virtual 3D city scene depicted in Figure 25 with the multi-scale anisotropic Kuwahara filter ( = 4) [ ] and the proposed SDoF variant, using the system setup described in Section 6.1. The scene is composed of 540 unique 3D objects with 15 texture atlases (each 1024 × 1024 pixels). We defined a fly-through sequence that lasts 15 seconds, with filter caches cleared prior to each iteration. The results provided in Table 1 show that the performance scales with the screen resolution, reaching interactive frame rates in HD resolutions and real-time performance when using our caching concept. Notice how per-fragment filtering without caching is almost on par with conventional post-process filtering in HD screen resolutions; thus it could also be used for dynamic textures. The timings include all filtering and rendering cycles with potential memory transfers. The memory consumption is proportional to the screen resolution with respect to the G-buffer, and proportional to the number of filter layers with respect to cached texture targets. We believe that the consumption can be significantly decreased by using sparse textures (OpenGL 4.3).

Table 1: Performance evaluation of our framework using the multi-scale anisotropic Kuwahara filter (AKF) [ ] and the SDoF configuration used to produce Figure 23 (average and minimum frames-per-second). Setups: (1) filtering entire mipmap levels with caching enabled, (2) per-fragment filtering with caching disabled and (3) enabled, (4) filtering in a post-process stage.

       Screen Res.   Setup 1     Setup 2     Setup 3     Setup 4
AKF    800 × 600     75.6 (12)   12.9 (9)    87.9 (44)   17.2 (16)
       1280 × 720    51.9 (12)   11.5 (10)   67.3 (26)   14.7 (14)
       1920 × 1080   31.5 (11)   11.3 (10)   37.3 (14)   11.8 (11)
SDoF   800 × 600     53.8 (23)   12.3 (8)    66.3 (54)   29.4 (28)
       1280 × 720    44.1 (20)   11.1 (8)    52.9 (41)   19.6 (18)
       1920 × 1080   30.2 (13)   8.6 (7)     33.5 (24)   10.0 (9)
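The caching concept that makes Setups 1 and 3 fast can be sketched as a memoization keyed by texture, filter configuration, and mipmap level. The class name, key layout, and invalidation policy below are illustrative assumptions, not the implementation of our GPU-side cache.

```python
class FilterCache:
    """Cache filtered mipmap levels so each filter runs once per level.

    Keys combine a texture id, a hashable filter key (name plus
    parameters), and the mipmap level, mirroring the idea that
    filtered results are reused across frames until the texture or
    the filter parameters change.
    """

    def __init__(self):
        self._store = {}
        self.misses = 0  # number of times a filter actually executed

    def get(self, tex_id, filter_key, level, compute):
        key = (tex_id, filter_key, level)
        if key not in self._store:
            self.misses += 1
            self._store[key] = compute()  # run the (expensive) filter
        return self._store[key]

    def invalidate(self, tex_id):
        """Drop cached levels of one texture, e.g., after a parameter change."""
        self._store = {k: v for k, v in self._store.items()
                       if k[0] != tex_id}
```

On a cache hit the expensive filter is skipped entirely; clearing the cache (as done before each benchmark iteration above) forces every level to be refiltered once.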
6.5. Limitations
In contrast to image-based artistic stylization, our methods are not able to inherently control the LoA of the scene geometry. However, this enables a more controlled geometric abstraction by specialized techniques. Alternatively, our methods may be combined with filtering in a post-process stage to selectively filter across object boundaries. In addition, UV-mapped textures are required as input; otherwise, a spatial filtering (e.g., for the SDoF variant) is not practical. There is also room for improvement in our system's performance. As a current limitation, interactive filtering cannot be maintained when solving sparse linear systems (e.g., for image decomposition), because local image filtering cannot be performed. This also applies to iterative approaches that resist parallelization. Outsourcing computationally expensive filters to multi-GPU systems, which is supported by our progressive and decoupled filtering, could alleviate this problem, but it remains subject to future work.
7. Conclusions and Future Work
In this paper, we have presented interactive image filtering for LoA texturing of virtual 3D scenes. Our decoupled deferred texturing concept enables texture maps to be processed coherently with respect to linear perspective by arbitrary image filters, preserving depth cues. We fashioned a caching concept and optimization schemes based on per-fragment and progressive filtering that enable interactive or real-time frame rates. In addition, we proposed a framework that is extensible by custom image filters and interaction techniques to parameterize the filtering process via mouse, touch, or natural language. Several results demonstrate its manifold applications in the field of visualization.
We see multiple directions for future work. First, sketch-based interfaces could be coupled with our interaction framework to provide an authoring tool for aesthetic renditions of virtual 3D scenes. Second, we plan to conduct a qualitative user study, complementing our quantitative evaluation, to confirm that our approach is able to significantly improve visualization tasks. Third, our applications demonstrate that decoupled deferred texturing and shading is able to effectively map user interaction to focus and context definitions. However, the integration of new interaction techniques still requires effort to map the user input to GPU resources (e.g., uniform buffers). Here, an extensible rendering backend remains to be explored.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable comments. The Berlin 3D city model has been provided by Berlin Partner for Business and Technology and the Business Location Center, Berlin. The Nuremberg 3D city model is used by courtesy of the City of Nuremberg, Bavarian Agency for Surveying and Geoinformation (2013). We would also like to thank our technology partner 3D Content Logistics, Marko Dabrovic and Frank Meinl from Crytek for the Sponza scene used in Figure 12 and Figure 14, and Keenan Crane for Jerry the Ogre used in Figure 6, Figure 11 and Figure 31. This work was funded by the Federal Ministry of Education and Research (BMBF), Germany, within the InnoProfile Transfer research group "4DnD-Vis".
References

Haeberli, P., Segal, M.. Texture Mapping as a Fundamental Drawing Primitive. In: Eurographics Workshop on Rendering. The Eurographics Association; 1993, p. 259–266.
Ware, C.. Information Visualization: Perception for Design. San Francisco: Morgan Kaufmann Publishers Inc.; 2004.
Santella, A., DeCarlo, D.. Visual Interest and NPR: an Evaluation and Manifesto. In: Proc. NPAR. ACM; 2004, p. 71–78.
Cole, F., DeCarlo, D., Finkelstein, A., Kin, K., Morley, K., Santella, A.. Directing Gaze in 3D Models with Stylized Focus. In: Proc. EGSR. The Eurographics Association; 2006, p. 377–387.
Winnemöller, H., Feng, D., Gooch, B., Suzuki, S.. Using NPR to Evaluate Perceptual Shape Cues in Dynamic Environments. In: Proc. NPAR. ACM; 2007, p. 85–92. doi:10.1145/1274871.1274885.
Redmond, N., Dingliana, J.. Investigating the Effect of Real-time Stylisation Techniques on User Task Performance. In: Proc. APGV. ACM; 2009, p. 121–124. doi:10.1145/1620993.1621017.
Tomasi, C., Manduchi, R.. Bilateral Filtering for Gray and Color Images. In: Proc. ICCV. IEEE; 1998, p. 839–846.
Gooch, B., Reinhard, E., Gooch, A.. Human Facial Illustrations: Creation and Psychophysical Evaluation. ACM Trans Graph 2004;23(1):27–44.
Kyprianidis, J.E., Collomosse, J., Wang, T., Isenberg, T.. State of the Art: A Taxonomy of Artistic Stylization Techniques for Images and Video. IEEE Trans Vis Comput Graphics 2013;19(5):866–885.
Pfautz, J.D.. Depth Perception in Computer Graphics. Ph.D. thesis;
University of Cambridge; 2000.
Goldstein, E.B.. Sensation and Perception. Wadsworth Publishing Com-
pany; 2010.
Semmo, A., Trapp, M., Kyprianidis, J.E., Döllner, J.. Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions. Comput Graph Forum 2012;31(3):885–894.
Wanger, L., Ferwerda, J., Greenberg, D.. Perceiving Spatial Relationships in Computer-Generated Images. IEEE Comput Graph Appl 1992;12(3):44–58. doi:10.1109/38.135913.
Surdick, R.T., Davis, E.T., King, R.A., Corso, G.M., Shapiro, A., Hodges, L., et al. Relevant Cues for the Visual Perception of Depth: Is Where You See it Where it is? In: Proc. Hum. Fact. Ergon. Soc. Annu. Meet.; vol. 38. SAGE Publications; 1994, p. 1305–1309.
Bénard, P., Bousseau, A., Thollot, J.. State-of-the-Art Report on Temporal Coherence for Stylized Animations. Comput Graph Forum 2011;30(8):2367–2386. doi:10.1111/j.1467-8659.2011.02075.x.
Gooch, A.A., Long, J., Ji, L., Estey, A., Gooch, B.S.. Viewing Progress
in Non-photorealistic Rendering through Heinlein’s Lens. In: Proc. NPAR.
ACM; 2010, p. 165–171. doi:10.1145/1809939.1809959.
Semmo, A., Döllner, J.. Image Filtering for Interactive Level-of-Abstraction Visualization of 3D Scenes. In: Proc. CAe. ACM; 2014, p. 5–14. doi:10.1145/2630099.2630101.
Kyprianidis, J.E., Döllner, J.. Image Abstraction by Structure Adaptive Filtering. In: Proc. EG UK TPCG. The Eurographics Association; 2008, p. 51–58.
Williams, L.. Pyramidal Parametrics. ACM SIGGRAPH Comput Graph
1983;17(3):1–11. doi:10.1145/800059.801126.
Donnelly, W., Lauritzen, A.. Variance Shadow Maps. In: Proc. ACM
I3D. ACM; 2006, p. 161–165. doi:10.1145/1111411.1111440.
Kyprianidis, J.E., Kang, H.. Image and Video Abstraction by Coherence-Enhancing Filtering. Comput Graph Forum 2011;30(2):593–602.
Winnemöller, H., Olsen, S.C., Gooch, B.. Real-Time Video Abstraction. ACM Trans Graph 2006;25(3):1221–1226.
Xu, L., Lu, C., Xu, Y., Jia, J.. Image Smoothing via L0 Gradient Minimization. ACM Trans Graph 2011;30(6):174:1–174:12.
Weickert, J.. Anisotropic Diffusion in Image Processing. Teubner Stuttgart; 1998.
Kang, H., Lee, S., Chui, C.K.. Flow-Based Image Abstraction. IEEE Trans Vis Comput Graphics 2009;15(1):62–76.
Kyprianidis, J.E.. Image and Video Abstraction by Multi-scale Anisotropic Kuwahara Filtering. In: Proc. NPAR. ACM; 2011, p. 55–64. doi:10.1145/2024676.2024686.
Comaniciu, D., Meer, P.. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Trans Pattern Anal Mach Intell 2002;24:603–619. doi:10.1109/34.1000236.
DeCarlo, D., Santella, A.. Stylization and Abstraction of Photographs. ACM Trans Graph 2002;21(3):769–776.
Bousseau, A., Kaplan, M., Thollot, J., Sillion, F.X.. Interactive Wa-
tercolor Rendering with Temporal Coherence and Abstraction. In: Proc.
NPAR. ACM; 2006, p. 141–149. doi:10.1145/1124728.1124751.
Criminisi, A., Sharp, T., Rother, C., Pérez, P.. Geodesic Image and Video Editing. ACM Trans Graph 2010;29(5):134:1–134:15.
Mould, D.. Texture-preserving Abstraction. In: Proc. NPAR. The Eurographics Association; 2012, p. 75–82.
Kang, H., Lee, S., Chui, C.K.. Coherent Line Drawing. In: Proc. NPAR. ACM; 2007, p. 43–50. doi:10.1145/1274871.1274878.
Winnemöller, H., Kyprianidis, J.E., Olsen, S.C.. XDoG: An eXtended difference-of-Gaussians compendium including advanced image stylization. Computers & Graphics 2012;36(6):740–753.
Nienhaus, M., Döllner, J.. Edge-Enhancement – An Algorithm for Real-Time Non-Photorealistic Rendering. Journal of WSCG 2003;11(2):346–.
Farbman, Z., Fattal, R., Lischinski, D., Szeliski, R.. Edge-Preserving
Decompositions for Multi-Scale Tone and Detail Manipulation. ACM
Trans Graph 2008;27(3):67:1–67:10. doi:10.1145/1360612.1360666.
Subr, K., Soler, C., Durand, F.. Edge-preserving Multiscale Im-
age Decomposition based on Local Extrema. ACM Trans Graph
2009;28(5):147:1–147:9. doi:10.1145/1661412.1618493.
Kass, M., Solomon, J.. Smoothed Local Histogram Filters. ACM Trans
Graph 2010;29(4):100:1–100:10. doi:10.1145/1833351.1778837.
Gastal, E.S.L., Oliveira, M.M.. Domain Transform for Edge-Aware
Image and Video Processing. ACM Trans Graph 2011;30(4):69:1–69:12.
He, K., Sun, J., Tang, X.. Guided Image Filtering. IEEE Trans Pattern Anal Mach Intell 2013;35(6):1397–1409.
Karacan, L., Erdem, E., Erdem, A.. Structure-preserving Image Smooth-
ing via Region Covariances. ACM Trans Graph 2013;32(6):176:1–176:11.
Cho, H., Lee, H., Kang, H., Lee, S.. Bilateral Texture Filtering. ACM
Trans Graph 2014;33(4):128:1–8. doi:10.1145/2601097.2601188.
Klein, A.W., Li, W., Kazhdan, M.M., Corrêa, W.T., Finkelstein, A., Funkhouser, T.A.. Non-photorealistic Virtual Environments. In: Proc. ACM SIGGRAPH. ACM; 2000, p. 527–534.
Praun, E., Hoppe, H., Webb, M., Finkelstein, A.. Real-Time Hatching. In: Proc. ACM SIGGRAPH. ACM; 2001, p. 581–586.
Bénard, P., Bousseau, A., Thollot, J.. Dynamic Solid Textures for Real-time Coherent Stylization. In: Proc. ACM I3D. ACM; 2009, p. 121–127.
Saito, T., Takahashi, T.. Comprehensible Rendering of 3-D Shapes. In: Proc. ACM SIGGRAPH. ACM; 1990, p. 197–206.
Döllner, J., Kyprianidis, J.E.. Approaches to Image Abstraction for Photorealistic Depictions of Virtual 3D Models. In: Cartography in Central and Eastern Europe. Springer; 2010, p. 263–277. doi:10.1007/978-3-642-03294-3_17.
Redmond, N., Dingliana, J.. Adaptive Abstraction of 3D Scenes in
Real-Time. In: Eurographics Short Papers. The Eurographics Association;
2007, p. 77–80.
Magdics, M., Sauvaget, C., García, R.J., Sbert, M.. Post-processing NPR Effects for Video Games. In: Proc. ACM VRCAI. ACM; 2013, p. 147–156. doi:10.1145/2534329.2534348.
Gibson, J.J.. The Ecological Approach to Visual Perception. Routledge;
Howard, I.P., Rogers, B.J.. Perceiving in Depth, Volume 3: Other Mechanisms of Depth Perception. 29; Oxford University Press; 2012.
Engel, J., Semmo, A., Trapp, M., Döllner, J.. Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments. In: Proc. Vision, Modeling & Visualization. The Eurographics Association; 2013, p. 25–32.
Lake, A., Marshall, C., Harris, M., Blackstein, M.. Stylized Rendering
Techniques for Scalable Real-time 3D Animation. In: Proc. NPAR. ACM;
2000, p. 13–20. doi:10.1145/340916.340918.
Barla, P., Thollot, J., Markosian, L.. X-toon: An Extended Toon Shader. In: Proc. NPAR. ACM; 2006, p. 127–132.
Furnas, G.W.. Generalized Fisheye Views. In: Proc. CHI. ACM; 1986, p.
16–23. doi:10.1145/22627.22342.
Shneiderman, B.. The Eyes Have it: A Task by Data Type Taxonomy
for Information Visualizations. In: Proc. IEEE Symposium on Visual
Languages. IEEE; 1996, p. 336–343. doi:10.1109/VL.1996.545307.
Cong, L., Tong, R., Dong, J.. Selective Image Abstraction. Vis Comput
2011;27(3):187–198. doi:10.1007/s00371-010- 0522-2.
Kosara, R., Miksch, S., Hauser, H.. Semantic Depth of Field. In: Proc. IEEE InfoVis. IEEE; 2001, p. 97–104.
Trapp, M., Beesk, C., Pasewaldt, S., Döllner, J.. Interactive Rendering Techniques for Highlighting in 3D Geovirtual Environments. In: Proc. 3D GeoInfo Conference. Springer; 2010. doi:10.1007/978-3-642-12670-3_12.
Jankowski, J., Hachet, M.. Advances in Interaction with 3D Environments. Comput Graph Forum 2014; in print. doi:10.1111/cgf.12466.
Tominski, C., Gladisch, S., Kister, U., Dachselt, R., Schumann, H.. A Survey on Interactive Lenses in Visualization. In: Proc. EuroVis - STARs. The Eurographics Association; 2014, p. 43–62.
Lee, B., Isenberg, P., Riche, N., Carpendale, S.. Beyond Mouse and Key-
board: Expanding Design Considerations for Information Visualization
Interactions. IEEE Trans Vis Comput Graphics 2012;18(12):2689–2698.
Robles-De-La-Torre, G.. The Importance of the Sense of Touch in
Virtual and Real Environments. IEEE MultiMedia 2006;13(3):24–30.
Knoedel, S., Hachet, M.. Multi-touch RST in 2D and 3D spaces: Studying
the impact of directness on user performance. In: Proc. IEEE 3DUI. IEEE;
2011, p. 75–78. doi:10.1109/3DUI.2011.5759220.
Isenberg, P., Isenberg, T., Hesselmann, T., Lee, B., von Zadow, U., Tang,
A.. Data Visualization on Interactive Surfaces: A Research Agenda. IEEE
Comput Graph Appl 2013;33(2):16–24. doi:10.1109/MCG.2013.24.
Keefe, D., Isenberg, T.. Reimagining the Scientific Visualization Interaction Paradigm. Computer 2013;46(5):51–57.
Frisken, S.F., Perry, R.N., Rockwood, A.P., Jones, T.R.. Adaptively
Sampled Distance Fields: A General Representation of Shape for Com-
puter Graphics. In: Proc. ACM SIGGRAPH. ACM; 2000, p. 249–254.
Bier, E.A., Stone, M.C., Pier, K., Buxton, W., DeRose, T.D.. Toolglass
and Magic Lenses: The See-through Interface. In: Proc. ACM SIGGRAPH.
ACM; 1993, p. 73–80. doi:10.1145/166117.166126.
Viega, J., Conway, M.J., Williams, G., Pausch, R.. 3D Magic Lenses.
In: ACM UIST. ACM; 1996, p. 51–58. doi:10.1145/237091.237098.
Neumann, P., Isenberg, T., Carpendale, S.. NPR Lenses: Interactive
Tools for Non-photorealistic Line Drawings. In: Proc. Smart Graphics.
Springer; 2007, p. 10–22. doi:10.1007/978-3- 540-73214-3_2.
Trapp, M., Glander, T., Buchholz, H., Döllner, J.. 3D Generalization Lenses for Interactive Focus+Context Visualization of Virtual City Models. In: Proc. IEEE IV. IEEE; 2008, p. 356–361.
Kolbe, T.H.. Representing and Exchanging 3D City Models with
CityGML. In: Proc. Int. Workshop on 3D Geo-Information. Springer;
2009, p. 15–31. doi:10.1007/978-3- 540-87395-2_2.
Myers, K., Bavoil, L.. Stencil Routed A-Buer. In: ACM SIGGRAPH
Sketches. ACM; 2007, p. 21. doi:10.1145/1278780.1278806.
Nienhaus, M., Döllner, J.. Blueprints: Illustrating Architecture and Technical Parts Using Hardware-accelerated Non-photorealistic Rendering. In: Proc. Graphics Interface. AK Peters; 2004, p. 49–56.
Shanmugam, P., Arikan, O.. Hardware Accelerated Ambient Occlusion Techniques on GPUs. In: Proc. ACM I3D. ACM; 2007, p. 73–80.
Luft, T., Colditz, C., Deussen, O.. Image Enhancement by Unsharp
Masking the Depth Buer. ACM Trans Graph 2006;25(3):1206–1213.
Porter, T., Duff, T.. Compositing Digital Images. SIGGRAPH Comput Graph 1984;18(3):253–259. doi:10.1145/964965.808606.
Cockburn, A., Karlson, A., Bederson, B.B.. A Review of
Overview+Detail, Zooming, and Focus+Context Interfaces. ACM Comput
Surv 2009;41(1):2:1–2:31. doi:10.1145/1456650.1456652.
Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S.. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Pearson;
Raskin, J.. The Humane Interface: New Directions for Designing Interac-
tive Systems. Addison-Wesley Professional; 2000.
Knoedel, S., Hachet, M., Guitton, P.. Interactive Generation and Modification of Cutaway Illustrations for Polygonal Models. In: Smart Graphics. Springer Berlin Heidelberg; 2009, p. 140–151. doi:10.1007/978-3-642-02115-2_12.
Deriche, R.. Recursively Implementing the Gaussian and its Derivatives.
In: Proc. ICIP. IEEE; 1993, p. 263–267.
Nehab, D., Maximo, A., Lima, R.S., Hoppe, H.. GPU-Efficient Recursive Filtering and Summed-Area Tables. ACM Trans Graph 2011;30:176:1–176:12. doi:10.1145/2024156.2024210.
Cao, T.T., Tang, K., Mohamed, A., Tan, T.S.. Parallel Banding Algorithm
to Compute Exact Distance Transform with the GPU. In: Proc. ACM I3D.
ACM; 2010, p. 83–90. doi:10.1145/1730804.1730818.
Smith, K., Landes, P.E., Thollot, J., Myszkowski, K.. Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video. Comput Graph Forum 2008;27(2):193–200.
DeCoro, C., Cole, F., Finkelstein, A., Rusinkiewicz, S.. Stylized Shadows. In: Proc. NPAR. ACM; 2007, p. 77–83.
Rosenholtz, R., Li, Y., Nakano, L.. Measuring Visual Clutter. Journal of
Vision 2007;7(2):1–22. doi:10.1167/7.2.17.
Harel, J., Koch, C., Perona, P.. Graph-Based Visual Saliency. Advances
in Neural Information Processing Systems 2007;19:545–552.
... Finally, image filtering as a building block of image and video processing has also found its applications in visualization systems when attempting optimal expression, recognition, and communication of important or prioritized information, e. g., with respect to 3D geospatial data [35,46] and surgical media to make the filtered results more easy to look at [3,2]. Here, previous works showed that edge-preserving image filters can be used to process texture data coherently with respect to linear perspective and spatial relationships when applied in a decoupled deferred texturing approach [36,37], e. g., to ease an illustrative visualization and semantic depth of field. ...
Full-text available
With the improvement of cameras and smartphones, more and more people can now take high-resolution pictures. Especially in the field of advertising and marketing, images with extremely high resolution are needed, e. g., for high quality print results. Also, in other fields such as art or medicine, images with several billion pixels are created. Due to their size, such gigapixel images cannot be processed or displayed similar to conventional images. Processing methods for such images have high performance requirements. Especially for mobile devices, which are even more limited in screen size and memory than computers, processing such images is hardly possible. In this thesis, a service-based approach for processing gigapixel images is presented to approach this problem. Cloud-based processing using different microservices enables a hardware-independent way to process gigapixel images. Therefore, the concept and implementation of such an integration into an existing service-based architecture is presented. To enable the exploration of gigapixel images, the integration of a gigapixel image viewer into a web application is presented. Furthermore, the design and implementation will be evaluated with regard to advantages, limitations, and runtime.
... For example, in geometric modeling researchers have derived techniques to remove detail from geometric models to maintain the overall shape (e. g., [74], [117]). Similarly in image processing, abstraction can remove detail from both shape and color values (e. g., [57], [96]). An alternative notion of abstraction in computer graphics was proposed by Gomes and Velho [37] in form of a universes paradigm: A physical object is on the lowest level of abstraction, its mathematical description is the first level of abstraction, which is further abstracted in a discrete representation, and the highest level of abstraction is the implementation universe. ...
We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization and, in particular, illustrative visualization mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory.
... The RGBD edge detection is still prone to noise, especially when using photorealistic textures. A general pre-processing stage for textures [Semmo and Döllner 2015] would create more robust and meaningful edges, especially for effects like gaps & overlaps. Additionally the edge-detection would probably benefit from a deferred rendering pipeline, as the edges would not be influenced by shading conditions (i.e., sharp edges from specular highlights). ...
Conference Paper
Full-text available
We investigate characteristic edge- and substrate-based effects for watercolor stylization. These two fundamental elements of painted art play a significant role in traditional watercolors and highly influence the pigment's behavior and application. Yet a detailed consideration of these specific elements for the stylization of 3D scenes has not been attempted before. Through this investigation, we contribute to the field by presenting ways to emulate two novel effects: dry-brush and gaps & overlaps. By doing so, we also found ways to improve upon well-studied watercolor effects such as edge-darkening and substrate granulation. Finally, we integrated controllable external lighting influences over the watercolorized result, together with other previously researched watercolor effects. These effects are combined through a direct stylization pipeline to produce sophisticated watercolor imagery, which retains spatial coherence in object-space and is locally controllable in real-time.
... Textured surface models are used in a widening range of application domains, such a urban planning (Semmo and Döllner, 2015), cultural heritage (Potenziani et al., 2015), archaeology (Van Damme, 2015) and geological outcrop modelling (Howell et al., 2014, Rarity et al., 2014. After a model has been captured, often by means of terrestrial laser scanning (TLS) or photogrammetry, it may be desirable for domain experts to supplement the model with novel images to make initially-hidden features visible, or to add annotations. ...
Full-text available
Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists to make use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration- and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm’s accuracy compared to traditional methods, and the impact of computational- and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically-registered mobile images are used as the basis for adding user annotations to the input textured model.
... Textured surface models are used in a widening range of application domains, such a urban planning (Semmo and Döllner, 2015), cultural heritage (Potenziani et al., 2015), archaeology (Van Damme, 2015) and geological outcrop modelling (Howell et al., 2014, Rarity et al., 2014. After a model has been captured, often by means of terrestrial laser scanning (TLS) or photogrammetry, it may be desirable for domain experts to supplement the model with novel images to make initially-hidden features visible, or to add annotations. ...
Full-text available
Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists to make use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration- and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm’s accuracy compared to traditional methods, and the impact of computational- and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically-registered mobile images are used as the basis for adding user annotations to the input textured model.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner. Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective when aiming for the optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units. This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. The techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics—unlike photorealistic rendering—to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks.
In addition, this thesis contributes a generic system that integrates different photorealistic and non-photorealistic graphic styles and provides seamless transitions between them according to user tasks, camera view, and image resolution. Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization, including topics such as spatial perception, cognition, and mapping. Furthermore, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Presentation of Research Paper "Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments"
Digitisation of this thesis was sponsored by Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.
Local histogram and local histogram-based functions can be determined by generating offset-kernel images based on domain-shifted tonal filter kernels. The offset-kernel images can be reused for multiple image locations and/or local neighborhood sizes, shapes, and weights. A neighborhood filter representing the desired local neighborhood size, shape, and frequency domain characteristics is applied to each of the offset-kernel images. Neighborhood filters may include a temporal dimension for evaluating neighborhoods in space and time. The values of the neighborhood-filtered offset-kernel images represent samples of the local histogram, or of a local histogram-based function, corresponding to the domains of their associated domain-shifted tonal filter kernels. Arbitrary functions may be used as tonal filter kernels. A histogram kernel may be used to sample values of local histogram functions. A tonal filter kernel that is a derivative or integral of another tonal filter kernel may be used to sample a derivative or integral, respectively, of a function.
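The offset-kernel construction described above can be sketched as follows. This is a minimal pure-Python illustration, not the patented implementation: the Gaussian tonal kernel, the box neighborhood filter, and all names are assumptions chosen for brevity.

```python
import math

def local_histogram(image, bins, sigma_tonal=0.1, radius=1):
    """Sample smoothed local histograms via offset-kernel images.

    For each bin center b, an offset-kernel image is formed by applying a
    tonal kernel (here a Gaussian) shifted to b; a box neighborhood filter
    then averages it spatially. The filtered value at a pixel is a sample
    of that pixel's local histogram at bin b.
    """
    h, w = len(image), len(image[0])
    histograms = {}
    for b in bins:
        # Offset-kernel image: tonal kernel centered at bin value b.
        offset = [[math.exp(-((image[y][x] - b) ** 2) / (2 * sigma_tonal ** 2))
                   for x in range(w)] for y in range(h)]
        # Box neighborhood filter (uniform weights, clamped at borders).
        filtered = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            total += offset[yy][xx]
                            count += 1
                filtered[y][x] = total / count
        histograms[b] = filtered
    return histograms

# Usage: a 3x3 image with a dark region and a bright corner.
img = [[0.0, 0.0, 0.0],
       [0.0, 0.0, 1.0],
       [0.0, 1.0, 1.0]]
hist = local_histogram(img, bins=[0.0, 1.0])
```

Note how each offset-kernel image is computed once per bin and reused for every pixel, which is the reuse property the abstract emphasizes.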
We present a method for creating black-and-white illustrations from photographs of human faces. In addition, an interactive technique is demonstrated for deforming these black-and-white facial illustrations to create caricatures which highlight and exaggerate representative facial features. We evaluate the effectiveness of the resulting images through psychophysical studies to assess accuracy and speed in both recognition and learning tasks. These studies show that the facial illustrations and caricatures generated using our techniques are as effective as photographs in recognition tasks. For the learning task, we find that illustrations are learned two times faster than photographs, and caricatures are learned one and a half times faster than photographs. Because our techniques produce images that are effective at communicating complex information, they are useful in a number of potential applications, ranging from entertainment and education to low bandwidth telecommunications and psychology research.
This paper presents a novel structure-preserving image decomposition operator called bilateral texture filter. As a simple modification of the original bilateral filter [Tomasi and Manduchi 1998], it performs local patch-based analysis of texture features and incorporates its results into the range filter kernel. The central idea to ensure proper texture/structure separation is based on patch shift that captures the texture information from the most representative texture patch clear of prominent structure edges. Our method outperforms the original bilateral filter in removing texture while preserving main image structures, at the cost of some added computation. It inherits well-known advantages of the bilateral filter, such as simplicity, local nature, ease of implementation, scalability, and adaptability to other application scenarios.
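The range-kernel modification described above can be illustrated with a plain joint (cross) bilateral filter, in which range weights are computed on a guidance image. In the bilateral texture filter that guidance comes from patch shift, which is omitted here; this pure-Python sketch, with illustrative parameter names, only shows the underlying filter:

```python
import math

def joint_bilateral(image, guidance, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Joint/cross bilateral filter: spatial Gaussian weights combined with
    range weights taken from a guidance image. With guidance == image this
    reduces to the original bilateral filter [Tomasi and Manduchi 1998].
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    # Spatial weight from pixel distance.
                    ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    # Range weight from the guidance image's values.
                    dr = guidance[yy][xx] - guidance[y][x]
                    wr = math.exp(-(dr * dr) / (2 * sigma_r ** 2))
                    acc += ws * wr * image[yy][xx]
                    norm += ws * wr
            out[y][x] = acc / norm
    return out

# A step edge with small oscillations on each side (one image row).
img = [[0.0, 0.1, 0.0, 1.0, 0.9, 1.0]]
smoothed = joint_bilateral(img, img, radius=1, sigma_r=0.2)
```

In the example the step edge between the third and fourth pixels stays sharp while the small oscillations on either side are smoothed, which is the behavior the patch-shift guidance is designed to extend from edges to texture/structure separation.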
Recent years have witnessed the emergence of new image smoothing techniques which have provided new insights and raised new questions about the nature of this well-studied problem. Specifically, these models separate a given image into its structure and texture layers by utilizing non-gradient based definitions for edges or special measures that distinguish edges from oscillations. In this study, we propose an alternative yet simple image smoothing approach which depends on covariance matrices of simple image features, also known as region covariances. The use of second order statistics as a patch descriptor allows us to implicitly capture local structure and texture information and makes our approach particularly effective for structure extraction from texture. Our experimental results have shown that the proposed approach leads to better image decompositions as compared to the state-of-the-art methods and preserves prominent edges and shading well. Moreover, we also demonstrate the applicability of our approach on some image editing and manipulation tasks such as image abstraction, texture and detail enhancement, image composition, inverse halftoning and seam carving.
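The region covariance descriptor mentioned above is the covariance matrix of simple per-pixel features over a patch. A minimal pure-Python sketch, assuming the common feature vector (x, y, intensity, |Ix|, |Iy|); the feature choice, forward-difference gradients, and names are illustrative, not the paper's exact formulation:

```python
def region_covariance(image, x0, y0, x1, y1):
    """Covariance matrix of per-pixel features over the rectangle
    [x0, x1) x [y0, y1). Feature vector per pixel:
    (x, y, intensity, |dI/dx|, |dI/dy|), with gradients from forward
    differences (clamped at the image border).
    """
    h, w = len(image), len(image[0])
    feats = []
    for y in range(y0, y1):
        for x in range(x0, x1):
            ix = abs(image[y][min(x + 1, w - 1)] - image[y][x])
            iy = abs(image[min(y + 1, h - 1)][x] - image[y][x])
            feats.append((float(x), float(y), image[y][x], ix, iy))
    n, d = len(feats), 5
    mean = [sum(f[k] for f in feats) / n for k in range(d)]
    # Sample covariance (normalized by n - 1).
    cov = [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / (n - 1)
            for j in range(d)] for i in range(d)]
    return cov

# A flat patch versus a checkerboard-textured patch: the intensity
# variance (entry [2][2]) separates texture from homogeneous structure.
flat = [[0.5] * 4 for _ in range(4)]
tex = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
c_flat = region_covariance(flat, 0, 0, 4, 4)
c_tex = region_covariance(tex, 0, 0, 4, 4)
```

Capturing such second-order statistics per patch is what lets the smoothing operator distinguish oscillatory texture from prominent edges.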