An Interaction Framework for Level-of-Abstraction
Visualization of 3D Geovirtual Environments
Amir Semmo
Hasso Plattner Institute
University of Potsdam
amir.semmo@hpi.de
Jürgen Döllner
Hasso Plattner Institute
University of Potsdam
office-doellner@hpi.de
Figure 1: Touch-based interaction using our framework with the pinch-to-zoom metaphor to parameterize
the level of abstraction of 3D geospatial objects in a region of interest for focus+context visualization.
ABSTRACT
3D geovirtual environments constitute effective media for
the analysis and communication of complex geospatial data.
Today, these environments are often visualized using static
graphical variants (e.g., 2D maps, 3D photorealistic) from
which a user is able to choose. To serve the different
interests of users in specific information, however, the spa-
tial and thematic granularity at which model contents are
represented (i.e., level of abstraction) should be dynamically
adapted to the user’s context, which requires specialized in-
teraction techniques for parameterization. In this work, we
present a framework that enables interaction interfaces to
parameterize the level-of-abstraction visualization according
to spatial, semantic, and thematic data. The framework is
implemented in a visualization system that provides image-
based rendering techniques for context-aware abstraction
and highlighting. Using touch and natural language inter-
faces, we demonstrate its versatile application to geospatial
tasks, including exploration, navigation, and orientation.
Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: User
Interfaces—Interaction styles, Input devices and strategies;
I.4.3 [Image Processing and Computer Vision]:
Enhancement—Filtering
© The Authors 2014. This is the authors’ version of the work. It is posted
here for your personal use. Not for redistribution. The definitive version has
been published in Proceedings of the 2nd ACM SIGSPATIAL Workshop on
MapInteraction (MapInteract’14).
MapInteract ’14, November 4–7 2014, Dallas/Fort Worth, TX, USA.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
http://dx.doi.org/10.1145/2677068.2677072.
General Terms
Algorithms, Design, Human Factors
Keywords
interaction, level of abstraction, 3D virtual environments,
focus+context visualization, interactive map design
1. INTRODUCTION
3D geovirtual environments (3D GeoVEs), such as virtual
3D city and landscape models, constitute effective media
for the analysis and communication of complex geospatial
data in manifold applications, such as city planning, navi-
gation, and disaster management. General-purpose systems
like Google Earth™ or Microsoft Bing Maps™ typically
represent these environments with a pre-defined set of graph-
ical variants, which is often constituted by 2D/2.5D map-like
representations (e.g., for navigation), and 3D photorealis-
tic representations (e.g., for exploration). Depending on a
user’s background, task, and perspective view, however, of-
ten too much irrelevant (cluttered) or too little information
is visualized [32], and thus no meaningful map design
is provided [19]. Switching between graphical variants is a
common approach to address this concern, but memorizing
these variants may affect a user’s visual attention and
performance significantly [22, 34].
Another promising approach is to select the spatial and
thematic granularity at which model contents are repre-
sented (i.e., level of abstraction, LoA) according to a user’s
interest in local regions (Figure 1), or according to thematic
and semantic information. Previous works showed how in-
teractive LoA visualization of 3D geovirtual environments
can be technically achieved (e.g., using view metrics [31]).
Yet a fundamental problem lies in how a user-defined
parameterization of the LoA can be intuitively performed via
interaction devices and techniques (e.g., to parameterize
magic lenses [35]), which remains to be explored.

Figure 2: Comparison between static representations of a 3D geovirtual environment in Google Maps™ (left, middle) and our approach that is able to interactively select and combine levels of abstraction (right).
In this paper, we present a framework for interaction in-
terfaces to parameterize the level-of-abstraction visualiza-
tion of 3D geovirtual environments. We base our framework
on image-based rendering techniques using deferred shading
of modern GPU architectures to enable smooth transitions
between focus and context regions with different graphical
representations [31] (Figure 1/2). In particular, geometry
buffers (G-buffers) [28] and distance transforms [6] are used
to enable a flexible mapping of user interaction to geospa-
tial properties in real time. We further provide use cases
for our framework using touch-based and natural language
interaction (e.g., via textual descriptions) to parameterize
a context-aware abstraction and highlighting, i.e., semantic
depth-of-field [17], image filtering in texture space [30] to
direct a user’s pre-attentive cognition [29], and cartographic
visualization techniques [31]. In addition, interactively se-
lecting the LoA can significantly reduce data trans-
fer in client-server environments and improve overall ren-
dering performance, because only relevant data is processed.
The remainder of this paper is structured as follows. Sec-
tion 2 reviews related work. Section 3 gives a technical back-
ground and states challenges for interactive LoA visualiza-
tion. Section 4 presents our framework for parameterizing
the LoA visualization of 3D geovirtual environments, which
is exemplified for concrete use cases in Section 5. Finally,
Section 6 concludes this paper.
2. RELATED WORK
Our work is related to focus+context interfaces and visual-
ization, and interaction techniques designed to parameterize
a context-aware abstraction within 3D virtual environments.
2.1 Focus+Context Interfaces & Visualization
Focus+context describes the concept of visually distinguishing
important or relevant information from closely related
information [7]. Focus+context visualization conforms
with the visual information-seeking mantra [32] by enabling
users to interactively change the visual representation of
data for points and regions of interest [1], and to solve the
problem of over-cluttered visual representations. Many ad-
ditional interface schemes exist to allow users to attain both
focused and contextual views of their information spaces,
i.e., overview+detail, zooming, and cue techniques [4]. Be-
cause the efficiency of these schemes highly depends on a
user’s task, our work explores a combination of these inter-
faces and how they can be coupled with a level-of-abstraction
visualization via explicit and implicit view metrics (e.g.,
region of interest, view distance).
Interactive lenses have become established means to facil-
itate the exploration of large data sets, and are quite ver-
satile in their parameterization [35]. First approaches were
provided via the magic lens metaphor [2], and later ex-
tended for 3D spaces [38]. The concept has also been ex-
plored for illustrative visualization [20] and 3D geovirtual
environments [27] (e.g., for navigation, landscaping, and
urban planning) to reveal information that is hidden in high-
dimensional data sets. Typically, the concepts are combined
with context-based geometric or graphical style variances
to direct a user’s pre-attentive cognition [29]. A common
method is to parameterize image filters according to view
metrics (e.g., view distance) or regions of interest to se-
lect and seamlessly combine different LoA representations
of 3D scene contents [31] or map contents (e.g., route visu-
alization [13]), or to combine different generalized geometric
representations [37]. Our work demonstrates how user inter-
action can directly parameterize these kinds of visualization
techniques using image-based shading on the GPU. In par-
ticular, we provide use cases of the semantic depth-of-field
effect [17, 36] for information highlighting and abstraction.
2.2 Interaction in 3D Virtual Environments
Many systems use a classical mouse/keyboard setup with an
optional graphical user interface to inspect objects and pa-
rameterize the visualization of 3D virtual environments [11].
Direct manipulation is typically coupled with ray casting to
determine intersections of a pointing device with the visual-
ized output (e.g., to specify regions of interest [35]). Due to
the increasing availability of ubiquitous devices, e.g., smart-
phones and tablets, visualization systems also increasingly
make use of the opportunities of (multi-)touch interaction
[18]. Evaluations showed that these interfaces provide a
quite natural direct-touch interaction within virtual envi-
ronments [26], and may outperform mouse input for specific
tasks in terms of completion times [15]. However, touch
user interfaces also pose certain challenges, e.g., the
intuitive mapping from 2D input to 3D manipulations [10,
14]. Our work provides a generic interface for user interac-
tion (e.g., mouse, touch gestures) to parameterize the visual
representation on a spatial, thematic, and semantic basis,
which is exemplified for virtual 3D city models. The map-
ping is performed interactively via image-based rendering,
geometry buffers [28] and distance maps [6], and is extensi-
ble for arbitrary shading effects.
Figure 3: Overview of our framework, which is arranged in three components: an interaction interface (CPU) comprising interaction devices, an input interpreter, interaction modes (e.g., navigation, routing, highlighting), and uniform mapping; 3D scene data with interaction feedback (CPU/GPU); and deferred, image-based rendering (GPU) comprising a G-buffer stage (IDs, depth, position), importance mask synthesis, view metrics, and shading effects. A route definition (start and end point) exemplifies the data flow. The framework is discussed in further detail in Section 4.
3. BACKGROUND AND CHALLENGES
User involvement is a critical design aspect for 3D geovirtual
environments. A good design presents as much information
as needed for focus and as little as required for context [25].
Four major interface schemes have been identified for fo-
cused and contextual views [4]: zooming, focus+context,
overview+detail, and cue techniques. Using these schemes
with parameterized graphical variants of 3D geovirtual en-
vironments, however, remains a challenging task, because
these environments are often inherently complex with re-
spect to geometry, appearance, and thematic information.
Here, a major goal is to provide a framework that seam-
lessly integrates into the real-time rendering pipeline, and is
extensible for custom interaction devices and techniques.
The programmable pipeline of modern graphics hardware
facilitates graphical variants to be selected and parameter-
ized in a flexible way. A promising approach is deferred
rendering, where geometry information is rendered in a G-
buffer [28] and then used in a post-processing stage to per-
form image-based algorithms on visible fragments only. In
particular, this method has proven effective for non-
photorealistic rendering techniques that control how im-
portant or prioritized information is highlighted and cog-
nitively processed in an application context [29]. General
concepts that parameterize the deferred rendering pipeline
according to user interaction have been proposed for high-
lighting [36] and LoA visualization [31, 30]. A generic inter-
action interface for these implementations, however, remains
to be explored. Here, we identified two major challenges:
1. Rendering should be decoupled from concrete interac-
tion interfaces, and instead be parameterized via high-
level descriptions to facilitate an easy deployment of
new interaction devices and techniques.
2. Interactive frame rates should be maintained to pro-
vide a responsive system to the user.
In the next section, we show how the deferred rendering
pipeline can be effectively parameterized using uniform
buffers and textures as an abstraction layer.
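As a preview, the following minimal C++/OpenGL sketch illustrates what such an abstraction layer can look like: a focus definition expressed as a std140 uniform block that any interaction interface can update. The block layout and names (RoiParams, uploadRoi) are hypothetical, not the system's actual API.

```cpp
// Hypothetical sketch: a circular region of interest as a std140
// uniform block, decoupling interaction interfaces from shading.
#include <GL/glew.h>

struct RoiParams {     // mirrors a std140 uniform block:
    float center[2];   //   vec2  center  (world/map coordinates)
    float radius;      //   float radius
    float falloff;     //   float falloff (width of the smooth transition)
};

GLuint createRoiBuffer() {
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(RoiParams), nullptr,
                 GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // binding point 0
    return ubo;
}

// Called whenever an interaction interface updates the focus definition.
void uploadRoi(GLuint ubo, const RoiParams& p) {
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(RoiParams), &p);
}
```

Because the shader only reads the uniform block, any device (mouse, touch, gaze) that can produce a center and radius can drive the same visualization.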
4. INTERACTION FRAMEWORK
An overview of our framework is shown in Figure 3. It is
generic in its application and can be seamlessly integrated
into existing visualization systems. The input data consist
of 3D meshes with additional attributes (e.g., semantics) and
textures for appearance and thematic information (e.g., us-
ing CityGML [16]). The main idea of our framework is to
decouple the interaction interface from rendering, and in-
stead use uniform buffers and textures for LoA parameteri-
zation. First, input via interaction devices and techniques is
interpreted according to a pre-configured interaction mode,
and mapped to a functional description to configure GPU
resources (Section 4.1). The resources are then uploaded to
GPU memory and evaluated in a deferred rendering stage
to dynamically compute importance masks (Section 4.2).
Multiple importance masks may be computed and blended
to enable multi-variate highlighting and abstraction effects,
and may be additionally parameterized according to view
metrics for view-dependent LoA visualization. The frame-
work provides presets of semantics-based shading for map-
like representations, and image filtering techniques for fo-
cus+context visualization.
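A skeleton of one frame in this pipeline might look as follows; all types and function names are hypothetical stand-ins for the components of Figure 3.

```cpp
// Skeleton of one frame in the proposed pipeline; every type and
// function is a hypothetical stand-in for a component of Figure 3.
struct Scene {};
struct InteractionState {};
struct FocusDefinition {};

FocusDefinition interpretInput(const InteractionState&) { return {}; } // input interpreter
void uploadUniformsAndTextures(const FocusDefinition&) {}              // uniform mapping
void renderGBuffer(const Scene&) {}                                    // G-buffer stage
void synthesizeImportanceMasks(const FocusDefinition&) {}              // mask synthesis
void shadeAndComposite() {}                                            // shading + composition

void renderFrame(const Scene& scene, const InteractionState& input) {
    FocusDefinition focus = interpretInput(input); // interaction interface (CPU)
    uploadUniformsAndTextures(focus);              // parameters to GPU memory
    renderGBuffer(scene);                          // deferred rendering (GPU)
    synthesizeImportanceMasks(focus);              // focus/context per fragment
    shadeAndComposite();                           // LoA shading and blending
}
```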
4.1 Interaction Interface
In the following, we define a generic workflow for how user
interaction can be mapped to definitions of focus and context
and their graphical representations via shading.
User Input, Interaction Devices and Interpreter
A concrete challenge for the design of an interaction inter-
face is to strive for consistency while keeping the user in
control of parameterizing the visualization process [33].
Here, a key observation is that no constraints regarding
the input device or technique should be made, so as to al-
low users to choose the best direct interaction method for pa-
rameterizing a visualization system for a given task. Our
main idea is to decouple the functional descriptions of fo-
cus and context from the concrete interaction device, e.g.,
so that mouse/keyboard, touch-based or implicit gaze-based
interfaces [8] can be used equally to define a region of in-
terest. Optionally, natural language input should be possi-
ble for automatic highlighting. Technically, interaction
interpretation is formulated as mapping the user-defined in-
put to a high-level functional description for focus and con-
text definition. To avoid ambiguous mappings, interaction
modes are required for disambiguation, but should be made
as concise as possible (e.g., using quasi-modes [24]). Exem-
plary mappings include object selection by textual lookup
via natural language interfaces (e.g., line edit widgets), or
via direct selection using point-and-click metaphors or tap
gestures. Similarly, a circular region of interest can be de-
fined via pinch-to-zoom metaphors (Figure 1) or sketching
(e.g., shape strokes).

Figure 4: Types for focus definition and mapping of exemplary high-level descriptions into model space: (a) object selection (object identifier, naming; e.g., “Main Street 437”); (b) 2D region of interest (freeform area, route, functional description; e.g., “Route X->Y”); (c) 3D region of interest (freeform 3D primitive, functional description; e.g., “Floors 24-30”); (d) logical selection (semantics-based, hierarchy-based; e.g., “City Centre/Street Network”); (e) thematic selection (data range, data value; e.g., “High Solar Potential”); (f) view-dependent selection (view parameters such as distance and angle; e.g., “Close to View”).

Technically, the mapping to high-
level descriptions can be realized using logical collections
of 3D model data as input, enriched with descriptive in-
formation that is stored as attributes (e.g., encoded with
CityGML [16]). These attributes are then used to map user-
defined input from parameter space into model space. Deal-
ing with 3D geovirtual environments, we distinguish between
six categories for focus definition (Figure 4):
1. Object selection: The highlighting of single or groups
of objects that serve as landmarks according to a user’s
context and interest.
2. 2D region of interest: The highlighting of objects that
are located close to, or within a 2D region of interest.
3. 3D region of interest: The spatial highlighting of ob-
jects or components with additional constraints in height.
4. Logical selection: The selection of objects or compo-
nents with respect to semantic constraints, such as fea-
ture type (e.g., street networks).
5. Thematic selection: The selection of objects or com-
ponents with respect to thematic data, such as popu-
lation or solar potential, and according to a range of
interest.
6. View-dependent selection: The definition of regions of
interest according to view-based metrics (e.g., viewing
distance to virtual camera, or viewing inclination).
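For illustration, these six categories could be modeled as a tagged union that the input interpreter emits and a renderer-side mapper translates into shader uniforms or distance maps; the following C++ sketch uses hypothetical, simplified types that are not part of the described system.

```cpp
// Hypothetical high-level focus descriptions emitted by the input
// interpreter; a mapper turns these into shader uniforms or
// distance maps (see "Shader Uniform Mapping" below).
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

struct ObjectSelection   { std::vector<std::uint32_t> objectIds; };  // (1) e.g., a named landmark
struct RegionOfInterest2D { float center[2]; float radius; };        // (2) circular RoI in map space
struct RegionOfInterest3D {                                          // (3) with height constraints
    RegionOfInterest2D base;
    float minHeight, maxHeight;
};
struct LogicalSelection  { std::string featureType; };               // (4) e.g., "street network"
struct ThematicSelection { std::string theme; float lo, hi; };       // (5) e.g., solar potential range
struct ViewDependentSelection { float maxViewDistance; };            // (6) view metric threshold

using FocusDefinition = std::variant<ObjectSelection, RegionOfInterest2D,
                                     RegionOfInterest3D, LogicalSelection,
                                     ThematicSelection, ViewDependentSelection>;
```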
Direct interaction for these focus and context definitions
should trigger immediate visual feedback to indicate the cor-
responding mode, for which we provide specialized shading
effects. For instance, using the pinch-to-zoom metaphor to
define a circular region of interest (Figure 1) visualizes its
boundaries as projected circular cues. From a cartographic
point of view, choosing an adequate graphical representa-
tion of the focus and context definitions is task-dependent,
for which we implemented a range of image-based filtering
techniques [30] that are provided as presets a user can select
from (Section 5).
In most interaction modes, a basic functionality is to map
2D input to 3D attributes via raycasting (e.g., for object
selection [11]).

Figure 5: Using G-buffers to map texture coordinates, identifiers, and positions to spatial attributes (e.g., via database queries and textures for thematic data).

Raycasting is typically performed using intersection tests
with the 3D scene geometry, but these may become too complex
to be performed interactively. Instead, we use an
image-based approach using the geometry buffer [28] of the
rendered contents to query 3D scene attributes for the visible
parts only. We observed that the world position, texture co-
ordinates, and identifier information (i.e., objects, textures)
synthesized in a G-buffer are sufficient to query arbitrary
spatial information (Figure 5). For instance, texture identi-
fiers and coordinates directly map into texture space for a
fragment-based information lookup, whereas object identi-
fiers can be mapped to any kind of object-specific attributes.
Refer to Figure 3 for a route definition and how it maps to 3D
virtual environments. In the following, we focus on direct in-
teraction performed on the rendered image or task-oriented
interaction via high-level descriptions.
Shader Uniform Mapping
High-level descriptions for focus regions are either mapped
to GPU uniform buffers or textures. In the first case, pa-
rameters are directly evaluated on the GPU for shading. In
the second case, we make use of the parallel-banding algo-
rithm [3] to compute an exact distance transform in real
time. The synthesized distance maps are then used for pro-
jective texturing [9]. This approach enables image-based
operations to be effectively implemented, such as fragment-
based thresholding of the Euclidean distance between 3D
objects to regions of interest (e.g., distance to a route or
point of interest) for image blending and overlays. Refer to
Figure 6 for exemplary results.
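For exposition, a brute-force CPU reference of such a distance transform might look as follows; the actual system computes it on the GPU via parallel banding [3], so this sketch only illustrates the output semantics.

```cpp
// Illustrative CPU reference of the distance transform computed on
// the GPU via parallel banding [3]: for each texel, the Euclidean
// distance to the nearest "seed" texel (e.g., a rasterized route).
// Brute force, for exposition only.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

std::vector<float> distanceTransform(const std::vector<bool>& seeds,
                                     int width, int height) {
    std::vector<float> dist(seeds.size(), std::numeric_limits<float>::max());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int sy = 0; sy < height; ++sy)
                for (int sx = 0; sx < width; ++sx)
                    if (seeds[static_cast<std::size_t>(sy) * width + sx]) {
                        const float dx = float(x - sx), dy = float(y - sy);
                        const float d = std::sqrt(dx * dx + dy * dy);
                        const std::size_t i =
                            static_cast<std::size_t>(y) * width + x;
                        if (d < dist[i]) dist[i] = d;
                    }
    return dist; // sampled later via projective texturing [9]
}
```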
Figure 6: Exemplary mapping of focus definitions to shader uniforms and distance maps, e.g., a circular RoI (center : vec2, radius : float), object highlighting (objectID : uint), and a route (mid points and control points : vec2[], with a synthesized distance map texture).
4.2 Rendering Interface
Deferred rendering with image-based shading is performed
to maintain interactive frame rates. In addition, visual feed-
back is provided during interaction where appropriate
(e.g., the boundary of a region of interest during its
definition, refer to Figure 1).
Figure 7: Exemplary synthesis of a smoothed importance mask for a user-defined region of interest.

Importance Mask Synthesis

Figure 8: Exemplary transition functions.

The uniform parameters are evaluated using fragment shaders.
For each definition type, an importance mask is synthesized
that indicates whether a fragment should be shaded for focus
or context (Figure 7). Blend functions are utilized for image
composition [23] and enable smooth transitions between focus
and context regions, but may also be configured to enable
hard transitions (Figure 8), e.g., to avoid distorted color
tones or emphasize regions of interest when using
heterogeneous graphical representations for focus and
context. All computed importance masks of a 3D scene are
blended to enable multivariate effects (e.g., a route with a
circular region of interest at the destination). Optionally,
view metrics (e.g., distance of fragments to the viewing
position) are evaluated for view-dependent focus+context
visualization [31].
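A CPU-side reference of this mask synthesis and blending might look as follows; the smoothstep falloff and the max-based blending are illustrative assumptions rather than the system's exact transition functions.

```cpp
// Illustrative per-fragment importance-mask evaluation (CPU
// reference of the fragment-shader logic; the transition shapes
// and the max-blend are assumptions).
#include <algorithm>
#include <cmath>

// Hermite interpolation, as provided by GLSL's smoothstep().
float smoothstep(float edge0, float edge1, float x) {
    const float t = std::clamp((x - edge0) / (edge1 - edge0), 0.f, 1.f);
    return t * t * (3.f - 2.f * t);
}

// Smooth circular region of interest: 1 inside, falling off to 0.
float circularMask(float dx, float dy, float radius, float falloff) {
    const float d = std::sqrt(dx * dx + dy * dy);
    return 1.f - smoothstep(radius, radius + falloff, d);
}

// Hard transition variant (e.g., for heterogeneous styles).
float hardMask(float d, float radius) { return d < radius ? 1.f : 0.f; }

// Blend several masks into one multivariate importance mask and
// optionally attenuate by a view metric (distance to the camera).
float blendMasks(float roiMask, float routeMask,
                 float viewDistance, float maxViewDistance) {
    const float mask = std::max(roiMask, routeMask);
    const float viewTerm = 1.f - smoothstep(0.f, maxViewDistance, viewDistance);
    return mask * viewTerm;
}
```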
Shading and Composition
We explored two shading procedures. The first applies explicit
shading effects that are defined per model semantics. Previous
work showed that this method facilitates visualization with
map-like representations, i.e., using cartographic design
principles [31] (refer to Figure 9). The second applies image
filtering techniques working in texture space for visual
abstraction (e.g., on color maps) [30]. We configured presets for
these shading techniques to enable an automated setup for
user-defined tasks. For instance, a blueprint rendering style
may be automatically selected to represent construction sites
in urban planning. Highlighting may also be performed by
post-processing the synthesized importance masks with a
distance transform in screen space (e.g., for glow effects [36]).
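The composition itself reduces to a per-fragment blend driven by the importance mask, in the spirit of alpha compositing [23]; a minimal sketch:

```cpp
// Sketch: per-channel focus/context composition driven by the
// importance mask m in [0, 1] (cf. compositing operators [23]).
struct Rgb { float r, g, b; };

Rgb composite(const Rgb& focusColor, const Rgb& contextColor, float m) {
    return { focusColor.r * m + contextColor.r * (1.f - m),
             focusColor.g * m + contextColor.g * (1.f - m),
             focusColor.b * m + contextColor.b * (1.f - m) };
}
```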
5. RESULTS
We implemented our framework using C++, OpenGL, and
GLSL. OpenSceneGraph was used as the rendering engine
to handle 3D data sets (e.g., CityGML [16]). The image
filters used with the framework were implemented on the
GPU with CUDA. All results were rendered in real time on
an Intel® Xeon™ 4×3.06 GHz with 6 GByte RAM and an
NVIDIA® GTX 760 GPU with 4 GByte VRAM, and tested
with a 23.6” Lenovo® L2461xwa multi-touch monitor.
Four interaction interfaces were implemented to resemble
some of the definition types shown in Figure 4: (1) regional
definitions via the pinch-to-zoom metaphor, (2) a search bar
for textual lookup, (3) sliders for thematic data range defi-
nition, and (4) direct object selection. In addition, we use
distance transforms for route highlighting, and view metrics
(e.g., distance) for view-dependent visualization.

Figure 9: Cartographic shading and landmark highlighting for context regions in a routing scenario [21].

Figure 10: Highlighting of the Grand Canyon National Park using a 3D terrain model (focus) and a 2D map (context).

Figure 11: User-defined, thematic selection of “high solar potential data” for focus and context definition.
Typical user tasks that deal with information exploration
require detailed regional information, and only selected in-
formation in context regions for navigational purposes. We
coupled the pinch-to-zoom metaphor with our framework to
directly change the graphical representation in regions of in-
terest (Figure 1). The user starts pointing with two fingers
and spans – via the zoom metaphor – the range of interest in
world space. A pre-selected mode for graphical representa-
tion then automatically adjusts the LoA in the focus region.
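For illustration, mapping the two touch positions to a circular region of interest might be implemented as follows; the names and the world-space unprojection (via the G-buffer lookup of Section 4.1) are assumptions.

```cpp
// Hedged sketch: derive a circular region of interest from two
// touch points that were unprojected to world space via the
// G-buffer; the resulting parameters are uploaded as shader
// uniforms (cf. Figure 6).
#include <cmath>

struct Vec2 { float x, y; };
struct CircularRoi { Vec2 center; float radius; };

CircularRoi roiFromPinch(Vec2 worldA, Vec2 worldB) {
    const Vec2 center{ (worldA.x + worldB.x) * 0.5f,
                       (worldA.y + worldB.y) * 0.5f };
    const float dx = worldA.x - worldB.x, dy = worldA.y - worldB.y;
    const float radius = 0.5f * std::sqrt(dx * dx + dy * dy);
    return { center, radius };
}
```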
Alternatively, we provide a natural language interface that
allows users to directly highlight regions of interest via a
search bar with database lookup functionality. Figure 10
exemplifies a semantic lens for the virtual environment of
the Grand Canyon that automatically blends a digital ter-
rain model for focus with a projected map for context. We
also used this interface to automatically parameterize the
semantic depth-of-field (SDoF) effect [17] for object high-
lighting. Figure 12 shows a result, where the user searched
for a specific group of buildings in a local environment.
Another popular task is routing, where the user is in-
terested in information that eases guidance and orientation.
Starting from a B-spline route, we render it as a graphical
primitive in a texture map, followed by a distance transform.
The georeferenced distance map is then used with projective
texturing, and is finally thresholded to select the objects
close to the route for high-detail shading. Figure 9 exempli-
fies a result that combines the focus regions along a route
with abstract representations of the context regions. This
example makes use of multiple mechanisms on graphical core
variables in cartographic design [12], such as a degressive
perspective in the background to increase screen-space uti-
lization, symbolization concepts to represent land use effec-
tively, and an iconification of landmarks so that their best
views always face the viewing direction [31].
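A simplified CPU reference of the thresholding step in this route scenario might look as follows; the georeference layout is an assumption, and the actual system performs the lookup on the GPU via projective texturing [9].

```cpp
// Sketch: thresholding a georeferenced distance map to select
// objects close to the route for high-detail shading (simplified
// CPU reference of the projective texturing lookup).
#include <cstddef>
#include <vector>

struct DistanceMap {
    int width, height;
    float originX, originY;   // world coordinates of texel (0, 0)
    float metersPerTexel;
    std::vector<float> texels; // distance to the route, in meters
};

float sampleDistance(const DistanceMap& dm, float worldX, float worldY) {
    int u = static_cast<int>((worldX - dm.originX) / dm.metersPerTexel);
    int v = static_cast<int>((worldY - dm.originY) / dm.metersPerTexel);
    u = u < 0 ? 0 : (u >= dm.width  ? dm.width  - 1 : u);  // clamp to map
    v = v < 0 ? 0 : (v >= dm.height ? dm.height - 1 : v);
    return dm.texels[static_cast<std::size_t>(v) * dm.width + u];
}

// Fragments closer than `threshold` to the route are shaded in focus.
bool closeToRoute(const DistanceMap& dm, float x, float y, float threshold) {
    return sampleDistance(dm, x, y) < threshold;
}
```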
Figure 12: Semantic depth-of-field and highlighting.

We also explored approaches to parameterize a thematic
visualization. We implemented a slider interface where the
user is able to specify the range of interest for thematic
data. Figure 11 demonstrates how this interface was used to
highlight areas with a high solar potential within the
virtual environment of the city of Berlin (Germany). Here,
difference-of-Gaussians filtering in texture space [30] was
used to automatically stylize the context regions.
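Per fragment, this slider-driven selection reduces to a simple range test against the thematic attribute; a minimal sketch with assumed parameter names:

```cpp
// Sketch: thematic range selection as evaluated per fragment.
// `value` is the thematic attribute (e.g., solar potential)
// fetched from a thematic texture; lo/hi come from the sliders.
float thematicMask(float value, float lo, float hi) {
    return (value >= lo && value <= hi) ? 1.f : 0.f; // focus if in range
}
```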
Figure 13: Blueprint stylization for urban planning.

Finally, model variants for urban planning and analysis were
made interactively explorable via magic lenses [2]. The user
spans a region of interest via direct touch interaction,
shifts the lens to a desired location, and selects a model
variant via a dropdown menu. The focus area was then
visualized in an exemplary blueprint style (Figure 13).
The use cases demonstrate that image-based rendering
techniques are able to effectively map user interaction to
focus and context definitions. However, integrating new
interaction techniques still requires considerable effort
from a developer, in particular regarding the mapping to GPU
uniform buffers and shader programming. Here, an extensible
interface for the rendering backend remains to be explored.
6. CONCLUSIONS AND FUTURE WORK
We present an interaction framework that selects and seam-
lessly combines levels of abstraction of 3D geovirtual en-
vironments according to a user’s interest in spatial, seman-
tic, and thematic information. The framework decouples
concrete interaction devices and techniques from rendering,
and thus provides a generic blueprint for visualization sys-
tems that demand extensible interaction interfaces. We
demonstrate its flexibility by the example of typical tasks
performed with virtual 3D city and landscape models, such
as exploration, navigation, and orientation.
We see multiple directions for future work:

• The paper reports on decoupling the interaction interface
from the rendering backend by using a unified parameter
space; only a few examples of interaction techniques are
presented. In the future, the potential of a generic
framework for user interface developers and designers
should be explored to make concrete interaction techniques
and devices easier to deploy. This also includes the
exploration of methods to concisely trigger interaction
modes.

• Because the presented framework and techniques are
designed for generic application, they can also be useful
in other visualization domains, such as medical
visualization.

• We also plan to explore the application of our framework
on mobile devices, i.e., to evaluate the impact of
interaction metaphors on a user’s task performance when
directing a focus+context visualization on limited screen
sizes. Once multiple interaction interfaces are designed,
it is also mandatory to evaluate their effectiveness. A
reasonable approach is presented by Çöltekin et al. [5],
who analyzed and evaluated eye movements according to
usability metrics.
Acknowledgments
We would like to thank the anonymous reviewers for their
valuable comments. The gesture overlay used in Figure 1 is
kindly provided under Creative Commons license by Ges-
tureWorks. This work was funded by the Federal Min-
istry of Education and Research (BMBF), Germany, within
the InnoProfile Transfer research group “4DnD-Vis”
(www.4dndvis.de).
7. REFERENCES
[1] J. Bertin. Graphics and graphic information
processing. Walter de Gruyter, 1981.
[2] E. A. Bier, M. C. Stone, K. Pier, W. Buxton, and
T. D. DeRose. Toolglass and Magic Lenses: The
See-through Interface. In Proc. ACM SIGGRAPH,
pages 73–80, 1993.
[3] T.-T. Cao, K. Tang, A. Mohamed, and T.-S. Tan.
Parallel Banding Algorithm to compute exact distance
transform with the GPU. In Proc. I3D, pages 83–90,
2010.
[4] A. Cockburn, A. Karlson, and B. B. Bederson. A
Review of Overview+Detail, Zooming, and
Focus+Context Interfaces. ACM Comput. Surv.,
41(1):2:1–2:31, 2009.
[5] A. Çöltekin, B. Heil, S. Garlandini, and S. I.
Fabrikant. Evaluating the Effectiveness of Interactive
Map Interface Designs: A Case Study Integrating
Usability Metrics with Eye-movement Analysis.
Cartography and Geographic Information Science,
36(1):5–17, 2009.
[6] S. F. Frisken, R. N. Perry, A. P. Rockwood, and T. R.
Jones. Adaptively sampled distance fields: a general
representation of shape for computer graphics. In
Proc. ACM SIGGRAPH, pages 249–254, 2000.
[7] G. W. Furnas. Generalized Fisheye Views. In Proc.
CHI, pages 16–23, 1986.
[8] I. Giannopoulos, P. Kiefer, and M. Raubal.
GeoGazemarks: Providing Gaze History for the
Orientation on Small Display Maps. In Proc. ACM
ICMI, pages 165–172, 2012.
[9] P. Haeberli and M. Segal. Texture mapping as a
fundamental drawing primitive. In Eurographics
Workshop on Rendering, pages 259–266, 1993.
[10] P. Isenberg, T. Isenberg, T. Hesselmann, B. Lee,
U. von Zadow, and A. Tang. Data Visualization on
Interactive Surfaces: A Research Agenda. IEEE
Computer Graphics and Applications, 33(2):16–24,
2013.
[11] J. Jankowski and M. Hachet. A Survey of Interaction
Techniques for Interactive 3D Environments. In
Eurographics 2013 - STARs, pages 65–93, 2013.
[12] M. Jobst, J. E. Kyprianidis, and J. Döllner.
Mechanisms on Graphical Core Variables in the
Design of Cartographic 3D City Presentations. In
Geospatial Vision, pages 45–59. Springer, 2008.
[13] P. Karnick, D. Cline, S. Jeschke, A. Razdan, and
P. Wonka. Route Visualization Using Detail Lenses.
IEEE Trans. Vis. Comput. Graphics, 16(2):235–247,
2010.
[14] D. Keefe and T. Isenberg. Reimagining the Scientific
Visualization Interaction Paradigm. IEEE Trans. Vis.
Comput. Graphics, 46(5):51–57, 2013.
[15] S. Knoedel and M. Hachet. Multi-touch RST in 2D
and 3D spaces: Studying the impact of directness on
user performance. In Proc. IEEE 3DUI, pages 75–78,
2011.
[16] T. H. Kolbe. Representing and Exchanging 3D City
Models with CityGML. In Proc. Int. Workshop on 3D
Geo-Information, page 20, 2009.
[17] R. Kosara, S. Miksch, and H. Hauser. Semantic Depth
of Field. In Proc. IEEE InfoVis, pages 97–104, 2001.
[18] B. Lee, P. Isenberg, N. Riche, and S. Carpendale.
Beyond Mouse and Keyboard: Expanding Design
Considerations for Information Visualization
Interactions. IEEE Trans. Vis. Comput. Graphics,
18(12):2689–2698, 2012.
[19] A. MacEachren. How Maps Work. Guilford Press,
1995.
[20] P. Neumann, T. Isenberg, and S. Carpendale. NPR
Lenses: Interactive Tools for Non-photorealistic Line
Drawings. In Proc. Smart Graphics, pages 10–22, 2007.
[21] S. Pasewaldt, A. Semmo, M. Trapp, and J. Döllner.
Multi-Perspective 3D Panoramas. International
Journal of Geographical Information Science, 2014. In
print.
[22] M. D. Plumlee and C. Ware. Zooming Versus Multiple
Window Interfaces: Cognitive Costs of Visual
Comparisons. ACM Trans. Comput.-Hum. Interact.,
13:179–209, 2006.
[23] T. Porter and T. Duff. Compositing digital images.
Proc. ACM SIGGRAPH, 18(3):253–259, 1984.
[24] J. Raskin. The humane interface: new directions for
designing interactive systems. Addison-Wesley
Professional, 2000.
[25] T. Reichenbacher. The concept of relevance in mobile
maps. In Location Based Services and
TeleCartography, pages 231–246. Springer, 2007.
[26] G. Robles-De-La-Torre. The Importance of the Sense
of Touch in Virtual and Real Environments. IEEE
MultiMedia, 13(3):24–30, 2006.
[27] T. Ropinski, K. H. Hinrichs, and F. Steinicke. A
Solution for the Focus and Context Problem in
Geo-Virtual Environments. In Proc. ISPRS DMGIS,
pages 144–149, 2005.
[28] T. Saito and T. Takahashi. Comprehensible Rendering
of 3-D Shapes. In Proc. ACM SIGGRAPH, pages
197–206, 1990.
[29] A. Santella and D. DeCarlo. Visual Interest and NPR:
an Evaluation and Manifesto. In Proc. NPAR, pages
71–150, 2004.
[30] A. Semmo and J. Döllner. Image Filtering for
Interactive Level-of-Abstraction Visualization of 3D
Scenes. In Proc. CAe, pages 5–14, 2014.
[31] A. Semmo, M. Trapp, J. E. Kyprianidis, and
J. Döllner. Interactive Visualization of Generalized
Virtual 3D City Models using Level-of-Abstraction
Transitions. Comput. Graph. Forum, 31(3):885–894,
2012.
[32] B. Shneiderman. The eyes have it: a task by data type
taxonomy for information visualizations. In Proc.
IEEE Symposium on Visual Languages, pages
336–343, 1996.
[33] B. Shneiderman, C. Plaisant, M. Cohen, and
S. Jacobs. Designing the User Interface: Strategies for
Effective Human-Computer Interaction. Pearson, 2009.
[34] O. Swienty, T. Reichenbacher, S. Reppermund, and
J. Zihl. The role of relevance and cognition in
attention-guiding geovisualisation. The Cartographic
Journal, 45(3):227–238, 2008.
[35] C. Tominski, S. Gladisch, U. Kister, R. Dachselt, and
H. Schumann. A Survey on Interactive Lenses in
Visualization. In Proc. EuroVis - STARs, pages 43–62,
2014.
[36] M. Trapp, C. Beesk, S. Pasewaldt, and J. Döllner.
Interactive Rendering Techniques for Highlighting in
3D Geovirtual Environments. In Proc. 3D GeoInfo
Conference, pages 197–210, 2010.
[37] M. Trapp, T. Glander, H. Buchholz, and J. Döllner.
3D generalization lenses for interactive focus +
context visualization of virtual city models. In Proc.
IEEE IV, pages 356–361, 2008.
[38] J. Viega, M. J. Conway, G. Williams, and R. Pausch.
3D magic lenses. In ACM UIST, pages 51–58, 1996.