Eurographics Conference on Visualization (EuroVis) 2012
S. Bruckner, S. Miksch, and H. Pfister (Guest Editors)
Volume 31 (2012), Number 3

* http://www.hpi3d.de. This is the authors' version of the work. The definitive version will be available at diglib.eg.org and www.blackwell-synergy.com.
Interactive Visualization of Generalized Virtual 3D City Models using Level-of-Abstraction Transitions*

Amir Semmo, Matthias Trapp, Jan Eric Kyprianidis, Jürgen Döllner
Hasso-Plattner-Institut, University of Potsdam, Germany*
Figure 1:
Exemplary result of the visualization system that enables the seamless transition between abstract graphics (A) and a
photorealistic version (B) view-dependently. The sequence below shows single frames of this transition.
Abstract
Virtual 3D city models play an important role in the communication of complex geospatial information in a growing
number of applications, such as urban planning, navigation, tourist information, and disaster management. In
general, homogeneous graphic styles are used for visualization. For instance, photorealism is suitable for detailed
presentations, and non-photorealism or abstract stylization is used to facilitate guidance of a viewer’s gaze to
prioritized information. However, to adapt visualization to different contexts and contents and to support saliency-
guided visualization based on user interaction or dynamically changing thematic information, a combination of
different graphic styles is necessary. Design and implementation of such combined graphic styles pose a number
of challenges, specifically from the perspective of real-time 3D visualization. In this paper, the authors present a
concept and an implementation of a system that enables different presentation styles, their seamless integration
within a single view, and parametrized transitions between them, which are defined according to tasks, camera
view, and image resolution. The paper outlines potential usage scenarios and application fields together with a
performance evaluation of the implementation.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation—Viewing
algorithms I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—
1. Introduction
Virtual 3D city models are an integral part of a growing num-
ber of applications, systems, and services, and are becoming
general-purpose tools for interactively viewing, editing, and
distributing geospatial information. Typically, visualization
systems apply a homogeneous graphic style to depict virtual
3D city models: photorealistic rendering is usually used for
detailed presentations, or illustrative, abstract rendering to
draw attention to prioritized information [SD04]. Using a
suitable graphic style can be beneficial for making a visualization meaningful in its corresponding context and usage scenario [Mac95].
[Figure 2 content placeholder. The example transition configuration shown (top left) for the feature class "Trees", using blend functions of the form ζ(s, t0, t1, t2, t3):

Style            LoD   Interp. Start   Interp. End   t0     t1     t2     t3
Photorealistic   0     Fragment        Fragment      0.00   0.00   0.35   0.55
Silhouettes      1     Fragment        Object        0.35   0.55   0.75   0.75
Generalized      2     Object          Object        0.75   0.75   1.00   1.00]
Figure 2:
Overview of the present system’s approach of LoA transitions for virtual 3D city models. (A) Feature classification
using semantic information, (B/C) blend value computation based on saliency metrics (multipass), (D) global transformation
of landmarks, (E) cartographic shading (multipass), (F) order-independent image blending and compositing. The transition
configurations are used by components B-F.
For instance, detailed presentations can
aid the exploration of local environments, whereas 2D maps
can be an effective medium for navigational purposes.
Systems like Google Maps or Bing Maps integrate differ-
ent graphic styles to serve users with a presentation suitable
for viewing maps or getting driving directions. Because these
systems provide high interactivity, a user’s task and context,
such as viewing situations and regions-of-interest (RoIs), can
be dynamically changed. Typically, a user is able to switch the
graphic style to display more or less detail in RoIs or context
regions to avoid cluttered information. However, concurrent
visualization leads to constant reorientations and additional
cognitive load [JD08] because of hard transitions between
the graphic style, level-of-detail (LoD), and view perspective.
Therefore, a great potential lies in the seamless combination
of various graphic styles into a single view to communicate
only relevant information, thus directing a viewer’s gaze by
salient stimuli attraction (saliency-guided visualization).
A seamless combination of generic 2D and 3D graphic
styles in a visualization pipeline by means of computer
graphics is yet to be achieved. One approach is to select
a level-of-abstraction (LoA) in a context-dependent way.
LoA refers to the spatial and thematic granularity at which
model contents are represented, and extends geometric ab-
straction (LoD) by visual abstraction (e.g., using shading ef-
fects) [GD09]. Relevant techniques use image blending or de-
formation (e.g., for focus+context visualization) to highlight
RoIs [CDF06, MDWK08, LTJD08, QWC09], but (1) do not
provide different LoAs for selected entities and prioritized
information, and (2) blend only two graphic styles, or are
domain-specific (e.g., routes [QWC09]). This motivates a
system approach that is designed to integrate multiple, cus-
tomized graphic styles in a context-dependent way.
This paper presents a concept and an implementation for
a system that enables different graphic styles, their seamless
integration, and parametrized transitions. The system selects
the LoA used to represent 3D city model entities in a task-
dependent, view-dependent, and resolution-dependent way
(Figure 1). Being based on shader technology and multi-pass
rendering, the system seamlessly integrates into common
visualization pipelines, providing context-dependent visual-
ization for novel visualization techniques and geoinformation
systems (GIS). The system can be further used to author
and visualize smooth LoA transitions to improve important
applications in geovirtual environments: in particular, map
viewing, wayfinding, and locating businesses. To summarize,
this work makes the following contributions:
1. A concept and an implementation for a system that enables seamless combinations of various 2D/3D graphic styles.
2. A model for the parametrization of transitions of graphic styles in a visualization pipeline (Figure 2).
3. Usage scenarios using cartography-oriented design to demonstrate the benefits of the system.
2. Related Work
Visualizing virtual 3D city models by LoA transitions is
related to several previous works in the domains of non-
photorealistic rendering, context-aware abstraction, and ani-
mated transitions and geomorphing.
2.1. Non-Photorealistic Rendering
A stylization of virtual 3D city models uses non-photorealistic
rendering techniques [GGCS11] to reduce visual complex-
ity. Döllner and Walther [DW03] visualized virtual 3D city
models with abstract graphics using procedurally generated,
stylized facades of 3D building models. The system in the
present paper uses edge enhancement in image-space [ND03]
and object-space [DW03] to achieve an expressive rendering
of 3D building models. In contrast to previous work, this
system enhances edges view-dependently to highlight enti-
ties of interest. Examples of the stylization of landscapes
are panorama maps of terrain models and their relief presen-
tation [BST09]. The system presented in this paper is able
to stylize terrain models with a cartography-oriented design
using slope lines and shadowed hachures [BSD04]. In addi-
tion, it is capable of parametrizing and combining these styles
within a single view for cartographic 3D city presentations.
2.2. Context-aware Abstraction
A context-aware abstraction has the potential to improve the
perception of important or prioritized information [SD04].
Major related work is found in focus+context and semantics-
based visualization, which aims to combine and parametrize
different graphic styles into a single view.
Focus+Context Visualization.
Highlighting important in-
formation in foci while maintaining a context is subject to
focus+context visualization. Applications of focus+context
visualization of virtual 3D city models are generaliza-
tion lenses [TGBD08] and cell-based geometric generaliza-
tion [GD09]. Highlighting can be further amplified by us-
ing semantic depth-of-field (SDOF) [KMH01]. Techniques
relevant to the approach of this work used stylized foci to
move the focus of a viewer to certain locations of an im-
age [SD04, CDF06, LTJD08]. The presented system extends
these works to (1) enable smooth transitions between levels of
structural abstraction with (2) a context-dependent selection
of LoAs using saliency metrics defined per feature type, and
(3) their dynamic parametrization at run-time. In addition, the
system provides cartography-oriented, thematic visualization
using different LoAs for selected model entities or RoIs.
Further relevant work visualized 3D geovirtual en-
vironments with high detail and applied deformation
techniques [MDWK08, DK09] or focus+context zooming [QWC09] to magnify RoIs and scale landmarks along
routes to increase visibility of important information. The sys-
tem presented here, by contrast, maintains cartographic rela-
tions in stylized foci and instead visualizes with cartography-
oriented design to reduce visual clutter in context regions
and support saliency-guided visualization. Because the sys-
tem seamlessly integrates into a visualization pipeline, it can
be used to implement these techniques to increase visibility
in RoIs. In addition, it generalizes context regions and can
therefore enhance focus+context zooming.
Semantics-based Visualization.
One approach to para-
metrize visualization of model contents is a semantics-based
image abstraction [YLL10]. To adapt a visualization to
model contents, CityGML [Kol09] introduced a semantics-
driven classification and exchange format that has been stan-
dardized by the OGC and is accepted by a growing number of
GIS software vendors. In the system presented here, semantic
information is derived from material and texture informa-
tion, or defined explicitly at run-time to enable a customized
parametrization of visual attributes. Brewer [Bre94] proposed
conventions for using colors in cartography-oriented design.
The system presented here uses qualitative color schemes to
represent entity types of city models.
2.3. Transitions for Level-of-Abstraction
Alpha blending, animation, and geomorphing are common
visualization techniques to enable smooth transitions between
graphic styles in context-aware abstraction.
Alpha Blending.
A well-known method for image composit-
ing is alpha blending [PD84], which is used in multiperspec-
tive rendering to enable a “quasi”-continuous transition be-
tween focus and context regions [MDWK08,LTJD08]. The
system presented in this paper uses cumulative alpha blending
to blend multiple RoIs with varying LoA.
Animation.
An alternative approach for smooth transitions
is to animate visual and structural changes. Previous work in
information visualization showed that animated transitions
ease orientation and guidance [RCM93,TMB02,HR07], and
aid the reconstruction of information spaces [BB99]. More-
over, animated transitions “improve graphical perception of
changes between statistical data graphics” [HR07], facilitate
understanding and increase engagement. In the system pre-
sented here, global deformations of 3D building models are
animated to enable predictable transitions between detailed
and cartographic visualization of landmarks [EPK05].
Morphing.
Morphing is a visual effect that enhances anima-
tions by smooth transitions between models with varying reso-
lution [LDSS99,SD96]. For instance, geomorphing was used
in continuous LoD of digital terrain models [Hop98,Wag03]
to provide smooth transitions and temporal coherence. How-
ever, morphing is based on assumptions about the geometric
representations and can only be applied to 3D objects with
a suitable geometry. By contrast, virtual 3D city models, in
general, cannot fulfill such assumptions. Moreover, morph-
ing of 3D city models has to take cartographic generaliza-
tion [Mac95] into account. Previous work on continuous LoD
exemplified how smooth, view-dependent transitions can be
achieved using collapsing as the pre-dominant generalization
operator [LKR96,Hop98].
[Figure 3 content placeholder: a feature ID/type table (e.g., Building, Street, Water Surface), landmark interest values w0 = 0.9, w1 = 0.2, w2 = 0.5, and texture stylization shown as original, quantized, and FDoG outputs with an alpha channel.]
Figure 3: Geospatial data processed by the system: multiresolution models (A), semantic information (B), landmarks with interest values w_i and best-views (C), stylized textures (D).
3. Method
An overview of the system presented in this paper is shown
in Figure 2. The input data consists of textured multiresolu-
tion 3D models (Figure 3A) and task-dependent transition
configurations. These models are typically defined as trian-
gular irregular networks (e.g., acquired by remote sensing,
procedural generation, or manual modeling). The 3D mod-
els are composed of features (i.e., abstractions of real-world
phenomena [ISO]) that are categorized using a feature taxon-
omy, and grouped according to their appearance (Section 3.1).
Based on this information, the system performs visual abstrac-
tion (Section 3.2):
by geometric transformation of features (LoD),
and by cartographic shading (LoA), such as waterlining,
signatures for green spaces, and abstract building facades.
To perform context-dependent visualization, features are
rendered multiple times using different graphic styles that
are continuously blended. To each graphic style of a certain
feature used for visualization, interest values are assigned
that are computed at rendering time using saliency metrics,
such as viewing distance, view angle, or region interest (Sec-
tion 3.2.1). These interest values are computed for all visible
and non-visible (i.e., occluded) fragments of a feature. After
a normalization, these are used as blend values to compose
the final image by order-independent image blending [PD84].
The remainder of this section describes the stages of the
transition pipeline and its architecture in detail.
3.1. Pre-processing Geospatial Data
To enable LoA transitions for complex scenes, such as virtual
3D city models, global information about a model’s features
is required: a feature type, location in the 3D scene, global
interest, and how visual attributes adapt to user interaction
(intelligence of objects [MEH99]). The pre-processing of
this information is explained in the remainder of this section.
Scenario Definition.
The system presented in this paper is
based on usage scenarios that define how a 3D scene is vi-
sualized for a given task and how graphics are dynamically
adapted to a user’s context. A scenario consists of a set of
Figure 4:
Continuous LoA for textured green spaces: near
distance (A), mid-range distance (B), far distance (C).
features with unique interest values and a set of transition
configurations that define rules and constraints for LoA tran-
sitions. To enable a parametrization of graphic styles for
each feature, information about a feature’s type (i.e., building,
green space, street, water, or terrain) and sub-type (e.g., conif-
erous forest, deciduous forest) is stored (Figure 3B). Thereby,
the system enables cartography-oriented design, leading to
improved perception of context information [JD08]. The re-
quired semantic information can be derived automatically
from texture and material information by grouping features
with similar appearance. Alternatively, semantic information
can be provided manually at run-time or as part of the model
data (e.g., CityGML [Kol09]).
Parts of the input data are best-view directions of build-
ings and sites to enable a cartographic visualization of land-
marks [EPK05,GASP08]. The definition of landmarks is
context-dependent [GASP08], using interest values defined
per feature (Figure 3C). The computation of these records is
not limited to pre-processing, but can be updated at runtime if
models are added or removed from a 3D scene, or if a user’s
interest in a specific feature type changes. Thereby, the system
maintains interactivity and context-dependent visualization.
Transition Configuration.
Transitions between graphic
styles are implemented by rendering features multiple times
and compositing the intermediate results using image blend-
ing [PD84]. This approach was chosen because it is generic
and simplifies the extension of the system with new graphic
styles. The sequence of graphic styles can be configured at
three levels:
- A scope defines if a graphic style applies to a certain interest in a feature.
- The transition is parametrized with a fragment-, object-, or group-based interpolation. For this purpose, the axis-aligned bounding box of each feature is stored.
- The parametrization of the LoD and LoA, such as by color, texture abstraction, and edge enhancement.
Thereby, the system enables a user-defined visual abstraction
of features. Figure 2(top left corner) exemplifies a transition
configuration for tree models.
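To make this configuration concrete, the following minimal C++ sketch (all type and member names are hypothetical and not taken from the authors' implementation) mirrors the data described above and transcribes the tree example from Figure 2: each transition configuration carries a graphic style, the LoD it applies to, the interpolation modes at its boundaries, and the four interest thresholds t0-t3 that delimit its fade-in and fade-out intervals.

#include <string>
#include <vector>

// Hypothetical data layout for a usage scenario (names are illustrative only).
enum class Interpolation { Fragment, Object, Group };

struct TransitionConfiguration {
    std::string   style;        // e.g., "Photorealistic", "Silhouettes", "Generalized"
    int           lod;          // LoD of the multiresolution model used by this style
    Interpolation interpStart;  // interpolation mode at the fade-in boundary
    Interpolation interpEnd;    // interpolation mode at the fade-out boundary
    float         t0, t1;       // fade-in interval over the normalized interest value
    float         t2, t3;       // fade-out interval over the normalized interest value
};

struct Feature {
    unsigned int id;                      // feature identifier
    std::string  type;                    // e.g., "Building", "Street", "Water Surface", "Tree"
    float        interest;                // global interest value w_i in [0,1]
    float        bestView[3];             // best-view direction for landmarks (optional)
    float        aabbMin[3], aabbMax[3];  // axis-aligned bounding box (focus point source)
};

// The tree example from Figure 2 (top left), transcribed:
const std::vector<TransitionConfiguration> treeConfigs = {
    { "Photorealistic", 0, Interpolation::Fragment, Interpolation::Fragment, 0.0f,  0.0f,  0.35f, 0.55f },
    { "Silhouettes",    1, Interpolation::Fragment, Interpolation::Object,   0.35f, 0.55f, 0.75f, 0.75f },
    { "Generalized",    2, Interpolation::Object,   Interpolation::Object,   0.75f, 0.75f, 1.0f,  1.0f  },
};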
Image Abstraction.
A bilateral and difference of Gaussians
(DoG) filter [KD08] is utilized to automatically stylize tex-
tures in a pre-processing stage. The input textures are first
converted to mip maps [Wil83], and then processed for each
level separately. This provides a continuous LoA of textured
Figure 5:
Exemplary saliency metrics defined by the system: view distance (A), view angle (B) and region interest (C). The debug
outputs show areas of a 3D scene to be visualized with high detail (black) and low detail (white), respectively.
surfaces (Figure 4), while using standard capabilities of graph-
ics hardware [Wil83]. In contrast to [KD08], the output of the
edge enhancement is not combined with the quantized color
output (Figure 3D). Instead, color and outline are blended
at rendering time for individual parametrization. The image
abstraction is performed once per model. For a 3D city model
(CityGML LoD3 [Kol09]) with 1,520 unique texture maps, each with an average resolution of 128 × 128 pixels, this process takes 20 minutes (using the first hardware configuration and the Chemnitz model described in Section 4).
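The following C++ sketch illustrates only the structure of this pre-processing step, namely that every mip level is stylized separately so that coarser levels correspond to coarser abstraction; the actual bilateral and DoG filters of [KD08] are passed in by the caller, and all names are illustrative assumptions rather than the authors' code.

#include <functional>
#include <vector>

struct Image { int width = 0, height = 0; std::vector<unsigned char> rgba; };

// Builds per-mip-level stylized color and edge textures. Color and edge outputs are
// kept apart so they can be blended individually at rendering time (Section 3.1).
void stylizeMipChain(const Image& level0,
                     const std::function<Image(const Image&)>& colorFilter,     // e.g., bilateral filter
                     const std::function<Image(const Image&)>& edgeFilter,      // e.g., DoG edges
                     const std::function<Image(const Image&)>& downsampleHalf,  // next mip level
                     std::vector<Image>& colorLevels,
                     std::vector<Image>& edgeLevels)
{
    Image level = level0;
    for (;;) {
        colorLevels.push_back(colorFilter(level));  // abstracted color for this level
        edgeLevels.push_back(edgeFilter(level));    // separate outline texture for this level
        if (level.width <= 1 && level.height <= 1) break;
        level = downsampleHalf(level);              // proceed to the next (coarser) mip level
    }
}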
3.2. Rendering
The rendering comprises the following steps: (1) computing
the interest of features for a user’s task and context, such
as defined by the viewing perspective and region interest,
(2) visual abstraction depending on the features’ interest, and
(3) image compositing.
3.2.1. Computing Interest using Saliency Metrics
The thematic categorization (Section 3.1) is used to stylize in-
formation with high interest (high salience) differently from
information with low interest (low salience). The present
system interprets a high interest in a feature by selecting
photorealistic graphics, and a low interest by selecting ab-
stract graphics for rendering respectively, where interest val-
ues in-between yield a mix of graphic styles. To identify
areas to be visualized with high detail, the interest value
for each visible feature is computed using saliency metrics,
such as view distance, vertical view-angle, and region interest
(Figure 5). Other metrics can be added as long as they are
normalized. For instance, view metrics can be defined by
normalized Euclidean distances and angles as is shown in
Figure 5. The region interest is represented by a distance map
that is computed using the jump-flooding algorithm [RT06],
and is used to visualize RoIs or routes through a virtual 3D
city model [TGBD08]. The computation of distance maps is
Figure 6: Exemplary transition states for tree models.
based on the assumption that the terrain in the locality of the
camera can be approximated by a plane (Figure 5). In contrast
to previous techniques [CDF06, MDWK08, LTJD08], the
system presented here enables multivariable transitions based
on interest values and saliency metrics, resulting in increased
flexibility. For instance, a weighted blending between view
distance and view angle can be defined to prevent high detail
presentations in bird’s eye views with a near viewing distance.
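As an illustration of such a weighted combination, the small C++ sketch below normalizes the three metrics of Figure 5 and mixes them with user-defined weights; the particular normalization and weighting are assumptions for illustration, not the authors' implementation.

#include <algorithm>

// Clamp helper for normalized metrics in [0,1].
static float clamp01(float x) { return std::min(1.0f, std::max(0.0f, x)); }

// Hypothetical combination of the saliency metrics from Figure 5. All inputs are
// normalized so the result can be used directly as an interest value.
float computeInterest(float viewDistance, float maxDistance,    // metric A: view distance
                      float verticalAngleRad,                    // metric B: vertical view angle
                      float roiDistance, float roiFalloff,       // metric C: region interest (distance map)
                      float wDist, float wAngle, float wRoi)     // user-defined weights, summing to 1
{
    float sDist  = 1.0f - clamp01(viewDistance / maxDistance);               // near features are more interesting
    float sAngle = 1.0f - clamp01(verticalAngleRad / (0.5f * 3.14159265f));  // steep bird's-eye views reduce interest
    float sRoi   = 1.0f - clamp01(roiDistance / roiFalloff);                 // interest decays away from the RoI
    return clamp01(wDist * sDist + wAngle * sAngle + wRoi * sRoi);
}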
A transition between graphic styles is based on image
blending [PD84]. Blend values are computed for each transi-
tion configuration, and features with matching feature types
and LoD. This procedure is performed during scene graph
traversal on the CPU (Algorithm 1), and by using linear or
smooth blend functions on the GPU (Figure 2C). The va-
lidity range of each transition configuration determines the
blend value of a graphic style for a feature of certain interest.
Depending on how the threshold values for two successive
transition configurations are defined, two general cases can
be identified (Figure 6):
1. Smooth transition.
A smooth transition from one graphic
style to another can be defined by the fade-out interval and
fade-in interval of two successive transition configurations.
A smooth transition between two graphic styles is enabled
Figure 7: Transformation of landmarks: scaling (A), rotation to best-view (B), flattening (C), billboard transformation (D).
Algorithm 1: Extended scene graph traversal (CPU)
Input: A usage scenario S with features {F_i} and transition configurations {T_j}
forall t ∈ {T_j} do
    forall f ∈ {F_i} do
        if feature type of f and t match and LoD of f and t match and f is inside validity range of t (= not culled) then
            Render f and apply graphic style of t
        end
    end
end
using the smoothstep function within these two intervals
(Figure 6A-C).
2. Hard transition.
For certain configurations, discrete LoA
transitions are appropriate – for instance, if two graphic
styles lead to distorted color tones [GW07]. Hard transi-
tions are enabled if the overlap of a fade-in and fade-out
interval is set to zero (Figure 6C-D).
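A minimal C++ sketch of a blend function ζ(s, t0, t1, t2, t3) consistent with the validity ranges described above (the naming follows Figure 2; the implementation itself is an assumption): the weight of a graphic style fades in over [t0, t1] and fades out over [t2, t3], and collapsing an interval to a single threshold yields the hard-transition case.

#include <algorithm>

// GLSL-style smoothstep, reproduced here so the sketch is self-contained.
static float smoothstep01(float edge0, float edge1, float x) {
    if (edge0 == edge1) return x > edge0 ? 1.0f : 0.0f;  // hard transition at a single threshold
    float t = std::min(1.0f, std::max(0.0f, (x - edge0) / (edge1 - edge0)));
    return t * t * (3.0f - 2.0f * t);
}

// Blend value of one transition configuration for a feature with interest s in [0,1].
// [t0,t1] is the fade-in interval, [t2,t3] the fade-out interval (see Figure 2).
float blendValue(float s, float t0, float t1, float t2, float t3) {
    return smoothstep01(t0, t1, s) * (1.0f - smoothstep01(t2, t3, s));
}

With the tree configuration of Figure 2, the fade-out interval of one style coincides with the fade-in interval of the next, so the blend values of successive styles sum to one for every interest value.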
For the computation of blend values, three interpolation
modes are distinguished: (1) a fragment-based interpolation
for smooth transitions within 3D features, (2) an object-based
interpolation with blend values applied uniformly to a feature
by using the center of a feature’s axis-aligned bounding box
as focus point, and (3) a group-based interpolation with a
shared focus point among features – for instance, to replace
tree instances by a coarse geometry representing woodland.
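The following illustrative helper (assumed names, not the authors' code) shows where the interest value would be evaluated for each interpolation mode: at the fragment itself, at the center of the feature's axis-aligned bounding box, or at a focus point shared by a feature group.

#include <array>

enum class Interpolation { Fragment, Object, Group };

using Vec3 = std::array<float, 3>;

static Vec3 center(const Vec3& mn, const Vec3& mx) {
    return { 0.5f * (mn[0] + mx[0]), 0.5f * (mn[1] + mx[1]), 0.5f * (mn[2] + mx[2]) };
}

// Position at which the saliency metrics are evaluated for a fragment of a feature.
Vec3 interestSamplePosition(Interpolation mode,
                            const Vec3& fragmentPos,                              // world-space fragment position
                            const Vec3& featureAabbMin, const Vec3& featureAabbMax,
                            const Vec3& groupAabbMin,   const Vec3& groupAabbMax)
{
    switch (mode) {
        case Interpolation::Fragment: return fragmentPos;                            // smooth within a feature
        case Interpolation::Object:   return center(featureAabbMin, featureAabbMax); // uniform per feature
        case Interpolation::Group:    return center(groupAabbMin,   groupAabbMax);   // shared among features
    }
    return fragmentPos;
}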
3.2.2. Global Transformations
Certain buildings and sites within a virtual environment serve
as landmarks (i.e., reference points with a characteristic ap-
pearance or location, or user’s interest). The visualization of
landmarks is essential for localization, orientation and nav-
igation [GASP08]. To this end, the system presented here
provides a map-like visualization of landmarks, using global
deformation applied prior to rasterization. The flattened land-
marks are rotated to face the user’s viewing direction ac-
cording to their best-views. For buildings, best-views often
face the street or main entrance and are approximated using
viewpoint entropy [VFSH04]. To obtain a deformed land-
mark in world space coordinates, the following four steps are
performed on a per-vertex basis during rendering:
1. Landmark scaling. Landmarks are scaled to improve their
visibility in far view distances (Figure 7A-B). A weighted
smoothstep function is used to compute the scale factor. To
avoid visual clutter, landmarks are smoothly faded out
according to their interest values.
2. Rotation to best-view. Landmarks are pitched so that their
best-view direction horizontally coincides with the virtual
camera’s view direction (Figure 7B-C).
3. Object flattening. Landmarks are flattened in depth; that
is, their vertices are projected to the plane facing the hori-
zontal view direction (Figure 7C-D).
4. Cylindrical billboard transformation. The flattened land-
mark is yawed by the camera elevation to vertically face
the view direction (Figure 7D-E).
The system linearly blends original and transformed vertices
based on a feature’s interest using shader technology (GPU).
Thus, further transformation techniques can be seamlessly
integrated, such as global deformation to increase visibility
in RoIs [MDWK08, DK09, QWC09] or terrain geomorphing
[Wag03]. Furthermore, the system smoothly shrinks non-
landmarks and translates them below the terrain to remove
extraneous information for map-like visualization (Figure 1).
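The sketch below summarizes these four steps and the final interest-based blend as plain C++ (a vertex shader would perform the same arithmetic). It assumes a y-up coordinate system and simplifies the rotations to yaw and pitch angles; all names and the exact parameterization are assumptions rather than the authors' shader code.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Rotation of p around the vertical (y) axis by 'angle' radians.
static Vec3 rotateY(Vec3 p, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}

Vec3 deformLandmarkVertex(Vec3 vertex,
                          Vec3 landmarkCenter,    // focus point of the landmark
                          float scaleFactor,      // step 1: distance-dependent scale factor
                          float bestViewYaw,      // yaw of the landmark's best-view direction
                          float cameraYaw,        // horizontal view direction of the camera
                          float cameraElevation,  // vertical view angle of the camera
                          float interest)         // feature interest value in [0,1]
{
    Vec3 local = sub(vertex, landmarkCenter);

    // 1. Landmark scaling: enlarge the landmark so it stays visible at far distances.
    local = mul(local, scaleFactor);

    // 2. Rotation to best-view: yaw the landmark so its best-view direction
    //    horizontally coincides with the camera's view direction.
    local = rotateY(local, cameraYaw - bestViewYaw);

    // 3. Object flattening: collapse depth along the (now view-aligned) z axis.
    local.z = 0.0f;

    // 4. Cylindrical billboard transformation: pitch by the camera elevation so the
    //    flattened landmark also faces the viewer vertically.
    float c = std::cos(cameraElevation), s = std::sin(cameraElevation);
    local = { local.x, c * local.y - s * local.z, s * local.y + c * local.z };

    // Linear blend of original and deformed vertex by interest; here a low interest
    // (abstract, map-like context) selects the deformed representation.
    Vec3 deformed = add(landmarkCenter, local);
    return add(mul(vertex, interest), mul(deformed, 1.0f - interest));
}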
3.2.3. Cartographic 3D City Presentations
To demonstrate the system’s ability to integrate customized
2D and 3D graphic styles, several abstraction techniques are
authored to achieve thematic visualization in context regions.
The thematic categorization (of Section 3.1) is used to stylize
features by non-photorealistic rendering techniques:
Building models. Stylized texture maps, as discussed in Sec-
tion 3.1, are used to provide a continuous LoA of building
facades. With increasing distance or viewing angle, subtle
details are smoothly coarsened.
Street networks. Street networks are stylized using carto-
graphic color schemes [Bre94]. Important edges are enhanced
using an image-space edge enhancement technique [ND03].
In general, the system seamlessly integrates street labels using
distance maps that are blended on graphics hardware. For this,
the authors enhanced Green’s [Gre07] rendering technique to
align and scale street labels in a view-dependent way.
Water surfaces. Water surfaces are visualized using a novel
waterlining shading technique that is based on distance maps
[RT06]. The technique visualizes waterlines with non-linear
intervals to propagate distance information (Figure 8A).
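One plausible way to realize such non-linear waterline intervals from a distance map is sketched below; this is an assumed formulation for illustration, not the authors' shader: intervals grow geometrically with the distance to the shoreline, so waterlines become sparser farther away.

#include <cmath>

// Returns 1.0 where a waterline should be drawn, 0.0 elsewhere.
// distanceToShore comes from the jump-flooding distance map [RT06].
float waterlineIntensity(float distanceToShore,
                         float firstInterval,  // width of the interval nearest the shore
                         float growth,         // > 1: each interval is 'growth' times wider than the last
                         float lineWidth)      // line thickness as a fraction of an interval
{
    // Map the distance into a "ring index" space in which the intervals become uniform:
    // the boundary of ring k lies at firstInterval * (growth^k - 1) / (growth - 1).
    float u = std::log(1.0f + distanceToShore * (growth - 1.0f) / firstInterval)
            / std::log(growth);
    float toNearestRing = std::fabs(u - std::round(u));
    return toNearestRing < lineWidth ? 1.0f : 0.0f;
}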
Green spaces and trees. As for buildings, stylized tex-
ture maps are used for continuous LoA (Figure 4). In
addition, signatures are visualized using texture bombing
Figure 8: Stylization techniques for water surfaces (A), green spaces (B), tree models (C) and digital terrain models (D).
and parametrized to represent tree species (Figure 8B).
For this, the authors developed an enhanced variant of
Glanville’s [Gla04] texture bombing algorithm that ensures
signatures always face the viewing direction.
Tree models are stylized using view-dependent enhance-
ment of silhouettes [DS00]. Technically, point sprites were
used and the differences of depth values thresholded to en-
hance silhouettes in image-space (Figure 8C).
Digital terrain models. Digital terrain models are visual-
ized as relief presentations using loose lines, slope lines, and
shadowed hachures [BSD04] (Figure 8D).
Generic models. Generic stylization is applied to miscella-
neous features, such as city furniture. A combination of styl-
ized texture maps, edge enhancement in image-space [ND03]
and object-space [DW03], and thematic colorization [Bre94]
is used.
3.2.4. Image Compositing
The final image is composed using alpha blending [PD84].
Technically, a stencil routed A-buffer [MB07] is used, since
it is able to buffer fragments in depth at real-time frame rates.
Because certain features have a high complexity in depth (e.g.,
foliage of trees), the system provides the capability to render
features off-screen into a Ping-Pong buffer. It comprises two
render textures, uses a depth test, and switches its render
texture for successive graphic styles. For image compositing,
the output of both textures is blended and combined with the
information of the A-buffer.
To improve the rendering performance, the A-buffer sort-
ing [MB07] is enhanced using a dynamic image composi-
tion (Algorithm 2). Fragments with depth values d = 1 are
excluded so that only routed samples are blended (see Algo-
rithm 2, lines 4-7). The system then improves shape and depth
perception by unsharp masking the depth buffer [LCD06]. To
reduce ringing artifacts, the method is enhanced by locally
weighting depth differences according to the alpha values.
4. Applications and Evaluation
The system presented here was implemented using C++,
OpenGL, and GLSL. Two platforms were used for performance evaluation: (1) an Intel® Xeon™ 4× 3.06 GHz with 6 GByte RAM and an NVidia® GTX 560 Ti GPU with 2 GByte VRAM, and (2) an Intel® Core2Duo™ 2× 3.0 GHz with 4 GByte RAM and an NVidia® GTX 460 GPU with 1 GByte VRAM. To show the effectiveness of the system, usage scenarios were authored for the virtual 3D city model of Chemnitz (Germany) with 458 features, 223,743 vertices, 176,601 faces, and 1,520 texture maps (Figure 1 and Figure 9A-D); another 3D city model (MegaCity, see Table 1) with 532 objects, 63,630 vertices, and 40,993 faces (Figure 9E); and a virtual landscape model of Mount St. Helens (Figure 9F).
Algorithm 2: Image compositing using shaders (GPU)
Input: Buffered color, alpha and depth values per pixel
Output: Blended color values per pixel
 1  begin
 2      color ← background color;
 3      count ← #SAMPLES;
 4      for n ← 0 to #SAMPLES − 1 do
 5          depths[n] ← fetchABufferDepth(n);
 6          count ← count − ⌊depths[n]⌋;
 7      end
 8      for n ← 0 to count do
 9          colors[n] ← fetchABufferColor(n);
10      end
11      blendPingPongFBO(colors[count], depths[count]);
12      count ← count + 1;
13      SortColorByDepth(colors, depths, count);
14      for n ← count − 1 downto 0 do
15          color ← mix(color.rgb, colors[n].rgb, colors[n].a);
16      end
17  end
To improve the rendering performance, view-frustum and back-face culling were enabled, and geometry instancing was applied for the vegetation objects. For order-independent blending, an A-buffer with 8 samples was used.
4.1. Usage Scenarios
This section demonstrates the benefits of the system in ap-
plications of 3D geovirtual environments, such as map view-
ing, business locating, navigation, and wayfinding. Further,
thematic visualization was used to provide a cartographic
presentation (Section 3.2.3).
Besides the manual selection of the LoA (Figure 9A), view-
distance-based transitions were authored to visualize features
near the virtual camera at high detail, and distant features in
an abstracted way (Figure 9B). This approach can be of in-
terest in highly dynamic and ubiquitous information systems.
For instance, mobile navigation systems could use this visual-
ization to present places close to a viewer’s position with high
detail for local orientation assistance, and places far away
with less detail and emphasized landmarks for navigational
Figure 9:
Exemplary applications authored with the visualization system: manual LoA selection (A), distance-based transitions
(B), route highlighting (C), and circular RoIs (D-F).
assistance. Thereby, the exploration of complex virtual envi-
ronments can be improved, in general, since the viewer is not
required to switch between a 3D perspective view and a map
view, as is common in map view services like Google Maps.
Because of emphasized features in the foreground, it further
facilitates direction guidance of a viewer's gaze [CDF06]
while preserving context information in the background.
Combined with the vertical viewing angle as saliency met-
ric (Section 3.2.1), detailed and map-like visualization can be
seamlessly blended. The present system used a map-like visu-
alization to highlight thematic information (e.g., landmarks,
labeled roads, green spaces, and water surfaces), leveraging
the capability of the system to implement cartographic color
schemes [Bre94] and non-photorealistic rendering techniques
(Section 3.2.3). This is useful for 3D car navigation systems
because only the most relevant information is communicated
in areas of high information compression (e.g., in the back-
ground of perspective views [JD08]). Further, orientation as-
sistance is provided by seamlessly integrating 3D illustrations
of landmarks. Furthermore, the driving speed could be used as
metric to dynamically select the LoA used for visualization.
Wayfinding is an important task for virtual environments
and can be improved by the proposed system using saliency
metrics for LoA transitions (Section 3.2.1). Features can be
highlighted along routes to attract and direct a viewer’s fo-
cus – for instance, as a navigational aid to guide a user to
a destination or RoI (Figure 9C). Within this application
domain, the system can provide improvements over previ-
ous techniques designed for occlusion-free route visualiza-
tion [QWC09, MDWK08] because these neglect informa-
tion abstraction in context regions. In addition, the system
parametrizes LoA transitions at run-time. This can be used to
Table 1: Performance evaluation measured in frames-per-second for three virtual environments and screen resolutions. Each cell lists the results for the two evaluation platforms (platform 1 / platform 2, Section 4).

Model            1920 × 1080    1280 × 720     800 × 600
Chemnitz         5.7 / 5.0      5.8 / 5.2      6.1 / 5.3
MegaCity         14.2 / 13.1    17.2 / 16.6    20.4 / 20.1
Mt. St. Helens   46.5 / 36.8    72.2 / 57.6    87.8 / 78.2
highlight selected information of database queries for analy-
sis purposes. Moreover, there is potential to use the system
for the visualization of (time-based) model variants. Further-
more, the system can be used to visualize RoIs as blueprints
and seamlessly combine these with high-detail graphic styles
in the context area (Figure 9E). Applications designed for ur-
ban planning could use this visualization to highlight complex
structures or architectural features of 3D building models.
4.2. Saliency-guided Visualization
To demonstrate the advantage of saliency-guided visualiza-
tion as provided by the system, the authors compared saliency
maps of the system’s output and homogeneous high-detail
visualization typical for mass-market systems (e.g., Google
Earth). As can be seen in Figure 10, visual saliency of homo-
geneous graphic styles is distributed across focus and context
regions. By contrast, visualization of the system presented in
this paper yields concentrated high saliency within a circular
RoI and for single landmarks in the context area. In case of
saliency-guided route visualization, the saliency follows the
route due to high frequencies in color, orientation, and depth.
4.3. Performance Evaluation
The performance tests were conducted for the aforementioned platforms, virtual environments, and usage scenarios. The test results in Table 1 show that the system provides interactive frame rates in HD resolution. It was observed that the performance depends on the total number of transition configurations defined for a usage scenario. For instance, a view-distance-based transition performs, on average, 27.5% slower than a visualization with a homogeneous graphic style. Further, it was observed that the system is fill-limited, with a performance increase of 75% (on average) when using a resolution of 800 × 600 pixels instead of 1920 × 1080 pixels. For the city of Chemnitz, it was observed that the system is CPU-limited because the rendering engine is limited to a single-threaded traversal of the scene graph. Moreover, the results for Mount St. Helens show that the system can handle 3D scenes with high visual complexity in full HD resolution at real-time frame rates.
[Figure 10 content placeholder: columns show a detailed visualization (no highlighting), a highlighted circular region-of-interest, and a highlighted route; rows show the rendered output and the corresponding saliency map, with a saliency scale from 0.0 to 1.0.]
Figure 10:
Examples showing a circular RoI and a highlighted route within the city of Chemnitz, compared to a detailed version.
The bottom row shows the respective saliency maps using the algorithm of graph-based visual saliency [HKP07].
The memory consumption (VRAM) of our system mainly depends on the A-buffer (32 bit color, 24 bit depth, and 8 bit stencil values per sample), the Ping-Pong buffer (32 bit color and 32 bit depth values per pixel), and the geometry buffer (32 bit edge map, 32 bit ID map, and 32 bit normal map):

    M = (W · H · (2N + 7)) / 262,144 MB,

where W and H refer to the screen resolution in pixels, and N to the number of samples used by the A-buffer.
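As a quick sanity check of this formula, the snippet below evaluates it for the configuration used in the evaluation (full HD resolution with an 8-sample A-buffer); the result is derived from the formula only, not a measurement.

#include <cstdio>

// Evaluates M = W * H * (2N + 7) / 262,144 MB from Section 4.3.
int main() {
    const double W = 1920.0, H = 1080.0;  // screen resolution in pixels
    const double N = 8.0;                 // A-buffer samples, as used in Section 4
    const double megabytes = W * H * (2.0 * N + 7.0) / 262144.0;
    std::printf("Estimated VRAM consumption: %.1f MB\n", megabytes);  // ~181.9 MB
    return 0;
}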
4.4. Lessons Learned
The system is currently used in the authors’ research group as
a platform for implementing novel visualization techniques
designed for 3D geovirtual environments. During develop-
ment and authoring of usage scenarios, it was observed that
the parametrization of the system can be cumbersome. Moti-
vated by this, functionality to serialize transition configura-
tions, feature classifications, and parametrizations of graphic
styles was added to the system, to be able to maintain de-
signs in libraries and easily deploy usage scenarios. It was
further observed that development of new graphic styles can
become time consuming. Therefore, a shader editor was inte-
grated into the system, which facilitates modification of ver-
tex, geometry, and fragment shaders at run-time. Moreover,
the performance evaluation indicates that a single-threaded
rendering engine has an impact on the system's interactivity.
Therefore, the authors plan to port the system to a rendering
engine that supports multi-threading. Finally, the approach
to buffer fragments in depth is memory consuming and re-
quires sufficient samples, or additional rendering passes using
occlusion queries [MB07] to avoid visual artifacts. The pro-
posed Ping-Pong buffer only resolves this issue if features
are rendered opaquely.
5. Conclusions and Future Work
This paper presents a concept and an implementation of a sys-
tem that visualizes virtual 3D city models with parametrized
level-of-abstraction transitions for a seamless combination
of various graphic styles in a single view. The system pro-
vides interactive, saliency-guided visualization by coupling
saliency metrics with cartographic rendering techniques. It
is extensible by custom 2D and 3D graphic styles, integrates
into a visualization pipeline, and can be used to improve ex-
isting visualization techniques (e.g., based on focus+context
zooming [QWC09] and deformation [MDWK08]). Usage
scenarios based on the system’s capability for thematic vi-
sualization demonstrate the system’s benefits for typical ap-
plications of geovirtual environments – in particular, map
viewing, business locating, navigation, and wayfinding.
Since the system operates in 3D space, it can be used
to enhance the x-ray volumetric lens effect [VCWP96] for
indoor visualization. Visualization on mobile devices also
has high potential to benefit from the system because level-of-
abstraction can reduce information compression on displays
with limited size. Finally, saliency maps of the presented
results show that the system is able to draw attention to
important information, though this requires further validation.
Therefore, the authors plan to conduct a user study to confirm
this hypothesis.
Acknowledgments
The authors would like to thank the anonymous reviewers for
their valuable comments.
References
[BB99] Bederson B. B., Boltman A.: Does animation help users build mental maps of spatial information? In Proc. IEEE InfoVis (1999), pp. 28–35.
[Bre94] Brewer C. A.: Color Use Guidelines for Mapping and Visualization. Elsevier Science, 1994, ch. 7, pp. 123–147.
[BSD04] Buchin K., Sousa M. C., Döllner J., Samavati F., Walther M.: Illustrating terrains using direction of slope and lighting. In ICA Mountain Cartography Workshop (2004), pp. 259–269.
[BST09] Bratkova M., Shirley P., Thompson W. B.: Artistic rendering of mountainous terrain. ACM Trans. Graph. 28 (2009), 102:1–102:17.
[CDF06] Cole F., DeCarlo D., Finkelstein A., Kin K., Morley K., Santella A.: Directing gaze in 3D models with stylized focus. In Proc. EGSR (2006), pp. 377–387.
[DK09] Degener P., Klein R.: A variational approach for automatic generation of panoramic maps. ACM Trans. Graph. 28 (2009), 2:1–2:14.
[DS00] Deussen O., Strothotte T.: Computer-generated pen-and-ink illustration of trees. In Proc. ACM SIGGRAPH (2000), pp. 13–18.
[DW03] Döllner J., Walther M.: Real-time expressive rendering of city models. In Proc. IEEE InfoVis (2003), pp. 245–250.
[EPK05] Elias B., Paelke V., Kuhnt S.: Concepts for the cartographic visualization of landmarks. In Proc. LBS and Telecartography (2005), pp. 149–155.
[GASP08] Grabler F., Agrawala M., Sumner R. W., Pauly M.: Automatic generation of tourist maps. In Proc. ACM SIGGRAPH (2008), pp. 100:1–100:11.
[GD09] Glander T., Döllner J.: Abstract representations for interactive visualization of virtual 3D city models. Computers, Environment and Urban Systems 33, 5 (2009), 375–387.
[GGCS11] Gooch A., Gooch B., Costa-Sousa M.: Illustrative Visualization: The Art and Science of Non-Photorealistic Rendering. A.K. Peters, 2011.
[Gla04] Glanville R. S.: Texture bombing. In GPU Gems. Addison-Wesley, 2004, pp. 323–338.
[Gre07] Green C.: Improved alpha-tested magnification for vector textures and special effects. In ACM SIGGRAPH Courses (2007), pp. 9–18.
[GW07] Giegl M., Wimmer M.: Unpopping: Solving the image-space blend problem for smooth discrete LOD transitions. Comput. Graph. Forum 26, 1 (2007), 46–49.
[HKP07] Harel J., Koch C., Perona P.: Graph-based visual saliency. Advances in Neural Information Processing Systems 19 (2007), 545–552.
[Hop98] Hoppe H.: Smooth view-dependent level-of-detail control and its application to terrain rendering. In Proc. IEEE Vis (1998), pp. 35–42.
[HR07] Heer J., Robertson G.: Animated transitions in statistical data graphics. In Proc. IEEE Vis (2007), pp. 1240–1247.
[ISO] ISO 19101:2002: Geographic information - Reference model. Tech. rep., ISO, Geneva, Switzerland.
[JD08] Jobst M., Döllner J.: 3D city model visualization with cartography-oriented design. In Proc. REAL CORP (2008), pp. 507–516.
[KD08] Kyprianidis J. E., Döllner J.: Image abstraction by structure adaptive filtering. In Proc. EG UK TPCG (2008), pp. 51–58.
[KMH01] Kosara R., Miksch S., Hauser H.: Semantic depth of field. In Proc. IEEE InfoVis (2001), pp. 97–104.
[Kol09] Kolbe T. H.: Representing and exchanging 3D city models with CityGML. In 3D GeoInformation Sciences (2009), pp. 15–31.
[LCD06] Luft T., Colditz C., Deussen O.: Image enhancement by unsharp masking the depth buffer. In Proc. ACM SIGGRAPH (2006), pp. 1206–1213.
[LDSS99] Lee A. W. F., Dobkin D., Sweldens W., Schröder P.: Multiresolution mesh morphing. In Proc. ACM SIGGRAPH (1999), pp. 343–350.
[LKR96] Lindstrom P., Koller D., Ribarsky W., Hodges L. F., Faust N., Turner G. A.: Real-time continuous level of detail rendering of height fields. In Proc. ACM SIGGRAPH (1996), pp. 109–118.
[LTJD08] Lorenz H., Trapp M., Jobst M., Döllner J.: Interactive multi-perspective views of virtual 3D landscape and city models. In Proc. AGILE (2008), pp. 301–321.
[Mac95] MacEachren A.: How Maps Work. Guilford Press, 1995.
[MB07] Myers K., Bavoil L.: Stencil routed A-Buffer. In ACM SIGGRAPH Sketches (2007).
[MDWK08] Möser S., Degener P., Wahl R., Klein R.: Context aware terrain visualization for wayfinding and navigation. Comput. Graph. Forum 27 (2008), 1853–1860.
[MEH99] MacEachren A. M., Edsall R., Haug D., Baxter R., Otto G., Masters R., Fuhrmann S., Qian L.: Virtual environments for geographic visualization: Potential and challenges. In Proc. ACM NPIVM (1999), pp. 35–40.
[ND03] Nienhaus M., Döllner J.: Edge-enhancement - An algorithm for real-time non-photorealistic rendering. Journal of WSCG 11, 2 (2003), 346–353.
[PD84] Porter T., Duff T.: Compositing digital images. In Proc. ACM SIGGRAPH (1984), pp. 253–259.
[QWC09] Qu H., Wang H., Cui W., Wu Y., Chan M.-Y.: Focus+context route zooming and information overlay in 3D urban environments. IEEE Trans. Vis. Comput. Graphics 15 (2009), 1547–1554.
[RCM93] Robertson G. G., Card S. K., Mackinlay J. D.: Information visualization using 3D interactive animation. Commun. ACM 36 (1993), 57–71.
[RT06] Rong G., Tan T.-S.: Jump flooding in GPU with applications to Voronoi diagram and distance transform. In Proc. ACM I3D (2006), pp. 109–116.
[SD96] Seitz S. M., Dyer C. R.: View morphing. In Proc. ACM SIGGRAPH (1996), pp. 21–30.
[SD04] Santella A., DeCarlo D.: Visual interest and NPR: an evaluation and manifesto. In Proc. NPAR (2004), pp. 71–150.
[TGBD08] Trapp M., Glander T., Buchholz H., Döllner J.: 3D generalization lenses for interactive focus+context visualization of virtual city models. In Proc. IEEE IV (2008), pp. 356–361.
[TMB02] Tversky B., Morrison J. B., Betrancourt M.: Animation: can it facilitate? Int. Journal of Human-Computer Studies 57 (2002), 247–262.
[VCWP96] Viega J., Conway M. J., Williams G., Pausch R.: 3D magic lenses. In Proc. ACM UIST (1996), pp. 51–58.
[VFSH04] Vázquez P.-P., Feixas M., Sbert M., Heidrich W.: Automatic view selection using viewpoint entropy and its application to image-based modelling. Comput. Graph. Forum 22, 4 (2004), 689–700.
[Wag03] Wagner D.: Terrain geomorphing in the vertex shader. In ShaderX2. Wordware Publishing, 2003.
[Wil83] Williams L.: Pyramidal parametrics. In Proc. ACM SIGGRAPH (1983), vol. 17, pp. 1–11.
[YLL10] Yang M., Lin S., Luo P., Lin L., Chao H.: Semantics-driven portrait cartoon stylization. In Proc. IEEE ICIP (2010), pp. 1805–1808.
© 2012 The Author(s)
© 2012 The Eurographics Association and Blackwell Publishing Ltd.
... Buildings are generated by extruding the building footprints and the roofs by using a set a shape descriptors. However, these studies did not aim for photorealism in favor of more efficient data models and adopting a strategy of "Level of Abstraction transition" (Semmo et al., 2012), using stylised visualization. Visualizations of the landscape and vegetation is representative and uses a random distribution of adjustable density. ...
Conference Paper
Full-text available
An early-stage development of a Digital Twin (DT) in Virtual Reality (VR) is presented, aiming for civic engagement in a new urbandevelopment located in an area that is a forest today. The area is presently used for recreation. For the developer, it is important bothto communicate how the new development will affect the forest and allow for feedback from the citizen. High quality DT models aretime-consuming to generate, especially for VR. Current model generation methods require the model developer to manually designthe virtual environment. Furthermore, they are not scalable when multiple scenarios are required as a project progresses. This studyaimed to create an automated, procedural workflow to generate DT models and visualize large-scale data in VR with a focus onexisting green structures as a basis for participatory approaches. Two versions of the VR prototype were developed in closecooperation with the urban developer and evaluated in two user tests. A procedural workflow was developed for generating DTmodels and integrated into the VR application. For the green structures, efforts focused on the vegetation, such as realisticrepresentation and placement of different types of trees and bushes. Only navigation functions were enabled in the first user test withpractitioners (9 participants). Interactive functions were enabled in the second user test with pupils (age 15, 9 participants). In bothtests, the researchers observed the participants and carried out short reflective interviews. The user test evaluation focussed on theperception of the vegetation, general perception of the VR environment, interaction, and navigation. The results show that theworkflow is effective, and the users appreciate green structure representations in VR environments in both user tests. Based on theworkflow, similar scenes can be created for any location in Sweden. Future development needs to concentrate on the refinement ofbuildings and information content. A challenge will be balancing the level of detail for communication with residents.
... Using Google Earth worked well to create aerial images with compelling background entourage that were easy to edit, but the specificity of source LIDAR and aerial photographs also often resulted in glaring geometry conflicts (for example, trees, which emerge from new buildings). Applying image filters could produce simplified color palettes; sophisticated edge detection is needed for intelligent urban model simplification (see Döllner 2007, Semmo 2012). ...
... Several aspects of the urban projects or the tasks to accomplish could lead the adopted representation to either hinder or enhance the participatory practices (Hayek, 2011;Boér, Çöltekin, & Clarke, 2013;Chassin et al., 2019). Hybrid representations are also explored to balance the limits introduced by the two level-of-abstraction alternatives (Brasebin et al., 2016;Lokka et al., 2018;Salter et al., 2009;Semmo et al., 2012). Hence, the local authorities often struggle to select the appropriate type of representation for the design of their approach. ...
Article
Full-text available
The adoption of technology in urban participatory planning with tools such as Virtual Geographic Environments (VGE) promises a broader engagement of urban dwellers, which should ultimately lead to the creation of better cities. However, the authorities and urban experts show hesitancy in endorsing these tools in their practices. Indeed, several parameters must be wisely considered in the design of VGE; if misjudged, their impact could be damaging for the participatory approach and the related urban project. The objective of this study is to engage participants (N = 107) with common tasks conducted in participatory sessions, in order to evaluate the users’ performance when manipulating a VGE. We aimed at assessing three crucial parameters: (1) the VGE representation, (2) the participants’ idiosyncrasies, and (3) the nature of the VGE format. The results demonstrate that the parameters did not affect the same aspect of users’ performance in terms of time, inputs, and correctness. The VGE representation impacts only the time needed to fulfill a task. The participants’ idiosyncrasies, namely age, gender and frequency of 3D use also induce an alteration in time, but spatial abilities seem to impact all characteristics of users’ performance, including correctness. Lastly, the nature of the VGE format significantly alters the time and correctness of users interactions. The results of this study highlight concerns about the inadequacies of the current VGE practices in participatory sessions. Moreover, we suggest guidelines to improve the design of VGE, which could enhance urban participatory planning processes, in order to create better cities.
... Although different factors contribute to developing UBEM workflows, virtual 3D models are mostly considered as a starting point for simulations. Virtual 3D city models are currently being used in many different applications and have become general-purpose tools for storing, exchanging and distributing geo-spatial information [63]. For computing the energy performance of the buildings, 3D geometrical data models are generally used as an input for building geometries, years of construction, building usages, roof types, etc. ...
Article
Full-text available
Urban Building Energy Modelling (UBEM) requires adequate geometrical information to represent buildings in a 3D digital form. However, open data models usually lack essential information, such as building geometries, due to a lower granularity in available data. For heating demand simulations, this scarcity impacts the energy predictions and, thereby, questioning existing simulation workflows. In this paper, the authors present an open-source CityGML LoD Transformation (CityLDT) tool for upscaling or downscaling geometries of 3D spatial CityGML building models. With the current support of LoD0–2, this paper presents the adapted methodology and developed algorithms for transformations. Using the presented tool, the authors transform open CityGML datasets and conduct heating demand simulations in Modelica to validate the geometric processing of transformed building models.
... Recent research on LODs focuses on the formalization of 3D city modelling (Biljecki et al. 2014;, presenting features in virtual environments (VE) through the selective use of more detailed models (Semmo et al., 2012;Lokka and Çöltekin 2019), and revising the LOD concept in future CityGML 3.0 specifications (Kutzner, Chaturvedi, and Kolbe 2020). Rautenbach, Coetzee, and Çöltekin (2016) noted that users can struggle to perform tasks such as identifying or differentiating between features in the LOD2 model, whereas objects with higher LODs can guide attention by standing out from other objects. ...
Article
Full-text available
This paper investigates user preferences and behaviour associated with 2D and 3D modes of urban representation within a novel Topographic Immersive Virtual Environment (TopoIVE) created from official 1:10,000 mapping. Sixty participants were divided into two groups: the first were given a navigational task within a simulated city and the second were given the freedom to explore it. A Head-Mounted Display (HMD) Virtual Reality (VR) app allowed participants to switch between 2D and 3D representations of buildings with a remote controller and their use of these modes during the experiment was recorded. Participants performed mental rotation tests before entering the TopoIVE and were interviewed afterwards about their experiences using the app. The results indicate that participants preferred the 3D mode of representation overall, although preference for the 2D mode was slightly higher amongst those undertaking the navigational task, and reveal that different wayfinding solutions were adopted by participants according to their gender. Overall, the findings suggest that users exploit different aspects of 2D and 3D modes of visualization in their wayfinding strategy, regardless of their task. The potential to combine the functionality of 2D and 3D modes therefore offers substantial opportunities for the development of immersive virtual reality products derived from topographic datasets.
... It differentiates five LODs with increasing complexity: i) LOD0, a two-and-a-half-dimensional digital terrain model with or without an aerial image draped over it; ii) LOD1, a block model of prismatic buildings with flat roofs; iii) LOD2, a block model with differentiated roof structures and thematically distinct surfaces; iv) LOD3, which adds detailed exterior architecture of buildings, such as walls, doors, and windows, as well as vegetation and transportation objects; and v) LOD4, which includes interior details of the buildings. Different LODs may be used within a model to highlight certain features, for example by varying the LOD with viewing distance so that more detail is shown as the viewer approaches parts of the model [46,48]. ...
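A minimal sketch of such distance-dependent LOD selection is given below in C++. The thresholds are purely illustrative assumptions; real systems would tune them per data set, screen resolution, and task, and would typically also hysteresis-filter the result to avoid popping.

```cpp
// Hypothetical distance thresholds (in metres) for choosing a CityGML-style LOD
// per building, depending on its distance to the viewer.
enum class LOD { LOD0, LOD1, LOD2, LOD3, LOD4 };

LOD selectLOD(double distanceToViewer) {
    if (distanceToViewer > 2000.0) return LOD::LOD0;  // terrain only
    if (distanceToViewer > 1000.0) return LOD::LOD1;  // prismatic blocks
    if (distanceToViewer >  300.0) return LOD::LOD2;  // differentiated roofs
    if (distanceToViewer >   50.0) return LOD::LOD3;  // facade detail
    return LOD::LOD4;                                  // interior detail
}
```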
... The "nameability" of features appears to be important in recall tasks: It has been shown that not only objects but also colors that people can 14 name are better recalled (Brewer, 1996;Brown, Lindsey, & Guckes, 2011;Özgen, 2004). Possibly motivated by such studies, there have been efforts to increase realism levels in VR in navigation-related domains Semmo, Trapp, Kyprianidis, & Döllner, 2012) so that people can easily match real-world structures and landmarks to the ones on the displays (Partala et al., 2010). ...
Article
With technological advancements, it has become notably easier to create virtual environments (VEs) depicting the real world with high fidelity and realism. These VEs offer some attractive use cases for navigation studies looking into spatial cognition. However, such photorealistic VEs, while attractive, may complicate the route learning process, as they may overwhelm users with the amount of information they contain. Understanding how much and what kind of photorealistic information is relevant to people at which point on their route, and while they are learning a route, can help define how to design virtual environments that better support spatial learning. Among the users who may be overwhelmed by too much information, older adults represent a special interest group for two key reasons: 1) the number of people over 65 years old is expected to increase to 1.5 billion by 2050 (World Health Organization, 2011); 2) cognitive abilities decline as people age (Park et al., 2002). The ability to independently navigate in the real world is an important aspect of human well-being. This fact has many socio-economic implications, yet age-related cognitive decline creates difficulties for older people in learning routes in unfamiliar environments, limiting their independence. This thesis takes a user-centered approach to the design of visualizations for assisting all people, and specifically older adults, in learning routes while navigating in a VE. Specifically, the objectives of this thesis are threefold, addressing the basic dimensions of:
❖ Visualization type, as expressed by different levels of realism: evaluate how much and what kind of photorealistic information should be depicted, and where it should be represented within a VE, in a navigational context. The thesis proposes design guidelines for VEs that assist users in effectively encoding visuospatial information.
❖ Use context, as expressed by route recall in the short and long term: identify the implications that different information types (visual, spatial, and visuospatial) have for short- and long-term route recall with the use of 3D VE designs varying in levels of realism.
❖ User characteristics, as expressed by group differences related to aging, spatial abilities, and memory capacity: better understand how visuospatial information is encoded and decoded by people in different age groups, and of different spatial and memory abilities, particularly while learning a route in 3D VE designs varying in levels of realism.
In this project, the methodology used for investigating the topics outlined above was a set of controlled lab experiments nested within one. Within this experiment, participants' recall accuracy for various visual, spatial, and visuospatial elements on the route was evaluated using three visualization types that varied in their amount of photorealism. These included an Abstract, a Realistic, and a Mixed VE (see Figure 2), for a number of route recall tasks relevant to navigation. The Mixed VE is termed "mixed" because it includes elements from both the Abstract and the Realistic VEs, balancing the amount of realism in a deliberate manner (elaborated in Section 3.5.2). This feature is developed within this thesis. The tested recall tasks were differentiated based on the type of information being assessed: visual, spatial, and visuospatial (elaborated in Section 3.6.1).
These tasks were performed by the participants both immediately after experiencing a drive-through of a route in the three VEs and a week after that, thus addressing short- and long-term memory, respectively. Participants were counterbalanced for age, gender, and expertise, while their spatial abilities and visuospatial memory capacity were controlled for with standardized psychological tests. The results of the experiments highlight the importance of all three investigated dimensions for successful route learning with VEs. More specifically, statistically significant differences in participants' recall accuracy were observed for: 1) the visualization type, highlighting the value of balancing the amount of photorealistic information presented in VEs while also demonstrating the positive and negative effects of abstraction and realism in VEs on route learning; 2) the recall type, highlighting nuances and peculiarities across the recall of visual, spatial, and visuospatial information in the short and long term; and 3) the user characteristics, as expressed by age differences but also by spatial abilities and visuospatial memory capacity, highlighting the importance of considering the user type, i.e., for whom the visualization is customized. The original and unique results of this work advance knowledge in GIScience, particularly in geovisualization, from the perspective of the "cognitive design" of visualizations in two distinct ways: (i) understanding the effects that visual realism, as presented in VEs, has on route learning, specifically for people of different age groups and with different spatial abilities and memory capacity, and (ii) proposing empirically validated visualization design guidelines for the use of photorealism in VEs for efficient recall of visuospatial information during route learning, not only for short-term but also for long-term recall in younger and older adults.
... We argue here for a closer methodological relationship between the 'abstraction' paradigm and the 'photo-realism' paradigm, in order to take advantage of both for the visual integration of data. Various research works propose managing continuous transitions within a single visualization, for instance between levels of abstraction, according to the distance from the image center or from rendered objects, to scene depth, across rendering styles, or through scales (Semmo et al., 2012; Trapp et al., 2015; Dumont et al., 2017). Multiplexing tools have also been investigated in order to focus on parts of the visualization or on particular objects within it (Pietriga et al., 2010; Pindat et al., 2012), opening a promising lead for the visualization of several data types. ...
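To make the idea of a continuous transition concrete, the C++ sketch below computes a per-pixel blend weight from the distance to a focus point in the image and mixes an abstract and a photorealistic colour accordingly. It is our own minimal illustration, not the shader code of any of the cited systems; the falloff radii and all names are assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Smooth interpolation helper (same behaviour as GLSL smoothstep).
static float smoothstep(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Blend an abstract and a photorealistic colour for one pixel, depending on
// its distance to a user-defined focus point (all coordinates in pixels).
// Inside innerRadius the realistic style dominates; beyond outerRadius the
// abstract style is used; in between, the styles are mixed smoothly.
Color blendLevelOfAbstraction(Color abstract, Color realistic,
                              float px, float py,
                              float focusX, float focusY,
                              float innerRadius, float outerRadius) {
    float d = std::hypot(px - focusX, py - focusY);
    float w = smoothstep(innerRadius, outerRadius, d);   // 0 = focus, 1 = context
    return { realistic.r + w * (abstract.r - realistic.r),
             realistic.g + w * (abstract.g - realistic.g),
             realistic.b + w * (abstract.b - realistic.b) };
}
```

The same weight could equally be driven by scene depth, object priority, or thematic data instead of screen-space distance.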
Conference Paper
The purpose of this position paper is to emphasize the remaining challenges for geovisualization in an evolving context of data, users, and spatio-temporal problems to be solved in an interdisciplinary approach. Geovisualization is the visualization of spatio-temporal data, phenomena, and dynamics on Earth, based on user interaction with heterogeneous data and on users' capacities of perception and cognition. This implies bringing together knowledge, concepts, and models from related scientific visualization domains for a better understanding, interpretation, and analysis of spatio-temporal phenomena on Earth. We currently face several types of complexity regarding spaces, data, models, and tools. Our position, based on past and ongoing works as first proofs of concept, is to model a multidimensional exploration of the territory, integrating explorations of uses, styles, interaction, and immersion capacities, up to various 'points of view' on the represented spatio-temporal phenomenon.
... Various other space-variant visualization approaches have been proposed in which, rather than varying the scale or level of detail, the levels of realism or generalization are varied across the display to support focus + context interactions with the data. These approaches aim to navigate smoothly between data and its representation at one scale (e.g., Hoarau and Christophe 2017), between different levels of generalization across scales (e.g., Dumont et al. 2018), or between different rendering styles (Boér et al. 2013; Semmo and Döllner 2014; Semmo et al. 2012). Mixed levels of realism have been proposed for regular maps used for data exploration purposes (Jenny et al. 2012) as well as for VR. ...
Article
In cartography, good practices are clearly established, whereas they are not yet clearly defined for 3D (geographical) renderings. This article details initial research and an agenda aiming to provide a style knowledge database that offers possibilities to classify renderings according to graphical patterns. One application is to provide a method for generating relevant transitions between two different styles to ease navigation in 3D geographical environments.
Article
This paper investigates and discusses concepts and techniques to enhance spatial knowledge transmission of 3D city model representations based on cartography-oriented design. 3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, networks, and navigation. For example, planning tools analyze visibility characteristics of inner urban areas and allow planners to estimate whether a minimum amount of light is needed in densely covered areas to avoid the "Gotham City effect", i.e., when these areas become too dark due to shadowing. For radio network planning, 3D city models are required to configure and optimize wireless network services, i.e., to calculate and analyze network coverage and connectivity features. 3D city model visualization often lacks effectiveness and expressiveness. For example, if we analyze common 3D views, large areas of the graphical presentations contain useless or even "misused" pixels with respect to information content and transfer (e.g., pixels that represent several hundreds of buildings at once or pixels that show sky). Typical avatar perspectives frequently show too many details at once and do not distinguish between areas in focus and surrounding areas. In this case, the perceptual and cognitive quality of the visualized virtual 3D city model could be enhanced by cartographic models and semiotic adaptations. For example, we can integrate strongly perceivable landmarks as reference marks to the real world, which establish more effective presentations and support efficient interaction.
Article
Landmarks are an indispensable part of maps in mobile cartography applications. In this paper we propose a design concept for the visualization of building landmarks in mobile maps. We consider four categories of building landmarks: well-known shops (trade chains), shops referenced by their type, buildings with a specific name or function, and buildings described by characteristic visual aspects, and we examine how each of these groups is most effectively visualized. Possible visualizations differ in their abstraction levels, ranging from photorealistic image presentations, through drawings, sketches, and icons, to abstract symbols and words. As a guideline for designers, we provide a matrix representation of the design space from which possible and recommended presentation styles for each building type can be identified.
Conference Paper
We present a new method for user controlled morphing of two homeomorphic triangle meshes of arbitrary topology. In particular we focus on the problem of establishing a correspondence map between source and target meshes. Our method employs the MAPS algorithm to parameterize both meshes over simple base domains and an additional harmonic map bringing the latter into correspondence. To control the mapping the user specifies any number of feature pairs, which control the parameterizations produced by the MAPS algorithm. Additional controls are provided through a direct manipulation interface allowing the user to tune the mapping between the base domains. We give several examples of æsthetically pleasing morphs which can be created in this manner with little user input. Additionally we demonstrate examples of temporal and spatial control over the morph.
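Establishing the correspondence map is the hard part addressed by that paper; once it exists, displaying the morph reduces to blending corresponding vertices. The C++ sketch below shows only this final interpolation stage under the assumption of a precomputed one-to-one vertex correspondence; it is a simplification of our own, not the MAPS-based method itself.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Interpolate two vertex sets that are already in one-to-one correspondence
// (vertex i of the source maps to vertex i of the target) for a morph
// parameter t in [0, 1]: t = 0 yields the source, t = 1 the target shape.
std::vector<Vec3> morphVertices(const std::vector<Vec3>& source,
                                const std::vector<Vec3>& target, float t) {
    assert(source.size() == target.size());
    std::vector<Vec3> result(source.size());
    for (std::size_t i = 0; i < source.size(); ++i) {
        result[i] = { source[i].x + t * (target[i].x - source[i].x),
                      source[i].y + t * (target[i].y - source[i].y),
                      source[i].z + t * (target[i].z - source[i].z) };
    }
    return result;
}
```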
Article
A simple and efficient method is presented which allows improved rendering of glyphs composed of curved and linear elements. A distance field is generated from a high-resolution image and then stored in a channel of a lower-resolution texture. In the simplest case, this texture can then be rendered simply by using the alpha-testing and alpha-thresholding features of modern GPUs, without a custom shader. This allows the technique to be used on even the lowest-end 3D graphics hardware. With the use of programmable shading, the technique is extended to perform various special-effect renderings, including soft edges, outlining, drop shadows, multi-colored images, and sharp corners.
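The core of the technique is turning a sampled distance value into opacity. The following C++ sketch approximates what would normally run per fragment on the GPU; the 0.5 iso-value convention and the edge width are assumptions made for illustration, not parameters prescribed by the paper.

```cpp
#include <algorithm>

// Convert a sampled distance value (in [0, 1], with 0.5 on the glyph outline)
// into an opacity. With width = 0 this reproduces plain alpha thresholding;
// a small positive width gives the soft, anti-aliased edge mentioned above.
float glyphAlpha(float sampledDistance, float width = 0.03f) {
    if (width <= 0.0f)                        // hard alpha test
        return sampledDistance >= 0.5f ? 1.0f : 0.0f;
    float t = std::clamp((sampledDistance - (0.5f - width)) / (2.0f * width),
                         0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);         // smoothstep(0.5 - width, 0.5 + width, d)
}
```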
Article
We have developed a stencil routing algorithm for implementing a GPU-accelerated A-buffer, using a multisample texture to store a vector of fragments per pixel. First, all the fragments are captured per pixel in rasterization order. Second, a fullscreen shader pass sorts the fragments using a bitonic sort. At this point, the sorted fragments can be blended arbitrarily to implement various types of algorithms, such as order-independent transparency or layered depth image generation. Since we handle only 8 fragments per pass, we developed a method for detecting overflow, so we can do additional passes to capture more fragments.
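For illustration, the C++ sketch below sorts a fixed-size per-pixel fragment list by depth with a bitonic sorting network, mirroring on the CPU what the fullscreen shader pass does for the 8 captured fragments. The fragment layout and function names are our assumptions, not the paper's implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <utility>

struct Fragment {
    float    depth;   // window-space depth
    uint32_t color;   // packed RGBA
};

// Sort a fixed-size fragment list front-to-back with a bitonic sorting
// network. N must be a power of two (e.g. 8, matching the per-pass limit).
template <std::size_t N>
void bitonicSortByDepth(std::array<Fragment, N>& frags) {
    for (std::size_t k = 2; k <= N; k <<= 1) {
        for (std::size_t j = k >> 1; j > 0; j >>= 1) {
            for (std::size_t i = 0; i < N; ++i) {
                std::size_t l = i ^ j;
                if (l <= i) continue;
                bool ascending  = (i & k) == 0;
                bool outOfOrder = ascending ? frags[i].depth > frags[l].depth
                                            : frags[i].depth < frags[l].depth;
                if (outOfOrder) std::swap(frags[i], frags[l]);
            }
        }
    }
}
// After sorting, the fragments can be blended back-to-front (or front-to-back)
// to resolve order-independent transparency for that pixel.
```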
Article
Landscape illustrations and cartographic maps depict terrain surfaces in a qualitatively effective way. In this paper, we present a framework of line drawing techniques for automatically reproducing traditional illustrations of terrain by means of slope lines and tonal variations. Given a digital elevation model, surface measures are computed, and slope lines of the terrain are hierarchically traced and stored. At run-time, slope lines are rendered by stylized procedural and texture-based strokes. The stroke density of the final image is determined according to the light intensities. Using a texture-based approach, the line drawing pipeline is decoupled from the rendering of the terrain geometry. Our system operates on terrain data at interactive rates while maintaining frame-to-frame coherence.
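As a rough illustration of what tracing a slope line from a digital elevation model involves, the C++ sketch below repeatedly steps in the direction of steepest descent using central-difference gradients. It is a minimal version of our own with a regular grid, a fixed step length, and no hierarchy or stylization, so it should not be read as the paper's tracing algorithm.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Minimal regular-grid elevation model (hypothetical layout).
struct DEM {
    int width, height;
    double cellSize;                 // ground distance between samples
    std::vector<double> z;           // row-major heights
    double at(int i, int j) const { return z[j * width + i]; }
};

// Central-difference gradient of the height field at grid position (i, j).
static Vec2 gradient(const DEM& dem, int i, int j) {
    double gx = (dem.at(i + 1, j) - dem.at(i - 1, j)) / (2.0 * dem.cellSize);
    double gy = (dem.at(i, j + 1) - dem.at(i, j - 1)) / (2.0 * dem.cellSize);
    return { gx, gy };
}

// Trace a slope line downhill from (x, y) in grid coordinates by following the
// negative gradient with a fixed step; stops at flat areas or the grid border.
std::vector<Vec2> traceSlopeLine(const DEM& dem, double x, double y,
                                 double step = 0.5, int maxSteps = 500) {
    std::vector<Vec2> line;
    for (int s = 0; s < maxSteps; ++s) {
        int i = static_cast<int>(x), j = static_cast<int>(y);
        if (i < 1 || j < 1 || i >= dem.width - 1 || j >= dem.height - 1) break;
        line.push_back({ x, y });
        Vec2 g = gradient(dem, i, j);
        double len = std::hypot(g.x, g.y);
        if (len < 1e-9) break;                // flat area: stop tracing
        x -= step * g.x / len;                 // move downhill
        y -= step * g.y / len;
    }
    return line;
}
```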
Chapter
I have presented a comprehensive set of color scheme types and corresponding guidelines for the use of hue and lightness within each scheme (Fig. 7.2). The schemes are matched with parallel conceptualizations of data. Examining data with different schemes may reveal different characteristics of distributions and their interrelationships. Software that allows interactive switching between scheme types will facilitate accurate and thorough understanding through data visualization. Random or perceptually ill-fitting assignments of colors to combinations of two variables remain a possible "solution" to mapping problems. This solution, however, will mask the interrelationships that the map-maker should be attempting to illuminate by mapping the variables together. Better results would be produced by comparing maps of the individual variables (displayed with suitable schemes). The characteristics of both distributions will be rendered indecipherable by failing to organize hue, lightness, and saturation in a way that corresponds to logical orderings within the mapped variables. A disorderly jumble of colors produces a map that is little more than a spatially arranged look-up table. The goal of this chapter is to help you do better than that by using color with skill.
Article
This paper is a first attempt at analyzing the history and the state of the art of computer graphics (CG) education in Russia. Since this collection of information has just begun, the resulting picture is necessarily incomplete.