Presentation (PDF available)

3D Generalization Lenses for Interactive Focus+Context Visualization of Virtual City Models

Abstract

Presentation of Research Paper "3D Generalization Lenses for Interactive Focus+Context Visualization of Virtual City Models"
[Glander, ACMGIS 2007, ICA WS 2008]
Pipeline overview (from the slide diagram):

Preprocessing Phase:
- LOA = generalization(CM): derive the generalization data LOA from the city model CM
- VDS = createVDS(S): create volumetric depth sprites VDS from the solid lens volumes S

Rendering Phase:
- M = map(VDS, LOA): map each volumetric depth sprite to a level of abstraction
- FNC = (M, C): combine the mapping M with the context representation C
- render(FNC): render the focus+context description to the output image
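The two-phase data flow above can be sketched as plain function composition. All function bodies below are hypothetical stand-ins for the paper's GPU-based steps, shown only to make the order of the phases concrete:

```python
# Minimal sketch of the two-phase pipeline from the diagram above.
# generalization(), create_vds(), map_lenses(), and render() are
# hypothetical stand-ins, not the paper's implementation.

def generalization(cm):
    # Derive generalization data (levels of abstraction) from the city model.
    return [f"LOA{i}" for i in range(1, 10)]

def create_vds(lens_volumes):
    # Encode each solid lens volume as a volumetric depth sprite.
    return [f"VDS({s})" for s in lens_volumes]

def map_lenses(vds, loa):
    # Pair each depth sprite with one level of abstraction.
    return list(zip(vds, loa))

def render(fnc):
    mapping, context = fnc
    return {"context": context, "lenses": mapping}

# Preprocessing phase
cm = "city model"
loa = generalization(cm)
vds = create_vds(["sphere lens", "box lens"])

# Rendering phase
m = map_lenses(vds, loa)
fnc = (m, cm)           # focus+context description FNC = (M, C)
image = render(fnc)
```

Note that the two preprocessing steps are independent of each other; only the mapping step in the rendering phase brings VDS and LOA together.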
Cell-based generalization takes as input the weighted streets (9 levels), the building geometry, and the facade images of the city model CM. The preprocessing yields the levels of abstraction LOA = (LOA1, …, LOA9).
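As a rough illustration of cell-based generalization, buildings can be grouped into cells and each cell replaced by one representative block. The cell assignment by a simple id and the mean-height aggregate are assumptions for illustration, not the paper's algorithm:

```python
# Sketch of cell-based generalization: buildings are grouped into cells
# (here simply by a cell id) and each cell is replaced by one block whose
# height is the mean of its buildings. Cell assignment and the choice of
# aggregate are assumptions, not the paper's method.

from collections import defaultdict

def generalize_cells(buildings):
    # buildings: list of (cell_id, height)
    cells = defaultdict(list)
    for cell_id, height in buildings:
        cells[cell_id].append(height)
    # one representative block height per cell
    return {c: sum(h) / len(h) for c, h in cells.items()}

blocks = generalize_cells([("A", 10.0), ("A", 14.0), ("B", 30.0)])
# blocks == {"A": 12.0, "B": 30.0}
```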
The mapping M pairs each volumetric depth sprite with a level of abstraction; together with the context representation C it forms the focus+context description FNC:

M = { (VDS_i, LOA_i) | i = 0 … n }
FNC = (M, C)
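Constructing M and FNC is a plain pairing of the two preprocessing results; a minimal sketch, with placeholder strings standing in for the actual sprite and geometry data:

```python
# The mapping M pairs each volumetric depth sprite VDS_i with a level of
# abstraction LOA_i; FNC combines M with the context representation C.
# The string placeholders stand in for actual sprite/geometry data.
vds = ["VDS0", "VDS1"]
loa = ["LOA0", "LOA1"]
M = [(vds[i], loa[i]) for i in range(len(vds))]   # M = {(VDS_i, LOA_i) | i = 0..n}
C = "LOA2"                                        # context, e.g. a coarse LOA
FNC = (M, C)
```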
render(FNC)
{
    ∀ Mi ∈ M {
        VDS ← VDSi ∈ Mi
        setActive(VDS, true)
        setParity(VDS, false)
    }
    renderGeometry(C)
    i = ||M||
    while(i > 0)
    {
        VDS ← VDSi ∈ Mi
        LOA ← LOAi ∈ Mi
        if(!culling(VDS))
        {
            setParity(VDS, true)
            renderGeometry(LOA)
            setActive(VDS, false)
        }
        i = i - 1
    }
}

Example configuration with two lenses and context C = LOA2:
FNC = ((VDS0, LOA0), (VDS1, LOA1), LOA2)
... 3) Shape Generation: The shape of a close-up can be generated or derived based on selected features of the 3D DTM (e.g., geometry or texture features) [15]. ...
Conference Paper
Full-text available
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data varying with respect to geometrical scale, mapping, appearance, as well as level-of-detail and level-of-abstraction used. The presented 3D close-up approach enables in-situ comparison of multiple regions of interest simultaneously. We present a GPU-based rendering technique for the image synthesis of multiple close-ups in real-time.
Article
Full-text available
Thematic maps are a common tool to visualize semantic data with a spatial reference. Combining thematic data with a geometric representation of their natural reference frame aids the viewer’s ability in gaining an overview, as well as perceiving patterns with respect to location; however, as the amount of data for visualization continues to increase, problems such as information overload and visual clutter impede perception, requiring data aggregation and level-of-detail visualization techniques. While existing aggregation techniques for thematic data operate in a 2D reference frame (i.e., map), we present two aggregation techniques for 3D spatial and spatiotemporal data mapped onto virtual city models that hierarchically aggregate thematic data in real time during rendering to support on-the-fly and on-demand level-of-detail generation. An object-based technique performs aggregation based on scene-specific objects and their hierarchy to facilitate per-object analysis, while the scene-based technique aggregates data solely based on spatial locations, thus supporting visual analysis of data with arbitrary reference geometry. Both techniques can apply different aggregation functions (mean, minimum, and maximum) for ordinal, interval, and ratio-scaled data and can be easily extended with additional functions. Our implementation utilizes the programmable graphics pipeline and requires suitably encoded data, i.e., textures or vertex attributes. We demonstrate the application of both techniques using real-world datasets, including solar potential analyses and the propagation of pressure waves in a virtual city model.
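The object-based aggregation described above can be illustrated as propagating thematic values up an object hierarchy with a chosen aggregation function. The tree layout and dictionary representation below are assumptions for illustration; the actual implementation runs on the graphics pipeline:

```python
# Hedged sketch of per-object thematic aggregation: leaf values are
# aggregated up an object hierarchy with a chosen function (mean, min,
# or max). The tree encoding here is an assumption; the paper's version
# runs on the GPU with suitably encoded textures or vertex attributes.

def aggregate(tree, values, func):
    # tree: {node: [children]}, values: leaf -> number
    def visit(node):
        children = tree.get(node, [])
        if not children:
            return values[node]
        return func([visit(c) for c in children])
    return visit("root")

tree = {"root": ["district"], "district": ["b1", "b2"]}
vals = {"b1": 10.0, "b2": 30.0}
mean = aggregate(tree, vals, lambda xs: sum(xs) / len(xs))
peak = aggregate(tree, vals, max)
# mean == 20.0, peak == 30.0
```

Swapping the aggregation function is enough to switch between mean, minimum, and maximum, which matches the extensibility the abstract describes.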
Conference Paper
Full-text available
Panorama maps are stylized paintings of terrain often seen at tourist destinations. They are difficult to create since they are both artistic and grounded in real geographic data. In this paper we present techniques for rendering real-world data in the style of Heinrich Berann's panorama maps in a real-time application. We analyse several of Berann's paintings to identify the artistic elements used. We use this analysis to form algorithms that mimic the panorama map style, focusing on replicating the terrain deformation, distorted projection, terrain colouring, tree brush strokes, water rendering, and atmospheric scattering. In our approach we use freely available digital earth data to render interactive panorama maps without needing further design work.
Article
Many approaches have been developed to visualize 3D city scenes, most of which exhibit the visualization results in a uniform rendering style. This paper presents an expressive rendering approach for visualizing large-scale 3D city scenes with various rendering styles integrated in a seamless way. Each view is a combination of photorealistic and non-photorealistic rendering that highlights the information interesting to the users and de-emphasizes what is less important. At run-time, the users are allowed to specify their locations of interest interactively. Our system automatically computes the salience of each location and illustrates the entire scene with emphasis on the areas of interest. The GPU-based implementation enables interactive real-time performance. Our implementation of a system demonstrates benefits in many applications such as 3D GPS navigation and tourist information. We have performed a pilot user evaluation of how users access information in 3D city scenes.
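The salience-driven emphasis described above can be illustrated as a per-location blend between a photorealistic and a non-photorealistic color. The distance-based falloff below is an assumption for illustration, not the paper's salience model:

```python
# Sketch of salience-driven emphasis: a location's color blends a
# photorealistic and a non-photorealistic (NPR) style by a salience
# weight. The linear distance falloff is an assumption, not the
# paper's salience computation.

def salience(pos, interest, radius=50.0):
    d = abs(pos - interest)
    return max(0.0, 1.0 - d / radius)

def shade(photoreal, npr, s):
    # emphasized areas stay photorealistic, the rest fades to NPR
    return tuple(s * p + (1.0 - s) * n for p, n in zip(photoreal, npr))

color = shade((1.0, 0.8, 0.6), (0.5, 0.5, 0.5), salience(10.0, 10.0))
# at the point of interest, salience is 1.0 and the color stays photorealistic
```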
Article
The preceding chapter gave an introduction to the topic of information visualization, emphasized the abstract nature of the underlying data, and introduced typical goals such as exploration and analysis. The focus was on presenting solutions to problems such as: Is there a relationship between the attributes of a multidimensional product dataset? How can I quickly get an overview of the hierarchical organization of a company? Who are the friends of my friends in an online social network, and what interests do they have? The central concern was how abstract data can be represented. Depending on the task as well as the type and dimensionality of the data, suitable visual encodings must be found, employing space and time as well as visualization attributes such as color, shape, orientation, or connection.
Conference Paper
Full-text available
Today’s fast growth of both the number and complexity of digital 3D models results in a number of research challenges. Amongst others, the efficient presentation of, and interaction with, such complex models is essential. It therefore has become more and more important to provide the user with a smart visual interface that presents all the information required in the context of the task at hand in a comprehensive way. In this paper, we present a two-stage concept for the task-oriented exploration of 3D polygonal meshes. An authoring tool uses a combination of automatic mesh segmentation and manual enrichment with semantic information for association with specified exploration goals. This information is then used at runtime to adapt the model’s presentation to the task at hand. The exploration of the enriched model can further be supported by interactive tools. 3D lenses are discussed as an example.