Presentation (PDF available)

3D Generalization Lenses for Interactive Focus+Context Visualization of Virtual City Models


Abstract

Presentation of the research paper "3D Generalization Lenses for Interactive Focus+Context Visualization of Virtual City Models".
[Glander, ACMGIS 2007, ICA WS 2008]
[Slide figures: system pipeline, split into a preprocessing phase and a rendering phase.]

Preprocessing Phase:
- The city model CM is generalized into levels of abstraction: LOA = generalization(CM). The cell-based generalization takes weighted streets (9 levels), building geometry, and facade images as input and produces the levels of abstraction LOA = (LOA_1, ..., LOA_9).
- The solid lens volumes S are converted into volumetric depth sprites: VDS = createVDS(S).

Rendering Phase:
- Mapping: M = map(VDS, LOA) pairs volumetric depth sprites with levels of abstraction; together with the context geometry C, this yields the focus+context specification FNC = (M, C).
- Rendering: render(FNC) produces the output image.
The focus+context specification is defined as

FNC = (M, C), M = { M_i = (VDS_i, LOA_i) | i = 0, ..., n }, with VDS_i ∈ VDS and LOA_i ∈ LOA,

i.e., a set of lens-to-abstraction mappings M plus the context geometry C.
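To make the data model concrete, here is a minimal C++ sketch of the FNC structure; all type and member names (VolumetricDepthSprite, MappingEntry, FocusAndContext) are hypothetical, since the presentation only defines the symbols FNC, M, C, VDS, and LOA.

#include <vector>

// Volumetric depth sprite: a depth-encoded solid lens volume (VDS_i).
struct VolumetricDepthSprite { };

// Level of abstraction: generalized city-model geometry (LOA_i).
struct LevelOfAbstraction { };

// One mapping entry M_i = (VDS_i, LOA_i): the level of abstraction
// to be shown inside the corresponding lens volume.
struct MappingEntry {
    VolumetricDepthSprite vds;
    LevelOfAbstraction    loa;
};

// FNC = (M, C): the mapping M plus the context geometry C that is
// rendered outside all lens volumes.
struct FocusAndContext {
    std::vector<MappingEntry> M;   // M = { M_0, ..., M_n }
    LevelOfAbstraction        C;   // context geometry
};

int main() {
    FocusAndContext fnc;   // e.g., two lenses plus context geometry
    fnc.M.resize(2);
    return 0;
}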
render(FNC)
{
    ∀ M_i ∈ M {
        VDS ← VDS_i ∈ M_i
        setActive(VDS, true)
        setParity(VDS, false)
    }
    renderGeometry(C)
    i = ||M||
    while(i > 0)
    {
        VDS ← VDS_i ∈ M_i
        LOA ← LOA_i ∈ M_i
        if(!culling(VDS))
        {
            setParity(VDS, true)
            renderGeometry(LOA)
            setActive(VDS, false)
        }
        i = i - 1
    }
}

Example configuration: FNC = ((VDS_0, LOA_0), (VDS_1, LOA_1), LOA_2), i.e., two lenses with their assigned levels of abstraction, and LOA_2 serving as the context geometry C.
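As a rough illustration of the two-pass algorithm above, the following is a runnable C++ sketch. setActive, setParity, culling, and renderGeometry are stand-in stubs for the GPU-side operations (the actual technique evaluates the volumetric depth sprites in shaders), so all signatures here are assumptions; the reverse iteration mirrors the while loop counting i down from ||M||.

#include <cstdio>
#include <vector>

struct VDS { int id; };   // stand-in for a volumetric depth sprite
struct LOA { int id; };   // stand-in for a level of abstraction

struct Entry { VDS vds; LOA loa; };              // M_i = (VDS_i, LOA_i)
struct FNC   { std::vector<Entry> M; LOA C; };   // FNC = (M, C)

// Stubs for the GPU-side operations of the pseudocode.
void setActive(const VDS& v, bool on) { std::printf("setActive(VDS_%d, %d)\n", v.id, on); }
void setParity(const VDS& v, bool p)  { std::printf("setParity(VDS_%d, %d)\n", v.id, p); }
bool culling(const VDS&)              { return false; }   // assume every lens is visible
void renderGeometry(const LOA& g)     { std::printf("renderGeometry(LOA_%d)\n", g.id); }

void render(const FNC& fnc)
{
    // Pass 1: activate all lens volumes (parity = false: keep fragments
    // outside the lenses) and render the context geometry C.
    for (const Entry& m : fnc.M) {
        setActive(m.vds, true);
        setParity(m.vds, false);
    }
    renderGeometry(fnc.C);

    // Pass 2: iterate over the lenses in reverse order; inside each
    // visible lens (parity = true) render its level of abstraction,
    // then deactivate the lens.
    for (int i = static_cast<int>(fnc.M.size()) - 1; i >= 0; --i) {
        const Entry& m = fnc.M[i];
        if (!culling(m.vds)) {
            setParity(m.vds, true);
            renderGeometry(m.loa);
            setActive(m.vds, false);
        }
    }
}

int main()
{
    // Slide example: FNC = ((VDS_0, LOA_0), (VDS_1, LOA_1), LOA_2)
    FNC fnc{ { { {0}, {0} }, { {1}, {1} } }, {2} };
    render(fnc);
    return 0;
}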
... Different lenses to magnify, select, filter, color, and analyze image data were proposed: Carpendale et al. [17] present a categorization of 1-3D distortion techniques to magnify in 2D uniform grids. Focusing on lens-based selection, MoleView [29] selects spatial and attribute-related data ranges in spatial embeddings and Trapp et al. present a technique for filtering multi-layer GIS data for city planning [70]. Similarly, Vollmer et al. propose a lens to aggregate ROIs in a geospatial scene to reduce information overload [72]. ...
Article
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100GB images of 10⁹ or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
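The sliding-window search described in this abstract can be sketched in a few lines: compare a per-window feature against the feature of the ROI under the lens and keep the most similar position. The mean-intensity feature and all names below are illustrative assumptions, not the Scope2Screen implementation.

#include <cmath>
#include <cstdio>
#include <vector>

// Mean intensity of a w x h window with top-left corner (x0, y0)
// in a single-channel W x H image stored row-major.
float windowMean(const std::vector<float>& img, int W,
                 int x0, int y0, int w, int h)
{
    float sum = 0.0f;
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            sum += img[y * W + x];
    return sum / (w * h);
}

// Slide a w x h window over the image and report the position whose
// feature (here: mean intensity) is closest to the ROI's feature.
void bestMatch(const std::vector<float>& img, int W, int H,
               int w, int h, float roiMean)
{
    float best = 1e30f;
    int bx = 0, by = 0;
    for (int y = 0; y + h <= H; ++y)
        for (int x = 0; x + w <= W; ++x) {
            float d = std::fabs(windowMean(img, W, x, y, w, h) - roiMean);
            if (d < best) { best = d; bx = x; by = y; }
        }
    std::printf("most similar window at (%d, %d), |diff| = %.3f\n", bx, by, best);
}

int main()
{
    const int W = 8, H = 8;
    std::vector<float> img(W * H, 0.0f);
    for (int y = 2; y < 5; ++y)          // bright 3x3 patch: the region to find
        for (int x = 3; x < 6; ++x)
            img[y * W + x] = 1.0f;
    bestMatch(img, W, H, 3, 3, 1.0f);    // ROI under the lens has mean 1.0
    return 0;
}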
... 3) Shape Generation: The shape of a close-up can be generated or derived based on selected features of the 3D DTM (e.g., geometry or texture features) [15]. ...
Conference Paper
Full-text available
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data that varies with respect to geometric scale, mapping, appearance, and the level of detail and level of abstraction used. The presented 3D close-up approach enables in-situ comparison of multiple regions of interest simultaneously. We present a GPU-based rendering technique for the image synthesis of multiple close-ups in real time.
Article
Full-text available
Thematic maps are a common tool to visualize semantic data with a spatial reference. Combining thematic data with a geometric representation of their natural reference frame aids the viewer in gaining an overview, as well as in perceiving patterns with respect to location; however, as the amount of data for visualization continues to increase, problems such as information overload and visual clutter impede perception, requiring data aggregation and level-of-detail visualization techniques. While existing aggregation techniques for thematic data operate in a 2D reference frame (i.e., map), we present two aggregation techniques for 3D spatial and spatiotemporal data mapped onto virtual city models that hierarchically aggregate thematic data in real time during rendering to support on-the-fly and on-demand level-of-detail generation. An object-based technique performs aggregation based on scene-specific objects and their hierarchy to facilitate per-object analysis, while the scene-based technique aggregates data solely based on spatial locations, thus supporting visual analysis of data with arbitrary reference geometry. Both techniques can apply different aggregation functions (mean, minimum, and maximum) for ordinal, interval, and ratio-scaled data and can be easily extended with additional functions. Our implementation utilizes the programmable graphics pipeline and requires suitably encoded data, i.e., textures or vertex attributes. We demonstrate the application of both techniques using real-world datasets, including solar potential analyses and the propagation of pressure waves in a virtual city model.
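The object-based aggregation described in this abstract can be sketched on the CPU as a recursion over the object hierarchy with exchangeable aggregation functions; the actual technique runs on the GPU during rendering, and the Node structure below is a hypothetical simplification.

#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

enum class Aggregation { Mean, Min, Max };

// A node of the scene-object hierarchy, e.g. city -> district -> building.
struct Node {
    float value;                  // thematic value of a leaf (e.g. solar potential)
    std::vector<Node> children;   // empty for leaf nodes
};

// Recursively aggregate thematic values up the object hierarchy.
// Note: for unbalanced hierarchies a correct mean needs leaf counts
// as weights; this sketch averages direct children only.
float aggregate(const Node& n, Aggregation f)
{
    if (n.children.empty())
        return n.value;
    float sum = 0.0f;
    float mn = std::numeric_limits<float>::max();
    float mx = std::numeric_limits<float>::lowest();
    for (const Node& c : n.children) {
        const float v = aggregate(c, f);
        sum += v;
        mn = std::min(mn, v);
        mx = std::max(mx, v);
    }
    switch (f) {
        case Aggregation::Mean: return sum / n.children.size();
        case Aggregation::Min:  return mn;
        default:                return mx;
    }
}

int main()
{
    // A district aggregating three buildings' thematic values.
    Node district{ 0.0f, { { 10.0f, {} }, { 30.0f, {} }, { 20.0f, {} } } };
    std::printf("mean=%.1f min=%.1f max=%.1f\n",
                aggregate(district, Aggregation::Mean),
                aggregate(district, Aggregation::Min),
                aggregate(district, Aggregation::Max));
    return 0;
}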
Conference Paper
Full-text available
Panorama maps are stylized paintings of terrain often seen at tourist destinations. They are difficult to create since they are both artistic and grounded in real geographic data. In this paper we present techniques for rendering real-world data in the style of Heinrich Berann's panorama maps in a real-time application. We analyse several of Berann's paintings to identify the artistic elements used. We use this analysis to form algorithms that mimic the panorama map style, focusing on replicating the terrain deformation, distorted projection, terrain colouring, tree brush strokes, water rendering, and atmospheric scattering. In our approach we use freely available digital earth data to render interactive panorama maps without needing further design work.
Article
Many approaches have been developed to visualize 3D city scenes, most of which render the entire scene in a uniform style. This paper presents an expressive rendering approach for visualizing large-scale 3D city scenes that seamlessly integrates multiple rendering styles. Each view combines photorealistic and non-photorealistic rendering to highlight information of interest to the user and de-emphasize less important information. At run time, users can interactively specify the locations they are interested in. The system automatically computes the salience of each location and renders the entire scene with emphasis on the areas of interest. A GPU-based implementation enables interactive real-time performance. The system demonstrates benefits in many applications, such as 3D GPS navigation and tourist information. We performed a pilot user evaluation of how the approach affects users' access to information in the 3D city.
Article
The preceding chapter gave an introduction to the topic of information visualization, emphasized the abstract nature of the underlying data, and introduced typical goals such as exploration and analysis. The focus was on presenting solutions to problems such as: Is there a relationship between the attributes of a multidimensional product dataset? How can I quickly gain an overview of the hierarchical organization of a company? Who are the friends of my friends in an online social network, and what are their interests? The emphasis was primarily on how abstract data can be represented. Depending on the task and on the type and dimensionality of the data, suitable visual encodings must be found, employing space and time as well as visual attributes such as color, shape, orientation, or connection.
Conference Paper
Full-text available
Research in cognitive sciences suggests that orientation and navigation along routes can be improved if the graphical representation is aligned with the user's mental concepts of a route. In this paper, we analyze an existing 2D schematization approach called wayfinding choremes and present an implementation for virtual 3D urban models, transferring the approach to 3D. To create the virtual environment, we transform the junctions of a route defined for a given road network to comply with the eight sector model, that is, outgoing legs of a junction are slightly rotated to align with prototypical directions in 45° increments. Then, the adapted road network is decomposed into polygonal block cells, the individual polygons being extruded to blocks and their facades textured. For the evaluation of our 3D wayfinding choreme implementation, we present an experiment framework allowing subjects to be trained and tested on a route-learning task. The experimental framework can be parameterized flexibly, exposing its parameters to the experimenter. We finally sketch a user study by identifying hypotheses, indicators, and, from these, the experiments to be conducted.
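The core geometric step of the eight sector model, rotating outgoing junction legs onto prototypical directions in 45° increments, reduces to snapping a bearing to the nearest sector. A minimal sketch, ignoring the topology-preservation constraints of the full schematization:

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// Snap an edge bearing (in radians) to the nearest of the eight
// prototypical directions of the sector model (45-degree increments).
double snapToSector(double bearing)
{
    const double sector = PI / 4.0;   // 45 degrees
    return std::round(bearing / sector) * sector;
}

int main()
{
    const double bearing = 0.62;      // roughly 35.5 degrees
    std::printf("%.1f deg -> %.1f deg\n",
                bearing * 180.0 / PI,
                snapToSector(bearing) * 180.0 / PI);
    return 0;
}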
Conference Paper
Full-text available
Today’s fast growth of both the number and complexity of digital 3D models results in a number of research challenges. Amongst others, the efficient presentation of, and interaction with, such complex models is essential. It therefore has become more and more important to provide the user with a smart visual interface that presents all the information required in the context of the task at hand in a comprehensive way. In this paper, we present a two-stage concept for the task-oriented exploration of 3D polygonal meshes. An authoring tool uses a combination of automatic mesh segmentation and manual enrichment with semantic information for association with specified exploration goals. This information is then used at runtime to adapt the model’s presentation to the task at hand. The exploration of the enriched model can further be supported by interactive tools. 3D lenses are discussed as an example.