Journal of Location Based Services
2011, 1–21, iFirst
Image-based strategies for interactive visualisation of complex
3D geovirtual environments on lightweight devices
Dieter Hildebrandt*, Benjamin Hagedorn and Jürgen Döllner
Hasso-Plattner-Institut, University of Potsdam, Prof.-Dr.-Helmert-Str. 2-3,
14482 Potsdam, Germany
(Received 30 October 2010; final version received 3 April 2011; accepted 11 April 2011)
In this article, we present strategies for service-oriented, standards and
image-based 3D geovisualisation that have the potential to provide
interactive visualisation of complex 3D geovirtual environments
(3DGeoVE) on lightweight devices. In our approach, interactive geovisua-
lisation clients retrieve sets of 2D images of projective views of 3DGeoVEs
generated by a 3D rendering service. As the key advantage of the image-
based approach, the complexity that a client is exposed to for displaying a
visual representation is reduced to a constant factor primarily depending on
the image resolution. To provide users with a high degree of interactivity,
we propose strategies that are based on additional service-side functionality
and on exploiting multiple layers of information encoded into the images
for the local reconstruction of visual representations of the remote
3DGeoVE. The use of service-orientation and standards facilitates
designing distributed 3D geovisualisation systems that are open,
interoperable and can easily be adapted to changing requirements. We
demonstrate the validity of the proposed strategies by presenting proof-
of-concept implementations of several image-based 3D clients for the case
of virtual 3D city models.
Keywords: 3D geovirtual environments; distributed 3D geovisualisation;
image-based representations; lightweight devices; service-oriented
architectures; standardisation
1. Introduction
For the interactive 3D geovisualisation of complex 3D geovirtual environments
(3DGeoVE) such as virtual 3D city models and landscape models, massive amounts
of geodata as well as complex processing and visualisation algorithms are involved.
For interactive access to these models, the amount of required resources for
generating visual representations in terms of network, storage and computing
capacity significantly reduces the applicability of 3D geovisualisation, especially on
mobile devices. As a common solution, visualisation systems can be deployed that
distribute geodata and functionality over computers connected by a network using
visualisation clients (e.g. Google Earth). However, common approaches for
distributed visualisation either do not scale with the increasing complexity of
geodata (e.g. streaming detailed, textured CAD-based 3D city models) or the
computation required for visualisation (e.g. real-time photorealistic 3D rendering),
do not easily scale with an increasing number of concurrent users, provide only
limited interactivity or yield closed, tightly coupled systems.

*Corresponding author. Email: dieter.hildebrandt@hpi.uni-potsdam.de
ISSN 1748–9725 print/ISSN 1748–9733 online
© 2011 Taylor & Francis
DOI: 10.1080/17489725.2011.580787
http://www.informaworld.com
In this article, we present strategies for service-oriented, standards and image-
based 3D geovisualisation that have the potential to overcome the aforementioned
limitations. In our approach, interactive geovisualisation clients retrieve a set of 2D
images of projective views of 3DGeoVE generated by a 3D rendering service. As the
key advantage of the image-based approach, the complexity that a client is exposed
to for displaying a visual representation is reduced to a constant factor primarily
depending on the image resolution. To provide users with a high degree of
interactivity, we propose strategies that are based on additional service-side
functionality and on exploiting multiple layers of information encoded into the
images for the local reconstruction of the remote 3DGeoVE. The use of service-
orientation and standards facilitates designing distributed 3D geovisualisation
systems that are open, interoperable and can easily be adapted to changing
requirements. We demonstrate the validity of the proposed strategies by presenting
proof-of-concept implementations of several image-based 3D clients for the case of
virtual 3D city models.
The remainder of this article is structured as follows. In Section 2, we identify a
set of requirements for a specific class of practically relevant 3D geovisualisation
systems. The fundamentals of our approach including SOA, standards and the
distributed visualisation pipeline as well as related work are described in Section 3.
We present the outline of the general approach we propose for designing 3D
geovisualisation systems intended to meet the previously identified requirements in
Section 4. As instances of the general approach, we present three concepts for image-
based, interactive visualisation clients in Section 5. In Section 6, we discuss how the
proposed approach and the three concrete concepts support meeting the previously
identified requirements. Finally, in Section 7 we conclude this article with a
summary, conclusions and future work.
2. Requirements
In this section, we identify a set of requirements for 3D geovisualisation systems.
This particular set is valid for a specific, practically relevant class of 3D
geovisualisation systems and is informed by the existing literature. In this article,
we place a particular focus on 3DGeoVEs and virtual 3D city models. Furthermore,
we focus on 3D rendering for generating 2D images of projective views of primarily
static CAD-based models with real-time interaction and navigation using six degrees
of freedom.
(i) Support for integration (R1) is required to connect computer systems
effectively and efficiently on different levels of abstraction such as data,
functionality, process, visualisation, interaction and system. It should
improve the flexibility and efficiency of adapting systems to changing
requirements and ease the reuse of software components (Rhyne and
MacEachren 2004, Brodlie et al. 2007).
(ii) Interoperability (R2) increases the effectiveness and efficiency of the integra-
tion on the different levels and can be improved by applying standards.
In the geospatial and the geovisualisation domain, insufficient interopera-
bility has been identified as a major barrier for progress in the respective
domain (Bishr 1998, MacEachren and Kraak 2001, Andrienko et al. 2005).
(iii) Typically, in real world applications, systems are required to facilitate
processing, visualising and interacting with massive amounts of geodata (R3).
In particular, this applies to virtual 3D city models (MacEachren and Kraak
2001, Hildebrandt and Döllner 2009).
(iv) Providing effective, high-quality visual representations (R4) improves the
effectiveness of a geovisualisation system and is facilitated by advanced,
complex, innovative visualisation algorithms and in certain cases massive
amounts of data (e.g. for virtual 3D city models), for both realistic and
abstract views (Döllner 2005, Hildebrandt and Döllner 2009).
(v) Support for platform independence (R5) comprises the relative independence
of a system solution from software and hardware platforms on different
levels of abstraction and adaptive and moderate use of platform resources.
It can improve dissemination and reduce costs (MacEachren et al. 2004,
Brodlie et al. 2007).
(vi) A high degree of interactivity (R6) is a key defining characteristic of, as well
as a crucial requirement for, geovisualisation systems, and should be effective
and efficient (MacEachren and Kraak 2001, Dykes 2005).
(vii) Support for styling (R7) visual representations allows control over what to
portray (e.g. filtering of features) and how (e.g. mapping of features to
geometries and visual attributes), and is essential for interaction and
generating different visualisations from the same base data (Yi et al. 2007,
Neubauer and Zipf 2009).
In the following sections, we refer to each requirement introduced above via its
respective code (e.g. ‘R1’ for the first listed requirement) whenever the discussion
touches on it.
3. Fundamentals and related work
3.1. SOA, standards and the distributed visualisation pipeline
The service-oriented computing (SOC) paradigm promotes the idea of assembling
application components into a network of services that can be loosely coupled to
create flexible, dynamic business processes and agile applications that span
organisations and computing platforms (Papazoglou et al. 2007). The term service-
oriented architecture (SOA) denotes both an architectural concept and style that
adheres to the SOC paradigm and concrete architectures that are designed following
that architectural concept. SOC and SOA are specific paradigms for designing
distributed systems.
In the geospatial domain, the Open Geospatial Consortium (OGC 2010)
adopted the SOA paradigm and proposes standards for service interfaces, data
models and encodings. For the presentation of information to humans, the OGC
proposes stateless portrayal services. For 3D portrayal, the Web 3D Service (W3DS)
(Schilling and Kolbe 2010) and the Web View Service (WVS) (Hagedorn et al. 2009,
Hagedorn et al. 2010) are proposed as different approaches that are both still in the
early stages of the standardisation process. The major difference in the current
proposals for the 3D portrayal services is the representation that they generate and
what visualisation pipeline stages they implement to what extent. The W3DS delivers
scene graphs that can be rendered by a client, whereas the WVS delivers rendered
images of projected views that are ready for display. An analysis of the respective
strengths and weaknesses of 3D portrayal services can be found in Hildebrandt and
Döllner (2009). As a complement to this, the Styled Layer Descriptor (SLD) and
Symbology Encoding (SE) (Lupp 2007, Neubauer and Zipf 2009) are standardisation
proposals for user-defined styling of 2D and 3D visual representations. Portrayal
services may include support for SLD.
We introduce the visualisation pipeline as a model for allocating resources in a
distributed system and for motivating the image-based representation. The visual-
isation pipeline (Haber and McNabb 1990) is a well-established concept for
separating the concerns of the process of generating visual representations from
data in three stages. The data is filtered into enhanced data, then mapped to
visualisation objects (e.g. represented as scene graphs, geometry and visual
attributes), and finally rendered into a digital 2D image that is ready for display to
a human user. For designing a geovisualisation system based on SOA, the
visualisation pipeline must be functionally decomposed. A basic, conceptual
decomposition splits the pipeline into two parts interconnected by a network,
resulting in a 2-tier physical client/service architecture. The separation can be applied
after the filtering, mapping or rendering stage. Three types of geovisualisation clients
can be categorised: thick clients, medium clients and thin clients (adapted from Doyle
and Cuthbert (1998)). Note that this classification is schematic. Concrete systems
may implement variations of this model, as demonstrated in Sections 5.2 and 5.3.
The W3DS and WVS adhere to this model. The W3DS provides scene graphs as
output of the mapping stage, whereas the WVS provides images as output of the
rendering stage.
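As an illustration of this decomposition, the three client types can be characterised by where the pipeline is cut. The following Python sketch is not taken from the article; all data structures and function bodies are hypothetical stand-ins:

```python
# Sketch of the Haber-McNabb visualisation pipeline (filter -> map -> render)
# and the three client types obtained by cutting it after different stages.

def filter_stage(raw_geodata):
    """Filter: select and enhance the relevant subset of the geodata."""
    return [f for f in raw_geodata if f["visible"]]

def map_stage(filtered):
    """Map: turn enhanced data into renderable visualisation objects."""
    return [{"geometry": f["footprint"], "colour": "grey"} for f in filtered]

def render_stage(scene):
    """Render: rasterise visualisation objects into a displayable 2D image."""
    return {"pixels": len(scene), "format": "PNG"}  # stand-in for a real image

raw = [{"visible": True, "footprint": "poly1"},
       {"visible": False, "footprint": "poly2"}]

# Thick client: everything after data delivery runs locally.
image_thick = render_stage(map_stage(filter_stage(raw)))

# Medium client (W3DS-style): the service runs filter + map and delivers a
# scene graph; the client renders it.
scene_from_service = map_stage(filter_stage(raw))
image_medium = render_stage(scene_from_service)

# Thin client (WVS-style): the service runs all stages; the client only
# displays the delivered image.
image_thin = render_stage(map_stage(filter_stage(raw)))

assert image_thick == image_medium == image_thin
```

The cut point determines what travels over the network: raw geodata, a scene graph, or a finished image.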
3.2. Related work
In this article, we are concerned with strategies for service-oriented, standards and
image-based 3D geovisualisation systems that support meeting the requirements
identified in Section 2. For distributed visualisation systems, most commonly visual
representations based on scene graphs, geometry such as triangle meshes and texture
maps are proposed and applied (such as in the W3DS (Schilling and Kolbe 2010),
Google Earth, Microsoft Bing Maps 3D). Here, we focus on image-based represen-
tations since we estimate that they have the potential to better support the stated
requirements. For image-based, distributed visualisation, the proposed approaches
include streaming videos of rendered 3D models from a server to a tightly coupled
client (Lamberti and Sanna 2007), applying image-based modelling and rendering
(IBMR) (Shum et al. 2007) and warping a representation based on colour and depth
images retrieved from a remote server for rendering novel views (Chang and Ger
2002), applying point-based modelling and rendering (PBMR) (Gross and Pfister 2007)
and utilising remotely rendered colour and depth images as input for client-side
PBMR (Ge 2007), and rendering novel views on the client by warping between image-
based panoramas retrieved from a server (Filip 2009). In addition, proposals exist for
designing visualisation systems as distributed systems (e.g. Brodlie et al. 2004),
distributed systems based on SOA (e.g. Wang et al. 2008), or distributed systems based
on SOA and OGC standards (e.g. Basanow et al. 2008, Hildebrandt and Döllner 2009,
OGC 2010).
However, to the best of our knowledge, we are not aware of related work that
proposes designing 3D geovisualisation systems based on SOA, OGC standards and
image-based representations with the aim of meeting the previously stated
requirements. In particular, related work rarely addresses at the same time
improving integration through loose coupling, interoperability, support for
lightweight clients and the application to 3DGeoVEs.
4. General approach
We propose a particular approach for designing 3D geovisualisation systems that is
intended to support meeting the requirements identified in Section 2. In this section,
we outline the general approach. Based on the general approach, we present three
different, concrete concepts in Section 5. The three presented concepts differ in
the degree that they meet the stated requirements.
4.1. Working principle
The general approach is based on the distributed visualisation pipeline, image-based
representations, standards and SOA (Figure 1). The 3D rendering service implements
all stages of the visualisation pipeline and locally stores the geodata that it can
portray. Clients retrieve 2D images of projective views of a 3DGeoVE from the
service. Clients then either directly display the retrieved images, or use the images as
input for further processing.

Figure 1. Architecture of the general approach. Clients use the operations provided by the WVS to retrieve image layers and other results. The figure depicts the 3D viewer client (interaction, process, functionality and data layers, with the controller CTRL and the view process VP) connected to the 3D renderer (WVS) implementing the filter (F), map (M) and render (R) stages over its data; the WVS operations GetView(Layers, Styles, Camera, …), GetFeatureInfo(…), GetPosition(…), GetMeasurement(…), GetCamera(…) and GetIdentifierMapping(…); and the image layers colour, normal, depth, object ID and mask, plus non-image results.

Interactions of users with the graphical user interface
(GUI) of the display result in user input events. The controller (CTRL) and its
implemented interaction techniques process these events. The view process (VP)
transforms commands for updating the visualisation from the controller into
the calling of service operations for requesting images or other functionality.
In Section 5, we present three client concepts that differ in how they exploit images
retrieved from the service and use additional service-side functionality for providing
interactivity. We apply standards where available and feasible. The 3D rendering
service implements the WVS service interface, images are encoded using standard
image formats (i.e. JPEG, PNG), clients access the service via HTTP on top of
TCP/IP, and if a client requests multiple images from the service with one call, the
service returns the images as one multipart response. The architecture and service
comply with design principles commonly proposed for SOA (Erl 2005); the WVS is,
for example, stateless, loosely coupled and autonomous.
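To make the service access concrete, the following sketch assembles a KVP-encoded GetView request over HTTP. The endpoint URL and the exact parameter names and value encodings are illustrative assumptions; the authoritative definitions are those of the WVS draft specification:

```python
from urllib.parse import urlencode

def build_getview_url(endpoint, layers, styles, camera, width, height, formats):
    # Hypothetical KVP encoding of a WVS GetView request; parameter names
    # follow the GetView(Layers, Styles, Camera, ...) signature sketched in
    # the text, not a normative encoding.
    params = {
        "SERVICE": "WVS",
        "REQUEST": "GetView",
        "LAYERS": ",".join(layers),
        "STYLES": ",".join(styles),
        "CAMERA": ",".join(str(v) for v in camera),  # e.g. position + direction
        "WIDTH": width,
        "HEIGHT": height,
        # Requesting several layer formats in one call would yield a
        # multipart response, as described above.
        "FORMAT": ",".join(formats),
    }
    return endpoint + "?" + urlencode(params)

url = build_getview_url(
    "http://example.org/wvs",                 # hypothetical endpoint
    ["buildings"], ["default"],
    [370500.0, 5810000.0, 150.0, 0.0, 1.0, -0.2],
    800, 600,
    ["image/png;layer=color", "image/png;layer=depth"])
```

A client would issue this URL as a plain HTTP GET and decode the returned PNG or JPEG payloads with any standard image library.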
As service interface for the 3D rendering service, we propose the OGC
standardisation proposal WVS (Hagedorn et al. 2009, Hagedorn et al. 2010). The
WVS overcomes restricted visualisation and interaction capabilities of preceding
proposals such as the Web Terrain Service (WTS, OGC discussion paper) and its
successor, the Web Perspective View Service (WPVS, OGC-internal draft
specification). The OGC approved the WVS specification as a discussion paper. It provides
(a) additional image layers for 3D views and (b) additional service operations for
supporting analysis, navigation and information retrieval. The WVS supports
retrieving additional image layers for 3D views besides colour layers through the
GetView operation. These layers store various spatial and thematic information for
each image pixel such as colour, spatial depth, object ID, surface normal and mask.
This concept is based on the G-buffer (Saito and Takahashi 1990) concept from 3D
computer graphics. This data does not necessarily represent colour values and is not
necessarily intended for human cognition. Nevertheless, the WVS supports encoding
non-colour image layers using standard image formats as well. Thus, the same
principles for data encoding, data exchange, and client-side data loading and
processing can be applied to all image layers. Additionally, using image encodings
allows state-of-the-art compression algorithms to be applied. One implication is that,
on the service side, each non-colour value of each pixel has to be encoded as a
colour (e.g. with four components as RGBA); symmetrically, the client has to
decode the non-colour information from colour. The most important additional
operations provided by the WVS are the following: GetFeatureInfo (returning attribute
information for a feature identified by a specified 2D coordinate in a 3D view),
GetPosition (returning the 3D position of a part of a feature specified by 2D
coordinate in a 3D view or the 2D coordinate of a specified 3D position),
GetMeasurement (returning the Euclidean length of a path or the area of a polygon
specified by a set of 2D coordinates in a 3D view), GetCamera (returning a camera
specification providing a ‘good view’ on features identified by 2D coordinates in a
3D view) and GetIdentifierMapping (returning mappings between GML feature
IDs and object IDs that encode the GML feature IDs compactly as integer pixels in
object ID layers). Based on these functional extensions, WVS clients can implement
various 3D visualisation and interaction features without changing the underlying
working principle. This increases the degree of interactivity, as demonstrated by the
prototypical web-based client applications presented in Section 5.
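The encoding of non-colour values as colours described above can be illustrated for a depth layer. The fixed-point packing of a normalised depth value into four 8-bit RGBA channels shown here is an assumed scheme for illustration, not an encoding mandated by the WVS:

```python
# Service side: pack a normalised depth value (in [0, 1)) into the four
# 8-bit RGBA channels of a standard image format.
def encode_depth_rgba(depth):
    value = int(depth * (256 ** 4))
    r = (value >> 24) & 0xFF
    g = (value >> 16) & 0xFF
    b = (value >> 8) & 0xFF
    a = value & 0xFF
    return (r, g, b, a)

# Client side: symmetrically unpack the depth value from the colour.
def decode_depth_rgba(rgba):
    r, g, b, a = rgba
    value = (r << 24) | (g << 16) | (b << 8) | a
    return value / (256 ** 4)

rgba = encode_depth_rgba(0.73)
assert abs(decode_depth_rgba(rgba) - 0.73) < 1e-8
```

The same pattern applies to object ID layers, where an integer feature identifier rather than a depth value is packed into the pixel channels.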
4.2. Challenges
However, there are fundamental challenges when applying image-based representa-
tions regarding interactivity (R6) and the efficient use of the network channel in the
course of interactions (R3).
Interaction occurs by manipulating parameters of the visualisation pipeline stages,
which results in updated displays. Separating the rendering and display of images by a
network introduces the high latency and low bandwidth of the network channel to
the interaction loop. If real-time navigation is required (i.e. requiring display updates
with more than 10 frames per second and six degrees of freedom), this results in
displays with low or limited frame rates and high latencies between user input and
display updates. This drawback also applies to interaction techniques that change
parameters of the rendering or preceding stages (e.g. removing or changing the
styling of features).
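A back-of-envelope calculation illustrates the problem. With assumed (not measured) values for round-trip time, service-side rendering time and bandwidth, fetching every frame over the network falls far short of the 10 frames per second required for real-time navigation:

```python
# Illustrative numbers only; none of these values are measurements from
# the article.
round_trip_s = 0.100              # assumed mobile network round trip
server_render_s = 0.050           # assumed service-side rendering time
image_bytes = 800 * 600 * 3 * 0.1  # assumed ~10:1 JPEG compression
bandwidth_bps = 2_000_000 / 8      # assumed 2 Mbit/s downlink, in bytes/s

transfer_s = image_bytes / bandwidth_bps
frame_time_s = round_trip_s + server_render_s + transfer_s
fps = 1.0 / frame_time_s

# The transfer alone (144000 bytes / 250000 bytes per s = 0.576 s) already
# caps the frame rate well below 10 fps.
assert fps < 10
```

Even generous assumptions leave the naive request-per-frame loop an order of magnitude short, which motivates the latency hiding strategies of Section 5.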
Furthermore, interaction techniques that require access to features of a model
and their properties can be limited by a purely image-based approach (e.g.
highlighting, relating or showing additional information for features). For imple-
menting these techniques, supplying colour alone does not provide sufficient
information; access to the outputs of the previous stages is required.
One single view of a 3DGeoVE based on massive geodata is typically most
efficiently encoded as an image. However, in the course of interaction numerous
images are required (e.g. when navigating a virtual camera). Basic image-based
approaches instantly discard an image after it has been displayed and replace it with a
newly rendered, self-contained image. Absent or limited reuse of image data can
significantly reduce memory efficiency and increase interaction latency. In addition,
service load increases.
5. Concepts for image-based, interactive visualisation clients
In this section, we present three concepts for image-based, interactive visualisation
clients. The first is based on additional service-side functionality, while the others are
based on local 3D reconstruction of the remote 3DGeoVE. The presented concepts
are intended to meet the requirements identified in Section 2. Each concept is an
instance of the general approach presented in Section 4, and, in particular, is
intended to tackle the challenges of the general approach identified in Section 4.2.
For each concept, we briefly present a prototypical, rudimentary proof-of-concept
implementation. Moreover, we evaluate how each concept provides interactivity by
examining how it supports specific interaction categories. We employ the following
seven categories that are proposed in Yi et al. (2007) and are based on the notion of
user intent: Select (mark something as interesting), Explore (show me something else),
Reconfigure (show me a different arrangement), Encode (show me a different
representation), Abstract/Elaborate (show me more or less detail), Filter (show me
something conditionally) and Connect (show me related items).
5.1. Concept based on image retrieval and display
The WVS display client presented in this subsection conceptually requests 3D views
from a WVS as colour images and directly displays these images (Figure 2).
As a complement to this, it allows users to control the virtual camera, to retrieve
information about displayed features and to perform analysis in the displayed
3DGeoVE. For this, the client takes advantage of various WVS operations that are
designed for supporting interactivity even on thin clients.
Technically, the client is a JavaScript-based web application, which is fully
executed on the client side. Thus, technical barriers for its application are low. It runs
in any web browser that supports JavaScript. No additional plug-ins (e.g. Java or
Flash) need to be installed and no dedicated 3D rendering hardware or software is
required at the client side. Due to this, the WVS display client is particularly
applicable on platforms that are limited in computing and 3D rendering capabilities
or are faced with limited network connectivity (e.g. mobile phones). Furthermore,
the client can be easily integrated into existing web sites and web applications.
For manoeuvring the camera, the WVS display client determines a new camera
specification and requests a new view from the WVS. Currently, no additional
intermediate views are considered. Thus, camera control is conceptually not
continuous, but inherently discrete and step-by-step. The camera can be manipulated
by (a) GUI controls (translate, rotate left/right, tilt up/down, zoom in/out, orient to
north), (b) mouse-wheel usage modified by keys (zooming, rotating) and (c) selection
of one or multiple 2D positions in the displayed view.
The client supports in-image interaction tools as a major concept to allow users to
interact directly with the 3D view and the contained features. For this, several WVS
operations are available that require one or more image pixels as input. For in-image
camera control, the client transforms one or more selected pixel positions into a
corresponding 3D geospatial location by the WVS GetPosition operation (imple-
mented service-side as ray intersection tests). The returned 3D locations can be used
within new GetView requests as new camera positions and/or orientations. Further,
the client incorporates the GetCamera operation to utilise service-side support for
smart camera control: for a specific 2D pixel input, a WVS can compute ‘good’
camera specifications.

Figure 2. Screenshot of a web-based WVS display client. (a) Integrated with a 2D map from Google maps, which marks camera look-to and look-from. The arrow indicates the selection of a new camera look-from and look-to within the image. (b) Specification of a path and display of its length. (c) Display of thematic information retrieved from the WVS for a feature selected by the user (3D data: Boston Redev. Authority).

This enables assisting and higher-level 3D camera control for
thin clients, including input preprocessing (e.g. sketch recognition) as well as the
consideration of the type and geometry of affected features.
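The in-image camera control flow described above can be sketched as follows. The WVS operations are replaced by local stubs, and all function names and data structures are hypothetical:

```python
def get_position(pixel, view):
    """Stub for the WVS GetPosition operation: map a 2D pixel in the
    current view to a 3D geospatial location (implemented service-side
    as a ray intersection test)."""
    x, y = pixel
    return (view["origin"][0] + x, view["origin"][1] + y, 0.0)

def get_view(camera):
    """Stub for the WVS GetView operation: return a rendered colour
    image for the given camera specification."""
    return {"camera": camera, "image": "<png bytes>"}

def move_camera_to_picked_point(clicked_pixel, current_view):
    # 1. Resolve the clicked pixel to a 3D location via GetPosition.
    look_from = get_position(clicked_pixel, current_view)
    # 2. Build a new camera; this sketch keeps the previous look-to.
    camera = {"position": look_from, "look_to": current_view["look_to"]}
    # 3. Request a freshly rendered view via GetView.
    return get_view(camera)

view = {"origin": (370000.0, 5810000.0),
        "look_to": (370100.0, 5810100.0, 0.0)}
new_view = move_camera_to_picked_point((400, 300), view)
```

In the real client, steps 1 and 3 are HTTP requests, so each camera move costs one or two network round trips.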
To foster information gathering, the client can retrieve and display information
for selected features (implemented through the GetFeatureInfo operation).
Additionally, the client allows users to perform distance and path measurements
within the displayed view. This is based on the GetMeasurement operation, a generic
approach for providing analysis functionalities that is also based on 2D pixel
positions. Measurements are computed at the service-side and returned to the client
for display.
Using HTML and a JavaScript drawing library, the client can annotate the 3D
view with text or drawings, e.g. for integrating feature information or visual
navigation feedback. Examples are overlays marking interesting or selected
positions, arrows indicating a new camera look-from and look-to, and paths that
are measured.
In summary, the WVS display client supports the interaction categories presented
in this section as follows. For Select, clicking on a feature generates a mark at this
position. For Explore, the client allows for rotating at the camera position, tilting the
camera up/down, translating the camera, orienting the camera towards a selected
location of interest, as well as moving the camera to a location picked in the image.
Points of interest can be selected from a drop-down-menu. For Reconfigure, the
client allows for rotating around the camera’s look-to, and for moving the camera up
and down while keeping the look-to. For Abstract/Elaborate, the client allows for
zooming into the scene as well as for showing feature information in an information
widget. To Filter, a user can select the data layers to display from a GUI control. For
Encode, a user can select the visual style to apply to the selected data from a GUI
control. Finally, to Connect, the display client can be integrated and combined with
other views showing the related data (Figure 2).
5.2. Concept based on image-based modelling and rendering
In this subsection, we present a concept that employs a latency hiding technique
based on a client-side, partial visualisation pipeline and 3D reconstruction of the
visual representation of the remote 3DGeoVE from images. For the client-side 3D
reconstruction and rendering, the concept employs techniques based on image-based
modelling and rendering (IBMR).
5.2.1. Latency hiding technique
As a general strategy to mitigate the negative effects on interactivity of high latency
and low bandwidth introduced by a distributed visualisation pipeline, we first
propose avoiding the execution of visualisation pipeline stages. Instead of aiming at
reducing the absolute latency of the system, we aim at reducing the user-perceived
latency (Sisneros et al. 2007). This can be achieved by using latency hiding techniques
that trade exactness for consistently low response times by means of approximation.
We propose a concrete latency hiding technique that is based on the nested,
partial visualisation pipeline architectural pattern and the client-side, partial 3D
reconstruction of the visual representation of the remote 3DGeoVE from images.
A pattern that extends the visualisation pipeline and that we observe in existing
systems is what we term the nested, partial visualisation pipeline (NPVP). Using this
pattern, partial pipelines can be nested in the main pipeline after the filter, map or
render stage (e.g. F(FMR) M(FMR) R(FMR)). Stages may have to reinterpret the
output of a preceding stage (e.g. rendered images as textures for subsequent
rendering in FMR(R)). We use the NPVP pattern to insert a partial pipeline in the
main pipeline after the rendering stage that consists of an additional mapping and
rendering stage (Figure 3). As before, the WVS implements a filtering, mapping and
rendering stage and the client retrieves images from the WVS.
In the newly inserted mapping stage on the client, the images are reinterpreted as
a 3D visual representation of the remote 3DGeoVE. Each pixel is conceptually
interpreted as a surface patch in 3D space that covers a part of the visible surface of
the 3DGeoVE, with attributes including 3D extent, colour and object ID. From the
surface patches, a computer graphics representation based on geometry and visual
attributes is constructed. The mapping stage provides a mechanism for aggregating
multiple, consecutively retrieved image sets that depends on the type of represen-
tation used. The aggregated representations constitute the client-side, partial 3D
reconstruction of the visual representation of the remote 3DGeoVE.
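The client-side mapping stage can be sketched as follows, assuming a simple pinhole camera model for illustration; the actual reconstruction in the prototypes may differ:

```python
import math

def unproject(px, py, depth, width, height, fov_y_deg, cam_pos):
    """Turn a pixel plus its depth value into a 3D point (camera at
    cam_pos looking down the -z axis, x right, y up; depth is the
    distance along -z)."""
    aspect = width / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    # Pixel centre in normalised device coordinates, scaled by the frustum.
    ndx = (2.0 * (px + 0.5) / width - 1.0) * aspect * tan_half
    ndy = (1.0 - 2.0 * (py + 0.5) / height) * tan_half
    return (cam_pos[0] + ndx * depth,
            cam_pos[1] + ndy * depth,
            cam_pos[2] - depth)

def reconstruct_patches(depth_layer, colour_layer, width, height,
                        fov_y_deg, cam_pos):
    """Reinterpret each pixel of a retrieved image set as a surface patch
    with a 3D position and a colour attribute."""
    patches = []
    for py in range(height):
        for px in range(width):
            d = depth_layer[py][px]
            patches.append({
                "position": unproject(px, py, d, width, height,
                                      fov_y_deg, cam_pos),
                "colour": colour_layer[py][px],
            })
    return patches

depth = [[10.0, 10.0], [12.0, 12.0]]          # tiny 2x2 depth layer
colour = [["grey", "grey"], ["red", "red"]]    # matching colour layer
patches = reconstruct_patches(depth, colour, 2, 2, 60.0, (0.0, 0.0, 0.0))
```

Aggregating patch sets obtained from several camera positions yields the partial 3D reconstruction from which novel views are rendered.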
In the newly inserted rendering stage, novel views can be rendered of the local 3D
reconstruction from arbitrary virtual camera viewpoints. For a specific view, the
available 3D reconstruction on the client is typically under- or oversampled in
comparison to the available original data in the distributed pipeline.

Figure 3. Architecture employed for the proposed latency hiding technique used by both the concept based on IBMR and the one based on PBMR. The figure depicts the 3D viewer client (interaction, process, functionality and data layers, with the controller CTRL, display DP and view process VP) extended by local mapping (M) and rendering (R) stages, connected to the 3D renderer (WVS) implementing the filter (F), map (M) and render (R) stages over its data.

For this reason, the images rendered from the reconstruction represent only approximations of the
visual representations of the remote 3DGeoVE. By using the interaction techniques
provided by the client, a user can manipulate parameters of the pipeline stages.
Manipulating the local pipeline results in low-latency updates of the display (e.g. for
real-time navigation with six degrees of freedom). Specific interaction techniques
require manipulating parameters of the remote pipeline (e.g. for changing the styling,
or when parts of the model come into view that were not sufficiently sampled with
previously retrieved images). These manipulations still result in high-latency
responses. Effectively, decoupling the update of the display from the high-latency,
distributed pipeline, and introducing a local, low-latency pipeline based on the
NPVP pattern and 3D reconstruction of the remote 3DGeoVE allows hiding the
absolute latency and updating the display with low latency.
5.2.2. Image-based modelling and rendering
In this subsection, we present an instance of the general strategy that is based on
techniques from the domain of image-based modelling and rendering (Shum et al.
2007). Figure 4 (left) depicts a screenshot of an implementation of this concept.
In the mapping stage, the client retrieves for a given camera specification a set of
perspective images consisting of a colour, depth and object ID image layer.

Figure 4. Left: Screenshot of a web-based WVS IBMR client application (Section 5.2) executing inside a web browser. The IBMR client shows a highlighted and annotated feature that was selected by the user. The IBMR client represents the remote 3DGeoVE locally as sets of 3D depth triangle meshes. Middle, right: Screenshots of a WVS PBMR client application (Section 5.3) executing on the smartphone Apple iPhone. The screenshots depict a close-up view of a 3DGeoVE (middle) and an overview (right). The PBMR client represents the remote 3DGeoVE locally as an octree that contains 3D surface patches in nodes and leaves. In the screenshots, the client draws the bounding box of each octree node traversed in the rendering algorithm to illustrate the underlying hierarchical spatial data structure.

From a depth image and the camera specification it was retrieved with, a 3D depth triangle
mesh is constructed. First, each depth value is projected back to a 3D coordinate in camera space and then transformed into world space. Second, for every original pixel in the depth image, two triangles are created that connect its corresponding 3D coordinate with the coordinates of three neighbouring pixels (i.e. for pixel coordinate (x, y), two triangles with the following coordinates are created: (x, y), (x+1, y), (x, y+1) and (x+1, y), (x+1, y+1), (x, y+1)). The mapping stage caches each image set and its corresponding mesh until the controller evicts it, either explicitly or, when the memory limit of the cache is exceeded, because it is the least recently used image set.
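The per-pixel triangulation described above can be sketched as follows. The helper is illustrative (not the authors' implementation) and only generates the triangle index triples for a width × height depth-image grid, assuming row-major vertex indices.

```python
def depth_mesh_triangles(width, height):
    """Generate triangle index triples for a depth-image grid.

    For each pixel (x, y) whose right and bottom neighbours lie inside the
    image, two triangles are emitted, following the connectivity described
    above: (x,y), (x+1,y), (x,y+1) and (x+1,y), (x+1,y+1), (x,y+1).
    """
    def idx(x, y):
        return y * width + x  # row-major vertex index for pixel (x, y)

    triangles = []
    for y in range(height - 1):
        for x in range(width - 1):
            triangles.append((idx(x, y), idx(x + 1, y), idx(x, y + 1)))
            triangles.append((idx(x + 1, y), idx(x + 1, y + 1), idx(x, y + 1)))
    return triangles
```

An n × m depth image thus yields 2(n − 1)(m − 1) triangles, which is why adaptively triangulated depth meshes (mentioned in Section 6.4) are attractive for performance.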
In the rendering stage, for a given camera specification a novel view is rendered
from the available depth meshes of the mapping stage. The camera specification as
an input for this stage is typically provided by an interaction technique applied by a
human user. Each depth mesh that spatially intersects the 3D view frustum of the camera is rendered. The rendering applies colour to a depth mesh via projective texturing with its corresponding colour image and resolves visibility in the frame buffer via depth buffering. Multiple depth meshes are aggregated in screen space to represent the remote 3DGeoVE from a set of locally available sampled images of the 3DGeoVE. For highlighting a feature identified by an object ID, the depth meshes are rendered by a shader that colours each pixel carrying that object ID. Additionally, the contours of features are detected by detecting edges in the object ID image; feature contours and interiors are then coloured differently in image space.
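The object-ID-based highlighting can be sketched as follows. This is an illustrative CPU-side sketch, not the shader used by the client: it classifies the pixels of an object-ID layer into feature contour, feature interior and other pixels, detecting contours as edges in the ID image.

```python
def highlight_feature(object_ids, picked_id):
    """Classify each pixel of an object-ID image layer with respect to the
    feature identified by `picked_id`: 'contour' pixels carry the picked ID
    but have a 4-neighbour with a different ID (an edge in the ID image),
    'interior' pixels carry the picked ID all around them, and all remaining
    pixels are 'other'."""
    h, w = len(object_ids), len(object_ids[0])

    def differs(nx, ny):
        # Pixels outside the image count as different, so image borders
        # are treated as feature contours.
        if not (0 <= nx < w and 0 <= ny < h):
            return True
        return object_ids[ny][nx] != picked_id

    result = [["other"] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if object_ids[y][x] != picked_id:
                continue
            edge = any(differs(nx, ny)
                       for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)))
            result[y][x] = "contour" if edge else "interior"
    return result
```

Contour and interior pixels can then be coloured differently, as the client does in image space.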
The controller receives user input events and implements interaction techniques.
The controller is responsible for providing camera specifications as input parameters
for the local rendering stage. Furthermore, the controller is responsible for
implementing a sampling strategy. The sampling strategy controls how the remote
3DGeoVE is sampled by retrieving images and when already present samples can be
discarded. The sampling strategy depends on the applied interaction technique and
current and assumed future camera specifications. Its goal is to provide for each
novel view that is rendered on the client a set of images that allows rendering the
view with minimal under- and minimal oversampling. Undersampling manifests in the novel view as holes (i.e. no samples are available for an area) or as blurred areas (i.e. samples are not dense enough). Negative consequences of oversampling (i.e. samples are too dense for an area) are overdraw and reduced rendering performance and frame rate.
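The eviction behaviour that such a sampling strategy relies on can be sketched as a least-recently-used cache of image sets. The class below is an illustrative sketch under that assumption, not the client's actual cache; for simplicity it bounds the number of cached sets rather than their memory footprint.

```python
from collections import OrderedDict

class ImageSetCache:
    """Least-recently-used cache for retrieved image sets (colour, depth, ID).

    Image sets are kept until the controller evicts them explicitly or the
    cache limit is exceeded, in which case the least recently used set is
    discarded, mirroring the eviction policy described in the text."""

    def __init__(self, max_sets):
        self.max_sets = max_sets
        self._sets = OrderedDict()  # camera specification -> image set

    def put(self, camera_spec, image_set):
        self._sets[camera_spec] = image_set
        self._sets.move_to_end(camera_spec)
        while len(self._sets) > self.max_sets:
            self._sets.popitem(last=False)  # evict least recently used

    def get(self, camera_spec):
        image_set = self._sets.get(camera_spec)
        if image_set is not None:
            self._sets.move_to_end(camera_spec)  # mark as recently used
        return image_set

    def evict(self, camera_spec):
        # Explicit eviction triggered by the controller.
        self._sets.pop(camera_spec, None)
```

A production cache would additionally weight entries by memory size and by their expected reuse under the current navigation technique.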
The implementation of the concept supports the seven interaction categories as
follows. As an interaction technique in the category Select, clicking on a feature
highlights the feature (implemented by colouring all pixels with the same object ID as
the clicked-on pixel). For Explore, the client allows rotating the camera around itself,
moving the camera by panning, and moving it by selecting a feature of interest that is
then brought into focus via a continuous camera animation (implemented by
rendering the local 3D reconstruction of the 3DGeoVE in a local NPVP with low
latency). For Reconfigure, the client allows rotating the camera on a sphere around a
selected feature in the 3DGeoVE (implementation similar to previous category). For
Encode, the client can specify the styling in a SLD document and retrieve projected
images with the styling applied from a WVS. For Abstract/Elaborate, the client offers
(geometric) zooming and tool-tips for displaying additional information about
features (implemented by modifying the field-of-view of the virtual camera and by
retrieving additional information encoded in GML for a feature identified by its
object ID). To Filter the data set being presented, the client can specify in a SLD
document conditionally what features to include and retrieve images from WVS with
the filtering applied. For Connect, the client can be combined with different displays
in a system based on coordinated, multiple views (e.g. as part of a mashup as
presented in Section 5.1).
5.3. Concept based on point-based modelling and rendering
In this subsection, we present a second concept that employs the latency hiding
technique presented previously in Section 5.2.1. However, instead of IBMR, the
concept presented in this section employs techniques based on point-based modelling
and rendering (PBMR) (Gross and Pfister 2007) for the client-side 3D reconstruction
and rendering. Since both concepts share several similarities, here, we will focus on
presenting the differences between them. Figure 4 (middle, right) depicts screenshots
of an implementation of this concept executing on a smartphone.
In the mapping stage, the client interprets the 3D surface patches derived from
the images as 3D points with the attributes colour and object ID. In addition, each point is assigned a spatial extent in object space derived from the surface patch. We assign a radius to each point, effectively interpreting each patch as a sphere. Note that
other representations such as circular disks, elliptical disks or voxels could be used.
Conceptually, the spheres derived from one image set approximate the continuous
surface of the visual representation of the 3DGeoVE visible in the image set. The
spheres define the visible surface geometry and topology. Then, the spheres are
added to an octree, a hierarchical spatial data structure. Spheres are stored in the
nodes and leaves of the octree. In the previously introduced IBMR representation,
each image set is transformed into one corresponding depth mesh and, thus, the 3D
surface patches derived from an image stay in the context of that image. In contrast,
in the PBMR representation, the surface patches are disconnected from each other
and their originating image set. They are added separately to the octree. The octree is
used for aggregating data, storing data in multiple resolutions, querying data and
rendering data. Data received from multiple calls to the WVS is integrated in a
unified manner. Multiple resolutions are stored at different levels of the data
structure, whereby each node stores a generalised representation of its child nodes.
The data structure supports querying spatial data in logarithmic complexity.
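A minimal sketch of such an octree might look as follows. For brevity it assumes spheres are simply pushed down to a fixed maximum depth, whereas the actual client additionally stores generalised representations in inner nodes; all names are illustrative.

```python
class OctreeNode:
    """Minimal octree over a cubic region. Spheres (centre points with a
    radius attribute) are pushed down to a fixed maximum depth, so insertions
    and lookups follow depth-bounded paths, in line with the logarithmic
    query complexity described in the text."""

    def __init__(self, centre, half_size):
        self.centre = centre
        self.half_size = half_size
        self.children = {}   # octant index (0..7) -> OctreeNode
        self.spheres = []    # spheres stored at this node

    def _octant(self, point):
        cx, cy, cz = self.centre
        x, y, z = point
        # One bit per axis: which side of the node centre the point lies on.
        return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

    def insert(self, centre, radius, depth=0, max_depth=5):
        if depth == max_depth:
            self.spheres.append((centre, radius))
            return
        octant = self._octant(centre)
        if octant not in self.children:
            h = self.half_size / 2.0
            offset = [h if (octant >> i) & 1 else -h for i in range(3)]
            child_centre = tuple(c + o for c, o in zip(self.centre, offset))
            self.children[octant] = OctreeNode(child_centre, h)
        self.children[octant].insert(centre, radius, depth + 1, max_depth)
```

In the real client, each inner node would additionally aggregate a generalised (lower-resolution) representation of its children to support multi-resolution rendering.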
We assume that in practical cases the amount of data needed for visualising
3DGeoVEs exceeds the main memory capacities of targeted computers used for
executing the client. Moreover, in contrast to the IBMR representation, the
aggregation of the local representation has the potential to be significantly more
effective and efficient. To exploit this potential, we aggregate on the client as much
data as feasible. For this, we propose to utilise appropriate out-of-core techniques
(Gobbetti et al. 2008) for implementing the client application. The general strategy is
to first retrieve the data as images from the WVS and dynamically add the data to
the octree. Then, when the amount of the locally accumulated data exceeds the main
memory capacity, parts of the octree are stored on the local hard disk and are
removed from main memory. Subsequently, when data is needed that does not reside
in main memory but on the hard disk, it is retrieved from that location instead of
from the remote service.
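The described paging behaviour can be sketched as follows, assuming octree parts are identified by an ID and serialised to local files; this is an illustrative sketch under those assumptions, not the client's implementation.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class OutOfCoreStore:
    """Sketch of the out-of-core strategy described above: octree parts are
    kept in main memory up to a budget; beyond that, the least recently used
    part is written to local disk and, on the next access, reloaded from
    there instead of from the remote service."""

    def __init__(self, max_in_memory, directory=None):
        self.max_in_memory = max_in_memory
        self.directory = directory or tempfile.mkdtemp()
        self._memory = OrderedDict()  # node id -> node data

    def _path(self, node_id):
        return os.path.join(self.directory, f"node_{node_id}.bin")

    def put(self, node_id, data):
        self._memory[node_id] = data
        self._memory.move_to_end(node_id)
        while len(self._memory) > self.max_in_memory:
            victim_id, victim = self._memory.popitem(last=False)
            with open(self._path(victim_id), "wb") as f:
                pickle.dump(victim, f)  # spill to local disk

    def get(self, node_id):
        if node_id in self._memory:
            self._memory.move_to_end(node_id)
            return self._memory[node_id]
        path = self._path(node_id)
        if os.path.exists(path):  # page back in from disk
            with open(path, "rb") as f:
                data = pickle.load(f)
            self.put(node_id, data)
            return data
        return None  # not cached anywhere: must be re-retrieved remotely
```

A `None` result corresponds to the case where the client has to request the data again as images from the WVS.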
In the rendering stage, the client uses the octree for rendering novel views. The
octree supports rendering the multi-resolution representation with view frustum
culling, level-of-detail control and control of a trade-off between performance and
quality. The rendering algorithm traverses the octree. When the projected screen
space area of a node falls below a given threshold (e.g. one pixel, or more than one
pixel for coarser representation and faster rendering), the spheres contained in the
node are rendered. When a leaf node is reached, its projected screen space area is
generally larger than the threshold and, thus, larger than one pixel. In this case, the
contained spheres are rendered with the splatting (Gross and Pfister 2007) technique.
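The traversal with a screen-space threshold can be sketched as follows. Projection and drawing are passed in as callbacks, and view frustum culling is omitted for brevity; the sketch is illustrative and not the client's renderer.

```python
def render_octree(node, project_screen_area, threshold, draw_spheres):
    """Traverse the octree and select a level of detail as described above:
    if a node's projected screen-space area falls below the threshold, its
    (generalised) spheres are drawn; otherwise traversal recurses into the
    children. Leaf nodes are drawn regardless (in the real client via
    splatting). `node` needs `children` (dict) and `spheres` attributes."""
    area = project_screen_area(node)
    if area < threshold or not node.children:
        draw_spheres(node.spheres)
        return
    for child in node.children.values():
        render_octree(child, project_screen_area, threshold, draw_spheres)
```

Raising the threshold makes the traversal stop higher in the tree, which is exactly the quality-for-speed trade-off the client's interaction mode exploits.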
The controller in the PBMR concept has the same role as in the IBMR concept.
However, the different behaviour of the mapping and rendering stages (e.g.
regarding aggregation) requires an accordingly adapted sampling strategy. The
implementation of the PBMR concept supports the seven interaction categories in a
similar way as the implementation of the IBMR concept presented in Section 5.2.
6. Discussion
In this section, we discuss how the approach proposed in Sections 4 and 5 supports
meeting the requirements identified in Section 2.
6.1. Applying SOA and standards
We propose designing 3D geovisualisation systems as distributed systems based on
SOA and OGC standards. By designing the system as a distributed system, the
resources for generating visual representations in terms of network, storage and
computing capacity can be allocated to computers within a network. Thus, local
clients and devices are freed from the burden to provide all required resources
locally. This can result in lightweight clients that can operate, e.g. in web browsers
and on mobile devices (supporting R5). Applying SOA for designing distributed
systems has the potential to improve several identified requirements. SOA promotes
interface orientation, encapsulation, hiding of implementation details, a common
base technology (e.g. web services) and a unified architectural view on the system
landscape on a high level of abstraction. These characteristics potentially improve
support for integration (R1), interoperability (R2) and platform independency (R5).
Moreover, reusable, competing services can encapsulate complex computer graphics
and geovisualisation concepts, techniques and metaphors to support effective visual
representations (R4) (Döllner 2005). Applying standards on the application level (i.e.
OGC standards) and on the base technology level (e.g. from W3C, OASIS, ISO) as
available and feasible potentially improves interoperability (R2).
6.2. Applying image-based representations
In our approach, interactive geovisualisation clients retrieve sets of 2D images of
projective views of 3DGeoVE generated by the 3D rendering service WVS. We
decided to separate the client and the service after the rendering stage and to use standard image formats as interchange formats. We argue that this approach has
specific advantages in meeting the stated requirements compared to other common
approaches. In particular, this includes approaches that separate client/service after
the mapping stage and transfer representations based on scene graphs, geometries
and textures (such as the W3DS).
Applying images for communication offers advantages regarding integration and
interoperability (R1, R2). Images are conceptually simple, robust, commonly used
and supported. Additionally, image formats exist (e.g. JPEG, PNG) that are
standardised, commonly used and supported, and storage and processing efficient.
Using the G-buffer concept (Saito and Takahashi 1990), multiple information layers
of a 3D model (e.g. 3D position, normal, colour, and object ID of surface elements)
can be encoded into 2D images. Thus, images can be used as an alternative
representation for 3D models that sample 3D models in a discrete and multi-
dimensional way and reduce the diversity and heterogeneity of their original
representations (e.g. points, triangles, NURBS, voxel) to a simpler, unified
representation.
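As a small illustration of the G-buffer idea, an object-ID layer can be packed into an ordinary RGB image by spreading a 24-bit ID over the three colour channels; the helper names are illustrative, not part of any standard.

```python
def encode_object_id(object_id):
    """Pack a 24-bit object ID into an (R, G, B) byte triple, so that an
    object-ID layer can be stored losslessly in an ordinary RGB image
    (e.g. PNG), following the G-buffer idea of encoding model information
    into image channels."""
    if not 0 <= object_id < 2 ** 24:
        raise ValueError("object ID does not fit into 24 bits")
    return ((object_id >> 16) & 0xFF, (object_id >> 8) & 0xFF, object_id & 0xFF)

def decode_object_id(rgb):
    """Recover the object ID from an (R, G, B) triple read back by a client."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```

Lossless compression (PNG rather than JPEG) is essential for such layers, since any lossy artefact would corrupt the decoded IDs; the measurements in Section 6.4 accordingly use PNG for the depth and object ID layers.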
Regarding the communication between client and service, which can be considered one major bottleneck, the image-based representation reduces the client/service communication complexity to a constant factor primarily depending on the
image resolution (R3). For one requested view, standardised, mature image formats
encode explicitly per pixel what is required to reproduce the view with given quality
preferences using specific, highly optimised compression algorithms. The required
storage size of an uncompressed image does not depend on the complexity of the
original 3D scene. It defines the upper bound for a compressed image representation.
In contrast, for instance, the size of representations based on scene graphs directly
depends on the complexity of the 3D scene. For visualising massive amounts of
geodata, typically, the storage required for encoding one view as an image is
significantly smaller than encoding it as a scene graph. Transferring representations
based on scene graphs to clients puts practical limits on the complexity of the models
that can be accessed and portrayed. These limits are significantly lower when
applying the image-based representation.
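The constant upper bound can be made concrete with a small helper that computes the uncompressed size of a view's image layers from resolution and bytes per pixel alone; the per-layer byte counts below are assumptions for illustration, not the WVS's actual encoding.

```python
def uncompressed_view_bytes(width, height, layers):
    """Upper bound for one view's payload: the uncompressed size of all image
    layers. It depends only on the resolution and bytes per pixel of each
    layer, never on the complexity of the 3D scene. `layers` maps layer name
    -> bytes per pixel (e.g. 3 for RGB colour, 4 for a 32-bit depth value,
    3 for an RGB-packed object ID)."""
    return {name: width * height * bpp for name, bpp in layers.items()}
```

For a 512 × 512 view with colour (3 bytes), depth (4 bytes) and object ID (3 bytes) layers, the uncompressed upper bound is 10 bytes per pixel, or about 2.6 MB, regardless of whether the scene contains ten buildings or ten million; compression then only shrinks this bound.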
Furthermore, using image-based representations has the potential to better
support effective, high-quality visualisation (R4) and platform independency (R5).
Visual representations presented to users do not significantly depend on client
resources such as storage and computing capacities. In particular, visualisations are
not restricted by the requirement that the employed 3D rendering is compatible with
every hardware and software configuration that potential users could provide.
Instead, geodata and processing reside in controlled, potentially powerful server
environments. This allows lightweight clients that can operate, e.g. in web browsers
and on mobile devices. Furthermore, visualisations are not limited by the
expressiveness of an intermediate, standardised description of the visual representation. For instance, when using a scene graph representation, clients are expected to
render the model as specified in the scene graph. Since common scene graph
representations (e.g. VRML, X3D, KML) cannot express every visualisation and
rendering technique that is applied in the 3D geovisualisation domain, their
expressiveness is limited. When using a 3D rendering service, the service encapsulates
the visualisation and rendering techniques and merely accepts parameters for
controlling the rendering instead of a specification of the rendering process.
The image-based approach offers advantages for styling (R7). A portrayal service
that implements the mapping and rendering stages can offer control over these stages
and, thus, a major part of the pipeline to a client. The offered types of styling are
only limited by what can be expressed by output images. In contrast, a portrayal
service that does not include the rendering stage leaves the responsibility for styling
in the rendering stage to the client. This increases the complexity of the client and
decreases interoperability. Additionally, the offered types of styling are restricted to
what can be expressed by scene graph-based representations.
In summary, the image-based approach directly supports meeting all the
requirements identified in Section 2 except interactivity (R6). Efficiently supporting
interactivity remains a major challenge that we address with the presented concepts
in Section 5.
6.3. Concepts for image-based, interactive visualisation clients
In this subsection, we discuss how the three concepts for image-based, interactive
visualisation clients presented in Section 5 address the requirements identified in
Section 2.
Advantages of the first presented concept based on additional service-side
functionality (Section 5.1) include that clients can be exceedingly lightweight (R5),
are easy to integrate with other applications (R1, R2), present original instead of
approximated visual representations of the remote 3DGeoVE (R4), immediately
display results of WVS requests with changed styling specifications in the next
view (R7) and implement several interaction techniques efficiently by calling specific
operations on the WVS instead of retrieving the needed source data and
implementing the operations locally (R3). On the contrary, the concept offers no
support for real-time navigation (R6), there is no reuse of image data between
consecutive views (R3), and the efficiency and effectiveness of the implementation of
specific interaction techniques depends on if and how well they are supported by
specific service operations (R6).
Advantages of the second concept based on IBMR (Section 5.2) include that its
implementation, hardware resource requirements and integration efforts are only
moderately complex (R1, R2, R5), it effectively provides low-latency interaction and
display updates (R6), supports several interaction techniques efficiently by exploiting
the retrieved G-buffers from the WVS (R6), reuses the images retrieved from the WVS
over several frames rendered locally on the client (R3) and displays results of WVS
requests with changed styling specifications as soon as the limited set of locally
maintained depth meshes and images with previously requested styling are evicted
from local memory (R7). On the other hand, aggregating locally a 3D reconstruction
of visual representations of the remote 3DGeoVE based on depth meshes is not
optimally effective and efficient (R1), retrieved images can only be reused for a limited
number of frames rendered locally (R3) and the locally rendered novel views are
approximations that suffer from hard-to-control under- and oversampling issues (R4).
In comparison to the IBMR concept, the advantages of the third concept based
on PBMR (Section 5.3) include effectively providing low-latency interaction and
display updates (R6), more effective and efficient aggregation of the local 3D
reconstruction from images (R1), reusing the images retrieved from the WVS for an
extensive amount of time and numerous frames rendered locally on the client due to
storing the 3D reconstruction out-of-core on local hard disk for later reuse (R3) and
the potential for locally rendering novel views with a higher visual quality and higher
efficiency due to the improved aggregation and the multi-resolution data structure
that exhibits fewer sampling issues (R4). On the downside, the PBMR concept requires
deleting the complete local 3D reconstruction when the styling specification for WVS
requests changes since the 3D reconstruction inherently represents the results of the
previously used styling specification (R7, R3). It requires more hardware resources
than the IBMR concept due to the potentially large memory footprint of the local
3D reconstruction kept in memory and out-of-core on local hard disk and the
computing intensive rendering of the 3D reconstruction (R5, R3). Moreover,
creating an implementation for high quality, multi-resolution PBMR that efficiently
utilises potentially limited hardware resources and, in particular, a GPU, is complex
and challenging (R5). Hence, implementation complexities, hardware resource
requirements and integration efforts of the PBMR concept are the most complex
when comparing the three concepts (R1, R2, R5).
6.4. Quantitative results
In the following, we present preliminary quantitative results of our initial, not yet
optimised proof-of-concept implementations of a 3D rendering service implementing
the WVS interface and three 3D clients implementing the three proposed concepts.
Moreover, we report on initial industry impact of our work.
In the first experiment, we aim at measuring the rate at which the 3D rendering
service can provide service consumers with rendered images. For this experiment, we
created a service consumer that stresses the service by sending 40 requests to the service
each requesting three image layers (colour, depth and object ID) for one 3D view. The
service consumer sends up to 10 requests in parallel. The service processes each request
sequentially. For each request, the service consumer receives one HTTP multipart
response containing the three requested image layers. In total, the service generates
and delivers 120 images. We measure the time for sending a request, rendering the
images, compressing the images (JPEG, PNG), sending the images and decompressing
the images by the service consumer. The experiment is performed in an intranet
environment. The service is executed on a desktop PC (Windows Server 2003,
1.86 GHz dual core, 2 GB RAM, nVidia GeForce GTX 260). The service consumer
is executed on a different PC connected to the network. As a result, we measure that a
service consumer can receive images at an average rate of 5.7 images per second for an
image resolution of 512 × 512 and 2.6 for 1024 × 1024. In a second experiment, we
measure the memory size of generated and transferred image layers while navigating
through the 3DGeoVE. For an image resolution of 512 × 512, on average, colour
required 77.95 kbytes (JPEG), depth 199.63 kbytes (PNG) and object ID 9.56 kbytes
(PNG). We expect to achieve higher delivery and compression rates in the future by
applying advanced parallel processing and compression techniques.
In a third experiment, we measure the latency of interactions of a user with the 3D
client implementation based on additional service-side functionality (Section 5.1).
In summary, each interaction with the client that requires no requests from the WVS
(e.g. defining marks, lines and paths in the 3D view) shows no perceivable latency.
The interactions that require requests from the WVS (e.g. moving the virtual
camera, measuring paths) show latencies under 400 ms. Requesting a new 3D view
with the GetView operation (single colour layer, 512 × 512) turns out to be the most
time consuming operation.
In a fourth experiment, we measure the rendering rate of the 3D client
implementation based on IBMR (Section 5.2). The implementation is based on Java
and OpenGL. The client is executed in a web browser on a notebook (Windows XP,
2.4 GHz dual core, 3 GB RAM, nVidia Quadro FX 570M with 512 MB RAM). In
this experiment, we log the rendering rate of the client while a user navigates several
minutes through the 3DGeoVE using different navigation techniques. While the user
navigates, the client retrieves images (512 × 512 resolution) from the service as
appropriate. The average rate of frames per second is zero when the user is not
interacting with the client and the current view does not change since the 3D view is
not updated in this situation, 284 when the user looks around from a fixed camera
position (rendering images from the WVS organised locally as a single cube map),
102 when the user employs a fly navigation technique (rendering few depth meshes)
and 71 when the user uses a goto navigation technique (rendering up to 12 depth
meshes). We expect to achieve higher rendering rates in the future by applying
adaptively triangulated depth meshes.
In a fifth experiment, we measure the rendering rate of the 3D client
implementation based on PBMR (Section 5.3). The implementation is based on
Cþþ, OpenGL ES and Apple iOS. The client is executed on an Apple iPhone 3GS.
As in the previous experiment, we log the rendering rate of the client while a user
navigates several minutes through the 3DGeoVE using different navigation
techniques. While the user navigates, the client retrieves images (320 × 480
resolution) from the service as appropriate. In summary, the average frame rate is
zero when the user is not interacting with the client and the current view does not
change (as in the IBMR client), 20 when the user changes the virtual camera
parameters (interaction mode) and below 5 for the duration that the virtual camera
parameters stay constant (quality mode). In the interaction mode, the client can
achieve a user defined frame rate (e.g. 20) by increasing the threshold for the
projected screen space area of octree nodes and, thus, sacrifices visual quality for
rendering speed. In the quality mode, the client sets the threshold to below one
favouring quality over speed. In this mode, frame rates below one can occur despite
the output-sensitive approach that already incorporates LOD. Reasons for this
include that the implementation currently does not support occlusion culling
(Akenine-Möller et al. 2008) (i.e. data is increasingly aggregated and used for
rendering, however, occluded parts are not discarded early in the rendering), and
that managing and rendering from a dynamic octree is generally not as efficient as
when using a static octree.
The authors collaborated with an industry partner, Autodesk Inc., on work on
the display client and the client based on IBMR. This collaboration led to an
integration of these approaches into products of our industry partner.
7. Conclusions
In this article, we identified seven practically relevant requirements for
3D geovisualisation systems with a focus on 3DGeoVEs, informed by the existing literature. We introduced the fundamentals of the SOA paradigm, standards,
and the distributed visualisation pipeline. We presented a general approach for
designing 3D geovisualisation systems intended to meet the previously identified
requirements. It was based on the introduced fundamental concepts, image-based
representations and the WVS. Three concepts for image-based, interactive visualisa-
tion clients were presented as instances of the general approach. Finally, we discussed
how the proposed general approach and the three concrete concepts support meeting
the previously identified requirements.
As the key advantage of the image-based approach, the complexity that a client is
exposed to for displaying a visual representation is reduced to a constant factor
primarily depending on the image resolution. The client is shielded from the
arbitrarily complex process and its data representations of generating visual
representations from massive geodata. In summary, the image-based approach
directly supports meeting all the requirements identified in Section 2 except
interactivity. The three presented concepts exploit the complexity reduction of the
image-based approach and already provide interactivity to different degrees.
However, providing interactivity within a service-oriented, standards- and image-
based approach remains a major challenge. Our future work aims at improving
efficiency, the quality of visual representations, and providing interactivity on the
same level as can be experienced in non-distributed 3D geovisualisation systems.
Acknowledgements
The authors thank Lars Schneider and Norman Holz for contributing to the implementation
of the point-based rendering client, the 3D Content Logistics GmbH (www.3dcontentlogistics.com) for inspiring discussions on the topic, and Autodesk Inc. for successful
collaboration.
References
Akenine-Möller, T., Haines, E., and Hoffman, N., 2008. Real-time rendering. 3rd ed.
Natick, MA, USA: A. K. Peters, Ltd.
Andrienko, G., et al., 2005. Creating instruments for ideation: software approaches to
geovisualization. Oxford: Elsevier.
Basanow, J., et al., 2008. Towards 3D spatial data infrastructures (3D-SDI) based on open
standards – experiences, results and future issues. Lecture Notes in Geoinformation and
Cartography. New York: Springer, 65–86.
Bishr, Y.A., 1998. Overcoming the semantic and other barriers to GIS interoperability.
International Journal of Geographical Information Science, 12 (4), 299–314.
Brodlie, K., et al., 2004. Distributed and collaborative visualization. Computer Graphics
Forum, 23 (2), 223–251.
Brodlie, K.W., et al., 2007. Adaptive infrastructure for visual computing. In: Proceedings of Theory and Practice of Computer Graphics. Bangor, 147–156.
Chang, C.-F. and Ger, S.-H., 2002. Enhancing 3D graphics on mobile devices by image-based
rendering. In: Proceedings of the Third IEEE PCM 2002. London, UK: Springer-Verlag.
Döllner, J., 2005. Geovisualization and real-time 3D computer graphics. In: J. Dykes,
A.M. MacEachren, and M.-J. Kraak, eds. Exploring geovisualization. Amsterdam:
Elsevier Science, 325–344.
Doyle, A. and Cuthbert, A., 1998. Essential model of interactive portrayal. Open Geospatial
Consortium Inc., November 1998.
Dykes, J., 2005. Facilitating interaction for geovisualization. In: J. Dykes, A.M. MacEachren,
and M.-J. Kraak, eds. Exploring geovisualization. Amsterdam: Elsevier, 265–291.
Erl, T., 2005. Service-oriented architecture: concepts, technology, and design. NJ: Prentice Hall,
Upper Saddle River.
Filip, D., 2009. Introducing smart navigation in street view. Available from: http://google-latlong.blogspot.com/2009/06/introducing-smart-navigation-in-street.html [Accessed 31 March 2011].
Ge, J., 2007. A point-based remote visualization pipeline for large-scale virtual reality.
PhD thesis, University of Illinois at Chicago.
Gobbetti, E., Kasik, D., and Yoon, S.-E., 2008. Technical strategies for massive model visualization. In: Proceedings of the 2008 ACM symposium on solid and physical modeling. ACM.
Gross, M. and Pfister, H., 2007. Point-based graphics. San Francisco, CA, USA: Morgan
Kaufmann Publishers Inc.
Haber, R.B. and McNabb, D.A., 1990. Visualization idioms: a conceptual model for scientific
visualization systems. In: B. Shriver, G.M. Nielson, and L. Rosenblum, eds.
Visualization in scientific computing. Los Alamitos: IEEE Computer Society Press,
74–93.
Hagedorn, B., Hildebrandt, D., and Döllner, J., 2009. Towards advanced and interactive web
perspective view services. In: T. Neutens, and P. De Maeyer, eds. Developments in 3D
geo-information sciences. New York: Springer, 33–51.
Hagedorn, B., Hildebrandt, D., and Döllner, J., 2010. Web view service discussion paper,
Version 0.6.0. Open Geospatial Consortium Inc., February.
Hildebrandt, D. and Döllner, J., 2009. Implementing 3D geovisualization in spatial data
infrastructures: the pros and cons of 3D portrayal services. Geoinformatik 2009, 35, 1–9.
Lamberti, F. and Sanna, A., 2007. A streaming-based solution for remote visualization of 3D
graphics on mobile devices. IEEE Transactions on Visualization and Computer Graphics,
13 (2), 247–260.
Lupp, M., ed., 2007. Styled layer descriptor profile of the web map service implementation
specification, Version 1.1.0. Open Geospatial Consortium Inc., June.
MacEachren, A.M. and Kraak, M.-J., 2001. Research challenges in geovisualization.
Cartography and Geographic Information Science, 28 (1), 3–12.
MacEachren, A.M., et al., 2004. Geovisualization for knowledge construction and decision
support. IEEE Computer Graphics and Applications, 24 (1), 13–17.
Neubauer, S. and Zipf, A., eds., 2009. 3D-Symbology encoding discussion draft, Version 0.0.1.
Open Geospatial Consortium Inc.
Open Geospatial Consortium (OGC), 2010. Available from: http://www.opengeospatial.org/
[Accessed 31 March 2011].
Papazoglou, M.P., et al., 2007. Service-oriented computing: state of the art and research
challenges. Computer, 40 (11), 38–45.
Rhyne, T.-M. and MacEachren, A.M., 2004. Visualizing geospatial data. In: SIGGRAPH 2004: ACM SIGGRAPH 2004 Course Notes. New York: ACM, 31.
Saito, T. and Takahashi, T., 1990. Comprehensible rendering of 3-D shapes. SIGGRAPH Computer Graphics, 24 (4), 197–206.
Schilling, A. and Kolbe, T.H., eds., 2010. Draft for candidate OpenGIS web 3D service interface
standard, Version 0.4.0. Open Geospatial Consortium Inc.
Shum, H.-Y., Chan, S.-C., and Kang, S.B., 2007. Image-based rendering. New York: Springer.
20 D. Hildebrandt et al.
Sisneros, R., et al., 2007. A multi-level cache model for run-time optimization of remote
visualization. IEEE Transactions on Visualization and Computer Graphics, 13 (5),
991–1003.
Wang, H., et al., 2008. Service-oriented approach to collaborative visualization. Concurrency
and Computation: Practice & Experience, 20 (11), 1289–1301.
Yi, J.S., et al., 2007. Toward a deeper understanding of the role of interaction in information
visualization. IEEE Transactions on Visualization and Computer Graphics, 13 (6),
1224–1231.
Journal of Location Based Services 21
... For instance, visually styling an image in image space at image resolution, instead of processing the original features, allows styling an image in a decoupled manner after image generation and implementing styling output-sensitively, effectively and efficiently [7]. In a distributed visualization system, an interactive visualization client can generate novel views from locally cached IReps that the client retrieved from servers, improving client-side efficiency, decoupling and platform independence [8]. Common 2D web map viewers apply this general principle. ...
... A Web Image-based Styling Service (WISS) [7] styles IViews according to specified styling specifications. A Novel View Service (NVS) [8] interactively renders novel views from specified viewpoints and input IViews. As a supplementary application of the SRA, we propose an interactive navigation technique for V3DCMs that exploits IViews [9]. ...
... View-independent computations are performed on the server-side and reused on the client-side for different novel views with view-dependent computations applied on top. We discuss further related work in dedicated articles that relate to IViews and WVS [7,47,48], WISS [7], NVS [8] and the proposed navigation technique [9]. ...
Article
Full-text available
Modern 3D geovisualization systems (3DGeoVSs) are complex and evolving systems that are required to be adaptable and leverage distributed resources, including massive geodata. This article focuses on 3DGeoVSs built based on the principles of service-oriented architectures, standards and image-based representations (SSI) to address practically relevant challenges and potentials. Such systems facilitate resource sharing and agile and efficient system construction and change in an interoperable manner, while exploiting images as efficient, decoupled and interoperable representations. The software architecture of a 3DGeoVS and its underlying visualization model have strong effects on the system's quality attributes and support various system life cycle activities. This article contributes a software reference architecture (SRA) for 3DGeoVSs based on SSI that can be used to design, describe and analyze concrete software architectures with the intended primary benefit of an increase in effectiveness and efficiency in such activities. The SRA integrates existing, proven technology and novel contributions in a unique manner. As the foundation for the SRA, we propose the generalized visualization pipeline model that generalizes and overcomes expressiveness limitations of the prevalent visualization pipeline model. To facilitate exploiting image-based representations (IReps), the SRA integrates approaches for the representation, provisioning and styling of and interaction with IReps. Five applications of the SRA provide proofs of concept for the general applicability and utility of the SRA. A qualitative evaluation indicates the overall suitability of the SRA, its applications and the general approach of building 3DGeoVSs based on SSI.
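The novel-view generation from locally cached image-based representations described above rests on depth-image warping: each pixel of a reference view is back-projected using its depth value and re-projected under a displaced camera. The following minimal Python sketch illustrates that idea; the pinhole-camera model and all function names are our own illustration, not an API from the cited work.

```python
def unproject(px, py, depth, f, cx, cy):
    # Back-project pixel (px, py) with the given depth to a 3D point in
    # camera space, using a pinhole model with focal length f and
    # principal point (cx, cy).
    x = (px - cx) * depth / f
    y = (py - cy) * depth / f
    return (x, y, depth)

def project(point, f, cx, cy):
    # Forward-project a camera-space 3D point back to pixel coordinates.
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

def reproject_pixel(px, py, depth, f, cx, cy, dx=0.0, dy=0.0, dz=0.0):
    # Warp one pixel of a depth image into a novel view whose camera is
    # translated by (dx, dy, dz) relative to the reference view.
    x, y, z = unproject(px, py, depth, f, cx, cy)
    return project((x - dx, y - dy, z - dz), f, cx, cy)
```

Applied to every pixel of a colour-plus-depth image, this yields a client-side approximation of the remote 3D scene from a new viewpoint without any access to the original geometry.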
... The goal is to avoid any compatibility issues between a plugin and a browser or platform. The system should work independently of the client's software and platforms (R3), allowing dissemination of the products to be optimized and compatibility issues to be pre-empted (Hildebrandt et al. 2011). ...
... is an important requirement mentioned in the literature along with integration. They are needed to connect computers in an efficient and effective manner on more than one level of abstraction (Brodlie et al. 2007, Hildebrandt et al. 2011). Interoperability has many advantages, including allowing flexible and adaptive systems to be built that can fulfill various objectives (Andrienko et al. 2005) and offering access to different geodata sources in a homogeneous way with a single set of processing tools (Altmaier & Kolbe 2003). ...
... Non-functional features are also part of the requirements, especially the support for straightforward updating, scale-up and extensibility (R5) on the one hand and reuse and robustness (R6) on the other. R5 and R6 contribute to the life span of the product and to the optimization of its use (Hildebrandt et al. 2011). Furthermore, for flexibility and extensibility reasons, open source solutions (R7) should be favored. ...
Conference Paper
Full-text available
The field of web cartography, and thus of web atlases, has been growing and changing fast due to the democratization of the digital media, the world wide web and finally the 3D technologies. In this article, we will discuss the advantages and challenges that arise with the use of service-oriented architectures and 3D visualization for web atlases. A literature and technology review will allow to define requirements for service-driven 3D atlases. Then, we will test a prototype against these requirements to assess strengths and weaknesses of available solutions. Finally, we will offer concluding remarks and further directions of development.
... It allows for optimizing the depiction of complex 3D scene geometry and is well suited for server-side rendering. In general, generated images can be transferred efficiently to clients that reconstruct the original 3D scene based on these images [HHD11]. For example, a rendering server can generate a six-sided cube map of a 3D scene, then transfer it to a client viewer, which displays it [DHK12]. ...
... It uses per-pixel data for information about depth, object ID, or object class. It aims at building lightweight, interactive geovisualization clients that use techniques for image-based rendering introduced earlier [HHD11; DHK12]. A recently presented example of a geovisualization system based on the OGC 3DPS uses the standard as an interface for different rendering back-ends based on ray tracing [Gut+16]. ...
Thesis
Full-text available
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps. The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware, etc.) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and that then render those received 3D models. However, these applications have to implement most stages of the visualization pipeline on client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations. Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. 
This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models: It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models: This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models: It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models: It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths by a map-based user interface, configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can automatically be updated when the underlying data changes.
5. Service-Based Camera Interaction: It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build such web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems—a key challenge of today’s IT systems engineering to build scalable IT solutions.
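A client of such image-based provisioning services typically requests rendered views through parameterized HTTP calls. The sketch below assembles a GetView-style request URL; the parameter names (POC, POI, etc.) are simplified stand-ins for illustration and should not be read as the normative WVS request schema.

```python
from urllib.parse import urlencode

def build_getview_url(base_url, camera_pos, look_at, width, height,
                      fmt="image/png"):
    # Assemble a GetView-style request for a server-side 3D rendering
    # service. The parameter names below are illustrative only.
    params = {
        "SERVICE": "WVS",
        "REQUEST": "GetView",
        "POC": ",".join(str(v) for v in camera_pos),  # point of camera
        "POI": ",".join(str(v) for v in look_at),     # point of interest
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)
```

Because the response is an image of fixed resolution, the payload size is independent of the complexity of the underlying city model, which is the central efficiency argument of the image-based approach.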
... It has been used in 2D web mapping for a long time, with OGC WMS and tile caches serving rendered 2D maps that can be displayed by desktop GIS software, browser plugins, and mobile applications. 3D portrayal in web applications is possible either by rendering perspective images on the server, which can be integrated into a cube map as described by Hildebrandt et al. (2011), or by providing scene graph elements that can be rendered by web clients using 3D hardware rendering. The introduction of WebGL in browsers has boosted the development of interactive 3D applications for the web, also for geospatial visualization (Christen et al. 2012). ...
... The concept of rendering perspective images and sending them over the network is followed by Hildebrandt et al. (2011) and Döllner et al. (2012), which is a valid approach for enabling semi-interactive visualization on mobile devices. Images can be easily encoded and displayed by all mobile platforms nowadays. ...
Article
In this thesis, concepts for developing Spatial Data Infrastructures with an emphasis on visualizing 3D landscape and city models in distributed environments are discussed. Spatial Data Infrastructures are important for public authorities in order to perform tasks on a daily basis, and serve as research topic in geo-informatics. Joint initiatives at national and international level exist for harmonizing procedures and technologies. Interoperability is an important aspect in this context - as enabling technology for sharing, distributing, and connecting geospatial data and services. The Open Geospatial Consortium is the main driver for developing international standards in this sector and includes government agencies, universities and private companies in a consensus process. 3D city models are becoming increasingly popular not only in desktop Virtual Reality applications but also for being used in professional purposes by public authorities. Spatial Data Infrastructures focus so far on the storage and exchange of 3D building and elevation data. For efficient streaming and visualization of spatial 3D data in distributed network environments such as the internet, concepts from the area of real time 3D Computer Graphics must be applied and combined with Geographic Information Systems (GIS). For example, scene graph data structures are commonly used for creating complex and dynamic 3D environments for computer games and Virtual Reality applications, but have not been introduced in GIS so far. In this thesis, several aspects of how to create interoperable and service-based environments for 3D spatial data are addressed. These aspects are covered by publications in journals and conference proceedings. 
The introductory chapter provides a logical succession from geometrical operations for processing raw data, to data integration patterns, to system designs of single components, to service interface descriptions and workflows, and finally to an architecture of a complete distributed service network. Digital Elevation Models are very important in 3D geo-visualization systems. Data structures, methods and processes are described for making them available in service-based infrastructures. A specific mesh reduction method is used for generating lower levels of detail from very large point data sets. An integration technique is presented that allows the combination with 2D GIS data such as roads and land use areas. This approach allows using another optimization technique that greatly improves the usability for immersive 3D applications such as pedestrian navigation: flattening road and water surfaces. It is a geometric operation, which uses data structures and algorithms found in numerical simulation software implementing Finite Element Methods. 3D Routing is presented as a typical application scenario for detailed 3D city models. Specific problems such as bridges, overpasses and multilevel networks are addressed and possible solutions described. The integration of routing capabilities in service infrastructures can be accomplished with standards of the Open Geospatial Consortium. An additional service is described for creating 3D networks and for generating 3D routes on the fly. Visualization of indoor routes requires different representation techniques. As server interface for providing access to all 3D data, the Web 3D Service has been used and further developed. Integrating and handling scene graph data is described in order to create rich virtual environments. Coordinate transformations of scene graphs are described in detail, which is an important aspect for ensuring interoperability between systems using different spatial reference systems.
The Web 3D Service plays a central part in nearly all experiments that have been carried out. It does not only provide the means for interactive web-visualizations, but also for performing further analyses, accessing detailed feature information, and for automatic content discovery. OpenStreetMap and other worldwide available datasets are used for developing a complete architecture demonstrating the scalability of 3D Spatial Data Infrastructures. Its suitability for creating 3D city models is analyzed, according to requirements set by international standards. A full virtual globe system has been developed based on OpenStreetMap including data processing, database storage, web streaming and a visualization client. Results are discussed and compared to similar approaches within geo-informatics research, clarifying in which application scenarios and under which requirements the approaches in this thesis can be applied.
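The scene-graph coordinate transformation discussed in this thesis can be pictured as a recursive traversal that rewrites every node's coordinates. The toy version below assumes each node stores an absolute position and applies a plain offset rather than a full coordinate-reference-system transformation; the node layout is invented for illustration.

```python
def transform_node(node, offset):
    # Recursively apply a coordinate offset (a stand-in for a full CRS
    # transformation) to a scene-graph node and all of its children,
    # returning a new tree and leaving the input untouched.
    node = dict(node)  # shallow copy; we replace the keys we change
    node["position"] = tuple(p + o for p, o in zip(node["position"], offset))
    node["children"] = [transform_node(c, offset)
                        for c in node.get("children", [])]
    return node
```

In a real system the offset would be replaced by a projection between spatial reference systems, but the traversal structure stays the same.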
... In these methods, the basic principle is that the majority of the processing is done on the server side and only a stream of images or video of the output is passed to the client device. For example, Paravati et al. [9] have presented a method for video stream adaptation, whereas Hildebrandt et al. [3,4] have proposed another approach utilizing extended cube-map-based rendering. As in these methods the virtual space is reproduced on the client device as a video or image stream, the client does not need to know about the geometric complexity of the original data. ...
Chapter
Web3D has gradually become the mainstream online 3D technology to support the Metaverse. However, massive multiplayer online Web3D still faces challenges such as slow culling of the potentially visible set (PVS) at servers, network congestion and sluggish online rendering in web browsers. To address these challenges, in this paper we propose a novel Web3D pipeline that coordinates PVS culling, network transmission, and Web3D rendering in a fine-grained way. The pipeline integrates three key steps: establishment of a granularity-aware voxelization scene graph, fine-grained PVS culling and transmission scheduling, and incremental instanced rendering. Our experiments on a massive 3D plant model have demonstrated that the proposed pipeline outperforms existing Web3D approaches in terms of transmission and rendering.
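The voxelization-based PVS culling this abstract refers to can be sketched coarsely: group objects by voxel cell, then decide visibility per cell instead of per object. The toy version below uses a cell-distance test as a stand-in for real frustum and occlusion culling; all names and the data layout are illustrative, not the cited pipeline's actual design.

```python
def voxel_key(point, cell_size):
    # Map a 3D point to its integer voxel cell.
    return tuple(int(c // cell_size) for c in point)

def build_voxel_index(objects, cell_size):
    # objects: mapping of id -> (x, y, z) centroid. Group ids by voxel
    # cell so visibility can be decided per cell instead of per object.
    index = {}
    for oid, pos in objects.items():
        index.setdefault(voxel_key(pos, cell_size), []).append(oid)
    return index

def potentially_visible(index, camera_cell, radius_cells):
    # Coarse PVS: every object whose cell lies within a given cell
    # radius of the camera's cell is considered potentially visible.
    pvs = []
    for cell, ids in index.items():
        if all(abs(c - k) <= radius_cells
               for c, k in zip(cell, camera_cell)):
            pvs.extend(ids)
    return sorted(pvs)
```

The point of the per-cell decision is that the server touches one index entry per cell rather than one per object, which is what makes fine-grained culling tractable for massive scenes.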
Conference Paper
Full-text available
Design, implementation, and operation of interactive 3D map services are faced with a large number of challenges including (a) processing and integration of massive amounts of heterogeneous and distributed 2D and 3D geodata such as terrain models, buildings models, and thematic georeferenced data, (b) assembling, styling, and rendering 3D map contents according to application requirements and design principles, and (c) interactive provisioning of created 3D maps on mobile devices and thin clients as well as their integration as third-party components into domain-specific web and information systems. This paper discusses concept and implementation of a service-oriented platform that addresses these major requirements of 3D web mapping systems. It is based on a separation of concerns for data management, 3D rendering, application logic, and user interaction. The main idea is to divide 3D rendering process into two stages. In the first stage, at the server side, we construct an image-based, omni-directional approximation of the 3D scene by means of multi-layered virtual 3D panoramas; in the second stage, at the client side, we interactively reconstruct the 3D scene based on the panorama. We demonstrate the prototype implementation for real-time 3D rendering service and related iOS 3D client applications. In our case study, we show how to interactively visualize a complex, large-scale 3D city model based on our service-oriented platform.
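The two-stage approach above has the client reconstruct the scene from a server-generated omni-directional panorama, which requires mapping each view ray to one of the six cube faces. A minimal sketch of that face selection, assuming a conventional axis-aligned cube map (the face labels are our own convention):

```python
def cube_face(direction):
    # Select which of the six cube-map faces a view ray hits, based on
    # the dominant axis of the (not necessarily normalized) direction.
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

Once the face is known, the remaining two components divided by the dominant one give the texture coordinates within that face, which is how the client samples the panorama during interactive reconstruction.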
Chapter
Full-text available
Smartphones with larger screens, powerful processors, abundant memory, and an open operation system provide many possibilities for 3D city or photorealistic model applications. 3D city or photorealistic models can be used by the users to locate themselves in the 3D world, or they can be used as methods for visualizing the surrounding environment once a smartphone has already located the phone by other means, e.g. by using GNSS, and then to provide an interface in the form of a 3D model for the location-based services. In principle, 3D models can be also used for positioning purposes. For example, matching of images exported from the smartphone and then registering them in the existing 3D photorealistic world provides the position of the image capture. In that process, the central computer can do a similar image matching task when the users locate themselves interactively into the 3D world. As the benefits of 3D city models are obvious, this chapter demonstrates the technology used to provide photorealistic 3D city models and focus on 3D data acquisition and the methods available in 3D city modeling, and the development of 3D display technology for smartphone applications. Currently, global geoinformatic data providers, such as Google, Nokia (NAVTEQ), and TomTom (Tele Atlas), are expanding their products from 2D to 3D. This chapter is a presentation of a case study of 3D data acquisition, modeling and mapping, and visualization for a smartphone, including an example based on data collected by mobile laser scanning data from the Tapiola (Espoo, Finland) test field.
Conference Paper
Full-text available
Virtual 3D city models serve as integration platforms for complex geospatial and georeferenced information and as a medium for effective communication of spatial information. In this paper, we present a system architecture for service-oriented, interactive 3D visualization of massive 3D city models on thin clients such as mobile phones and tablets. It is based on high-performance, server-side 3D rendering of extended cube maps, which are interactively visualized by corresponding 3D thin clients. As a key property, the complexity of the cube map data transmitted between server and client does not depend on the model's complexity. In addition, the system allows the integration of thematic raster and vector geodata into the visualization process. Users have extensive control over the contents and styling of the visual representations. The approach provides a solution for safely and robustly distributing and interactively presenting massive 3D city models. A case study related to city marketing based on our prototype implementation shows the potentials of both server-side 3D rendering and fully interactive 3D thin clients on mobile phones.
Conference Paper
Full-text available
Recent hardware and software advances have demonstrated that it is now practicable to run large visual computing tasks over heterogeneous hardware with output on multiple types of display devices. As the complexity of the enabling infrastructure increases, then so too do the demands upon the programmer for task integration as well as the demands upon the users of the system. This places importance on system developers to create systems that reduce these demands. Such a goal is an important factor of autonomic computing, aspects of which we have used to influence our work. In this paper we develop a model of adaptive infrastructure for visual systems. We design and implement a simulation engine for visual tasks in order to allow a system to inspect and adapt itself to optimise usage of the underlying infrastructure. We present a formal abstract representation of the visualization pipeline, from which a user interface can be generated automatically, along with concrete pipelines for the visualization. By using this abstract representation it is possible for the system to adapt at run time. We demonstrate the need for, and the technical feasibility of, the system using several example applications.
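The abstract pipeline representation referred to here follows the classic filter, map, render data-flow model for visualization. A minimal sketch of composing such stages into one runnable pipeline; the three stage functions are invented toy examples, not the paper's actual components:

```python
def make_pipeline(*stages):
    # Compose visualization stages (e.g. filter -> map -> render) into a
    # single callable, mirroring the data-flow pipeline model: the
    # output of each stage feeds the next.
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Illustrative stages: threshold filter, mapping values to bar symbols,
# and "rendering" the bars to a text block.
def filter_stage(values):
    return [v for v in values if v > 0]

def map_stage(values):
    return ["#" * v for v in values]

def render_stage(bars):
    return "\n".join(bars)
```

Representing the pipeline as data rather than hard-wired calls is what lets a system inspect, reconfigure, and adapt it at run time, as the paper proposes.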
Article
Full-text available
This course reviews concepts and highlights new directions in GeoVisualization. We review four levels of integrating geospatial data and geographic information systems (GIS) with scientific and information visualization (VIS) methods. These include:
• Rudimentary: minimal data sharing between the GIS and Vis systems
• Operational: consistency of geospatial data
• Functional: transparent communication between the GIS and Vis systems
• Merged: one comprehensive toolkit environment
We review how to apply both information and scientific visualization fundamentals to the visual display of geospatial and geoinformatics data. Distributed GeoVisualization systems that allow for collaborative synchronous and asynchronous visual exploration and analysis of geospatial data via the Web, Internet, and large-screen group-enabled displays are discussed. This includes the application of intelligent agent and spatial data mining technologies. Case study examples are shown in real time during the course.
Article
The polygon-mesh approach to 3D modeling was a huge advance, but today its limitations are clear. Longer render times for increasingly complex images effectively cap image complexity, or else stretch budgets and schedules to the breaking point. Point-based graphics promises to change all that, and this book explains how. Comprised of contributions from leaders in the development and application of this technology, Point-Based Graphics examines it from all angles, beginning with the way in which the latest photographic and scanning devices have enabled modeling based on true geometry, rather than appearance. From there, it's on to the methods themselves. Even though point-based graphics is in its infancy, practitioners have already established many effective, economical techniques for achieving all the major effects associated with traditional 3D Modeling and rendering. You'll learn to apply these techniques, and you'll also learn how to create your own. The final chapter demonstrates how to do this using Pointshop3D, an open-source tool for developing new point-based algorithms. A copy of this tool can be found on the enclosed CD. The first book on a major development in graphics by the pioneers in the field * This technique allows 3D images to be manipulated as easily as Photoshop works with 2D images * Includes CD-ROM with the open source software program Pointshop3D for experimentation with point graphics.
Chapter
Ideation relates to the formation of ideas and concepts—the end goal of geovisualization. There are many tools and techniques for creating instruments for ideation—sophisticated hardware, advanced programming languages, graphics libraries, visual programming systems, and complex GUIs. In each, the developer or visualizer wishes to generate effective interactive graphic realizations of their data that are useful to them and/or their users. This chapter expands upon these ideas and considers the way each of these issues influences the uses and development of software instruments that support the exploratory process. Some examples of software approaches are also documented. Current technology has an important enabling and limiting impact upon the available range of instruments for ideation, which changes significantly over time. A major benefit of contemporary computer technology is the possibility to rapidly generate various graphical displays from data. This gives an opportunity to try alternative transient realizations of data, to discard those deemed ineffectual but when necessary reproduce them again, and to look at several displays simultaneously to provide multiple views of data. It is not only the increase of computer power that offers new opportunities to create more sophisticated instruments, but also the progress in software environments such as the development of programming tools that are high level and/or cross platform and the availability of libraries and reusable software components.
Chapter
This chapter provides a brief overview of some of the approaches available to support geovisualization. Various tools and techniques have been introduced and contextualized through the personal experience of capitalizing upon the opportunities afforded by technologies that enable to use and develop interactive graphics to prompt thinking. Efficient combination of various ways of instructing computers is identified as a key objective to overcoming impediments to the process of geovisualization and the concept of the "visualization effort" required to chase ideas and support the thought process is emphasized. A number of means of increasing efficiencies, sharing software components, and reusing resources to facilitate interaction are discussed. Scripting is identified as an approach that offers much to fields where application design involves combining existing software functionality in new and unpredictable ways in an iterative process of continual change. Geovisualization is one such application area and flexible high-level environment for instructing computers that offer rapid results and efficiencies and flexibility by drawing upon existing functionality; the opportunity to augment this through integration with lower level languages possess considerable scope for use as instruments for ideation.
Article
In this contribution, we would like to outline the impact of real-time 3D computer graphics on geovisualization. Various real-time 3D rendering techniques have been developed recently that provide technical fundamentals for the design and implementation of new geovisualization strategies, systems, and environments. Among these key techniques are multi-resolution modelling, multi-texturing, dynamic texturing, programmable shading, and multi-pass rendering. They lead to significant improvements in visual expressiveness and interactivity in geovisualization systems. Several examples illustrate applications of these key techniques. Their complex implementation requires encapsulated, re-usable, and extensible components as critical elements to their success and shapes the software architecture of geovisualization systems and applications.
Article
New visualization techniques are frequently demonstrated and much academic effort goes into the production of software tools to support visualization. Here, the authors of subsequent chapters in this section identify reasons why they continue to enhance and develop the instruments that they design to support the process of geovisualization, justifying their ongoing work and in doing so offering some perspectives on and solutions to the issues that they address. A number of inter-related themes arise including: advances in technology that create opportunities and generate demands for new geovisualization solutions; increasingly rich data sets and sources that drive design due to the associated potential for revealing new structures and relationships; various and novel tasks to which geovisualization is being applied associated with debate and continuing research concerning the kinds of instrument that are required to best undertake particular tasks in particular conditions; an increasingly diverse set of users who require a variety of tools, environments and systems to support ideation in its numerous forms, including those who participate in simulations of visualization when learning; changes in the available expertise that prompt the development of ideas and instruments that borrow from advances and methods in cognate disciplines such
Article
Visualization is a powerful tool for analyzing data and presenting results in science, engineering and medicine. This paper reviews ways in which it can be used in distributed and/or collaborative environments. Distributed visualization addresses a number of resource allocation problems, including the location of processing close to data for the minimization of data traffic. The advent of the Grid Computing paradigm and the link to Web Services provides fresh challenges and opportunities for distributed visualization—including the close coupling of simulations and visualizations in a steering environment. Recent developments in collaboration have seen the growth of specialized facilities (such as Access Grid) which have supplemented traditional desktop video conferencing using the Internet and multicast communications. Collaboration allows multiple users—possibly at remote sites—to take part in the visualization process at levels which range from the viewing of images to the shared control of the visualization methods. In this review, we present a model framework for distributed and collaborative visualization and assess a selection of visualization systems and frameworks for their use in a distributed or collaborative environment. We also discuss some examples of enabling technology and review recent work from research projects in this field.