Design and Evaluation of a Collaborative Online
Visualization and Steering Framework
Implementation for Computational Grids
Morris Riedel #1, Thomas Eickermann #, Wolfgang Frings #, Sonja Dominiczak #, Daniel Mallmann #,
Thomas Düssel #, Achim Streit #, Paul Gibbon #, Felix Wolf #+, Wolfram Schiffmann ∗, Thomas Lippert #
# Central Institute for Applied Mathematics, John von Neumann Institute for Computing
Forschungszentrum Jülich, D-52425, Jülich, Germany
1 m.riedel@fz-juelich.de
+ Department of Computer Science
RWTH Aachen University, D-52056, Aachen, Germany
∗ Institute of Computer Architecture, Department of Computer Science
University of Hagen, 58097, Hagen, Germany
Abstract—Today’s large-scale scientific research often relies on
the collaborative use of a Grid or e-Science infrastructure (e.g.
DEISA, EGEE, TeraGrid, OSG) with computational, storage,
or other types of physical resources. One of the goals of these
emerging infrastructures is to support the work of scientists with
advanced problem-solving tools. Many e-Science applications
within these infrastructures aim at simulations of a scientific
problem on powerful parallel computing resources. Typically, a
researcher first performs a simulation for some fixed amount
of time and then analyses results in a separate post-processing
step, for instance, by viewing results in visualizations. In earlier
work we have described early prototypes of a Collaborative
Online Visualization and Steering (COVS) Framework in Grids
that performs both - simulation and visualization - at the same
time (online) to increase the efficiency of e-Scientists. This paper
evaluates the evolved mature reference implementation of the
COVS framework design that is ready for production usage
within Web service-based Grid and e-Science infrastructures.
I. INTRODUCTION
Grid infrastructures such as DEISA, EGEE, OSG, or TeraGrid provide a wide variety of Grid services to enable large-scale resource sharing and access to unprecedented amounts of various types of Grid resources. An important objective for the Virtual Organizations (VOs) [1] that result from this sharing across organizational boundaries is to make efficient use of the provisioned computational Grid resources such as supercomputers, clusters, or server farms.
Scientific applications within these VOs and the underlying Grid infrastructures aim at simulations of physical, biological, chemical, or other domain-specific processes and unsolved scientific problems. These applications typically rely
on parallel computing techniques to compute solutions for
such scientific problems. Parallel computing simulations use
computers with multiple processors that are able to jointly
work on one or more specific problems at the same time. The
outcomes of these simulations are often analyzed in a separate
post-processing step, for instance by viewing the results in a
scientific domain-specific visualization application.
In order to increase the efficiency of e-Scientists and thus
their complete VOs, the collaborative online visualization
and steering (COVS) technique emerged that performs the
simulation and visualization at the same time. In this context
online visualization means that e-Scientists are able to observe the intermediate processing steps during the computation of the simulation. This allows for computational steering [2], i.e. influencing the computation of the simulation during its run-time on a supercomputer or cluster. Steering saves cost-intensive computational time on Grid resources, since e-Scientists can quickly react to potentially misrouted applications by steering their parameters back to correct values, or even guide the applications to interesting locations in the model.
The lack of a widely accepted common COVS framework within the major Grid middlewares (e.g. UNICORE, gLite, the Globus Toolkit) motivates the development of the framework
presented here. In earlier work we described the requirements
and design issues of early prototypes of the COVS framework
[3]. This paper focuses on the reference implementation
of the COVS framework that is based on the UNICORE
Grid middleware and the VISIT communication library [4].
However, an implementation of a COVS framework will only be accepted in realistic Grid scenarios if it reaches high levels of usability and sophisticated performance. Therefore, the contribution of this work is an evaluation of the framework’s architectural design. To demonstrate that the COVS framework is of practical relevance, the reference implementation is applied to a real-world test case, including a performance analysis of data connections and a scalability analysis of the key component enabling the collaboration.
Fig. 1. COVS framework reference implementation that is based on UNICORE as Grid middleware and the VISIT communication library.
Following the introduction, the scene is set in Section 2, where we present the design of the COVS framework reference implementation in UNICORE and its core building blocks. Section 3 then evaluates the proposed architectural design with respect to usability for end-users and performance measurements. Section 4 describes the user communities of the framework, Section 5 surveys related work, and the paper ends with some concluding remarks.
II. DESIGN AND REFERENCE IMPLEMENTATION
This section introduces the core building blocks of the
COVS framework and its components for collaborative sce-
narios that are necessary in large-scale Grid infrastructures.
The intention is to describe how existing components from the visualization and Grid communities fit into the designed framework and how several components can be augmented to provide a fully functional COVS reference implementation for end-users that meets the requirements arising within Grids.
Figure 1 provides an overview of the reference implemen-
tation of the COVS framework’s architectural design. The
framework determines the architecture of COVS applications
and defines the overall structure by addressing the key re-
sponsibilities and the interaction between its components.
The main motivation of the COVS framework is to support
High Performance Computing (HPC) applications in the area
of e-Science and thus to be used as a tool for efficiently
solving complex scientific problems such as grand challenge
problems. In addition, the COVS framework must be inte-
grated seamlessly into the different Grid environments (e.g.
DEISA) by hiding the differences in security policies, systems
architectures, access methods and resource representations to
reach an overall transparency for end-users.
A. Addressing Collaborative Aspects in COVS
A COVS framework implementation in Grids allows for
an easier collaboration between geographically distributed e-
Scientists during data analysis. Therefore, the COVS design
raises a demand for a multiplexer entity (e.g. the VISIT Multiplexer) that distributes the scientific data output from one parallel simulation to n bi-directional connections that connect the n scientific visualizations. This multiplexer is a novel component, can be considered a key component within the framework, and interconnects the simulation with n visualizations as shown in Figure 1. In addition, the design relies on a collaboration entity (e.g. the new VISIT Collaboration Server) that transports collaboration data (e.g. viewpoint changes) from one visualization to all other n-1
visualizations. Hence, the collaboration entity interacts with
all visualizations to ensure every participant shares the same
view on the data. Both the scientific and collaboration data
transfer have to be secured, for instance with an SSH tunnel
to avoid firewall problems. The framework provides the COVS
Grid service that controls the multiplexer and collaboration
server entities. That means a participant in the master role is able to add and remove participants using the COVS Grid service and is the only one who is able to submit the scientific simulation to the Grid or abort it there. This session management represents a major difference from single-user control, as does the problem that appears when one participant steers a parameter to the right while another steers it to the left. We use an explicit request token mechanism (setsteerer()) to ensure that only one participant is able to steer the simulation at a time. A similar mechanism is used to specify that only one participant at a time is allowed to change the
view (setcollaborator()). To sum up, the functionality of one
participant differs from the functionality of another.
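To illustrate this request token idea, the following minimal sketch shows how such a token could serialize steering access on the service side. It is an assumption-laden illustration in Java: class and method names are hypothetical and do not reflect the actual COVS Grid service code.

```java
// Minimal sketch of an explicit request-token mechanism, assuming one
// service-side object guards the steering right per COVS session.
// Class and method names are hypothetical (they mirror setsteerer()).
public class SteeringToken {

    private String holder; // participant currently allowed to steer, or null

    // Grant the steering token if it is free or already held by the caller.
    public synchronized boolean setSteerer(String participantId) {
        if (holder == null || holder.equals(participantId)) {
            holder = participantId;
            return true;  // caller may now send steering commands
        }
        return false;     // another participant steers; retry later
    }

    // Release the token so another participant can request it.
    public synchronized void release(String participantId) {
        if (participantId.equals(holder)) {
            holder = null;
        }
    }

    // Checked before any steering command is forwarded to the simulation.
    public synchronized boolean maySteer(String participantId) {
        return participantId.equals(holder);
    }
}
```

The same pattern applies to the view-changing right (setcollaborator()): conflicting commands are rejected at the service instead of racing each other at the simulation or the collaboration server.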
B. Architectural Design to Support e-Science Applications
The architecture of the COVS framework is specifically
designed to support a wide variety of parallel simulations
and visualizations from numerous scientific domains that
both represent rather domain-specific core building blocks
of the framework. As often within HPC environments, such
simulations are typically implemented by using the Message
Passing Interface (MPI) standard or other parallel computing
paradigms. The parallel simulations that are used in conjunc-
tion with the COVS framework are submitted to the compu-
tational resource using a Grid client (e.g. GPE Grid Client
[5]) and the underlying Grid middleware (e.g. UNICORE)
of the correspondent Grids (e.g. DEISA). In the context of
the online visualization of its outcome, this simulation must
provide data in a stepwise fashion to enable the visualization of
single computational steps. Hence, if the simulation provides
interim results, they can be transferred via a communication
library (e.g. VISIT) to the visualization and afterwards turned
into visualization idioms [6] by a visualization technology (e.g.
VTK) to show the result of the actual computation status. In
this context, a visualization idiom is any specific sequence of
data enrichment and enhancement transformations, visualiza-
tion mappings, and rendering transformations that produce a
display of a scientific dataset within a visualization application.
In addition to scientific data, steering commands must be
transferred from the visualization to the simulation and this
data transfer can be securely accomplished via bi-directional
connections over SSH, because most firewalls allow access via
SSH to the highly protected systems running the simulation.
The COVS reference implementation uses a technique for SSH tunnel establishment in Grids that was shown by Riedel et al. in [7]; it relies on RSA-based authentication, with the Grid middleware performing the RSA key exchange.
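The stepwise coupling described above can be sketched as follows. This is a minimal, self-contained illustration in Java under stated assumptions: the channel class is a hypothetical stand-in, since the actual VISIT API [4] is a C library with different names and details.

```java
import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a stepwise-coupled simulation loop. The VizChannel class is a
// hypothetical stand-in for a VISIT-like bi-directional online connection.
public class SteerableSimulationSketch {

    // Minimal stand-in for the bi-directional connection to a visualization.
    static class VizChannel {
        final Queue<String> steeringCommands = new ConcurrentLinkedQueue<>();

        void sendTimeStep(int step, double[] state) {
            // A real implementation would push the interim results through
            // the SSH tunnel to the visualization; here we just log them.
            System.out.printf("step %d: value = %.4f%n", step, state[0]);
        }

        Optional<String> pollSteeringCommand() {
            return Optional.ofNullable(steeringCommands.poll());
        }
    }

    public static void main(String[] args) {
        VizChannel channel = new VizChannel();
        double[] state = {1.0};

        for (int step = 0; step < 10; step++) {
            state[0] *= 0.9;                    // the actual computation step
            channel.sendTimeStep(step, state);  // offer interim results online

            // Apply a steering command if one arrived during the step.
            channel.pollSteeringCommand()
                   .ifPresent(cmd -> System.out.println("steering: " + cmd));
        }
    }
}
```

The essential point is that the simulation offers its data after each step and checks for steering input between steps, so that parameter changes take effect while the run is still consuming compute time.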
C. Enable Collaboration with COVS Grid Services
Today, Grid services conform to the Open Grid Services Architecture (OGSA) [8] and are typically implemented using the OASIS Web Services Resource Framework (WS-RF) [9] standard. In particular, the WS-RF compliant COVS Grid service represents another core building block of the COVS framework. In the reference implementation, for example,
it is implemented as a higher-level service on top of the
UNICORE Atomic Services (UAS) [10] that provide basic job
submission/management and file transfer functionalities. In
more detail, the COVS Grid service consists of two WS-RF
compliant services as shown in Figure 1, namely the COVS
Factory service and the COVS Session service. The COVS
Factory service implements the WS-RF implied factory pattern
that is defined as any kind of service that brings a stateful WS-
Resource (e.g. COVS session resource) into existence [9]. It
can be used to create new COVS session resources while the
access to these resources is provided by the COVS Session
service. The COVS session resource exposes the status of the COVS session by using the WS-Resource properties [9] mechanism. Figure 1 illustrates the COVS Session service that
consists of a MultiplexerAdapter that controls the multiplexer
entity. The COVS Session service provides operations (e.g.
RemoveParticipant()) that are forwarded to this multiplexer
using an XML-based protocol. Similar to the Multiplexer-
Adapter, the design comprises a CollaborationAdapter that is
also integrated as one component within the COVS Session
service. It can be used to control the collaboration server via service operations (e.g. shutdown()) that forward actions to the collaboration server using an XML-based protocol. Any
information gathered by the collaboration server is forwarded
to the service and in turn converted into properties of the
COVS session resource.
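The implied factory pattern can be illustrated with a minimal sketch. The Java classes below are hypothetical simplifications, assuming the WS-RF plumbing (endpoint references, SOAP bindings) is handled by the hosting environment; they are not the reference implementation’s actual service interfaces.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the WS-RF implied factory pattern behind the COVS services.
// All names are illustrative; real WS-RF services exchange SOAP messages
// and identify resources via endpoint references rather than plain IDs.
public class CovsServicesSketch {

    // A stateful WS-Resource representing one COVS session.
    static class CovsSessionResource {
        final String id = UUID.randomUUID().toString();
        // WS-Resource properties exposed to clients (participants, status, ...).
        final Map<String, String> properties = new ConcurrentHashMap<>();
    }

    // COVS Factory service: brings COVS session resources into existence.
    static class CovsFactoryService {
        final Map<String, CovsSessionResource> sessions = new ConcurrentHashMap<>();

        String createSession() {
            CovsSessionResource resource = new CovsSessionResource();
            sessions.put(resource.id, resource);
            return resource.id;
        }
    }

    // COVS Session service: provides access to existing session resources
    // and forwards operations to the multiplexer via an adapter.
    static class CovsSessionService {
        final CovsFactoryService factory;

        CovsSessionService(CovsFactoryService factory) {
            this.factory = factory;
        }

        void removeParticipant(String sessionId, String participant) {
            CovsSessionResource resource = factory.sessions.get(sessionId);
            // A MultiplexerAdapter would translate this call into the
            // XML-based control protocol of the VISIT multiplexer here.
            resource.properties.remove("participant:" + participant);
        }
    }
}
```

The factory/session split keeps session state addressable as a WS-Resource, while the adapters hide the multiplexer and collaboration server protocols behind ordinary service operations.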
To sum up, the scope of the WS-RF compliant COVS
Session service reaches from dynamic collaboration to authorized session management control. Authentication and authorization are provided by the Grid middleware. In the refer-
ence implementation, end-users of the COVS framework are
authenticated via their X.509 credentials at the UNICORE
Gateway [11] and authorized within the UNICORE User
DataBase (UUDB) [12]. Finally, the next section describes how these management capabilities for collaborative scenarios are accessed via common open standards such as WS-RF.
D. Open Standards-based COVS Session Management
In general, a major disadvantage of Service Oriented Architectures (SOAs) such as modern Grids is that their core technologies (e.g. WS-RF over SOAP [13]) are typically not suitable for the high amounts of scientific data that are regularly transferred between the simulation and visualization. Therefore,
the COVS framework relies on SSH tunnels for bi-directional
connections, but uses the open standard technologies of SOAs
for the COVS session management as shown in Figure 1.
Fig. 2. GPE Application client with loaded COVS GridBean.
Also, Figure 1 illustrates that an end-user of the COVS framework uses two applications at the client tier: the scientific visualization and a Grid client (e.g. the GPE Grid
client). The Grid client is used to submit the scientific parallel
simulation to the Grid middleware (e.g. UNICORE) but is also
used for the SSH key exchange that is necessary to establish an SSH connection between client tier and target system tier. In more detail, a dedicated COVS framework-specific client plug-in (e.g. the COVS GridBean) is responsible for this, following the mechanisms described in [7].

Fig. 3. Network infrastructure of the Grid testbed for the evaluations of the COVS framework implementation. The performance of the VISIT/SSH connection (red) is of major interest since this connection transfers the scientific data from the parallel simulation (VISIT client) to the scientific visualization (VISIT server). Furthermore, this connection is responsible for transferring steering commands from the scientific visualization in nearly real time to the parallel simulation.
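What such a plug-in has to accomplish after the key exchange can be sketched with the widely used JSch library. This is a minimal sketch under stated assumptions: host name, user name, ports, and the key path are placeholders, not values from the reference implementation.

```java
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Sketch of establishing an SSH tunnel for the VISIT data connection,
// assuming the RSA key pair was already exchanged via the Grid
// middleware as in [7]. All concrete values below are placeholders.
public class VisitTunnelSketch {

    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/path/to/exchanged_rsa_key"); // RSA-based authentication

        Session ssh = jsch.getSession("enduser", "login.example.org", 22);
        ssh.setConfig("StrictHostKeyChecking", "no");   // demo only; verify host keys
        ssh.connect();

        // Forward a local port to the vttproxy on the remote login node, so
        // the visualization connects to localhost while the scientific data
        // and steering commands actually flow through the SSH tunnel.
        int localPort = ssh.setPortForwardingL(0, "localhost", 5900);
        System.out.println("VISIT can now connect to localhost:" + localPort);

        // ... run the visualization against localhost:localPort ...

        ssh.disconnect();
    }
}
```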
The main goal of the COVS client plug-in is a sophisticated
GUI that provides the functionality to monitor and control
a COVS session using Web service message exchanges. To
provide an example, Figure 2 shows the COVS GridBean
of the reference implementation and its GUI in the context
of joining available COVS sessions that are exposed by the
COVS Grid service implementation within the Grid middle-
ware. Of course, the GUI also provides functionalities such as connecting/disconnecting participants during the session run-time, pausing/continuing the simulation, or aborting a session, to list just a few. This functionality is conveniently provided via
pop-up menus to end-users of the framework.
III. DESIGN EVALUATION
This section evaluates the proposed architecture of the COVS framework for Grid and e-Science infrastructures with respect to different usability metrics, focussing in particular on performance measurements of key components and protocols, because these have a very high impact on the overall acceptance of the framework by end-users.
A. Experimental Setup
This paragraph describes an experimental configuration that
lays the foundation to examine the feasibility of the COVS
framework reference implementation. The experimental setup and the later evaluations depend on the particular deployment of the framework. Figure 3 shows part of the JuNet network infrastructure within Forschungszentrum Jülich and illustrates such a deployment; it emphasizes the network interconnections between the machines running COVS components.
On the OSI-layer 3, JuNet is composed of various IP-
subnets that are interconnected by a central router (zam047-
168). Client- and server-machines are typically connected to
switches with 100 or 1000 Mbit/s Ethernet, depending on their
communication requirements. The end-user laptop (zam326)
and the UNICORE server (zam461) are located in the same
subnet and are attached to the infrastructure with 100 Mbit/s
interfaces. Therefore their communication does not traverse
the router. The login-node of the supercomputer JUMP is
connected to JuNet via two channeled 1 Gbit/s interfaces.
Since JUMP is located in a separate IP-subnet, the laptop and
the UNICORE server communicate with JUMP via the router.
The testbed will be used for performance measurements.
The performance of the illustrated Web service message
exchanges can be disregarded since they are only used to
transport small XML documents via SOAP that are not data-
intensive. Also, the NJS-TSI protocol as well as the collabo-
ration and multiplexer server control protocol only transport
small pieces of text and XML over the wire. Hence, the
only data-intensive connection that is of major interest for
performance evaluations is the VISIT/SSH connection between
the end-user laptop and the supercomputer JUMP.
B. Performance of Bi-directional Online Connection
One of the key considerations within Grids is the secure
transport of information and data between users and resources.
In particular, it is one crucial point of the design since
interactive steering of simulations raises a high demand for
low latency to reach real-time behavior. Hence, in order to
provide sophisticated steering capabilities of parallel simula-
tions in a timely manner, the protocols used in the COVS
framework design must achieve low latency over secure bi-
directional connections that can be realized via SSH tunnels.
In this paragraph, we provide performance measurements to
indicate that the usage of the communication library within
Grid environments via SSH (VISIT/SSH) is still feasible
instead of using the plain VISIT protocol unsecured over TCP
(VISIT/TCP). In this context it is important to understand that the performance and usability of the system depend on the kind of low-level transport used for data exchange. Because Web services and XML provide flexibility at the cost of performance, the COVS framework uses Web services only for session management and additionally provides a mechanism whereby a high-performance connection can be used. Plain TCP was feasible in non-distributed environments, whereas within Grids the usage of SSH is appropriate since it provides firewall-friendly secure connections. Hence, this distinction between management information transfer (via Web services) and scientific data transfer (via SSH) is a fundamental approach of the COVS framework. If everything is done via Web services, the latency is very noticeable to human users and the bandwidth is significantly reduced due to the verbosity of XML and UNICORE-internal protocols, as shown in an earlier approach [14].
In Figure 3, the VISIT communication library uses a vttproxy component [4], located in the same security domain as the simulation, that acts as a proxy for the visualization.
The measurements presented in Figure 4 were performed
with a generic ping-pong program that is part of the VISIT
distribution. This program sends messages of varying length
from the VISIT client (i.e. parallel simulation) to the VISIT
server (i.e. visualization) and back to the client several times
and provides statistics for the latency. In particular, Figure 4
shows the latencies that are measured as half of the message
round-trip time.
Fig. 4. Ping-Pong latency measured as half of message round-trip time using
VISIT/SSH and VISIT/TCP within the Grid testbed.
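A minimal stand-in for such a ping-pong test is sketched below over a plain TCP socket; the actual VISIT ping-pong program differs, and all values here (port, rounds, message size) are arbitrary illustration choices.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a ping-pong latency measurement over a plain TCP socket, as a
// stand-in for the generic ping-pong program of the VISIT distribution.
// The latency is reported as half of the averaged round-trip time.
public class PingPongSketch {

    public static void main(String[] args) throws Exception {
        final int port = 5901, rounds = 1000, size = 1024;

        try (ServerSocket server = new ServerSocket(port)) {
            // Echo server standing in for the "VISIT server" (visualization).
            new Thread(() -> {
                try (Socket s = server.accept()) {
                    byte[] buf = new byte[size];
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    OutputStream out = s.getOutputStream();
                    for (int i = 0; i < rounds; i++) {
                        in.readFully(buf); // receive "ping"
                        out.write(buf);    // return "pong"
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }).start();

            // Client standing in for the "VISIT client" (parallel simulation).
            try (Socket s = new Socket("localhost", port)) {
                byte[] buf = new byte[size];
                DataInputStream in = new DataInputStream(s.getInputStream());
                OutputStream out = s.getOutputStream();
                long start = System.nanoTime();
                for (int i = 0; i < rounds; i++) {
                    out.write(buf);
                    in.readFully(buf);
                }
                double rttMicros = (System.nanoTime() - start) / 1e3 / rounds;
                System.out.printf("latency: %.1f us (half RTT)%n", rttMicros / 2);
            }
        }
    }
}
```

Running the same measurement once directly and once through an SSH local port forward reproduces the kind of comparison reported below.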
The performance measurements between a Linux visualization client (1.7 GHz Pentium) and the login node of the Jülich supercomputer JUMP over the Grid testbed
(see Figure 3) result in an SSH connection startup time of
1.5 seconds, a ping-pong latency of 2299 microseconds and
a bandwidth of 84 MBit/s (for a message size of 1 MByte)
compared to 345 microseconds / 86 MBit/s for a direct
unencrypted VISIT connection via TCP between the same
systems. In this test scenario, the bandwidth degradation
from using the SSH-tunnel is only about 2.3%. While an
increase of latency by 2 milliseconds is significant (a factor
of more than 6 in the testbed), it is less relevant in wide-area
networks, where the signal propagation delay is about 1
millisecond per 200 km, not including additional delays in
network components such as switches or routers.
UNICORE uses Web service message exchanges to collect
information for the establishment of the SSH connection or
to control collaborative sessions. Therefore, UNICORE only
affects the time needed to initialize SSH connections but not
the scientific data transfer over SSH itself. To conclude, the
COVS framework reference implementation establishes a se-
cure way of providing a bi-directional connection between the
simulation and visualization without critical loss of latency.
C. Collaborative versus Single User Control (Multiplexer)
The VISIT multiplexer is the essential new component
introduced into the data flow to enable collaborative visu-
alizations. To measure its influence on the performance, we
have compared the throughput of messages of different sizes
with and without the multiplexer and with a varying number of visualization clients attached, in a testbed with Gigabit Ethernet and without SSH. The results are illustrated in Figures 5 and 6. While the throughput per participant decreases with a growing number of participants, the aggregate throughput (the sum over all participants) even increases due to better utilization of the network.
Fig. 5. Throughput dependence on message size and multiplexer setup.

Fig. 6. Throughput showing the overhead/scalability of the multiplexer.
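Conceptually, the multiplexer replicates each data block arriving from the simulation to all attached visualization connections, which explains both the per-participant decrease and the aggregate increase in throughput. The sketch below is a hypothetical, simplified fan-out in Java; the real VISIT multiplexer is a separate server process with its own protocol.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual sketch of the multiplexer fan-out: one data stream from the
// parallel simulation is replicated to n visualization connections.
// This illustrates the principle only, not the VISIT multiplexer itself.
public class MultiplexerSketch {

    // Output streams of the currently attached visualizations.
    private final List<OutputStream> participants = new CopyOnWriteArrayList<>();

    public void addParticipant(OutputStream viz)    { participants.add(viz); }
    public void removeParticipant(OutputStream viz) { participants.remove(viz); }

    // Called for every data block the simulation produces.
    public void forward(byte[] block) {
        for (OutputStream viz : participants) {
            try {
                // Each participant receives a full copy, so the per-client
                // share of the outgoing bandwidth shrinks as n grows.
                viz.write(block);
            } catch (IOException e) {
                removeParticipant(viz); // drop broken connections
            }
        }
    }
}
```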
D. Analysis of Improved Usability Dimensions/Metrics
The usage of the COVS framework with typical HPC par-
allel applications in Grid infrastructures provides e-Scientists
with an improved usability that is extremely hard to measure
precisely. Therefore, this paragraph provides a closer look at an example experiment with the PEPC simulation [15] and the Xnbody visualization [16], highlighting the different dimensions and metrics used in the analysis.
The first dimension focusses on the domain/technical knowl-
edge of the end-users. Figure 7 presents a snapshot of the
VISIT-enabled Xnbody visualization used with the VISIT-
enabled parallel simulation PEPC, but without using the benefits of a COVS framework implementation. In this case, an end-user has to manually provide all necessary details for the connection establishment via VISIT (seappassword, seapservice [4]) and the startup of the vttproxy to enable the transfer via the SSH tunnel. That means an end-user must provide a Servicename (seapservice) and a Password (seappassword) for the identification of the visualization at the remote site.
Purely optional is the definition of the Interface to choose from the different network interfaces that may be available (* for default). In addition, the end-user must provide a fully qualified Host and a Username on the remote machine. Most
notably, the end-user must know the exact path to the Proxy
(vttproxy) on the remote machine. This could be particularly
difficult since the installation of the VISIT library on a remote
machine such as a supercomputer is usually undertaken by the
administrators of this machine and not by an individual end-
user. All in all, scientists acting as end-users must know a lot of technical details before they can connect to an ongoing parallel simulation with this VISIT-based application. The provisioning of the necessary pieces of technical information described above can be managed much more automatically by using the COVS framework implementation presented here. When
using the PEPC parallel simulation and Xnbody visualization
with the COVS framework in collaborative scenarios all these
issues become transparent to the end-users by using the ’use
UNICORE’ checkbox. In other words, all the described pieces
of information no longer have to be provided by the end-users; instead, UNICORE provides all these details to the scientific visualization via named pipes.
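How a visualization could pick up these details from a named pipe is sketched below. The pipe path and the key names are illustrative assumptions; the actual exchange format between UNICORE and the visualization is not specified in this paper.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

// Sketch of a visualization reading its connection details (written by
// the Grid client/middleware) from a named pipe. The pipe path and the
// key=value format are illustrative assumptions.
public class NamedPipeConfigSketch {

    static Map<String, String> readConnectionDetails(String pipePath) throws Exception {
        Map<String, String> details = new HashMap<>();
        // Opening a FIFO blocks until the writer side connects and writes.
        try (BufferedReader reader = new BufferedReader(new FileReader(pipePath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    details.put(line.substring(0, eq), line.substring(eq + 1));
                }
            }
        }
        return details;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> d = readConnectionDetails("/tmp/covs_session_fifo");
        System.out.println("identifying as service: " + d.get("seapservice"));
    }
}
```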
The next usability dimension demonstrates the improve-
ments in handling the complexity of collaborative scenarios.
As shown in Figure 7, typically more issues arise when performing collaborative visualization scenarios, and many more connection details have to be provided.

Fig. 7. Using the Xnbody scientific visualization without the COVS framework implementation implies knowledge of technical details. An end-user must manually provide all necessary information (zoomed red boxes). Instead, when using the ’use UNICORE’ checkbox, all pieces of information are automatically provided by UNICORE.

First and foremost,
all participants of a COVS session are identified with the
VISIT seapservice:seappassword combination. Hence, with-
out using the COVS framework, one participant must man-
ually configure the VISIT multiplexer by providing all the
different seapservice and seappassword combinations of the
geographically dispersed participants. This information must be exchanged using out-of-band mechanisms such as e-mail, telephone, or Skype. In addition, this user must know which participants are allowed to take part in a COVS session and thus must perform manual authorization and authentication of participants. This can be particularly difficult when the number of participants increases significantly. In addition, all end-users have to know the contact
information of the collaboration server as shown in Figure 7,
and if communicated over insecure networks, also the Host
and Username to establish an SSH connection to the host of
the collaboration server must be manually provided by the
end-users within the Xnbody GUI. COVS solves this using Grid technologies, so that all participants can conveniently request participation via the COVS GridBean and all the necessary pieces of information are transferred automatically, including the automatic configuration of the VISIT Multiplexer and the VISIT Collaboration Server.
Another usability metric is the simpler session management
interface. The leader of the session conveniently uses the
COVS GridBean GUI to manage the session and the GUI in
turn forwards operations (e.g. connect/disconnect participant)
to the underlying UNICORE Grid middleware using Web
service message exchanges. All this is hidden from all the
participants that just use the GPE Grid client with the COVS
GridBean. Even the use of the Grid client itself provides
improved usability by providing a convenient way to submit
jobs to remote resources.
Another metric for improved usability is the simpler interface for authentication. The authorization and authentication of end-users are handled automatically by the Grid middleware, which retains the important single sign-on feature of Grid environments. Thus, instead of providing several passwords for remote hosts, the keystore of the Grid middleware must be unlocked only once via one password to gain full access.
To sum up, the transparency of Grids is the overall goal
of using the COVS framework with scientific parallel ap-
plications. Its improved usability justifies the use of the framework by e-Scientists in real application use cases within production Grids, because the effort of scientists for connecting to a remote simulation is significantly reduced, as described by the different usability metrics. Furthermore, several Grid infrastructures, for instance several sites within the EGEE Grid, map certificate identities to pool accounts, so that a username and hostname cannot be known beforehand for static, manually configured SSH tunnels. In
such scenarios, the COVS framework provides capabilities to
establish an SSH tunnel to the remote site and thus also to use
applications in highly dynamic environments.
E. Support for End-users of the COVS Framework
The fundamental idea of the COVS framework is to provide
e-Scientists with a tool that is easy to deploy and use by
avoiding work that is not related to their own scientific
area or application code. Therefore, this paragraph evaluates
what end-users actually have to do when they want to use a
COVS framework implementation. Thus, it is clarified which components must already be provided by a COVS framework implementation and which components the end-users have to provide, and in which form. Needless to say, it is important
to evaluate if the work that end-users have to invest to use
the COVS framework is feasible and can be expected to be
accepted in production Grids today.
For the clarification of these questions it is worth looking
at real production Grids such as D-Grid [17] or DEISA.
Both infrastructures already have plans to move to Web service-based Grid middlewares (e.g. UNICORE 6) in the near future. Besides the core Grid services (e.g. job submis-
sion and management, file transfer, and storage), additional
higher-level services such as COVS Grid services can also
be deployed within the Grid middlewares. Hence, the core building block Grid middleware, and thus the COVS Grid services, will automatically be provided by the e-Science infrastructure.
The deployment of the COVS Grid services implies the
configuration and installation of the dedicated communication
library it is based on (e.g. VISIT), including its multiplexer
and collaboration entities. Furthermore, the Grid client will
also be provided to gain access to the infrastructures.
To conclude, a deployed implementation of the COVS framework design provides most of the core building blocks for end-users in a ready-to-use form; only the scientific area-specific parallel simulation and visualization must be provided by the end-user. This implies that both components must be instrumented with communication library-specific calls (e.g.
VISIT server and client calls) in order to enable the usage of
the COVS framework. Many parallel e-Science applications
already have visualizations that are based on post-processing
techniques. Hence, the real work that end-users have to do
is to instrument their own code with communication library
calls to enable the data and steering command exchange.
Finally, end-users of a COVS framework implementation
must request a personal X.509 certificate from the corresponding
Certificate Authority (CA). Using such a certificate allows end-
users to gain access to the infrastructure and its resources
via the Grid client (e.g. GPE Client) and also to the COVS
Grid services when loading the client-specific plug-in (e.g.
COVS GridBean) into this client. However, this is a general
demand for end-users that want to use resources within Grid
infrastructures and not a COVS framework specific issue.
IV. COVS FRAMEWORK USER COMMUNITIES
The COVS framework implementation is used by the
ASTRO-Grid D community Grid within D-Grid in the context
of the nbody parallel simulation code. Furthermore, it is used
with the PEPC parallel application at the John von Neumann
Institute for Computing (NIC) in Jülich in conjunction with the Xnbody visualization (see Fig. 7). This demonstrates the adoption by user communities that use the framework to significantly improve their analysis of scientific data produced by nbody or PEPC through collaborative sessions with participants of the whole VO.
V. RELATED WORK
There is quite a lot of related work in the area of visualization and steering technologies in Grid infrastructures. The UK RealityGrid project provides a steering library whose calls can be embedded into its three components: simulation, visualization, and a steering client. Recently, prototypes of this library have been renewed to conform to
OGSA. In comparison to the work presented here, the COVS
framework is loosely coupled to the Grid middleware while
the recent efforts around the RealityGrid steering library are
focusing on its tighter integration into the Imperial College
e-Science Networked Infrastructure (ICENI) [18].
Another well-known system is developed within the Aus-
trian Grid initiative. The Grid Enabled Visualization Pipeline
(GVID) [19] provides high quality visualizations of scientific
datasets on thin clients. In more detail, the data of the scientific simulations are efficiently encoded with the H.262 codec into a video stream and transferred to the thin client afterwards. The client, in turn, decodes the video stream for visualization of the scientific data. The system also offers steering capabilities similar to the approach within this paper, but realized via so-called Event-Encoders that run on the thin clients and send steering commands to the simulation. However, the major
difference to our approach is that it is not seamlessly integrated
as one higher-level service into a common Grid middleware.
NAREGI provided an API that consists of a visualiza-
tion library and a Grid visualization service API [20]. The
visualization library can be used to connect simulation applications with support for multiple visualization functionalities. The visualization service API wraps this library to provide Grid service functionality as a set of WS-RF compliant services, for instance Coupled Simulation Services, Post-Processing Services, or Molecular Visualization Services. Even though this approach uses the WS-RF standard similarly to ours, the internal architecture is rather different. To provide an example, the scientific data as well as its rendering are completely computed within the Grid, and the final result is represented as a compressed image. The COVS framework,
on the other hand, sends the scientific data in an online
connection to the client for rendering and to allow for accurate
steering of result parameters.
Finally, Brodlie et al. describe in [21] a well-known, rather high-level framework for collaborative visualization in distributed environments, while our contribution is oriented much more closely to production Grid scenarios today.
VI. CONCLUSIONS
This paper introduced the reference implementation of the
COVS framework design by using the UNICORE 6 Grid
middleware and the VISIT communication library. The eval-
uations have shown that the choice of the communication
library is a crucial step for the implementation of a COVS
framework since this core building block has dependencies
with all others. Most notably, the implementation of the COVS
framework can be used by all visualizations and simulations
that rely on the selected communication library. Nevertheless,
the communication library can also be replaced by another
library that provides similar capabilities such as bi-directional
online connections, scalable data multiplexers and collabora-
tion servers.
Also, the Grid middleware is an important cornerstone
and the proof of concept implementation with UNICORE 6
indicates that the COVS framework could be, in principle,
implemented in any WS-RF compliant Grid middleware. This
open source implementation of the COVS framework was
successfully demonstrated at the EuroPar 2006 conference,
at the Fujitsu UNICORE booth at OGF18 in Washington,
at the Supercomputing 2006 conference in Tampa and in a
visualization and steering session at OGF19 in Chapel Hill.
Furthermore, it was demonstrated to end-users at a DEISA
Training. More recently, Intel flyers use the COVS framework
implementation presented within this paper for marketing of
their open source GPE client suite.
Nevertheless, deploying the proposed COVS architecture is
an important next step to broadly incorporate implementations
of the COVS framework into production Grid environments.
The reference implementation described here relies on the WS-based UNICORE 6 middleware. UNICORE 6 will soon be evaluated by the DEISA and D-Grid infrastructures for production usage. When these production Grids shift their access methods from UNICORE 5 to UNICORE 6, the reference implementation of the COVS framework can also be deployed
as one higher-level service for production usage. In general,
the efficient usage of computational resources through COVS and its beneficial steering technologies must be promoted, with the goal of incorporating such steering tools into the usual workflows of e-Scientists. Once an implementation of the COVS framework is deployed within production Grids such as DEISA or D-Grid, an important tool for the efficient use of Grid and e-Science infrastructures is accomplished.
REFERENCES
[1] I. Foster et al., The Anatomy of the Grid: Enabling Scalable Virtual Organizations. John Wiley and Sons Ltd., 2003.
[2] R. Marshall et al., “Visualization methods and simulation steering for
a 3D turbulence model for Lake Erie,” ACM SIGGRAPH Computer
Graphics, vol. 24(2), pp. 89–97, 1990.
[3] M. Riedel et al., “VISIT/GS: Higher Level Grid Services for Scientific
Collaborative Online Visualization and Steering in UNICORE Grids,”
in Proc. of the Int. Symposium on Parallel and Distributed Computing, Linz, 2007, to appear.
[4] VISIT. [Online]. Available: http://www.fz-juelich.de/zam/visit
[5] R. Ratering et al., “GridBeans: Supporting e-Science and Grid Appli-
cations,” in Proc. of 2nd IEEE e-Science, Amsterdam, 2006.
[6] R. Haber et al., “Visualization Idioms: A conceptual model for scientific
visualization systems,” Vis. in Scientific Computing, pp. 74–93.
[7] M. Riedel et al., “Enhancing Scientific Workflows with Secure Shell
Functionality in UNICORE Grids,” in Proc. of 1st IEEE e-Science,
Melbourne, 2005.
[8] I. Foster et al., The Open Grid Services Architecture V.1.5. OGF
(GFD80), 2006.
[9] WSRF-Technical Committee. [Online]. Available: http://www.oasis-
open.org/committees/wsrf/
[10] M. Riedel et al., “Standardization Processes of the UNICORE Grid
System,” in Proceedings of 1st Austrian Grid Symposium, Linz, 2005,
pp. 191–203.
[11] R. Menday, “The Web Services Architecture and the UNICORE Gate-
way,” in Proc. of the Int. Conf. on Internet and Web Applications and
Services, 2006.
[12] A. Streit et al., “UNICORE - From Project Results to Production Grids,”
Grid Computing: The New Frontiers of High Performance Processing,
Advances in Parallel Computing, vol. 14, pp. 357–376, 2005.
[13] M. Gudgin et al., SOAP Version 1.2 Part 1: Messaging Framework.
W3C Recommendation, 2003.
[14] T. Eickermann et al., “Steering UNICORE Applications with VISIT,”
Phil. Transactions of the Royal Society, vol. 363, pp. 1855–1865, 2005.
[15] S. Pfalzner and P. Gibbon, Many-Body Tree Methods in Physics.
Cambridge University Press, 1996, ISBN-10: 0521019168.
[16] Xnbody. [Online]. Available: http://www.fz-juelich.de/zam/xnbody
[17] D-Grid. [Online]. Available: http://www.d-grid.de/
[18] J. Cohen et al., “RealityGrid: An Integrated Approach to Middleware
through ICENI,” Phil. Transactions of the Royal Society, vol. 363, pp.
1817–1827, 2005.
[19] T. Koeckerbauer et al., “GVid - Video Coding and Encryption for
Advanced Grid Visualization,” in Proc. of 1st Austrian Grid Symposium,
Linz, 2005, pp. 204–218.
[20] P. Kleijer et al., “API for Grid Based Visualization Systems,” GGF 12
Workshop on Grid Application Programming Interfaces, 2004.
[21] K. Brodlie et al., “Distributed and Collaborative Visualization,” Com-
puter Graphics Forum, vol. 23, 2004.
... This in turn allows for computational steering to influence the computation of the simulation during runtime on a supercomputer. In this context, we have shown in earlier work that the efficiency of e-scientists can be further improved by leveraging strong security environments and collaborative Web service-based features when using a COVS framework [22] in UNICORE Grids such as DEISA. ...
... In earlier work [26], we have shown a prototype COVS technique implementation based on the visualization interface toolkit (VISIT) [13] and the Grid middleware of DEISA named as the Uniform Interface to Computing Resources (UNICORE) [28]. Since then the approach grew to a broader COVS framework [23] and we further published at the Grid 2007 conference in [22] that the approach taken is feasible and provides sophisticated performance. More recently, we investigated in [21] the impact of using the computational steering capabilities of the COVS framework implementation in UNICORE on largescale HPC systems of DEISA (e.g. ...
... To provide an example, only a participant in the steerer role is able to influence the application during its runtime. This is internally realized by forwarding suitable actions or commands via the multiplexer adapter, which in turn controls and manage the VISIT multiplexer [22]. The same approach is implemented in the COVS session service in terms of the collaborator role that uses the collaboration adapter to control and manage the VISIT collaboration server [22] ...
Article
Full-text available
Especially within grid infrastructures driven by high-performance computing (HPC), collaborative online visualization and steering (COVS) has become an important technique to dynamically steer the parameters of a parallel simulation or to just share the outcome of simulations via visualizations with geographically dispersed collaborators. In earlier work, we have presented a COVS framework reference implementation based on the UNICORE grid middleware used within DEISA. This paper lists current limitations of the COVS framework design and implementation related to missing fine-grained authorization capabilities that are required during collaborative COVS sessions. Such capabilities use end-user information about roles, project membership, or participation in a dedicated virtual organization (VO). We outline solutions and present a design and implementation of our architecture extension that uses attribute authorities such as the recently developed virtual organization membership service (VOMS) based on the security assertion markup language (SAML).
... In this contribution we will highlight certain design concepts of the Collaborative Online Visualization and Steering (covs) framework and its reference implementation within unicore [15], which allows for interactive access to Grid applications. While many work around covs was published [13] [12] [10] [9] [11], this paper emphasizes on features of how we deal with dynamic management of n participants, Grid transparency in terms of hostnames and ports to satisfy end-user without technical knowledge (e.g. single sign-on). ...
... visit provides several components and most notably a visit server (integrated in visualizations), a visit client (integrated in simulations), and additionally a visit Collaboration Server and the visit Multiplexer to deal with collaborative scenarios. With the successful integration of visit, our approach grew to a mature covs framework implementation, which evaluation [10] proved that the approach taken is feasible and provides sophisticated performance. Since then, we observed a trend towards higher degrees of parallelism by employing a larger number of moderately fast processor cores. ...
... The current covs framework reference implementation architecture is shown in Figure 1 [10] ...
Chapter
Full-text available
Large-scale scientific research often relies on the collaborative use of massive computational power, fast networks, and large storage capacities provided by e-science infrastructures (e.g., deisa, egee) since the past several years. Especially within e-science infrastructures driven by high-performance computing (hpc) such as deisa, collaborative online visualization and computational steering (covs) has become an important technique to enable hpc applications with interactivity and visualized feedback mechanisms. In earlier work we have shown a prototype covs technique implementation based on the visualization interface toolkit (visit) and the Grid middleware of deisa named as Uniform Interface to Computing Resources (unicore). Since then the approach grew to a broader covs framework. More recently, we investigated the impact of using the computational steering capabilities of the covs framework implementation in unicore on large-scale hpc systems (i.e., ibm BlueGene/P with 65536 processors) and the use of attribute-based authorization. In this chapter we emphasize on the improved collaborative features of the covs framework and present new insights of how we deal with dynamic management of n participants, transparency of Grid resources, and virtualization of hosts of end-users. We also show that our interactive approach to hpc systems fully supports the necessary single sign-on feature required in Grid and e-science infrastructures. KeywordsScientific visualization-Computational steering-COVS-VISIT-UNICORE
... Existing frameworks supporting bi-directional channels are gVID [59], e-Viz [91] and Collaborative Online Visualization and Steering (COVS) in the context of Uniform Interface to Computing Resources (UNICORE) 2 [93]. In [39], Gibbon et. ...
Thesis
Karastoyanova et al. created eScienceSWaT (eScience SoftWare Engineering Technique), that targets at providing a user-friendly and systematic approach for creating applications for scientific experiments in the domain of e-Science. Even though eScienceSWaT is used, still many choices about the scientific experiment model, IT experiment model and infrastructure have to be made. Therefore, a collection of best practices for building scientific experiments is required. Additionally, these best practice need to be connected and organized. Finally, a Decision Support System (DSS) that is based on the best practices and enables decisions about the various choices for e-Science solutions, needs to be developed. Hence, various e-Science applications are examined in this thesis. Best practices are recognised by abstracting from the identified problem-solution pairs in the e-Science applications. Knowledge and best practices from natural science, computer science and software engineering are stored in patterns. Furthermore, relationship types among patterns are worked out. Afterwards, relationships among the patterns are defined and the patterns are organized in a pattern library. In addition, the concept for a DSS that provisions the patterns and its prototypical implementation are presented.
... This Web Services technology communicates its parties regardless of platform and language implementations, by using standard eXtensible Markup Language (XML) schemas, which provide well-formed data packages and conformity to consensus standards, thus allowing automatic information extraction and verification. When using this web services framework on a grid middleware platform, many computers, potentially thousands, share data, applications and computing capacity to achieve a desirable outcome, in a manner transparent to the end-user who only interacts with a single entity (Riedel et al., 2007). ...
Article
Full-text available
Natural resources management policies often entail a complex environmental decision-making process. This process can be greatly enhanced if it is based on an exploratory-envisioning system such as the Spatial Information Exploration and Visualisation Environment (SIEVE). This system integrates Geographical Information Systems, collaborative virtual environments, and other Spatial Data Infrastructures with highly interactive game-engine software. By leveraging these technologies, the system increases the potential for every participant, regardless of his level of involvement to have a better understanding of the issues at hand and to make better informed decisions. In a like manner, current scientific research has taken advantage of e-science platforms that share resources and enhance distributed simulation, analysis and visualization. Many of these infrastructures use one or more collaborative software paradigms like Grid Computing, High Level Architecture (HLA) and Service Oriented Architecture (SOA), which together provide an optimal environment for heterogeneous and distant, real-time collaboration. While significant progress has been made using these collaborative platforms, frequently there is no particular software suite that fulfils all requirements for an entire organization or case study. In these cases, an end-user must cope manually with a collection of tools and its exporting/importing capabilities to obtain the output needed for a particular purpose. This paper proposes a modular, real-time collaborative framework based upon user and tool-wrapping interfaces that are compliant not only with the aforementioned exploratory virtual environment, but also with web service-based Grid and HLA technology guidelines. The framework architecture is divided as follows: • Visualization Layer Services: composed of modules that offer the end visualization outcome, which depends on performance/quality of detail required to visualize the same data provided by the next layer. This layer includes Web Client services, Virtual Collaborative Environment interface services and high definition rendering services. • Management/Orchestration Layer Services: process services that link and sequence services according to existing and potentially new visualization requirements. These automated services further delegate specialized functions such as management, security, batch processing and similar features. This layer includes a Workflow Manager, a Simulation Real Time Infrastructure Manager, a Render Manager and a Grid Middleware Manager. • Data Layer Services: data sources that can be composited to feed spatial and non-spatial information requirements that the orchestration layer needs to fulfil its lifecycle. • Communication Services: encapsulating CityGML information using Web Services protocols (Web Service Description Language -WSDL, Simple Object Access Protocol -SOAP, and Universal Description Discovery and Integration -UDDI), data is transferred from all layers through Wrappers/Interfaces that are implemented by standard contracts on each module. In this manner, this framework orchestrates the use of heterogeneous software tools which collectively support distributed visual spatial analysis and complex environmental decision-making processes. A proof-of-concept prototype will be presented to illustrate a combination of representative commercial and open source software used in the area of spatial visualization, distributed computing and complex environmental simulation.
... In this paradigm of the interactive access, the Grid middleware authorizes and creates a bi-directional channel for numerous different use case applications. Different frameworks have been developed in the Grid middleware systems to enable this approach to end-users such as gGVID [17] in the context of gLite, eViz [3] in the context of Globus, and COVS [19] in the context of UNICORE. Examples for this approach are the plasma physics code PEPC [14] and the astro-physics code nbody6++ [24] that are used with the UNICORE-based COVS framework implementation for collaborative visualization and steering sessions. ...
Conference Paper
Full-text available
Simulation and thus scientific computing is the third pillar alongside theory and experiment in todays science and engineering. The term e-science evolved as a new research field that focuses on collaboration in key areas of science using next generation infrastructures to extend the powers of scientific computing. This paper contributes to the field of e-science as a study of how scientists actually work within currently existing Grid and e-science infrastructures. Alongside numerous different scientific applications, we identified several common approaches with similar characteristics in different domains. These approaches are described together with a classification on how to perform e-science in next generation infrastructures. The paper is thus a survey paper which provides an overview of the e-science research domain.
Chapter
The collaborative virtual environment framework SIEVE allows users to automatically build virtual environments and explore them collaboratively in real-time to aid decision making. SIEVE is currently being used in several application areas around landscape visualization and management and security and emergency response. Specific application areas include climate change, future land use exploration, land use productivity analysis and marine security response scenarios. This paper focuses on extensions to SIEVE based on col-laborative data sharing web technologies. SIEVE Builder Web allows users to access remote SDI data via a web-mapping service to create and download 3D environments. Another component currently in development allows the import of ancillary data into SIEVE by creating a data mashup. To integrate online data and shared computing facilities we are building a web-based framework to integrate multiple applications to complement SIEVE. Finally, we allow users to exchange spatially referenced photographs remotely within SIEVE Viewer.
Article
Full-text available
For many research endeavours, e-Infrastructures need to provide predictable, on-demand access to large-scale computational resources with high data availability. These need to scale with the research communities requirements and use. One example of such an e-Infrastructure is the Australian Urban Research Infrastructure Network (AURIN – www.aurin.org.au) project, which supports Australia-wide research in and across the urban and built environment. This paper describes the architecture of the AURIN infrastructure and its support for access to distributed (federated) and highly heterogeneous data sets from a wide range of providers. We present how this architecture solution leverages the intersection of high throughput computing (HTC), infrastructure as a service (IaaS) Cloud services and big data technologies including use of NoSQL resources. The driving concept in this architecture and the focus of this paper is the ability for scaling up or down depending on resource demands at any given time. This is done automatically and on demand avoiding either under- or over-utilization of resources. This resource-optimization-driven infrastructure has been designed to ensure that peak loads can be predicted and successfully coped with, as well as avoid wasting resources during non-peak times. This overall management strategy has resulted in an e-Infrastructure that provides a flexible, evolving research environment that scales with research needs, rather than providing a rigid (static) end product.
Conference Paper
The steadily increasing amounts of scientific data and the analysis of 'big data' is a fundamental characteristic in the context of computational simulations that are based on numerical methods or known physical laws. This represents both an opportunity and challenge on different levels for traditional distributed computing approaches, architectures, and infrastructures. On the lowest level data-intensive computing is a challenge since CPU speed has surpassed IO capabilities of HPC resources and on the higher levels complex cross-disciplinary data sharing is envisioned via data infrastructures in order to engage in the fragmented answers to societal challenges. This paper highlights how these levels share the demand for 'high productivity processing' of 'big data' including the sharing and analysis of 'large-scale science data-sets'. The paper will describe approaches such as the high-level European data infrastructure EUDAT as well as low-level requirements arising from HPC simulations used in distributed computing. The paper aims to address the fact that big data analysis methods such as computational steering and visualization, map-reduce, R, and others are around, but a lot of research and evaluations still need to be done to achieve scientific insights with them in the context of traditional distributed computing infrastructures.
Conference Paper
We present a recent innovation in the field of advanced, multipurpose streaming solutions for the grid. The described solution is based on the Unigrids Streaming Framework [7], which has been adapted to the UNICORE 6 middleware and extended. The main focus of this paper is the UGSF Data Flow Editor, a universal tool for powerful streaming composition. It has been developed to provide users with a graphical interface for streaming applications on the grid.
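Streaming composition of the kind such a data flow editor provides graphically can be pictured, in abstract terms, as wiring typed components into a pipeline. A minimal language-level analogy using Python generators, not the actual UGSF API:

```python
# Dataflow-style composition: source -> intermediate component -> sink.
def source(path):
    """Produce a stream of byte chunks from a file."""
    with open(path, "rb") as f:
        while chunk := f.read(4096):
            yield chunk

def byte_counter(stream, stats):
    """Pass-through component that records throughput as the stream flows by."""
    for chunk in stream:
        stats["bytes"] = stats.get("bytes", 0) + len(chunk)
        yield chunk

def sink(stream, path):
    """Consume the stream into a file."""
    with open(path, "wb") as f:
        for chunk in stream:
            f.write(chunk)

stats = {}
sink(byte_counter(source("input.dat"), stats), "output.dat")  # compose the flow
print(stats)
```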
Conference Paper
In recent years, the Virtual Organization Membership Service (VOMS) emerged within Grid infrastructures, providing the dynamic, fine-grained access control needed to enable resource sharing across Virtual Organizations (VOs). VOMS makes it possible to manage authorization information at the VO scope in order to enforce agreements established between VOs and resource owners. VOMS is used for authorization in the EGEE and OSG infrastructures and is a core component of the respective middleware stacks gLite and VDT. While a module supporting VOMS is also available as part of the authorization service of the Globus Toolkit, there is currently no support for VO-level authorization within the new Web services-based UNICORE 6. This paper describes the evolution of VOMS towards an open-standard-compliant service based on the Security Assertion Markup Language (SAML), which in turn provides mechanisms to fill the VO-level authorization service gap within Web service-based UNICORE Grids. In addition, the SAML-based VOMS allows for cross-middleware VO management through open standards.
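At its core, the VO-level authorization described above matches the group and role attributes carried in a user's (SAML) assertion against a resource policy. A deliberately simplified sketch, with illustrative attribute names and policy format rather than the real VOMS data model:

```python
# VO-scoped, attribute-based authorization; attribute strings and the policy
# layout are illustrative assumptions, not the actual VOMS/SAML schema.
POLICY = {
    # resource -> VO attributes permitted to use it
    "/sites/juelich/compute":  {"/covs", "/covs/Role=simulation"},
    "/sites/juelich/steering": {"/covs/Role=steerer"},
}

def authorize(resource, user_attributes):
    """Grant access if any VO attribute from the user's assertion is allowed."""
    allowed = POLICY.get(resource, set())
    return any(attr in allowed for attr in user_attributes)

# A user whose assertion carries membership of VO "covs" with the steerer role:
print(authorize("/sites/juelich/steering", {"/covs", "/covs/Role=steerer"}))  # True
```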
Article
"Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In this article, we define this new field. First, we review the "Grid problem," which we define as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources-what we refer to as virtual organizations. In such settings, we encounter unique authentication, authorization, resource access, resource discovery, and other challenges. It is this class of problem that is addressed by Grid technologies. Next, we present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. We describe requirements that we believe any such mechanisms must satisfy, and we discuss the central role played by the intergrid protocols that enable interoperability among different Grid systems. Finally, we discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. We maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Conference Paper
The term “the Grid” was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering. Considerable progress has since been made on the construction of such an infrastructure, but the term “Grid” has also been conflated, at least in popular perception, to embrace everything from advanced networking to artificial intelligence. One might wonder whether the term has any real substance and meaning. Is there really a distinct “Grid problem” and hence a need for new “Grid technologies”? If so, what is the nature of these technologies, and what is their domain of applicability? While numerous groups have interest in Grid concepts and share, to a significant extent, a common vision of Grid architecture, we do not see consensus on the answers to these questions.
Article
Studying the dynamics of a large number of particles interacting through long-range forces, commonly referred to as the "N-body problem", is a central aspect of many different branches of physics. In recent years, physicists have made significant advances in the development of fast N-body algorithms to deal efficiently with such complex problems. This book gives a thorough introduction to these so-called "tree methods", setting out the basic principles and giving many practical examples of their use. The authors assume no prior specialist knowledge, and they illustrate the techniques throughout with reference to a broad range of applications. The book will be of great interest to graduate students and researchers working on the modeling of systems in astrophysics, plasma physics, nuclear and particle physics, condensed matter physics and materials science.
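The essence of such tree methods is the multipole acceptance criterion: a distant cell of size s at distance d is replaced by its monopole (total mass at the centre of mass) whenever s/d < θ. The compact 2D Barnes-Hut-style sketch below is our own illustration of that idea, not code from the book; it ignores degenerate cases such as coincident particles:

```python
import math

THETA = 0.5  # opening-angle parameter of the multipole acceptance criterion
G = 1.0      # gravitational constant in code units

class Node:
    """A square quadtree cell storing its total mass and centre of mass."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # cell centre and half-width
        self.mass = 0.0
        self.comx = self.comy = 0.0
        self.body = None      # a single particle (x, y, m) if this is a leaf
        self.children = None  # four sub-cells if this is an internal node

    def insert(self, x, y, m):
        if self.body is None and self.children is None:
            self.body = (x, y, m)          # empty leaf: store the particle here
        else:
            if self.children is None:      # occupied leaf: split and push down
                h = self.half / 2
                self.children = [Node(self.cx + dx * h, self.cy + dy * h, h)
                                 for dx in (-1, 1) for dy in (-1, 1)]
                bx, by, bm = self.body
                self.body = None
                self._child(bx, by).insert(bx, by, bm)
            self._child(x, y).insert(x, y, m)
        total = self.mass + m              # update the cell's monopole moments
        self.comx = (self.comx * self.mass + x * m) / total
        self.comy = (self.comy * self.mass + y * m) / total
        self.mass = total

    def _child(self, x, y):
        # children are ordered (-1,-1), (-1,+1), (+1,-1), (+1,+1)
        return self.children[2 * (x >= self.cx) + (y >= self.cy)]

def accel(node, x, y, eps=1e-3):
    """Softened acceleration at (x, y); a real code would skip the particle itself."""
    if node is None or node.mass == 0.0:
        return 0.0, 0.0
    dx, dy = node.comx - x, node.comy - y
    d = math.sqrt(dx * dx + dy * dy) + eps
    # accept a leaf directly, or an internal cell satisfying s/d < theta
    if node.children is None or (2 * node.half) / d < THETA:
        a = G * node.mass / d ** 3
        return a * dx, a * dy
    ax = ay = 0.0
    for child in node.children:
        cax, cay = accel(child, x, y, eps)
        ax, ay = ax + cax, ay + cay
    return ax, ay

# usage: root = Node(0.0, 0.0, 1.0); root.insert(0.3, -0.2, 1.0); accel(root, 0.1, 0.2)
```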
Article
Recently, a new area has captured part of the grid community's attention: grid-computing applications. Without the application layer, grid computing has little purpose and remains merely a nice proof of concept. With the focus shifting toward applications, new problems emerge that were not taken into account during the initial development of the core. Requirements may differ greatly between applications, but they can always be reduced to grid-based communication through the middleware. The scientific community has always been eager for simulations, which are most of the time targeted at High-Performance Computing (HPC) systems, exactly what the grid tackles and offers. With the grid it is possible to launch large-scale simulations over a virtual environment. Using this power of the grid merely to launch jobs or transfer end results is not satisfactory; more is possible. First, most simulations in any scientific field cannot accomplish their purpose without visualization (concurrent or in post-processing), which enables researchers to observe and analyze their otherwise unrecognizable numerical results. Second, a solver need not be static; it can be steered on the fly by a simple client, enabling the researcher to change, adjust or rectify the simulation without having to stop and restart it.
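Steering "on the fly by a simple client" typically amounts to sending parameter updates to a port the running solver checks between timesteps. The host, port, and message format below are invented for illustration; the solver-side loop that applies the update is omitted:

```python
# Minimal steering client; the wire protocol (JSON lines) is an assumption.
import json
import socket

def steer(host, port, **params):
    """Send a parameter update to a running solver and return its acknowledgement."""
    with socket.create_connection((host, port), timeout=10) as conn:
        message = json.dumps({"command": "set", "params": params}) + "\n"
        conn.sendall(message.encode())
        return conn.makefile().readline().strip()

# e.g. halve the timestep of a running simulation without restarting it:
# print(steer("simhost.example.org", 7777, dt=5e-5))
```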
Article
A computational model of Lake Erie serves as a framework for a study of visualization techniques and display methods. Various display methods are used to examine the 3D data. The methods use primitive representations of polygons, volumes, lines and particles, and also incorporate stereo imagery and animation. Three techniques of integrating the control of the computational model with the display of images are discussed: post-processing, tracking and steering. A distributed software environment that implements these visualization techniques and display methods is used. The technique of steering is emphasized, with a description of the software requirements and examples. A significant increase in productivity and comprehension is shown when steering is used.
Article
Many production Grid infrastructures such as DEISA, EGEE, or TeraGrid have begun to offer services to end-users that include access to computational resources. The major goal of these infrastructures is to facilitate the routine interaction of scientists and their workflows with advanced tools and seamless access to computational resources via Grid middleware systems such as UNICORE, gLite or the Globus Toolkit. While UNICORE 5 has been used in production Grids for several years, an early prototype of the new Web services-based UNICORE 6 recently became available and will be continuously improved in the coming months for use in production. In the absence of a widely accepted framework for visualization and steering, the new UNICORE 6 Grid middleware does not provide such a higher-level service by default. This motivates our contribution: supporting e-Scientists in upcoming WS-based UNICORE Grids with visualization and steering techniques. In this paper we present the augmentation of the early standards-based UNICORE 6 prototype with a higher-level service for collaborative online visualization and steering, and describe the seamless integration of this service within UNICORE Grids while retaining the convenient single sign-on feature.