J Grid Computing (2009) 7:285–286
DOI 10.1007/s10723-009-9138-z
Grid Interoperability for e-Research
Morris Riedel · Gabor Terstyanszky
Published online: 25 September 2009
© Springer Science + Business Media B.V. 2009
Computational simulation, and thus scientific computing, is today the third pillar alongside theory and experiment in science and engineering. The term e-science evolved to denote a new research field that focuses on collaboration in key areas of science, using next-generation computing infrastructures such as Grids to extend the potential of scientific computing. More recently, the increasing complexity of e-science applications that embrace multiple physical models (i.e. multi-physics) and consider a larger range of scales (i.e. multi-scale) is creating a steadily growing demand for world-wide interoperable Grid infrastructures that enable new, innovative types of e-science through the joint use of a broader variety of computational resources. Since such interoperable Grid infrastructures are still not seamlessly provided today, 'Grid interoperability' has emerged as a broader research field over the last couple of years.
The lack of Grid interoperability is a hindrance, since we observe a growing interest in the coordinated use of more than one Grid with a single client that controls interoperable components deployed in different Grid infrastructures. In fact, we have shown in a recent classification [1] that, alongside simple scripts with limited control functionality (e.g. loops), scientific application client plug-ins, complex workflows, and interactive access, Grid interoperability is one approach to performing e-science today. Such interoperable federated Grids have the potential to facilitate e-research, and thus scientific advances, that would not be possible using only a single Grid infrastructure. These advances arise from the advantages that federated Grid resources provide, such as access to a wide variety of heterogeneous resources, higher aggregated throughput, and lower time-to-solution.

M. Riedel (B)
Institute for Advanced Simulation,
Forschungszentrum Jülich, Jülich, Germany
e-mail: m.riedel@fz-juelich.de

G. Terstyanszky
University of Westminster, London, UK
e-mail: terstyg@westminster.ac.uk
In more detail, we observe that more and more Grid end-users demand access to both High Throughput Computing (HTC)-driven Grids (EGEE, OSG, etc.) and High Performance Computing (HPC)-driven infrastructures (DEISA, TeraGrid, etc.) from a single client or science portal. In this context, the fundamental difference between HPC and HTC is that HPC resources (i.e. supercomputers) provide fast interconnects between CPUs/cores, while HTC resources (i.e. PC pools) do not. This joint use is typically motivated by the theory and concepts that tackle the scientific problem, which are modeled in the corresponding codes: some codes turn out to be 'nicely parallel' (i.e. HTC), while others are better suited to run as 'massively parallel' (i.e. HPC) simulations. In addition, the joint use of HTC- and HPC-Grids is often motivated by the fact that end-users frequently perform smaller evaluation runs of their codes on HTC resources before performing full-blown production runs on large-scale HPC resources. This conserves scarce compute time on costly HPC resources within HPC-driven Grids.
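To make this routing pattern concrete, the following minimal sketch shows how a single client might direct jobs to an HTC- or HPC-driven Grid based on a job's parallelism profile. The class names, endpoint labels, and core-count threshold are purely illustrative assumptions for this editorial, not the API of any real Grid middleware:

```python
from dataclasses import dataclass

@dataclass
class JobDescription:
    """Illustrative job description; real deployments would use an
    open-standard format such as an OGF job description document."""
    executable: str
    cores: int
    tightly_coupled: bool  # True for MPI-style, communication-heavy codes

def select_infrastructure(job: JobDescription) -> str:
    """Route 'nicely parallel' or small evaluation work to an HTC Grid,
    and large, tightly coupled production work to an HPC Grid.
    The threshold of 64 cores is a hypothetical policy choice."""
    if job.tightly_coupled and job.cores > 64:
        return "hpc-grid"   # e.g. a DEISA/TeraGrid-class supercomputer
    return "htc-grid"       # e.g. an EGEE/OSG-class pool of clusters

# A small evaluation run of a code goes to the HTC Grid first ...
eval_run = JobDescription("simulate", cores=8, tightly_coupled=True)
# ... while the full-blown production run goes to the HPC Grid.
prod_run = JobDescription("simulate", cores=4096, tightly_coupled=True)

print(select_infrastructure(eval_run))   # htc-grid
print(select_infrastructure(prod_run))   # hpc-grid
```

In a standards-based setting, the same job description would be submitted unchanged to either infrastructure, which is precisely the seamless cross-Grid access the open-standards work discussed below aims at.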
At the time of writing, these are interesting times for European Grids, given the upcoming transition from the project-based EGEE to a more sustainable European Grid Initiative (EGI), while DEISA and the Partnership for Advanced Computing in Europe (PRACE) are jointly creating an HPC infrastructure for emerging peta-scale applications. In the US, we see an upcoming third phase of the TeraGrid in the context of the transition to eXtreme Digital (XD) resources for science and engineering. Nevertheless, what we have learned from the past, and what we can expect for the future, is that the underlying computing paradigms will remain: requirements for both HTC and HPC will still be present. This remains true even when, in principle, HTC and HPC codes could be executed on one large-scale system such as the IBM Blue Gene/P, thereby shifting the focus much more to the computed data itself rather than to the computational paradigms being used.
Since the difference between these underlying computational paradigms (i.e. HTC and HPC) will persist, interoperability between Grid infrastructures that offer seamless access to both types of computational resources will continue to be needed. Because we have observed a rather slow adoption of emerging open standards in the Grid middleware systems deployed on these infrastructures, the Grid communities have developed many different approaches to Grid interoperability, which are classified by Riedel et al. in [3]. Nevertheless, common open standards are the only way to enable long-term seamless cross-Grid access, and we have therefore worked on understanding how such standards can be further improved to increase their adoption in production Grid middleware. In fact, we have performed many interoperability tests and worked with a wide variety of interoperability use cases [2] in the context of the Grid Interoperation Now (GIN) community group of the Open Grid Forum (OGF). The lessons learned from all these activities have been fed into the OGF Production Grid Infrastructure (PGI) working group in order to improve existing and emerging open standards for production use, following a well-defined reference model [3].
This journal special issue highlights selected contributions to the broader research field of Grid interoperability and provides an interesting overview of worldwide projects working in this field. It thus represents a good supplement to the proceedings of the International Grid Interoperability and Interoperation Workshops (IGIIW) that we have organized in the past.
References
1. Riedel, M., Streit, A., Wolf, F., Lippert, T.H., Kranzlmüller, D.: Classification of different approaches for e-science applications in next generation computing infrastructures. In: Proceedings of the 4th IEEE Conference on e-Science (e-Science), pp. 198–205, Indianapolis, Indiana, USA (2008)
2. Riedel, M., Laure, E., et al.: Interoperation of world-wide production e-science infrastructures. Concurrency and Computation: Practice and Experience 21, 961–990 (2009)
3. Riedel, M., Wolf, F., Kranzlmüller, D., Streit, A., Lippert, T.: Research advances by using interoperable e-science infrastructures—the infrastructure interoperability reference model applied in e-science. Cluster Computing, Special Issue Recent Advances in e-Science. doi:10.1002/cpe.1402