Conference Paper

UNICORE 6 – A European Grid Technology

... As shown in Figure 1, the RAS interacts with Grid/HTC-based as well as HPC-based resources through Grid middleware-based adapters. For Grid/HTC-based job requests, the RAS communicates with gLite [10] services, whereas for HPC services it interacts with UNICORE [11] services. gLite is the middleware used in the EGEE infrastructure, while UNICORE provides access to resources of the DEISA infrastructure. ...
... Hence, a proprietary gLite UI adapter is used to access HTC-oriented resources within EGEE/EGI. In contrast, the Vine Toolkit serves as another adapter for the proprietary UNICORE Atomic Services (UAS) [11] and thus for HPC-driven Grid resources within DEISA/PRACE. At the time of writing, the resources of the corresponding infrastructures are accessible via the EUFORIA Virtual Organization (VO) of EGEE and the EUFORIA Virtual Community (VC) of DEISA. ...
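Taken together, the two excerpts above describe a classic adapter layering: one front-end service routes job requests either to gLite for HTC work or to UNICORE for HPC work. The sketch below illustrates that shape in plain Java; all names (MiddlewareAdapter, JobRequest, ResourceAccessServer) are illustrative stand-ins, not the actual EUFORIA RAS API.

```java
// Hypothetical sketch of the adapter layering described above.
// All type and method names are illustrative, not the real RAS API.

interface MiddlewareAdapter {
    String submit(JobRequest request) throws Exception; // returns a job id
}

record JobRequest(String executable, String[] arguments, boolean hpc) {}

/** Routes HTC jobs to a gLite UI adapter, HPC jobs to a UNICORE (UAS) adapter. */
class ResourceAccessServer {
    private final MiddlewareAdapter gliteAdapter;   // EGEE/EGI (HTC)
    private final MiddlewareAdapter unicoreAdapter; // DEISA/PRACE (HPC)

    ResourceAccessServer(MiddlewareAdapter glite, MiddlewareAdapter unicore) {
        this.gliteAdapter = glite;
        this.unicoreAdapter = unicore;
    }

    String submit(JobRequest request) throws Exception {
        // Dispatch on the resource class the request targets.
        MiddlewareAdapter adapter = request.hpc() ? unicoreAdapter : gliteAdapter;
        return adapter.submit(request);
    }
}

public class AdapterSketch {
    public static void main(String[] args) throws Exception {
        ResourceAccessServer ras = new ResourceAccessServer(
            req -> "glite-job-1",    // stand-in for a gLite UI call
            req -> "unicore-job-1"); // stand-in for a UAS call
        System.out.println(ras.submit(new JobRequest("/bin/date", new String[0], true)));
    }
}
```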
Conference Paper
The interoperability of e-Science infrastructures like DEISA/PRACE and EGEE/EGI is an increasing demand for a wide variety of cross-Grid applications, but interoperability based on common open standards adopted by Grid middleware is only starting to emerge and is not broadly provided today. In earlier work, we have shown how refined open standards form a reference model, based on careful academic analysis of lessons learned from production cross-Grid applications that require access to both High Throughput Computing (HTC) and High Performance Computing (HPC) resources. This paper provides insights into several concepts of this reference model, with a particular focus on findings from using HPC and HTC resources with the fusion application BIT1 and with a cross-infrastructure workflow based on the HELENA and ILSA fusion applications. Based on lessons learned over years of production interoperability setups and experimental interoperability work between production Grids like EGEE, DEISA, and NorduGrid, we illustrate how open Grid standards (e.g., OGSA-BES, JSDL, GLUE2) can be used to overcome several limitations of the production architecture of the EUFORIA framework, paving the way to a more standards-based and thus more maintainable and efficient solution.
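For readers unfamiliar with the standards named in this abstract, the sketch below emits a minimal JSDL 1.0 job description of the kind an OGSA-BES endpoint consumes. The namespaces follow the published JSDL 1.0 and JSDL-POSIX schemas; the echo job itself is purely illustrative and not taken from the paper.

```java
// Emits a minimal JSDL 1.0 job description (a trivial "echo" job) of the
// kind an OGSA-BES endpoint accepts. Namespaces follow the public JSDL 1.0
// and JSDL-POSIX schemas; the job content is illustrative only.
public class JsdlSketch {
    public static void main(String[] args) {
        String jsdl = """
            <jsdl:JobDefinition
                xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
                xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
              <jsdl:JobDescription>
                <jsdl:JobIdentification>
                  <jsdl:JobName>hello-interop</jsdl:JobName>
                </jsdl:JobIdentification>
                <jsdl:Application>
                  <posix:POSIXApplication>
                    <posix:Executable>/bin/echo</posix:Executable>
                    <posix:Argument>hello from a standards-based submission</posix:Argument>
                    <posix:Output>stdout.txt</posix:Output>
                  </posix:POSIXApplication>
                </jsdl:Application>
              </jsdl:JobDescription>
            </jsdl:JobDefinition>
            """;
        System.out.println(jsdl);
    }
}
```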
... Uniform Interface to Computing Resources (UNICORE) began as a German project in 1997 to support grid computing and technologies in Europe and beyond [72]. This grid middleware went through several versions, incorporating many standards and new services, including UNICORE 6 [73]. Major grid deployments include the Large Hadron Collider (LHC) Computing Grid [74], Open Science Grid [75], TeraGrid [76], UK National Grid Service [77], German D-Grid [78], and XSEDE [79]. ...
Article
High-performance computing (HPC) is now present in many spheres and domains of modern society. The special issue of Concurrency and Computation contains research papers addressing the state of the art in HPC and simulation and reflects some of the trends reported earlier. P. Trunfio deals with a peer-to-peer file-sharing model that takes energy efficiency into account, achieved by means of a sleep-and-wake approach. The second manuscript falling within this category is the work by Yeh and colleagues, which prioritizes both memory management and disk scheduling processes. The approach, applied to the Linux OS, allows the authors to improve process performance. Alexandru and colleagues deal with parallel computing and computation-time prediction as applied to computational electromagnetic modeling and simulation. Their prediction model allows for an analytical estimate of the required number of computing resources, in order to optimize their use while delivering the performance sought. The work by Ortega and colleagues presents a novel, compact storage format for large sparse matrices coupled with a hybrid MPI-GPU parallelization. This combined approach allows systems to extend the dimension of the Helmholtz problem they are able to solve. Neves and Araujo also deal with large sparse matrices, so common in many scientific problems, focusing in particular on binary matrix-vector multiplication. Guo and Wang also propose a prediction model, but in this case one that assists researchers in evaluating the performance of a GPU architecture when a matrix-vector multiplication is required.
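Several of the contributions summarized above center on sparse matrix-vector multiplication. As a reference point only, here is the textbook sequential CSR (compressed sparse row) kernel those works build on, in plain Java rather than the authors' MPI/GPU implementations:

```java
// Minimal sequential CSR (compressed sparse row) matrix-vector multiply,
// the kernel underlying several of the contributions above. This is the
// textbook formulation, not the authors' MPI/GPU code.
public class CsrSpmv {
    // CSR storage: nonzero values, a column index per value, and row offsets.
    static double[] multiply(double[] values, int[] colIdx, int[] rowPtr, double[] x) {
        int rows = rowPtr.length - 1;
        double[] y = new double[rows];
        for (int i = 0; i < rows; i++) {
            double sum = 0.0;
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                sum += values[k] * x[colIdx[k]]; // accumulate row i
            }
            y[i] = sum;
        }
        return y;
    }

    public static void main(String[] args) {
        // 3x3 matrix [[2,0,1],[0,3,0],[4,0,5]] in CSR form.
        double[] values = {2, 1, 3, 4, 5};
        int[] colIdx = {0, 2, 1, 0, 2};
        int[] rowPtr = {0, 2, 3, 5};
        double[] y = multiply(values, colIdx, rowPtr, new double[]{1, 1, 1});
        System.out.println(java.util.Arrays.toString(y)); // [3.0, 3.0, 9.0]
    }
}
```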
... The best-suited and, when required, most powerful supercomputer architectures are selected for each project. DEISA also supports multisite supercomputing for many independent supercomputer jobs (e.g., parameter sweeps) through various technical means (e.g., UNICORE [73], DESHL, Globus [32], the Application Hosting Environment [85]), using the DEISA global file system and its single name space. Data management is also supported via GridFTP. ...
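The multisite modality mentioned above amounts to fanning out many independent jobs that differ only in their parameters. A minimal sketch follows, in which submit() is a hypothetical stand-in for the UNICORE/DESHL/Globus submission path, and the site names, command, and file paths are invented for illustration:

```java
import java.util.List;

// Sketch of a multisite parameter sweep: many independent jobs that differ
// only in one parameter, fanned out across sites. submit() is a hypothetical
// stand-in for the real UNICORE/DESHL/Globus submission machinery.
public class ParameterSweep {
    static void submit(String site, String jobScript) {
        // Placeholder: a real implementation would call a middleware client.
        System.out.printf("[%s] %s%n", site, jobScript);
    }

    public static void main(String[] args) {
        List<String> sites = List.of("siteA", "siteB", "siteC"); // invented names
        for (int i = 0; i < 12; i++) {
            // Round-robin over sites; shared state would live on a DEISA-style
            // global file system, so the jobs need no inter-job communication.
            String script = "simulate --param " + i + " --out /global/sweep/run" + i;
            submit(sites.get(i % sites.size()), script);
        }
    }
}
```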
Article
This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.
Chapter
A Grid enables remote, secure access to a set of distributed, networked computing and data resources. Clouds are a natural complement to Grids towards the provisioning of IT as a service. To “Grid-enable” applications, users have to cope with: the complexity of the Grid infrastructure; heterogeneous compute and data nodes; a wide spectrum of Grid middleware tools and services; and the e-science application architectures, algorithms and programs. For clouds, on the other hand, users have few possibilities to adjust their application to the underlying cloud architecture, because it is transparent to the user. Therefore, the aim of this chapter is to guide users through the important stages of implementing HPC applications on Grid and cloud infrastructures, together with a discussion of important challenges and their potential solutions. As a case study for Grids, we present the Distributed European Infrastructure for Supercomputing Applications (DEISA) and describe the DEISA Extreme Computing Initiative (DECI) for porting and running scientific grand-challenge applications on the DEISA Grid. For clouds, we present several case studies of HPC applications running on Amazon's Elastic Compute Cloud (EC2) and its recent Cluster Compute Instances for HPC. The chapter concludes with the author's top ten rules for building sustainable Grid and cloud e-infrastructures.
Conference Paper
As scientific workflows are becoming more complex and apply compute-intensive methods to increasingly large data volumes, access to HPC resources is becoming mandatory. We describe the development of a novel plug-in for the Taverna workflow system, which provides transparent and secure access to HPC/grid resources via the UNICORE grid middleware, while maintaining the ease of use that has been the main reason for the success of scientific workflow systems. A use case from the bioinformatics domain demonstrates the potential of the UNICORE plug-in for Taverna by creating a scientific workflow that executes the central parts in parallel on a cluster resource.
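The fan-out/fan-in shape of such a workflow can be sketched as below. The local thread pool stands in for the per-chunk cluster jobs that the UNICORE plug-in would actually submit, and the input file names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the fan-out/fan-in shape of the Taverna use case: the central,
// data-parallel workflow stage runs concurrently. The local thread pool is a
// stand-in for per-chunk job submissions via the UNICORE grid middleware.
public class ParallelStage {
    public static void main(String[] args) throws Exception {
        List<String> chunks = List.of("seq1.fa", "seq2.fa", "seq3.fa"); // invented inputs
        ExecutorService pool = Executors.newFixedThreadPool(chunks.size());
        List<Future<String>> results = new ArrayList<>();
        for (String chunk : chunks) {
            // Fan-out: each chunk would become one cluster job in the real plug-in.
            results.add(pool.submit(() -> "analysed:" + chunk));
        }
        for (Future<String> r : results) {
            System.out.println(r.get()); // fan-in: collect results in submission order
        }
        pool.shutdown();
    }
}
```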