Conference Paper

Grid services for Commercial Simulation Packages

Abstract

Collaborative research has facilitated the development of distributed systems that provide users with non-trivial access to geographically dispersed resources administered across multiple computer domains. The term grid computing is popularly used to refer to such distributed systems. Scientific simulations have traditionally been the primary beneficiary of grid computing. The application of this technology to simulation in industry has, however, been negligible. This research investigates grid technology in the context of Commercial Simulation Packages (CSPs). Towards this end, the paper (a) identifies six CSP-specific grid services, (b) identifies grid middleware that could be used to provide these CSP-specific grid services, and (c) lists CSPs that include vendor-specific solutions for these grid services. The authors hope that this research will lead to an increased awareness of the potential of grid computing among simulation end users and CSP vendors.


Article
In studying systems by simulation, choosing appropriate software can be a difficult task because so many packages are available, yet little research focuses on classifying simulation tools/languages and comparing them. This paper surveys taxonomies of discrete simulation software and then presents six taxonomies for them. The first taxonomy concerns the different worldview approaches: event scheduling, activity scanning, three-phase and process interaction. The second is based on how the software handles entities, and the third on whether the software offers programming capabilities. The fourth taxonomy is based on how discrete simulation software aids the construction of users' applications. The fifth relates to executing the model, and the sixth concerns the level of autonomy given to elements of simulation models. More than 60 simulation packages are then evaluated against the taxonomies provided. Finally, the major challenges for next-generation simulation packages are presented, based on observations derived from the evaluation.
Article
Canadian Blood Services produces and distributes approximately 850,000 units of red cells annually. These units are distributed through ten production and/or distribution sites. Each distribution site acts as a regional hub serving between 20 and 110 hospital customers. Distribution sites hold a target inventory that is based on an integer number of median days demand on hand. In this paper, we report on the development and use of a simulation-based methodology to evaluate network inventory policies for regional blood distribution sites in Canada. A generic framework was developed to represent each of the ten different regional networks. The modelling approach was validated by comparing model results against data from two networks. Once validated, ten instances were developed. For each model instance, a set of experiments was conducted, from which response surfaces were created. Non-linear optimization methods were applied to identify optimal supplier/consumer inventory policies using the response surfaces. We conclude that a generic modelling framework can be useful for regional blood supply chains, but suggest that at least four instances are necessary to recoup the effort of building a reusable model.
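The optimization step described above follows a common simulation-analytics pattern: fit a response surface to the costs observed in designed experiments, then optimize over the fitted surface. A minimal illustrative Python sketch of that pattern follows; the data points, the quadratic surface and the policy bounds are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of the general pattern: fit a quadratic response surface to
# simulated (inventory policy, cost) observations, then locate an optimal
# policy with a nonlinear optimizer. All values below are illustrative.
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for simulation output: policy x = days of demand on
# hand, response y = average cost observed over replicated runs.
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([9.1, 6.4, 5.2, 5.0, 5.9, 7.8])

# Fit a quadratic surface y ~ a*x^2 + b*x + c by least squares.
a, b, c = np.polyfit(x, y, deg=2)
surface = lambda p: a * p[0] ** 2 + b * p[0] + c

# Minimize the fitted surface within the experimental region only.
result = minimize(surface, x0=[4.0], bounds=[(2.0, 7.0)])
print(f"estimated optimal policy: {result.x[0]:.2f} days of demand on hand")
```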
Conference Paper
Simulation software such as Simul8 is used to study complex systems in many areas. Experimentation can be time consuming. If a study requires many experiments, and each experiment requires multiple replications, then even with short run times the overall time to perform the study can be large. If models take longer to run, then this time can be excessive. Many organisations have commodity PCs that often remain idle or run applications that do not demand the processing power of a typical contemporary PC. Volunteer Computing is a form of Desktop Grid Computing that aims to use vast numbers of home computers to support computing applications. The SZTAKI Desktop Grid (SZDG) uses a modified form of the volunteer computing software BOINC to implement an institution-wide Desktop Grid. To investigate the feasibility of using Volunteer Computing with Simul8, this paper reports on experiences of porting Simul8 to a SZDG.
Article
Discrete-event simulation is one of the most popular modelling techniques. It has developed significantly since the inception of computer simulation in the 1950s, most of this in line with developments in computing. The progress of simulation from its early days is charted with a particular focus on recent history. Specific developments in the past 15 years include visual interactive modelling, simulation optimization, virtual reality, integration with other software, simulation in the service sector, distributed simulation and the use of the worldwide web. The future is then speculated upon. Potential changes in model development, model use, the domain of application for simulation and integration with other simulation approaches are all discussed. The desirability of continuing to follow developments in computing, without significant developments in the wider methodology of simulation, is questioned. Journal of the Operational Research Society (2005) 56, 619–629. doi:10.1057/palgrave.jors.2601864. Published online 22 September 2004.
Conference Paper
The vision of grid computing is to make computational power, storage capacity, data and applications available to users as readily as electricity and other utilities. Grid infrastructures and applications have traditionally been geared towards dedicated, centralized, high-performance clusters running on UNIX flavour operating systems (commonly referred to as cluster-based grid computing). This can be contrasted with desktop-based grid computing, which refers to the aggregation of non-dedicated, de-centralized, commodity PCs connected through a network and running (mostly) the Microsoft Windows™ operating system. Large-scale adoption of such Windows™-based grid infrastructure may be facilitated by grid-enabling existing Windows applications. This paper presents the WinGrid™ approach to grid-enabling existing Windows™-based Commercial-Off-The-Shelf (COTS) simulation packages (CSPs). Through the use of a case study developed in conjunction with Ford Motor Company, the paper demonstrates how experimentation with the CSP Witness™ and FIRST can achieve a linear speedup when WinGrid™ is used to harness idle PC computing resources. This, combined with the lessons learned from the case study, has encouraged us to develop web service extensions to WinGrid™. It is hoped that this will facilitate wider acceptance of WinGrid™ among enterprises having stringent security policies in place.
Conference Paper
We address the problem of how many workers should be allocated for executing a distributed application that follows the master-worker paradigm, and how to assign tasks to workers in order to maximize resource efficiency and minimize application execution time. We propose a simple but effective scheduling strategy that dynamically measures the execution times of tasks and uses this information to adjust the number of workers so as to achieve a desirable efficiency while minimizing the loss of speedup. The scheduling strategy has been implemented using an extended version of MW, a runtime library that allows quick and easy development of master-worker computations on a computational grid. We report on an initial set of experiments conducted on a Condor pool using our extended version of MW to evaluate the effectiveness of the scheduling strategy.
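As a rough illustration of the strategy summarized above (not the MW library's actual algorithm), the sketch below measures task execution times and grows or shrinks the worker pool to hold a target efficiency; the efficiency model and all constants are assumptions made for this sketch.

```python
# Illustrative adaptive master-worker pool sizing; not MW itself.
from statistics import mean

TARGET_EFFICIENCY = 0.8

def estimate_efficiency(task_times, n_workers, dispatch_overhead=0.05):
    """Crude efficiency model (an assumption for this sketch): workers stay
    busy while mean task time dominates the master's per-worker overhead."""
    avg = mean(task_times)
    return avg / (avg + dispatch_overhead * n_workers)

def adjust_workers(task_times, n_workers, min_workers=1, max_workers=64):
    """Grow the pool while efficiency stays above target; shrink it when
    extra workers cost more efficiency than they add speedup."""
    eff = estimate_efficiency(task_times, n_workers)
    if eff < TARGET_EFFICIENCY and n_workers > min_workers:
        return n_workers - 1
    if eff > TARGET_EFFICIENCY and n_workers < max_workers:
        return n_workers + 1
    return n_workers

# Example: execution times (seconds) measured for the last batch of tasks.
workers = adjust_workers([1.9, 2.1, 2.0, 1.8], n_workers=32)
print("next round will use", workers, "workers")
```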
Conference Paper
The High Level Architecture (HLA) is an IEEE standard for interoperating simulation federates. In this paper, we describe a set of requirements that simulation packages need to satisfy in order to be made interoperable using the HLA standard. AutoSched AP, a commercial off-the-shelf simulation package (CSP) widely used in the semiconductor industry, was used as a case study for this interoperation exercise. We demonstrate that a straightforward customization of the CSP through a middleware that provides standard functions for interoperation may not provide a satisfactory solution: a specially optimized time synchronization mechanism needs to be installed to ensure good execution efficiency. Experimental results using a Borderless Fab model comprising two factory models show that an optimized time synchronization mechanism results in an execution time ten times better than a straightforward application of the HLA runtime infrastructure's time synchronization mechanism.
Conference Paper
This paper describes our efforts to provide a grid-based parallel visualization environment for visualizing massive datasets in parallel. The visualization environment is implemented as a visualization service on the grid. The paper focuses on deploying a proxy process on the master node of each cluster in the grid, to ensure that Globus jobs can be scheduled on internal nodes of clusters that have only local IP addresses. We have conducted an experiment to visualize in parallel a dataset of computational domains in a grid environment with PC clusters.
Article
In an attempt to investigate blood unit ordering policies, researchers have created a discrete-event model of the UK National Blood Service (NBS) supply chain in the Southampton area of the UK. The model was created using Simul8, a commercial-off-the-shelf discrete-event simulation package (CSP). However, as more hospitals were added to the model, it was discovered that the time needed to perform a single simulation increased severely. It has been claimed that distributed simulation, a technique that uses the resources of many computers to execute a simulation model, can reduce simulation runtime. Further, an emerging standardized approach exists that supports distributed simulation with CSPs. These CSP Interoperability (CSPI) standards are compatible with the IEEE 1516 standard, the High Level Architecture, the de facto interoperability standard for distributed simulation. To investigate whether distributed simulation can reduce the execution time of the NBS supply chain simulation, this paper presents experiences of creating a distributed version of the CSP Simul8 according to the CSPI/HLA standards. It shows that the distributed version of the simulation does indeed run faster once the model reaches a certain size. Further, we argue that understanding the relationship between model features and performance is key. This is illustrated by experimentation with two different protocol implementations, using Time Advance Request (TAR) and Next Event Request (NER). Our contribution is therefore the demonstration that distributed simulation is a useful technique for the timely execution of supply chain simulations of this type, and that careful analysis of model features can further increase performance.
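For readers unfamiliar with the two HLA time-management services named above, the following sketch contrasts the TAR and NER patterns against a hypothetical RTI wrapper object (`rti`); real HLA bindings differ in names and detail.

```python
# Sketch of the two conservative time-advance patterns, written against an
# assumed RTI wrapper; not any real HLA binding's API.

def advance_with_tar(rti, current_time, step):
    """Time Advance Request: march the federate forward in fixed steps,
    receiving any events with timestamps up to the granted time."""
    rti.time_advance_request(current_time + step)
    return rti.wait_for_grant()   # grant == current_time + step

def advance_with_ner(rti, horizon):
    """Next Event Request: jump straight to the timestamp of the next
    event (or the horizon), avoiding empty intermediate steps."""
    rti.next_event_request(horizon)
    return rti.wait_for_grant()   # grant == next event time <= horizon
```

Which pattern wins depends on the model: TAR suits models with events at nearly every step, while NER can skip idle periods in sparse models.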
Article
The possibilities of distributed simulation have been discussed for well over a decade, yet there is only limited evidence of its implementation, particularly within industry. The reasons for this are discussed by identifying the potential applications of distributed simulation and linking these to the ways in which simulation is practiced. The extent to which distributed simulation is a demand led or technology led innovation is discussed. A possible contradiction between distributed simulation and good modeling practice is also identified, that is, the ability to develop large/complex models against the recommendation to develop simple models. This leads to three conclusions: not everyone needs distributed simulation, distributed simulation is both demand and technology led, and the possibilities of distributed simulation are both beneficial and dangerous to modeling practice.
Conference Paper
"Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In this article, we define this new field. First, we review the "Grid problem," which we define as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources-what we refer to as virtual organizations. In such settings, we encounter unique authentication, authorization, resource access, resource discovery, and other challenges. It is this class of problem that is addressed by Grid technologies. Next, we present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. We describe requirements that we believe any such mechanisms must satisfy, and we discuss the central role played by the intergrid protocols that enable interoperability among different Grid systems. Finally, we discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. We maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Article
Motivation: In silico experiments in bioinformatics involve the co-ordinated use of computational tools and information repositories. A growing number of these resources are being made available with programmatic access in the form of Web services. Bioinformatics scientists will need to orchestrate these Web services in workflows as part of their analyses. Results: The Taverna project has developed a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application which provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the simple conceptual unified flow language (Scufl), whereby each step within a workflow represents one atomic task. Two examples are used to illustrate the ease with which in silico experiments can be represented as Scufl workflows using the workbench application. Availability: The Taverna workflow system is available as open source and can be downloaded with example Scufl workflows from http://taverna.sourceforge.net
Conference Paper
The convergence of conventional Grid computing with public resource computing (PRC) offers potential benefits in the enterprise setting. For this work we took the popular PRC toolkit BOINC and used it to execute a previously monolithic Microsoft Excel financial model across several commodity computers. Our experience indicates that speedup approaching linear may be realised for certain scenarios, and that this approach offers a viable route to leveraging idle desktop PCs in the enterprise.
Conference Paper
We describe a number of applications of simulation methods to practical problems in finance and insurance. The first entails the simulation of a two-stage model of a property-casualty insurance operation. The second application simulates the operation of an insurance regime for home equity conversion mortgages (also known as reverse mortgages). The third is an application of simulation in the context of Value at Risk, a widely-used measure for assessing the performance of portfolios of assets and/or liabilities. We conclude with an application of simulation in the testing of the efficient market hypothesis of the U.S. stock market.
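To make the Value at Risk application concrete, here is a minimal Monte Carlo VaR sketch, assuming normally distributed one-day portfolio returns; all parameters are illustrative and unrelated to the paper's models.

```python
# Minimal Monte Carlo Value-at-Risk sketch under an assumed normal model.
import numpy as np

rng = np.random.default_rng(42)
portfolio_value = 1_000_000      # current portfolio value in dollars
mu, sigma = 0.0005, 0.01         # assumed daily mean return and volatility

# Simulate one-day profit/loss outcomes for the portfolio.
returns = rng.normal(mu, sigma, size=100_000)
pnl = portfolio_value * returns

# 99% one-day VaR: the loss exceeded in only 1% of simulated scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR: ${var_99:,.0f}")
```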
Conference Paper
As new technologies become available we need to identify their potential for application. In some cases new technologies are developed in response to specific needs (demand led); in others, the technologies are developed and suitable applications are then sought (technology led). In this paper, the potential for applying distributed simulation is discussed. Three modes of simulation practice are described: software engineering; process of organizational change; and facilitation. The ways in which distributed simulation might aid these modes of practice are identified, as well as some of the difficulties in adopting distributed simulation. The extent to which distributed simulation is an example of a demand led or a technology led innovation is also discussed. The emphasis is particularly on the practice of simulation in business, where distributed simulation has to date had little impact.
Conference Paper
For large international companies with their own simulation team it is often hard to select new discrete event simulation software. Often, preferences and application areas between countries differ, and simulation software already in use influences the outcome of the selection process. Available selection methods do not suffice in such cases. Therefore, a two-phase evaluation and selection methodology is proposed. Phase one quickly reduces the long-list to a short-list of packages. Phase two matches the requirements of the company with the features of the simulation package in detail. Different methods are used for a detailed evaluation of each package. Simulation software vendors participate in both phases. The approach was tested for the Accenture world-wide simulation team. After the study, we can conclude that the methodology was effective in terms of quality and efficient in terms of time. It can easily be applied for other large organizations with a team of simulation specialists.
Conference Paper
Networks of workstations have become a popular architecture for distributed simulation due to their high availability, as opposed to specialized multiprocessor computers. Networks of workstations are also a well-suited framework for distributed simulation systems based on the High Level Architecture (HLA). However, using workstations in a distributed simulation system may eventually affect the availability of computing resources for the users who need their computers as working tools. Thus, for coarse-grained distributed simulation it may be desirable to let users control to what extent their workstations should participate in a distributed simulation. The authors present a resource sharing system (RSS) that provides a client user interface on each potentially participating workstation. With the RSS clients, users of workstations can control the availability of their computer for the HLA simulation federation. An RSS manager keeps track of available computing resources and balances the participating HLA federates among the available workstations.
Conference Paper
The design, implementation, and performance of the Condor scheduling system, which operates in a workstation environment, are presented. The system aims to maximize the utilization of workstations with as little interference as possible between the jobs it schedules and the activities of the people who own the workstations. It identifies idle workstations and schedules background jobs on them. When the owner of a workstation resumes activity at a station, Condor checkpoints the remote job running on the station and transfers it to another workstation. The system guarantees that the job will eventually complete, and that very little, if any, work will be performed more than once. A performance profile of the system is presented, based on data accumulated from 23 stations over one month.
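The policy Condor implements can be restated as a small control loop: run guest jobs on idle workstations, and checkpoint and migrate them when the owner returns. The toy sketch below captures that loop; the helper callbacks are hypothetical placeholders, not Condor's actual interfaces.

```python
# Toy restatement of the idle-workstation scheduling policy; the callbacks
# (seconds_since_input, start_job, checkpoint, migrate) are hypothetical.
import time

IDLE_THRESHOLD_S = 300  # treat a workstation as idle after 5 quiet minutes

def schedule_loop(stations, job_queue, seconds_since_input,
                  start_job, checkpoint, migrate):
    """stations: list of dicts like {"name": "ws1", "job": None}."""
    while job_queue or any(s["job"] for s in stations):
        for station in stations:
            if seconds_since_input(station) >= IDLE_THRESHOLD_S:
                # Idle machine with no guest job: give it work.
                if station["job"] is None and job_queue:
                    station["job"] = start_job(station, job_queue.pop(0))
            elif station["job"] is not None:
                # Owner is back: checkpoint the guest job and requeue the
                # saved image, so little if any work is repeated.
                image = checkpoint(station["job"])
                job_queue.append(migrate(image))
                station["job"] = None
        time.sleep(1)
```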
Article
The design and implementation of a national computing system and data grid has become a reachable goal from both the computer science and computational science point of view. A distributed infrastructure capable of sophisticated computational functions can bring many benefits to scientific work, but poses many challenges, both technical and socio-political. Technical challenges include having basic software tools, higher-level services, functioning and pervasive security, and standards, while socio-political issues include building a user community, adding incentives for sites to be part of a user-centric environment, and educating funding sources about the needs of this community. This paper details the areas relating to Grid research that we feel still need to be addressed to fully leverage the advantages of the Grid. Keywords: Grid computing, survey
Article
In this paper, we review the motivations for computational steering and introduce the RealityGrid steering library and associated software. We then outline the capabilities of the library and describe the service-oriented architecture of the latest implementation, in which the steering controls of the application are exposed through an OGSI-compliant Grid service.
Article
Providing Grid users with a widely accessible, homogeneous and easy-to-use graphical interface is the foremost aim of Grid-portal development. These portals, if designed and implemented in a proper and user-friendly way, might fuel the dissemination of Grid technologies, thereby promoting the shift of Grid usage from research into real-life industrial application, which will hopefully happen in the foreseeable future. This paper highlights the key issues in Grid-portal development and introduces the P-GRADE Portal being developed at MTA SZTAKI. The portal allows users to manage the whole life-cycle of executing a parallel application in the Grid: editing workflows, submitting jobs relying on Grid credentials, and analyzing the monitored trace data by means of visualization.
Article
It is probably true, at least in part, that each generation assumes that the way it operates is the only way to go about things. It is easy to forget that different approaches were used in the past and hard to imagine what other approaches might be used in the future. We consider the symbiotic relationship between general developments in computing, especially in software and parallel developments in discrete event simulation. This shows that approaches other than today's excellent simulation packages were used in the past, albeit with difficulty, to conduct useful simulations. Given that few current simulation packages make much use of recent developments in computer software, particularly in component-based developments, we consider how simulation software might develop if it utilized these developments. We present a brief description of DotNetSim, a prototype component-based discrete event simulation package to illustrate our argument. Journal of Simulation (2006) 0, 000-000. doi:10.1057/palgrave.jos.4250004
Article
Discrete-event simulation (DES) has been with us for around 50 years. During this time, the field has seen significant progress as witnessed by the plethora of software packages and reported applications. But what of the future? Where does the field of DES need to go in the next 10 years? As part of this first issue of the Journal of Simulation (JOS), the Editors-in-Chief have surveyed the Editorial Board for their answers to this question. In particular, those surveyed were asked to comment on four areas: simulation technology, simulation experimentation and analysis, simulation applications and simulation practice. The findings from the 13 responses obtained are summarized under these same headings in the JOS 2006 Survey. Journal of Simulation (2006) 1, 1–6. doi:10.1057/palgrave.jos.4250002
Article
Discrete-event simulation first emerged in the late 1950s and has grown steadily in popularity; it is now recognized as the most frequently used of the classical Operational Research techniques across a range of industries—manufacturing, travel, finance, health and beyond. I have been engaged with such simulation from 1964 up to the present day. This paper reviews the history and evolution of discrete-event simulation from my personal perspective, with a particular interest in software development up to 1992. Extrapolating from that history, the paper goes on to comment on the prospective continuing evolution of simulation and its software.
Article
Powerful services and applications are being integrated and packaged on the Web in what the industry now calls "cloud computing".
Article
The vision of grid computing is to make computational power, storage capacity, data and applications available to users as readily as electricity and other utilities. Grid infrastructures and applications have traditionally been geared towards dedicated, centralized, high-performance clusters running on UNIX ‘flavour’ operating systems (commonly referred to as cluster-based grid computing). This can be contrasted with desktop-based grid computing that refers to the aggregation of non-dedicated, de-centralized, commodity PCs connected through a network and running (mostly) the Microsoft Windows operating system. Large-scale adoption of such Windows-based grid infrastructure may be facilitated via grid enabling existing Windows applications. This paper presents the WinGrid approach to grid-enabling existing Windows-based commercial-off-the-shelf simulation packages (CSPs). Through the use of two case studies developed in conjunction with a major automotive company and a leading investment bank, respectively, the contribution of this paper is the demonstration of how experimentation with the CSP Witness (Lanner Group) and the CSP Analytics (SunGard Corporation) can achieve speedup when using WinGrid middleware on both dedicated and non-dedicated grid nodes. It is hoped that this research would facilitate wider acceptance of desktop grid computing among enterprises interested in a low-intervention technological solution to speeding up their existing simulations. Copyright © 2009 John Wiley & Sons, Ltd.
Article
The exploitation of idle cycles on pervasive desktop PC systems offers the opportunity to increase the available computing power by orders of magnitude (10x - 1000x). However, for desktop PC distributed computing to be widely accepted within the enterprise, the systems must achieve high levels of efficiency, robustness, security, scalability, manageability, unobtrusiveness, and openness/ease of application integration. We describe the Entropia distributed computing system as a case study, detailing its internal architecture and philosophy in attacking these key problems. Key aspects of the Entropia system include the use of: 1) binary sandboxing technology for security and unobtrusiveness, 2) a layered architecture for efficiency, robustness, scalability and manageability, and 3) an open integration model to allow applications from many sources to be incorporated. Typical applications for the Entropia system include molecular docking, sequence analysis, chemical structure modeling, and risk management. The applications come from a diverse set of domains including virtual screening for drug discovery, genomics for drug targeting, material property prediction, and portfolio management. In all cases, these applications scale to many thousands of nodes and have no dependencies between tasks. We present representative performance results from several applications that illustrate the high performance, linear scaling, and overall capability presented by the Entropia system.
Article
Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.
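The communicator-based adaptation mentioned above can be illustrated with mpi4py: split the world communicator by site so communication-heavy phases stay within a site. How a rank learns its site (here, a hypothetical SITE_ID environment variable) is an assumption for this sketch, not MPICH-G2's actual mechanism.

```python
# Hedged mpi4py sketch of topology-aware communicators: reduce within each
# site first, so only one value per site crosses the wide-area links.
import os
from mpi4py import MPI

world = MPI.COMM_WORLD
site_id = int(os.environ.get("SITE_ID", "0"))  # assumed site label

# Ranks sharing a SITE_ID end up in the same sub-communicator.
site_comm = world.Split(color=site_id, key=world.Get_rank())

# Local reduction within the site; a second stage over one rank per site
# (omitted here) would combine the per-site results across the grid.
local_sum = site_comm.allreduce(world.Get_rank(), op=MPI.SUM)
if site_comm.Get_rank() == 0:
    print(f"site {site_id}: local sum of ranks = {local_sum}")
```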
Article
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer or Grid computing. The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, there are many applications that can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started that aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to act as a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure. This paper aims to present the state-of-the-art of Grid computing and attempts to survey the m...
Article
An increased need for collaborative research among different organizations, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that can provide users non-trivial access to geographically dispersed computing resources (processors, storage, applications, data, instruments, etc.) that are administered in multiple computer domains. The term grid computing or grids is popularly used to refer to such distributed systems. A broader definition of grid computing includes the use of computing resources within an organization for running organization-specific applications. This research is in the context of using grid computing within an enterprise to maximize the use of available hardware and software resources for processing enterprise applications. Large-scale scientific simulations have traditionally been the primary beneficiary of grid computing. The application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited by simulation practitioners using Windows-based commercially available simulation packages to model simulations in industry. These packages are commonly referred to as Commercial Off-The-Shelf (COTS) Simulation Packages (CSPs). The study identifies several higher-level grid services that could potentially be used to support the practice of simulation in industry. It proposes a grid computing framework to investigate these services in the context of CSP-based simulations. This framework is called the CSP-Grid Computing (CSP-GC) Framework. Each identified higher-level grid service in this framework is referred to as a CSP-specific service. A total of six case studies are presented to experimentally evaluate how grid computing technologies can be used together with unmodified simulation packages to support some of the CSP-specific services. The contribution of this thesis is the CSP-GC framework, which identifies how simulation practice in industry may benefit from the use of grid technology. A further contribution is the recognition of specific grid computing software (grid middleware) that can possibly be used together with existing CSPs to provide grid support. With its focus on end-users and end-user tools, it is intended that this research will encourage wider adoption of grid computing in the workplace and that simulation users will derive benefit from using this technology.
Conference Paper
BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems.
Conference Paper
Fault tolerance is essential to the further development of desktop grid computing systems in order to guarantee continuous and reliable execution of tasks in spite of failures. In a desktop grid computing environment, volunteers are often susceptible to volunteer autonomy failures, such as volatility failures and interference failures, in the middle of executing tasks, because desktop grid computing maximally respects the autonomy of volunteers. These failures result in an independent livelock problem (i.e., the delay and blocking of the entire execution of a job) and should therefore be considered in a scheduling mechanism. In this work, in order to tolerate volunteer autonomy failures, we propose a new fault-tolerant scheduling mechanism. First, we specify volunteer autonomy failures and the independent livelock problem. Then, we propose a measure of volunteer availability which reflects the degree of volunteer autonomy failures. Finally, we propose a fault-tolerant scheduling mechanism based on volunteer availability (called VAFTSM).
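In the spirit of the mechanism described above (though not the paper's actual VAFTSM algorithm), the sketch below estimates each volunteer's availability from its failure history and assigns tasks to the most reliable volunteers first; the availability measure here is an assumed simple success ratio.

```python
# Illustrative availability-based task assignment; names and the
# availability estimate are assumptions, not the paper's definitions.
def availability(history):
    """Fraction of past executions a volunteer finished without a
    volatility/interference failure; 0.5 if it has no history yet."""
    if not history:
        return 0.5
    return sum(history) / len(history)

def assign_tasks(tasks, volunteers, histories):
    """Give tasks to the most-available volunteers first, so tasks are
    less likely to be delayed or blocked by volunteer autonomy failures."""
    ranked = sorted(volunteers, key=lambda v: availability(histories[v]),
                    reverse=True)
    return dict(zip(tasks, ranked))

# Example: 1 = task completed, 0 = failed mid-execution.
histories = {"v1": [1, 1, 0, 1], "v2": [0, 0, 1], "v3": []}
print(assign_tasks(["t1", "t2", "t3"], ["v1", "v2", "v3"], histories))
```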
Conference Paper
The paper recognises that good communication and interaction are key factors in the success of a simulation project, and suggests that groupware technology can increase the chances of success. To underline this, the paper reviews the process of simulation to illustrate the amount of communication and interaction that must take place during a simulation project. The paper then discusses computer supported cooperative work and groupware, a research field and information technology that has successfully supported communication and interaction in other industries. To illustrate how groupware may be used by the simulation consultant, net-conferencing, exemplified by Microsoft's NetMeeting, is presented. The paper ends with some observations on the future of these applications in simulation modelling.
Article
The rapid increase in the speed and capacity of commonly available PCs is providing an opportunity to use distributed computing to tackle major modeling tasks such as climate simulation. The CLIMATEPREDICTION.COM project has developed the software necessary to carry out such a project in the public domain. The paper describes the development of the demonstration release software, along with the computational challenges such as data mining, visualization, and distributed database management.
Article
This paper describes the definition and implementation of an OpenMP-like set of directives and library routines for shared memory parallel programming in Java. A specification of the directives and routines is proposed and discussed. A prototype implementation, consisting of a compiler and a runtime library, both written entirely in Java, is presented, which implements most of the proposed specification. Some preliminary performance results are reported. Copyright © 2001 John Wiley & Sons, Ltd.
Article
Computational science portals are emerging as useful and necessary interfaces for performing operations on the Grid. The Grid Portal Development Kit (GPDK) facilitates the development of Grid portals and provides several key reusable components for accessing various Grid services. A Grid portal provides a customizable interface allowing scientists to perform a variety of Grid operations including remote program submission, file staging, and querying of information services from a single, secure gateway. The GPDK leverages existing Globus/Grid middleware infrastructure as well as commodity Web technology, including Java Server Pages and servlets. The design and architecture of the GPDK are presented, along with a discussion of the portal-building capabilities of the GPDK, which allow application developers to build customized portals more effectively by reusing the common core services provided by the GPDK.
Article
INTRODUCTION: The Lorentz-contracted electromagnetic fields of two fully stripped heavy ions passing each other at ultra-relativistic energies become very strong, so that many lepton pairs are produced out of the vacuum. In these collisions, the heavy ions are sufficiently energetic and massive that they follow straight-line trajectories. We can also assume that the collisions are peripheral, without nuclear interactions. The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN have motivated great interest in studying this electromagnetic phenomenon. In a recent paper [1], the authors obtained the nonperturbative amplitudes for free electron-positron pair production by solving the two-center Dirac equation in the ultrarelativistic limit. Although the transition amplitudes differ from the perturbative result, the cross section integrated over the impact parameter is identical to the perturbative result [2]. Similar calculati
Article
With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.