Article

Network Based High Performance Computing

Abstract

In the past few years there has been a change in the view of high performance applications and parallel computing. Initially such applications were targeted at dedicated parallel machines; the trend is now toward building meta-applications composed of several modules that exploit heterogeneous platforms and employ hybrid forms of parallelism. The aim of this paper is to propose a model of virtual parallel computing. The virtual parallel computing system provides a flexible object-oriented software framework that makes it easy for programmers to write a variety of parallel applications.

Keywords: Applet, Efficiency, Java, LAN

I. INTRODUCTION

The power of the Internet and intranets can be used to integrate remote and heterogeneous computers into a single global computing facility for parallel and collaborative work. Gaining control over the resources of Internet-based computers for parallel computing introduces difficulties that were never addressed by parallel computing in a LAN (local area network) environment (3): the heterogeneity of the participating systems, the administration of distributed applications, the security concerns of users, and the matching of applications to users. The proposed virtual parallel computing system examines the high-performance computing aspect. Its scope is the possibility of carrying out computations that require very large computational power through the cooperation of processors on a LAN. Execution time and the result of the computation are the only parameters of interest. A further simplification is that the system deals mostly with the CPU time of the processors and less with other resources such as memory. Finally, the focus is on very large computations.
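The model the abstract sketches (one large computation decomposed into independent tasks and farmed out to cooperating processors, with only the result and the execution time of interest) can be illustrated with a minimal, hypothetical Java sketch. This is not the paper's own framework: the names VirtualParallelMachine and Task and the summation workload are invented, and the fixed thread pool merely stands in for the pool of cooperating LAN processors.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// A task is any independent unit of work that returns a partial result.
interface Task extends Callable<Long> {}

public class VirtualParallelMachine {
    public static void main(String[] args) throws Exception {
        // Each pool thread stands in for one cooperating LAN processor; a real
        // system would dispatch tasks to remote hosts (e.g. to Java applets).
        ExecutorService lan = Executors.newFixedThreadPool(4);

        // Decompose one large computation into independent slices.
        List<Task> tasks = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final long lo = i * 250_000L, hi = lo + 250_000L;
            tasks.add(() -> {
                long sum = 0;
                for (long n = lo; n < hi; n++) sum += n;  // stand-in workload
                return sum;
            });
        }

        // Only the result and the execution time are of interest.
        long t0 = System.nanoTime();
        long total = 0;
        for (Future<Long> f : lan.invokeAll(tasks)) total += f.get();
        System.out.printf("result=%d  time=%.1f ms%n", total, (System.nanoTime() - t0) / 1e6);
        lan.shutdown();
    }
}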

References

Conference Paper
This paper describes DynamicPVM, an extension to PVM (Parallel Virtual Machine) [1]. PVM enables users to write parallel applications using message-passing primitives and statically places the parallel tasks on a collection of nodes; system schedulers schedule atomic jobs over a predefined number of nodes. DynamicPVM addresses the problem of scheduling parallel tasks over a set of nodes. It therefore has to integrate a process checkpointing, migration, and restart mechanism with the PVM runtime support system. DynamicPVM facilitates efficient use of existing computational resources for jobs consisting of parallel subtasks. Typical target HPC platforms for DynamicPVM are multi-user, multitasking, loosely coupled processors. Introduction: The number of workstations in industrial and academic institutions has grown tremendously over the past years. A migration from centralized mainframes to collections of these high-performance workstations connected by LANs ...
Article
Full-text available
Amdahl's law predicts the time reduction for a fixed problem size. If you instead apply P processors to a task with serial fraction f, scaling the problem so that it takes the same amount of time as before, the speedup is f + P(1 - f) = P - f(P - 1), and the serial fraction f does not theoretically limit parallel speedup if the workload scales in its parallel component.
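As a quick numeric illustration of the difference (not from the cited article itself), the following Java snippet evaluates both the scaled speedup P - f(P - 1) and the fixed-size Amdahl speedup 1 / (f + (1 - f)/P) for a serial fraction of 5%:

public class ScaledSpeedup {
    public static void main(String[] args) {
        double f = 0.05;  // serial fraction
        for (int p : new int[] {8, 64, 1024}) {
            double scaled = p - f * (p - 1);          // Gustafson: f + P(1 - f)
            double fixed = 1.0 / (f + (1 - f) / p);   // Amdahl: fixed problem size
            System.out.printf("P=%4d  scaled=%7.1f  fixed=%6.2f%n", p, scaled, fixed);
        }
    }
}

With f = 0.05 the fixed-size speedup saturates near 1/f = 20, while the scaled speedup grows almost linearly in P (about 972.9 at P = 1024).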
Article
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Article
From the Book: At a time when the Internet has occupied the covers of both Business Week and Time and every daily newspaper speculates on numbers of users and billions of dollars in "opportunities," when the President and Vice President of the United States have their own electronic mail addresses, and when the Supreme Court makes its dicta available via anonymous ftp, it is appropriate to look at the origins and development of this wondrous entity.

At the end of 1969, the ARPANET, the first packet-switching computer network, consisted of four sites. At the end of 1994, there were nearly four million hosts. While there is much discussion as to just how many users each of these hosts represents, the range is from a (conservative) average of three to a (flamboyantly unrealistic) ten: that is, from 12 to 40 million users worldwide.

Many tens of thousands of networks make up the Internet, which is a network of networks. Many of these networks are not full participants in the Internet, meaning that there are many applications which they cannot employ. In Neuromancer, a 1984 science fiction novel, William Gibson used the term "the matrix" for his cyberspace. John S. Quarterman employed the term in his 1990 compendium, and it has since come into common usage. I use the Matrix here to refer to all computers capable of sending and receiving electronic mail. Though not even a part of the original ARPANET, mail is now the prime application for the Matrix user.

Max Beerbohm once criticized Quiller-Couch for writing "a veritable porcupine of quotations." I recognize that the same indictment could be handed down against me. And that some of my "quotations" are not so much quills as battering-rams. However, some of them are feathers (or perhaps down comforters). There is a general feeling that the inventors of technological wonders are deadly dull, that they have no interests outside their work, and that writings about technology are unreadable. And I admit that much of this is (selectively) true. So I have larded this history with lighter works: Len Kleinrock's and Vint Cerf's verse, as well as parodies by a number of others. And the final appendix contains Kleinrock's most recent verse and Cerf's future history in its entirety.

This book could not have been written without the active cooperation of many of the original participants. At the head of the list stand Vint Cerf, Bob Kahn, Alex McKenzie, Mike Padlipsky, Jon Postel, John Quarterman, and Dave Walden. They have tolerated my questions and supplied me with documents with humor and grace.
I am beholden to Marlyn Johnson of SRI and to a number of staff members of Bolt Beranek and Newman for locating and giving me access to documents I would never have otherwise read: Ivanna Abruzzese, Jennie Connolly, Lori McCarthy, Bob Menk, Aravinda Pillalamarri, and Terry Tollman.

The assistance of the following is gratefully acknowledged: Rick Adams, Jaap Akkerhuis, Eric Allman, Piet Beertema, Steve Bellovin, Bob Bishop, Roland Bryan, Peter Capek, David Clark, Lyman Chapin, Glyn Colinson, Peter Collinson, Sunil Das, Dan Dern, Harry Forsdick, Donalyn Frey, Simson Garfinkel, Michel Gien, John Gilmore, Teus Hagen, Mark Horton, Peter Houlder, Peter Kirstein, Len Kleinrock, Kirk McKusick, Bob Metcalfe, Mike Muuss, Mike O'Dell, Craig Partridge, Brian Redman, Brian Reid, Jim Reid, Larry Roberts, Keld Simonsen, Gene Spafford, Henry Spencer, Bob Taylor, Brad Templeton, Ray Tomlinson, Rebecca Wetzel, and Hubert Zimmermann.

Len Tower and Stuart McRobert have saved me from more gaucheries than I care to recall, as have the (anonymous) readers of the manuscript. Tom Stone and Kathleen Billus at Addison-Wesley have once again shepherded me successfully through the reefs from conception to production.

Much of the material in the Time-Lines is derived from that of John Quarterman and Smoot Carl-Mitchell, to whom I am grateful. As I have neither a dog nor a cat, I can only (as always) thank Dr. Mary W. Salus and almost-Dr. Emily W. Salus for their niggling and carping, which has improved all my work over the past 25 years.

P.H.S.
Boston
January 1995
Article
The Butler system is a set of programs running on Andrew workstations at CMU that give users access to idle workstations. Current Andrew users use the system over 300 times per day. This paper describes the implementation of the Butler system and tells of our experience in using it. In addition, it describes an application of the system known as gypsy servers, which allow network server programs to be run on idle workstations instead of using dedicated server machines.
Article
Powerful workstations interconnected by networks have become widely available as sources of computing cycles. Each workstation is typically owned by a single user in order to provide a high quality of service for the owner. In most cases, an owner does not have computing demands as large as the capacity of the workstation; therefore, most workstations are underutilized. Nevertheless, some users have demands that exceed the capacities of their workstations. In order to effectively share the capacity of workstations, there must be algorithms that allocate the capacity available during the long periods when owners do not use their stations. To understand the profile of station availability, we analyzed the usage patterns of a cluster of workstations. The workstations were available more than 75% of the time observed. Large capacities were steadily available on an hour-to-hour, day-to-day, and month-to-month basis. These capacities were available not only during the evening hours and on weekends, but during the busiest times of normal working hours. A stochastic model was developed, based on an analysis of the relative frequency distribution and the correlation of available and non-available interval lengths. A 3-stage hyperexponential cumulative distribution was fitted to the observed cumulative relative frequency of available periods; the fitted distribution closely matches the observed relative frequency distribution. This stochastic model is important as a workload generator for the performance evaluation of capacity-sharing strategies for a cluster of workstations. The model assists in the design of resource management algorithms that take advantage of the characteristics of the usage patterns.
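For illustration, a 3-stage hyperexponential CDF is a probability-weighted mixture of three exponentials, F(t) = p1(1 - e^(-l1*t)) + p2(1 - e^(-l2*t)) + p3(1 - e^(-l3*t)). The Java sketch below evaluates such a CDF; the branch probabilities and rates are placeholders, since the fitted values are not reproduced in this abstract.

// 3-stage hyperexponential CDF: F(t) = sum_i P[i] * (1 - exp(-LAMBDA[i] * t)).
public class HyperExp3 {
    // Placeholder parameters, not the study's fitted values.
    static final double[] P      = {0.6, 0.3, 0.1};                 // must sum to 1
    static final double[] LAMBDA = {1.0 / 5, 1.0 / 60, 1.0 / 600};  // rates (1/min)

    static double cdf(double t) {
        double f = 0;
        for (int i = 0; i < 3; i++) f += P[i] * (1 - Math.exp(-LAMBDA[i] * t));
        return f;
    }

    public static void main(String[] args) {
        // Probability that an availability period has ended by time t.
        for (double t : new double[] {1, 10, 60, 600})
            System.out.printf("F(%5.0f min) = %.3f%n", t, cdf(t));
    }
}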
Article
A continuing challenge to the scientific research and engineering communities is how to fully utilize computational hardware. In particular, the proliferation of clusters of high performance workstations has become an increasingly attractive source of compute power. Developments to take advantage of this environment have previously focused primarily on managing the resources, or on providing interfaces so that a number of machines can be used in parallel to solve large problems. Both approaches are desirable, and indeed should be complementary. Unfortunately, the resource management and parallel processing systems are usually developed by independent groups, and they usually do not work well together. To bridge this gap, we have developed a framework for interfacing these two sorts of systems. Using this framework, we have interfaced PVM, a popular system for parallel programming, with Condor, a powerful resource management system. This combined system is operational, and we have made further developments to provide a single coherent environment.
Article
We present a highly scalable approach to distributed parallel computing on workstations in the Internet which provides significant speed-up for molecular biology sequence analysis. Recent developments show that small numbers of workstations connected via a local area network can be used efficiently for parallel computing. This work emphasizes scalability with respect to the number of workstations employed. We show that a massively parallel approach using several hundred workstations, dispersed over all continents, can successfully be applied to solving problems with low requirements on communication bandwidth. We calculated the optimal local alignment scores between a single genetic sequence and all sequences of a genetic sequence database using the search code that is well known among molecular biologists. In a heterogeneous network with more than 800 workstations this job terminated after several minutes, in contrast to the several days it would have taken on a single machine.
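The abstract does not name the algorithm, but optimal local alignment scores are classically computed with the Smith-Waterman recurrence. The sketch below (with illustrative scoring parameters, not those of the study) shows the per-sequence scoring kernel that each workstation would run against its slice of the database.

// Minimal Smith-Waterman local alignment score with a linear gap penalty:
// H(i,j) = max(0, H(i-1,j-1)+s(a_i,b_j), H(i-1,j)+GAP, H(i,j-1)+GAP).
public class LocalAlign {
    static int score(String a, String b) {
        final int MATCH = 2, MISMATCH = -1, GAP = -2;  // illustrative parameters
        int best = 0;
        int[][] h = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int diag = h[i - 1][j - 1]
                        + (a.charAt(i - 1) == b.charAt(j - 1) ? MATCH : MISMATCH);
                h[i][j] = Math.max(0, Math.max(diag,
                        Math.max(h[i - 1][j] + GAP, h[i][j - 1] + GAP)));
                best = Math.max(best, h[i][j]);  // best local score anywhere
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Each worker scores the query against every sequence in its slice.
        System.out.println(score("ACACACTA", "AGCACACA"));  // optimal local score
    }
}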
Article
Desktop computers are idle much of the time. Ongoing trends make aggregate LAN "waste" (idle compute cycles) an increasingly attractive target for recycling. Piranha, a software implementation of adaptive parallelism, allows these waste cycles to be recaptured by putting them to work running parallel applications. Most parallel processing is static: programs execute on a fixed set of processors throughout a computation. Adaptive parallelism allows for dynamic processor sets, which means that the number of processors working on a computation may vary, depending on availability. With adaptive parallelism, instead of parceling out jobs to idle workstations, a single job is distributed over many workstations. Adaptive parallelism is potentially valuable on dedicated multiprocessors as well, particularly on massively parallel processors. One key Piranha advantage is that task descriptors, not processes, are the basic movable, remappable computation unit. The task descriptor approach supports strong heterogeneity: a process image representing a task in mid-computation cannot be moved to a machine of a different type, but a task descriptor can be. Thus, a task begun on a Sun computer can be completed by an IBM machine. The authors show that adaptive parallelism has the potential to integrate heterogeneous platforms seamlessly into a unified computing resource and to permit more efficient sharing of traditional parallel processors than is possible with current systems.
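The task-descriptor idea can be sketched as follows. This is a hypothetical Java illustration of the concept, not Piranha's actual Linda-based interface: the movable unit is a small serializable record of what to do and how far the work has progressed, so a partially finished task can leave a reclaimed workstation and resume on a machine of a different type.

import java.io.Serializable;
import java.util.function.BooleanSupplier;

// A small, architecture-neutral record of a task and its progress; unlike a
// process image, it can be serialized and shipped to any machine type.
public class TaskDescriptor implements Serializable {
    final long lo, hi;  // slice of work this task covers
    long next;          // resume point after a migration
    long acc;           // partial result travels with the descriptor

    TaskDescriptor(long lo, long hi) { this.lo = lo; this.hi = hi; this.next = lo; }

    // Work until finished or until the hosting workstation is reclaimed.
    boolean run(BooleanSupplier reclaimed) {
        while (next < hi) {
            if (reclaimed.getAsBoolean()) return false;  // ship descriptor elsewhere
            acc += next++;                               // stand-in for real work
        }
        return true;  // acc now holds the task's result
    }

    public static void main(String[] args) {
        TaskDescriptor t = new TaskDescriptor(0, 1_000_000);
        boolean done = t.run(() -> false);  // no owner reclaim in this demo
        System.out.println("done=" + done + " result=" + t.acc);
    }
}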
Article
This paper is a description of our implementation and our experiences using it. By "process migration" we mean the ability to move a process's execution site at any time from a source machine to a destination (or target) machine of the same architecture. In practice, process migration in Sprite usually occurs at two particular times. Most often, migration happens as part of the exec system call, when a resource-intensive program is about to be initiated. Exec-time migration is particularly convenient because the process's virtual memory is reinitialized by the exec system call and thus need not be transferred from the source to the target machine. The second common ...
Article
Parallel Virtual Machine (PVM) is a widely used software system that allows a heterogeneous set of parallel and serial UNIX-based computers to be programmed as a single distributed-memory parallel machine. In this paper, an extension to PVM to support dynamic process migration is presented. Support for migration is important in general-purpose workstation environments since it allows parallel computations to co-exist with other applications, using idle cycles as they become available and off-loading from workstations when they are no longer free. A description and evaluation of the design and implementation of the prototype Migratable PVM system is presented, together with some performance results. Introduction: PVM [1, 2, 3] is a software system that allows a heterogeneous network of parallel and serial computers to be programmed as a single computational resource. This resource appears to the application programmer as a potentially large distributed-memory virtual computer. Such a s...
Article
The importance of adapting networks of workstations for use as parallel processing platforms is well established. However, current solutions do not always address important issues that exist in real networks. External factors like the sharing of resources, unpredictable behavior of the network, and failures, are present in multiuser networks and must be addressed. CALYPSO is a prototype software system for writing and executing parallel programs on non-dedicated platforms, based on COTS networked workstations, operating systems, and compilers. Among notable properties of the system are: (1) simple programming paradigm incorporating shared memory constructs and separating the programming and the execution parallelism, (2) transparent utilization of unreliable shared resources by providing dynamic load balancing and fault tolerance, and (3) effective performance for large classes of coarse-grained computations. We present the system and report our initial experiments and performance re...
Article
In this paper, we argue that because of recent technology advances, networks of workstations (NOWs) are poised to become the primary computing infrastructure for science and engineering, from low-end interactive computing to demanding sequential and parallel applications. We identify three opportunities for NOWs that will benefit end users: dramatically improving virtual memory and file system performance by using the aggregate DRAM of a NOW as a giant cache for disk; achieving cheap, highly available, and scalable file storage by using redundant arrays of workstation disks, with the LAN as the I/O backplane; and finally, multiple CPUs for parallel computing. We describe the technical challenges in exploiting these opportunities, namely efficient communication hardware and software, global coordination of multiple workstation operating systems, and enterprise-scale network file systems. We are currently building a 100-node NOW prototype to demonstrate that practical solutions exist to these technical challenges. Keywords: Networks of Workstations, Communications, Parallel Computing, Message Passing, File Systems, Network Virtual Memory, Global Resource Management, Availability
Woltman: URL, George Woltman. Great Internet Mersenne Prime Search. Available at http://www.mersenne.org/.

Patterson: 1995, T. E. Anderson, D. E. Culler, D. A. Patterson, and the NOW Team. A Case for Networks of Workstations: NOW. IEEE Micro, Feb. 1995. URL: http://now.cs.berkeley.edu/Case/case.html.

Pearson: URL, K. Pearson. Internet-based Distributed Computing Projects. URL: http://www.nyx.net/~kpearson/distrib.html.

RSA Data Security: 97, RSA Laboratories Secret-Key Challenge. Available at http://www.rsa.com/rsalabs/97challenge.

Distributed.Net: Monarch, Project Monarch. Available at http://www.distributed.net/des.

Fields: 1993, S. Fields. Hunting for Wasted Computing Power: New Software for Computing Networks Puts Idle PC's to Work. Research Sampler, University of Wisconsin-Madison, 1993. URL: http://www.cs.wisc.edu/condor/doc/WiscIdea.html.

RSA Data Security: URL, RSA Factoring Challenge.

Salus: 1995, P. Salus. Casting the Net: From ARPANET to Internet and Beyond. Addison-Wesley, 1995.

Wulf: 1993, W. Wulf. The Collaboratory Opportunity. Science, Aug. 1993.

Distributed.Net: Bovine, Project Bovine. Available at http://www.distributed.net/rc5.