Conference Paper

Using generative design patterns to generate parallel code for a distributed memory environment.

DOI: 10.1145/781498.781532 Conference: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP 2003, June 11-13, 2003, San Diego, CA, USA
Source: DBLP

ABSTRACT: A design pattern is a mechanism for encapsulating the knowledge of experienced designers into a re-usable artifact. Parallel design patterns reflect commonly occurring parallel communication and synchronization structures. Our tools, CO2P3S (Correct Object-Oriented Pattern-based Parallel Programming System) and MetaCO2P3S, use generative design patterns. A programmer selects the parallel design patterns that are appropriate for an application, and then adapts the patterns for that specific application by selecting from a small set of code-configuration options. CO2P3S then generates a custom framework for the application that includes all of the structural code necessary for the application to run in parallel. The programmer is only required to write simple code that launches the application and to fill in some application-specific sequential hook routines. We use generative design patterns to transform an application specification (parallel design patterns + sequential user code) into parallel application code that achieves good performance in shared memory and distributed memory environments. Although our implementations are for Java, the approach we describe is tool and language independent. This paper describes generalizing CO2P3S to generate distributed-memory parallel solutions.
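The workflow the abstract describes (select a pattern, configure it, then fill in sequential hook routines inside a generated framework) might look roughly like the following Java sketch. All class and method names here are invented for illustration; they are not CO2P3S's actual generated API.

```java
// Hypothetical sketch: a tool-generated "framework" class supplies the
// structural code, and the programmer supplies only the sequential hook.
abstract class MeshFramework {
    protected final double[] cells;

    MeshFramework(double[] initial) { cells = initial.clone(); }

    // Application-specific sequential hook the programmer fills in.
    protected abstract double update(double left, double self, double right);

    // Generated structural code: one relaxation sweep over interior cells.
    // (A real framework would partition this loop across threads or processes.)
    void sweep() {
        double[] next = cells.clone();
        for (int i = 1; i < cells.length - 1; i++) {
            next[i] = update(cells[i - 1], cells[i], cells[i + 1]);
        }
        System.arraycopy(next, 0, cells, 0, cells.length);
    }

    double cell(int i) { return cells[i]; }
}

// The programmer's only code: the hook body plus a simple launcher.
class Averager extends MeshFramework {
    Averager(double[] initial) { super(initial); }

    @Override
    protected double update(double left, double self, double right) {
        return (left + self + right) / 3.0;
    }

    public static void main(String[] args) {
        Averager a = new Averager(new double[] {0.0, 0.0, 9.0, 0.0, 0.0});
        a.sweep();
        System.out.println(a.cell(2)); // (0 + 9 + 0) / 3 = 3.0
    }
}
```

The point of the sketch is the division of labour: everything in `MeshFramework` stands in for code the tool would generate, while `Averager` stands in for the small amount of application-specific code the abstract says the programmer must write.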

  • ABSTRACT: Naturally (embarrassingly) parallel computational experiments represent one of the most common uses of computing clusters in scientific environments. For these applications, scientists often manage parameter iterations and task execution using hand-written shell scripts or custom computational frameworks. As experiments are repeated, new tasks are added, and the numbers, types, and order of the parameters are changed, maintaining these scripts and frameworks quickly becomes a bottleneck for development. In this paper, we present a lightweight pattern for distributed computational experiments that separates parameters, experimental tasks, and job scheduling to support evolving projects without increasing the complexity of the runtime environment. Rather than introduce a comprehensive framework, we suggest a new approach to the problem that is simple to implement and maintain and can support multiple strategies for distribution. To illustrate this, we implement the pattern in both Python and C++ and discuss how each implementation is applicable to different types of problems.
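The separation this abstract advocates, with parameters, experimental tasks, and scheduling as independent pieces, can be sketched in a few lines of Java. The names are hypothetical (the paper's own implementations are in Python and C++):

```java
import java.util.ArrayList;
import java.util.List;

// The experimental task, defined independently of parameters and scheduling.
interface Task { double run(double param); }

class Experiment {
    // Parameter generation, kept separate from the task itself.
    static List<Double> parameters(double start, double stop, double step) {
        List<Double> ps = new ArrayList<>();
        for (double p = start; p < stop; p += step) ps.add(p);
        return ps;
    }

    // Scheduling strategy, kept separate: here a trivial sequential loop;
    // a cluster version could instead submit each task as a batch job.
    static List<Double> schedule(List<Double> params, Task task) {
        List<Double> results = new ArrayList<>();
        for (double p : params) results.add(task.run(p));
        return results;
    }

    public static void main(String[] args) {
        Task square = p -> p * p;   // the experimental computation
        System.out.println(schedule(parameters(0.0, 3.0, 1.0), square));
        // prints [0.0, 1.0, 4.0]
    }
}
```

Because the three pieces only meet in `schedule`, swapping the distribution strategy (or the parameter sweep) does not touch the task code, which is the maintainability argument the abstract makes.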
  • ABSTRACT: It is generally the case that corporate information systems consist of heterogeneous software subsystems that interact using many different processes and protocols. Applications that execute within such subsystems tend to be designed in isolation with little or no thought given to the requirements for future interaction. To provide bridges between these heterogeneous subsystems, one-off "hacked" solutions are usually introduced; these rely upon maintenance of the status quo for all aspects of the execution environment and are thus inherently "brittle". Such a situation is inappropriate for large-scale and highly decentralised system deployments. To make such systems more robust and give them scalable performance characteristics, it is preferable to construct them with the ability to react to changes in the environment in which they operate. This research seeks to provide a method for engendering "agility" in system components to improve their ability to deal with unpredictable environments. Our approach is to view systems and components from an interactive perspective and provide a middleware mechanism that enables a "variable" degree of coupling between system components. To achieve this we introduce three high-level "dimensions" of coupling, namely mediation, adaptation and crystallisation. Each dimension is characterised by the location of behaviour required for interaction and by patterns of behaviour movement. The coordination characteristics of these dimensions of coupling are specified to establish a separation of coordination and application functionalities in endogenous distributed systems. The outcomes of this research project are: a definition of the dimensions of coupling that have been identified, a protocol to perform transitions between dimensions, and a preliminary framework for the development of more agile applications.
  • ABSTRACT: In this paper we deal with building parallel programs based on sequential application code and generic components providing specific functionality for parallelization, such as load balancing or fault tolerance. We describe an architectural approach employing aspect-oriented programming to assemble arbitrary object-oriented components. Several non-trivial crosscutting concerns arising from parallelization are addressed in the light of different applications, which are representative of the most common types of parallelism. In particular, we demonstrate how aspect-oriented techniques allow us to leave all existing code untouched. We evaluate and compare our approach with its counterparts in conventional object-oriented programming. Copyright © 2008 John Wiley & Sons, Ltd.
    Software: Practice and Experience, 39:807-832, 2009.
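The kind of non-invasive interception this abstract attributes to aspect-oriented programming can be approximated in plain Java with a dynamic proxy. This is a decorator-style analogue, not the paper's actual approach, and all names here are illustrative:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Worker { int compute(int n); }

// Existing sequential code, which stays completely untouched.
class SequentialWorker implements Worker {
    public int compute(int n) { return n * n; }
}

// A crosscutting concern (here, timing) woven in around the original call.
class TimingAspect implements InvocationHandler {
    private final Object target;
    TimingAspect(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        long t0 = System.nanoTime();            // before-advice: start timer
        Object result = m.invoke(target, args); // original behaviour, unmodified
        System.out.println(m.getName() + " took " + (System.nanoTime() - t0) + " ns");
        return result;
    }

    static Worker wrap(Worker w) {
        return (Worker) Proxy.newProxyInstance(
            Worker.class.getClassLoader(),
            new Class<?>[] { Worker.class },
            new TimingAspect(w));
    }

    public static void main(String[] args) {
        Worker w = TimingAspect.wrap(new SequentialWorker());
        System.out.println(w.compute(7)); // timing line, then 49
    }
}
```

A real aspect-oriented system such as AspectJ would weave this advice in at compile or load time without even the `wrap` call, which is what lets parallelization concerns like load balancing be added while leaving all existing code untouched.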
