Conference Paper

Cooperative Task Management Without Manual Stack Management

Conference: Proceedings of the General Track: 2002 USENIX Annual Technical Conference, June 10-15, 2002, Monterey, California, USA
Source: DBLP

ABSTRACT: Cooperative task management can provide program architects with ease of reasoning about concurrency issues. This property is often espoused by those who recommend "event-driven" programming over "multithreaded" programming. Those terms conflate several issues. In this paper, we clarify the issues, and show how one can get the best of both worlds: reason more simply about concurrency in the way "event-driven" advocates recommend, while preserving the readability and maintainability of code associated with "multithreaded" programming. We identify the source of confusion about the two programming styles as a conflation of two concepts: task management and stack management. Those two concerns define a two-axis space in which "multithreaded" and "event-driven" programming are diagonally opposite; there is a third "sweet spot" in the space that combines the advantages of both programming styles. We point out pitfalls in both alternative forms of stack management, manual and automatic, and we supply techniques that mitigate the danger in the automatic case. Finally, we exhibit adaptors that enable automatic stack management code and manual stack management code to interoperate in the same code base.
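The distinction the abstract draws between the two stack-management styles can be illustrated with a minimal sketch. All names here are invented for illustration and are not from the paper; the toy "I/O" routines just hand back a canned payload so the example runs:

```python
# Manual stack management (event-driven style): the task is ripped into
# callbacks, and state that would have lived on the stack is packed by hand
# into closures or context objects.
def start_request(conn, on_done):
    def after_read(data):               # continuation for the read
        reply = data.upper()            # "stack" state rebuilt from arguments
        on_done(reply)
    fake_async_read(conn, after_read)   # returns immediately; callback fires later

# Automatic stack management (threaded/coroutine style): the same logic reads
# top to bottom; the stack is preserved across the blocking point for us.
def handle_request(conn):
    data = fake_blocking_read(conn)     # task suspends here, stack intact
    return data.upper()

# Toy stand-ins for real I/O so the sketch is self-contained.
def fake_async_read(conn, callback):
    callback(conn["payload"])

def fake_blocking_read(conn):
    return conn["payload"]
```

Both versions compute the same reply; the difference is purely in who maintains the task's state across the blocking point, which is the axis the paper separates from task management.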

  • ABSTRACT: The potential for unexpected interference between threads makes multithreaded programming notoriously difficult. Programmers use a variety of synchronization idioms such as locks and barriers to restrict where interference may actually occur. Unfortunately, the resulting actual interference points are typically never documented and must be manually reconstructed as the first step in any subsequent programming task (code review, refactoring, etc). This paper proposes explicitly documenting actual interference points in the program source code, and it presents a type and effect system for verifying the correctness of these interference specifications. Experimental results on a variety of Java benchmarks show that this approach provides a significant improvement over prior systems based on method-level atomicity specifications. In particular, it reduces the number of interference points one must consider from several hundred points per thousand lines of code to roughly 13 per thousand lines of code. Explicit interference points also serve to highlight all known concurrency defects in these benchmarks.
  • ABSTRACT: The staged event-driven architecture (SEDA) can be seen as a milestone as regards integration of threads and events in a single model. By decomposing applications into sets of multi-threaded stages connected by event queues, SEDA allows for the use of each concurrency model where most appropriate. Inside each SEDA stage, the number and scheduling policy of threads can be adjusted to enhance performance. SEDA lends itself to parallelization on multi-cores and is well suited for many high-volume data stream processing systems and highly concurrent event processing systems. In this paper, we propose an extension to the staged model that decouples application design from specific execution environments, encouraging a stepwise approach for designing concurrent applications, similar to Foster's PCAM methodology. We also present Leda, a platform that implements this extended model. In Leda, stages are defined purely by their role in application logic, with no concern for locality of execution, and are bound together through asynchronous communication channels, called connectors, to form a directed graph representing the flow of events inside the application. Decisions about the configuration of the application at execution time are delayed to later phases of the implementation process. Stages in the application graph can then be grouped to form clusters, and each cluster is mapped to an exclusive OS process, running on an arbitrary host. Finally, we discuss two example applications which we developed to evaluate the Leda platform.
    The Journal of Supercomputing 01/2014; DOI:10.1007/s11227-014-1110-4 · 0.84 Impact Factor
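The core SEDA building block described above, an event queue drained by an independently sized thread pool, can be sketched minimally. The helper below is invented for illustration and is not Leda's API; stages would compose by having one stage's handler enqueue into the next stage's queue:

```python
import queue
import threading

def make_stage(handler, num_threads=2):
    """A toy SEDA-style stage: a queue of events plus its own thread pool."""
    q = queue.Queue()

    def worker():
        while True:
            event = q.get()
            if event is None:        # shutdown sentinel
                break
            handler(event)           # stage-local application logic
            q.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    return q, threads
```

The point of the structure is the one the abstract makes: the number of worker threads (and their scheduling) is a per-stage tuning knob, independent of the application logic in `handler`.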
  • ABSTRACT: This paper presents a language-based technique to unify two seemingly opposite programming models for building massively concurrent network services: the event-driven model and the multithreaded model. The result is a unified concurrency model providing both thread abstractions and event abstractions. Using this model, each component in an application can be implemented using the appropriate abstraction, simplifying the design of complex, multithreaded systems software. This paper shows how to implement the unified concurrency model in Haskell, a pure, lazy, functional programming language. It also demonstrates how to use these techniques to build an application-level thread library with support for multiprocessing and asynchronous I/O mechanisms in Linux. The thread library is type-safe, is relatively simple to implement, and has good performance. Application-level threads are extremely lightweight (scaling to ten million threads) and our scheduler, which is implemented as a modular and extensible event-driven system, outperforms NPTL in I/O benchmarks.
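The unified model's central trick, lightweight threads written as straight-line code but multiplexed by an event-driven scheduler, can be sketched in a few lines. This is an illustrative analogue only: Python generators stand in here for the paper's Haskell thread monad, and the names are invented:

```python
from collections import deque

def scheduler(tasks):
    """A toy event-loop scheduler round-robining cooperative threads."""
    ready = deque(tasks)          # run queue of lightweight threads
    while ready:
        task = ready.popleft()
        try:
            next(task)            # run the thread to its next yield point
            ready.append(task)    # cooperative: reschedule at the tail
        except StopIteration:
            pass                  # thread finished

def worker(name, steps, log):
    """Looks like ordinary sequential code, but each yield bounces
    control back to the event loop."""
    for i in range(steps):
        log.append((name, i))
        yield
```

Because each thread is just a suspended generator rather than an OS thread, the per-thread cost is tiny, which is the property that lets the paper's library scale to millions of application-level threads.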