Conference Paper

POET: Parameterized Optimizations for Empirical Tuning.

DOI: 10.1109/IPDPS.2007.370637 Conference: 21st International Parallel and Distributed Processing Symposium (IPDPS 2007), Proceedings, 26-30 March 2007, Long Beach, California, USA
Source: DBLP

ABSTRACT The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than the time required by a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
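The paper parameterizes loop interchange, blocking, and unrolling rather than hard-wiring them into a kernel. As a rough illustration only (plain C, not POET syntax), the sketch below shows a matrix-multiply kernel in which a blocking factor is exposed as a macro that an empirical search driver could sweep, with the innermost loop unrolled by a fixed factor of 4; the kernel shape, the macro name BK, and the unroll factor are assumptions made for this example, not details taken from the paper.

    /* Illustrative only: a C sketch of the kind of transformed kernel an
     * empirical tuner would time repeatedly.  BK (blocking factor) is a
     * hypothetical tuning parameter; the j loop is unrolled by 4, a factor a
     * generator such as POET would itself vary rather than hard-code. */
    #include <stddef.h>

    #ifndef BK
    #define BK 64                      /* cache-blocking factor for the k loop */
    #endif

    /* C += A * B for n x n row-major matrices. */
    void dgemm_tuned(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t kk = 0; kk < n; kk += BK)          /* blocked k loop       */
            for (size_t i = 0; i < n; ++i)             /* interchanged i/k     */
                for (size_t k = kk; k < kk + BK && k < n; ++k) {
                    double a = A[i * n + k];
                    size_t j = 0;
                    for (; j + 4 <= n; j += 4) {       /* j loop unrolled by 4 */
                        C[i * n + j]     += a * B[k * n + j];
                        C[i * n + j + 1] += a * B[k * n + j + 1];
                        C[i * n + j + 2] += a * B[k * n + j + 2];
                        C[i * n + j + 3] += a * B[k * n + j + 3];
                    }
                    for (; j < n; ++j)                 /* remainder            */
                        C[i * n + j] += a * B[k * n + j];
                }
    }

An empirical tuner would compile this kernel for several values of BK (and, via regeneration, several unroll factors and loop orders), run each variant, and keep the fastest one for the target machine.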

  • ABSTRACT: GeCoS is an open-source framework that provides a highly productive environment for hardware design. GeCoS primarily targets custom hardware design using High Level Synthesis, distinguishing itself from classical compiler infrastructures. Compiling for custom hardware relies on domain-specific semantics that general-purpose compilers do not consider. The goal is to find the right balance among performance criteria such as area, speed, and accuracy, in contrast to the high-performance-computing setting, where the typical goal is to maximize speed. The GeCoS infrastructure facilitates the prototyping of hardware design flows, going beyond compiler analyses and transformations. Hardware designers must interact with the compiler for design space exploration, so it is important to be able to give instant feedback to users.
    2013 IEEE 13th International Working Conference on Source Code Analysis and Manipulation (SCAM); 01/2013
  • Petascale Computing: Algorithms and Applications, 01/2007: pages 443-462; Chapman & Hall / CRC Press, Taylor and Francis Group.
  • ABSTRACT: Scientific libraries are written in a general way in anticipation of a variety of use cases, which reduces optimization opportunities. Significant performance gains can be achieved by specializing library code to its execution context: the application in which it is invoked, the input data set used, the architectural platform, and its backend compiler. Such specialization is not typically done because it is time consuming, leads to non-portable code, and requires performance-tuning expertise that application scientists may not have. Tool support for library specialization in this context could reduce the extensive understanding required while significantly improving performance, code reuse, and portability. In this work, we study the performance gains achieved by specializing the single-processor sparse linear algebra functions in PETSc (Portable, Extensible Toolkit for Scientific Computation) in the context of three scalable scientific applications on the Hopper Cray XE6 supercomputer at NERSC. We use CHiLL (Composable High-Level Loop Transformation Framework) to apply source-level transformations tailored to the special needs of sparse computations and automatically generate highly optimized PETSc functions. We demonstrate significant performance improvements of more than 1.8X on the library functions and overall gains of 9 to 24% on three scalable applications that use PETSc's sparse matrix capabilities. (An illustrative generic-versus-specialized sparse kernel sketch follows this list.)
    2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW); 01/2012
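To make the specialization idea in the PETSc abstract above concrete, here is a rough C sketch contrasting a generic sparse matrix-vector product with a context-specialized variant. The CSR storage layout, the fixed count of exactly 5 nonzeros per row (as in a 2-D 5-point stencil), and the function names are assumptions invented for this illustration; this is not PETSc's or CHiLL's actual generated code.

    /* Generic CSR sparse matrix-vector product y = A*x, as a library
     * would ship it: row lengths are read from rowptr at run time. */
    #include <stddef.h>

    void spmv_csr(size_t nrows, const size_t *rowptr, const size_t *colind,
                  const double *val, const double *x, double *y)
    {
        for (size_t i = 0; i < nrows; ++i) {
            double sum = 0.0;
            for (size_t k = rowptr[i]; k < rowptr[i + 1]; ++k)
                sum += val[k] * x[colind[k]];
            y[i] = sum;
        }
    }

    /* Hypothetical specialized variant for a data set known to have exactly
     * 5 nonzeros per row: the inner loop is fully unrolled and the rowptr
     * lookups disappear. */
    void spmv_csr_5nz(size_t nrows, const size_t *colind,
                      const double *val, const double *x, double *y)
    {
        for (size_t i = 0; i < nrows; ++i) {
            const size_t *c = colind + 5 * i;
            const double *v = val    + 5 * i;
            y[i] = v[0] * x[c[0]] + v[1] * x[c[1]] + v[2] * x[c[2]]
                 + v[3] * x[c[3]] + v[4] * x[c[4]];
        }
    }

The specialized form trades generality for a fixed trip count, removed indirection through rowptr, and better scope for compiler vectorization, which is the kind of context-dependent gain the cited work measures.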
