Conference Paper

Automatic Transformations for Communication-Minimized Parallelization and Locality Optimization in the Polyhedral Model.

DOI: 10.1007/978-3-540-78791-4_9
Conference: Compiler Construction, 17th International Conference, CC 2008, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2008, Budapest, Hungary, March 29 - April 6, 2008. Proceedings
Source: DBLP

ABSTRACT The polyhedral model provides powerful abstractions to optimize loop nests with regular accesses. Affine transformations in this model capture a complex sequence of execution-reordering loop transformations that can improve performance by parallelization as well as locality enhancement. Although a significant body of research has addressed affine scheduling and partitioning, the problem of automatically finding good affine transforms for communication-optimized coarse-grained parallelization together with locality optimization for the general case of arbitrarily-nested loop sequences remains a challenging problem. We propose an automatic transformation framework to optimize arbitrarily-nested loop sequences with affine dependences for parallelism and locality simultaneously. The approach finds good tiling hyperplanes by embedding a powerful and versatile cost function into an Integer Linear Programming formulation. These tiling hyperplanes are used for communication-minimized coarse-grained parallelization as well as for locality optimization. The approach enables the minimization of inter-tile communication volume in the processor space, and minimization of reuse distances for local execution at each node. Programs requiring one-dimensional versus multi-dimensional time schedules (with scheduling-based approaches) are all handled with the same algorithm. Synchronization-free parallelism, permutable loops or pipelined parallelism at various levels can be detected. Preliminary studies of the framework show promising results.
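To make the abstract concrete, here is a minimal hand-worked sketch in C (not output of the paper's tool) of the kind of transformation the framework automates. For a 1-D stencil with dependences (1,-1), (1,0), (1,1) in (t,i), the tiling hyperplanes (1,0) and (1,1) have a non-negative dot product with every dependence, so the skewed space (c1,c2) = (t, t+i) can be tiled rectangularly; the tile size S and the helper names are illustrative assumptions.

    #include <stdio.h>

    #define T 64   /* time steps */
    #define N 256  /* space points */
    #define S 16   /* tile size along both hyperplanes (illustrative) */

    static double A[T + 1][N];

    static int imin(int a, int b) { return a < b ? a : b; }
    static int imax(int a, int b) { return a > b ? a : b; }

    int main(void) {
        for (int i = 0; i < N; i++) A[0][i] = (double)i;

        /* Original nest:
         *   for (t = 0; t < T; t++)
         *     for (i = 1; i < N-1; i++)
         *       A[t+1][i] = (A[t][i-1] + A[t][i] + A[t][i+1]) / 3.0;
         * Below, the same iterations run in skewed, tiled order. */
        for (int T1 = 0; T1 < T; T1 += S)
            for (int T2 = T1 + 1; T2 < T1 + S + N - 1; T2 += S)
                for (int c1 = T1; c1 < imin(T1 + S, T); c1++)
                    for (int c2 = imax(T2, c1 + 1);
                         c2 < imin(T2 + S, c1 + N - 1); c2++) {
                        int t = c1, i = c2 - c1;
                        A[t+1][i] = (A[t][i-1] + A[t][i] + A[t][i+1]) / 3.0;
                    }
        /* Tiles with equal T1 + T2 are mutually independent and can run
         * concurrently (pipelined parallelism); inter-tile communication is
         * confined to tile boundaries, which is what the ILP cost function
         * aims to minimize. */

        printf("A[%d][%d] = %f\n", T, N / 2, A[T][N / 2]);
        return 0;
    }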

Cited by

  • ABSTRACT: High level synthesis (HLS) is an important enabling technology for the adoption of hardware accelerator technologies. It promises the performance and energy efficiency of hardware designs with a lower barrier to entry in design expertise and a shorter design time. State-of-the-art high level synthesis now includes a wide variety of powerful optimizations that produce efficient hardware. These optimizations can implement some of the most important features generally performed in manual designs, including parallel hardware units, pipelining of execution both within a hardware unit and between units, and fine-grained data communication. We may generally classify the optimizations as those that optimize hardware implementation within a code block (intra-block) and those that optimize communication and pipelining between code blocks (inter-block). However, both optimizations are in practice difficult to apply. Real-world applications contain data-dependent blocks of code and communicate through complex data access patterns. Existing high level synthesis tools cannot apply these powerful optimizations unless the code is inherently compatible, severely limiting the optimization opportunity. In this paper we present an integrated framework to model and enable both intra- and inter-block optimizations. This integrated technique substantially improves the opportunity to use the powerful HLS optimizations that implement parallelism, pipelining, and fine-grained communication. Our polyhedral model-based technique systematically defines a set of data access patterns, identifies effective data access patterns, and performs the loop transformations to enable the intra- and inter-block optimizations. Our framework automatically explores transformation options, performs code transformations, and inserts the appropriate HLS directives to implement the HLS optimizations. Furthermore, our framework can automatically generate the optimized communication blocks for fine-grained communication between hardware blocks. Experimental evaluation demonstrates that we can achieve an average of 6.04X speedup over the high level synthesis solution without our transformations to enable intra- and inter-block optimizations. (A sketch of the kind of HLS directives involved follows this entry.)
    Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays; 01/2013
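Since the entry above centers on inserting HLS directives, a minimal sketch may help. It assumes Xilinx Vivado HLS-style pragmas (PIPELINE, DATAFLOW) and hypothetical produce/consume/top functions; the paper's actual framework, directives, and generated communication blocks may differ.

    #define N 1024

    /* Intra-block optimization: pipeline each loop so one iteration starts
       per clock cycle (initiation interval II=1). */
    static void produce(const int in[N], int mid[N]) {
        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1
            mid[i] = in[i] * 3;
        }
    }

    static void consume(const int mid[N], int out[N]) {
        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1
            out[i] = mid[i] + 1;
        }
    }

    /* Inter-block optimization: DATAFLOW overlaps produce() and consume(),
       with mid[] implemented as on-chip ping-pong/FIFO storage, giving
       fine-grained communication between the two hardware blocks. */
    void top(const int in[N], int out[N]) {
    #pragma HLS DATAFLOW
        int mid[N];
        produce(in, mid);
        consume(mid, out);
    }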
  • ABSTRACT: In this paper we suggest a new approach to the hyperplane problem, also known as “wavefront” computation. In direct contrast to most approaches, which reduce the problem to integer programming or rely on heuristics, we gather information at compile time and delegate the solution to run time. We present an adaptive technique that calculates which new threads can execute in the next computation cycle based on which threads execute in the current one. Moving the solution to the run-time environment provides greater versatility: the underlying hyperplane pattern is discovered exactly, without any prior calculation. The main contribution of this paper is a self-adaptive algorithm that does not need to know the tile size (which controls the granularity of parallelism) beforehand; instead, it adapts the tile size while the program is running to achieve optimal efficiency. Experimental results show that, given enough parallel processing elements to diffuse the scheduler’s workload, its overhead becomes low enough to be overshadowed by the net gain in parallelism. The algorithm is implemented, and our experiments performed, with our parallelizing compiler C2μTC/SL, a C parallelizing compiler that maps sequential programs onto the SVP processor and model. (A sketch of run-time wavefront scheduling follows this entry.)
    Parallel Computing 10/2013; 39(10):603–614.
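The run-time wavefront idea can be illustrated with a dependence-counter scheduler. This is a generic sketch, not the C2μTC/SL implementation, and it omits the adaptive tile-size mechanism: a cell (i,j) of an N×N grid becomes ready once (i-1,j) and (i,j-1) complete, so the ready set at any moment is exactly the current anti-diagonal, discovered without any compile-time calculation.

    #include <stdio.h>

    #define N 8

    int main(void) {
        int pending[N][N];                 /* unmet predecessor count per cell */
        int ready[N * N], head = 0, tail = 0;

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                pending[i][j] = (i > 0) + (j > 0);
        ready[tail++] = 0;                 /* cell (0,0) has no predecessors */

        while (head < tail) {
            int cell = ready[head++], i = cell / N, j = cell % N;
            /* "Execute" the cell; a parallel runtime would hand every cell
               currently in the queue (one wavefront) to a separate thread. */
            printf("run (%d,%d) on wavefront %d\n", i, j, i + j);
            /* Completing (i,j) may release its two successors. */
            if (i + 1 < N && --pending[i + 1][j] == 0)
                ready[tail++] = (i + 1) * N + j;
            if (j + 1 < N && --pending[i][j + 1] == 0)
                ready[tail++] = i * N + (j + 1);
        }
        return 0;
    }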
