Henk Corporaal

Technische Universiteit Eindhoven, Eindhoven, North Brabant, Netherlands

Publications (286) · 38.99 Total Impact

  • ABSTRACT: This paper proposes a modeling approach to capture the mapping of an application on a platform. The approach is based on Scenario-Aware Dataflow (SADF) models. In contrast to related work, we express the complete design space in a single formal SADF model. This gives us a compact and explorable state space linked with an executable model capable of symbolically analyzing different mappings for their timing behavior. We can model different bindings for application tasks and different static-order schedules for tasks bound to shared resources, and naturally capture resource claiming/unclaiming using SADF semantics. Moreover, by using the inherent properties of dataflow graphs and the dynamic behavior of a Finite-State Machine, we can model different levels of pipelining, such as full application pipelining and interleaved pipelining of consecutive executions of the application. The size of the model is independent of the number of executions of the application. Since we capture all this behavior in a single SADF model, we can use available dataflow analyses, such as worst-case and best-case throughput and deadlock-freedom checking. Furthermore, since the model captures the design space independently of the analysis technique, different exploration approaches can be used to analyze different sets of requirements.
    13th ACM-IEEE International Conference on Formal Methods and Models for System Design; 09/2015
  • ABSTRACT: Cyber-physical systems (CPS) play an important role in the modern high-tech industry. Designing such systems is a challenging task due to their multi-disciplinary nature and the range of abstraction levels involved. To facilitate hands-on experience with such systems, we developed a cyber-physical platform that aids research and education on CPS. This paper describes this platform, which contains all typical CPS components. The platform is used in various research and education projects for bachelor, master, and PhD students. We discuss the platform, a number of projects, and the educational opportunities they provide.
    WESE Workshop 2015; 09/2015
  • Source
    Siham Tabik · Maurice Peemen · Nicolas Guil · Henk Corporaal
    ABSTRACT: Stencil computation is of paramount importance in many fields, including image processing, structural biology, and biomedicine. There is a constant demand for maximizing the performance of stencils on state-of-the-art architectures, such as graphics processing units (GPUs). One of the important issues when optimizing these kernels for the GPU is the selection of the thread-block configuration that maximizes overall performance. Usually, programmers search for the optimal configuration in a reduced space of square thread-block configurations, or simply use the best configuration reported in previous work, typically 16 × 16. This paper provides a better understanding of the impact of thread-block configurations on the performance of stencils on the GPU. In particular, we model locality and parallelism and consider that the optimal configurations lie within the space that provides: (1) a small number of global memory transactions; (2) good shared memory utilization with few conflicts; (3) good streaming multiprocessor utilization; and (4) high efficiency of the threads within a thread-block. The model determines the set of optimal thread-block configurations without executing the code. We validate the proposed model using six stencils with different halo widths and show that it reduces the optimization space to around 25% of the total valid space. The configurations in this space achieve at least 75% of the throughput of the best configuration, which is guaranteed to be included. Copyright © 2015 John Wiley & Sons, Ltd.
    Concurrency and Computation Practice and Experience 08/2015; DOI:10.1002/cpe.3591 · 1.00 Impact Factor
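The four model criteria above can be sketched as a simple pruning filter over candidate thread-block shapes. This is an illustrative assumption of how such a filter might look, not the paper's actual model: the function name, the occupancy threshold, and the 0.5 efficiency cut-off are all hypothetical choices.

```python
# Hypothetical pruning of the 2D thread-block search space for a stencil
# kernel. Thresholds are illustrative, not taken from the paper.

WARP_SIZE = 32
MAX_THREADS_PER_BLOCK = 1024

def candidate_blocks(halo, min_occupancy_threads=128):
    """Enumerate (bx, by) configurations that pass simple
    locality/parallelism filters for a stencil with the given halo."""
    configs = []
    for bx in (8, 16, 32, 64, 128):
        for by in range(1, 33):
            threads = bx * by
            if threads > MAX_THREADS_PER_BLOCK:
                continue
            # (4) thread efficiency: interior points vs. loaded tile
            # (the tile includes a halo of width `halo` on each side)
            efficiency = threads / ((bx + 2 * halo) * (by + 2 * halo))
            # (1) coalescing proxy: block rows should span full warps
            coalesced = bx >= WARP_SIZE
            # (3) SM utilization proxy: enough threads to hide latency
            enough_parallelism = threads >= min_occupancy_threads
            if coalesced and enough_parallelism and efficiency > 0.5:
                configs.append((bx, by))
    return configs
```

For a halo of 1, a wide configuration like (32, 8) survives the filter, while narrow or tiny blocks such as (8, 1) are pruned, which mirrors the paper's observation that the usable space is a small fraction of all valid configurations.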
  • Source
    Mathias Funk · Piet van der Putten · Henk Corporaal
    ABSTRACT: Nowadays, interactive electronic products offer a wealth of functionality to prospective customers, but this functionality is often too large and complex to be grasped and used successfully. In that case, customers avoid the struggle and return the products to the shop. Moreover, the variability in scope and features of a product is so large that an up-front specification becomes hard, if not impossible. To avoid an inadequate match between customer expectations and designer assumptions, new sources of product usage information have to be developed. One possibility is to integrate observation functionality into the products themselves, continuously involving real users in the product development process. The integration of such functionality is an often overlooked challenge that should be tackled with an appropriate engineering methodology. This paper presents ongoing work on a novel design-for-observation approach that supports early observation integration and enables cooperation with various information stakeholders. We show how observation can be embedded seamlessly in a model-driven development process using UML. An industrial case study shows the feasibility of the approach.
  • Dongrui She · Yifan He · Luc Waeijen · Henk Corporaal
    ABSTRACT: Energy efficiency is one of the most important metrics in embedded processor design. The use of wide SIMD architecture is a promising approach to build energy-efficient high performance embedded processors. In this paper, we propose a design framework for a configurable wide SIMD architecture that utilizes an explicit datapath to achieve high energy efficiency. The framework is able to generate processor instances based on architecture specification files. It includes a compiler to efficiently program the proposed architecture with standard programming languages including OpenCL. This compiler can analyze the static memory access patterns in OpenCL kernels, generate efficient mappings, and schedule the code to fully utilize the explicit datapath. Extensive experimental results show that the proposed architecture is efficient and scalable in terms of area, performance, and energy. In a 128-PE SIMD processor, the proposed architecture is able to achieve up to 200 times speed-up and reduce the total energy consumption by 50 % compared to a basic RISC processor.
    Journal of Signal Processing Systems 07/2015; 80(1). DOI:10.1007/s11265-014-0957-1 · 0.60 Impact Factor
  • Luc Waeijen · Dongrui She · Henk Corporaal · Yifan He
    ABSTRACT: Energy efficiency has become one of the most important topics in computing. To meet the ever increasing demands of the mobile market, the next generation of processors will have to deliver a high compute performance at an extremely limited energy budget. Wide single instruction, multiple data (SIMD) architectures provide a promising solution, as they have the potential to achieve high compute performance at a low energy cost. We propose a configurable wide SIMD architecture that utilizes explicit datapath techniques to further optimize energy efficiency without sacrificing computational performance. To demonstrate the efficiency of the proposed architecture, multiple instantiations of the proposed wide SIMD architecture and its automatic bypassing counterpart, as well as a baseline RISC processor, are implemented. Extensive experimental results show that the proposed architecture is efficient and scalable in terms of area, performance, and energy. In a 128-PE SIMD processor, the proposed architecture is able to achieve an average of 206 times speed-up and reduces the total energy dissipation by 48.3 % on average and up to 94 %, compared to a reduced instruction set computing (RISC) processor. Compared to the corresponding SIMD architecture with automatic bypassing, an average of 64 % of all register file accesses is avoided by the 128-PE, explicitly bypassed SIMD. For total energy dissipation, an average of 27.5 %, and maximum of 43.0 %, reduction is achieved.
    Journal of Signal Processing Systems 07/2015; 80(1). DOI:10.1007/s11265-014-0950-8 · 0.60 Impact Factor
  • Source
    Roel Jordans · Henk Corporaal
    ABSTRACT: Software-pipelining is an important technique for increasing the instruction level parallelism of loops during compilation. Currently, the LLVM compiler infrastructure does not offer this optimization although some target specific implementations do exist. We have implemented a high-level method for software-pipelining within the LLVM framework. By implementing this within LLVM's optimization layer we have taken the first steps towards a target independent software-pipelining method.
    18th International Workshop on Software and Compilers for Embedded Systems, Schloss Rheinfels, St. Goar, Germany; 06/2015
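As an illustration of the transformation this optimization performs (a sketch of the general technique, not of the LLVM implementation described above), a loop whose body has three dependent stages can be split into a prologue, a steady-state kernel, and an epilogue, so that three iterations are in flight at once and their stages can issue in parallel:

```python
# Toy software-pipelining example: the loop body load -> compute -> store
# is overlapped across iterations. In the kernel, each "cycle" executes
# the store of iteration i, the compute of i+1, and the load of i+2.

def load(a, i):       return a[i]
def compute(x):       return x * 2
def store(out, i, x): out[i] = x

def pipelined(a):
    """Doubles every element; assumes len(a) >= 2 for simplicity."""
    n = len(a)
    out = [0] * n
    # Prologue: fill the pipeline without storing yet.
    y0 = compute(load(a, 0))
    x1 = load(a, 1)
    # Kernel: three iterations in flight per step.
    for i in range(n - 2):
        store(out, i, y0)
        y0 = compute(x1)
        x1 = load(a, i + 2)
    # Epilogue: drain the remaining in-flight iterations.
    store(out, n - 2, y0)
    store(out, n - 1, compute(x1))
    return out
```

On hardware, the three operations inside the kernel have no mutual dependences within one step, which is exactly the instruction-level parallelism software pipelining exposes.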
  • Ang Li · Akash Kumar · Yajun Ha · Henk Corporaal
    ABSTRACT: Volume image registration remains one of the best candidates for Graphics Processing Unit (GPU) acceleration because of its enormous computation time and plentiful data-level parallelism. However, an efficient GPU implementation of image registration is still challenging due to the heavy use of expensive atomic operations in similarity calculations. In this paper, we first propose five GPU-friendly Correlation Ratio (CR) based methods to accelerate image registration. Compared to the widely used Mutual Information (MI) based methods, the CR-based approaches require fewer resources for shadow histograms, so faster storage, such as the on-chip scratchpad memory, can be fully exploited to achieve better performance. Second, we perform a design space exploration of the CR-based methods and study the trade-offs of placing shadow histograms in different storage (shared memory, global memory) updated by computation units of different granularity (thread, warp, thread block). Third, we exhaustively test the proposed designs on GPUs of different generations (Fermi, Kepler, and Maxwell) so that performance variations due to hardware migration are addressed. Finally, we evaluate the performance impact of tuning concurrency and algorithm settings, as well as the overheads incurred by preprocessing, smoothing, and workload unbalancing. We highlight our last CR approach, which completely avoids update conflicts in histogram calculation, leading to substantial performance improvements (up to 55x speedup over a naive CPU implementation). It reduces the registration time from 145 s to 2.6 s for two typical 256x256x160 volume images on a Kepler GPU.
    Microprocessors and Microsystems 05/2015; DOI:10.1016/j.micpro.2015.04.002 · 0.43 Impact Factor
  • Article: Bones
    Cedric Nugteren · Henk Corporaal
    ACM Transactions on Architecture and Code Optimization 12/2014; 11(4):1-25. DOI:10.1145/2665079 · 0.50 Impact Factor
  • Source
    ABSTRACT: For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model for engineers and astronomers to understand these relations and make tradeoffs in future instrument designs.
    Computer 09/2014; 47(9):48-54. DOI:10.1109/MC.2014.235 · 1.44 Impact Factor
  • ABSTRACT: Programming models such as CUDA and OpenCL allow the programmer to specify the independence of threads, effectively removing ordering constraints. Still, parallel architectures such as the graphics processing unit (GPU) do not exploit the data-locality potential enabled by this independence. Therefore, programmers are required to perform data-locality optimisations such as memory coalescing or loop tiling manually. This work makes a case for locality-aware thread scheduling: re-ordering threads automatically for better locality to improve the programmability of multi-threaded processors. In particular, we analyse the potential of locality-aware thread scheduling for GPUs, considering among others cache performance, memory coalescing, and bank locality. This work does not present an implementation of a locality-aware thread scheduler, but rather introduces the concept and identifies the potential. We conclude that non-optimised programs have the potential to achieve good cache and memory utilisation when using a smarter thread scheduler. A case study of a naive matrix multiplication shows, for example, an 87% performance increase, leading to an IPC of 457 on a 512-core GPU.
    7th International Workshop on Multi-/Many-Core Computing Systems (MuCoCoS); 08/2014
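The core idea of re-ordering threads for locality can be sketched as a remapping of the thread enumeration order. The sketch below is a hypothetical illustration, not the scheduler studied in the paper: it assumes 2D row-major data and a freely chosen tile size, and simply emits thread coordinates tile by tile instead of row by row, so threads scheduled together touch nearby data.

```python
# Illustrative locality-aware thread ordering: enumerate a width x height
# grid of threads in small tiles rather than in row-major order, so that
# consecutively scheduled threads share cache lines. Tile size is an
# assumed free parameter.

def tiled_thread_order(width, height, tile=4):
    """Yield (x, y) thread coordinates tile by tile."""
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    yield x, y
```

For a matrix multiplication, scheduling threads in such tiles groups threads that reuse the same rows and columns, which is the kind of reuse the paper's analysis quantifies.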
  • Erkan Diken · Roel Jordans · Lech Jozwiak · Henk Corporaal
    ABSTRACT: Many applications in important domains, such as communication and multimedia, exhibit significant data-level parallelism (DLP). A large part of the DLP is usually exploited through application vectorization and the implementation of vector operations in the processors executing the applications. While the amount of DLP varies between applications of the same domain, or even within a single application, processor architectures usually support a single vector width. This may not be optimal and may cause substantial energy and performance inefficiency. Therefore, a more sophisticated exploitation of DLP is highly relevant. This paper studies the construction and exploitation of VLIW ASIPs with multiple vector widths.
    MECO 2014 - 3rd Mediterranean Conference on Embedded Computing, Budva, Montenegro; 06/2014
  • Roel Jordans · Lech Jozwiak · Henk Corporaal
    ABSTRACT: Genetic algorithms are commonly used for automatically solving complex design problems, because exploration using genetic algorithms can consistently deliver good results when the algorithm is given a long enough run-time. However, the exploration time for problems with huge design spaces can be very long, often making exploration using a genetic algorithm practically infeasible. In this work, we present a genetic algorithm for exploring the instruction-set architecture of VLIW ASIPs and demonstrate its effectiveness by comparing it to two heuristic algorithms. We present several optimizations of the genetic algorithm configuration, and demonstrate how caching of intermediate compilation and simulation results can reduce the exploration time by an order of magnitude.
    MECO 2014 - 3rd Mediterranean Conference on Embedded Computing, Budva, Montenegro; 06/2014
  • Luc Waeijen · Dongrui She · Henk Corporaal · Yifan He
    ABSTRACT: It has been shown that wide Single Instruction Multiple Data architectures (wide-SIMDs) can achieve high energy efficiency, especially in domains such as image and vision processing. In these and various other application domains, reduction is a frequently encountered operation, in which multiple input elements are combined into a single element by an associative operation, e.g. addition or multiplication. Many applications require reduction, such as partial histogram merging, matrix multiplication, and min/max finding. Wide-SIMDs contain a large number of processing elements (PEs), which are generally connected by a minimal form of interconnect for scalability reasons. To efficiently support reduction operations on wide-SIMDs with such a minimal interconnect, we introduce two novel reduction algorithms that rely on neither complex communication networks nor dedicated hardware. The proposed approaches are compared with both dedicated hardware and other software solutions in terms of performance, area, and energy consumption. A practical case study demonstrates that the proposed software approach has much better generality and flexibility, and no additional hardware cost. Compared to a dedicated hardware adder tree, the proposed software approach saves 6.8% area with a performance penalty of only 6.5%.
  • Firew Siyoum · Marc Geilen · Henk Corporaal
    ABSTRACT: Embedded streaming applications require design-time temporal analysis to verify real-time constraints such as throughput and latency. In this paper, we introduce a new analytical technique to compute temporal bounds of streaming applications mapped onto a shared multiprocessor platform. We use an expressively rich application model that supports adaptive applications in which graph structure, execution times, and data rates may change dynamically. The analysis technique combines symbolic simulation in (max, +) algebra with worst-case resource availability curves. It further enables a tighter performance guarantee by improving the worst-case response times (WCRTs) of service requests that arrive in the same busy time. Evaluation on real-life application graphs shows that the technique is tens of times faster than the state-of-the-art and enables tighter throughput guarantees, up to a factor of 4, compared to typical worst-case analysis.
  • ABSTRACT: Numerous applications in important domains, such as communication and multimedia, exhibit significant data-level parallelism (DLP). A large part of the DLP is usually exploited through application vectorization and the implementation of vector operations in the processors executing the applications. While the amount of DLP varies between applications of the same domain, or even within a single application, processor architectures usually support a single vector width. This may not be optimal and may cause substantial energy inefficiency. Therefore, a more sophisticated exploitation of DLP is highly relevant. This paper proposes the use of heterogeneous vector widths and a method to explore them for VLIW ASIPs. In our context, heterogeneity corresponds to the usage of two or more different vector widths in a single ASIP. After a brief explanation of the target ASIP architecture model, the paper describes the vector-width exploration method and explains the associated design automation tools. Subsequently, experimental results are discussed.
    Microprocessors and Microsystems 05/2014; 38(8). DOI:10.1016/j.micpro.2014.05.004 · 0.43 Impact Factor
  • Source
    Roel Jordans · Erkan Diken · Lech Jozwiak · Henk Corporaal
    ABSTRACT: In this paper we introduce and discuss the BuildMaster framework, which supports the design space exploration of application-specific VLIW processors and offers automated caching of intermediate compilation and simulation results. Both the compilation cache and the simulation cache can greatly shorten the exploration time and make it possible to use more realistic data for the evaluation of selected designs. In each of the experiments we performed, we were able to reduce the number of required simulations by over 90% and save up to 50% of the required compilation time.
    DDECS 2014 - 17th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, Warsaw, Poland; 04/2014
  • ABSTRACT: As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means of modelling cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: 1) the GPU's hierarchy of threads, warps, threadblocks, and sets of active threads, 2) conditional and non-uniform latencies, 3) cache associativity, 4) miss-status holding-registers, and 5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is faster and even more accurate than the GPGPU-Sim simulator.
    High Performance Computer Architecture (HPCA), Orlando, FL, USA; 02/2014
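The sequential baseline that the paper extends to GPUs can be stated compactly. The sketch below is the classic stack-distance algorithm for a single access trace, not the GPU-aware model itself:

```python
# Classic reuse-distance (stack-distance) computation for a sequential
# memory trace: for each access, count the distinct addresses touched
# since the previous access to the same address.

def reuse_distances(trace):
    """Return the reuse distance of each access in `trace`;
    cold misses get distance infinity."""
    stack = []                  # LRU stack: most recent at the end
    dists = []
    for addr in trace:
        if addr in stack:
            # Depth from the top of the stack = number of distinct
            # addresses accessed since the last touch of `addr`.
            depth = len(stack) - 1 - stack.index(addr)
            dists.append(depth)
            stack.remove(addr)
        else:
            dists.append(float('inf'))
        stack.append(addr)      # move `addr` to the top
    return dists
```

For a fully associative LRU cache holding C lines, an access hits exactly when its reuse distance is below C, which is what makes the distance histogram a direct predictor of miss rates, and why the paper's GPU extensions (warps, MSHRs, divergence) matter: they change which trace each cache actually sees.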
  • ABSTRACT: Instruction set customization is a well-known technique to enhance the performance and efficiency of Application-Specific Processors (ASIPs). Extensive application profiling can indicate which parts of a given application, or class of applications, are most frequently executed, enabling the implementation of such frequently executed parts in hardware as custom instructions. However, a naive ad hoc instruction set customization process may identify and select poor instruction extension candidates, which may not deliver significantly improved performance with low circuit-area and energy footprints. In this paper we propose and discuss an efficient instruction set customization method and automatic tool, which exploit the maximal common subgraphs (common operation patterns) of the most frequently executed basic blocks of a given application. Speed results from our tool for a VLIW ASIP are provided for a set of benchmark applications. The average execution time reduction ranges from 30% to 40%, with only a few custom instructions.
    2014 IEEE 5th Latin American Symposium on Circuits and Systems (LASCAS); 02/2014
  • ABSTRACT: Graphics processing units (GPUs) are becoming increasingly popular for compute workloads, mainly because of their large number of processing elements and high bandwidth to off-chip memory. The roofline model captures the ratio between the two (the compute-memory ratio), an important architectural parameter. This work proposes to change the compute-memory ratio dynamically, scaling the voltage and frequency (DVFS) of 1) the memory for compute-intensive workloads and 2) the processing elements for memory-intensive workloads. The result is an adaptive, roofline-aware GPU that increases energy efficiency (up to 58%) while maintaining performance.
    International Workshop on Adaptive Self-tuning Computing Systems, Vienna, Austria; 01/2014
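The roofline-aware decision above amounts to comparing a kernel's operational intensity against the device's ridge point. A minimal sketch, with illustrative peak numbers that are not from the paper:

```python
# Roofline-style DVFS decision sketch: a kernel above the ridge point is
# compute-bound, so the memory domain has slack and can be slowed down;
# below the ridge it is memory-bound and the core domain has slack.
# Peak figures are assumed for illustration.

def dvfs_decision(flops, bytes_moved,
                  peak_gflops=1000.0, peak_gbps=200.0):
    """Pick which voltage/frequency domain to scale down for a kernel
    that performs `flops` operations and moves `bytes_moved` bytes."""
    intensity = flops / bytes_moved      # operational intensity, FLOP/byte
    ridge = peak_gflops / peak_gbps      # ridge point of the roofline
    if intensity > ridge:
        return "scale down memory"       # compute-bound
    else:
        return "scale down cores"        # memory-bound
```

With these assumed peaks the ridge point is 5 FLOP/byte, so a kernel at 100 FLOP/byte would slow the memory clocks, while one at 1 FLOP/byte would slow the cores, trading slack for energy without hurting the binding resource.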

Publication Stats

2k Citations
38.99 Total Impact Points


  • 2003–2015
    • Technische Universiteit Eindhoven
      • • Department of Electrical Engineering
      • • Embedded Systems Institute (ESI)
      Eindhoven, North Brabant, Netherlands
  • 2013
    • University of Cordoba (Spain)
      • Department of Computer Architecture and Technology, Electronics and Electrical Technology
      Cordoba, Andalusia, Spain
  • 2008
    • NXP Semiconductors
      Eindhoven, North Brabant, Netherlands
  • 2004–2006
    • imec Belgium
      • Smart Systems and Energy Technology
      Leuven, Flanders, Belgium
  • 1900–2006
    • Delft University of Technology
      • Information- and Communication Technology Section
      Delft, South Holland, Netherlands