Article

Adaptable Particle-in-Cell algorithms for graphical processing units

Computer Physics Communications (Impact Factor: 2.41). 03/2011; 182(3):641-648. DOI: 10.1016/j.cpc.2010.11.009
Source: DBLP

ABSTRACT Emerging computer architectures consist of an increasing number of shared-memory computing cores on a chip, often with vector (SIMD) co-processors. Future exascale high-performance systems will consist of a hierarchy of such nodes, which will require different algorithms at different levels. Since no one knows exactly how the future will evolve, we have begun development of an adaptable Particle-in-Cell (PIC) code whose parameters can match different hardware configurations. The data structures reflect three levels of parallelism: contiguous vectors, non-contiguous blocks of vectors which can share memory, and groups of blocks which do not. Particles are kept ordered at each time step, and the size of a sorting cell is an adjustable parameter. We have implemented a simple 2D electrostatic skeleton code whose inner loop (containing 6 subroutines) runs entirely on the NVIDIA Tesla C1060. We obtained speedups of about 16-25 compared to a 2.66 GHz Intel i7 (Nehalem), depending on the plasma temperature, with an asymptotic limit of 40 for a frozen plasma. We expect speedups of about 70 for a 2D electromagnetic code and about 100 for a 3D electromagnetic code, which have higher computational intensities (more floating-point operations per memory access).
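The data layout described in the abstract can be pictured with a short sketch. The CUDA fragment below is a minimal illustration under assumptions: ParticleTiles, push_tile, and max_per_cell are hypothetical names, not identifiers from the paper. Each sorting cell owns a contiguous, padded particle segment (the vector level), one thread block advances one segment (the shared-memory block level), and independent blocks form the outer level that shares no memory; keeping particles ordered at each step is what keeps the segments contiguous, and the number of grid cells spanned by a sorting cell is the adjustable parameter.

    // Minimal sketch (not code from the paper): a structure-of-arrays particle
    // layout partitioned by sorting cell. Each sorting cell owns a contiguous,
    // padded particle segment (SIMD/vector friendly); segments handled by one
    // thread block can share on-chip memory; different blocks do not.
    #include <cuda_runtime.h>

    struct ParticleTiles {
        float *x, *y;           // positions, grouped by sorting cell
        float *vx, *vy;         // velocities, same ordering
        int   *count;           // number of particles in each sorting cell
        int    max_per_cell;    // padded segment length (keeps segments aligned)
        int    sort_cell_nx;    // grid cells per sorting cell in x: the
        int    sort_cell_ny;    //   adjustable sorting-cell size parameter
    };

    // One thread block per sorting cell: advance its contiguous segment.
    __global__ void push_tile(ParticleTiles p, float dt)
    {
        int cell = blockIdx.x;               // sorting cell owned by this block
        int base = cell * p.max_per_cell;    // start of its contiguous segment
        int n    = p.count[cell];
        for (int i = threadIdx.x; i < n; i += blockDim.x) {
            // Free-streaming update only; field interpolation and the
            // end-of-step reordering pass are omitted from this sketch.
            p.x[base + i] += dt * p.vx[base + i];
            p.y[base + i] += dt * p.vy[base + i];
        }
    }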

  • ABSTRACT: We have designed Particle-in-Cell algorithms for emerging architectures. These algorithms share a common approach, using fine-grained tiles, but different implementations depending on the architecture. On the GPU, there were two different implementations, one with atomic operations and one with no data collisions, using CUDA C and Fortran (the atomic variant is illustrated in the first sketch after this list). Speedups up to about 50 compared to a single core of the Intel i7 processor have been achieved. There was also an implementation for traditional multi-core processors using OpenMP, which achieved high parallel efficiency. We believe that this approach should work for other emerging designs such as the Intel Phi coprocessor from the Intel MIC architecture.
    Computer Physics Communications (Impact Factor: 2.41). 03/2014; 185(3):708–719. DOI: 10.1016/j.cpc.2013.10.013
  • ABSTRACT: Modern graphics processing units (GPUs) have been widely utilized in magnetohydrodynamic (MHD) simulations in recent years. Due to the limited memory of a single GPU, distributed multi-GPU systems need to be explored for large-scale MHD simulations; however, the data transfer between GPUs becomes the bottleneck on such systems. In this paper we propose a novel GPU Direct-MPI hybrid approach to address this problem for overall performance enhancement. Our approach consists of two strategies: (1) we exploit GPU Direct 2.0 to speed up data transfers between multiple GPUs in a single node and reduce the total number of message passing interface (MPI) communications; (2) we design Compute Unified Device Architecture (CUDA) kernels, instead of using memory copies, to speed up the fragmented data exchange of the three-dimensional (3D) decomposition (the second sketch after this list illustrates this packing idea). 3D decomposition is usually not preferred for distributed multi-GPU systems because of the low efficiency of this fragmented data exchange; our approach makes it practical, reducing the memory usage and computation time of each partition of the computational domain. Experimental results show twice the FLOPS compared with a common 2D-decomposition, MPI-only implementation. The proposed approach has been developed into an efficient implementation for MHD simulations on distributed multi-GPU systems, called the MGPU-MHD code. The code realizes the GPU parallelization of a total variation diminishing (TVD) algorithm for solving the multidimensional ideal MHD equations, extending our single-GPU work (Wong et al., 2011) to multiple GPUs. Numerical tests and performance measurements are conducted on the TSUBAME 2.0 supercomputer at the Tokyo Institute of Technology. Our code achieves 2 TFLOPS in double precision for a problem with 1200³ grid points using 216 GPUs.
    Computer Physics Communications (Impact Factor: 2.41). 07/2014; 185(7). DOI: 10.1016/j.cpc.2014.03.018
  • ABSTRACT: The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This work presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA; it achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems and scales efficiently to tens of thousands of cores.
    International Journal of High Performance Computing Applications (Impact Factor: 1.63). 10/2013; 27(4):454-473. DOI: 10.1177/1094342013492446
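The first related abstract above contrasts a GPU charge deposit that uses atomic operations with one that avoids data collisions. The sketch below shows only the atomic variant, with nearest-grid-point weighting for brevity; deposit_atomic and rho are hypothetical names, and this is an illustration of the general technique rather than code from that paper.

    #include <cuda_runtime.h>

    // Charge deposit with atomic operations: one thread per particle.
    // Nearest-grid-point weighting for brevity; real PIC codes use linear
    // or higher-order interpolation and handle boundaries explicitly.
    __global__ void deposit_atomic(const float *x, const float *y, int np,
                                   float *rho, int nx, int ny,
                                   float dx, float dy, float q)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= np) return;
        int ix = (int)(x[i] / dx);
        int iy = (int)(y[i] / dy);
        if (ix < 0 || ix >= nx || iy < 0 || iy >= ny) return;  // boundaries omitted
        // Threads handling different particles may hit the same grid point,
        // so the update must be atomic.  A collision-free variant instead
        // gives each tile a private copy of its local grid (e.g. in shared
        // memory) and merges the copies in a separate pass.
        atomicAdd(&rho[iy * nx + ix], q);
    }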
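The second related abstract replaces many small memory copies with CUDA packing kernels for the fragmented halo exchange of a 3D decomposition. The sketch below shows the idea for one non-contiguous x-face of a block stored with x varying fastest; pack_x_face and the data layout are assumptions for illustration, not the MGPU-MHD implementation.

    #include <cuda_runtime.h>

    // Gather one non-contiguous x-face of an nx*ny*nz block (x varies fastest)
    // into a contiguous send buffer with a single kernel, instead of ny*nz
    // small device-to-device copies.  With GPU Direct / CUDA-aware MPI the
    // packed buffer can be handed to MPI_Isend directly from device memory.
    __global__ void pack_x_face(const double *field, double *sendbuf,
                                int nx, int ny, int nz, int ix)
    {
        int j = blockIdx.x * blockDim.x + threadIdx.x;  // y index
        int k = blockIdx.y;                             // z index
        if (j < ny && k < nz)
            sendbuf[k * ny + j] = field[(size_t)(k * ny + j) * nx + ix];
    }

    // Example launch for the boundary plane at ix = nx - 2:
    //   pack_x_face<<<dim3((ny + 127) / 128, nz), 128>>>(d_field, d_sendbuf,
    //                                                    nx, ny, nz, nx - 2);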
