Conference Paper

Network Bandwidth Measurements and Ratio Analysis with the HPC Challenge Benchmark Suite (HPCC).

DOI: 10.1007/11557265_48
Conference: Recent Advances in Parallel Virtual Machine and Message Passing Interface, 12th European PVM/MPI Users' Group Meeting, Sorrento, Italy, September 18-21, 2005, Proceedings
Source: DBLP

ABSTRACT The HPC Challenge benchmark suite (HPCC) was released to analyze the performance of high-performance computing architectures using several kernels to measure different memory and hardware access patterns comprising latency-based measurements, memory streaming, inter-process communication and floating-point computation. HPCC defines a set of benchmarks augmenting the High Performance Linpack used in the Top500 list. This paper describes the inter-process communication benchmarks of this suite. Based on the effective bandwidth benchmark, a special parallel random and natural ring communication benchmark has been developed for HPCC. Ping-Pong benchmarks on a set of process pairs can be used for further characterization of a system. This paper analyzes first results achieved with HPCC. The focus of this paper is on the balance between computational speed, memory bandwidth, and inter-node communication.
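As an illustration of the pairwise measurements described above, the following is a minimal MPI ping-pong sketch in C. It is not the HPCC implementation; the message size, repetition count, and even/odd pairing scheme are arbitrary assumptions chosen for clarity. The ring benchmarks instead circulate messages around all processes in natural or randomly permuted order, typically using MPI_Sendrecv or non-blocking calls.

/* Minimal ping-pong latency/bandwidth sketch (not the HPCC code).
 * Build: mpicc pingpong.c -o pingpong; run: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB message; arbitrary example size */
#define REPS      100         /* repetitions averaged per pair */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *buf = malloc(MSG_BYTES);
    int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;  /* even/odd pairs */

    if (partner < size) {
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank % 2 == 0) {           /* even rank sends first */
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {                       /* odd rank echoes the message */
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, partner, 0, MPI_COMM_WORLD);
            }
        }
        double rtt = (MPI_Wtime() - t0) / REPS;   /* average round-trip time */
        if (rank % 2 == 0)
            printf("pair (%d,%d): round trip %.1f us, bandwidth %.1f MB/s\n",
                   rank, partner, rtt * 1e6, 2.0 * MSG_BYTES / rtt / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}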

  • ABSTRACT: The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than implied by peak performance in order to achieve desired performance. The latest generation of custom-built parallel vector systems has the potential to address this issue for numerical algorithms with sufficient regularity in their computational structure. In this work we explore applications drawn from four areas: magnetic fusion (GTC), plasma physics (LB-MHD-3D), astrophysics (Cactus), and material science (PARATEC). We compare the performance of the vector-based Cray X1, X1E, Earth Simulator, and NEC SX-8 with that of three leading commodity-based superscalar platforms utilizing the IBM Power3, Intel Itanium2, and AMD Opteron processors. Our work makes several significant contributions: a new data-decomposition scheme for GTC that (for the first time) enables a breakthrough of the teraflop barrier; the introduction of a new three-dimensional lattice Boltzmann magneto-hydrodynamic implementation used to study the onset evolution of plasma turbulence that achieves over 26 Tflop/s on 4800 ES processors; the highest per processor performance (by far) achieved by the full-production version of the Cactus ADM-BSSN; and the largest PARATEC cell size atomistic simulation to date. Overall, results show that the vector architectures attain unprecedented aggregate performance across our application suite, demonstrating the tremendous potential of modern parallel vector systems.
    International Journal of High Performance Computing Applications, 2008; 22:5-20.
  • ABSTRACT: This paper provides a comprehensive performance evaluation of the NEC SX-8 system at the High Performance Computing Center Stuttgart, which has been in operation since July 2005. It provides a description of the installed hardware together with its performance for some synthetic benchmarks and five real-world applications. All the applications achieved sustained Tflop/s performance. Additionally, the measurements presented show the ability of the system to solve not only large problems with very high performance, but also medium-sized problems with high efficiency using a large number of processors.
    International Journal of High Performance Computing Applications, 2008; 22:131-148.
  • ABSTRACT: Virtual machine (VM) technology has received increasing interest in both industry and the research community. Although the potential advantages of virtualization for HPC workloads have been documented, its impact on application performance in HPC environments is not clearly understood. This paper presents a performance evaluation of virtual HPC systems using the High Performance Computing Challenge (HPCC) benchmark suite as the representative workload and xVM as the VM technology. We propose an efficient performance evaluation model based on an extended AHP (Analytic Hierarchy Process) method, analyze the results, and quantify the overhead of xVM in terms of compute, memory, and network performance. Our analysis shows that computational and network performance under HVM is slightly better, and memory performance significantly better, than under paravirtualization.
    01/2009;
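The evaluation model in the last abstract builds on the Analytic Hierarchy Process (AHP). As a rough, hypothetical illustration of the basic weighting step such models rest on (not the extended model proposed in that paper), the sketch below derives priority weights for three example criteria from a pairwise comparison matrix via power iteration; the comparison values are invented for the example.

/* Hypothetical AHP weighting sketch: principal eigenvector of a pairwise
 * comparison matrix via power iteration. Criteria and values are invented. */
#include <stdio.h>

#define N 3  /* example criteria: compute, memory, network */

int main(void)
{
    /* Invented pairwise comparisons on the usual 1-9 scale (a[i][j] says how
     * much more important criterion i is than criterion j). */
    double a[N][N] = {
        {1.0,       3.0, 2.0},
        {1.0 / 3.0, 1.0, 0.5},
        {0.5,       2.0, 1.0}
    };
    double w[N] = {1.0, 1.0, 1.0}, next[N];

    /* Power iteration: apply the matrix repeatedly and renormalize so the
     * weight vector converges to the principal eigenvector. */
    for (int it = 0; it < 100; it++) {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            next[i] = 0.0;
            for (int j = 0; j < N; j++)
                next[i] += a[i][j] * w[j];
            sum += next[i];
        }
        for (int i = 0; i < N; i++)
            w[i] = next[i] / sum;   /* weights sum to 1 */
    }

    for (int i = 0; i < N; i++)
        printf("criterion %d weight: %.3f\n", i, w[i]);
    return 0;
}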
