Conference Paper

Performance of Network Subsystems for a Technical Simulation on Linux Clusters.

Conference: International Conference on Parallel and Distributed Computing Systems, PDCS 2005, November 14-16, 2005, Phoenix, AZ, USA
Source: DBLP
  • Source
    ABSTRACT: The focus of this paper is to evaluate the use of two different network topologies for Ethernet networks in small Commercial Off-The-Shelf (COTS) clusters. The fully meshed network topology was evaluated, and its impact on latency and bandwidth was measured and compared to the more traditional switched network topology. This was done at the MPI level by measuring point-to-point round-trip latency (ping-pong) and all-to-all bandwidth for different message sizes. The results from the experiments are presented, and the overall benefits and drawbacks of both approaches are discussed.
    International Conference on Applied Computing, San Sebastian, Spain; 05/2006
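The ping-pong methodology described in the abstract can be sketched as follows. This is a minimal illustration of the measurement logic only, using Python sockets on localhost rather than MPI point-to-point calls (the paper's actual benchmarks run at the MPI level); all names and parameters here are hypothetical.

```python
import socket
import threading
import time

def echo_server(server_sock):
    # "Pong" side: echo every message back to the sender.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

def pingpong_latency(msg_size, iters=100):
    """Estimate one-way latency (seconds) for messages of msg_size bytes.

    As in the classic MPI ping-pong benchmark, one-way latency is taken
    to be half the mean round-trip time.
    """
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(iters):
        client.sendall(payload)           # ping
        received = 0
        while received < msg_size:        # wait for the full pong
            received += len(client.recv(65536))
    elapsed = time.perf_counter() - start
    client.close()
    return elapsed / iters / 2  # half the mean round-trip time

if __name__ == "__main__":
    lat = pingpong_latency(1024)
    print(f"estimated one-way latency for 1 KiB: {lat * 1e6:.1f} us")
```

Sweeping `msg_size` over a range of values yields the latency-versus-message-size curves that such benchmarks typically report.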
  • Source
    ABSTRACT: The ability to precisely predict how memory contention degrades performance when co-scheduling programs is critical for reaching high performance levels in cluster, grid and cloud environments. In this paper we present an overview and compare the performance of state-of-the-art characterization methods for memory-aware (co-)scheduling. We evaluate the prediction accuracy and co-scheduling performance of four methods: one slowdown-based, two cache-contention based and one based on memory bandwidth usage. Both our regression analysis and scheduling simulations find that the slowdown-based method, represented by Memgen, performs better than the other methods. The linear correlation coefficient R² of Memgen's prediction is 0.890. Memgen's preferred schedules reached 99.53 % of the obtainable performance on average. Also, the memory bandwidth usage method performed almost as well as the slowdown-based method. Furthermore, while most prior work promotes characterization based on cache miss rate, we found it to be on par with random scheduling of programs and highly unreliable.
    The Journal of Supercomputing 04/2015; 71(4). DOI:10.1007/s11227-014-1374-8 · 0.84 Impact Factor
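As a rough illustration of the R² figure quoted above, the coefficient of determination between predicted and measured slowdowns can be computed as follows. The slowdown values below are made up for the example; they are not data from the paper.

```python
def r_squared(predicted, measured):
    """Coefficient of determination R^2 of predictions against measurements."""
    n = len(measured)
    mean = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))  # residual sum of squares
    ss_tot = sum((m - mean) ** 2 for m in measured)                  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical co-scheduling slowdown factors:
# model predictions vs. measured slowdowns for five program pairs.
predicted = [1.05, 1.20, 1.50, 1.80, 2.10]
measured  = [1.04, 1.25, 1.45, 1.85, 2.05]
print(round(r_squared(predicted, measured), 3))
```

An R² near 1.0 means the characterization method explains almost all of the observed variation in slowdown, which is why it is a natural accuracy metric for comparing these prediction methods.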
