Conference Paper

The impact of paravirtualized memory hierarchy on linear algebra computational kernels and software

DOI: 10.1145/1383422.1383440
Conference: Proceedings of the 17th International Symposium on High-Performance Distributed Computing (HPDC-17 2008), 23-27 June 2008, Boston, MA, USA
Source: DBLP


Previous studies have revealed that paravirtualization imposes minimal performance overhead on High Performance Computing (HPC) workloads while offering numerous benefits to the field. In this study, we investigate the memory hierarchy characteristics of paravirtualized systems and their impact on automatically tuned software systems. We present an accurate characterization of memory attributes using hardware counters and user-process accounting. To this end, we examine the proficiency of ATLAS, a quintessential example of an autotuning software system, in tuning the BLAS library routines for paravirtualized systems. In addition, we examine the effects of paravirtualization on the performance boundary. Our results show that the combination of ATLAS and Xen paravirtualization delivers native execution performance and nearly identical memory hierarchy performance profiles. Our research thus exposes new benefits to memory-intensive applications arising from the ability to slim down the guest OS without influencing system performance. In addition, our findings support a novel and very attractive deployment scenario for computational science and engineering codes on virtual clusters and computational clouds.
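To illustrate the kind of measurement the abstract refers to, the sketch below reads hardware cache-miss counters around a single BLAS DGEMM call using the PAPI low-level API. It is a minimal sketch, not the instrumentation used in the paper: the chosen events (PAPI_L1_DCM, PAPI_L2_DCM), the matrix size, and the CBLAS/PAPI setup are illustrative assumptions.

```c
/* Minimal sketch (illustrative, not the paper's code): count L1/L2
 * data-cache misses around one DGEMM call with the PAPI low-level API.
 * Assumes PAPI and a CBLAS implementation are installed and that the
 * preset events are supported on the host (or exposed to the guest).
 * Example build: gcc dgemm_papi.c -lpapi -lcblas -o dgemm_papi
 */
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>
#include <cblas.h>

int main(void)
{
    const int n = 1024;                    /* illustrative matrix order */
    double *A = malloc(sizeof(double) * n * n);
    double *B = malloc(sizeof(double) * n * n);
    double *C = malloc(sizeof(double) * n * n);
    if (!A || !B || !C) return 1;

    for (int i = 0; i < n * n; i++) {      /* arbitrary test data */
        A[i] = 1.0; B[i] = 2.0; C[i] = 0.0;
    }

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) return 1;

    int event_set = PAPI_NULL;
    long long counts[2];
    if (PAPI_create_eventset(&event_set) != PAPI_OK) return 1;
    if (PAPI_add_event(event_set, PAPI_L1_DCM) != PAPI_OK) return 1;
    if (PAPI_add_event(event_set, PAPI_L2_DCM) != PAPI_OK) return 1;

    if (PAPI_start(event_set) != PAPI_OK) return 1;

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    if (PAPI_stop(event_set, counts) != PAPI_OK) return 1;

    printf("L1 data-cache misses: %lld\n", counts[0]);
    printf("L2 data-cache misses: %lld\n", counts[1]);

    free(A); free(B); free(C);
    return 0;
}
```

Running the same binary natively and inside a Xen guest and comparing the counter values is one way to obtain the side-by-side memory hierarchy profiles discussed above; whether a guest can read hardware counters at all depends on the hypervisor and its configuration.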



  • "Recently, much interest in the use of virtualization has been shown by the HPC community, spurred by two seminal studies [6], [32] that find virtualization overhead to be negligible for computation-intensive HPC kernels and applications such as the NAS Parallel Benchmarks (NPB). Other studies have investigated the performance of virtualization for specific HPC application domains [28], [33], or for mixtures of Web and HPC workloads running on virtualized (shared) resources [34]."
  • "On the one hand, the cloud offers scientists instant availability of large computational power at an affordable price. This is achieved via low-overhead virtualization of hardware resources [47], [48], [49]. On the other hand, the common practice of using commodity interconnects and shared resources in the cloud alters fundamental assumptions that scientific applications were based on in the past."
    ABSTRACT: Scientists are currently evaluating the cloud as a new platform. Many important scientific applications, however, perform poorly in the cloud. These applications proceed in highly parallel discrete time-steps or "ticks," using logical synchronization barriers at tick boundaries. We observe that network jitter in the cloud can severely increase the time required for communication in these applications, significantly increasing overall running time. In this paper, we propose a general parallel framework to process time-stepped applications in the cloud. Our framework exposes a high-level, data-centric programming model which represents application state as tables and dependencies between states as queries over these tables. We design a jitter-tolerant runtime that uses these data dependencies to absorb latency spikes by (1) carefully scheduling computation and (2) replicating data and computation. Our data-driven approach is transparent to the scientist and requires little additional code. Our experiments show that our methods improve performance up to a factor of three for several typical time-stepped applications.
  • "The academic research community has also taken a keen interest in creating frameworks such as Virtual Workspaces [18], OpenNebula [22], Eucalyptus [10], [39] and Aneka [5] to address this opportunity. According to Buyya & Yeo [5], the hype around cloud computing is being realized in the form of real-world solutions."
    ABSTRACT: The notion of cloud computing capability is gathering momentum rapidly. However, the governance and enterprise architecture needed to obtain repeatable, scalable and secure business outcomes from cloud computing remain largely undefined. Little research has explored a framework that considers not only financial motivations, but also business initiatives, IT governance structures, IT operational control structures and technical architecture requirements when evaluating the benefits of cloud investment. We propose a novel model to address this. The model can be leveraged by an organisation to evaluate the 'tipping point' at which it can make an informed decision to embrace cloud computing at the expense of on-premise hosting options. The authors refer to this model as the cloud computing tipping point (C2TP) model. The model is a service-centric framework created by mapping cloud computing attributes to industry best practices such as ValIT, COBIT and ITIL.
    01/2011; 1(1):3-22. DOI: 10.1504/IJCC.2011.043243