ABSTRACT: With the increasing availability of Cloud computing services, this paper addresses the challenge that consumers of Infrastructure-as-a-Service (IaaS) face in determining which IaaS provider and resources are best suited to run an application with specific Quality of Service (QoS) requirements. Using application modelling to predict performance is an attractive concept, but it is very difficult given the limited information IaaS providers typically publish about their computing resources. This paper reports on an initial investigation into using Dwarf benchmarks to measure the performance of virtualised hardware, with experiments conducted on BonFIRE and Amazon EC2. As one might expect, the results demonstrate that labels such as 'small', 'medium', 'large' or a number of ECUs are not sufficiently informative to predict application performance. Furthermore, knowing the CPU speed, cache size or RAM size is not necessarily sufficient either, as other complex factors can lead to significant performance differences. We show that different hardware is better suited to different types of computation and, thus, the relative performance of applications varies across hardware. The Dwarf benchmarks reflect this well, and we show that different applications correlate more strongly with different Dwarfs, opening the possibility of using Dwarf benchmark scores as parameters in application models.
IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom 2011), Athens, Greece, November 29 - December 1, 2011.