Conference Paper

An empirical study of profiling strategies for released software and their impact on testing activities.

DOI: 10.1145/1013886.1007522 Conference: Proceedings of the ACM/SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2004, Boston, Massachusetts, USA, July 11-14, 2004
Source: DBLP

ABSTRACT: An understanding of how software is employed in the field can yield many opportunities for quality improvements. Profiling released software can provide such an understanding. However, profiling released software is difficult due to the potentially large number of deployed sites that must be profiled, users' expectations of near-complete transparency, and the need to manage data collection and deployments remotely. Researchers have recently proposed various approaches to tap into these opportunities and overcome those challenges. Initial studies have illustrated the application of these approaches and shown their feasibility. Still, the proposed approaches, and the tradeoffs among overhead, accuracy, and potential benefits for the testing activity, have barely been quantified. This paper aims to overcome those limitations. Our analysis of 1,200 user sessions on a 155 KLOC system substantiates the ability of field data to support test suite improvements, quantifies different approaches previously introduced in isolation, and assesses the efficiency of profiling techniques for released software and the effectiveness of their associated testing efforts.
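
To make the idea of field-data-driven test improvement concrete, the following minimal Python sketch ranks functions that deployed user sessions exercise but the in-house test suite never covers. All names and data are hypothetical; this is an illustrative simplification, not the instrumentation or analysis used in the study.

    # Hypothetical sketch: using field profiles to guide test-suite improvement.
    from collections import Counter

    def untested_hot_functions(field_sessions, inhouse_coverage, top_n=10):
        """Rank functions that users exercise in the field but the in-house
        test suite never executes; these are candidates for new test cases."""
        field_counts = Counter()
        for session in field_sessions:          # each session: iterable of executed function names
            field_counts.update(set(session))   # count how many sessions touch each function
        gaps = {fn: hits for fn, hits in field_counts.items()
                if fn not in inhouse_coverage}
        return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    # Example with made-up data:
    sessions = [["parse", "render", "export_pdf"], ["parse", "render"], ["export_pdf"]]
    covered_inhouse = {"parse", "render"}
    print(untested_hot_functions(sessions, covered_inhouse))
    # -> [('export_pdf', 2)]  # exercised in 2 field sessions, never tested in-house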

Related publications

  • ABSTRACT: Reproducing and learning from failures in deployed software is costly and difficult. Those activities can be facilitated, however, if the circumstances leading to a failure can be recognized and properly captured. To anticipate failures, we propose to monitor system field behavior for simple trace instances that deviate from a baseline behavior experienced in-house. In this work, we empirically investigate the effectiveness of various simple anomaly detection schemes at identifying the conditions that precede failures in deployed software. The results of our experiment provide a preliminary assessment of these schemes and expose the tradeoffs between different anomaly detection algorithms applied to several types of observable attributes under varying levels of in-house testing. (A simplified illustration of the baseline-deviation idea appears after this list.)
    Empirical Software Engineering, 12:447-469, January 2007. Impact factor: 1.18.
  • ABSTRACT: Dynamic program analysis techniques depend on accurate program traces. Program instrumentation is commonly used to collect these traces, which adds overhead to the program's execution. Various techniques have addressed this problem by minimizing the number of probes/witnesses used to collect traces. In this paper, we present a novel distributed trace collection framework wherein a program is executed multiple times with the same input for different sets of witnesses. The partial traces thus obtained are then merged to form the whole program trace. This divide-and-conquer strategy enables parallel collection of partial traces, thereby reducing the total collection time. The problem is particularly challenging because an arbitrary distribution of witnesses cannot guarantee correct formation of traces. We provide and prove a necessary and sufficient condition on the distribution of witnesses that ensures correct trace formation. Moreover, we describe witness distribution strategies suitable for parallel collection. We use the framework to collect traces of field SAP-ABAP programs using breakpoints as witnesses, since instrumentation cannot be performed due to practical constraints. To optimize such collection, we extend the Ball-Larus optimal edge-based profiling algorithm to an optimal node-based algorithm. We demonstrate the effectiveness of the framework for collecting traces of SAP-ABAP programs. (A simplified sketch of the divide-and-conquer idea appears after this list.)
    Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, August 2013.
  • ABSTRACT: We address the testing of complex, highly configurable systems - particularly those without test oracles - by testing in the field using built-in oracles derived from functions' metamorphic properties. This work is advised by Prof. Gail Kaiser. (A minimal illustration of a metamorphic runtime check appears after this list.)
    January 2008.
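
For the anomaly-detection item above, here is a minimal Python sketch of one simple scheme of the kind the study compares: build a baseline of call-pair patterns from in-house runs and flag field traces containing patterns absent from that baseline. The names and data are hypothetical; the paper evaluates several schemes and attribute types, not this particular code.

    # Hypothetical sketch: flag field traces that exhibit call-pair (2-gram)
    # patterns never observed during in-house runs.

    def ngrams(trace, n=2):
        return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

    def build_baseline(inhouse_traces, n=2):
        baseline = set()
        for trace in inhouse_traces:
            baseline |= ngrams(trace, n)
        return baseline

    def is_anomalous(field_trace, baseline, n=2):
        """A field trace is flagged if any of its n-grams is absent from the baseline."""
        return bool(ngrams(field_trace, n) - baseline)

    baseline = build_baseline([["open", "read", "close"], ["open", "write", "close"]])
    print(is_anomalous(["open", "read", "close"], baseline))   # False: all pairs seen in-house
    print(is_anomalous(["open", "close", "read"], baseline))   # True: ('close', 'read') is new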
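
For the distributed trace collection item above, the following Python sketch illustrates the divide-and-conquer idea reduced to hit-count profiles: partition the witnesses, re-run the program on the same input once per partition, and merge the partial results. This simplification sidesteps the paper's central difficulty, since for ordered traces an arbitrary witness distribution does not guarantee a correct merge; the code and names are hypothetical.

    # Hypothetical sketch of parallel partial-profile collection and merging.
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def run_with_witnesses(program, program_input, witnesses):
        """Execute the program once, recording hits only for the enabled witnesses
        (e.g., breakpoints). 'program' yields the witness ids it reaches."""
        hits = Counter()
        for event in program(program_input):
            if event in witnesses:
                hits[event] += 1
        return hits

    def collect_profile(program, program_input, all_witnesses, parts=2):
        groups = [set(all_witnesses[i::parts]) for i in range(parts)]
        with ThreadPoolExecutor() as pool:
            partials = list(pool.map(
                lambda group: run_with_witnesses(program, program_input, group), groups))
        merged = Counter()
        for partial in partials:
            merged.update(partial)  # witness subsets are disjoint, so a plain union is safe
        return merged

    # Toy deterministic "program" emitting the basic blocks it executes:
    def toy_program(x):
        yield "entry"
        for _ in range(x):
            yield "loop"
        yield "exit"

    print(collect_profile(toy_program, 3, ["entry", "loop", "exit"]))
    # -> Counter({'loop': 3, 'entry': 1, 'exit': 1})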
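
For the metamorphic-testing item above, here is a minimal Python illustration of a built-in metamorphic oracle: even without knowing the expected output for a given input, a known relation between outputs on transformed inputs can be checked at runtime in the field. The sine property used here is a standard textbook example, not one drawn from that work, and the reporting hook is hypothetical.

    # Hypothetical sketch: a runtime metamorphic check using sin(x) == sin(pi - x).
    import math

    def checked_sine(x, tolerance=1e-9):
        result = math.sin(x)
        follow_up = math.sin(math.pi - x)          # metamorphic follow-up input
        if abs(result - follow_up) > tolerance:    # property violated -> likely defect
            report_violation(x, result, follow_up) # hypothetical field-reporting hook
        return result

    def report_violation(x, result, follow_up):
        print(f"metamorphic violation at x={x}: {result} vs {follow_up}")

    print(checked_sine(1.0))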
