Conference Paper

Infrastructure support for controlled experimentation with software testing and regression testing techniques

Department of Computer Science & Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
DOI: 10.1109/ISESE.2004.1334894 Conference: Proceedings of the 2004 International Symposium on Empirical Software Engineering (ISESE '04)
Source: IEEE Xplore

ABSTRACT: Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, which we are now making available to other researchers, and discusses the impact that this infrastructure has had and can be expected to have.
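As a concrete illustration of the kind of study such infrastructure supports, here is a minimal Python sketch comparing two regression testing techniques on one program version. Everything in it (the fault matrix, test names, and the random-selection "technique") is invented for illustration and is not drawn from the paper or its repository, which provides real programs, versions, test suites, and fault data instead.

```python
# Hypothetical sketch: comparing two regression testing techniques on one
# program version, using the kinds of artifacts (test suites, seeded faults)
# that experimental infrastructure of this sort provides. All data is invented.
import random

# fault_matrix[test] = faults that the test exposes on this version.
fault_matrix = {
    "t1": {"f1"}, "t2": {"f1", "f3"}, "t3": set(),
    "t4": {"f2"}, "t5": {"f3"}, "t6": set(),
}
all_faults = set().union(*fault_matrix.values())

def detected(suite):
    """Faults exposed by running the given subset of tests."""
    return set().union(*(fault_matrix[t] for t in suite)) if suite else set()

def evaluate(name, suite):
    """Two variables such experiments commonly measure: fault-detection
    effectiveness and reduction in the number of tests executed."""
    effectiveness = len(detected(suite)) / len(all_faults)
    reduction = 1 - len(suite) / len(fault_matrix)
    print(f"{name:12s} effectiveness={effectiveness:.2f} reduction={reduction:.2f}")

evaluate("retest-all", list(fault_matrix))   # control: run every test
random.seed(0)                               # fixed seed for repeatability
evaluate("random-50%", random.sample(sorted(fault_matrix), len(fault_matrix) // 2))
```

Running both techniques against the same version and fault data is what makes the comparison controlled; a real experiment would repeat this across many programs, versions, and technique variants.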

Related publications

  • ABSTRACT: Case studies can help companies evaluate the benefits of testing techniques and tools before incorporating them into their testing processes. Although general guidelines and organizational frameworks describe what a case study should consist of, no general methodological framework exists that can be instantiated to easily design case studies for evaluating different testing techniques. In this paper we define a first version of a general methodological framework for evaluating software testing techniques, focused on measuring effectiveness and efficiency. Using this framework, (1) software testing practitioners can more easily define case studies through an instantiation of the framework, (2) results can be better compared since all studies are executed according to a similar design, and (3) the gap in existing work on methodological evaluation frameworks is narrowed. (A sketch of such effectiveness and efficiency measures appears after this list.)
  • ABSTRACT: There is a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained through secondary studies over a body of evidence consisting of case studies that evaluate and compare testing techniques and tools; however, such a body of evidence is also lacking. In this paper, we take a first step towards creating that body of evidence by defining a general methodological evaluation framework that can simplify the design of case studies for comparing software testing tools and make the results more precise, reliable, and easy to compare. Using this framework, (1) software testing practitioners can more easily define case studies through an instantiation of the framework, (2) results can be better compared since all studies are executed according to a similar design, (3) the gap in existing work on methodological evaluation frameworks is narrowed, and (4) a body of evidence is initiated. To validate the framework, we present successful applications of it to various case studies evaluating testing tools in an industrial environment with real objects and real subjects.
    QSIC 2012; 08/2012
  • ABSTRACT: AMPLE locates likely failure-causing classes by comparing the method call sequences of passing and failing runs. A difference in method call sequences, such as multiple deallocation of the same resource, is likely to point to the erroneous class. In this paper, we describe the implementation of AMPLE as well as its evaluation. (A toy sketch of this kind of comparison appears after this list.)
    01/2005
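The effectiveness and efficiency measures that the two framework papers above evaluate can be made concrete with a small sketch. The definitions below (defects found over defects known, and defects found per hour of effort) are common choices assumed here for illustration; the papers do not prescribe these exact formulas, and every name and number is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CaseStudyRun:
    """One application of a testing technique within a case study.
    Field names are hypothetical placeholders for the framework's variables."""
    technique: str
    defects_found: int
    defects_known: int    # total defects present in the object under test
    effort_hours: float   # time spent designing and executing the tests

    def effectiveness(self) -> float:
        """Fraction of known defects the technique exposed."""
        return self.defects_found / self.defects_known

    def efficiency(self) -> float:
        """Defects exposed per hour of testing effort."""
        return self.defects_found / self.effort_hours

# Two hypothetical runs instantiated from the same study design,
# which is what makes their results comparable across studies.
runs = [
    CaseStudyRun("technique-A", defects_found=8, defects_known=10, effort_hours=16),
    CaseStudyRun("technique-B", defects_found=6, defects_known=10, effort_hours=8),
]
for r in runs:
    print(f"{r.technique}: effectiveness={r.effectiveness():.2f}, "
          f"efficiency={r.efficiency():.2f} defects/hour")
```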
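Likewise, the comparison AMPLE performs can be sketched in miniature: summarize each run's method calls as per-class call-sequence windows, then rank classes whose windows appear only in failing runs. This is a loose simplification for illustration, not AMPLE's actual algorithm (which works over per-object sequence sets with a weighting scheme); all traces and class names are invented.

```python
from collections import Counter

def call_windows(trace, k=2):
    """Length-k windows over each class's outgoing method calls.
    A trace is a list of (class_name, method_name) pairs."""
    per_class = {}
    for cls, method in trace:
        per_class.setdefault(cls, []).append(method)
    windows = set()
    for cls, calls in per_class.items():
        for i in range(len(calls) - k + 1):
            windows.add((cls, tuple(calls[i:i + k])))
    return windows

def rank_classes(passing_runs, failing_runs, k=2):
    """Rank classes by how many of their call-sequence windows occur
    only in failing runs (a crude stand-in for AMPLE's weighting)."""
    passing = set().union(*(call_windows(t, k) for t in passing_runs))
    failing = set().union(*(call_windows(t, k) for t in failing_runs))
    return Counter(cls for cls, _ in failing - passing).most_common()

# Invented traces: the failing run deallocates the same resource twice,
# yielding call windows for class Resource never seen in passing runs.
ok  = [[("Resource", "alloc"), ("Resource", "use"), ("Resource", "free")]]
bad = [[("Resource", "alloc"), ("Resource", "free"), ("Resource", "free")]]
print(rank_classes(ok, bad))   # [('Resource', 2)]: Resource ranked first
```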
