Conference Paper

Infrastructure support for controlled experimentation with software testing and regression testing techniques

Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
DOI: 10.1109/ISESE.2004.1334894
Conference: Proceedings of the 2004 International Symposium on Empirical Software Engineering (ISESE '04)
Source: CiteSeer

ABSTRACT: Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has and can be expected to have.
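To make concrete the kind of controlled experiment such infrastructure supports, the sketch below compares a retest-all baseline against a simple coverage-based regression test selection technique on one program version with seeded faults. All identifiers, data, and the selection heuristic here are illustrative assumptions, not the paper's actual infrastructure or API.

    # Illustrative sketch: a minimal controlled comparison of two regression
    # test selection techniques on one program version with seeded faults.
    # Everything here is hypothetical, not the paper's artifacts or API.

    def retest_all(tests, changed):
        """Baseline technique: rerun the entire test suite."""
        return list(tests)

    def select_by_coverage(tests, changed, coverage):
        """Select only tests whose coverage overlaps the changed functions."""
        return [t for t in tests if coverage[t] & changed]

    def effectiveness(selected, fault_matrix):
        """Fraction of the known (seeded) faults exposed by the selected tests."""
        known = set().union(*fault_matrix.values())
        exposed = set().union(*(fault_matrix[t] for t in selected)) if selected else set()
        return len(exposed) / len(known)

    # Hypothetical per-version data; in practice the repository would supply
    # the programs, versions, test suites, coverage, and fault data.
    tests = ["t1", "t2", "t3", "t4"]
    coverage = {"t1": {"parse"}, "t2": {"eval"},
                "t3": {"parse", "eval"}, "t4": {"print"}}
    changed = {"parse"}  # functions modified in this version
    fault_matrix = {"t1": {1}, "t2": set(), "t3": {1, 2}, "t4": set()}

    for name, selected in [("retest-all", retest_all(tests, changed)),
                           ("coverage-based", select_by_coverage(tests, changed, coverage))]:
        print(f"{name}: {len(selected)} tests, "
              f"effectiveness {effectiveness(selected, fault_matrix):.2f}")

On this toy data both techniques expose both seeded faults, but the coverage-based technique runs half as many tests; a real study would repeat this over many versions and report cost and fault-detection distributions.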

Related publications:

  • ABSTRACT: Case studies can help companies to evaluate the benefits of testing techniques and tools before their possible incorporation into their testing processes. Although general guidelines and organizational frameworks exist describing what a case study should consist of, no general methodological framework exists that can be instantiated to easily design case studies for evaluating different testing techniques. In this paper we define a first version of a general methodological framework for evaluating software testing techniques that focuses on the evaluation of effectiveness and efficiency. Using this framework, (1) software testing practitioners can more easily define case studies through an instantiation of the framework, (2) results can be better compared since all studies are executed according to a similar design, and (3) the gap in existing work on methodological evaluation frameworks will be narrowed. (A sketch of the two evaluation criteria, effectiveness and efficiency, follows this list.)
  • ABSTRACT: The Software Engineering Research community has slowly recognized that empirical studies are an important way of validating ideas, and our community has increasingly stopped accepting the argument that because a smart person came up with an idea, it must be good. This has led to a flood of Software Engineering papers that contain at least some form of empirical study. However, not all empirical studies are created equal, and many may not provide any useful information or value. We survey the gradual shift from essentially no empirical studies, to a small number of questionable value, and look at what we need to do to ensure that our empirical studies really contribute to the state of knowledge in the field. Thus we have the good, the bad, and the ugly. What are we as a community doing correctly? What are we doing less well than we should, either because we don't have the necessary artifacts or because the time and resources required to do "the good" are perceived to be too great? And where are we missing the boat entirely, failing to address critical questions and often not even recognizing that these questions are central even when we don't know the answers? Finally, we look at the projects that have really made the transition from research to widespread practice, to see whether we can identify some common themes.
    2011 International Symposium on Empirical Software Engineering and Measurement (ESEM); 01/2011
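The framework abstract above centers on two evaluation criteria, effectiveness and efficiency. The minimal sketch below shows one common way to operationalize them; these definitions are assumptions for illustration, not taken from the framework itself.

    # Assumed operationalizations of the two criteria named in the framework
    # abstract above; the framework itself may define them differently.

    def effectiveness(faults_detected: int, faults_known: int) -> float:
        """Fraction of the known faults that a technique exposes."""
        return faults_detected / faults_known

    def efficiency(faults_detected: int, effort_hours: float) -> float:
        """Faults exposed per unit of testing effort."""
        return faults_detected / effort_hours

    # Example: a technique finding 8 of 10 seeded faults in 4 hours.
    print(effectiveness(8, 10))  # 0.8
    print(efficiency(8, 4.0))    # 2.0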
