Conference Paper

Infrastructure support for controlled experimentation with software testing and regression testing techniques

Dept. of Comput. Sci. & Eng., Nebraska Univ., Lincoln, NE, USA
DOI: 10.1109/ISESE.2004.1334894. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering (ISESE '04).
Source: IEEE Xplore

ABSTRACT Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has and can be expected to have.

Related publications:
    ABSTRACT: This paper presents a scalable, statement-level visualization that shows related code in a way that supports human interpretation of clustering and context. The visualization is applicable to many software-engineering tasks because it uses and visualizes problem-specific meta-data. It models statement-level code relations from a system-dependence-graph model of the program being visualized, and dynamic, run-time information augments the static program model to further enable visual cluster identification and interpretation. We also performed a user study of the visualization on an example program domain; the results show that the visualization successfully revealed relevant context to the programmer participants. (A toy sketch illustrating this dependence-graph clustering idea follows these summaries.)
    6th IEEE International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT 2011); 10/2011
    ABSTRACT: In today’s information society, flash memory has become a virtually indispensable component, particularly for mobile devices. For mobile devices to operate reliably, flash memory must be controlled correctly by flash storage platform software such as the file system, the flash translation layer, and low-level device drivers. However, as is typical for embedded software, conventional testing methods often fail to detect hidden flaws because effective test cases are hard to create. Model checking techniques, by contrast, guarantee a complete analysis, but only on a limited scale. In this paper, we describe an empirical study in which a concolic testing method is applied to the multi-sector read operation of flash storage platform software. This method combines concrete dynamic execution with symbolic execution to automatically generate test cases for full path coverage. Through the experiments, we analyze the advantages and weaknesses of the concolic testing approach on flash storage platform software. (A minimal sketch of this concrete-plus-symbolic loop also follows these summaries.) Keywords: flash memory, concolic testing, unit analysis, embedded software verification.
    Formal Aspects of Computing 05/2012; 24(3):1-20.
    ABSTRACT: There is a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained by conducting secondary studies on a body of evidence consisting of case studies that evaluate and compare testing techniques and tools. However, such a body of evidence is also lacking. In this paper, we take a first step towards creating such a body of evidence by defining a general methodological evaluation framework that can simplify the design of case studies for comparing software testing tools and make the results more precise, reliable, and easy to compare. Using this framework, (1) software testing practitioners can more easily define case studies by instantiating the framework, (2) results can be better compared since all studies follow a similar design, (3) the gap in existing work on methodological evaluation frameworks is narrowed, and (4) a body of evidence is initiated. To validate the framework, we present successful applications of it to several case studies evaluating testing tools in an industrial environment with real objects and real subjects.
    QSIC 2012; 08/2012
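
The following is a small, self-contained Python sketch, added here for illustration only, of the idea summarized in the VISSOFT abstract above: model statement-level relations as a dependence graph, attach dynamic (run-time) execution counts, and group related statements into clusters. The statement ids, dependences, and counts are invented; the paper's actual visualization and meta-data handling are far richer than this.

from collections import defaultdict

# Static model: statement id -> statements it depends on (data/control deps).
dependences = {
    "s1": [], "s2": ["s1"], "s3": ["s2"],   # one related group of statements
    "s4": [], "s5": ["s4"],                 # a second, unrelated group
}

# Dynamic augmentation: how often each statement executed in a profiled run.
exec_counts = {"s1": 10, "s2": 10, "s3": 1, "s4": 500, "s5": 500}

def clusters(deps):
    """Group statements into clusters = connected components of the graph."""
    adj = defaultdict(set)
    for stmt, ds in deps.items():
        adj.setdefault(stmt, set())
        for d in ds:
            adj[stmt].add(d)
            adj[d].add(stmt)
    seen, groups = set(), []
    for stmt in list(adj):
        if stmt in seen:
            continue
        stack, comp = [stmt], []
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            comp.append(s)
            stack.extend(adj[s] - seen)
        groups.append(sorted(comp))
    return groups

# Text-only "visualization": one line per cluster, weighted by run-time activity.
for group in clusters(dependences):
    weight = sum(exec_counts.get(s, 0) for s in group)
    print(f"cluster {group}  activity={'#' * min(weight // 100 + 1, 10)}")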
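
Similarly, the following is a minimal, hypothetical sketch of the concolic-testing loop summarized in the Formal Aspects of Computing abstract above: run the program on a concrete input, record the branch decisions along the executed path, then negate one decision and search for a concrete input that drives execution down the unexplored branch. The toy read_sectors routine and the brute-force solve function are stand-ins invented here; a real concolic engine tracks symbolic path constraints over the program inputs and discharges them with an SMT solver.

from itertools import product

def read_sectors(n, start):
    """Toy stand-in for a multi-sector read: returns the branch predicates it
    evaluated (as functions of the inputs) paired with the outcomes taken."""
    trace = []
    c1 = lambda n, start: n <= 0                 # no sectors requested?
    trace.append((c1, c1(n, start)))
    if c1(n, start):
        return trace, "empty-read"
    c2 = lambda n, start: start + n > 16         # 16 sectors = pretend device size
    trace.append((c2, c2(n, start)))
    return trace, ("out-of-range" if c2(n, start) else "read-ok")

def solve(constraints):
    """Brute-force substitute for an SMT solver: find an input in a small domain
    satisfying every (predicate, required outcome) pair, or None."""
    for n, start in product(range(-2, 20), range(0, 20)):
        if all(pred(n, start) == want for pred, want in constraints):
            return (n, start)
    return None

def concolic(seed):
    """Concolic loop: execute concretely, record the path condition, then flip
    branch decisions one at a time to reach paths not yet covered."""
    worklist, seen_paths, tests = [seed], set(), []
    while worklist:
        inp = worklist.pop()
        trace, _ = read_sectors(*inp)
        path = tuple(taken for _, taken in trace)
        if path in seen_paths:
            continue
        seen_paths.add(path)
        tests.append(inp)
        for i in range(len(trace)):
            flipped = list(trace[:i]) + [(trace[i][0], not trace[i][1])]
            new_inp = solve(flipped)
            if new_inp is not None:
                worklist.append(new_inp)
    return tests

print("generated test inputs:", concolic((1, 0)))

Starting from the single seed input (1, 0), this loop generates inputs that exercise the empty-read, out-of-range, and successful-read paths of the toy routine, which illustrates the "full path coverage" goal the abstract describes.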
