Conference Paper

Instrumenting where it hurts: an automatic concurrent debugging technique.

DOI: 10.1145/1273463.1273469
Conference: Proceedings of the ACM/SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2007, London, UK, July 9-12, 2007
Source: DBLP

ABSTRACT As concurrent and distributed applications are becoming more common and debugging such applications is very difficult, practical tools for automatic debugging of concurrent applications are in demand. In previous work, we applied automatic debugging to noise-based testing of concurrent programs. The idea of noise-based testing is to increase the probability of observing the bugs by adding, using instrumentation, timing "noise" to the execution of the program. The technique of finding a small subset of points that causes the bug to manifest can be used as an automatic debugging technique. Previously, we showed that Delta Debugging can be used to pinpoint the bug location on some small programs.

In the work reported in this paper, we create and evaluate two algorithms for automatically pinpointing program locations that are in the vicinity of the bugs on a number of industrial programs. We discovered that the Delta Debugging algorithms do not scale due to the non-monotonic nature of the concurrent debugging problem. Instead, we decided to try a machine learning feature selection algorithm. The idea is to consider each instrumentation point as a feature, execute the program many times with different instrumentations, and correlate the features (instrumentation points) with the executions in which the bug was revealed. This idea works very well when the bug is very hard to reveal using instrumentation, corresponding to the case when a very specific timing window is needed to reveal the bug. However, in the more common case, when the bugs are easy to find using instrumentation (i.e., instrumentation on many subsets finds the bugs), the correlation between the bug location and the instrumentation points ranked high by the feature selection algorithm is not high enough. We show that for these cases, the important value is not the absolute value of the evaluation of the feature but the derivative of that value along the program execution path.

As a number of groups expressed interest in this research, we built an open infrastructure for automatic debugging algorithms for concurrent applications, based on noise-injection-based concurrent testing using instrumentation. The infrastructure is described in this paper.

Acknowledgment: This work is partially supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD, project SHADOWS, contract IST-035157. The authors are solely responsible for the content of this paper. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.
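To make the technique described in the abstract concrete, the following is a minimal sketch, in Java, of its two ingredients: injecting timing noise only at the instrumentation points enabled for a given run, and ranking points by how strongly their presence correlates with runs in which the bug manifested. The class and method names are illustrative assumptions and do not reflect the API of the infrastructure described in the paper.

```java
import java.util.*;
import java.util.concurrent.ThreadLocalRandom;

/** Minimal sketch (not the paper's infrastructure): noise injection at a
 *  chosen subset of instrumentation points, plus a simple correlation-style
 *  score for each point computed over many runs. */
public class NoiseDebugSketch {

    /** Called from instrumented program locations; injects timing noise
     *  only at the points enabled for the current run. */
    public static void noise(int pointId, Set<Integer> enabledPoints) {
        if (enabledPoints.contains(pointId)) {
            try {
                Thread.sleep(ThreadLocalRandom.current().nextInt(5)); // small random delay
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            Thread.yield();
        }
    }

    /** One test run: the subset of points that carried noise and whether the bug manifested. */
    public record Run(Set<Integer> enabledPoints, boolean failed) {}

    /** Rank each instrumentation point by how much more often runs fail when
     *  noise is injected at that point than when it is not. */
    public static Map<Integer, Double> scorePoints(List<Run> runs, int numPoints) {
        Map<Integer, Double> scores = new HashMap<>();
        for (int p = 0; p < numPoints; p++) {
            int with = 0, failWith = 0, without = 0, failWithout = 0;
            for (Run r : runs) {
                if (r.enabledPoints().contains(p)) {
                    with++;
                    if (r.failed()) failWith++;
                } else {
                    without++;
                    if (r.failed()) failWithout++;
                }
            }
            double failRateWith = with == 0 ? 0.0 : (double) failWith / with;
            double failRateWithout = without == 0 ? 0.0 : (double) failWithout / without;
            scores.put(p, failRateWith - failRateWithout); // high score: presence correlates with failure
        }
        return scores;
    }
}
```

The refinement the abstract mentions for easy-to-trigger bugs (using the derivative of the feature evaluation along the program execution path rather than its absolute value) would be layered on top of this kind of basic per-point score.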

Related publications:

  • ABSTRACT: The general subject of my interest is finding concurrency bugs in complex software systems written in object-oriented programming languages. Concurrent, or multi-threaded, programming has become very popular in recent years. However, as concurrent programming is far more demanding than sequential programming, its increased use leads to a significantly increased number of bugs that appear in commercial software due to errors in synchronization. This stimulates more intensive research in the field of detecting such bugs. Despite the persistent effort of a wide community of researchers, a satisfactory solution to this problem for common programming languages like Java does not exist. I have focused on a combination of three existing approaches: (i) dynamic analysis, which is in certain cases able to precisely detect bugs along an execution path, (ii) static analysis, which is able to collect various information about the tested application, and (iii) systematic testing, which helps to examine as many different execution paths as possible. Moreover, I plan to incorporate artificial intelligence algorithms into the process of testing complex concurrent software, and for bugs that are hard to detect I consider the development of self-healing methods that can suppress the manifestation of detected bugs during execution.
    03/2010
  • ABSTRACT: Noise injection disturbs the scheduling of program threads in order to increase the probability that more of their different legal interleavings occur during the testing process. However, there exist many different types of noise heuristics with many different parameters, and it is not easy to set them so that noise injection is really efficient. In this paper, we propose a new way of using genetic algorithms to search for suitable types of noise heuristics and their parameters. This task is formalized as the test and noise configuration search problem, followed by a discussion of how to represent instances of this problem for genetic algorithms, which objective functions to use, and how to tune the parameters of the genetic algorithms when solving the problem. The proposed approach is evaluated on a set of benchmarks, showing that it provides significantly better results than the so far preferred random noise injection.
    Proceedings of the 4th International Conference on Search Based Software Engineering; 09/2012 (a sketch of one possible noise-configuration encoding follows this list)
  • ABSTRACT: Delta debugging has been proposed to isolate failure-inducing changes when regressions occur. In this work, we focus on evaluating delta debugging in practical settings from developers' perspectives. A collection of real regressions taken from medium-sized open-source programs is used in our evaluation. Towards automated debugging in software evolution, a tool based on delta debugging is created, and both its limitations and costs are discussed. We have evaluated two variants of delta debugging. In contrast to the successful isolation in Zeller's initial studies, the results in our experiments vary wildly. Two thirds of the isolated changes in the studied programs provide direct or indirect clues for locating regression bugs. The remaining results are superfluous changes or even wrong isolations; in the case of wrong isolations, the isolated changes cause the same behaviour as the regression but are failure-irrelevant. Moreover, the hierarchical variant does not yield definite improvements in terms of efficiency and accuracy.
    Journal of Systems and Software 10/2012; 85(10):2305-2317 (a sketch of the ddmin algorithm follows this list)
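The second entry above searches for noise heuristics and their parameters with a genetic algorithm. Below is a sketch of one way such a configuration might be encoded as a GA individual; the heuristic names, parameter ranges, and the mutation operator are assumptions made for illustration and are not taken from that paper.

```java
import java.util.concurrent.ThreadLocalRandom;

/** Sketch of a GA individual for the test-and-noise configuration search
 *  problem. Heuristic names, parameter ranges, and the mutation operator are
 *  illustrative assumptions, not taken from the cited paper. */
public class NoiseConfigIndividual {
    enum Heuristic { SLEEP, YIELD, WAIT, BUSY_WAIT }

    Heuristic heuristic;   // which noise heuristic the injector applies
    int frequency;         // 0..1000: how often (per mille) noise is injected at a reached point
    int strengthMs;        // delay length for time-based heuristics, in milliseconds

    /** Random individual for the initial population. */
    static NoiseConfigIndividual random() {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        NoiseConfigIndividual ind = new NoiseConfigIndividual();
        ind.heuristic = Heuristic.values()[rnd.nextInt(Heuristic.values().length)];
        ind.frequency = rnd.nextInt(1001);
        ind.strengthMs = rnd.nextInt(1, 100);
        return ind;
    }

    /** Mutation: re-sample one gene at random. */
    void mutate() {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        switch (rnd.nextInt(3)) {
            case 0 -> heuristic = Heuristic.values()[rnd.nextInt(Heuristic.values().length)];
            case 1 -> frequency = rnd.nextInt(1001);
            default -> strengthMs = rnd.nextInt(1, 100);
        }
    }
}
```

Fitness for such an individual would be obtained by running the instrumented test suite under the encoded configuration, for example measuring interleaving coverage and the number of concurrency errors observed; the cited paper discusses the choice of objective functions.

The third entry, like the original paper above, builds on Zeller's Delta Debugging. For reference, here is a compact, simplified sketch of the ddmin reduction loop; the element type could stand for code changes (the regression setting) or for noise-injection points (the setting of the paper above). The splitting and reduction policy is simplified relative to the published algorithm and is not the tool evaluated in either work.

```java
import java.util.*;
import java.util.function.Predicate;

/** Compact, simplified sketch of the ddmin reduction loop from delta debugging.
 *  The predicate returns true when running with the given candidate set
 *  reproduces the failure. Illustrative only. */
public class DeltaDebugSketch {

    public static <T> List<T> ddmin(List<T> failing, Predicate<List<T>> fails) {
        List<T> current = new ArrayList<>(failing);
        int n = 2;                                            // current granularity
        while (current.size() >= 2) {
            List<List<T>> chunks = split(current, n);
            boolean reduced = false;
            for (List<T> chunk : chunks) {
                List<T> complement = new ArrayList<>(current);
                complement.removeAll(chunk);
                if (fails.test(chunk)) {                      // failure reproduced by the chunk alone
                    current = chunk;
                    n = 2;
                    reduced = true;
                    break;
                } else if (n > 2 && fails.test(complement)) { // failure persists without the chunk
                    current = complement;
                    n = Math.max(n - 1, 2);
                    reduced = true;
                    break;
                }
            }
            if (!reduced) {
                if (n >= current.size()) break;               // cannot split any finer: result is minimal
                n = Math.min(2 * n, current.size());
            }
        }
        return current;
    }

    /** Split a list into roughly equal consecutive chunks. */
    private static <T> List<List<T>> split(List<T> list, int parts) {
        List<List<T>> result = new ArrayList<>();
        int chunk = (int) Math.ceil(list.size() / (double) parts);
        for (int i = 0; i < list.size(); i += chunk) {
            result.add(new ArrayList<>(list.subList(i, Math.min(i + chunk, list.size()))));
        }
        return result;
    }
}
```

In the noise-based setting, the predicate would re-run the program with noise injected at exactly the candidate subset of points; the abstract above reports that this does not scale for concurrent debugging because the problem is non-monotonic (intuitively, enlarging a failure-inducing set of points can make the bug stop manifesting).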
