Conference Paper

Instrumenting where it hurts: an automatic concurrent debugging technique.

DOI: 10.1145/1273463.1273469 Conference: Proceedings of the ACM/SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2007, London, UK, July 9-12, 2007
Source: DBLP

ABSTRACT: As concurrent and distributed applications are becoming more common and debugging such applications is very difficult, practical tools for automatic debugging of concurrent applications are in demand. In previous work, we applied automatic debugging to noise-based testing of concurrent programs. The idea of noise-based testing is to increase the probability of observing the bugs by adding, using instrumentation, timing "noise" to the execution of the program. The technique of finding a small subset of points that causes the bug to manifest can be used as an automatic debugging technique. Previously, we showed that Delta Debugging can be used to pinpoint the bug location on some small programs.

In the work reported in this paper, we create and evaluate two algorithms for automatically pinpointing program locations that are in the vicinity of the bugs on a number of industrial programs. We discovered that the Delta Debugging algorithms do not scale due to the non-monotonic nature of the concurrent debugging problem. Instead, we decided to try a machine learning feature selection algorithm. The idea is to consider each instrumentation point as a feature, execute the program many times with different instrumentations, and correlate the features (instrumentation points) with the executions in which the bug was revealed. This idea works very well when the bug is very hard to reveal using instrumentation, corresponding to the case when a very specific timing window is needed to reveal the bug. However, in the more common case, when the bugs are easy to find using instrumentation (i.e., instrumentation on many subsets finds the bugs), the correlation between the bug location and the instrumentation points ranked high by the feature selection algorithm is not high enough. We show that for these cases, the important value is not the absolute value of the evaluation of the feature but the derivative of that value along the program execution path.

As a number of groups expressed interest in this research, we built an open infrastructure for automatic debugging algorithms for concurrent applications, based on noise-injection-based concurrent testing using instrumentation. The infrastructure is described in this paper.

Acknowledgement: This work is partially supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD, project SHADOWS, contract IST-035157. The authors are solely responsible for the content of this paper. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.
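To make the feature-selection idea above concrete, here is a minimal Java sketch. All names (NoiseFeatureRanker, ProgramUnderTest, rank) are hypothetical illustrations, not the paper's infrastructure: each instrumentation point is treated as a boolean feature, the program is executed many times with noise enabled at a random subset of points, and each point is scored by how strongly enabling it correlates with runs in which the bug manifested.

import java.util.*;

public class NoiseFeatureRanker {

    /** Stand-in for the instrumented program; an assumption, not the paper's API. */
    interface ProgramUnderTest {
        /** Runs once with noise injected only at the enabled points; true if the bug manifested. */
        boolean run(Set<Integer> enabledPoints);
    }

    /** Scores each instrumentation point by failure-rate difference with vs. without it enabled. */
    public static double[] rank(ProgramUnderTest put, int numPoints, int trials, Random rnd) {
        double[] failWith = new double[numPoints];
        double[] failWithout = new double[numPoints];
        int[] onCount = new int[numPoints];
        int[] offCount = new int[numPoints];

        for (int t = 0; t < trials; t++) {
            // Enable noise at a random subset of instrumentation points.
            Set<Integer> enabled = new HashSet<>();
            for (int p = 0; p < numPoints; p++) {
                if (rnd.nextBoolean()) enabled.add(p);
            }
            boolean failed = put.run(enabled);
            for (int p = 0; p < numPoints; p++) {
                if (enabled.contains(p)) {
                    onCount[p]++;
                    if (failed) failWith[p]++;
                } else {
                    offCount[p]++;
                    if (failed) failWithout[p]++;
                }
            }
        }

        // A point correlated with the bug fails much more often when enabled.
        double[] score = new double[numPoints];
        for (int p = 0; p < numPoints; p++) {
            double with = onCount[p] > 0 ? failWith[p] / onCount[p] : 0.0;
            double without = offCount[p] > 0 ? failWithout[p] / offCount[p] : 0.0;
            score[p] = with - without;
        }
        return score;
    }
}

For the common case the abstract describes, where many subsets reveal the bug and absolute scores are uninformative, the ranking would instead use the change (derivative) of such a score between consecutive instrumentation points along the program execution path.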

Related publications:
  • ABSTRACT: Delta debugging has been proposed to isolate failure-inducing changes when regressions occur. In this work, we focus on evaluating delta debugging in practical settings from developers' perspectives. A collection of real regressions taken from medium-sized open-source programs is used in our evaluation. Towards automated debugging in software evolution, a tool based on delta debugging is created, and both its limitations and its costs are discussed. We have evaluated two variants of delta debugging. Unlike the successful isolations in Zeller's initial studies, the results in our experiments vary widely. Two thirds of the isolated changes in the studied programs provide direct or indirect clues for locating regression bugs; the remaining results are superfluous changes or even wrong isolations. In the case of wrong isolations, the isolated changes cause the same behaviour as the regression but are failure-irrelevant. Moreover, the hierarchical variant does not yield definite improvements in efficiency or accuracy. (A minimal sketch of the underlying ddmin algorithm appears after this list.)
    Journal of Systems and Software 10/2012; 85(10):2305-2317.
  • ABSTRACT: Noise injection disturbs the scheduling of program threads in order to increase the probability that more of their different legal interleavings occur during the testing process. However, there exist many types of noise heuristics with many parameters that are not easy to set so that noise injection is really efficient. In this paper, we propose a new way of using genetic algorithms to search for suitable types of noise heuristics and their parameters. This task is formalized as the test and noise configuration search problem, followed by a discussion of how to represent instances of this problem for genetic algorithms, which objective functions to use, and how to tune the parameters of the genetic algorithms when solving the problem. The proposed approach is evaluated on a set of benchmarks, showing that it provides significantly better results than the previously preferred random noise injection. (A minimal GA sketch appears after this list.)
    Proceedings of the 4th International Conference on Search Based Software Engineering; 09/2012.
  • ABSTRACT: The paper proposes a novel algorithm called AtomRace for dynamic detection of data races. Data races are detected as a special case of atomicity violations on atomic sections specially defined to span just particular read/write instructions and the transfer of control to and from them. A key ingredient allowing AtomRace to efficiently detect races on such short atomic sections is the careful injection of noise into the scheduling of the monitored programs. The approach is very simple, fully automated, avoids false alarms, and allows for lower overhead and better scalability than many other existing dynamic data race detection algorithms. We illustrate these facts by a set of experiments with a prototype implementation of AtomRace. Furthermore, AtomRace can also be applied to detect atomicity violations on more general atomic sections than those used for data race detection; these can be defined by the user or obtained by static analysis. (A minimal sketch of the race-window check appears after this list.)
    Proceedings of the 6th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging; 07/2008.
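The first related abstract builds on Zeller's ddmin algorithm. The following is a minimal generic Java sketch of ddmin under the usual assumptions (deterministic test outcome, distinct change objects); it is included for orientation and is not the evaluated tool:

import java.util.*;
import java.util.function.Predicate;

public class DeltaDebug {

    /** Returns a 1-minimal subset of 'changes' for which 'fails' still holds. */
    public static <T> List<T> ddmin(List<T> changes, Predicate<List<T>> fails) {
        List<T> current = new ArrayList<>(changes);
        int n = 2;  // current partition granularity
        while (current.size() >= 2) {
            List<List<T>> subsets = split(current, n);
            boolean reduced = false;

            // Step 1: does some chunk alone still reproduce the failure?
            for (List<T> subset : subsets) {
                if (fails.test(subset)) {
                    current = new ArrayList<>(subset);
                    n = 2;
                    reduced = true;
                    break;
                }
            }

            // Step 2: does removing some chunk keep the failure?
            if (!reduced) {
                for (List<T> subset : subsets) {
                    List<T> complement = new ArrayList<>(current);
                    complement.removeAll(subset);
                    if (!complement.isEmpty() && fails.test(complement)) {
                        current = complement;
                        n = Math.max(n - 1, 2);
                        reduced = true;
                        break;
                    }
                }
            }

            // Step 3: otherwise refine the partition, or stop if already minimal.
            if (!reduced) {
                if (n >= current.size()) break;
                n = Math.min(current.size(), n * 2);
            }
        }
        return current;
    }

    /** Splits 'list' into n roughly equal consecutive chunks. */
    private static <T> List<List<T>> split(List<T> list, int n) {
        List<List<T>> parts = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < n; i++) {
            int end = start + (list.size() - start) / (n - i);
            parts.add(new ArrayList<>(list.subList(start, end)));
            start = end;
        }
        return parts;
    }
}

Here fails.test(subset) would apply the given subset of changes and re-run the regression test. The non-monotonicity mentioned in the main abstract is exactly what breaks the assumptions behind this reduction for concurrent programs.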
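For the second related abstract, a minimal sketch of a genetic algorithm over noise configurations. The Config encoding (heuristic type, injection frequency, noise strength) and all names are assumptions for illustration, not the paper's actual representation:

import java.util.*;

public class NoiseConfigGA {

    /** Candidate noise configuration; fields are an assumed, simplified encoding. */
    static class Config {
        int heuristic;   // 0 = yield, 1 = sleep, 2 = busy-wait
        int frequency;   // per-mille probability of injecting noise at a location
        int strength;    // e.g. sleep length in milliseconds
    }

    interface Evaluator {
        double fitness(Config c);  // e.g. fraction of N test runs in which the bug manifested
    }

    public static Config evolve(Evaluator eval, int popSize, int generations, Random rnd) {
        List<Config> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) pop.add(randomConfig(rnd));

        Config best = pop.get(0);
        double bestFit = eval.fitness(best);

        for (int g = 0; g < generations; g++) {
            List<Config> next = new ArrayList<>();
            while (next.size() < popSize) {
                // Tournament selection, crossover, then mutation.
                Config child = crossover(tournament(pop, eval, rnd),
                                         tournament(pop, eval, rnd), rnd);
                next.add(mutate(child, rnd));
            }
            pop = next;
            for (Config c : pop) {
                double f = eval.fitness(c);
                if (f > bestFit) { bestFit = f; best = c; }
            }
        }
        return best;
    }

    private static Config randomConfig(Random rnd) {
        Config c = new Config();
        c.heuristic = rnd.nextInt(3);
        c.frequency = rnd.nextInt(1001);
        c.strength = rnd.nextInt(100);
        return c;
    }

    private static Config tournament(List<Config> pop, Evaluator eval, Random rnd) {
        Config a = pop.get(rnd.nextInt(pop.size()));
        Config b = pop.get(rnd.nextInt(pop.size()));
        return eval.fitness(a) >= eval.fitness(b) ? a : b;
    }

    private static Config crossover(Config a, Config b, Random rnd) {
        Config c = new Config();
        c.heuristic = rnd.nextBoolean() ? a.heuristic : b.heuristic;
        c.frequency = rnd.nextBoolean() ? a.frequency : b.frequency;
        c.strength  = rnd.nextBoolean() ? a.strength  : b.strength;
        return c;
    }

    private static Config mutate(Config c, Random rnd) {
        if (rnd.nextInt(10) == 0) c.heuristic = rnd.nextInt(3);
        if (rnd.nextInt(10) == 0) c.frequency = rnd.nextInt(1001);
        if (rnd.nextInt(10) == 0) c.strength = rnd.nextInt(100);
        return c;
    }
}

In practice, fitness evaluation dominates the cost, since each evaluation means many noisy test runs; a real implementation would cache fitness per configuration rather than recompute it as this sketch does.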
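For the third related abstract, a minimal sketch of the race-window idea as described there: each monitored read/write is wrapped in a tiny "atomic section", a conflicting access by another thread inside that window is reported as a race, and injected noise stretches the window. Method and class names are assumptions, not AtomRace's actual API:

import java.util.concurrent.ConcurrentHashMap;

public class AtomRaceSketch {

    static final class Access {
        final Thread thread;
        final boolean write;
        Access(Thread thread, boolean write) { this.thread = thread; this.write = write; }
    }

    // Variables with a currently "open" atomic section around a single access.
    private static final ConcurrentHashMap<Object, Access> open = new ConcurrentHashMap<>();

    /** Instrumentation calls this just before a monitored read or write of 'var'. */
    public static Access beforeAccess(Object var, boolean write) {
        Access mine = new Access(Thread.currentThread(), write);
        Access other = open.putIfAbsent(var, mine);
        if (other != null && other.thread != mine.thread && (other.write || write)) {
            // Another thread is inside its own section on the same variable and
            // at least one of the two accesses is a write: an actual data race.
            System.err.println("Data race on " + var + ": " + other.thread.getName()
                    + " vs " + mine.thread.getName());
        }
        // Noise injection: stretch the section so overlaps become more likely.
        Thread.yield();
        return mine;
    }

    /** Instrumentation calls this just after the access completes. */
    public static void afterAccess(Object var, Access token) {
        open.remove(var, token);  // removes the entry only if this thread registered it
    }
}

Because a race is reported only when two accesses actually overlap in time, the check produces no false alarms, matching the abstract's claim; the injected noise is what makes such overlaps likely enough to observe.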
