Conference Paper

Instrumenting where it hurts: an automatic concurrent debugging technique.

DOI: 10.1145/1273463.1273469 Conference: Proceedings of the ACM/SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2007, London, UK, July 9-12, 2007
Source: DBLP

ABSTRACT: As concurrent and distributed applications are becoming more common and debugging such applications is very difficult, practical tools for automatic debugging of concurrent applications are in demand. In previous work, we applied automatic debugging to noise-based testing of concurrent programs. The idea of noise-based testing is to increase the probability of observing the bugs by adding, using instrumentation, timing "noise" to the execution of the program. The technique of finding a small subset of points that causes the bug to manifest can be used as an automatic debugging technique. Previously, we showed that Delta Debugging can be used to pinpoint the bug location on some small programs.

In the work reported in this paper, we create and evaluate two algorithms for automatically pinpointing program locations that are in the vicinity of the bugs on a number of industrial programs. We discovered that the Delta Debugging algorithms do not scale, due to the non-monotonic nature of the concurrent debugging problem. Instead, we decided to try a machine learning feature selection algorithm. The idea is to consider each instrumentation point as a feature, execute the program many times with different instrumentations, and correlate the features (instrumentation points) with the executions in which the bug was revealed. This idea works very well when the bug is very hard to reveal using instrumentation, corresponding to the case when a very specific timing window is needed to reveal the bug. However, in the more common case, when the bugs are easy to find using instrumentation (i.e., instrumentation on many subsets finds the bugs), the correlation between the bug location and the instrumentation points ranked high by the feature selection algorithm is not high enough. We show that for these cases, the important value is not the absolute value of the evaluation of the feature but the derivative of that value along the program execution path.

As a number of groups expressed interest in this research, we built an open infrastructure for automatic debugging algorithms for concurrent applications, based on noise-injection-based concurrent testing using instrumentation. The infrastructure is described in this paper.

Acknowledgment: This work is partially supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD (project SHADOWS, contract IST-035157). The authors are solely responsible for the content of this paper. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.
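To make the approach concrete, here is a minimal, self-contained Python simulation of the idea. It is an illustrative sketch, not the paper's actual infrastructure (which instruments real industrial programs): the toy racy deposit routine, the instrumentation-point names, the sampling rate, and the failure-rate score are all assumptions made for the example.

```python
import random
import threading
import time
from collections import defaultdict

# Instrumentation points of a toy racy program (names are illustrative).
POINTS = ["read_balance", "compute", "write_balance", "log"]
enabled = set()  # the currently enabled instrumentation subset

def noise(point):
    # Inject timing "noise": a short random sleep at enabled points.
    if point in enabled:
        time.sleep(random.uniform(0, 0.001))

balance = 0

def deposit(amount):
    global balance
    noise("read_balance")
    b = balance              # racy read
    noise("compute")
    b += amount
    noise("write_balance")
    balance = b              # racy write: a lost update is possible
    noise("log")

def run_once(subset):
    """Run two concurrent deposits with `subset` enabled; True iff the bug manifested."""
    global balance, enabled
    balance, enabled = 0, subset
    threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance != 2      # a lost update reveals the race

# Feature selection: each point is a feature; sample many random subsets and
# score a point by the failure rate of the runs in which it was enabled.
fails = defaultdict(int)
runs = defaultdict(int)
for _ in range(300):
    subset = {p for p in POINTS if random.random() < 0.5}
    failed = run_once(subset)
    for p in subset:
        runs[p] += 1
        fails[p] += failed

scores = {p: fails[p] / runs[p] if runs[p] else 0.0 for p in POINTS}

# When many subsets reveal the bug, absolute scores saturate; the paper's
# observation is that the derivative of the score along the execution path
# (here: the difference between consecutive points) is more informative.
deriv = {POINTS[i]: scores[POINTS[i]] - scores[POINTS[i - 1]]
         for i in range(1, len(POINTS))}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
print(deriv)
```

Enabling noise at read_balance or compute widens the window between the racy read and the racy write, so those points tend to score highest; the last lines illustrate the derivative observation by differencing the scores of consecutive points in program order.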

  • ABSTRACT: Partial orders are used extensively for modeling and analyzing concurrent computations. In this paper, we define two properties of partially ordered sets, width-extensibility and interleaving-consistency, and show that a partial order can be a valid state-based model: (1) of some synchronous concurrent computation iff it is width-extensible, and (2) of some asynchronous concurrent computation iff it is width-extensible and interleaving-consistent. We also show a duality between the event-based and state-based models of concurrent computations, and give algorithms to convert models between the two domains. When applied to the problem of checkpointing, our theory leads to a better understanding of some existing results and algorithms in the field. It also leads to efficient detection algorithms for predicates whose evaluation requires knowledge of states from all the processes in the system. (A consistent-cut enumeration sketch appears after this list.)
  • ABSTRACT: Delta debugging has been proposed to isolate failure-inducing changes when regressions occur. In this work, we focus on evaluating delta debugging in practical settings from developers' perspectives. A collection of real regressions taken from medium-sized open source programs is used in our evaluation. Towards automated debugging in software evolution, a tool based on delta debugging is created, and both its limitations and costs are discussed. We have evaluated two variants of delta debugging. In contrast to the successful isolations in Zeller's initial studies, the results in our experiments vary wildly. Two thirds of the isolated changes in the studied programs provide direct or indirect clues for locating regression bugs. The remaining results are superfluous changes or even wrong isolations; in the case of wrong isolations, the isolated changes cause the same behaviour as the regression but are failure-irrelevant. Moreover, the hierarchical variant does not yield definite improvements in efficiency or accuracy. (A minimal ddmin sketch appears after this list.)
    Journal of Systems and Software, 85(10):2305-2317, October 2012. DOI:10.1016/j.jss.2011.10.016
  • ABSTRACT: This paper presents a novel methodology for localizing faults in code as it evolves. Our insight is that the essence of failure-inducing edits made by the developer can be captured using mechanical program transformations (e.g., mutation changes). Based on this insight, we present the FIFL framework, which uses both the spectrum information of edits (obtained using the existing FaultTracer approach) and the potential impacts of edits (simulated by mutation changes) to achieve more accurate fault localization. We evaluate FIFL on real-world repositories of nine Java projects ranging from 5.7KLoC to 88.8KLoC. The experimental results show that FIFL significantly outperforms the state-of-the-art FaultTracer technique for localizing failure-inducing program edits. For example, all 19 FIFL strategies that use both the spectrum information and the simulated impact information for each edit outperform the existing FaultTracer approach statistically at the 0.01 significance level. In addition, FIFL with its default settings outperforms FaultTracer by 2.33% to 86.26% on 16 of the 26 studied version pairs, and is inferior to FaultTracer on only one version pair. (A spectrum-based ranking sketch appears after this list.)
    ACM SIGPLAN Notices, 48(10), October 2013. DOI:10.1145/2509136.2509551
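As minimal background for the partial-order entry above: the state-based models it studies are built from consistent cuts (down-closed subsets) of a partial order of events. The sketch below enumerates them for a toy two-process example; it does not implement the paper's width-extensibility or interleaving-consistency tests, and all names are illustrative.

```python
def consistent_cuts(events, prec):
    """Enumerate the consistent cuts (down-closed subsets) of a finite
    partial order. `events` must be listed in a linear extension of the
    order; `prec(a, b)` is True iff event a must happen before event b."""
    cuts = [frozenset()]
    for x in events:
        # A cut can be extended with x only if all of x's predecessors are in it.
        cuts += [c | {x} for c in cuts
                 if all(a in c for a in events if a != x and prec(a, x))]
    return cuts

# Two processes with events a1 < a2 and b1 < b2 and no cross-process ordering.
before = {("a1", "a2"), ("b1", "b2")}
prec = lambda a, b: (a, b) in before
print(len(consistent_cuts(["a1", "a2", "b1", "b2"], prec)))  # -> 9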
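The delta debugging entry above evaluates variants of Zeller's ddmin; for reference, here is a minimal sketch of the classic ddmin minimization loop, not the specific variants that paper studies. The `test` callback is a hypothetical stand-in for "apply these changes and check whether the regression reproduces".

```python
def ddmin(changes, test):
    """Zeller's ddmin: shrink a failure-inducing change set.
    `test(subset)` must return True iff the failure reproduces when only
    `subset` is applied. Assumes test(changes) is True and test([]) is False."""
    n = 2  # current granularity
    while len(changes) >= 2:
        chunk = len(changes) // n
        subsets = [changes[i:i + chunk] for i in range(0, len(changes), chunk)]
        reduced = False
        for subset in subsets:
            complement = [c for c in changes if c not in subset]
            if test(subset):            # failure reproduces on a small subset
                changes, n, reduced = subset, 2, True
                break
            if test(complement):        # failure reproduces on a complement
                changes, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(changes):       # finest granularity reached
                break
            n = min(len(changes), n * 2)
    return changes

# Toy check: the failure needs changes 3 and 5 together.
print(ddmin(list(range(8)), lambda s: {3, 5} <= set(s)))  # -> [3, 5]
```

Note the connection to the main paper: the same minimization idea applied to instrumentation points is what fails to scale there, because removing noise from one point can make a previously failing subset pass, violating ddmin's monotonicity assumption.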
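FIFL's actual strategies are not reproduced here; as an illustration of the spectrum half of the idea, the sketch below ranks edits with the standard Ochiai suspiciousness formula applied to edits rather than statements. The data layout and names are assumptions for the example.

```python
import math

def rank_edits(edit_spectra, failed):
    """Spectrum-based ranking of program edits using the Ochiai formula.
    edit_spectra[i] is the set of edits exercised by test i;
    failed[i] is True iff test i failed."""
    edits = set().union(*edit_spectra)
    total_failed = sum(failed)
    scores = {}
    for e in edits:
        ef = sum(1 for s, f in zip(edit_spectra, failed) if f and e in s)      # failing tests covering e
        ep = sum(1 for s, f in zip(edit_spectra, failed) if not f and e in s)  # passing tests covering e
        denom = math.sqrt(total_failed * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy spectra: edit "e2" is covered by both failing tests and no passing test.
spectra = [{"e1", "e2"}, {"e2", "e3"}, {"e1", "e3"}]
print(rank_edits(spectra, [True, True, False]))  # "e2" ranks first
```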
