Conference Paper

Automatic Debugging of Concurrent Programs through Active Sampling of Low Dimensional Random Projections

DOI: 10.1109/ASE.2008.41. In Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2008), 15-19 September 2008, L'Aquila, Italy
Source: DBLP

ABSTRACT Concurrent computer programs are fast becoming prevalent in many critical applications. Unfortunately, these programs are especially difficult to test and debug. Recently, it has been suggested that injecting random timing noise at many points within a program can assist in eliciting bugs within the program. Once the bug has been elicited, it is necessary to identify a minimal set of points that indicate the source of the bug to the programmer. In this paper, we pose this problem as an active feature selection problem. We propose an algorithm, called the iterative group sampling algorithm, that iteratively samples a lower-dimensional projection of the program space and identifies candidate relevant points. We analyze the convergence properties of this algorithm. We test the proposed algorithm on several real-world programs and show its superior performance. Finally, we show the algorithm's performance on a large concurrent program.
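The abstract's core idea can be sketched as follows. This is a minimal, hypothetical Python reconstruction based only on the abstract: repeatedly sample a small random subset (a low-dimensional projection) of candidate noise-injection points, run the program with noise only at those points, and retain points that co-occur with bug manifestations. The function names, parameters, and scoring rule are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def run_with_noise(active_points, trial_fn):
    """Hypothetical harness: run the program with timing noise injected
    only at `active_points`; return True if the bug manifested."""
    return trial_fn(active_points)

def iterative_group_sampling(points, trial_fn, group_size=4, trials=20, seed=0):
    """Sketch of iterative group sampling: sample small random groups of
    noise points, score points by how often their group elicits the bug,
    and iteratively discard points that never co-occur with a failure."""
    rng = random.Random(seed)
    candidates = set(points)
    while len(candidates) > group_size:
        scores = {p: 0 for p in candidates}
        for _ in range(trials):
            group = rng.sample(sorted(candidates), group_size)
            if run_with_noise(set(group), trial_fn):
                for p in group:
                    scores[p] += 1
        best = max(scores.values())
        if best == 0:
            break  # no sampled group triggered the bug; stop narrowing
        survivors = {p for p, s in scores.items() if s > 0}
        if survivors == candidates:
            # no point was ruled out this round: keep only the
            # highest-scoring half to guarantee progress
            ranked = sorted(candidates, key=lambda p: -scores[p])
            survivors = set(ranked[: max(group_size, len(ranked) // 2)])
        candidates = survivors
    return candidates

# Toy usage: the "program" fails whenever noise is active at point 3,
# so the search should retain point 3 among the final candidates.
result = iterative_group_sampling(list(range(10)), lambda active: 3 in active)
```

A point that causes the bug attains the maximum score in every round, so it is never discarded; irrelevant points are pruned once they stop co-occurring with failures.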

  • ABSTRACT: A noise maker is a tool that seeds a concurrent program with conditional synchronization primitives, such as yield(), for the purpose of increasing the likelihood that a bug manifests itself. We introduce a novel fault model that classifies locations as "good", "neutral", or "bad", based on the effect of a thread switch at the location. Using the model, we explore the conditions under which an efficient search for real-life concurrent bugs can be conducted. We accordingly justify the use of probabilistic algorithms for this search and gain a deeper insight into the work done so far on noise-making. We validate our approach by experimenting with a set of programs taken from publicly available multi-threaded benchmarks. Our empirical evidence demonstrates that real-life behavior is similar to that derived from the model.
    Leveraging Applications of Formal Methods, Second International Symposium, ISoLA 2006, Paphos, Cyprus, 15-19 November 2006; 01/2006
  • ABSTRACT: Summary form only given. Testing multithreaded, concurrent, or distributed programs is acknowledged to be a very difficult task. We decided to create a benchmark of programs containing documented multithreaded bugs that can be used in the development of testing tools for the domain. To augment the benchmark with a sizable number of programs, we assigned students in a software testing class the task of writing buggy multithreaded Java programs and documenting the bugs. This paper documents this experiment. We explain the task that was given to the students, go over the bugs they put into the programs, both intentionally and unintentionally, and present our findings. We believe this part of the benchmark reflects the typical programming practices, including bugs, of novice programmers. In grading the assignments, we used our own technologies to look for undocumented bugs. In addition to finding many undocumented bugs, which was not surprising given that writing correct multithreaded code is difficult, we also found a number of bugs in our tools. We take this as a good indication of the benchmark's expected utility for creators of multithreaded testing tools.
    18th International Parallel and Distributed Processing Symposium (IPDPS 2004), CD-ROM / Abstracts Proceedings, 26-30 April 2004, Santa Fe, New Mexico, USA; 01/2004
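The noise-maker technique described in the first related entry above can be illustrated with a minimal Python sketch. All names here are illustrative assumptions (the actual tools instrument Java with yield(); Python's time.sleep(0) stands in as a scheduler hint): a probabilistic noise primitive is inserted at instrumented locations, and a thread switch at a "bad" location, between the read and the write of a shared counter, can lose an update.

```python
import random
import threading
import time

def maybe_yield(location, noise_prob=0.3):
    """Hypothetical noise primitive: at an instrumented `location`,
    occasionally hint the scheduler to switch threads, widening the
    space of interleavings that get exercised."""
    if random.random() < noise_prob:
        time.sleep(0)  # sleep(0) yields the remainder of the time slice

counter = 0

def increment():
    global counter
    for _ in range(1000):
        maybe_yield("load-counter")   # a "bad" location: a switch here
        tmp = counter                 # can make this read stale
        maybe_yield("store-counter")  # or delay the write below
        counter = tmp + 1

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With noise enabled, `counter` frequently ends below the race-free
# total of 2000, exposing the lost-update bug.
```

This mirrors the entry's fault model: noise at a "bad" location raises the bug's manifestation probability, while noise at "neutral" locations only costs time, which is what makes a probabilistic search over locations worthwhile.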
