Conference Paper

Automatic Debugging of Concurrent Programs through Active Sampling of Low Dimensional Random Projections

DOI: 10.1109/ASE.2008.41 Conference: 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2008), 15-19 September 2008, L'Aquila, Italy
Source: DBLP

ABSTRACT Concurrent computer programs are fast becoming prevalent in many critical applications. Unfortunately, these programs are especially difficult to test and debug. Recently, it has been suggested that injecting random timing noise at many points within a program can assist in eliciting bugs. Once a bug has been elicited, it is necessary to identify a minimal set of points that indicates the source of the bug to the programmer. In this paper, we pose this problem as an active feature selection problem. We propose the iterative group sampling algorithm, which iteratively samples a lower-dimensional projection of the program space and identifies candidate relevant points. We analyze the convergence properties of this algorithm and test it on several real-world programs, showing its superior performance. Finally, we demonstrate the algorithm's performance on a large concurrent program.
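The search the abstract describes can be illustrated with a toy sketch. This is an illustrative reconstruction under stated assumptions, not the authors' algorithm: the oracle `trigger`, the halving schedule, and all names are hypothetical. The idea sketched: repeatedly sample random groups of noise-injection points; whenever enabling noise at a group manifests the bug, the candidate set shrinks to that group; a final elimination pass drops points that are individually unnecessary.

```python
import random

def find_relevant_points(n_points, trigger, rounds=200, seed=0):
    """Toy group-sampling search for bug-relevant noise-injection points.

    trigger(subset) is a stand-in oracle: it returns True when enabling
    noise at exactly `subset` makes the concurrency bug manifest.
    """
    rng = random.Random(seed)
    candidates = set(range(n_points))
    for _ in range(rounds):
        if len(candidates) <= 1:
            break
        # Sample a random "projection": half of the current candidate set.
        group = set(rng.sample(sorted(candidates), len(candidates) // 2))
        if trigger(group):
            candidates = group               # relevant points all lie in the group
        elif trigger(candidates - group):
            candidates = candidates - group  # ...or all lie in the complement
    # Elimination pass: drop any point whose removal still triggers the bug.
    for p in sorted(candidates):
        if len(candidates) > 1 and trigger(candidates - {p}):
            candidates.discard(p)
    return candidates

# Simulated program with 30 noise points; the bug needs noise at points 3 and 17.
relevant = {3, 17}
found = find_relevant_points(30, lambda s: relevant <= s)
print(sorted(found))  # → [3, 17]
```

The random halving quickly shrinks the candidate set without probing points one at a time; the final linear pass then guarantees a minimal set under this simplified deterministic oracle.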

  • ABSTRACT: A noise maker is a tool that seeds a concurrent program with conditional synchronization primitives, such as yield(), for the purpose of increasing the likelihood that a bug manifests itself. We introduce a novel fault model that classifies locations as "good," "neutral," or "bad," based on the effect of a thread switch at the location. Using the model, we explore the conditions under which an efficient search for real-life concurrent bugs can be conducted. We accordingly justify the use of probabilistic algorithms for this search and gain deeper insight into the work done so far on noise-making. We validate our approach by experimenting with a set of programs taken from publicly available multi-threaded benchmarks. Our empirical evidence demonstrates that real-life behavior is similar to that derived from the model.
    Leveraging Applications of Formal Methods, Second International Symposium, ISoLA 2006, Paphos, Cyprus, 15-19 November 2006; 01/2006
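The yield()-style seeding described in the abstract above can be sketched in Python. This is a hypothetical illustration, not the tool from the paper: the `NoisePoint` class and the racy counter are assumptions. A conditional sleep at a seeded location widens the window between a read and a write, making a lost-update bug far more likely to manifest:

```python
import threading
import time

class NoisePoint:
    """A seeded noise-injection location: when enabled, force a scheduling window."""
    def __init__(self, enabled):
        self.enabled = enabled

    def visit(self):
        if self.enabled:
            time.sleep(0.001)  # open a thread-switch window, like a yield()

def run_counter(noise_point, iterations=100):
    """Two threads perform unsynchronized read-modify-write updates on a counter."""
    shared = [0]

    def worker():
        for _ in range(iterations):
            tmp = shared[0]
            noise_point.visit()   # seeded location between the read and the write
            shared[0] = tmp + 1

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared[0]

quiet = run_counter(NoisePoint(enabled=False))
noisy = run_counter(NoisePoint(enabled=True))
print(quiet, noisy)  # with noise enabled, lost updates (noisy < 200) are near-certain
```

Without noise the interleavings that lose updates are rare; with noise at the seeded location the bug manifests on essentially every run, which is exactly the effect a noise maker exploits.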
  • ABSTRACT: Testing multithreaded, concurrent, or distributed programs is acknowledged to be a very difficult task. We decided to create a benchmark of programs containing documented multithreaded bugs that can be used in the development of testing tools for the domain. To augment the benchmark with a sizable number of programs, we assigned students in a software testing class to write buggy multithreaded Java programs and document the bugs. This paper documents that experiment. We explain the task that was given to the students, go over the bugs that they put into the programs both intentionally and unintentionally, and present our findings. We believe this part of the benchmark shows typical programming practices, including bugs, of novice programmers. In grading the assignments, we used our technologies to look for undocumented bugs. In addition to finding many undocumented bugs, which was not surprising given that writing correct multithreaded code is difficult, we also found a number of bugs in our tools. We consider this a good indication of the benchmark's expected utility for creators of multithreaded testing tools.
    18th International Parallel and Distributed Processing Symposium (IPDPS 2004), CD-ROM / Abstracts Proceedings, 26-30 April 2004, Santa Fe, New Mexico, USA; 01/2004
