January 2003
·
14 Reads
·
8 Citations
In discourse processing, two major problems are understanding the underlying connections between sucee.
October 2000
·
6 Reads
·
3 Citations
Artificial Intelligence Review
Novice unix users have many incorrect beliefs about unix commands. An intelligent advisory system for unix should provide explanatory responses that correct these mistaken beliefs. To do so, the system must be able to understand how the user is justifying these beliefs, and it must be able to provide justifications for its own beliefs. These tasks require not only knowledge about specific unix-related plans but also abstract knowledge about how beliefs can be justified. This paper shows how this knowledge can be represented and sketches how it can be used to form justifications for advisor beliefs and to understand justifications given for user beliefs. Knowledge about belief justification is captured by justification patterns, domain-independent knowledge structures that are similar to the abstract knowledge structures used to understand the point behind a story. These justification patterns allow the advisor to understand and formulate novel belief justifications, giving the advisor the ability to recognize and respond to novel misconceptions.
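A minimal sketch of what a justification pattern might look like as a data structure. All names here (the classes, the "justified-by-analogy" pattern, the example commands) are invented for illustration and are not taken from the paper's actual representation:

```python
# Illustrative sketch only: the classes and the example pattern below are
# invented for exposition, not taken from the paper's representation.

from dataclasses import dataclass


@dataclass
class JustificationPattern:
    """A domain-independent template for how a belief can be justified."""
    name: str
    premises: list      # abstract premise roles, as format strings
    conclusion: str     # abstract form of the justified belief

    def instantiate(self, bindings: dict) -> dict:
        """Fill the abstract roles with domain-specific (e.g. unix) content."""
        fill = lambda s: s.format(**bindings)
        return {
            "pattern": self.name,
            "premises": [fill(p) for p in self.premises],
            "conclusion": fill(self.conclusion),
        }


# A pattern like "similar commands behave similarly" could capture a novice's
# mistaken generalization from one unix command to another.
analogy = JustificationPattern(
    name="justified-by-analogy",
    premises=["{cmd_a} has effect {effect}", "{cmd_b} is similar to {cmd_a}"],
    conclusion="{cmd_b} has effect {effect}",
)

inst = analogy.instantiate(
    {"cmd_a": "rm -i", "cmd_b": "mv -i", "effect": "prompting before acting"}
)
print(inst["conclusion"])  # mv -i has effect prompting before acting
```

Because the pattern itself is domain-independent, the same template could, in principle, both generate an advisor's justification and recognize the structure of a user's stated reasoning.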
March 2000
·
17 Reads
·
16 Citations
Science of Computer Programming
There are many commercial tools that address various aspects of the Year 2000 problem. None of these tools, however, make any documented use of plan-based techniques for automated concept recovery. This implies a general perception that plan-based techniques are not useful for this problem. This paper argues that this perception is incorrect and that these techniques are in fact mature enough to make a significant contribution. In particular, we show representative code fragments illustrating "Year 2000" problems, discuss the problems inherent in recognizing the higher-level concepts these fragments implement using pattern-based and rule-based techniques, demonstrate that they can be represented in a programming plan framework, and present some initial experimental evidence that suggests that current algorithms can locate these plans in linear time. Finally, we discuss several ways to integrate plan-based techniques with existing Year 2000 tools. 1991 Computing Reviews Classification System...
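The paper's representative fragments are not reproduced here, but a hypothetical example conveys the shape of the problem a plan-based recognizer would target: arithmetic over two-digit years, and the "windowing" repair concept commonly applied to it. The function names and pivot value below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical "Year 2000" fragment (not taken from the paper): arithmetic
# over two-digit years wraps at the century boundary.

def years_between_2digit(start_yy: int, end_yy: int) -> int:
    # Buggy pattern: the 99 -> 00 rollover yields a negative span.
    return end_yy - start_yy

print(years_between_2digit(99, 1))   # -98, not the intended 2

# One common repair concept is "windowing": map two-digit years into a fixed
# 100-year window instead of widening every stored date field.
PIVOT = 50  # assumption: years below 50 mean 20xx, otherwise 19xx

def expand_year(yy: int) -> int:
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(1) - expand_year(99))  # 2
```

A plan-based approach would aim to recognize the subtraction above as an instance of a "date difference" plan, so the repair can be applied to the concept rather than to a textual pattern.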
March 2000
·
16 Reads
·
6 Citations
Science of Computer Programming
The plan matching problem is to determine whether a program plan is present in a program. This problem has been shown to be NP-hard, which makes it an open question whether plan matching algorithms can be developed that scale sufficiently well to be useful in practice. This paper discusses experiments in the scalability of a series of constraint-based program plan matching algorithms we have developed. These empirical studies have led to significant improvements in the scalability of our plan matching algorithm, and they suggest that this algorithm can be successfully applied to large, real-world programs.
June 1999
·
100 Reads
·
1,018 Citations
IEEE Intelligent Systems and their Applications
Self-adaptive software requires high dependability, robustness, adaptability, and availability. The article describes an infrastructure supporting two simultaneous processes in self-adaptive software: system evolution, the consistent application of change over time, and system adaptation, the cycle of detecting changing circumstances and planning and deploying responsive modifications.
May 1999
·
28 Reads
·
41 Citations
Automated Software Engineering
Program understanding is often viewed as the task of extracting plans and design goals from program source. As such, it is natural to try to apply standard AI plan recognition techniques to the program understanding problem. Yet program understanding researchers have quietly, but consistently, avoided the use of these plan recognition algorithms. This paper shows that treating program understanding as plan recognition is too simplistic and that traditional AI search algorithms for plan recognition are not suitable, as is, for program understanding. In particular, we show (1) that the program understanding task differs significantly from the typical general plan recognition task along several key dimensions, (2) that the program understanding task has specific properties that make it particularly amenable to constraint satisfaction techniques, and (3) that augmenting AI plan recognition algorithms with these techniques can lead to effective solutions for the program understanding problem.
July 1998
·
18 Reads
·
34 Citations
Program understanding tools are currently not interoperable, leading researchers to waste significant resources reinventing already existing tools. Even commercial environments that have been designed to support the construction of program understanding tools have serious flaws in this regard. This paper discusses CORUM (Common Object-based Re-engineering Unified Model), an architecture to support interoperability between program understanding tools, and it provides several examples of CORUM's use in the construction of new tools for concept recognition and program visualization.
March 1997
·
7 Reads
Users often work directly with a collection of legacy simulation programs. These users are responsible for producing the input files for these programs, executing them, and managing their results. The aim of this project is to design and construct a general-purpose environment to support this process. In particular, we explore how to base this environment on an explicit object-oriented domain model that describes the domain actions that are simulated by existing simulation programs and the domain objects that are provided as input to or generated as output from these programs. The goal is to demonstrate that it is both possible and beneficial to construct an environment through which users interact with legacy simulation programs solely through an explicit domain model. This report describes the general architecture of such an environment and provides detailed examples that show how this environment can be applied to support users working with a collection of programs that simulate the formation and orbit of space debris.
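A rough sketch of the kind of explicit domain model such an environment might expose. Every class, field, and program name below is a hypothetical illustration, not the report's actual design:

```python
# Hypothetical sketch of an explicit object-oriented domain model mediating
# between users and legacy simulation programs. All names are invented.

from dataclasses import dataclass
import subprocess


@dataclass
class DomainObject:
    """A domain entity, e.g. a debris fragment or an orbit description."""
    name: str
    attributes: dict


@dataclass
class DomainAction:
    """A simulated domain action realized by some legacy program.

    The environment mediates between users and the legacy code: it turns
    domain objects into the program's input-file format, runs the program,
    and parses its output back into domain objects, so users never touch
    the legacy formats directly.
    """
    name: str
    program: str  # path to the legacy executable

    def run(self, inputs):
        input_file = self.write_input_file(inputs)  # legacy-specific format
        subprocess.run([self.program, input_file], check=True)
        return self.parse_output_file()

    def write_input_file(self, inputs):
        raise NotImplementedError  # each legacy program has its own format

    def parse_output_file(self):
        raise NotImplementedError
```

Each legacy program would be wrapped by one `DomainAction` subclass supplying the format-specific input writing and output parsing, which is where the claimed benefit lies: users manipulate `DomainObject`s, not input files.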
February 1996
·
6 Reads
·
2 Citations
1 Introduction It is an open question whether automated program understanding can become a practical, usable tool in the reverse engineering or maintenance of existing, real-world legacy systems. However, there are clearly several traits that any deployable automated program understanding tool must possess: 1. It must be based on an understanding algorithm that scales in practice to large programs. 2. It must produce an understanding targeted to the specific reverse engineering or maintenance tasks it is being used to support. 3. It must provide mechanisms that allow the programmers who perform reverse engineering or maintenance tasks to update its knowledge base. 4. It must integrate with other, existing tools that support maintenance and reverse engineering. 5. Finally, it must help the end-user achieve tasks more simply and more cheaply than alternative approaches. This abstract provides an overview of our approach to constructing a program understanding tool that possesses t...
January 1996
·
14 Reads
·
15 Citations
Over the past decade, researchers in program understanding have formulated many program understanding algorithms but have published few studies of their relative scalability. Consequently, it is difficult to understand the relative limitations of these algorithms and to determine whether the field of program understanding is making progress. The paper attempts to address this deficiency by formalizing the search strategies of several different program understanding algorithms as constraint satisfaction problems, and by presenting some preliminary empirical scalability results for these constraint-based implementations. These initial results suggest that, at least under certain conditions, constraint-based program understanding is close to being applicable to real-world programs.
... One example of the use of goal-driven information search in non-diagnostic contexts is provided by the story understander AQUA (Ram, 1991;Ram, 1993), which varies the depth of its reading of input stories in order to actively seek desired information in the stories it reads. Another model in similar spirit is Quilici's (1994) QUACK, which models the goal-driven learning process by which novice UNIX users acquire expertise. In that model both action and reasoning, including explanation, are used to satisfy the learner's knowledge goals. ...
... [Quinlan 1990] and FLIPPER [Cohen 1995a], were used in this study. A 10-fold cross-validation was used to estimate the error rates. While this study was more concerned with machine learning issues, the best reported error rate was 19.7%, which was obtained by FLIPPER. The study also showed that the error rate of propositional learners such as C4.5 and RIPPER, having appropriate features, was not statistically significantly worse than the best of the ILP results. ...
January 1996
Journal of Software Maintenance Research and Practice
... To obtain a richer understanding of software, planrecognition methods [23], [24] have been proposed. Given a representation of interesting software fragments, in terms of plans or clichés, the program-comprehension task becomes to recognize instances of these plans in the program. ...
March 2000
Science of Computer Programming
... Clustering is a technique of automatically constructing categories or taxonomies for a set of objects. Clustering aims at grouping all entities (e.g., source files or classes) into clusters [40] to support AR. For example, in [S28], the authors proposed a word clustering technique, Latent Dirichlet Allocation (LDA), which groups structural and lexical information of the systems to recover its layered architecture. ...
January 1995
Proceedings - International Conference on Software Engineering
... Recogniser [20] 2 Gold HB-CAS [9], [10], [11] 4 Johnson PROUST [17], [18] 3 Rich, Waters Programmer's Apprentice [34], [35], [25], [26], [27], [28] 4 Chin, Quilici DECODE [6] 3 Woods et al. PU-CSP [22], [39], [40], [23], [43], [24], [41], ...
January 1996
Automated Software Engineering
... In software evolution, the idea of templates of design and patterns that can be identified, isolated, and studied has received a lot of attention. Often the artifacts studied go under different names such as work on identification of program plans [20, 25, 21] and program clichés [22, 9]. However this previous work in identification of programming style templates such as plans and clichés has not considered dependence structures of the form considered in the present paper. ...
Reference:
Dependence Anti Patterns
January 1996
... Von Mayrhauser, Vans, and Howe carried out extensive studies to examine the nature and process of program understanding, offering a precise model of the iterative and non-monotonic knowledge elicitation process of software understanding [20], [21]. Quilici and Chin [19] developed the tool DECODE, which offers an approach to externalize not only certain knowledge but also the hypothetical knowledge of not knowing a specific fact. DECODE already combines automatic and human-centered RE steps into a single repository. ...
January 1995
... AQUA [31], [32], for example, is a help system that conducts dialogues with UNIX users and tries to help them when they face problems in their interaction. In AQUA, the user explicitly states what his/her situation was when the problem arose. ...
October 2000
Artificial Intelligence Review
... Such concepts are represented using ASTs with additional constraints based on control-flows and data-flows. Quilici [8] extended this method by sacrificing the ability to recognize every concept located in the source code to improve efficiency. These approaches are similar to our proposal in that both compare the patterns of algorithms with the programs, even though our proposal only uses ASTs as patterns and such patterns are extracted from the model answers. ...
May 1994
Communications of the ACM
... In software evolution, the idea of templates of design and patterns that can be identified, isolated, and studied has received a lot of attention. Often the artifacts studied go under different names such as work on identification of program plans [20, 25, 21] and program clichés [22, 9]. However this previous work in identification of programming style templates such as plans and clichés has not considered dependence structures of the form considered in the present paper. ...
Reference:
Dependence Anti Patterns
March 2000
Science of Computer Programming