ACM SIGSOFT Software Engineering Notes

Published by Association for Computing Machinery
Online ISSN: 0163-5948
Publications
Conference Paper
The comparative analysis of test data criteria in software testing is considered, and an attempt is made to investigate how criteria have been and should be compared to each other. It is argued that there are two fundamentally different goals in comparing criteria: (1) to compare the error-exposing ability of criteria, and (2) to compare the cost of using the criteria for selecting and/or evaluating test data. Relations such as the power relation and probable correctness are clearly in the first category, and test case counting is clearly in the second category. Subsumption, in contrast, is not entirely in either category. It is shown that the subsumption relation primarily compares the difficulty of satisfying two criteria. If one assumes that the criteria being compared are applicable, then one can infer their relative power and size complexities from the subsumption relation. In addition, it is shown that, while the size complexity of a criterion gives some indication of the relative cost of using the criterion, it is by no means a sufficient measure of the overall difficulty of using that criterion; that difficulty also includes the process of checking whether the predicate defined by the criterion has been satisfied, which may be not only difficult but impossible.
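To make the subsumption relation concrete, the following minimal sketch (an illustration constructed here, not taken from the paper) shows the classic case that branch coverage subsumes statement coverage: any test suite satisfying the stronger criterion also satisfies the weaker one, while the converse fails.

    def program(x, stmts, branches):
        # Toy program instrumented for statements (s1..s3) and branches.
        stmts.add("s1")
        if x > 0:
            branches.add("b_true"); stmts.add("s2")
        else:
            branches.add("b_false")
        stmts.add("s3")

    def coverage(suite):
        stmts, branches = set(), set()
        for x in suite:
            program(x, stmts, branches)
        return stmts, branches

    def statement_cov(suite): return coverage(suite)[0] == {"s1", "s2", "s3"}
    def branch_cov(suite):    return coverage(suite)[1] == {"b_true", "b_false"}

    # {5} satisfies statement coverage but not branch coverage; {5, -5}
    # satisfies both. Every branch-adequate suite here is statement-adequate,
    # which is exactly the subsumption relation discussed above.
    assert statement_cov([5]) and not branch_cov([5])
    assert branch_cov([5, -5]) and statement_cov([5, -5])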
 
Conference Paper
Summary form only given. The complexity of computing systems keeps growing beyond human capabilities to handle the management tasks needed to obtain the best benefits from such systems. Autonomic computing was introduced with the promise of self-management, where computing systems would be able to manage their own behaviors. The concept of Web services emerged with the intention of enabling heterogeneous computing systems to seamlessly and dynamically interact with each other to empower intra-enterprise collaboration. This research is concerned with the issue of designing autonomic computing systems so that computing systems can evolve towards a self-management paradigm. In the course of this research, the relationship between the goals of autonomic computing and the promises of Web services became persuasive enough to blend the two technologies into a composite one: autonomic Web services. In autonomic Web services, each entity is a Web service that behaves autonomically. The research aims at proving that with autonomic Web services, computing systems will be able to manage themselves as well as their relationships with each other. To achieve this objective, the research proposes a system that implements the concept of autonomic Web services; a proof-of-concept prototype of this system is currently under development and testing.
 
Conference Paper
Mutation-based software testing is a powerful technique for testing software systems. It requires executing many slightly different versions of the same program to evaluate the quality of the test cases used to test the program. Mutation-based testing has been applied to sequential software; however, problems are encountered when it is applied to concurrent programs. These problems are a product of the nondeterminism inherent in the executions of concurrent programs. In this paper, we describe a general approach to testing and debugging concurrent programs, called deterministic execution testing and debugging. We then describe a combination of deterministic execution testing and mutation-based testing, called deterministic execution mutation testing (DEMT), and illustrate the DEMT approach with an example.
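The following sketch (a toy constructed here, not the paper's DEMT procedure) conveys the core idea: replay one fixed interleaving deterministically against the original program and against a mutant, so that any difference in output is attributable to the mutation rather than to scheduling nondeterminism.

    def run(schedule, increments):
        # Each "thread" scripts two operations: read the shared variable,
        # then write read-value + increment. A schedule is a thread-id list.
        shared = {"x": 0}
        scripts = [iter([("read",), ("write", inc)]) for inc in increments]
        temps = [0] * len(increments)
        for tid in schedule:          # deterministic replay of one interleaving
            op = next(scripts[tid])
            if op[0] == "read":
                temps[tid] = shared["x"]
            else:
                shared["x"] = temps[tid] + op[1]
        return shared["x"]

    schedule = [0, 1, 0, 1]           # read0, read1, write0, write1 (lost update)
    original = run(schedule, increments=[1, 1])   # -> 1
    mutant   = run(schedule, increments=[1, 2])   # mutant: second increment 1 -> 2
    print(original, mutant)           # 1 2: the fixed schedule kills the mutant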
 
The process of development for reuse  
Classification of Abstract Data Structures  
The reuse assessor and improver system  
Scheme for automating domain guidelines  
Conference Paper
In this paper, we discuss the general area of software development for reuse and reuse guidelines. We identify, in detail, language-oriented and domain-oriented guidelines whose effective use affects component reusability. The paper also proposes tool support that can provide advice and generate reusable components automatically, based on reuse guidelines represented as domain knowledge.
 
Article
We address the research question of transforming dependability requirements into corresponding software architecture constructs, by proposing first that dependability needs can be classified into three types of requirements and second, an architectural pattern that allows requirements engineers and architects to map the three types of dependability requirements into three corresponding types of architectural components. The proposed pattern is general enough to work with existing requirements techniques and existing software architectural styles, including enterprise and product-line architectures.
 
Some available patterns from the catalog 
Chapter
This paper describes a domain-specific software development method based on object-oriented modeling, design patterns, and code generation principles. The example domain is building simulation, however, the approach is general and may be applied to other domains as well. Patterns are used to describe how the simulation objects interact. Code-templates associated with every pattern are used to generate the final application code. The method can be applied to generate large families of customized application frameworks from variations of the models. This is particularly useful for domains where applications have to exist in individually tailored versions for every project.
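A minimal sketch of the pattern-plus-template idea (illustrative only; the paper's generator, pattern names, and template format are not shown here): a code template attached to a pattern instance is expanded with model-specific names to yield application code.

    # Hypothetical template for one Observer-pattern instance in a
    # building-simulation model; the placeholder names are assumptions.
    TEMPLATE = (
        "class {observer}:\n"
        "    def update(self, {subject}):\n"
        "        # react to a change in the simulation object\n"
        "        print('{observer} saw', {subject}.state)\n"
    )

    def generate(pattern_instances):
        return "\n".join(TEMPLATE.format(**inst) for inst in pattern_instances)

    model = [{"observer": "ThermostatView", "subject": "zone"}]
    code = generate(model)
    print(code)
    exec(code)   # the generated class is ordinary application code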
 
Chapter
Change is pervasive during software development, affecting objects, processes, and environments. In process centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (SPPLs). To fully realize this goal SPPLs should include constructs that specifically address the problems of change management. These problems include lack of representation of inter-object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually. APPL/A is a prototype SPPL that addresses these problems. APPL/A is an extension to Ada. The principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally-enforcible predicates on relations, and five composite statements with transaction-like capabilities. APPL/A relations and triggers are especially important for the problems raised here. Relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. Relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. Triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. Predicates and the transaction-like statements support change management in the face of evolving standards of consistency. Together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments.
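A minimal Python sketch (loosely inspired by, and much simpler than, APPL/A's Ada extensions) of a relation with triggers: inserting a tuple fires trigger code that performs a change-management task automatically.

    class Relation:
        def __init__(self, name, attributes):
            self.name, self.attributes = name, attributes
            self.tuples, self.triggers = [], []

        def on_insert(self, fn):
            self.triggers.append(fn)

        def insert(self, **tup):
            assert set(tup) == set(self.attributes), "schema mismatch"
            self.tuples.append(tup)
            for fn in self.triggers:   # triggers react to relation operations
                fn(tup)

    derives = Relation("DerivesFrom", ["source", "derived"])
    stale = set()
    # Trigger: when a derivation dependency is recorded, mark the derived
    # object as needing regeneration (a simple change-management task).
    derives.on_insert(lambda t: stale.add(t["derived"]))

    derives.insert(source="design.doc", derived="module.adb")
    print(stale)   # {'module.adb'} -- the environment knows what to rebuild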
 
Chapter
Advancements in network technology have led to the emergence of new computing paradigms that challenge established programming practices by employing weak forms of consistency and dynamic forms of binding. Code mobility, for instance, allows for invocation-time binding between a code fragment and the location where it executes. Similarly, mobile computing allows hosts (and the software they execute) to alter their physical location. Despite apparent similarities, the two paradigms are distinct in their treatment of location and movement. This paper seeks to uncover a common foundation for the two paradigms by exploring the manner in which stereotypical forms of code mobility can be expressed in a programming notation developed for mobile computing. Several solutions to a distributed simulation problem are used to illustrate the modeling strategy for programs that employ code mobility.
 
Chapter
Dynamic analysis is the analysis of the properties of a running program. In this paper, we explore two new dynamic analyses based on program profiling:
— Frequency Spectrum Analysis. We show how analyzing the frequencies of program entities in a single execution can help programmers to decompose a program, identify related computations, and find computations related to specific input and output characteristics of a program.
— Coverage Concept Analysis. Concept analysis of test coverage data computes dynamic analogs to static control flow relationships such as domination, postdomination, and regions. Comparison of these dynamically computed relationships to their static counterparts can point to areas of code requiring more testing and can aid programmers in understanding how a program and its test sets relate to one another.
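A minimal sketch of frequency spectrum analysis (illustrative; not the paper's profiler): record how often each function executes in one run, then relate frequencies to input characteristics, e.g., per-token versus per-run computations.

    import sys
    from collections import Counter

    freq = Counter()

    def profiler(frame, event, arg):
        # Count each Python-level function entry in this run.
        if event == "call":
            freq[frame.f_code.co_name] += 1

    def lex(text):  return text.split()
    def check(tok): return tok.isalpha()
    def emit(tok):  return tok.upper()

    def compile_tokens(text):
        return [emit(t) for t in lex(text) if check(t)]

    sys.setprofile(profiler)
    compile_tokens("one two three four five")
    sys.setprofile(None)

    # Frequencies of 5 mark per-token computations (check, emit); frequencies
    # of 1 mark per-run computations (lex, compile_tokens). On some Python
    # versions a "<listcomp>" frame is counted as well.
    print(sorted(freq.items()))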
 
Chapter
Fifteen teams recently used the WinWin Spiral Model to perform the system engineering and architecting of a set of multimedia applications for the USC Library Information Systems. Six of the applications were then developed into an Initial Operational Capability. The teams consisted of USC graduate students in computer science. The applications involved extensions of USC's UNIX-based, text-oriented, client-server Library Information System to provide access to various multimedia archives (films, videos, photos, maps, manuscripts, etc.). Each of the teams produced results which were on schedule and (with one exception) satisfactory to their various Library clients. This paper summarizes the WinWin Spiral Model approach taken by the teams, the experiences of the teams in dealing with project challenges, and the major lessons learned in applying the Model. Overall, the WinWin Spiral Model provided sufficient flexibility and discipline to produce successful results, but several improvements were identified to increase its cost-effectiveness and range of applicability.
 
Article
This article provides a safety checklist for use during the analysis of software requirements for spacecraft and other safety-critical, embedded systems. The checklist specifically targets the two most common causes of safety-related software errors: 1) inadequate interface requirements and 2) discrepancies between the documented requirements and the requirements actually needed for correct functioning of the system. The analysis criteria represented in the checklist are evaluated by application to two spacecraft projects. Use of the checklist to enhance the software requirements analysis is shown to reduce the number of safety-related software errors.
 
Diagram of a schema representing a typical instance of the AbstractFactory pattern, as corresponding to the example given in section 2.
Abstract factory pattern definition, consisting of the meta-schema and the primal-schema. The two are separated by a slash. Metaclass-components are indicated with double lines at the sides. These define classes of class-components; e.g., all Concrete Widget class-components in figure 5 are instances of "ConcreteFact." Association descriptors are again indicated by diamonds. Dashed lines are used to link a metaclass-component to its primal class-component.
Solution to implementing the state design pattern as used by Hueni et al. [5]. Left: an object p refers in its property "state" to one of the state objects. Each of these state objects is a "singleton" instance of the classes given at the right side. The figure shows a typical sequence of messages. The class hierarchy at the right side shows that each state object is an instance of the abstract class TCPState, which defines the basic messages that can be sent in each state. Each subclass specifically defines the behavior of the object in its state for each of these commands. A fully implemented protocol needs more states. Only subclasses are shown for the states needed to establish a connection and for the closed state. Note that we shall call the "messengers" used in [5] "requests" in the text.
Meta-component structure for the state composition pattern. Metaclass-components are indicated with double lines at the sides. Association descriptors are again indicated by diamonds. See further the text and the explanation of figure 5. Note that since "MetaTransDescr" is a subtype of "MetaOperation," transition descriptors, which are instances of MetaTransDescr, are really operation descriptors.
Snapshot of FACE Class-composition as made available through Kansas: Each class-component is represented by an object. Each property, e.g., “transitionTarget” (corresponding to the “transition” property described in figure 6) is itself represented by an object. Linking the tmp attribute of that object to another object corresponds to “attempting” to make the link between the owner of the property object and the other object, in this case the link is attempted between a concrete operation component for the transition ‘Close’ for the “transitionTarget” property to a concrete state class-component, namely the one that represents TCPClosed. By requesting evaluation of “addTmp” (push the “Evaluate” button) the composition will be made if it is a correct link, as is the case here. 
Chapter
Tools incorporating design patterns combine the advantage of having a high-abstraction level of describing a system and the possibility of coupling these abstractions to some underlying implementation. Still, all current tools are based on generating source code in which the design patterns become implicit. After that, further extension and adaptation of the software is needed but this can no longer be supported at the same level of abstraction. This paper presents FACE, an environment based on an explicit representation of design patterns, sustaining an incremental development style without abandoning the higher-level design pattern abstraction. A visual composition tool for FACE has been developed in the Self programming language.
 
Article
An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e.g., the protocol specification) with the reference specification (e.g., the communication service specification). Nondeterminism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of nondeterminacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a transport protocol and a distributed mutual exclusion algorithm is described.
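A minimal sketch (constructed here; the paper uses an extended finite state transition model rather than this bare nondeterministic automaton) of the testing-module idea: track the set of states the nondeterministic reference specification could be in after each observed interaction, and reject the trace once no spec behavior explains it.

    # Reference spec as an NFA: (state, action) -> set of successor states.
    SPEC = {
        ("idle", "req"):  {"busy1", "busy2"},   # nondeterministic choice
        ("busy1", "ack"): {"idle"},
        ("busy2", "nak"): {"idle"},
    }

    def conforms(trace, start="idle"):
        possible = {start}
        for action in trace:
            # All states the reference spec could reach after this action.
            possible = set().union(
                *(SPEC.get((s, action), set()) for s in possible))
            if not possible:        # no spec behavior explains the trace
                return False
        return True

    print(conforms(["req", "ack"]))          # True: spec could be in busy1
    print(conforms(["req", "ack", "nak"]))   # False: inconsistent refinement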
 
Article
This paper describes new techniques to help with testing and debugging, using information obtained from path profiling. A path profiler instruments a program so that the number of times each different loop-free path executes is accumulated during an execution run. With such an instrumented program, each run of the program generates a path spectrum for the execution---a distribution of the paths that were executed during that run. A path spectrum is a finite, easily obtainable characterization of a program's execution on a dataset, and provides a behavior signature for a run of the program. Our techniques are based on the idea of comparing path spectra from different runs of the program. When different runs produce different spectra, the spectral differences can be used to identify paths in the program along which control diverges in the two runs. By choosing input datasets to hold all factors constant except one, the divergence can be attributed to this factor. The point of divergenc...
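A minimal sketch of spectral comparison (illustrative; real path profiling instruments loop-free paths through the control-flow graph, which this toy reduces to one path name per input element): compare the path spectra of two runs and report where they diverge.

    from collections import Counter

    def classify(ch):
        # Stand-in "program": one loop-free path per input element, named
        # by the branch outcomes taken for it.
        return "digit" if ch.isdigit() else ("alpha" if ch.isalpha() else "other")

    def path_spectrum(data):
        return Counter(classify(ch) for ch in data)

    run_a = path_spectrum("file 99")     # baseline dataset
    run_b = path_spectrum("file 2000")   # one factor (year width) changed
    diverging = {p for p in run_a | run_b if run_a[p] != run_b[p]}
    print(diverging)                     # {'digit'}: where control diverged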
 
Article
A standard demonstration problem in object-oriented programming is the design of an automobile cruise control. This design exercise demonstrates object-oriented techniques well, but it does not ask whether the object-oriented paradigm is the best one for the task. Here we examine the alternative view that cruise control is essentially a control problem. We present a new software organization paradigm motivated by process control loops. The control view leads us to an architecture that is dominated by analysis of a classical feedback loop rather than by the identification of discrete stateful components to treat as objects. The change in architectural model calls attention to important questions about the cruise control task that aren't addressed in an object-oriented design. 1. Design Idioms for Software Architectures Explicit organization patterns, or idioms, increasingly guide the composition of modules into complete systems. This stage of the design is usually called the a...
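A minimal sketch of the control-loop view (the numbers and the proportional controller are assumptions chosen for brevity): the architecture is one feedback loop comparing measured speed to the set point, rather than a web of stateful objects.

    def cruise_control(set_point, measure, actuate, kp=0.5, steps=20):
        for _ in range(steps):
            error = set_point - measure()   # compare process variable
            actuate(kp * error)             # proportional correction

    speed = 20.0                            # current speed, km/h

    def measure():
        return speed

    def actuate(delta):
        global speed
        speed += delta                      # grossly simplified vehicle dynamics

    cruise_control(set_point=100.0, measure=measure, actuate=actuate)
    print(round(speed, 1))                  # converges toward 100.0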
 
Article
Object Oriented Design by Prototype Methodology (OODPM) integrates two known technologies: the object approach and the prototype concept. Object oriented methodology is used for internal system design, and prototype methodology is used for external system design. This document is a template for a system design file using OODPM version 2015 (titles of paragraphs only). For full explanations of each paragraph, see [1]. This version was developed after tens of projects had been developed and planned using version 6, including very large projects for national information systems. This version is accompanied by "OODPM - Methodology for Management Information Systems Life Cycle" (meanwhile available only in Hebrew).
 
1 Possible arrows on the contextual slide
1 The working context of the talks at session 2
1 The working context of the talks at session 3.
Article
Introduction and Workshop Structure As achieving high quality means the realization of customers' needs, requirements engineering (RE) is the most crucial phase within software development. In the RE process not only the functional requirements but also the so-called `non-functional' or `quality' requirements of the planned software system have to be elicited from the customer and represented in a requirements document in order to provide the software designer a complete and correct specification. Conventional RE methods usually support only parts of this process or help stating only specific kinds of requirements. These methodological problems were the prime motivation for the REFSQ'94 workshop held in conjunction with the CAiSE'94 Conference on Advanced Information Systems Engineering in Utrecht, The Netherlands on June 6th and 7th 1994. In order to find solutions which handle the described deficiencies, it was the goal of the workshop to improve the understanding of the
 
Article
Matthew B. Dwyer, Vicki Carr, Laura Hines (Kansas State University). Abstract: Symbolic model checking techniques have been widely and successfully applied to statically analyze dynamic properties of hardware systems. Efforts to apply this same technology to the analysis of software systems have met with a number of obstacles, such as the existence of non-finite state-spaces. This paper investigates abstractions that make it possible to cost-effectively model check specifications of software for graphical user interface (GUI) systems. We identify useful abstractions for this domain and demonstrate that they can be incorporated into the analysis of a variety of systems with similar structural characteristics. The resulting domain-specific model checking yields fast verification of naturally occurring specifications of intended GUI behavior. 1 Introduction The majority of modern software applications have a graphical user interface (GUI). These interfaces serve a number of functions...
 
Petri Net  
Petri Net With Impossible Pairs Represented  
Boolean Variable Subnet  
Petri Net With Variable Subnet Added  
Article
Spurious results are an inherent problem of most static analysis methods. These methods, in an effort to produce conservative results, overestimate the executable behavior of a program. Infeasible paths and imprecise alias resolution are the two causes of such inaccuracies. In this paper we present an approach for improving the accuracy of Petri net-based analysis of concurrent programs by including additional program state information in the Petri net. We present empirical results that demonstrate the improvements in accuracy and, in some cases, the reduction in the search space that result from applying this approach to concurrent Ada programs. 1 Introduction Developers of concurrent software need cost-effective analysis methods to acquire confidence in the reliability of that software. Analysis of concurrent programs is difficult because, in many cases, the patterns of communication among the various parts of the program are complicated and the number of possible communications is...
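A minimal sketch (a toy encoding constructed here, not the paper's toolset) of folding program state into the net: a boolean variable becomes a two-place subnet, so transitions that test the variable are enabled only in consistent markings and infeasible paths disappear from the analysis.

    TRANSITIONS = {
        # name: (consumed places, produced places)
        "set_flag":   ({"p1", "flag_false"}, {"p2", "flag_true"}),
        "take_true":  ({"p2", "flag_true"},  {"p3", "flag_true"}),
        "take_false": ({"p2", "flag_false"}, {"p4", "flag_false"}),
    }

    def enabled(marking):
        return [t for t, (pre, _) in TRANSITIONS.items() if pre <= marking]

    def fire(marking, t):
        pre, post = TRANSITIONS[t]
        return (marking - pre) | post

    m = frozenset({"p1", "flag_false"})
    m = fire(m, "set_flag")
    # Without the variable subnet both branches would look enabled; with it,
    # the infeasible 'take_false' path is pruned from the state space.
    print(enabled(m))   # ['take_true']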
 
FSP specification and LTS for process SERVER
LTS for RW_PROGRESS. The problem of reader starvation can of course be fixed by introducing a "turn" variable that lets readers and writers run alternately when competition exists for the lock. Such a system should satisfy both the READER and WRITER progress properties. Examples of conditional progress properties related to the READERS_WRITERS system are shown below:
progress WREL[i:W] = if {writer[i].acquire} then {writer[i].release}
progress RREL[i:R] = if {reader[i].acquire} then {reader[i].release}
Büchi automaton used for checking progress property WRITER
Article
The liveness characteristics of a system are intimately related to the notion of fairness. However, the task of explicitly modelling fairness constraints is complicated in practice. To address this issue, we propose to check LTS (Labelled Transition System) models under a strong fairness assumption, which can be relaxed with the use of action priority. The combination of the two provides a novel and practical way of dealing with fairness. The approach is presented in the context of a class of liveness properties termed progress, for which it yields a particularly efficient model-checking algorithm. Progress properties cover a wide range of interesting properties of systems, while presenting a clear intuitive meaning to users.
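A minimal sketch of the check this enables (simplified here): for a finite LTS under the strong fairness assumption, a progress property holds iff every reachable terminal set of states (bottom strongly connected component) contains a transition labelled with the action.

    from itertools import count

    def sccs(states, succ):
        # Tarjan's algorithm; returns the strongly connected components.
        index, low, on, stack, comps = {}, {}, set(), [], []
        counter = count()
        def dfs(v):
            index[v] = low[v] = next(counter)
            stack.append(v); on.add(v)
            for _, w in succ(v):
                if w not in index:
                    dfs(w); low[v] = min(low[v], low[w])
                elif w in on:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                comp = set()
                while True:
                    w = stack.pop(); on.discard(w); comp.add(w)
                    if w == v:
                        break
                comps.append(comp)
        for v in states:
            if v not in index:
                dfs(v)
        return comps

    def progress_holds(lts, action):
        # lts: {state: [(label, next_state), ...]}
        succ = lambda v: lts.get(v, [])
        for comp in sccs(list(lts), succ):
            terminal = all(w in comp for v in comp for _, w in succ(v))
            offers = any(l == action for v in comp for l, _ in succ(v))
            if terminal and not offers:
                return False   # a fair execution can be trapped without `action`
        return True

    # Readers/writers flavour: once a reader holds the lock, further readers
    # keep it forever, so writers starve in the terminal set {"rd"}.
    LTS = {"free": [("reader.acquire", "rd"), ("writer.acquire", "wr")],
           "rd":   [("reader.acquire", "rd")],
           "wr":   [("writer.release", "free")]}
    print(progress_holds(LTS, "writer.acquire"))   # False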
 
Article
The notion of joint actions provides a paradigm that allows the specification and design of distributed systems to focus on cooperative events rather than on the behavior of individual processes. For concurrency this introduces an abstraction that is independent of process structuring and of communication mechanisms. For the designer this means replacing the conventional process-oriented view by an action-oriented one, which has a profound effect on thinking about a system and on the design process. The approach is especially suited for formal derivation of concurrent systems by a layered introduction of properties. DisCo is an executable specification and design language based on joint actions. This paper introduces the basic principles of the joint action approach, together with the main capabilities of DisCo for supporting modularity and the derivation of distributed programs. 1. Introduction to Specification and Design by Joint Actions The notion of joint actions [7] provides a n...
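A minimal sketch (a Python stand-in constructed here, not DisCo notation) of a joint action: a cooperative event over several participants, enabled by a joint guard and executed atomically, with no owning process.

    def transfer(src, dst):
        # Joint action: enabled by a guard over *all* participants,
        # executed atomically; no single process owns it.
        if src["balance"] > 0:
            src["balance"] -= 1
            dst["balance"] += 1
            return True
        return False

    a, b = {"balance": 3}, {"balance": 0}
    while transfer(a, b):      # execution = repeatedly firing enabled actions
        pass
    print(a, b)                # {'balance': 0} {'balance': 3}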
 
Simple bivariate regression and two common problems  
Little-JIL specification for linear regression
Regression with substeps for linear and non-linear regression  
Article
Knowledge discovery in databases (KDD) is an increasingly widespread activity. KDD processes may entail the use of a large number of data manipulation and analysis techniques, and new techniques are being developed on an ongoing basis. A challenge for the effective use of KDD is coordinating the use of these techniques, which may be highly specialized, conditional and contingent. Additionally, the understanding and validity of KDD results can depend critically on the processes by which they were derived. We propose to use process programming to address the coordination of agents in the use of KDD techniques. We illustrate this approach using the process language Little-JIL to program a representative bivariate regression process. With Little-JIL programs we can clearly capture the coordination of KDD activities, including control flow, pre- and post-requisites, exception handling, and resource usage.
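A minimal sketch (a Python stand-in for the ideas, not Little-JIL) of a programmed analysis step with a pre-requisite, a post-requisite, and an exception handler that redirects the process to a substep.

    class RequisiteViolation(Exception):
        pass

    def fit_line(xs, ys):
        n = len(xs)
        if n < 3:                                   # pre-requisite
            raise RequisiteViolation("too few points")
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        residual = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        if residual > 1.0:                          # post-requisite
            raise RequisiteViolation("poor linear fit")
        return a, b

    def regression_process(xs, ys):
        try:
            return ("linear", fit_line(xs, ys))
        except RequisiteViolation:
            # Exception handling redirects the process to another substep,
            # e.g., a non-linear technique (elided here).
            return ("non-linear", None)

    print(regression_process([1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0]))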
 
Article
A multi-mode software system contains several distinct modes of operation and a controller for deciding when to switch between modes. Even when developers rigorously test a multi-mode system before deployment, they cannot foresee and test for every possible usage scenario. As a result, unexpected situations in which the program fails or underperforms (for example, by choosing a non-optimal mode) may arise. This research aims to mitigate such problems by creating a new mode selector that examines the current situation, then chooses a mode that has been successful in the past, in situations like the current one. The technique, called program steering, creates a new mode selector via machine learning from good behavior in testing or in successful operation. Such a strategy, which generalizes the knowledge that a programmer has built into the system, may select an appropriate mode even when the original controller cannot. We have performed experiments on robot control programs written in a month-long programming competition. Augmenting these programs via our program steering technique had a substantial positive effect on their performance in new environments.
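A minimal sketch of program steering (the nearest-neighbour learner here is an assumption made for brevity; the paper does not prescribe this exact learner): choose the mode that succeeded in past situations most similar to the current one.

    def learn_selector(training):
        # training: list of (situation_features, mode_that_worked)
        def select(situation):
            def dist(feat):
                return sum((a - b) ** 2 for a, b in zip(feat, situation))
            feats, mode = min(training, key=lambda t: dist(t[0]))
            return mode
        return select

    # Features: (distance_to_obstacle, speed); modes recorded in good runs.
    history = [((9.0, 2.0), "cruise"),
               ((1.0, 2.0), "avoid"),
               ((0.5, 0.1), "reverse")]
    select_mode = learn_selector(history)
    print(select_mode((1.2, 1.8)))   # "avoid": closest successful precedent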
 
Article
The construction of software systems from pre-existing, independently developed software components will only occur when application builders can adapt software components to suit their needs. We propose that software components provide two interfaces: one for behavior and one for adapting that behavior as needed. The ADAPT framework presented in this paper supports both component designers in creating components that can easily be adapted, and application builders in adapting software components. The motivating example, using JavaBeans, shows how adaptation, not customization, is the key to component-based software. KEYWORDS Software components, JavaBeans, Adaptation 1 INTRODUCTION An important aim of software engineering is to produce reliable and robust software systems. As software systems grow in size, however, it becomes infeasible to design and construct software systems from scratch. Most software developers are familiar with reusing code from component libra...
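A minimal sketch (illustrative; not the ADAPT framework's actual API) of a component exposing two interfaces, one for behavior and one for adapting that behavior, so that builders adapt rather than customize.

    class SortComponent:
        def __init__(self):
            self._key = lambda x: x          # default behavior

        # --- behavior interface -------------------------------------
        def sort(self, items):
            return sorted(items, key=self._key)

        # --- adaptation interface -----------------------------------
        def adapt_key(self, key_fn):
            # Builders adapt the comparison without touching sort().
            self._key = key_fn

    c = SortComponent()
    print(c.sort(["b", "aaa", "cc"]))        # ['aaa', 'b', 'cc']
    c.adapt_key(len)                         # adaptation, not reimplementation
    print(c.sort(["b", "aaa", "cc"]))        # ['b', 'cc', 'aaa']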
 
Article
to high maintenance costs. While most experts agree on that, opinions - on how serious the problem of redundancies really is and how to tackle it - differ. In this paper, we present the study of redundancies in the Java Buffer library, JDK 1.4.1, which was recently released by Sun. We found that at least 68% of code in the Buffer library is redundant in the sense that it recurs in many classes in the same or slightly modified form. We effectively eliminated that 68% of code at the meta-level using a technique based on "composition with adaptation" called XVCL. We argue that such a program solution is easier to maintain than buffer classes with redundant code. In this experiment, we have designed our meta-representation so that we could produce buffer classes in exactly the same form as they appear in the original Buffer library. While we have been tempted to re-design the buffer classes, we chose not to do so, in order to allow for the seamless integration of the XVCL solution into contemporary programming methodologies and systems. This decision has not affected the essential results reported in this paper.
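A minimal sketch of composition with adaptation (a toy frame processor written here in Python; XVCL itself is XML-based and far richer): one meta-level frame captures the recurring code, and per-class adaptations supply the varying parts, so the redundant code is generated rather than maintained by hand.

    # Hypothetical meta-level frame for the recurring Buffer code.
    FRAME = (
        "class {name}Buffer:\n"
        "    def __init__(self, capacity):\n"
        "        self.data = [{default}] * capacity\n"
        "    def get(self, i):\n"
        "        return self.data[i]\n"
    )

    # Per-class adaptations: only the varying parts are specified.
    ADAPTATIONS = [
        {"name": "Int",  "default": "0"},
        {"name": "Char", "default": "chr(0)"},
    ]

    for adaptation in ADAPTATIONS:
        print(FRAME.format(**adaptation))   # emits each concrete buffer class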
 
Article
Techniques for modular software design are presented applying software agents. The conceptual designs are domain independent and make use of specific domain aspects applying Multiagent AI. The stages of conceptualization, design and implementation are defined by new techniques coordinated by objects. Software systems are designed by knowledge acquisition, specification, and multiagent implementations. Multiagent implementations are defined for the modular designs, applying our recent projects which have led to fault-tolerant AI systems. A new high level concurrent syntax language is applied to the designs. A novel multi-kernel design technique is presented. Communicating pairs of kernels, each defining a part of the system, are specified by object-coobject super-modules. New linguistic constructs are defined for object level programming with String and Splurge functions treating object visibility and messages. Treating objects as abstract data types and a two level programming approa...
 
Article
The success and acceptance of reuse tools and libraries depends on their integration into existing software development environments. However, the addition of large libraries of reusable components to software design databases only exacerbates the problem of design data management. Object-oriented databases originated to meet the requirements of design data management that relational databases could not satisfy. This paper describes a semantic data model for an object-oriented database supporting an integrated Computer Aided Software Engineering environment (CASE). The data model promotes reuse by providing objects that match program design requirements to existing components in the reuse library. Keywords: Software reuse, Computer-Aided Software Engineering, CASE, Semantic data modelling, Object-Oriented Database Systems. 1.0 Overview To successfully insert reuse into the software development process, we must integrate support for reuse into existing software tools and CASE environ...
 
Article
In large software systems such as digital libraries, electronic commerce applications, and customer support systems, the user interface and system are often complex and difficult to navigate. It is necessary to provide users with interactive online support to help users learn how to effectively use these applications. Such online help facilities can include providing tutorials and animated demonstrations, synchronized activities between users and system supporting staff for real time instruction and guidance, multimedia communication with support staff such as chat, voice, and shared whiteboards, and tools for quick identification of user problems. In this paper, we investigate how such interactive online help support can be developed and provided in the context of a working system, the Alexandria Digital Library (ADL) for geospatially-referenced data. We developed an online help system, AlexHelp!. AlexHelp! supports collaborative sessions between the user and the librarian (support st...
 
Article
Introduction. Cleverly designed software often fails to strictly satisfy its specifications, but instead satisfies them behaviorally, in the sense that they appear to be true under all possible experiments. Hidden algebra extends prior work on abstract data types and algebraic specification [2, 6] to concurrent distributed systems, in a surprisingly simple way that also handles nondeterminism, internal states, and more [4, 3]. Advantages of an algebraic approach include decidability results in equational logic for problems that are undecidable for more expressive logics, and powerful algorithms like term rewriting and unification, for implementing equational logic tools. Much work in formal methods has addressed code verification, but since empirical studies show that little of software cost comes from coding errors, our approach focuses on behavioral specification and verification at the design level, thus avoiding the distracting complications of programming language semantics. Theory
 
Article
A file synchronizer restores consistency after multiple replicas of a filesystem have been changed independently. We present an algebra for reasoning about operations on filesystems and show that it is sound and complete with respect to a simple model. The algebra enables us to specify a file-synchronization algorithm that can be combined with several different conflict-resolution policies. By contrast, previous work builds the conflict-resolution policy into the specification, or worse, does not specify the synchronizer's behavior precisely. We classify synchronizers by asking whether conflicts can be resolved at a single disconnected replica and whether all replicas are identical after synchronization. We also discuss timestamps and argue that there is no good way to propagate timestamps when there is severe clock skew between replicas.
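A minimal sketch (a toy far simpler than the paper's algebra, with filesystems modelled as path-to-content dictionaries) of the separation the paper argues for: the synchronizer propagates one-sided updates and merely reports conflicts, leaving resolution to a separate policy.

    def synchronize(ancestor, a, b):
        merged, conflicts = {}, []
        for path in set(ancestor) | set(a) | set(b):
            va, vb, vo = a.get(path), b.get(path), ancestor.get(path)
            if va == vb:                 # both changed alike, or neither
                if va is not None: merged[path] = va
            elif va == vo:               # only b changed: propagate b
                if vb is not None: merged[path] = vb
            elif vb == vo:               # only a changed: propagate a
                if va is not None: merged[path] = va
            else:                        # both changed differently
                conflicts.append(path)   # left to the resolution policy
        return merged, conflicts

    ancestor = {"a.txt": "v1", "b.txt": "v1"}
    replica1 = {"a.txt": "v2", "b.txt": "v1"}    # edited a.txt
    replica2 = {"a.txt": "v1"}                   # deleted b.txt
    print(synchronize(ancestor, replica1, replica2))
    # ({'a.txt': 'v2'}, []) -- both changes propagate, no conflict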
 
Article
The ability of reconfiguring software architectures in order to adapt them to new requirements or a changing environment has been of growing interest, but there is still not much formal work in the area. Most existing approaches deal with run-time changes in a deficient way. The language to express computations is often at a very low level of specification, and the integration of two different formalisms for the computations and reconfigurations require sometimes substantial changes. To address these problems, we propose a uniform algebraic approach with the following characteristics. 1. Components are written in a high-level program design language with the usual notion of state. 2. The approach combines two existing frameworks—one to specify architectures, the other to rewrite labelled graphs—just through small additions to either of them. 3. It deals with certain typical problems such as guaranteeing that new components are introduced in the correct state (possibly transferred from the old components they replace). 4. It shows the relationships between reconfigurations and computations while keeping them separate, because the approach provides a semantics to a given architecture through the algebraic construction of an equivalent program, whose computations can be mirrored at the architectural level.
 
Article
Dynamic detection of likely invariants is a program analysis that generalizes over observed values to hypothesize program properties. The reported program properties are a set of likely invariants over the program, also known as an operational abstraction. Operational abstractions are useful in testing, verification, bug detection, refactoring, comparing behavior, and many other tasks. Previous techniques for dynamic invariant detection scale poorly or report too few properties. Incremental algorithms are attractive because they process each observed value only once and thus scale well with data sizes. Previous incremental algorithms only checked and reported a small number of properties. This paper takes steps toward correcting this problem. The paper presents two new incremental algorithms for invariant detection and compares them analytically and experimentally to two existing algorithms. Furthermore, the paper presents four optimizations and shows how to implement them in the context of incremental algorithms. The result is more scalable invariant detection that does not sacrifice functionality.
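A minimal sketch of the incremental scheme (a toy with three hand-written candidate invariants; the paper's algorithms and property grammar are far richer): each observed sample is processed exactly once, and falsified candidates are discarded permanently, so cost scales with data size.

    candidates = {
        "x >= 0":  lambda s: s["x"] >= 0,
        "x < y":   lambda s: s["x"] < s["y"],
        "y != 0":  lambda s: s["y"] != 0,
    }

    def observe(sample):
        # Incremental: O(|live candidates|) per sample, each sample seen once.
        for name, check in list(candidates.items()):
            if not check(sample):
                del candidates[name]     # falsified, never reconsidered

    for sample in [{"x": 0, "y": 3}, {"x": 2, "y": 5}, {"x": 4, "y": 4}]:
        observe(sample)

    # Survivors form the operational abstraction for this run.
    print(sorted(candidates))   # ['x >= 0', 'y != 0']; 'x < y' was falsified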
 
Article
The Alternating Bit Protocol has been modeled via a straightforward application of the Gypsy methodology. A safety property was stated for its service specification and a procedural protocol specification was written using Gypsy procedure definitions. Mechanical verification was carried out, including proofs of the supporting lemmas. A unique aspect of this verification effort is the cooperative proof strategy that was employed, making use of two separate verification systems. The combined capabilities of both the Gypsy system and the Affirm system were utilized to achieve this result. 1. Introduction The world has yet another verification of the Alternating Bit Protocol. A brief description of this latest addition is presented. The protocol was modeled as an abstract program using the Gypsy verification methodology. A fully mechanical proof of a safety property was obtained. What is perhaps more interesting is that the proof was performed with the combined help of two separate ver...
 
Article
Traditionally, verification properties have been classified into safety and liveness properties. While this taxonomy has an attractive simplicity and is useful for identifying the appropriate analysis algorithm to use for checking a property, determining whether a property is safety, liveness, or neither can require significant mathematical insight on the part of the analyst. In this paper, we present an alternative property taxonomy. We argue that this taxonomy is a more natural classification of the kinds of questions that analysts want to ask. Moreover, most classes in our taxonomy have a known, direct mapping to the safety-liveness classification, and thus the appropriate analysis algorithm can be automatically determined.
 
Modechart requirements specification of the railroad crossing system. Transitions are either activated by timing conditions or by mode transitions in the MONITOR. In Modechart, the GATECONTROLLER transition from MoveUp to Up is annotated with timing constraint (20,100), specifying a transition delay of 20 time units and a deadline of 100 time units. The Modechart transition from MoveUp to MoveDown is annotated with !BC, indicating it is activated when the MONITOR enters BC. In the SCR specification, timing conditions In(MoveUp,19)
Article
In this paper, we extend the SCR requirements notation to specify systems' timing properties. We also describe an analysis tool which automates the detailing and translating steps of our analysis technique and produces input for the model checker. To determine if we could verify interesting properties of existing system requirements, we use our new notation and tool to analyze requirements for two well-known small problems. In addition to performing successful verifications of safety and timing properties of these systems, we compare our reachability graphs and formulas with those of the Modechart verifier [12], a model checker for Real-Time Logic (RTL) [7] which is based on interval semantics.
 
Article
Exception handling mechanisms provided by programming languages are intended to ease the difficulty of developing robust software systems. Using these mechanisms, a software developer can describe the exceptional conditions a module might raise, and the response of the module to exceptional conditions that may occur as it is executing. Creating a robust system from such a localized view requires a developer to reason about the flow of exceptions across modules. The use of unchecked exceptions, and in object-oriented languages, subsumption, makes it difficult for a software developer to perform this reasoning manually. In this paper, we describe a tool called Jex that analyzes the flow of exceptions in Java code to produce views of the exception structure. We demonstrate how Jex can help a developer identify program points where exceptions are caught accidentally, where there is an opportunity to add finer-grained recovery code, and where error-handling policies are not being followed...
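A minimal sketch in the same spirit (using Python's ast module as a stand-in for Jex's Java analysis; the function and exception names are invented for illustration): extract raise sites and the handlers around them to form a view of the exception structure.

    import ast

    SRC = (
        "def read_config(path):\n"
        "    try:\n"
        "        f = open(path)\n"
        "    except OSError:\n"
        "        raise ValueError('bad config path')\n"
    )

    class ExceptionFlow(ast.NodeVisitor):
        def __init__(self):
            self.handlers, self.raises = [], []

        def visit_Try(self, node):
            # Record which exception types each try block catches.
            caught = [ast.unparse(h.type) for h in node.handlers if h.type]
            self.handlers.append(caught)
            self.generic_visit(node)

        def visit_Raise(self, node):
            self.raises.append(ast.unparse(node.exc))

    flow = ExceptionFlow()
    flow.visit(ast.parse(SRC))
    print("caught:", flow.handlers)   # [['OSError']]
    print("raised:", flow.raises)     # ["ValueError('bad config path')"]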
 
Article
We describe how formal specifications given in terms of a high-level timed Petri net formalism (TB nets) can be analyzed to check the temporal properties of bounded invariance (the system stays in a given state until time τ) and bounded response (the system will enter a given state within time τ). In particular, we concentrate on specifications given in a hierarchical, top-down manner, where one specification level refines a more abstract level. Our goal is to define the conditions under which the properties that are proven to hold at a given abstraction level are preserved at the next refined level. To do so, we define the concept of correct refinement, and we show that bounded invariance and bounded response are preserved by a correct refinement. We also provide a set of constructive rules that may be applied to refine a net in such a way that the resulting net is a correct refinement.
 
Article
Traditional information-flow analysis is mainly based on data-flow and control-flow analysis. In object-oriented programs, because of pointer aliasing, inheritance, and polymorphism, information-flow analysis becomes very complicated; in particular, it is difficult to rely only on normal data- and control-flow analysis techniques, and new approaches are required to analyze the information flow between components in object-oriented programs. In this paper, an object-oriented program slicing technique is introduced. By this technique, the amount of information flow, the width of information flow, and the correlation coefficient between components can be computed. Some applications of the information flow are also discussed and analyzed in this paper.
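A minimal sketch (a generic dependence-graph slicer constructed here; slicing a real object-oriented program must also resolve aliasing and dynamic dispatch, which this toy omits): the backward slice from a component over-approximates the statements from which information can flow into it.

    DEPENDS_ON = {   # edges: statement -> statements it depends on
        "s4: print(z)": ["s3: z = x + y"],
        "s3: z = x + y": ["s1: x = read()", "s2: y = obj.field"],
        "s2: y = obj.field": ["s0: obj = Obj()"],
    }

    def backward_slice(criterion):
        seen, work = set(), [criterion]
        while work:
            node = work.pop()
            if node not in seen:
                seen.add(node)
                work.extend(DEPENDS_ON.get(node, []))
        return seen

    slice_ = backward_slice("s4: print(z)")
    print(len(slice_), "of 5 statements carry flow into s4")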
 
Article
its underlying state representation [BGP97]. Using constraint representations one can verify systems with infinite variable domains (which is not possible using finite representations such as BDDs). Our goal in this project is to develop a toolset which combines various symbolic representations in a single composite model checker. In the composite model checking approach each variable in the input system is mapped to a symbolic representation type [BGL98]. (For example, boolean and enumerated variables can be mapped to BDD representation, and integers can be mapped to Presburger constraint representation.) Then, each atomic event in the input system is conjunctively partitioned where each conjunct specifies the effect of the event on the variables mapped to a single symbolic representation. Conjunctive partitioning of the atomic events allows pre- and post-condition computations to distribute over different symbolic representations. We plan to structure the composite model checking
 
Article
To facilitate research in the field of reverse engineering and system renovation we have compiled an annotated bibliography. We put the contributions not only in alphabetical order but also grouped by topic so that readers focusing on a certain topic can read their annotations in the alphabetical listing. We also compiled an annotated list of pointers to information about reverse engineering and system renovation that can be reached via Internet. For the sake of ease we also incorporated a brief introduction to the field of reverse engineering. Key Words & Phrases: Reverse engineering, Annotated bibliography, System renovation. 1991 CR Categories: A.2, D.2.2, D.2.7, D.2.m, K.6.3. Note: The authors were all in part sponsored by bank ABN AMRO, software house DPFinance, and the Dutch Ministry of Economic Affairs via the Senter Project #ITU95017 "SOS Resolver". The last author was also supported by the Netherlands Computer Science Research Foundation (SION) with financial support from the Netherlands Organization for Scientific Research (NWO), project Interactive tools for program understanding, 612-33-002. 1 Executive Summary There is a constant need for updating and renovating business-critical software systems for many and diverse reasons: business requirements change, technological infrastructure is modernized, the government changes laws, or the third millennium approaches, to mention a few. Therefore, in the area of software engineering the subjects of reverse engineering and system renovation become more and more important. The interest in such subjects originates from the difficulties that one encounters when attempting to maintain extremely large software systems. Such software systems are often called legacy systems, since they are a legacy of many differen...
 
Time to check the properties
Article
In this paper we demonstrate how static concurrency analysis techniques can be used to verify application-specific properties of an architectural description. Specifically, we use two concurrency analysis tools, INCA, a flow equation based tool, and FLAVERS, a data flow analysis based tool, to detect errors or prove properties of a Wright architectural description of the gas station problem. Although both these tools are research prototypes, they illustrate the potential of static analysis for verifying that architectural descriptions adhere to important properties, for detecting problems early in the lifecycle, and for helping developers understand the changes that need to be made to satisfy the properties being analyzed. 1 Introduction With the advent of improved network technology, distributed systems are becoming increasingly common. Such systems are more difficult to reason about than sequential systems because of their inherent nondeterminism. In recognition of this, software ar...
 
Different views onto a component
The grammar inference problem
Conceptual dimensions for analyzing Software
Article
This paper reports on our approaches to combine various software comprehension techniques (and technologies) in order to establish confidence whether a given reusable component satisfies the needs of the intended reuse situation. Some parts of the problem we are addressing result from differences in knowledge representation about a component depending on whether this component is a well documented in-house development, some externally built componentry, or a COTS-component. Keywords Program comprehension, software visualization, cognitive models, specification animation, trace analysis 1. MOTIVATION While the issue of building software from building blocks [12, 15] shifts from using classical reusable building blocks to using off-the-shelf components, modern software technology supports software development on the basis of non-trivial componentry. However, one of the key issues causing the Not-Invented-Here syndrome [29] remains: How can developers be sure that the component they ...
 
Article
Current software testing practices focus, almost exclusively, on the implementation, despite widely acknowledged benefits of testing based on software specifications. We propose approaches to specification-based testing by extending a wide variety of implementation-based testing techniques to be applicable to formal specification languages. We demonstrate these approaches for the Anna and Larch specification languages. 1 Introduction Specifications provide valuable information for testing. Most software testing techniques, however, rely solely on the implementation for information upon which to select test data. These implementation-based testing techniques focus on the actual behavior of the implementation but ignore intended behavior, except inasmuch as test output is manually compared against it. On the other hand, considering information from formal specifications enables testing intended behavior as well as actual functionality. Specification-based testing techniques may direct ...
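A minimal sketch of the approach (a generic pre/postcondition specification written as Python predicates; Anna and Larch syntax is not shown): test inputs are chosen from the specification's domain, and outputs are judged against the postcondition rather than against the implementation's own behavior.

    import math

    spec = {
        "pre":  lambda x: x >= 0,                       # input domain
        "post": lambda x, r: abs(r * r - x) < 1e-6,     # intended behavior
    }

    def sqrt_impl(x):
        return math.sqrt(x)            # implementation under test

    def spec_based_test(impl, cases):
        failures = []
        for x in cases:
            if not spec["pre"](x):
                continue               # outside the specified domain
            if not spec["post"](x, impl(x)):
                failures.append(x)
        return failures

    # Boundary and typical values chosen from the spec, not from the code.
    print(spec_based_test(sqrt_impl, [0, 1, 2, 1e6, -1]))   # []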
 
Article
As the design of software architectures emerges as a discipline within software engineering, it will become increasingly important to support architectural description and analysis with tools and environments. In this paper we describe a system for developing architectural design environments that exploit architectural styles to guide software architects in producing specific systems. The primary contributions of this research are: (a) a generic object model for representing architectural designs; (b) the characterization of architectural styles as specializations of this object model; and (c) a toolkit for creating an open architectural design environment from a description of a specific architectural style. We use our experience in implementing these concepts to illustrate how style-oriented architectural design raises new challenges for software support environments. 1 Introduction A critical aspect of any complex software system is its architecture. At an architectural level of de...
 
Article
An enterprise that uses evolving software is susceptible to destructive and even disastrous effects caused either by inadvertent errors, or by malicious attacks by the programmers employed to maintain this software. It is our thesis that these perils of evolving software can often be tamed by ensuring that suitable architectural principles are maintained as invariants of the evolution of a given software system. For example, it is often useful to partition a system into a set of divisions, constructing permanent---i.e., evolution-invariant---"firewalls" between them, which will limit the effect that one division can have on the others. We define this concept of evolution-invariant, discuss its usefulness, and show how it can be realized under law-governed architecture. Keywords: evolution-invariants, evolving systems, embedded systems, law-governed architecture, firewalls in software, auditing. Work supported in part by NSF grant No. CCR-9308773. 1 Introduction Software evoluti...
 
Stage 1: Define the Scope of the Domain. Stage 2: Define/Refine Domain-Specific Concepts/Requirements
Stage 3: Define/Refine Domain-Specific Design and Implementation Constraints
Article
"In order to reuse software, there needs to be software to reuse." -- Tracz [9] One of the dilemmas that has prevented software developers from reusing software is the lack of software artifacts to use or the existence of artifacts that are difficult to integrate. Domain-Specific Software Architectures (DSSAs) have been proposed[4] in order to address these issues. A DSSA not only provides a framework for reusable software components to fit into, but captures the design rationale and provides for a degree of adaptability. This paper 1 presents an outline for a DomainSpecific Software Architecture engineering process. Keywords: Domain Analysis, Domain Specific Software Architecture, Domain Engineering Introduction The purpose of the paper is to outline the domainengineering process 2 being used to generate a Domain-Specific Software Architecture (DSSA) as part the DARPA DSSA-ADAGE (Avionics Domain Application Generation Environment) Project 3 . It is based 1 A previous versi...
 
Article
ion for specifying architectural constraints and outlined an approach to enforcing constraints at run time using instrumented connectors. The talk raised the general question of "When should we enforce architectural constraints?"
• Ugo Montanari outlined new work on a "Tile Model" for concurrent systems that emphasises composability. The model shows promise in its ability to capture aspects of the dynamic structure. The talk raised the general issue of description of dynamic architectures.
• Richard Hilliard talked about the industrial requirements for architectural description and outlined an extensible OO framework for architectural description. The talk discussed the need for multiple-view architectural descriptions.
• Larry Howard reported on his experience with developing structural models for air vehicle simulations. He identified the need for ADLs to express reusable structural abstractions.
• Jeff Kramer presented work on Self Organising architectures in which compon...
 
The Compressing Proxy
Characteristic Values
Article
The progression of component-based software engineering (CBSE) is essential to the rapid, cost-effective development of complex software systems. Given the choice of well-tested components, CBSE affords reusability and increases reliability. However, applications developed according to this practice can often suffer from difficult maintenance and control, problems that stem from improper or inadequate integration solutions. Avoiding such unfortunate results requires knowledge of what causes the interoperability problems in the first place. The time for this assessment is during application design.
 
Article
The ability of a new technology to reuse legacy systems is very important for its economic success. This paper presents a method for integrating legacy systems within distributed object architectures. The necessary steps required for integration are defined. It is explained how to define object interfaces, and a detailed overview of how to implement the wrappers is given. The paper also answers the question of which distributed object model is most suitable for legacy integration: a decision model is defined and the evaluation results are presented. Keywords: distributed object architectures, legacy integration, CORBA. 1. Overview Distributed object technology is important for building new-generation information systems. It extends object technology with the power of client/server architecture. From the technological viewpoint the distributed object technology provides a very strong foundation for modern information systems [3, 15]. Investigating Scenarios (1) Doing the Use Cas...
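A minimal sketch of the wrapper step (illustrative Python; the paper's method targets CORBA, and the names here are invented): a legacy routine is hidden behind an object interface, with data conventions adapted at the boundary.

    def legacy_get_balance(acct_no):
        # Stand-in for an existing legacy routine (e.g., a batch transaction)
        # that new code should not call directly.
        return {"1001": 25000}.get(acct_no, 0)   # balance in cents

    class AccountWrapper:
        # Object interface exposed to the distributed architecture.
        def __init__(self, acct_no):
            self._acct_no = acct_no

        def balance(self):
            # Adapt legacy conventions (cents as int) to the new interface.
            return legacy_get_balance(self._acct_no) / 100

    print(AccountWrapper("1001").balance())   # 250.0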
 
Top-cited authors
Dewayne E. Perry
  • University of Texas at Austin
Barbara Kitchenham
  • Keele University
Koushik Sen
  • Daffodil International University
Gul Agha
  • University of Illinois, Urbana-Champaign
Mary Jean Harrold
  • Georgia Institute of Technology