International Journal of Software Engineering and Knowledge Engineering

Published by World Scientific Publishing
Publications
Effective fault-handling in emerging complex distributed applications requires the ability to dynamically adapt resource allocation and fault-tolerance policies in response to possible changes in environment, application requirements, and available resources. The paper reports on the design and implementation of an adaptive fault-tolerance middleware (AFTM) using a CORBA-compliant object request broker running on the Solaris open system platform. The paper also briefly discusses the essential capabilities of AFTM, the overall system architecture, and its design decisions.
 
The concept of software architecture has recently emerged as a new way to improve our ability to effectively construct large-scale software systems. However, there is no formal architecture specification language available to model and analyze complex real-time systems. In this paper, an object-oriented, logic-based architecture specification language for real-time systems is discussed. Representation of real-time properties and timing constraints, and their integration with the language to model real-time concurrent systems, is given. Architecture-based specification languages enable the construction of large system architectures and provide a means of testing and validation. In general, checking the timing constraints of real-time systems is done by applying model checking to the constraint expressed as a formula in temporal logic. The complexity of such a formal method depends on the size of the representation of the system. This size can increase exponentially when the system consists of several concurrently executing real-time processes. This means that the complexity of the algorithm is exponential in the number of processes of the system, and thus the size of the system becomes a limiting factor. This problem is known in the literature as the “state explosion problem”. We propose a method of incremental verification of architectural specifications for real-time systems. The method has lower complexity in the sense that it does not work on the whole state space but only on the subset of it that is relevant to the property to be verified.
 
PARFORMAN (PARallel FORMal ANnotation language) is a specification language for expressing the intended behaviour or known types of error conditions when debugging or testing parallel programs. The high-level debugging approach supported by PARFORMAN is model-based. Models of intended or faulty behaviour can be succinctly specified in PARFORMAN. These models are then compared with the actual behaviour, in terms of execution traces of events, in order to localize possible bugs. PARFORMAN is based on an axiomatic model of target program behaviour. This model, called the H-space (history-space), is formally defined through a set of general axioms about three basic relations between events: events may be sequentially ordered, they may be parallel, or one of them may be included in another composite event. The notion of an event grammar is introduced to describe allowed event patterns over a certain application domain or language. Auxiliary composite events such as snapshots are introduced to be able to define the notion "occurred at the same time" at suitable levels of abstraction. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.
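As an illustration of comparing an execution trace against a model of faulty behaviour (a sketch only; the event format and the double-acquire pattern below are invented for this example and are not PARFORMAN notation), a minimal Python checker might look like this:

# Hypothetical execution trace of events (process, action, resource); the model
# of faulty behavior is "a lock is acquired while some process already holds it".
trace = [
    ("p1", "acquire", "L"),
    ("p1", "release", "L"),
    ("p2", "acquire", "L"),
    ("p1", "acquire", "L"),   # matches the faulty-behavior pattern
]

def check_double_acquire(trace):
    holder = {}
    for i, (proc, action, res) in enumerate(trace):
        if action == "acquire":
            if res in holder:
                return i, f"{proc} acquired {res} while {holder[res]} still holds it"
            holder[res] = proc
        elif action == "release":
            holder.pop(res, None)
    return None

print(check_double_acquire(trace))  # -> (3, 'p1 acquired L while p2 still holds it')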
 
Biological information computing is rapidly advancing from homogeneous data computation to large-scale heterogeneous data computation. However, the development of data specification protocols, software middleware, and Web services, which support large-scale heterogeneous data exchange, integration, and computation, generally falls behind data expansion rates and bioinformatics demands. The ubiquitous bio-information computing (UBIC2) project aims to disseminate software packages that assist the development of heterogeneous bio-information computing applications that are interoperable and may run in a distributed fashion. UBIC2 lays down the software architecture for integrating, retrieving, and manipulating heterogeneous biological information so that the data behave as if stored in a unified database. The UBIC2 programming library implements the software architecture and provides application programming interfaces (APIs) to facilitate the development of heterogeneous bio-information computing applications. To achieve interoperability, UBIC2 Web services use XML-based data communication, which allows distributed applications to consume heterogeneous bio-information regardless of platform. The documents and software package of UBIC2 are available at http://www.ubic2.org.
 
The CAPSL Integrated Protocol Environment effort aims at providing an intuitive and expressive language for specifying authentication and key distribution protocols and supporting interfaces to various analysis tools. The CAPSL Intermediate Language CIL has been designed with an emphasis on simplifying translators from CIL to other analysis tools. In this paper we describe the design of a CIL-to-Spin connector. We describe how CIL concepts are translated into Spin and propose a general method to model the behaviors of honest principals and the intruder. Based on this method, a prototype connector has been implemented in Gentle, which can automatically translate CIL specifications into Promela code and LTL formulas, thus greatly simplifying the modelling and analysis process.
 
Concurrent programs are more difficult to test than sequential programs because of nondeterministic behavior. An execution of a concurrent program nondeterministically exercises a sequence of synchronization events, called a synchronization sequence (or SYN-sequence). Nondeterministic testing of a concurrent program P consists of executing P with a given input many times in order to exercise distinct SYN-sequences and produce different results. We present a new testing approach, called reachability testing. If P with input X contains a finite number of SYN-sequences, reachability testing of P with input X can execute all possible SYN-sequences of P with input X. We show how to perform reachability testing of concurrent programs using read and write operations. Also, we present results of empirical studies comparing reachability and nondeterministic testing. Our results indicate that reachability testing has advantages over nondeterministic testing.
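The following Python sketch conveys the idea of exercising every SYN-sequence of a fixed input: it enumerates all interleavings of the read and write operations of two hypothetical threads (x = x + 1 and x = x * 2 on a shared variable) and deterministically replays each one. The program and names are invented for illustration and are not taken from the paper.

# Each thread first reads x into a local register, then writes a function of it.
THREAD_A = [("A", "read"), ("A", "write")]   # x = x + 1
THREAD_B = [("B", "read"), ("B", "write")]   # x = x * 2

def interleavings(a, b):
    """Yield every merge of event sequences a and b that preserves per-thread order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def replay(syn_sequence):
    """Deterministically replay one SYN-sequence of read/write events."""
    shared = {"x": 1}
    local = {"A": None, "B": None}
    funcs = {"A": lambda v: v + 1, "B": lambda v: v * 2}
    for thread, op in syn_sequence:
        if op == "read":
            local[thread] = shared["x"]
        else:  # "write"
            shared["x"] = funcs[thread](local[thread])
    return shared["x"]

if __name__ == "__main__":
    for seq in interleavings(THREAD_A, THREAD_B):
        print([f"{t}:{op}" for t, op in seq], "-> x =", replay(seq))

Distinct SYN-sequences yield distinct final values (2, 3, or 4 here), which is exactly the variation that nondeterministic testing may or may not stumble upon and that reachability testing enumerates systematically.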
 
Transformation systems make it possible to support developments leading from an initial specification to a final program in a totally formal way. Transformations use valid properties of input objects to produce new, equivalent ones. Most transformations use functional properties to increase the efficiency of programs. In doing so, they affect nonfunctional properties which, more often than not, are not formally expressed. The problem the author addresses is to recognize situations in which transformations can be applied on the basis of the evaluation of a defined nonfunctional property, and his aim is to relate program transformations to nonfunctional property evaluations. Indeed, a particular transformation tactic can be applied when a given property does not hold. The DEVA language has been used as a support for experiments in the development of programs.
 
Software engineering is particularly concerned with the construction of large systems. Existing software engineering tools tend to be adequate for medium-size systems, but not as useful for large and very large systems. This paper presents a model viewing system as the basis for graph-based CASE tools which overcomes this lack of scalability. With existing commercial tools the number of user steps to browse or edit a model increases with the size of the model. With the approach of this paper the number of steps remains constant regardless of the model size. In fact, only one step is required for operations such as adding a flow between two processes anywhere in the model, or moving a submodel to a new parent. This paper outlines the approach with a structured analysis example, provides a formal description of the model viewing system, and discusses some limitations.
 
In object-oriented software development, it is generally accepted that inheritance should be used only to model a generalization/specialization relationship (i.e. an IS-A relation). This analysis/design guideline is too permissive. Some researchers thus advocate that a subclass should inherit the full behavior of its superclass. Behavior inheritance, however, is far too restrictive. In this paper, we establish a formal model that can clearly differentiate IS-A from behavior inheritance. Under this model, IS-A and behavior inheritance can be decomposed into more fundamental concepts: subsets and abstraction/concretization. Also, we are able to develop a set of refined guidelines regarding the use of inheritance.
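The distinction can be illustrated with the classic rectangle/square example (an informal Python sketch, not the paper's formal model): the IS-A relation holds conceptually, yet the subclass does not inherit the superclass behavior that clients rely on.

class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_width(self, w):
        # Superclass contract: changing the width leaves the height untouched.
        self.width = w

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """IS-A holds conceptually (every square is a rectangle), but behavior differs."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # Behavior inheritance is broken: the superclass contract
        # ("height is unaffected") no longer holds for the subclass.
        self.width = self.height = w


def double_width(r):
    # A client written against the superclass behavior.
    old_area = r.area()
    r.set_width(r.width * 2)
    assert r.area() == 2 * old_area, "superclass behavior violated"

for shape in (Rectangle(2, 3), Square(2)):
    try:
        double_width(shape)
        print(type(shape).__name__, "respects the superclass behavior")
    except AssertionError:
        print(type(shape).__name__, "is IS-A but not behaviorally substitutable")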
 
Areas of computer application are broadening rapidly due to the rapid improvement in the performance of computer hardware. This results in increased demands for computer applications that are large and have complex temporal characteristics. This paper introduces a real-time systems analysis method named PARTS. PARTS supports analyses from two viewpoints: the external viewpoint, a view of the system from the user's perspective, and the internal viewpoint, a view from the developer's perspective. These viewpoints are specified using formal languages: Real-Time Events Trace (RTET) for the external viewpoint, and Time Enriched Statechart (TES) and PARTS Data Flow Diagram (PDFD) for the internal viewpoint. All PARTS languages are based on Metric Temporal Logic (MTL), and consistency of the specifications made from the two different viewpoints is analyzed based on the same MTL formalism.
 
PERTS is a prototyping environment for real-time systems. It contains schedulers and resource access protocols for time-critical applications, together with a comprehensive set of tools for the analysis, validation, and evaluation of real-time systems built on the scheduling paradigms supported by these building blocks. This paper describes the underlying models of real-time systems supported by PERTS, as well as its capabilities and intended use. A key component is the schedulability analyzer. The basic version of this system of tools supports the validation and evaluation of real-time systems built on the framework of the periodic-task model. This system of tools is now available.
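As a flavor of analysis in the periodic-task model (not PERTS code; the task set below is hypothetical, and the PERTS analyzer is far more comprehensive), the classical Liu-and-Layland utilization-bound test for rate-monotonic scheduling can be written in a few lines of Python:

def rm_utilization_test(tasks):
    """Sufficient (not necessary) schedulability test for rate-monotonic
    scheduling of independent periodic tasks.
    tasks: list of (worst_case_execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)        # Liu & Layland bound
    return utilization, bound, utilization <= bound

# Hypothetical task set: (C, T) in milliseconds.
u, bound, ok = rm_utilization_test([(1, 4), (2, 8), (3, 20)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by the sufficient test: {ok}")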
 
Current interest in improving the effectiveness and predictability of software development has led to a recent focus on software process modeling and improvement. Process-centered software development environments (PCSDEs) have been examined as a useful adjunct to software process modeling. A number of PCSDEs have been designed and built; an examination of the range of potential users of such environments reveals a wide range of needs with respect to information about an enacted software process and how this information is presented. The paper describes one aspect of a PCSDE supporting multiple simultaneous views: the design of a representation of enacted software processes which is suitable for the generation of multiple simultaneous views.
 
Figures: English-Auction protocol for surplus flight tickets; using UML class diagrams to specify agent behavior, and its abbreviations.
In the past, research on agent-oriented software engineering had largely lacked contact with the world of industrial software development. Recently, a cooperation has been established between the Foundation for Intelligent Physical Agents (FIPA) and the Object Management Group (OMG) aiming to increase the acceptance of agent technology in industry by relating it to de facto standards (object-oriented software development) and supporting the development environment throughout the full system lifecycle. As a first result of this cooperation, we proposed AGENT UML [1].
 
Figures and tables: classification of metrics; test approach at a glance; excerpt of metrics; trend analysis of selection rules.
A well-known approach for identifying defect-prone parts of software in order to focus testing is to use different kinds of product metrics such as size or complexity. Although this approach has been evaluated in many contexts, the question remains whether there are further opportunities to improve test focusing. One idea is to identify other types of information that may indicate the location of defect-prone software parts. Data from software inspections, in particular, appear to be promising. This kind of data might point directly to software parts that have inherent difficulties or programming challenges, and in consequence might be defect-prone. This article first explains how inspection and product metrics can be used to focus testing activities. Second, we compare selected product and inspection metrics commonly used to predict defect-prone parts (e.g., size and complexity metrics, inspection defect content metrics, and defect density metrics). Based on initial experience from two case studies performed in different environments, the suitability of different metrics for predicting defect-prone parts is illustrated. The studies revealed that inspection defect data seems to be a suitable predictor, and that a combination of certain inspection and product metrics led to the best prioritizations in our contexts. In addition, qualitative experience is presented, which substantiates the expected benefit of using inspection results to optimize testing.
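A minimal sketch of the idea of combining product and inspection metrics to prioritize modules for testing (the module data, metric choices, and weights below are invented; the article derives suitable combinations empirically from its case studies):

# Hypothetical module data: lines of code, cyclomatic complexity,
# and defects found for that module during inspection.
modules = {
    "parser.c":    {"loc": 1200, "complexity": 45, "inspection_defects": 9},
    "scheduler.c": {"loc": 800,  "complexity": 60, "inspection_defects": 2},
    "ui.c":        {"loc": 300,  "complexity": 10, "inspection_defects": 1},
}

def normalize(values):
    hi = max(values.values()) or 1
    return {k: v / hi for k, v in values.items()}

size = normalize({m: d["loc"] for m, d in modules.items()})
cplx = normalize({m: d["complexity"] for m, d in modules.items()})
# Inspection defect density: defects per KLOC found during inspection.
dens = normalize({m: d["inspection_defects"] / (d["loc"] / 1000) for m, d in modules.items()})

# Combined score; the weights here are arbitrary placeholders.
score = {m: 0.3 * size[m] + 0.3 * cplx[m] + 0.4 * dens[m] for m in modules}
for m in sorted(score, key=score.get, reverse=True):
    print(f"{m:12s} test priority score = {score[m]:.2f}")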
 
The task of designing secure software systems is fraught with uncertainty, as data on uncommon attacks is limited, costs are difficult to estimate, and technology and tools are continually changing. Consequently, experts may interpret the security risks posed to a system in different ways, leading to variation in assessment. This paper presents research into measuring the variability in decision making between security professionals, with the ultimate goal of improving the quality of security advice given to software system designers. A set of thirty-nine cyber-security experts took part in an exercise in which they independently assessed a realistic system scenario. This study quantifies agreement in the opinions of experts, examines methods of aggregating opinions, and produces an assessment of attacks from ratings of their components. We show that when aggregated, a coherent consensus view of security emerges which can be used to inform decisions made during systems design.
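A toy Python sketch of aggregating independent expert ratings and quantifying disagreement (median and standard deviation are used here purely for illustration; the ratings are hypothetical and this is not the paper's aggregation method):

import statistics

# Hypothetical ratings (0-10) given independently by experts to two attack components.
ratings = {
    "phishing_foothold": [7, 8, 6, 9, 7, 8],
    "db_exfiltration":   [4, 9, 2, 8, 5, 3],
}

for component, scores in ratings.items():
    consensus = statistics.median(scores)   # one simple aggregation rule
    spread = statistics.stdev(scores)       # high spread = low expert agreement
    print(f"{component:18s} consensus={consensus:.1f}  disagreement(sd)={spread:.2f}")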
 
Multi-agent systems... In this paper we introduce three additional organisational concepts - organisational rules, organisational structures, and organisational patterns - and discuss why we believe they are necessary for the complete specification of computational organisations. In particular, we focus on the concept of organisational rules and introduce a formalism, based on temporal logic, to specify them. This formalism is then used to drive the definition of the organisational structure and the identification of the organisational patterns. Finally, the paper sketches some guidelines for a methodology for agent-oriented systems based on our expanded set of organisational abstractions.
 
This paper reports on the development of the data binding tool and its use in Ada source code reusability and software system design assessment. The tool was built around the metric of data bindings. Data bindings fall in the category of data visibility metrics and are used to measure inter-component interactions. Software system components are defined in the context of the Ada language using a flexible scheme. They are used, along with cluster analysis, to present structural configurations of a software system. The clustering technique as well as the tool design and its problems are discussed. The analysis of dendrograms (trees of components produced by the tool) reveals several classes of system dendrograms and provides a simple mechanism for Ada source code reusability. Finally, the implications of different design methodologies used to develop the test software are discussed and explanations for the several types of dendrogram formulations are given. Keywords: Data Bin...
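A small sketch of the clustering step (not the tool itself): given hypothetical data-binding counts between components, a single-linkage agglomerative pass produces the merge history that a dendrogram would display.

# Hypothetical data-binding counts between components (higher = more coupled).
bindings = {
    ("pkg_io", "pkg_fmt"): 12, ("pkg_io", "pkg_net"): 1,
    ("pkg_fmt", "pkg_net"): 2, ("pkg_io", "pkg_math"): 0,
    ("pkg_fmt", "pkg_math"): 0, ("pkg_net", "pkg_math"): 4,
}

def strength(a, b):
    # Binding strength between two clusters = max pairwise count (single linkage).
    return max(bindings.get((x, y), bindings.get((y, x), 0)) for x in a for y in b)

clusters = [("pkg_io",), ("pkg_fmt",), ("pkg_net",), ("pkg_math",)]
while len(clusters) > 1:
    # Merge the two most strongly bound clusters; the merge order is the dendrogram.
    i, j = max(((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
               key=lambda p: strength(clusters[p[0]], clusters[p[1]]))
    print(f"merge {clusters[i]} + {clusters[j]} at strength {strength(clusters[i], clusters[j])}")
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]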
 
Access to information using the Internet has undergone dramatic change and expansion recently. The unrivaled success of the World Wide Web has altered the Internet from something approachable only by the initiated to something of a media craze -- the information superhighway made manifest in the personal 'home page.' This paper surveys the beginnings of network information discovery and retrieval, how the Web has created a surprising level of integration of these systems, and where the current state of the art lies in creating globally accessible information spaces and supporting access to those information spaces. * This work was supported in part by NASA as part of the Repository Based Software Engineering project, cooperative agreement NCC-9-16. 1 -- Introduction In a previous survey [15], I addressed repository services in support of software development. This paper reexamines some of those services in the more general context of information retrieval and examines a number o...
 
Figure: the PrT model for the multi-agent blocks problem.
How agents accomplish a goal task in a multi-agent system is usually specified by multi-agent plans built from basic actions (e.g. operators) of which the agents are capable. The plan specification provides the agents with a shared mental model for how they are supposed to collaborate with each other to achieve the common goal. Making sure that the plans are reliable and fit for the purpose for which they are designed is a critical problem with this approach. To address this problem, this paper presents a formal approach to modeling and analyzing multi-agent behaviors using Predicate/Transition (PrT) nets, a high-level formalism of Petri nets. We model a multi-agent problem by representing agent capabilities as transitions in PrT nets. To analyze a multi-agent PrT model, we adapt planning graphs as a compact structure for reachability analysis, which is consistent with the concurrent semantics. We also demonstrate that one can analyze whether parallel actions specified in multi-agent plans can be executed in parallel and whether the plans can achieve the goal by analyzing the dependency relations among the transitions in the PrT model.
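The reachability idea can be sketched with an ordinary place/transition net in Python (a simplification: PrT nets carry predicates and typed tokens, which are omitted here, and the paper uses planning graphs rather than plain breadth-first search; the two-block example and names are invented):

from collections import deque

# Each transition (an agent action) consumes its precondition tokens and
# produces its postcondition tokens. Example: move block "a" onto block "b".
transitions = {
    "pickup_a":  ({"ontable_a": 1, "clear_a": 1, "handempty": 1}, {"holding_a": 1}),
    "stack_a_b": ({"holding_a": 1, "clear_b": 1}, {"on_a_b": 1, "clear_a": 1, "handempty": 1}),
}
initial = {"ontable_a": 1, "clear_a": 1, "clear_b": 1, "handempty": 1}
goal = {"on_a_b": 1}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def reachable_goal(initial, goal):
    """Breadth-first reachability analysis over the marking graph."""
    frontier, seen = deque([(initial, [])]), set()
    while frontier:
        marking, plan = frontier.popleft()
        if all(marking.get(p, 0) >= n for p, n in goal.items()):
            return plan
        key = frozenset(marking.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, post) in transitions.items():
            if enabled(marking, pre):
                frontier.append((fire(marking, pre, post), plan + [name]))
    return None

print("plan reaching the goal:", reachable_goal(initial, goal))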
 
In this paper we describe a generalization of our work to facilitate the specification and deployment of distributed, cooperating software agents for a decentralized workflow management system. Our distributed software agents are specified using a simple visual language that composes two basic units of functionality. Event filters receive events from workflow components or other agents and pass these on to related agents if they conform to specified patterns. Actions receive events, typically from filters, and carry out some specified processing in response to the events received. Allowing such agent specifications to be distributed requires the use of intermediate agents to facilitate inter-agent communication and coordination. We describe visual language-based approaches we have developed to do this. Distributed agents need to be created, configured, inspected and monitored by users, or by other agents. We describe how this is achieved in our workflow system by extensions of the visual languages used to specify the agents. We then describe our experiences in deploying our distributed agents, compare and contrast our approach to related research, and describe improvements and generalizations of our approaches that we plan to make in the future.
 
Entity-Relationship modelling is a rather intuitive technique for specifying the structure of complex data. The technique is popular in part because the structure of an ER-model is easily grasped, and it is usually supported by diagrams or other visualizing tools. This paper deals with a detailed analysis of ER-modelling with the goal of deriving an algebraic specification for a given ER-model. This is motivated by considerations regarding program specification for data-intensive applications. We indicate how the technique demonstrated here may be combined with formal techniques for specifying the functional behavior of a system.
 
Transforming software requirements into a software design involves the iterative partition of a solution into software components. The process is human-intensive and does not guarantee that design objectives such as reusability, evolvability, and reliable performance are satisfied. The costly process of designing, building, and modifying high assurance systems motivates the need for precise methods and tools to generate designs whose corresponding implementations are reusable, evolvable, and reliable. This paper demonstrates an analytical approach for partitioning basic elements of a software solution into reusable and evolvable software components. First, we briefly overview the role of partitioning in current design methods and explain why computer-aided design (CAD) tools to automate the design of microelectromechanical systems (MEMS) are high assurance applications. Then we present our approach and apply it to the design of CAD software to layout an optimized design of a MEMS accel...
 
This paper motivates the need for AOCE and gives examples of using aspects during component requirements engineering, design and implementation. We begin with an overview of the concept of component aspects, using a component-based process management environment for illustration. We describe aspect-oriented component requirements engineering, and the refinement of component requirements codified by aspects into design-level aspects. Implementation of software components using design-level aspect information is described, along with various run-time uses of aspects. Tool support is briefly discussed, and we compare and contrast our approach with other component development methods and architectures. We conclude with an overview of current and possible future research directions.
 
Three sample information retrieval systems, archie, autoLib, and WAIS, are compared as to their expressiveness and usefulness --- first, in the general context of information retrieval, and then as prospective software reuse repositories. While the representational capabilities of these systems are limited, they provide a useful foundation for future repository efforts, particularly from the perspective of repository distribution and coherent user interface design. 1 -- Introduction As information becomes an increasingly important sector of the global economy, the way in which we access that information -- and thereby the way in which we access and structure knowledge -- becomes a critical concern. The engineering of knowledge is quickly becoming an area of research in its own right, independent of its parent disciplines of artificial intelligence, database systems, and information retrieval; consider the title of the journal that you now hold in your hands. Wegner recognized the...
 
Process modeling is a rather young and very active research area. During the last few years, new languages and methods have been proposed to describe software processes. In this paper we try to clarify the issues involved in software process modeling and identify the main approaches. We start by motivating the use of process modeling and its main objectives. We then propose a list of desirable features for process languages. The features are grouped as either already provided by languages from other fields or as specific features of the process domain. Finally, we review the main existing approaches and propose a classification scheme.
 
Figure: modified resolution rule.
Reusing software may greatly increase the productivity of software engineers and improve the quality of developed software. Software component libraries have been suggested as a means for facilitating reuse. A major difficulty in designing software libraries is in the selection of a component representation that will facilitate the classification and the retrieval processes. Using formal specifications to represent software components facilitates the determination of reusable software because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation. This paper presents an approach, based on formal methods, to the classification, organization and retrieval of reusable software components. From a set of formal specifications, a two-tiered hierarchy of software components is constructed. The formal specifications represent software that has been implemented and verified for correctness. The lower-level hierarchy is created by a subsumption test algorithm that determines whether one component is more general than another; this level facilitates the application of automated logical reasoning techniques for a fine-grained, exact determination of reusable candidates. The higher-level hierarchy provides a coarse-grained determination of reusable candidates and is constructed by applying a hierarchical clustering algorithm to the most general components from the lower-level hierarchy. The hierarchical organization of the software component specifications provides a means for storing, browsing, and retrieving reusable components that is amenable to automation. In addition, the formal specifications facilitate the verification process that proves a given software component correctly satisfies the current problem. A prototype browser that provides a graphical framework for the classification and retrieval process is described.
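A toy approximation of the subsumption test (sets of atomic predicates stand in for real formal specifications, and the component names are invented; the paper uses automated logical reasoning rather than simple set inclusion):

# A component specification is approximated as a pair of sets of atomic
# predicates (preconditions, postconditions). Component A "subsumes" (is more
# general than) B if A requires no more than B and guarantees at least as much.
specs = {
    "sort_any":      ({"is_list(xs)"},                 {"sorted(ys)", "perm(xs, ys)"}),
    "sort_nonempty": ({"is_list(xs)", "nonempty(xs)"}, {"sorted(ys)", "perm(xs, ys)"}),
    "perm_only":     ({"is_list(xs)"},                 {"perm(xs, ys)"}),
}

def subsumes(a, b):
    pre_a, post_a = specs[a]
    pre_b, post_b = specs[b]
    return pre_a <= pre_b and post_b <= post_a

for a in specs:
    for b in specs:
        if a != b and subsumes(a, b):
            print(f"{a} is more general than {b}: it can be reused wherever {b} fits")

Ordering components by this relation yields the kind of lower-level generality hierarchy the paper describes; clustering the most general components would then form the coarse-grained upper level.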
 
We evaluate a class of learning algorithms known as inductive logic programming (ILP) methods on the task of predicting fault density in C++ classes. Using these methods, a large space of possible hypotheses is searched in an automated fashion; further, the hypotheses are based directly on an abstract logical representation of the software, eliminating the need to manually propose numerical metrics that predict fault density. We compare two ILP systems, FOIL and FLIPPER, and conclude that FLIPPER generally outperforms FOIL on this problem. We analyze the reasons for the differing performance of these two systems, and based on the analysis, propose two extensions to FLIPPER: a user-directed bias towards easy-to-evaluate clauses, and an extension that allows FLIPPER to learn "counting clauses". Counting clauses augment logic programs with a variation of the "number restrictions" used in description logics, and significantly improve performance on this problem when prior knowledge is used. We also evaluate the use of ILP techniques for automatic generation of Boolean indicators and numeric metrics from the calling tree representation.
 
Figures: how M RPROP varies as additional properties are added to a 100-main-subject knowledge base; calculation of M DIV over inheritance hierarchies with five main subjects and five properties (M DIV is zero in the pathological cases where all properties are introduced at a single concept, and one in the well-balanced case where each concept introduces the same number of new properties); the relationship between M CONCEN, a normalized standard deviation of the number of properties introduced at each main subject transformed by a sigmoid, and M DIV; complexities of various inheritance hierarchies under the M ISA metric, which is somewhat independent of the addition of extra parents through multiple inheritance.
Metrics are widely researched and used in software engineering; however, there is little analogous work in the field of knowledge engineering. In other words, there are no widely-known metrics that the developers of knowledge bases can use to monitor and improve their work. In this paper we adapt the GQM (Goals-Questions-Metrics) methodology that is used to select and develop software metrics. We use the methodology to develop a series of metrics that measure the size and complexity of concept-oriented knowledge bases. Two of the metrics measure raw size; seven measure various aspects of complexity on scales of 0 to 1, and are shown to be largely independent of each other. The remaining three are compound metrics that combine aspects of the other nine in an attempt to measure the overall 'difficulty' or 'complexity' of a knowledge base. The metrics have been implemented and tested in the context of a knowledge management system called CODE4. 1. Introduction There has been substantial ...
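A rough sketch of the kind of computation involved (the knowledge base, the scaling, and the sigmoid normalization below are invented for illustration and do not reproduce the paper's exact formulas):

import math

# Hypothetical knowledge base: number of new properties introduced at each
# main subject (concept) of an inheritance hierarchy.
properties_introduced = {"Thing": 0, "Vehicle": 3, "Car": 1, "Truck": 1, "Bicycle": 2}

counts = list(properties_introduced.values())
mean = sum(counts) / len(counts)
std_dev = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))

# Map a normalized standard deviation through a sigmoid onto a 0-to-1 scale,
# in the spirit of the concentration metric; the scaling here is arbitrary.
concentration = 2 / (1 + math.exp(-std_dev / mean)) - 1 if mean else 0.0
print(f"std dev = {std_dev:.2f}, concentration-style metric = {concentration:.2f}")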
 
Table: sample computations of worst-case execution time (all times in msec).
The Chimera Methodology is a software engineering paradigm that enables rapid development of real-time applications through use of dynamically reconfigurable and reusable software. It is targeted towards a distributed shared memory computing environment. The primary contribution of this research is the port-based object model of a real-time software component. The model is obtained by applying the port-automaton formal computational theory to object-based design. A finite state machine, detailed interface specifications, and a C-language template are used to define the port-based object. Tools to support the integration, scheduling, and state variable communication between the objects have been developed and incorporated into the Chimera Real-Time Operating System. Techniques for verifying correctness and analyzing performance are also provided for configuration managers that integrate software designed using the port-based object model. 1. INTRODUCTION The Chimera Methodology is a ...
 
Querying source code interactively for information is a critical task in reverse engineering of software. However, current source code query systems succeed in handling only small subsets of the wide range of queries possible on code, trading generality and expressive power for ease of implementation and practicality. We attribute this to the absence of clean formalisms for modeling and querying source code. In this paper, we present an algebraic framework (Source Code Algebra or SCA) for modeling and querying source code. The framework forms the basis of our query system for C source code. An analogy can be drawn with relational algebra, which forms the basis for relational databases. The benefits of using SCA include the integration of structural and flow information into a single source code data model, the ability to process high-level source code queries (command-line, graphical, relational, or pattern-based) by translating them into SCA expressions which can be evaluated using th...
 
Figure: a graph transformation system modelling the customer's view of a bank.
In this paper we present a specification technique based on graph transformations which supports such a development approach. The use of graphs and graph transformations supports an intuitive understanding and an integration of static and dynamic aspects on a well-defined semantic basis. On this background, formal notions of view and view relation are developed, and the behaviour of views is described by a loose semantics. The integration of two views derived from a common reference model is done in two steps. First, dependencies between the views which are not given by the reference model are determined, and the reference model is extended appropriately. This is the task of a model manager. If the two views and the reference model are consistent, the actual view integration can be performed automatically. For the case of more than two views, more general scenarios are developed and discussed. All concepts and results are illustrated with the well-known example of a banking system.
 
and physical media. In general use, the term 'medium' denotes any kind of intermediate agency, means or channel. In the context of multimedia the term 'medium' is applicable with two specific meanings, setting aside this common usage. Any subject matter being communicated has an associated medium which is its carrier, or vector, in the physical sense. This sense is applicable to air waves for speech, to printing ink on paper for text, to photographic film for moving images, or to the variety of physical substrates that may carry digitally coded signals. The variety and sophistication of traditional physical carrier media (for example the multivarious types which may be used in cataloguing the objects in an art museum), the characteristics of the objects associated with them (for example the many forms which the book has taken), and their social and cultural impact (for example that of photographic film as used in the cinema) form an important area of study in itself. In the second sense, ...
 
Figures: constructing a valid configuration knowledge base; product model of a configurable PC; a configured product as an instance model; aggregation in the component port model.
In many domains, software development has to meet the challenges of developing highly adaptable software very rapidly. In order to accomplish this task, domain specific, formal description languages and knowledge-based systems are employed. From the viewpoint of the industrial software development process, it is important to integrate the construction and maintenance of these systems into standard software engineering processes. In addition, the descriptions should be comprehensible for the domain experts in order to facilitate the review process. For the realization of product configuration systems, we show how these requirements can be met by using a standard design language (UML-Unified Modeling Language) as notation in order to simplify the construction of a logic-based description of the domain knowledge. We show how classical description concepts for expressing configuration knowledge can be introduced into UML and be translated into logical sentences automatically. These sentences are exploited by a general inference engine solving the configuration task.
 
Figures: ER-like diagram for Modula-2 programs; object and version plane; cutout of a schema-compatible graph; example of the use of derived attributes; user interface of the CoMa system.
Due to increasing complexity of hardware and software systems, configuration management has been receiving more and more attention in nearly all engineering domains (e.g. electrical, mechanical, and software engineering). This observation has driven us to develop a domain-independent and adaptable configuration management model (called CoMa) for managing systems of engineering design documents. The CoMa model integrates composition hierarchies, dependencies, and versions into a coherent framework based on a sparse set of essential configuration management concepts. In order to give a clear and comprehensible specification, the CoMa model is defined in a high-level, multi-paradigm specification language (PROGRES) which combines concepts from various disciplines (database systems, knowledge-based systems, graph rewriting systems, programming languages). Finally, we also present an implementation which conforms to the formal specification and provides graphical, structure-oriented to...
 
Figures: development environment; cooperative execution; incorrect cooperative interaction, where t1 terminates its execution without validating the constraint c1, and where at its termination t1 evaluates the constraint c1 on the last consistent value of y.
In this paper, a hybrid approach to support cooperation is presented. The originality of this approach is the ability to enforce general properties on cooperative interactions while using the semantics of applications to fit particular situations or requirements. This paper gives a brief idea of the general properties enforced on activity interactions. It describes in detail the semantic rules that control activity results, the impact of the cooperation on these rules, and how both dimensions interact.
 
Introduction Knowledge management by experience databases (EXDBs) is gradually coming into use. This applies, e.g., to banking, oil production and ship building, as well as to software engineering. The goal is to create and sustain a learning organization, where the bottom-line criterion is satisfied customers in the spirit of TQM or ISO-9000. 2 Some useful definitions Knowledge is the use of facts, truths or principles from studies or investigations. That is, the available information must be made operational for ("learned" by) the person or group in question. Thus, information is not automatically knowledge. Explicit knowledge is what can be formalized, e.g. as process models or guidelines in a quality system. Tacit knowledge is the operational skills among practitioners, including practical judgement capabilities (e.g. intuition). We can also distinguish between ease of transfer of local vs. global knowledge, and between ease of use of programmable (often explicit) vs. unique (o
 
Figure: the multiple view analysis framework, mapping the concrete semantics of each notation to abstract semantics via abstraction functions.
These analyses are performed to guarantee that each design is well-constructed [15, 13]. Well-constructed designs are subsequently subjected to the multiple view analysis. Given the wide range among design views, it is practically infeasible to perform MVA at the concrete representation level. Thus, some form of abstraction must be carried out before engaging in MVA. Our approach to abstraction is derived from the abstract interpretation method [16] (see Figure 2). The starting point of the framework is a concrete representation which associates with each Data Flow Diagram (DDFD) and Structure Chart (DSC) a formal graph representation of the corresponding designs. To facilitate the abstract representation analysis, an abstraction function is defined based on common feature...
 
More than twenty years ago the idea of producing software systems from reusable software components was proposed. Since that time many changes have taken place in Computer Science and Software Engineering, but software systems are still built as one-of-a-kind craftsman efforts. A method for software construction using components is rationalized using experience from software components, program transformations, system architecture, industrial large systems, automatic programming and program generation. Experience with the method is discussed. The limiting factors of the method that prevent the widespread use of reusable software components are identified. Our wish to build software systems from reusable software components represents a shift from craftsman production to mass-production; this shift is forced upon us by the ever increasing size of the software systems we build. The mass-production approach has its drawbacks in that the design of useful standard parts and assemblies is very expensive work and requires craftsman experience; also, once a set of standard parts is created it may not suffice to construct all the objects desired. Software Components The idea of constructing software from general, well-specified, and well-tested software components is an appealing one. After all, we software engineers have seen the computer hardware
 
: We present a model of the data structure domain that is expressed in terms of the GenVoca domain modeling concepts [Bat91]. We show how familiar data structures can be encapsulated as realms of plug-compatible, symmetric, and reusable components, and we show how complex data structures can be formed from their composition. The target application of our research is a precompiler for specifying and generating customized data structures. Keywords: software building-blocks, domain modeling, software reuse, data structures. 1.0 Introduction A fundamental goal of software engineering is to understand how software components fit together to form complex systems. Domain modeling is a means to achieve this goal; it is the study of a domain of similar software systems to identify the primitive and reusable components of that domain and to show how compositions of components not only explain existing systems but also predict families of yet unbuilt systems that have interesting and nove...
 
This paper is an attempt at a mathematical investigation of the software development process in the context of declarative logic programming. We introduce notions of specification and specification constructor which are developed from a natural language description of a problem. Generalizations of logic programs, called lp-functions, are introduced to represent these specifications. We argue that the process of constructing an lp-function representing a specification S should be supported by certain types of mathematical results which we call representation theorems. We present two such theorems to illustrate the idea. 1 Introduction This paper is written in the framework of the declarative logic programming paradigm (see, for instance, [17, 14]) which strives to reduce a substantial part of the programming process to the description of objects comprising the domain of interest and relations between these objects. After such a description is produced by a programmer it can be queried to establish truth o...
 
Figure: the reverse engineering design space.
Program understanding can be enhanced using reverse engineering technologies. The understanding process is heavily dependent on both individuals and their specific cognitive abilities, and on the set of facilities provided by the program understanding environment. Unfortunately, most reverse engineering tools provide a fixed palette of extraction, selection, and organization techniques. This paper describes a programmable approach to reverse engineering. The approach uses a scripting language that enables users to write their own routines for common reverse engineering activities such as graph layout, metrics, and subsystem decomposition, thereby extending the capabilities of the reverse engineering toolset to better suit their needs. A programmable environment supported by this approach subsumes existing reverse engineering systems by being able to simulate facets of each one.
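An illustrative analogue in Python of the kind of user-written routines such a scripting layer enables, here a fan-in/fan-out metric and a naive subsystem decomposition over a hypothetical extracted call graph (the environment described in the paper uses its own scripting language, not Python):

# Hypothetical extracted call graph: caller -> set of callees.
calls = {
    "ui_draw":  {"gfx_line", "gfx_text"},
    "ui_input": {"ui_draw"},
    "gfx_line": {"os_write"},
    "gfx_text": {"os_write"},
    "os_write": set(),
}

def fan_metrics(graph):
    """A user-written metric routine: fan-in and fan-out per routine."""
    fan_out = {f: len(callees) for f, callees in graph.items()}
    fan_in = {f: 0 for f in graph}
    for callees in graph.values():
        for callee in callees:
            fan_in[callee] += 1
    return fan_in, fan_out

def decompose_by_prefix(graph):
    """A user-written decomposition routine: group routines by name prefix."""
    subsystems = {}
    for f in graph:
        subsystems.setdefault(f.split("_")[0], []).append(f)
    return subsystems

fan_in, fan_out = fan_metrics(calls)
print("fan-in:", fan_in)
print("fan-out:", fan_out)
print("subsystems:", decompose_by_prefix(calls))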
 
Since its introduction in 1969, the phrase "frame problem" has been attributed various interpretations. Most researchers in the field of Artificial Intelligence define the frame problem as the problem of finding an effective representation for reasoning about change. Logicians use the phrase to refer to a much less general, technical problem within logic, whereas philosophers tend to interpret the phrase as the more general problem of determining (ir)relevance. All in all, this discrepancy has led to considerable confusion about the meaning of the phrase. We contend that most of this confusion can be avoided if the original (robotics) context of the frame problem is adhered to. We present an engineering view on the frame problem that allows us to strip the frame problem of associated problem notions like qualification and ramification. The problem that remains is intimately related to the knowledge acquisition bottleneck in knowledge engineering. 1 The Frame Problem Literature spe...
 
A well-known security problem with MPOA is that cut-through connections generally bypass firewall routers if there are any. None of the previously proposed approaches solved the problem properly. In this paper, we propose a novel firewalling scheme for MPOA that nicely fixes the security hole. Our firewalling scheme has three outstanding advantages that make it ideal for securing MPOA-based enterprise networks. First, based on our novel concept of "logical chokepoints", our firewalling scheme does not require the existence of physical chokepoints inside the network. Second, the scheme is nicely embedded into the MPOA protocol so that its cost, performance overhead, and protocol complexity are reduced to a minimum. Third, the scheme is centrally administered so that it scales well to very large networks. 1 Introduction MPOA is proposed as a unified framework that allows MAC and internetwork layer protocols to be transparently transported over an ATM network [2]. In MPOA, ATM...
 
Rule-based software development environments (RBDEs) model the software development process in terms of rules that encapsulate development activities, and assist in executing the process via forward and backward chaining over the rule base. We investigate the scaling up of RBDEs to support (1) multiple views of the rule base for multiple users and (2) evolution of the rule base over the lifetime of a project. Our approach is based on clarifying two distinct functions of rules and chaining: maintaining consistency and automation. By definition, consistency is mandatory whereas automation is not. Distinguishing the consistency and automation aspects of RBDE assistance mechanisms makes it possible to formalize the range of compatible views and the scope of mechanizable evolution steps. Throughout the paper, we use the MARVEL RBDE as an example application. Appeared in International Journal on Software Engineering & Knowledge Engineering, World Scientific, 2(1):59-78, March 1992...
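A minimal forward-chaining sketch in Python (the rules and the mandatory/optional flag are invented for illustration and are not MARVEL syntax), showing how consistency-only chaining differs from full automation:

# Rules encapsulate development activities: when the condition holds over the
# process state, the activity may fire and assert its effect. Marking a rule
# as mandatory (consistency) versus optional (automation) mirrors the
# distinction drawn in the paper.
rules = [
    {"name": "compile", "condition": {"edited"},               "effect": "compiled", "mandatory": True},
    {"name": "analyze", "condition": {"compiled"},             "effect": "analyzed", "mandatory": False},
    {"name": "test",    "condition": {"compiled", "analyzed"}, "effect": "tested",   "mandatory": False},
]

def forward_chain(facts, run_optional=True):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for rule in rules:
            enabled = rule["condition"] <= facts and rule["effect"] not in facts
            if enabled and (rule["mandatory"] or run_optional):
                facts.add(rule["effect"])
                print("fired", rule["name"])
                fired = True
    return facts

print(forward_chain({"edited"}))                      # full automation chaining
print(forward_chain({"edited"}, run_optional=False))  # consistency-only chaining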
 
Researchers who create software production environments face considerable problems. Software production environments are large systems that are very costly to develop. Furthermore, software production environments which support particular software engineering methods may not be applicable to a large number of software production projects. These conditions have formed a trend towards research into ways which will lessen the cost of developing software production environments. In particular, the trend has been towards the construction of meta-environments. In this paper, we attempt to categorize current meta-environment approaches. For each of the categories, we review research efforts which illustrate different approaches within that category. We conclude by presenting an emerging common thread of requirements which links this field together.
 
As software is increasingly used to control safety-critical systems, correctness becomes paramount. Formal methods in software development provide many benefits in the forward engineering aspect of software development. Reverse engineering is the process of constructing a high-level representation of a system from existing lower-level instantiations of that system. Reverse engineering of program code into formal specifications facilitates the utilization of the benefits of formal methods in projects where formal methods may not have previously been used, thus facilitating the maintenance of safety-critical systems. Keywords: formal methods, formal specifications, reverse engineering, maintenance, safety-critical systems 1 Introduction As software is increasingly used to control safety-critical systems, correctness becomes paramount. The demand for software correctness becomes more evident when accidents, sometimes fatal, are due to software errors. For example, recently it was repor...
 
We have developed a framework for specifying high-level software designs. The core of the framework is a very simple visual notation. This notation enables designers to document designs as labelled rectangles and directed edges. In addition to the notation, our framework features a supporting formalism, called ISF (Interconnection Style Formalism). This formalism enables designers to customize the simple design notation by specifying the type of entities, relations, legal configurations of entities and relations, as well as scoping rules of the custom notation. In this paper we present the formal definition of ISF and use ISF to specify two custom design notations. We also describe how ISF specifications, using deductive database technology, are used to generate supporting tools for these custom notations.
 
Software process dynamics challenge the capabilities of process-centered software engineering environments. Dynamic task nets represent evolving software processes by hierarchically organized nets of tasks which are connected by control, data, and feedback flows. Project managers operate on dynamic task nets in order to assess the current status of a project, trace its history, perform impact analysis, handle feedback, adapt the project plan to changed product structures, etc. Developers are supported through task agendas and provision of tools and documents. Chained tasks may be executed in parallel (simultaneous engineering), and cooperation is controlled through releases of document versions. Dynamic task nets are formally specified by a programmed graph rewriting system. Operations on task nets are specified declaratively by graph rewrite rules at a high level of abstraction. Furthermore, editing, analysis, and execution steps on a dynamic task net, which may be interleaved seamlessly, are described in a uniform formalism.
 
Graph transformation is a general visual modeling language which is suitable for stating the dynamic semantics of designed models formally. We present a highly understandable yet precise approach to formally defining the behavioral semantics of UML 2.0 Activity diagrams by using graph transformation. In our approach we take into account both control flow and data flow semantics. Our proposed semantics is based on token-like semantics and traverse-to-completion. The main advantage of our approach is automated formal verification and analysis of UML Activities. We use AGG to design Activities, and we use our previous approach to model checking of graph transformation systems. Hereby, designers can verify and analyze designed Activity diagrams. Since workflow modeling is one of the main application areas of Activities, we use our proposed semantics for modeling and verification of workflows to illustrate our approach.
 
Top-cited authors
Scott Deloach
  • Kansas State University
Jörg P. Müller
  • Technische Universität Clausthal
Bernhard Bauer
  • Universität Augsburg
Haris Mouratidis
  • University of Brighton
Paolo Giorgini
  • Università degli Studi di Trento