Article

Computer-Aided Software Engineering in a Distributed Workstation Environment

Authors: David B. Leblang and Robert P. Chase, Jr.

Abstract

Computer-Aided Software Engineering environments are becoming essential for complex software projects, just as CAD systems have become essential for complex hardware projects. DSEE, the DOMAIN Software Engineering Environment, is a distributed, production-quality software development environment that runs on Apollo workstations. DSEE provides source code control, configuration management, release control, advice management, task management, and user-defined dependency tracking with automatic notification. DSEE incorporates some of the best ideas from existing systems. This paper describes DSEE, contrasts it with other systems, and discusses some of the technical issues involved in the construction of a highly reliable, safe, efficient, and distributed development environment.


... The relations revision-of, variant-of and their subtypes apply to configurations. This is in contrast to SCM systems like SCCS [34], RCS [39], CMS [1], and DSEE [25], where versions of configurations appear to be an afterthought. For example, with RCS, one would have to collect descriptions of all configurations and subconfigurations into a single, atomic object called a Makefile, and allow versions of the entire set only. ...
... A simple, line-based delta consumes between 9 and 16 per cent of its cleartext representation [34,39]. Leblang reports that delta storage combined with blank compression reduces that space to 1-2 per cent of clear-text [25]. Clearly, delta storage makes the luxury of saving multiple versions of atomic objects affordable. It could also be applied to configurations, but may not produce dramatic savings because of the small size of those objects. ...
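The arithmetic behind those figures is easy to reproduce. The sketch below is a rough Python illustration, not the delta format of SCCS, RCS, or DSEE: it measures a line-based delta (here difflib's unified diff) against the cleartext it replaces, and for a small edit to a large file the ratio lands in the few-per-cent range the snippet cites.

```python
import difflib

def delta_ratio(old_lines, new_lines):
    """Size of a line-based delta as a fraction of the new cleartext."""
    delta = list(difflib.unified_diff(old_lines, new_lines, lineterm=""))
    delta_size = sum(len(line) + 1 for line in delta)      # +1: newline
    clear_size = sum(len(line) + 1 for line in new_lines)
    return delta_size / clear_size

old = ["line %d" % i for i in range(1000)]
new = old[:500] + ["changed line"] + old[501:]             # one-line edit
print(f"delta is {delta_ratio(old, new):.1%} of cleartext")  # roughly 1%
```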
... Constraints of this sort are called "configuration threads" in DSEE [25]. By adding a cut-off constraint for the creation date (a maximum date), a configuration can be regenerated as it would have been produced at a certain date. ...
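To make the idea concrete, here is a hypothetical sketch of such a configuration thread: an ordered list of version-selection rules plus an optional cut-off date. All names and fields (Version, tags, select) are invented for illustration; DSEE's actual thread language is considerably richer.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Version:
    number: int
    created: datetime
    tags: frozenset

def select(versions, rules, cutoff=None):
    """Pick one version of an element: the newest one that satisfies the
    first applicable rule and, if a cutoff is given, predates it."""
    candidates = [v for v in versions if cutoff is None or v.created <= cutoff]
    for rule in rules:                       # rules are tried in order
        matches = [v for v in candidates if rule(v)]
        if matches:
            return max(matches, key=lambda v: v.number)
    raise LookupError("no version satisfies the configuration thread")

versions = [
    Version(1, datetime(1984, 1, 10), frozenset({"released"})),
    Version(2, datetime(1984, 3, 2), frozenset({"tested"})),
    Version(3, datetime(1984, 5, 20), frozenset()),
]
rules = [lambda v: "released" in v.tags, lambda v: True]   # prefer released
print(select(versions, rules, cutoff=datetime(1984, 4, 1)).number)  # -> 1
```

With the cutoff moved past May 1984 and no "released" version pinned, the fallback rule would regenerate the configuration as of that later date instead.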
Article
Full-text available
Configuration management (CM) is the discipline of controlling changes in large and complex systems. Its goal is to prevent the chaos caused by the numerous corrections, extensions, and adaptations that are applied to any large system over its lifetime. The goal of CM is to ensure a systematic and traceable development process, so that a system is in a well-defined state with accurate specifications and verified quality attributes at all times.
... There are several merging techniques: text-based [Leblang 1984, Tichy 1985, Berliner 1990], syntactic-based [Asklund 1994, Buffenbarger 1995], semantic-based [Westfechtel 1991, Binkley 1995], operation-based [Shen 2004, Dig 2008], and merging algorithms such as two-way merge [Hunt 1976] and three-way merge [Lindholm 2001]. The current state of the art, however, consists mostly of textual diff tools; the widely used version control systems rely on text-based merging techniques in which semantics is not taken into account when merging. ...
... Text-based merging. Text-based merge approaches [Leblang 1984, Tichy 1985, Adams 1986, Berliner 1990, Lubkin 1991] consider software artifacts as text (or binary) files (i.e., ignoring semantic information). Commonly they use line-based merging, where lines of text are taken as indivisible units [Hunt 1976]. ...
Article
Modern software is built by teams of developers that work in a collaborative environment. The goal of this kind of development is that multiple developers can work in parallel. They can alter a set of shared artifacts and inspect and integrate the source code changes of other developers. For example, bug fixes, enhancements, new features or adaptations due to a changing environment might be integrated into the system release. At a technical level, a collaborative development process is supported by version control systems. Since these version control systems allow developers to work in their own branch, merging and integration have become an integral part of the development process. These systems use automatic and advanced merging techniques to help developers to merge their modifications in the development repositories. However, these techniques do not guarantee a functional system. While the use of branching in the development process offers numerous advantages, the activity of merging and integrating changes is hampered by the lack of comprehensive support to assist developers in these activities. For example, the integration of changes can have an unexpected impact on the design or behavior of the system, leading to the introduction of subtle bugs. Furthermore, developers are not supported when integrating changes across branches (cherry picking), when dealing with branches that have diverged, when finding the dependencies between changes, or when assessing the potential impact of changes. In this dissertation we present an approach that aims at alleviating these problems by providing developers and, more precisely, integrators with semi-automated support for assisted integration within a branch and across branches. We focus on helping integrators with their information needs when understanding and integrating changes by means of characterizations of changes and streams of changes (i.e., sequences of successive changes within a branch) together with their dependencies. These characterizations rely on the first-class representation of systems' histories and changes based on program entities and their relationships rather than on files and text. For this, we provide a family of meta-models (Ring, RingH, RingS and RingC) that offer us the representation of program entities, systems' histories, changes and their dependencies, along with analyses for version comparison, and change and dependency identification. Instances of these meta-models are then used by our proposed tool support to enable integrators to analyze the characterizations and changes. Torch, a visual tool, and JET, a set of tools, provide the information needed to assist integration within a branch and across branches by means of the characterization of changes and streams of changes respectively.
... In the field of software development, commercial tools like Apollo DSEE [32] or SUN NSE [62] provide basic support for workspaces. These tools, however, support authorization and cooperation only at a very coarse level and are constrained to change notification. ...
... In CAD and software engineering, for example, workspaces, versions, and configurations [29] are by now generally accepted notions offering mechanisms for supporting cooperative work, customization, and system evolution. Commercial software engineering tools such as Apollo DSEE [32] or SUN NSE [62] provide basic support for workspaces. These tools, however, offer only a limited change notification mechanism. ...
Article
Large information bases that are used by several different users and applications accommodate the demands of their users more effectively, if they can be split into possibly overlapping fragments, called contexts. The latter allow one to focus attention on specific concerns such as topics, tasks, or user-views. This paper proposes a conceptual, generic framework for contexts supporting context-specific naming and representation of conceptual entities, relativized transaction execution, operations for context construction and manipulation, authorization, and change propagation. A partial validation of the framework is given by showing how specific topologies of contexts, associated with specific authorization and change propagation policies, result in design templates for modeling well-known applications such as modules, views and workspaces. Further, examples are used to illustrate how modifications of the templates lead to generalizations of these applications that better support specific applications, such as those calling for tight cooperative work. The context framework is aimed at providing a common kernel for the modeling of information base partitions in general and well-known notions such as views, workspaces, topics, versions and requirements engineering viewpoints, in particular.
... Since derived objects can be reproduced from a corresponding source object, binary pool files are administered in a cache fashion, i.e. when space runs short, binaries are cleaned out on a "least accessed" basis. The concept of binary pool files closely resembles DSEE's derived object pools [1,14]. ...
... 6.3.6 DSEE. DSEE [Leblang and McLean 1985; Leblang and Chase 1984; Leblang et al. 1988] integrates functions that were previously provided independently by tools such as Make and SCCS/RCS. Furthermore, DSEE supports rule-based construction of source configurations and improves system building by maintaining a cache of derived objects, using more accurate difference predicates than Make and parallelizing builds over a network of workstations. ...
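A minimal sketch of the derived-object-pool idea follows. It is not DSEE's implementation: build outputs are simply memoized under a key computed from the exact source text, the tool, and its options, so repeating an identical build anywhere on the network is a cache hit. The DerivedObjectPool class and its fields are invented for illustration.

```python
import hashlib

class DerivedObjectPool:
    """Cache of build outputs keyed by (source version, tool, options)."""
    def __init__(self):
        self._pool = {}

    @staticmethod
    def _key(source_text, tool, options):
        h = hashlib.sha256()
        for part in (source_text, tool, " ".join(options)):
            h.update(part.encode())
            h.update(b"\0")                  # separator between fields
        return h.hexdigest()

    def build(self, source_text, tool, options, compile_fn):
        key = self._key(source_text, tool, options)
        if key not in self._pool:            # cache miss: really build
            self._pool[key] = compile_fn(source_text, options)
        return self._pool[key]               # cache hit otherwise

pool = DerivedObjectPool()
obj = pool.build("int main(){}", "cc", ["-O2"], lambda s, o: ("obj", len(s)))
```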
Article
Full-text available
After more than 20 years of research and practice in software configuration management (SCM), constructing consistent configurations of versioned software products still remains a challenge. This article focuses on the version models underlying both commercial systems and research prototypes. It provides an overview and classification of different versioning paradigms and defines and relates fundamental concepts such as revisions, variants, configurations, and changes. In particular, we focus on intensional versioning, that is, construction of versions based on configuration rules. Finally, we provide an overview of systems that have had significant impact on the development of the SCM discipline and classify them according to a detailed taxonomy.
... There are several merging techniques: text-based [41,3,25], syntactic-based [5,2,44], semantic-based [44,4], operation-based [35,9] and merging algorithms such as two-way merge [22] and three-way merge [26]. Several tools such as Envy [40] take into account the underlying meta-model as a step towards a semantic merge. ...
Article
Revision Control Systems (e.g., SVN, Git, Mercurial) include automatic and advanced merging algorithms that help developers to merge their modifications with development repositories. While these systems can help to textually detect conflicts, they do not help to identify the semantic consequences of a change. Unfortunately, there is little support to help release masters (integrators) take decisions about the integration of changes into the system release. Most of the time, the release master needs to read all the modified code, check the diffs to build an idea of a change, and dig for details in related unchanged code to understand the context and potential impact of some changes. As a result, such a task can be overwhelming. In this article we present a visualization tool to support integrators of object-oriented programs in comprehending changes. Our approach, named Torch, characterizes changes based on structural information, authors and symbolic information. It mixes text-based diff information with a visual representation and metrics characterizing the changes. The current implementation of our approach analyses Smalltalk programs, and thus we describe our experiments applying it to Pharo, a large open-source system. We also report on the evaluations of our approach by release masters and developers of several open-source projects.
... Some configuration management systems, such as System Modeller [14,23], GANDALF [7,19,26,27], Adele [1,2,9,10], DSEE [15], Jasmine [18], shape [16,17] and Odin [5], allow for variants of implementations of modules, and so in configuring a system, one can "pick and choose", having version V of module M "require" version V′ of module M′. The bookkeeping problems become even more daunting! ...
Article
Full-text available
Marmoset is a simple unix-based tool for creating and maintaining large systems built from reusable "modules". Marmoset provides unix commands for creating and editing modules, and for configuring modules so that they produce object code, libraries, or executables, as the case may be, from the component modules. A Marmoset module is implemented as a unix directory containing a number of files, some of which are supplied by the user, and others of which are generated automatically. For a given language, a user will supply a number of component files, including an "import" list of other modules which are required for this module to be configurable. The generated files might contain, among other things, external declarations and compiled code. The Marmoset link command automatically identifies all those modules imported directly or indirectly by the root module, and ensures (by reconfiguring if necessary) that all the output files are up to date. If a particular version of a module is sought, the most relevant versions of the imported modules will be used, as will the most relevant versions of the component files. Marmoset significantly reduces the cost of software configuration, and thereby increases software reliability.
... We identified a significant number of research-based technologies belonging to this group, most of which are now outdated. For coordination support in distributed software development environments we found CES [58], DistEdit [89], DSEE [92], GROVE [47], Mercury [82], and Pan [6]. More recent ones are: Sangam, an Eclipse plug-in that enables complete sharing of workspaces to simulate co-located pair programming, described in [71] and now available as OSS; Moomba [PS143], enabling distributed extreme programming; and Tukan [124] and XPairtise [PS152], supporting distributed pair programming. ...
Technical Report
Full-text available
Context: A wide variety of technologies have been developed to support Global Software Development (GSD). However, the information about the dozens of available solutions is quite diverse and scattered, making it difficult to gain an overview that can identify common trends and unveil research gaps. Objective: The objective of this research is to systematically identify and classify a comprehensive list of the technologies that have been developed and/or used for supporting GSD teams. Method: This study has been undertaken as a Systematic Mapping Study (SMS). Our searches identified 1958 papers, out of which 182 were found suitable for inclusion in this study. Results: We have identified 412 technologies reported to be developed and/or used for supporting GSD teams from 182 papers. The identified technologies have been analyzed and categorized using four main classification schemas, providing a framework that can help identify the categories that have attracted a significant amount of research and commercial effort, and the research areas where there are gaps to be filled. Conclusions: The findings show that whilst commercial and open source solutions predominantly focus on specialized tools and platforms, research effort has concentrated on providing integrated environments, frameworks, and plug-in based solutions. Considering the findings in the context of previously proposed research agendas and some of the key challenges for GSD research (i.e., collaborative tools and innovative knowledge management systems), many collaborative technologies have been reported, but knowledge management is being addressed mainly by supporting awareness, which is considered as important as the three elements of the 3C model (i.e., communication, collaboration, and coordination). We also conclude that future effort in this area should pay more attention to devising solutions that can fulfill the several kinds of requirements necessitated by the broader set of challenges faced by GSD practitioners, rather than tackling individual issues.
... (3) Frequently revised documents like programs and graphics are stored most economically as a set of differences relative to a base version [7,13,16]. Since the changes usually occupy only a fraction of a complete copy, substantial space savings result. For example, difference techniques can store the equivalent of 10 to 50 revisions in the same space that would be required for saving two revisions in cleartext (i.e., one original and one backup copy). ...
Conference Paper
Software merging is the process of combining multiple existing versions of a source file, to produce a new version. Typically, the goal is for the new version to implement some kind of union of the features implemented by the existing versions. A variety of merge tools are available, but software merging is still a tedious process, and mistakes are easy to make. This paper describes the fundamentals of merging, surveys the known methods of software merging, including a method based on programming-language syntax, and discusses a set of tools that perform syntactic merging.
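As a concrete illustration of the merging fundamentals such surveys cover, the following is a minimal line-based three-way merge in the diff3 spirit, written in Python for this summary; it is nobody's production algorithm. Regions of the base left intact by both derivatives anchor the merge; a chunk between anchors is taken from whichever side changed it, and a conflict is reported when both sides changed it differently. It is deliberately conservative: adjacent changes with no unchanged line between them are reported as conflicts.

```python
import difflib

def stable_regions(base, ours, theirs):
    """Spans of `base` matched in both derivatives, with matching offsets."""
    ma = difflib.SequenceMatcher(None, base, ours).get_matching_blocks()
    mb = difflib.SequenceMatcher(None, base, theirs).get_matching_blocks()
    out = []
    for a in ma:
        for b in mb:
            lo = max(a.a, b.a)
            hi = min(a.a + a.size, b.a + b.size)
            if lo < hi:                      # nonempty intersection in base
                out.append((lo, hi, a.b + lo - a.a, b.b + lo - b.a))
    return sorted(out)

def merge3(base, ours, theirs):
    merged, conflicts, ib, io, it = [], 0, 0, 0, 0
    regions = stable_regions(base, ours, theirs)
    regions.append((len(base), len(base), len(ours), len(theirs)))  # tail
    for lo, hi, o, t in regions:
        b_chunk, o_chunk, t_chunk = base[ib:lo], ours[io:o], theirs[it:t]
        if o_chunk == b_chunk:
            merged += t_chunk                # only theirs changed (or neither)
        elif t_chunk == b_chunk or o_chunk == t_chunk:
            merged += o_chunk                # only ours changed, or identical
        else:                                # both changed, differently
            conflicts += 1
            merged += ["<<<<<<<", *o_chunk, "=======", *t_chunk, ">>>>>>>"]
        merged += base[lo:hi]                # the stable region itself
        ib, io, it = hi, o + (hi - lo), t + (hi - lo)
    return merged, conflicts

base = list("abcde")
print(merge3(base, list("aBcde"), list("abcDe")))
# (['a', 'B', 'c', 'D', 'e'], 0)
```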
Conference Paper
Version management is a key aspect for large-scale software development. Several tools have been developed to aid the software developer in this task. Most of these tools propose version models which are strongly based on the concept of versions of single objects (like files). The PACT environment is an integrated software engineering environment being developed in the PACT project under the ESPRIT research programme. In the PACT environment, an approach for version management is proposed in keeping with the new generation of object bases. This paper describes this approach, called the Version Management Common Service (VMCS) model, which takes into account versions of collections of interrelated objects and the relationships between them and other objects in the object base. Versions of single objects are treated as a special case, that is as a collection of objects with only one element, the single object. The VMCS model is implemented on PCTE as a set of operations that may be called by tools. Interfaces to these operations are provided in the C, Ada, Lisp, and Prolog programming languages.
Conference Paper
Software manufacture is the process by which a software product is derived, through an often complex sequence of steps, from the primitive components of a system. This paper presents a model of software manufacture that addresses the amount of work that has to be done, after a given set of changes has been made, to consistently incorporate those changes in a given product. Based on a formal definition of a software configuration that characterizes a software product in terms of how it was manufactured, the model uses difference predicates to discriminate between changes that are significant and those that are not. A difference predicate is an assertion about the relationship between two sets of components. Difference predicates determine when one set of components can be substituted for another. By predicting when existing components can be substituted for the output of a manufacturing step, difference predicates determine which steps in the manufacturing process can be omitted when incorporating a given set of changes.
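A toy difference predicate, under invented names, might look as follows: two versions of a C-like source are declared equivalent if they differ only in // comments, so a comment-only edit lets all downstream manufacturing steps be omitted and existing outputs be substituted.

```python
import re

def same_ignoring_comments(old_src: str, new_src: str) -> bool:
    """True if the two sources differ at most in // comments, in which
    case previously built outputs can be substituted for a rebuild."""
    strip = lambda s: [re.sub(r"//.*", "", ln).rstrip()
                       for ln in s.splitlines()]
    return strip(old_src) == strip(new_src)

old = "int f(void);  // forward decl\n"
new = "int f(void);  // forward declaration of f\n"
assert same_ignoring_comments(old, new)   # comment edit: skip the step
```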
Conference Paper
Full-text available
Early software environments have supported a narrow range of activities (programming environments) or else been restricted to a single “hard-wired” software development process. The Arcadia research project is investigating the construction of software environments that are tightly integrated, yet flexible and extensible enough to support experimentation with alternative software processes and tools. This has led us to view an environment as being composed of two distinct, cooperating parts. One is the variant part, consisting of process programs and the tools and objects used and defined by those programs. The other is the fixed part, or infrastructure, supporting creation, execution, and change to the constituents of the variant part. The major components of the infrastructure are a process programming language and interpreter, object management system, and user interface management system. Process programming facilitates precise definition and automated support of software development and maintenance activities. The object management system provides typing, relationships, persistence, distribution and concurrency control capabilities. The user interface management system mediates communication between human users and executing processes, providing pleasant and uniform access to all facilities of the environment. Research in each of these areas and the interaction among them is described.
Conference Paper
A model for software configuration management that subsumes several existing systems is described. It is patterned after compiler models in which programs are transformed by multiple phases ending in an executable program. We model configuration management as transforming a high-level specification of a software product to be produced into a complete specification capable of being executed to construct the product. This transformational approach is used to model four existing systems and to compare and contrast their operation.
Conference Paper
Full-text available
With current compiler technology, changing a single line in a large software system may trigger massive recompilations. If the change occurs in a file with shared definitions, all compilation units depending upon that file must be recompiled to assure consistency. However, many of those recompilations may be redundant, because the change may actually affect only a small fraction of the overall system. This paper presents an efficient method for significantly reducing the set of modules that must be recompiled after a change. The method is based on reference sets and the isolation of differences. The cost of determining whether recompilation is necessary is negligible compared to the cost of compilation. The method is easily added to existing compilers, and can be extended to provide guidance to programmers if the change requires software updates.
Conference Paper
We present the change oriented model of versioning, which focuses strongly on functional changes in a software product and therefore can be seen as an alternative to the traditional, version oriented models. The change oriented model has advantages over these models, especially with regard to parallel development and systems with many optional features.
Article
A model for software configuration management that subsumes several existing systems is described. It is patterned after compiler models in which programs are transformed by multiple phases ending in an executable program. We model configuration management as transforming a high-level specification of a software product to be produced into a complete specification capable of being executed to construct the product. This transformational approach is used to model four existing systems and to compare and contrast their operation.
Conference Paper
Version control is one of the fundamental tasks of every software configuration management (SCM) tool. How an SCM tool organizes all the emerging versions within a software project influences the overall working method of the whole tool. Most existing version control tools follow the idea of SCCS and RCS. They organize the different versions by managing a revision tree for each single document. This organization — we call it the intermixed organization — has some major disadvantages that can be avoided by using an orthogonal organization, as shown by the author. The main difference between the orthogonal and the intermixed version organization is that the orthogonal organization emphasizes the entire project over its individual components. Consequently, the terms variant and revision span the whole project and are orthogonal to each other. This paper first summarizes the fundamentals of orthogonal version management and then presents the version control tool Voodoo. Voodoo is based on the idea of orthogonal version management and uses a graphical user interface.
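A hypothetical sketch of the orthogonal organization, with invented data: variants and revisions are project-wide axes, and one whole-project version is addressed by a coordinate on each axis rather than by per-file revision trees.

```python
project = {
    # (variant, revision) -> {file: content}; all data invented
    ("unix",  1): {"io.c": "read()",     "main.c": "main v1"},
    ("unix",  2): {"io.c": "read()",     "main.c": "main v2"},
    ("win32", 2): {"io.c": "ReadFile()", "main.c": "main v2"},
}

def checkout(variant: str, revision: int) -> dict:
    """Select one whole-project version by its two orthogonal coordinates."""
    return project[(variant, revision)]

assert checkout("win32", 2)["io.c"] == "ReadFile()"
```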
Conference Paper
The data models of a series of 11 configuration management systems--of varying type and complexity--are represented using containment data models. Containment data models are a specialized form of entity-relationship model in which entities may be containers or atoms, and the only permitted form of relationship is inclusion or referential containment. By using entities to represent the native abstractions of each system, and containment relationships to model inclusion and identifier references, systems can be modeled uniformly, permitting consistent cross-comparison of systems.
Conference Paper
Partitioning information bases such that their contents may be viewed from different situations and represented and processed in different contexts, constitutes a fundamental concern in various disciplines of computer science. Not surprisingly, numerous notations and techniques support certain aspects of the viewpoint abstraction. This paper motivates the use of a well defined terminology and framework regarding basic notions accompanying the viewpoint abstraction, such as contexts, perspectives, situations, and relativism. Furthermore, it establishes the cognitive and linguistic evidence on the usefulness of considering multiple views. A previous paper introduced a generic framework for contexts in order to provide a common kernel for the modelling of information base partitions. This paper demonstrates the embedding of the framework into an extensible, structurally object-oriented data/knowledge model and illustrates the applications of contexts and their accompanying mechanisms for authorization and change propagation to the modelling of (database) views, (software engineering) workspaces, and versions.
Article
Viewing entities from different situations and representing and processing them in different contexts constitutes a fundamental concern in various disciplines of computer science. Not surprisingly, the viewing abstraction is supported by many languages and techniques employed either for programming or “world modelling”. This paper presents an overview of various manifestations of viewing mechanisms in formal notations, including software development techniques, knowledge representation languages, and data models. The concepts of context and perspective are introduced in the form of a language-independent framework in order to capture and systematically discuss features that characterize viewing mechanisms, such as the relationship between the two, the relation between different perspectives on the same conceptual entity, or operations supporting effective construction of contexts. In addition, it is argued that the full power of viewing can be exploited by supporting both notions: contexts as well as perspectives. In order to achieve this support, any formal notation has to fulfill a number of general requirements, which are stated as a result of the investigation and the survey.
Article
In the past ten years there has been a great deal of interest in the concept of a Software Development Environment (SDE) as a complete, unifying framework of services supporting most (or all) phases of software development and maintenance. We identify three levels at which the issue of integration in a SDE arises as a key concept—at the mechanism level (interoperability of the hardware and basic software), at the end-user services level (combining the methods and paradigms of the various tools), and at the process level (adapting end-user services to the working practices of different users, projects and organizations). In this article we examine SDEs from an integration perspective, describing the previous work in this area and analyzing the integration issues that must be addressed in an SDE. For illustrative purposes, a particular focus of the paper is the configuration management aspects of a SDE.
Article
Many database applications require the storage and manipulation of different versions of data objects. To satisfy the diverse needs of these applications, current database systems support versioning at a very low level. This article demonstrates that application-independent versioning can be supported at a significantly higher level. In particular, we extend the EXTRA data model and EXCESS query language so that configurations can be specified conceptually and non-procedurally. We also show how version sets can be viewed multidimensionally, thereby allowing configurations to be expressed at a higher level of abstraction. The resulting model integrates and generalizes ideas in CAD systems, CASE systems, and temporal databases.
Article
Full-text available
With current compiler technology, changing a single line in a large software system may trigger massive recompilations. If the change occurs in a file with shared declarations, all compilation units depending upon that file must be recompiled to assure consistency. However, many of those recompilations may be redundant, because the change may affect only a small fraction of the overall system. Smart recompilation is a method for reducing the set of modules that must be recompiled after a change. The method determines whether recompilation is necessary by isolating the differences among program modules and analyzing the effect of changes. The method is applicable to languages with and without overloading. A prototype demonstrates that the method is efficient and can be added with modest effort to existing compilers.
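The core test behind smart recompilation is small enough to sketch. In the toy below (module and symbol names invented), each module carries the set of symbols it actually references, and a module is recompiled only if the set of changed declarations intersects its reference set.

```python
def needs_recompilation(changed_defs: set, modules: dict) -> list:
    """modules maps a module name to the set of symbols it references."""
    return [name for name, refs in modules.items() if refs & changed_defs]

modules = {
    "parser":  {"Token", "next_token"},
    "codegen": {"Instr", "emit"},
}
# Only `Token` changed in the shared header: codegen is spared.
print(needs_recompilation({"Token"}, modules))   # ['parser']
```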
Conference Paper
The integrated software development environment PantaPM is presented. This paper concentrates on the project management aspects of the environment, particularly on coordinating software development in a team. PantaPM is the extension of an editor environment, which supports syntax-driven editing of several files written in several languages within one session. PantaPM collects and manages information on all manipulations on text documents used in different development phases of a project. The main features of PantaPM are source code control, history of versions, multi-user access, configuration of products, information on delivered products, and plans for future modifications. All information on the project is collected in a project database. This database is presented to the user in text form. Project management actions can in principle be done by editing. However, most of them are usually executed automatically or can be selected from a menu. Parts of a database can be transferred to another database for partitioning projects and for working on separate workstations.
Article
Designers of knowledge-based systems seem to have the perception that realistic systems can be built by individual programmers. This approach no longer works for very large systems. Building large systems is a collaborative effort of people who each take responsibility for only part of the work. Software engineering has focused on the complications arising from system size and organizing people. This paper describes how software engineering might help in recognizing the problems of building large knowledge-based systems and in offering some methods, tools, and techniques to solve these problems. The paper ends by making the point that benefits may also flow in the other direction, from Artificial Intelligence to software engineering, resulting in a general improvement of these tools, methods and techniques.
Conference Paper
A conceptual architecture for software development environments (SDEs) is presented in terms of a new metaphor drawn from business enterprises. A metaphor is employed as the architecture is complex, requiring understanding from several perspectives. The metaphor provides a rich set of familiar concepts that strongly aid in understanding the environment architecture and software production. The metaphor is applicable to individual programming environments, software development environments supporting teams of developers, and to large-scale software production as a whole. The paper begins by considering three perspectives on SDEs, a function-based view, an objects-and-relations view, and a process-centered view. The process view, being the most encompassing, is held through the remainder of the paper. Three metaphors for organizing and explaining a process-centered environment are then examined, including the hierarchical contract model and the individual/family/city/state model. Next the corporation model is introduced and a detailed analogy is drawn between corporations and software development environments. Within the context of the corporation metaphor, three corporate organization schemes are reviewed and federal decentralization is argued to be most appropriate for an SDE. Relationships induced by such an organization are discussed and a mapping between the conceptual architecture and a possible implementation architecture is briefly discussed.
Article
The PCTE project is specifying, designing and implementing a host structure for Software Engineering Environments. The host structure is designed to run on powerful, bitmap screen terminals connected to a local network. It features an Object Management System based on a Binary Entity-Relationship model that manages a database that is transparently distributed over the workstations connected to the network. Migration to this new hosting structure is facilitated by compatibility at the executable file level with Unix System V. This paper examines how the OMS corresponds to the requirements for Software Engineering databases.
Chapter
There have been many advances in software development technology and in software engineering methods and tools since the introduction of computers in the late 1940’s and early 1950’s. Perhaps the most significant advance in software quality and individual programmer productivity has arisen from the development, and evolution, of the high level programming language. A significant effect on software development productivity, if not always quality, has also arisen from the dramatic increase in the performance/price ratio of computer hardware, particularly from the advent of the workstation.
Article
Full-text available
Version control has been an essential aspect of any software development project since the early 1980s. In recent years, however, we see version control as a common feature embedded in many collaboration-based software packages, such as word processors, spreadsheets and wikis. In this paper, we explain the common structure of version control systems, provide historical information on their development, and identify future improvements.
Article
One of the key challenges of green software is that various aspects have an impact on the overall energy consumption over the lifetime of a system operated by software. In particular, in the field of industrial applications, where embedded devices cooperate with many IT systems to make industrial processes more efficient, to reduce the waste of raw materials, and to save the environment, the concept of green software becomes unclear. In this paper, we address the green aspects of software in different phases - software construction, software execution, and software control - both inside an individual component and as part of a complete industrial application. Furthermore, we demonstrate that insight into system knowledge, not aspects related to software per se, is the key to creating truly green software. Consequently, when considering truly green software, the focus should be placed on system-level savings for embedded systems at the highest possible level, where domain knowledge can be taken into account, not on software development or execution.
Article
An important problem in program development and maintenance is version control, i.e. the task of keeping a software system consisting of many versions and configurations well organized. The Revision Control System (RCS) is a software tool that assists with that task. RCS manages revisions of text documents, in particular source programs, documentation, and test data. It automates the storing, retrieval, logging and identification of revisions, and it provides selection mechanisms for composing configurations. This paper introduces basic version control concepts and discusses the practice of version control using RCS. For conserving space, RCS stores deltas, i.e. differences between successive revisions. Several delta storage methods are discussed. Usage statistics show that RCS's delta method is space and time efficient. The paper concludes with a detailed survey of version control tools.
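RCS's distinctive choice is the reverse delta: the newest revision is kept in cleartext, so checking out the head, the common case, costs nothing, while older revisions are rebuilt by applying deltas backwards. The class below is an illustrative reconstruction of that scheme using difflib edit scripts, not RCS's actual delta format.

```python
import difflib

class ReverseDeltaStore:
    """Newest revision in cleartext; deltas[k] turns revision k+1 into k."""
    def __init__(self, initial_lines):
        self.head = list(initial_lines)      # revision 0, for now
        self.deltas = []

    def commit(self, new_lines):
        sm = difflib.SequenceMatcher(None, new_lines, self.head)
        # Store the edit script that turns the *new* head back into the old.
        self.deltas.append([(i1, i2, self.head[j1:j2])
                            for op, i1, i2, j1, j2 in sm.get_opcodes()
                            if op != "equal"])
        self.head = list(new_lines)

    def checkout(self, rev):
        lines = list(self.head)              # rev == len(self.deltas): free
        for script in reversed(self.deltas[rev:]):
            for i1, i2, repl in reversed(script):  # right-to-left keeps
                lines[i1:i2] = repl                # earlier indices valid
        return lines

s = ReverseDeltaStore(["v0"])
s.commit(["v0", "extra line"])
assert s.checkout(1) == ["v0", "extra line"] and s.checkout(0) == ["v0"]
```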
Article
Reuse is becoming a potent trend in software development and a major way to boost software productivity. To put reusable software units together, one has to be able to find them with minimal effort in the first place. The effort needed to access, understand, and customize the code must be less than the effort required to create new code. A simple library of components cannot provide sufficient methods to facilitate the selection and interconnection of the reusable modules. The context of this work is the ROPCO (reuse of persistent code and object code) environment and the primary candidates for reuse are the modules and templates. The objective of this article is to present the design of an interconnection language which can be incorporated with other ROPCO components to facilitate the selection, customization, and interconnection of reusable modules in the ROPCO software development environment. This language helps to define the interface specifications of the components and find the best module(s)/template(s) meeting the desired specification. The detailed algorithms of the operations that are necessary at the user level to support the reuse of available components are given and described in detail with a view toward verification.
Article
While upgrading its computer installation, a UK company decided to provide workstations for its software development engineers. Despite several problems which lessened the benefits which could have been gained, this move was found to be worthwhile. The company believes that greater productivity gives it another ‘half’ engineer for every engineer using a workstation, although this gain is diluted by support costs.
Article
Software engineering environments surround their users with the software tools necessary for systematic development and maintenance of software. This report characterizes software engineering environments by their types, by their relationship to the software life cycle, and by their capabilities, limitations, primary users, and levels of support. This report provides examples of existing software engineering environments that are available commercially or in research laboratories with the features and characteristics they provide.
Conference Paper
Many database applications require the storage and manipulation of different versions of data objects. However, current database systems do not support versioning well. Each application area treats versions in its own way, and these ways are usually incompatible with each other. We show how this incompatibility can be resolved by separating the physical, conceptual, and logical levels of versioning. We develop a version specification language at the conceptual level, and a multidimensional specification language at the logical level. By encoding the logical versioning semantics of an application into orthogonal dimensions, we generalize the ideas of historical and temporal databases to arbitrary object-oriented databases. The result is a unified, application-independent treatment of versioning.
Article
The Version Server is a system for managing the versions and configurations of design descriptions as they change over time. In this paper we focus on the design and implementation of such a system, which we have built at U.C. Berkeley. The data model supported and the browser application are introduced to illustrate the system's user and application interface. The design decisions and details of the internal architecture are described and the system's performance is evaluated. For structure-oriented queries, such as ‘traverse an entire chip's design hierarchy’, the Version Server is about five times as fast as comparable design management systems that store their design objects as files in a hierarchical file system.
Chapter
More and more organizations depend on software for part of their activities. Software suppliers have great difficulty satisfying the increasing demand. The US Department of Defense, being a large user of software, has taken the initiative to push for increased programmers' productivity by improving the software production process. The first phase of the DoD's Software Initiative resulted in the design of the Ada language. The second phase, in which the emphasis has shifted from programs to systems and management, resulted in a variety of activities of which the Software Engineering Institute is one. The objective of the SEI is technology transition in the area of software development support. The SEI has selected a number of areas and topics of interest and has planned a sequence of phases in which more advanced technology is introduced. The SEI has started a number of projects to explore technology that is ready for transition. Several of these projects are related to the Ada Language.
Chapter
This paper describes the problems which arise when a team is developing or maintaining a software product. These problems are the sharing of objects, the side effects of modifications, and the protection and structuring of teams and products. We discuss these problems and the solutions proposed by the Adele data base of programs.
Chapter
We present in this paper some extensions to the data base of the Adele program. We define the notion of event, and the simple language which allows one to express the association between an event and actions. The actions are executed automatically when the event is raised. It is shown, using examples of recompilation policies, how this simple mechanism can be used to express and enforce the semantics of relations, to control and manage propagation, to easily program software management policies and constraints, and finally how such a data base can be used as the kernel of a software engineering environment.
Chapter
Flexible teams are a new type of organizational entity that will become even more prevalent in the future. We define the concept of a flexible team, present selected attributes of such teams (composite membership and roles, diverse disciplines and skills, rapid communication alignment, and rapid process alignment), address the impact of these attributes on different categories of tools, and discuss implications for the design of computing environments to support flexible teams.
Chapter
The development of software engineering environments has had a long and close relationship with the development of advanced user interface technologies. This paper overviews this history, then discusses the particular requirements imposed by modern environments. Key requirements discussed include support for structured program text, complex application software architectures, concurrency, an open, changing toolset, distribution, and heterogeneity. The paper concludes by reviewing some representative current approaches and highlighting some key issues and opportunities. One issue discussed concerns both user interface system architectures and environment/tool architectures. It is the need for architectures and implementation techniques that support modularity, heterogeneity, inter-component communication, and component composition.
Article
Marvel is a knowledge-based programming environment that assists software development teams in performing and coordinating their activities. While designing Marvel, several granularity issues were discovered that have a strong impact on the degree of intelligence that can be exhibited, as well as on the friendliness and performance of the environment. The most significant granularity issues include the refinement of software entities in the software database and decomposition of the software tools that process the entities and report their results to the human users. This paper describes the many alternative granularities and explains the choices made for Marvel.
Conference Paper
Every fragment of code we write has dependencies and associated metadata. Code dependencies range from local references and standard library definitions to external third party libraries. Metadata spans from within source code files (hierarchical names and code comments) to external files and database servers (package-level dependency configurations, build and test results, code reviews etc.). This scattered storage and non-uniform access limits our programming environments in their functionality and extensibility. In this paper, we propose a modular system architecture, Haknam, better suited for code and related metadata sharing. Haknam precisely tracks code interdependencies, allows flexible naming and querying of code references, and collects code fragments and their related metadata as messages in a distributed log-centric pipeline. We argue that this setting brings considerable advantages. In particular, we focus on modular development of tools and services that can assist in programming-related tasks. Every new functionality can be simply added by creating and processing messages from the distributed pipeline.
Article
The problem of change propagation in multiuser software development environments distributed across a local-area network is addressed. The program is modeled as an attributed parse tree segmented among multiple user processes, and changes are modeled as subtree replacements requested asynchronously by individual users. Change propagation is then implemented using decentralized incremental evaluation of an attribute grammar that defines the static semantic properties of the programming language. Building up to our primary result, we first present algorithms that support parallel evaluation on a centralized tree in response to single edits using a single editing cursor and multiple edits with multiple editing cursors. Then we present our algorithm for parallel evaluation on a decentralized tree. We also present a protocol to guarantee reliability of the evaluation algorithm as components of the decentralized tree become unavailable due to failures and return to availability.
Conference Paper
Full-text available
Assembling a large system from its component elements is not a simple task. An adequate notation for specifying this task must reflect the system structure, accommodate many configurations of the system and many versions as it develops, and be a suitable input to the many tools that support software development. The language described here applies the ideas of λ-abstraction, hierarchical naming, and type-checking to this problem. Some preliminary experience with its use is also given.
Article
Full-text available
Lisp systems have been used for highly interactive programming for more than a decade. During that time, special properties of the Lisp language (such as program/data equivalence) have enabled a certain style of interactive programming to develop, characterized by powerful interactive support for the programmer, nonstandard program structures, and nonstandard program development methods. The paper summarizes the Lisp style of interactive programming for readers outside the Lisp community, describes those properties of Lisp systems that were essential for the development of this style, and discusses some current and not yet resolved issues.
Article
Full-text available
Interlisp is a programming environment based on the LISP programming language [Charniak et al. 1980; Friedman 1974]. In widespread use in the artificial intelligence community, Interlisp has an extensive set of user facilities, including syntax extension, uniform error handling, automatic error correction, an integrated structure-based editor, a sophisticated debugger, a compiler, and a filing system. Its most popular implementation is Interlisp-10, which runs under both the Tenex and Tops-20 operating systems for the DEC PDP-10 family. Interlisp-10 now has approximately 300 users at 20 different sites (mostly universities) in the US and abroad. It is an extremely well documented and maintained system. Interlisp has been used to develop and implement a wide variety of large application systems. Examples include the Mycin system for infectious disease diagnosis [Shortliffe 1976], the Boyer-Moore theorem prover [Boyer and Moore 1979], and the BBN speech understanding system [Wolf and Woods 1980]. This article describes the Interlisp environment, the facilities available in it, and some of the reasons why Interlisp developed as it has.
Article
This paper identifies three major characteristics of large-scale computer programming projects. The design features of the Ada Language System which facilitate large-scale efforts are then described in terms of these characteristics. The Ada Language System is a programming support environment for the Ada Language.
Article
A simple algorithm is described for isolating the differences between two files. One application is the comparing of two versions of a source program or other file in order to display all differences. The algorithm isolates differences in a way that corresponds closely to our intuitive notion of difference, is easy to implement, and is computationally efficient, with time linear in the file length. For most applications the algorithm isolates differences similar to those isolated by the longest common subsequence. Another application of this algorithm merges files containing independently generated changes into a single file. The algorithm can also be used to generate efficient encodings of a file in the form of the differences between itself and a given “datum” file, permitting reconstruction of the original file from the difference and datum files.
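The classical yardstick mentioned in that abstract, the longest common subsequence, is easy to state as code. The sketch below isolates differences with the O(nm) dynamic program; it illustrates the kind of output the paper's algorithm approximates, not the paper's linear-time method itself.

```python
def lcs_diff(a, b):
    """Yield ('=', line), ('-', line) or ('+', line) records."""
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # L[i][j]: LCS of a[i:], b[j:]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            L[i][j] = (L[i + 1][j + 1] + 1 if a[i] == b[j]
                       else max(L[i + 1][j], L[i][j + 1]))
    i = j = 0
    while i < n and j < m:
        if a[i] == b[j]:
            yield ("=", a[i]); i += 1; j += 1
        elif L[i + 1][j] >= L[i][j + 1]:
            yield ("-", a[i]); i += 1
        else:
            yield ("+", b[j]); j += 1
    for line in a[i:]:
        yield ("-", line)
    for line in b[j:]:
        yield ("+", line)

print(list(lcs_diff(["a", "b", "c"], ["a", "c", "d"])))
# [('=', 'a'), ('-', 'b'), ('=', 'c'), ('+', 'd')]
```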
Article
Good programmers break their projects into a number of pieces, each to be processed or compiled by a different chain of programs. After a set of changes is made, the series of actions that must be taken can be quite complex, and costly errors are frequently made. This paper describes a program that can keep track of the relationships between parts of a program, and issue the commands needed to make the parts consistent after changes are made. The underlying idea is quite simple and can be adapted to many other environments.
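The underlying idea of that program, Make, can be stated in a few lines. The toy below (file names, timestamps, and the deps table are invented) rebuilds a target when it is missing or older than any prerequisite, bringing prerequisites up to date first.

```python
def build(target, deps, mtime, actions):
    """deps: target -> prerequisites; mtime: name -> timestamp or None."""
    for d in deps.get(target, []):               # depth-first: deps go first
        build(d, deps, mtime, actions)
    prereq_times = [mtime[d] for d in deps.get(target, [])]
    if mtime.get(target) is None or any(t > mtime[target] for t in prereq_times):
        actions.append(f"rebuild {target}")
        mtime[target] = max(prereq_times, default=0) + 1   # now "fresh"

deps = {"prog": ["main.o", "util.o"], "main.o": ["main.c"], "util.o": ["util.c"]}
mtime = {"main.c": 5, "util.c": 2, "main.o": 3, "util.o": 4, "prog": 4}
actions = []
build("prog", deps, mtime, actions)
print(actions)   # ['rebuild main.o', 'rebuild prog']
```

Only main.o is stale (its source is newer), so only it and the final link are redone; util.o is left alone, which is exactly the bookkeeping the abstract describes.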
Article
The DOMAIN system is an architecture for networks of personal workstations and servers which creates an integrated distributed computing environment. Its distinctive features include: a network-wide file system of objects addressed by unique identifiers (UID's); the abstraction of a single level store for transparently accessing all objects, regardless of their location in the network; and a network-wide hierarchical name space. The implementations of these facilities exhibit several interesting approaches to layering the system software. In addition to network transparent data access, interprocess communication is provided as a basis for constructing distributed applications; as a result, we have some experience to guide the choice between these two alternative implementation techniques. Networks utilizing this architecture have been operational for almost three years; some experience with it and lessons derived from that experience are presented, as are some performance data.