Software Practice and Experience

Published by Wiley
Online ISSN: 1097-024X
Publications
Article
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore, if safety-critical systems require greater emphasis on activities such as formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models do. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project that has employed agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and added process elements only as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion.
 
Conference Paper
Researchers and practitioners in the area of parallel and distributed computing have been lacking a portable, flexible, and robust distributed instrumentation system. We present the Baseline Reduced Instrumentation System Kernel (BRISK), which we have developed as part of a real-time system instrumentation and performance visualization project. The design is based on a simple distributed instrumentation system model for flexibility and extensibility. The basic implementation poses minimal system requirements and achieves high performance. We evaluate BRISK in two distinct configurations: one measures isolated, simple performance metrics; the other demonstrates BRISK's operation on distributed applications, its built-in clock synchronization, and its dynamic on-line sorting algorithms.
 
Conference Paper
This study presents a practical solution for data collection and restoration to migrate a process written in high-level stack-based languages such as C and Fortran over a network of heterogeneous computers. We study a logical data model which recognizes complex data structures in the process address space. Then, novel methods are developed to incorporate the model into a process and to collect and restore data efficiently. We have implemented a prototype and performed experiments on different programs. Experimental and analytical results show that (1) a user-level process can be migrated across different computing platforms, (2) semantic information of data structures in the process's memory space can be correctly collected and restored, (3) the costs of data collection and restoration depend on the complexity of the logical model representing the process's data structures and the amount of data involved, and (4) the implementation of the data collection and restoration mechanisms in the process is not a decisive factor in execution overhead; with appropriate program analysis, we can achieve practically low overhead.
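
The flavor of such a logical data model can be sketched in a few lines: the runtime walks the process's pointer-linked data, visiting each object once, so the data can be collected in a platform-independent order and later restored. This is a minimal sketch only; the Node type and the printed "collection" below are invented for illustration, and the paper's model covers general complex data structures.

    #include <cstdio>
    #include <set>

    // Walk a pointer-linked structure, visiting each object exactly once,
    // so its contents can be collected in a well-defined order.
    struct Node { int value; Node* next; };

    void collect(const Node* n, std::set<const Node*>& seen) {
        if (!n || !seen.insert(n).second) return;    // stop at null or visited nodes
        std::printf("collect value %d\n", n->value); // a real system would serialize
        collect(n->next, seen);
    }

    int main() {
        Node c{3, nullptr}, b{2, &c}, a{1, &b};
        std::set<const Node*> seen;
        collect(&a, seen);                           // prints values 1, 2, 3
    }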
 
Conference Paper
Pervasive computing applications involve both software concerns, like any software system, and integration concerns, for the constituent networked devices of the pervasive computing environment. This situation is problematic for testing because it requires acquiring, testing and interfacing a variety of software and hardware entities. This process can rapidly become costly and time-consuming when the target environment involves many entities. In this demonstration, we present DiaSim, a simulator for pervasive computing applications. To cope with widely heterogeneous entities, DiaSim is parameterized with respect to a description of a target pervasive computing environment. This description is used to generate both a programming framework to develop the simulation logic and an emulation layer to execute applications. Furthermore, a simulation renderer is coupled to DiaSim to allow a simulated pervasive system to be visually monitored and debugged.
 
Article
Many business web-based applications do not offer applications programming interfaces (APIs) to enable other applications to access their data and functions in a programmatic manner. This makes their composition difficult (for instance to synchronize data between two applications). To address this challenge, this paper presents Abmash, an approach to facilitate the integration of such legacy web applications by automatically imitating human interactions with them. By automatically interacting with the graphical user interface (GUI) of web applications, the system supports all forms of integrations including bi-directional interactions and is able to interact with AJAX-based applications. Furthermore, the integration programs are easy to write since they deal with end-user, visual user-interface elements. The integration code is simple enough to be called a "mashup".
 
Article
Cloud computing promises a radical shift in the provisioning of computing resources within the enterprise. This paper describes the challenges that decision makers face when assessing the feasibility of the adoption of cloud computing in their organisations, and describes our Cloud Adoption Toolkit, which has been developed to support this process. The toolkit provides a framework to support decision makers in identifying their concerns, and matching these concerns to appropriate tools/techniques that can be used to address them. Cost Modeling is the most mature tool in the toolkit, and this paper shows its effectiveness by demonstrating how practitioners can use it to examine the costs of deploying their IT systems on the cloud. The Cost Modeling tool is evaluated using a case study of an organization that is considering the migration of some of its IT systems to the cloud. The case study shows that running systems on the cloud using a traditional "always on" approach can be less cost effective, and the elastic nature of the cloud has to be used to reduce costs. Therefore, decision makers have to be able to model the variations in resource usage and their systems deployment options to obtain accurate cost estimates.
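
A toy calculation makes the always-on versus elastic point concrete. The instance price and workload profile below are invented for illustration; they are not taken from the Cloud Adoption Toolkit.

    #include <cstdio>

    // Hypothetical comparison of an always-on deployment against an elastic
    // one that provisions capacity only during busy hours.
    int main() {
        const double price = 0.10;                // assumed $/instance-hour
        const int hours_per_month = 730;
        const int peak_instances = 10;

        double always_on = peak_instances * hours_per_month * price;
        double elastic = peak_instances * hours_per_month * (8.0 / 24.0) * price;
        // elastic: the 10 instances are needed only 8 hours per day

        std::printf("always-on $%.0f/month, elastic $%.0f/month\n",
                    always_on, elastic);          // $730 vs $243
    }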
 
Article
Compressed bitmap indexes are used to speed up simple aggregate queries in databases. Indeed, set operations like intersections, unions and complements can be represented as logical operations (AND,OR,NOT) that are ideally suited for bitmaps. However, it is less obvious how to apply bitmaps to more advanced queries. For example, we might seek products in a store that meet some, but maybe not all, criteria. Such threshold queries generalize intersections and unions; they are often used in information-retrieval and data-mining applications. We introduce new algorithms that are sometimes two orders of magnitude faster than a naive approach. Our work shows that bitmap indexes are more broadly applicable than is commonly believed.
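
As a concrete reference point, here is a naive scalar threshold query over uncompressed bitmaps, the kind of baseline the new algorithms are reported to beat by up to two orders of magnitude. With k = 1 it computes a union and with k equal to the number of bitmaps an intersection, which is the sense in which threshold queries generalize both. A sketch only; it assumes equal-length bitmaps of 64-bit words.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Report the positions set in at least k of the input bitmaps.
    std::vector<size_t> threshold(const std::vector<std::vector<uint64_t>>& bitmaps,
                                  unsigned k) {
        std::vector<size_t> result;
        if (bitmaps.empty()) return result;
        size_t words = bitmaps[0].size();          // all bitmaps assumed equal length
        for (size_t w = 0; w < words; ++w)
            for (unsigned bit = 0; bit < 64; ++bit) {
                unsigned count = 0;
                for (const auto& b : bitmaps)
                    count += (b[w] >> bit) & 1u;   // how many bitmaps set this bit
                if (count >= k) result.push_back(w * 64 + bit);
            }
        return result;
    }

    int main() {
        std::vector<std::vector<uint64_t>> b = {{0b1011}, {0b0011}, {0b0110}};
        for (size_t pos : threshold(b, 2))
            std::printf("%zu ", pos);              // prints 0 1
    }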
 
Article
In many important applications -- such as search engines and relational database systems -- data is stored in the form of arrays of integers. Encoding and, most importantly, decoding of these arrays consumes considerable CPU time. Therefore, substantial effort has been made to reduce costs associated with compression and decompression. In particular, researchers have exploited the superscalar nature of modern processors and SIMD instructions. Nevertheless, we introduce a novel vectorized scheme called SIMD-BP128 that improves over previously proposed vectorized approaches. It is nearly twice as fast as the previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the same time, SIMD-BP128 saves up to 2 bits per integer. For even better compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while being two times faster during decoding.
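
The following scalar sketch shows the binary packing idea underlying BP128-style codecs: every integer in a block is stored in b bits, b being the block's maximum bit width. SIMD-BP128 itself packs blocks of 128 integers at a time with vector instructions and stores per-block widths; none of that is reproduced here, and the helper names are invented.

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Pack 32-bit values into b bits each, tightly concatenated in 64-bit words.
    std::vector<uint64_t> pack(const std::vector<uint32_t>& in, unsigned b) {
        std::vector<uint64_t> out((in.size() * b + 63) / 64, 0);
        for (size_t i = 0; i < in.size(); ++i) {
            size_t pos = i * b;
            out[pos / 64] |= uint64_t(in[i]) << (pos % 64);
            if (pos % 64 + b > 64)                // value straddles a word boundary
                out[pos / 64 + 1] |= uint64_t(in[i]) >> (64 - pos % 64);
        }
        return out;
    }

    uint32_t unpack_one(const std::vector<uint64_t>& p, size_t i, unsigned b) {
        size_t pos = i * b;
        uint64_t v = p[pos / 64] >> (pos % 64);
        if (pos % 64 + b > 64)
            v |= p[pos / 64 + 1] << (64 - pos % 64);
        return uint32_t(v & ((uint64_t(1) << b) - 1));
    }

    int main() {
        std::vector<uint32_t> in{3, 7, 1, 30, 12};
        auto packed = pack(in, 5);                // 5 bits cover the maximum, 30
        for (size_t i = 0; i < in.size(); ++i)
            assert(unpack_one(packed, i, 5) == in[i]);
    }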
 
[Figure captions: sampled bitmap characteristics and Roaring size; results on real data.]
Article
Bitmap indexes are commonly used in databases and search engines. By exploiting bit-level parallelism, they can significantly accelerate queries. However, they can use much memory. Thus we might prefer compressed bitmap indexes. Following Oracle's lead, bitmaps are often compressed using run-length encoding (RLE). In this work, we introduce a new form of compressed bitmaps called Roaring, which uses packed arrays for compression instead of RLE. We compare it to two high-performance RLE-based bitmap encoding techniques: WAH (Word Aligned Hybrid compression scheme) and Concise (Compressed 'n' Composable Integer Set). On synthetic and real data, we find that Roaring bitmaps (1) often compress significantly better (e.g., 2 times) and (2) are faster than the compressed alternatives (up to 900 times faster for intersections).
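
A sketch of Roaring's central layout decision, following the published design: 32-bit values are partitioned by their upper 16 bits, and each chunk stores its lower 16 bits either as a sorted packed array (sparse) or as a 65536-bit bitmap (dense), switching at 4096 elements so that no container exceeds 8 KiB. The class below is illustrative, not the library's actual code.

    #include <algorithm>
    #include <bitset>
    #include <cstdint>
    #include <memory>
    #include <vector>

    struct Container {
        std::vector<uint16_t> array;                 // sparse: sorted low 16 bits
        std::unique_ptr<std::bitset<65536>> bitmap;  // dense: one bit per value

        void add(uint16_t low) {
            if (bitmap) { bitmap->set(low); return; }
            auto it = std::lower_bound(array.begin(), array.end(), low);
            if (it != array.end() && *it == low) return;  // already present
            array.insert(it, low);
            if (array.size() > 4096) {                    // sparse -> dense switch
                bitmap = std::make_unique<std::bitset<65536>>();
                for (uint16_t v : array) bitmap->set(v);
                array.clear();
            }
        }
    };

    int main() {
        Container c;
        for (uint32_t v = 0; v < 5000; ++v) c.add(uint16_t(v)); // forces the switch
        return c.bitmap && c.bitmap->count() == 5000 ? 0 : 1;
    }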
 
Article
Grids provide uniform access to aggregations of heterogeneous resources and services such as computers, networks and storage owned by multiple organizations. However, such a dynamic environment poses many challenges for application composition and deployment. In this paper, we present the design of the Gridbus Grid resource broker that allows users to create applications and specify different objectives through different interfaces without having to deal with the complexity of Grid infrastructure. We present the unique requirements that motivated our design and discuss how these provide flexibility in extending the functionality of the broker to support different low-level middlewares and user interfaces. We evaluate the broker with different job profiles and Grid middleware and conclude with the lessons learnt from our development experience.
 
Article
Cross-browser compatibility testing is concerned with identifying perceptible differences in the way a Web page is rendered across different browsers or configurations thereof. Existing automated cross-browser compatibility testing methods are generally based on Document Object Model (DOM) analysis, or in some cases, a combination of DOM analysis with screenshot capture and image processing. DOM analysis however may miss incompatibilities that arise not during DOM construction, but rather during rendering. Conversely, DOM analysis produces false alarms because different DOMs may lead to identical or sufficiently similar renderings. This paper presents a novel method for cross-browser testing based purely on image processing. The method relies on image segmentation to extract regions from a Web page and computer vision techniques to extract a set of characteristic features from each region. Regions extracted from a screenshot taken on a baseline browser are compared against regions extracted from the browser under test based on characteristic features. A machine learning classifier is used to determine if differences between two matched regions should be classified as an incompatibility. An evaluation involving 140 pages shows that the proposed method achieves an F-score exceeding 0.9, outperforming a state-of-the-art cross-browser testing tool based on DOM analysis.
 
Article
Segregation of roles into alternative accounts is a model which not only provides the ability to collaborate but also enables accurate accounting of the resources consumed by collaborative projects, protects the resources and objects of such a project, and does not introduce new security vulnerabilities. The implementation presented here does not require users to remember additional passwords and provides a very simple, consistent interface.
 
[Table: time required to answer various queries (ms/query) and storage requirements (bits/int), using the TREC Million-Query log.]
Article
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. By making use of superscalar execution together with vectorization, our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from AOL and the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
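
For intuition, here is a scalar galloping intersection of two sorted posting lists; galloping (exponential search) pays off when one list is much shorter than the other. The paper's SIMD Galloping additionally compares four pairs of 32-bit integers per instruction, which this sketch does not attempt.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Intersect a small sorted list against a large one by galloping forward.
    std::vector<uint32_t> intersect(const std::vector<uint32_t>& small,
                                    const std::vector<uint32_t>& large) {
        std::vector<uint32_t> out;
        size_t lo = 0;
        for (uint32_t key : small) {
            size_t step = 1, hi = lo;
            while (hi < large.size() && large[hi] < key) { // gallop: 1, 2, 4, ...
                lo = hi;
                hi += step;
                step *= 2;
            }
            auto it = std::lower_bound(large.begin() + lo,
                                       large.begin() + std::min(hi + 1, large.size()),
                                       key);               // finish with binary search
            lo = size_t(it - large.begin());
            if (lo < large.size() && large[lo] == key) out.push_back(key);
        }
        return out;
    }

    int main() {
        std::vector<uint32_t> a{2, 42, 99}, b{1, 2, 3, 5, 8, 13, 21, 34, 42, 55};
        for (uint32_t v : intersect(a, b)) std::printf("%u ", v); // prints 2 42
    }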
 
Chapter
Isolating computation and communication concerns into separate pure computation and pure coordination modules enhances modularity, understandability, and reusability of parallel and/or distributed software. MANIFOLD is a pure coordination language that encourages this separation. We use real, concrete, running MANIFOLD programs to demonstrate the concept of pure coordination modules and the advantage of their reuse in applications of differing natures. Performance results for the examples presented in this paper show that the overhead of using MANIFOLD to achieve this enhanced modularity and reusability is in practice small, compared to the more conventional paradigms for the design and programming of parallel and distributed software. Keywords: coordination, reusability, parallelism, distributed computation, performance measurements.
 
Chapter
Millipede is a generic run-time system for executing parallel programming languages in distributed environments. In this project, a set of basic constructs which are sufficient for most parallel programming languages is identified. These constructs are implemented on top of a cluster of workstations such that in order to run a specific parallel programming language in this distributed environment, all that is needed is a compiler, or a preprocessor, that maps the source language parallel code to the Millipede constructs. Some performance measurements of parallel programs on Millipede are also presented.
 
Article
In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement, called Symmetry Partition Sort, shares its basic principle with Proportion Extend Sort. The main difference between them is that the new algorithm always places the already partially sorted inputs (used as the basis for the proportional extension) at both ends when entering the partition routine, which helps speed up partitioning. The library function based on the new algorithm is more attractive than Psort, a library function introduced in 2004. Its implementation mechanism is simple, its source code is clearer, and it is faster, with an O(n log n) performance guarantee. Both its robustness and adaptivity are better. As a library function, it is competitive.
 
Article
Profiling under UNIX is traditionally done by inserting counters into programs, either before compilation, during compilation, or during assembly. A fourth type of profiling involves monitoring the execution of a program and gathering relevant statistics during the run. This method and an implementation of it are examined, and its advantages and disadvantages are discussed.
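
A minimal sketch of the execution-monitoring style on a modern POSIX system, assuming setitimer and SIGPROF; the UNIX-era implementation examined in the paper differs in detail. Real profilers record the interrupted program counter, while this sketch only counts ticks to stay self-contained.

    #include <csignal>
    #include <cstdio>
    #include <sys/time.h>

    // A profiling timer fires periodically while the program consumes CPU
    // time, and the handler records a sample.
    static volatile std::sig_atomic_t samples = 0;

    static void on_tick(int) { ++samples; }

    int main() {
        std::signal(SIGPROF, on_tick);
        itimerval timer{};
        timer.it_interval.tv_usec = 10000;        // sample every 10 ms of CPU time
        timer.it_value.tv_usec = 10000;
        setitimer(ITIMER_PROF, &timer, nullptr);

        volatile double x = 0;                    // the workload being profiled
        for (long i = 0; i < 100000000L; ++i) x += double(i);

        std::printf("collected %d samples\n", int(samples));
    }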
 
Article
The make command has been a central part of the Unix programming environment for over fifteen years. An excellent example of a Unix system software tool, it has a simple model and delegates most of its work to other commands. By dealing with general relationships between files and commands, make easily adapts to diverse applications. This generality, however, has become a handicap when compared with specialized integrated programming environments. Integrated environments are collections of tightly coupled (seamless) programs that can take advantage of programming language details not available to the loosely coupled (tool-based) make model. There are limitations to both approaches, but it would seem that the make model, at least for software construction, is reaching the breaking point. make can be revitalized by abandoning restrictive implementation details and by extending the basic model to meet modern software construction demands. This paper explores these demands and changes and their effects on the Unix system tool-based programming style.
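
Make's basic model is small enough to caricature in a few lines: a target is out of date when any prerequisite is newer, and the actual work is delegated to a command. The sketch below fakes file timestamps to stay self-contained; it illustrates the model the paper discusses, not make's implementation.

    #include <cstdio>
    #include <ctime>
    #include <map>
    #include <string>
    #include <vector>

    struct Rule {
        std::vector<std::string> prereqs;
        std::string command;                      // real make hands this to a shell
    };

    std::map<std::string, std::time_t> mtime;     // pretend filesystem timestamps

    bool out_of_date(const std::string& target, const Rule& rule) {
        for (const auto& p : rule.prereqs)
            if (mtime[p] > mtime[target])         // prerequisite newer than target
                return true;
        return false;
    }

    int main() {
        mtime["main.c"] = 200;
        mtime["prog"] = 100;
        Rule r{{"main.c"}, "cc -o prog main.c"};
        if (out_of_date("prog", r))
            std::puts(r.command.c_str());         // make would execute the command
    }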
 
Article
(This blast was produced at the Universities Computer Services Management Conference organized by International Computers Ltd. at the University of Surrey, England, September 1972. It is reproduced at our enthusiastic invitation. © 1973 Wiley Periodicals, ...
 
Article
The X Window System® has become widely accepted by many manufacturers. X provides network transparent access to display servers, allowing local and remote client programs to access a user's display. X is used on high performance workstation displays as well as terminals, and client programs run on everything from micro to super computers. This paper describes the trade-offs and basic design decisions made during the design of X Version 11. We presume familiarity with the paper describing X Version 10.
 
Article
Users of small computers must often program in assembler language. Macros are described which assist in the construction of block structured programs in assembler language. The macros are used in practical day-to-day programming in a cardiac electrophysiology laboratory in which the coarse grained control provided by the local FORTRAN compiler is not sufficient for, and even hinders, the writing of clear, easy to understand programs. The macros provide nestable control structures in place of the less structured transfers of conventional assembler language. The arithmetic and input/output control provided by the architecture of the machine is left fully available. The control structures implemented include conditional (IF, CASE), iteration (WHILE, REPEAT/UNTIL, FOR) and subroutine (PROC, CALL, etc.) constructs. No control of variable scope is provided. The macro implementation is discussed along with the code generated. There is a discussion of architectural features which allow the macros to be independent of specific register usage and addressing mode. Experience with use of the macros in a high-speed, real-time data acquisition and display environment is presented. We conclude that these macros are easy to use and assist in program readability and documentation.
 
Article
Some comments on 'a cohesion measure for object-oriented classes' are presented. It is argued that the improved CBMC measures class cohesion from the viewpoint of the usage of instance variables and that it allows meaningful interpretations of classes. Examples show that it characterizes interaction patterns better than the original measure does and that it can also serve as a guideline for quality evaluation, enabling the restructuring of poorly designed classes.
 
Article
Although H. S. Chae's class cohesion measure considers not only the number of interactions but also the patterns of the interactions among the constituent members of a class (which overcomes the limitations of previous class cohesion measures), it only partly considers the patterns of interactions and might produce measuring results that are inconsistent with intuition in some cases. This paper discusses these shortcomings and proposes constructive amendments to Chae's cohesion measure. Copyright © 2001 John Wiley & Sons, Ltd.
 
Article
We are accustomed to thinking that the requirements of different users of FORTRAN (or any other language, for that matter) are so varied that they can only be met by a series of compilers: a standard one, an optimizing one, an in-core batching one and so on. This paper describes a compiler which, although its main aim was to process batches of small student jobs, was designed to compete with all the manufacturer's compilers on their chosen ground.
 
Article
The transport of a PASCAL compiler from a CDC 6000 series computer to an ICL 1900 series computer is reported, with some comments on the method used.
 
Article
With the aid of the EM (encoding machine) compiler tool kit, a preliminary version of a scalar C compiler was implemented on the Cyber 205 in a relatively short period of time. This C compiler emphasizes functionality more than efficiency. Several benchmark programs were used to measure the performance and to compare it with an equivalent C compiler for the VAX/UNIX system. In order to make it a production-quality C compiler, further enhancements will be necessary. This paper presents some motivating factors, implementation details, and proposes further work on developing the Cyber 205 C compiler.
 
Article
The design and implementation of a general purpose graphics software package (GINO) is described. GINO provides facilities for 3D graphics (co-ordinate transformation, clipping, intensity modulation) but is organized so that 2D facilities form a clean subset. It is device independent, permitting use of refresh CRT displays, storage tube displays and plotters. A characteristic feature is the use of small satellite computers attached to a large multiaccess computer (ATLAS 2). GINO takes the form of a subroutine library accessible from FORTRAN and other languages, and the case for this level of graphics software is argued. The reasons for not using a mandatory graphical data structure are also discussed. GINO is not biased towards any particular style of interaction, but two techniques are described: one based on the light pen and the other on teletype command languages. Efficiency of implementation is achieved without loss of flexibility by use of a systems programming language (SAL).
 
Article
This paper describes some programs used to test ALGOL 60 compilers. The results of the tests on six compilers are given together with some comments on their likely effectiveness in locating bugs in other compilers.
 
Article
We illustrate the use of Algol 68 as a systems implementation language by reference to practical uses of the language. We argue that the code patch facility as implemented in Algol 68-R is a good way of interfacing systems programs to their machine-dependent environment, and does not violate the spirit of high-level language programming.
 
Article
PICTURES-68 is a set of procedures written in ALGOL-68R which enables picture variables to be defined and manipulated. The routines are intended to be used as a library prelude to an ALGOL-68 program requiring pictorial output.
 
Article
Algol 68 enables facilities for such things as arbitrary precision arithmetic to be provided in a particularly elegant and convenient way. The library segment mlaritha which provides such facilities is described. This segment enables numerical quantities to be stored and manipulated with almost the same degree of ease, or difficulty, as REAL quantities but with arbitrary and dynamically variable precision. The method of ‘NUMBER’ storage used in mlaritha is discussed in detail and the fundamental algorithms used for the arithmetic operations of addition, multiplication and division, etc., are described. Special attention is given to the ‘costs’ inherent in the use of the system; particularly in the time ‘costs’ of each of the operations and the dependence on precision.
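
To make the cost discussion concrete, here is schoolbook multiple-precision addition over fixed-size limbs, the general style of NUMBER arithmetic the abstract describes; addition costs O(precision) in the number of limbs. mlaritha itself is Algol 68 and its NUMBER layout differs, so this C++ sketch only illustrates the idea.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // A number is a sequence of 32-bit limbs, least significant first;
    // addition is carry propagation across the limbs.
    using Number = std::vector<uint32_t>;         // base 2^32

    Number add(const Number& a, const Number& b) {
        Number sum;
        uint64_t carry = 0;
        for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
            uint64_t s = carry;
            if (i < a.size()) s += a[i];
            if (i < b.size()) s += b[i];
            sum.push_back(uint32_t(s));           // low 32 bits of the limb sum
            carry = s >> 32;                      // carry into the next limb
        }
        return sum;
    }

    int main() {
        Number a{0xFFFFFFFFu}, b{1};              // (2^32 - 1) + 1 = 2^32
        Number c = add(a, b);
        std::printf("%u %u\n", c[1], c[0]);       // prints 1 0, the limbs of 2^32
    }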
 
Article
M. H. Halstead has argued that all computer programs are composed entirely of operators and operands. By counting these entities the software science theory then enables program properties such as vocabulary, length, volume, program level and language level to be calculated. For well written, or so‐called ‘pure programs’, one would expect, according to the theory, good agreement between certain observed and predicted values. Also, one might expect an intuitive ordering of language levels to be confirmed by the theory, with for example, Algol 68 having a higher language level than Fortran. In this paper two different counting strategies have been applied to one implementation of the Numerical Algorithms Group (NAG) Algol 68 library. The results do not entirely match expectation.
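
For reference, the software-science quantities named above follow directly from the four basic counts. The counts in the sketch below are invented example values, not NAG library data; n1 and n2 are the distinct operators and operands, N1 and N2 their total occurrences.

    #include <cmath>
    #include <cstdio>

    // Halstead's quantities: vocabulary, observed and predicted length,
    // volume, program level, and language level.
    int main() {
        double n1 = 20, n2 = 35;                  // distinct operators, operands
        double N1 = 110, N2 = 95;                 // total occurrences of each

        double n = n1 + n2;                       // vocabulary
        double N = N1 + N2;                       // observed length
        double Nhat = n1 * std::log2(n1) + n2 * std::log2(n2); // predicted length
        double V = N * std::log2(n);              // volume
        double L = (2.0 / n1) * (n2 / N2);        // program level (estimator)
        double lambda = L * L * V;                // language level

        std::printf("n=%g N=%g N^=%.1f V=%.1f L=%.3f lambda=%.2f\n",
                    n, N, Nhat, V, L, lambda);
    }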
 
Article
This paper describes the facilities that have been provided for plotting graphs in Algol 68-R. The advantages of the method of approach are discussed and examples of use are given. In the absence of a universally accepted definition for a graphics language, the provision of graphical output facilities in high-level languages has been accomplished by SPECTRE. The basic approach in the use of SPECTRE is a two-stage scheme, the second stage being the processing of the macros produced by the first. The macros are generated by the graphical calls in the user-program. Facilities currently exist within Stage 1 of SPECTRE for graphical calls in Algol, Fortran, Cobol and Plan programs. The design of SPECTRE ensures that only minimum addition is needed to cater for Algol 68-R programs - the extra segment for the graphical routines is all that is required; Stage 2 remains completely unaltered. The two-stage scheme enables the user to obtain graphical output on any device without altering the original program; thus the lineprinter can be used for program development and testing, reserving the use of the incremental plotter for the final graph. Examples of use are given.
 
Article
This paper proposes a software architecture based on mobile agents for distributed process control applications. A set of agents is employed to handle, in a single manufacturing cell, automatic assignment of control tasks to controllers, monitoring of cell functionalities and dynamic cell reconfiguration. The agents operate in a two-layered structure: at the highest level, the planning agents analyse the inputs of the system designer and automatically create the field agents, which operate at the lowest level and embed the control tasks to be executed. Field agents, which are mobile, are able to autonomously reach the controllers of the cell in order to perform the control activity there. Exploiting this mobility enables a field agent to change its running device when the variation of the design parameters or a system fault requires a new task distribution. A load-balancing algorithm is introduced, with the objective of assigning each field agent to a controller of the manufacturing cell in order to fairly distribute the computation load. The algorithm uses a branch-and-bound technique to explore all possible solutions and applies two heuristics to throw away non-feasible solutions and select the best branch to analyse. The algorithm is designed to run on-line in order to allow fast task redistribution when a fault condition occurs in the process control environment. Copyright © 2008 John Wiley & Sons, Ltd.
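
A minimal branch-and-bound sketch of the underlying assignment problem: place each field agent on one of the cell's controllers so that the maximum controller load is minimized. The paper's algorithm adds heuristics to discard infeasible solutions and to pick the most promising branch first; the sketch below prunes only branches that cannot beat the incumbent.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Try every controller for agent i, keeping the best maximum load found.
    static void assign(const std::vector<int>& agent, std::vector<int>& ctrl,
                       size_t i, int& best) {
        if (i == agent.size()) {
            best = std::min(best, *std::max_element(ctrl.begin(), ctrl.end()));
            return;
        }
        for (int& load : ctrl) {
            load += agent[i];
            if (load < best)                  // bound: prune hopeless branches
                assign(agent, ctrl, i + 1, best);
            load -= agent[i];
        }
    }

    int main() {
        std::vector<int> ctrl(2, 0);          // two controllers, initially idle
        int best = 1 << 30;
        assign({7, 5, 4, 4, 3}, ctrl, 0, best);
        std::printf("%d\n", best);            // prints 12, e.g. {7,5} vs {4,4,3}
    }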
 
Article
Frequently invoked large functions are common in non-numeric applications. These large functions present challenges to modern compilers not only because they require more time and resources at compilation time, but also because they may prevent optimizations such as function inlining. Often large portions of the code in a hot function f_host are executed much less frequently than f_host itself. Partial inlining is a natural solution to the problems caused by including cold code segments that are seldom executed into hot functions that are frequently invoked. When applying partial inlining, a compiler outlines cold statements from a hot function f_host. After outlining, f_host becomes smaller and thus can be easily inlined. This paper presents Ablego, a framework for function outlining and partial inlining that includes several innovations: (1) an abstract-syntax-tree-based analysis and transformation to form cold regions for outlining; (2) a set of flexible heuristics to control the aggressiveness of function outlining; (3) several possible function outlining strategies; (4) explicit variable spilling, a new technique that overcomes negative side-effects of function outlining. With the proper strategy, partial inlining improves performance by up to 5.75%. A performance study also suggests that partial inlining's effect on enabling more aggressive inlining is limited. The performance improvement from partial inlining actually comes from better code placement and better code generation. Copyright © 2006 John Wiley & Sons, Ltd.
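
The transformation itself is easy to picture. The sketch below shows a hot function after its cold path has been outlined by hand, using the GCC/Clang 'noinline' and 'cold' attributes to keep the outlined code out of line; Ablego derives an equivalent transformation automatically from the abstract syntax tree, so this is only an illustration of the effect, not the framework's output.

    #include <climits>

    // The cold recovery path, moved out of the hot function by outlining.
    __attribute__((noinline, cold))
    static void handle_overflow(int value) {
        (void)value;                              // recovery code that was inline before
    }

    // After outlining, the hot body is small and easily inlined by callers.
    static inline int saturating_inc(int value) {
        if (__builtin_expect(value == INT_MAX, 0)) { // cold, rarely taken branch
            handle_overflow(value);
            return value;                         // saturate instead of overflowing
        }
        return value + 1;                         // hot path
    }

    int main() { return saturating_inc(41) == 42 ? 0 : 1; }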
 
Article
This paper presents a general model for dealing with abnormal events during program execution and describes how this model is implemented in the μSystem. (The μSystem is a library of C definitions that provide light-weight concurrency on uniprocessor and multiprocessor computers running the UNIX operating system.) Two different techniques can be used to deal with an abnormal event: an exception, which results in an exceptional change in control flow from the point of the abnormal event; and an intervention, which is a routine call from the point of the abnormal event that performs some corrective action. Users can define named exceptions and interventions in conjunction with ones defined by the μSystem. Exception handlers and intervention routines for dealing with abnormal events can be defined/installed at any point in a program. An exception or intervention can then be raised or called, passing data about the abnormal event and returning results for interventions. Interventions can also be activated in other tasks, like a UNIX signal. Such asynchronous interventions may interrupt a task's execution and invoke the specified intervention routine. Asynchronous interventions are found to be useful to get another task's attention when it is not listening through the synchronous communication mechanism.
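
The distinction between the two techniques can be sketched in a few lines, here transposed into modern C++ (the μSystem itself is a C library, and the names below are invented): an exception abandons the point of the abnormal event, while an intervention is a routine called at that point which performs a corrective action and returns a result.

    #include <cstdio>
    #include <functional>
    #include <stdexcept>

    std::function<double(double)> on_underflow =  // installable intervention
        [](double) { return 0.0; };               // default action: flush to zero

    double scale(double x, double factor) {
        if (factor == 0.0)                        // exception: control leaves here
            throw std::domain_error("zero scale factor");
        double r = x * factor;
        if (r != 0.0 && r > -1e-300 && r < 1e-300)
            return on_underflow(r);               // intervention: call, then continue
        return r;
    }

    int main() {
        std::printf("%g\n", scale(1e-160, 1e-160)); // underflow handled by intervention
    }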
 
Article
The abstract data type concept appears to be a useful software structuring tool. A project, called ‘Système d'Objets Conservés’, which was developed at the University of Rennes, (France), gave some experience in implementing this concept. The possibility of including abstract data type into a pre-existing compiler is demonstrated, and desirable properties of the host language are exhibited. Provision of external procedures and data makes some type checking extensions necessary: these features increase software reliability.
 
Article
The presentation of an abstract data type by a series of equational axioms has become an accepted specification mechanism. Verifying the correctness of such specifications has been recognized as a problem troubling their use. A means is presented for experimenting with a directly executable version of the axioms without having to choose representations for the data structures or describe algorithms for the operations.
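
One common way to obtain such a directly executable version, sketched below for a stack: treat the constructor terms (new, push) as the canonical data and read each remaining axiom as a defining equation, so that no representation for the data structure or algorithm for the operations has to be chosen separately. The paper's mechanism is more general; this conveys only the flavor.

    #include <cassert>
    #include <vector>

    // Canonical constructor terms: new() is empty, push(s, x) appends x.
    using Stack = std::vector<int>;

    Stack new_stack() { return {}; }
    Stack push(Stack s, int x) { s.push_back(x); return s; }
    int top(const Stack& s) { return s.back(); }        // axiom: top(push(s,x)) = x
    Stack pop(Stack s) { s.pop_back(); return s; }      // axiom: pop(push(s,x)) = s
    bool is_empty(const Stack& s) { return s.empty(); } // axiom: is_empty(new()) = true

    int main() {
        Stack s = push(new_stack(), 1);
        assert(top(push(s, 2)) == 2);                   // the axioms hold by construction
        assert(pop(push(s, 2)) == s);
        assert(is_empty(new_stack()));
    }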
 
Article
The paper covers the problem of bridging the gap between abstract and textual concrete syntaxes of software languages in the model-driven engineering (MDE) context. This problem has been well studied in the context of programming languages, but due to the obvious difference in the definitions of abstract syntax, MDE requires a new set of engineering principles. We first explore different approaches to defining abstract and concrete syntaxes in the MDE context. Next, we investigate the current state of languages and techniques used for bridging between textual concrete and abstract syntaxes in the context of MDE. Finally, we report on lessons learned in experimenting with the current technologies. In order to provide a comprehensive coverage of the problem under study, we have selected a case of Web rule languages. Web rule languages leverage various types of syntax specification languages; and they are complex in nature and large in terms of the language elements. Thus, they provide us with a realistic analysis framework based on which we can draw general conclusions. Based on the series of experiments that we conducted with the analyzed languages, we propose a method for approaching such problems and report on the empirical results obtained from the data collected during our experiments. Copyright © 2009 John Wiley & Sons, Ltd.
 
Article
The UMIST Abstract Data Store is a software tool which supports abstract data types together with flexible mechanisms for specifying, for each abstract data type, alternative user interface and memory representations appropriate to different physical media. These mechanisms facilitate the definition of types, the specification of their alternative representations and the creation and manipulation of their values in a persistent fashion. The media supported may include such things as disks and visual displays and collections of these connected together via a network. This paper focuses on the mechanisms which have evolved in this environment for specifying safe user interfaces to complex data structures.
 
Article
This paper discusses the use of abstract machine modelling as a technique for producing portable software, i.e. software which can be moved readily from one computer to another. An overview of the principles involved is presented and a critical examination made of three existing abstract machines which were used respectively to implement a macro processor, a text editor and a BASIC compiler.
 
Article
The computer-aided design of dedicated pipelined processors for numerical applications and signal processing requires design tools that support system refinement, partitioning strategies and the transformation of behavioural descriptions into structure. This in turn poses problems of design data management which are considered here. We show that an object-oriented data management system is a step towards solving the problems. The method proposed here is based on a systematic specification of data structures and data access methods, using abstract data type specifications. As a result, the data management is completely transparent to the application programs. The data-management system is written in Enhanced C (EC), a set-orientated extension of the language C.
 
Article
This paper describes SIMPL-D, a stack-based language with data abstraction features, and some of the details of its implementation. The language allows users to define new types that are parameterized by size and to perform system-defined operations (e.g. assignment) on objects with user-defined types. The use of object-describing templates in the implementation of storage allocation, assignment and returning values from functions is discussed. Finally, the conflicts between automatic initialization and separate compilation are explained.
 
Article
This paper describes the implementation model of a data definition facility for abstract data types, implemented as an extension to PL/1. The facility is based on a modified version of the cluster mechanism for the implementation of types. The proposed version tries to address the issues of efficiency and portability in connection with the goal of systematic programming.
 
Article
C++ uses inheritance as a substitute for subtype polymorphism. We give examples where this makes the type system too inflexible. We then describe a conservative language extension that allows a programmer to define an abstract type hierarchy independent of any implementation hierarchies, to retroactively abstract over an implementation, and to decouple subtyping from inheritance. This extension gives the user more of the flexibility of dynamic typing while retaining the efficiency and security of static typing. With default implementations and views flexible mechanisms are provided for implementing an abstract type by different concrete class types. We first show how the language extension can be implemented in a preprocessor to a C++ compiler, and then detail and analyze the efficiency of an implementation we directly incorporated in the GNU C++ compiler.
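
Plain C++ can approximate the intent with a manually written view, which is roughly what the proposed extension automates: an abstract type hierarchy carrying no implementation, and a wrapper that retroactively makes an unrelated concrete class conform to it. All names below are invented for illustration.

    #include <cstdio>

    struct Printable {                            // abstract type only, no implementation
        virtual void print() const = 0;
        virtual ~Printable() = default;
    };

    struct LegacyPoint {                          // pre-existing class, no inheritance
        int x, y;
    };

    struct PointView : Printable {                // view: retroactive abstraction
        const LegacyPoint& p;
        explicit PointView(const LegacyPoint& q) : p(q) {}
        void print() const override { std::printf("(%d, %d)\n", p.x, p.y); }
    };

    void show(const Printable& v) { v.print(); }

    int main() {
        LegacyPoint pt{1, 2};
        show(PointView(pt));                      // LegacyPoint used at type Printable
    }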
 
Article
This paper examines the sequence abstraction known in Pascal as the ‘file’, and shows how sequences of characters (‘strings’ in the SNOBOL sense) may be cleanly fitted into Pascal-like languages. The specific problems of providing the suggested facilities as an experimental extension to Pascal are examined.
 
Article
The concept of pervasiveness provides the future of computing with an attractive perspective. However, software support for physical and logical mobility raises a set of new requirements and challenges for software production, creating demand for new types of applications, the so-called pervasive applications, which express follow-me semantics. The core of these challenges is a dynamic operating environment, which originates with the users' movement across different terminals and locations, and determines different execution contexts. For this vision to become a reality, developers must build applications that constantly adapt to a highly dynamic computing environment. Research in pervasive computing has already addressed important issues, but it has not approached the problem of how to program general-purpose pervasive systems. Pervasive applications are distributed, mobile, adaptive, and treat context as a first-order concept. To make the developers' task easier, we have introduced the software architecture called ISAM, which provides an integrated environment aimed at building pervasive applications, composed of a development environment and an execution middleware. As part of our study within the ISAM project, we have been investigating how context-awareness can be expressed at the programming language level, based on four main abstractions: context, adapters, adaptation commands, and adaptive behavior management policies. This paper introduces these abstractions and presents some development and management tools, anchored on an example application which is under development. Copyright © 2006 John Wiley & Sons, Ltd.
 
Top-cited authors
Rajkumar Buyya
  • University of Melbourne
R. Ranjan
  • Newcastle University
Anton Beloglazov
  • University of Melbourne
Edward M. Reingold
  • Illinois Institute of Technology
Stephen C. North
  • Infovisible