Journal of Systems and Software

Published by Elsevier
Online ISSN: 0164-1212
Article
Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. It also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-)automated update of traceability relations between requirements, analysis, and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool, and empirical results on the effectiveness of tool-supported traceability maintenance are provided.
 
Article
Machine learning algorithms have provided core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique of "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing expected cross-validation results alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program.
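As an illustration of the metamorphic testing idea described above, the following minimal Python sketch (not the authors' framework; the scikit-learn k-NN classifier and the specific metamorphic relation are assumptions chosen for illustration) checks that permuting the order of the training samples does not change a classifier's predictions, so no hand-crafted test oracle is needed.

# Minimal metamorphic-testing sketch: the relation "permuting the training
# set must not change predictions" serves as a partial oracle for a
# supervised classifier (illustrative only; not the paper's framework).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(20, 4))

def predictions(X, y):
    return KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(X_test)

baseline = predictions(X_train, y_train)

perm = rng.permutation(len(X_train))          # metamorphic transformation
follow_up = predictions(X_train[perm], y_train[perm])

# A violation of the relation signals a fault in the implementation under test.
assert np.array_equal(baseline, follow_up), "metamorphic relation violated"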
 
Article
Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches for detecting composite operations as well. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that the specifications available for executing composite operations can be reused for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment.
 
Conference Paper
Optimal path set selection is a crucial issue in structural testing. The zero-one optimal path set selection method is a generalized method that can be applied to all coverage criteria. The only drawback of this method is that for a large program the computation may take ten or more hours, because the computation grows exponentially with the number of candidate paths and linearly with the number of components to be covered. To alleviate this drawback, this paper enhances the method by defining five reduction rules and by reusing previously selected path set(s) to reduce both the number of candidate paths and the number of components to be covered. Since both the number of candidate paths and the number of components to be covered are reduced, the computation time can be greatly reduced.
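To make the cost argument concrete, here is a small, hedged Python sketch of the underlying zero-one selection problem (not the paper's implementation, and its reduction rules are omitted): it searches subsets of candidate paths for a minimum set that covers all components, which is exactly the part whose cost grows exponentially with the number of candidate paths. The paths and components below are hypothetical.

# Brute-force zero-one path set selection (illustrative sketch only):
# find a minimum subset of candidate paths whose covered components
# include every component that must be covered.
from itertools import combinations

# Hypothetical candidate paths and the components (e.g., branches) each covers.
paths = {
    "p1": {"c1", "c2"},
    "p2": {"c2", "c3"},
    "p3": {"c3", "c4"},
    "p4": {"c1", "c4"},
}
required = {"c1", "c2", "c3", "c4"}

def minimum_path_set(paths, required):
    names = list(paths)
    for size in range(1, len(names) + 1):          # smallest subsets first
        for subset in combinations(names, size):   # exponential in |paths|
            covered = set().union(*(paths[p] for p in subset))
            if required <= covered:
                return subset
    return None

print(minimum_path_set(paths, required))  # e.g. ('p1', 'p3')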
 
Conference Paper
In recent years, various attempts have been made to combine software and hardware fault tolerance in critical computer systems. In these systems, software and hardware faults may occur in many different sequences. The problem of how to incorporate these sequences in a fault-tolerant system design has been largely neglected in previous work. In this paper, a uniform software-based fault tolerance method is proposed to distinguish and tolerate various sequences of software and hardware faults. This method can be based on either the recovery block or the n-version programming scheme, the two fundamental software fault tolerance schemes. The concept of a fault identification and system reconfiguration (FISR) tree is proposed to represent the procedure of fault identification and system reconfiguration in a systematic way.
 
Conference Paper
Glotos is a visual representation of Lotos, and the two are semantically equivalent. In this paper, a Glotos layout tool is described, which takes either the Lotos or the edited Glotos specification as input and generates an aesthetic Glotos layout as output. In both cases, a Glotos syntax tree is created. A bottom-up procedure is then used to calculate the boundary for each Glotos constructor. Finally, a top-down procedure determines the x, y coordinates for each visual constructor. Unlike other layout tools, the Glotos layout tool makes full use of the syntactic and semantic information of Glotos during layout, which greatly improves the efficiency of the layout tool.
 
Conference Paper
Component-based decomposition can result in implementations with use case code tangled and scattered across components. Modularity techniques such as aspects, mixins, and virtual classes have recently been proposed to address this problem. One can use such techniques to group together code related to a single use case. This paper analyzes qualitatively and quantitatively the impact of this kind of use case modularization. We apply one specific technique, Aspect-Oriented Programming, to modularize the use case implementations of the Health Watcher system. We extract traditional and contemporary metrics, including cohesion, coupling, and separation of concerns, and analyze modularity in terms of quality attributes such as changeability, support for parallel development, and pluggability. Our findings indicate that the results of modularity analysis depend on factors beyond the chosen system, metrics, and the applied technique.
 
Article
This article presents an experiment assessing the decision support value of a simulation environment for the information systems (IS) design process. We have implemented a prototype simulation environment that uses data flow diagrams (DFDs) augmented with the performance rates of system components to specify the structure and dynamics of IS designs. The DFD-based representation is automatically mapped to a stochastic queuing network simulation. Knowledge-based help supports formulation of simulation run parameters and interpretation of output. We measure the prototype's impact on system dynamics assessment via the accuracy of IS professionals' responses to questions about the dynamics of four IS cases. The prototype's simulation capability has a significant positive effect on accuracy scores for questions involving waiting times, system times, and queue lengths of jobs or customers. Subjects choosing to conduct more and longer simulation runs provide significantly more accurate assessments than less-active users of simulation. The findings suggest that IS professionals can make use of simulation technology within a compact time frame if the proper supports are provided; on-line help or other methods should be used to encourage users to conduct sufficiently long simulation runs; and embedding the prototype's capabilities in a computer-aided software engineering workbench would result in better designed information systems.
 
Article
With the growing impact of information technology, the proper understanding of IT-architecture designs is becoming ever more important. Much debate has been going on about how to describe them. In 2000, IEEE Std 1471 proposed a model of an architecture description and its context. In this paper we propose a lightweight method for modeling architectural information after (part of) the conceptual model of IEEE Std 1471 and for defining IEEE Std 1471 viewpoints. The method gives support by outlining, in textual form and in diagram form, the relation of the concerns of the stakeholders to the architectural information. The definition of viewpoints can then be done with insight from these relations. The method has four steps: (1) creating stakeholder profiles, (2) summarizing internal design documentation, (3) relating the summary to the concerns of the stakeholders, and (4) defining viewpoints. We have conducted a round of discussion and testing in practice in various settings. In this paper we present the feedback we received and propose improvements.
 
Article
There has been a proliferation of software engineering standards in the last two decades. While the utility of standards in general is acknowledged, thus far little attempt has been made to evaluate the success of any of these standards. One suggested criterion of success is the extent of usage of a standard. In this paper we present a general method for estimating the extent to which a standard is used. The method uses a capture–recapture (CR) model that was originally proposed for estimating birth and death rates in human populations. We apply the method to estimate the number of software process assessments that were conducted world-wide between September 1996 and June 1998 using the emerging ISO/IEC 15504 International Standard. Our results indicate that 1264 assessments were performed, with a 90% confidence interval of 916 to 1895. The method used here can be applied to estimate the extent of usage of other software engineering standards, and also of other software engineering technologies. Such estimates can benefit standards (or technology) developers, funding agencies, and researchers by focusing their efforts on the most widely used standards (or technologies).
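For readers unfamiliar with capture–recapture, the following tiny Python sketch shows the simplest two-source (Lincoln–Petersen) estimator; the paper's CR model is more elaborate, and the counts used here are made-up numbers for illustration only.

# Two-source capture-recapture (Lincoln-Petersen) sketch with made-up counts.
# n1 = assessments found via source 1 (e.g., a survey),
# n2 = assessments found via source 2 (e.g., assessor records),
# m  = assessments appearing in both sources.
def lincoln_petersen(n1, n2, m):
    if m == 0:
        raise ValueError("no overlap between sources; estimator undefined")
    return n1 * n2 / m   # estimated total population size

n1, n2, m = 220, 180, 35                      # hypothetical counts
print(round(lincoln_petersen(n1, n2, m)))     # ~1131 assessments estimated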
 
Article
The emerging international standard ISO/IEC 15504 (Software Process Assessment) includes an exemplar assessment model (known as Part 5). Thus far, the majority of users of ISO/IEC 15504 employ the exemplar model as the basis for their assessments. This paper describes an empirical evaluation of the exemplar model. Questionnaire data was collected from the lead assessors of 57 assessments world-wide. Our findings are encouraging for the developers and users of ISO/IEC 15504 in that they indicate that the current model can be used successfully in assessments. However, they also point out some weaknesses in the rating scheme that need to be rectified in future revisions of ISO/IEC 15504.
 
Article
In this paper, we provide a defense mechanism for Kim–Lee–Yoo's ID-based password authentication scheme, which is vulnerable to impersonation attacks and resource exhaustion attacks. Mutual authentication and communication privacy are regarded as essential requirements in today's client/server-based architecture; therefore, a lightweight but secure mutual authentication method is introduced in the proposed scheme. Once the mutual authentication succeeds, the session key is established without any further computation. The proposed defense mechanism not only accomplishes mutual authentication and session key establishment, but also inherits the security advantages of Kim–Lee–Yoo's scheme, e.g. it is secure against password guessing attacks and message replay attacks.
 
(Figure: Overview of the types of articles)
Article
The software engineering (SE) community has recently recognized that the field lacks well-established research paradigms and clear guidance on how to write good research reports. With no comprehensive guide to the different article types in the field, article writing and reviewing depend heavily on the expertise and the understanding of the individual SE actors. In this work, we classify and describe the article types published in SE with an emphasis on what is required for publication in journals and conference proceedings. Theoretically, we consider article types as genres, because we assume that each type of article has a specific function and a particular communicative purpose within the community, which the members of the community can recognize. We draw on the written sources available, i.e. the instructions to authors/reviewers of major SE journals, the calls for papers of major SE conferences, and previous research published on the topic. Despite the fragmentation and limitations of the sources studied, we are able to propose a classification of different SE article types. Such a classification helps in guiding the reader through the SE literature, and in making the researcher reflect on directions for improvement.
 
Article
Software development project failures have become commonplace. With almost daily frequency these failures are reported in newspapers, journal articles, or popular books. These failures are defined in terms of cost and schedule over-runs, project cancellations, and lost opportunities for the organizations that embark on the difficult journey of software development. Rarely do these accounts include perspectives from the software developers who worked on these projects. This case study provides an in-depth look at software development project failure through the eyes of the software developers. The researcher used structured interviews, project documentation reviews, and survey instruments to gather a rich description of a software development project failure. The results of the study identify a large gap between how a team of software developers defined project success and the popular definition of project success. This study also revealed that the team of software developers maintained a high level of job satisfaction despite their failure to meet the schedule and cost goals of the organization.
 
Article
This paper is the second in an annual series whose goal is to answer the questions: Who are the most published authors in the field of systems and software engineering (SSE)? From which institutions do the most published systems and software engineering papers emerge? The first paper (Glass, 1994) in the series was published a year ago. The current study reports on the top scholars and institutions for the years 1993 and 1994. The methodology of the study (including the journals surveyed) and its limitations are discussed later in the paper. This study focuses on systems and software engineering and not, for example, on computer science or information systems. The findings are as follows.
 
(Tables from the following article: top scholars in the field of systems and software engineering; top scholar keywords describing research focus; top institutions in the field of systems and software engineering; top institutions and top scholars)
Article
This paper presents the findings of a five-year study of the top scholars and institutions in the Systems and Software Engineering field, as measured by the quantity of papers published in the journals of the field in 2000–2004. The top scholar is Hai Zhuge of the Chinese Academy of Sciences, and the top institution is Korea Advanced Institute of Science and Technology. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period.
 
Article
Data broadcast is an efficient dissemination method for delivering information to mobile clients through the wireless channel. It allows a huge number of mobile clients to simultaneously access data in wireless environments. In real-life applications, more popular data may be accessed by clients more frequently than less popular data. Under such scenarios, Acharya et al.'s Broadcast Disks algorithm (BD) broadcasts more popular data more times in a broadcast period than less popular data (i.e., a nonuniform broadcast) and provides good performance in reducing client waiting time. However, mobile devices must constantly tune in to the wireless broadcast channel to examine data, consuming a lot of energy. Using index technologies on the broadcast file can greatly reduce the energy consumption of mobile devices without significantly increasing client waiting time. In this paper, we propose an efficient nonuniform index called the skewed index, SI, over BD. The proposed algorithm builds an index tree according to the skewed access patterns of clients, and allocates index nodes for popular data more times than those for less popular data in a broadcast cycle. Our experimental study shows that the proposed algorithm outperforms the flexible index and the flexible distributed index.
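The broadcast-program construction behind Acharya et al.'s Broadcast Disks idea can be sketched in a few lines of Python (an illustrative reconstruction of the classic algorithm, not the SI index proposed in this paper; the item lists and frequencies are invented): items are grouped into "disks" by popularity, each disk gets a relative frequency, and chunks of the disks are interleaved so that hot items appear more often per broadcast cycle.

# Sketch of nonuniform broadcast-program generation in the Broadcast Disks
# style (illustrative; the paper's contribution, the SI index, is not shown).
from math import lcm

def broadcast_program(disks, rel_freq):
    """disks: list of item lists, hottest first; rel_freq: broadcasts per cycle."""
    max_chunks = lcm(*rel_freq)
    # Split each disk into max_chunks / rel_freq chunks of (roughly) equal size.
    chunked = []
    for items, freq in zip(disks, rel_freq):
        n = max_chunks // freq
        size = -(-len(items) // n)          # ceiling division
        chunked.append([items[i * size:(i + 1) * size] for i in range(n)])
    program = []
    for i in range(max_chunks):             # interleave one chunk per disk
        for chunks in chunked:
            program.extend(chunks[i % len(chunks)])
    return program

hot, warm, cold = ["a"], ["b", "c"], ["d", "e", "f", "g"]
print(broadcast_program([hot, warm, cold], rel_freq=[4, 2, 1]))
# "a" is broadcast 4 times per cycle, "b"/"c" twice, the rest once.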
 
Article
The design of Ada software is best initiated from a set of specifications that complements Ada software engineering. Under DoD-STD-2167A the specification writer may inadvertently impede Ada design methods through the over-specification of program structure before the problem domain is fully understood in the context of an Ada implementation. This paper identifies some of the potential pitfalls in expressing requirements under the DoD-STD-2167A Software Requirements Specification format and gives recommendations on how to write compliant requirements that facilitate Ada design.
 
Article
Various contiguous and noncontiguous processor allocation policies have been proposed for mesh-connected multicomputers. Contiguous allocation suffers from high external processor fragmentation because it requires that the processors allocated to a parallel job be contiguous and have the same topology as the multicomputer. The goal of lifting the contiguity condition in noncontiguous allocation is reducing processor fragmentation. However, this can increase the communication overhead because the distances traversed by messages can be longer, and messages from different jobs can interfere with each other by competing for communication resources. The extra communication overhead depends on how the allocation request is partitioned and mapped to free processors. In this paper, we investigate a new class of noncontiguous allocation schemes for two-dimensional mesh-connected multicomputers. These schemes are different from previous ones in that request partitioning is based on the submeshes available for allocation. The available submeshes selected for allocation to a job are such that a high degree of contiguity among their processors is achieved. The proposed policies are compared to previous noncontiguous policies using detailed simulations, where several common communication patterns are considered. The results show that the proposed policies can reduce the communication overhead and improve performance substantially.
 
Article
This paper provides an extensive review of studies related to expert estimation of software development effort. The main goal and contribution of the review is to support research on expert estimation, e.g., to ease other researchers' search for relevant expert estimation studies. In addition, we provide software practitioners with useful estimation guidelines, based on the research-based knowledge of expert estimation processes. The review results suggest that expert estimation is the most frequently applied estimation strategy for software projects, that there is no substantial evidence in favour of the use of estimation models, and that there are situations where we can expect expert estimates to be more accurate than formal estimation models. The following 12 expert estimation "best practice" guidelines are evaluated through the review: (1) evaluate estimation accuracy, but avoid high evaluation pressure; (2) avoid conflicting estimation goals; (3) ask the estimators to justify and criticize their estimates; (4) avoid irrelevant and unreliable estimation information; (5) use documented data from previous development tasks; (6) find estimation experts with relevant domain background and good estimation records; (7) estimate top-down and bottom-up, independently of each other; (8) use estimation checklists; (9) combine estimates from different experts and estimation strategies; (10) assess the uncertainty of the estimate; (11) provide feedback on estimation accuracy and development task relations; and (12) provide estimation training opportunities. We found supporting evidence for all 12 estimation principles, and provide suggestions on how to implement them in software organizations.
 
Article
In this paper, we present a lossy compression scheme based on the application of the 3D fast wavelet transform to code medical video. This type of video has special features, such as its representation in gray scale, its very few interframe variations, and the quality requirements of the reconstructed images. These characteristics, as well as the social impact of the intended applications, demand a design and implementation of coding schemes especially oriented to exploit them. We analyze different parameters of the codification process, such as the choice of wavelet function, the number of steps of the wavelet transform, the way the thresholds are chosen, and the methods selected for the quantization and entropy encoder. In order to enhance our original encoder, we propose several improvements in the entropy encoder: 3D-conscious run-length coding, hexadecimal coding, and the application of arithmetic coding instead of Huffman coding. Our coder achieves a good trade-off between compression ratio and quality of the reconstructed video. We have also compared our scheme with MPEG-2 and EZW, obtaining compression ratios better by up to 119% and 46%, respectively, for the same PSNR.
 
Article
Risk management and performance enhancement have always been the focus of software project management studies. The present paper reports the findings of an empirical study, based on 115 software projects, analyzing the probability of occurrence and the impact on project performance of 27 software risks grouped into six dimensions. The MANOVA analysis revealed that the probability of occurrence and the composite impact differ significantly across the six risk dimensions. Moreover, it indicated that there is no association between probability of occurrence and composite impact among the six risk dimensions; this is therefore a crucial consideration for project managers when deciding on a suitable risk management strategy. A pattern analysis of risks across high-, medium-, and low-performance software projects also showed that (1) the "requirement" risk dimension is the primary area among the six risk dimensions regardless of whether project performance is high, medium, or low; (2) for medium-performance software projects, project managers, aside from giving importance to "requirement" risk, must also continually monitor and control the "planning and control" and "project complexity" risks so that project performance can be improved; and (3) improper management of the "team", "requirement", and "planning and control" risks is the primary factor contributing to a low-performance project.
 
Article
Mobile agent technology presents an attractive alternative to the client–server paradigm for several network and real-time applications. However, for most applications, the lack of a viable agent security model has limited the adoption of the agent paradigm. This paper describes how the security infrastructure for computational Grids using X.509 Proxy Certificates can be extended to facilitate security for mobile agents. Proxy Certificates serve as credentials for Grid applications, and their primary purpose is the temporary delegation of authority. We exploit the similarities between Grid applications and mobile agent applications, and motivate the use of Proxy Certificates as credentials for mobile agents. Further, we propose extensions for Proxy Certificates to accommodate the characteristics of mobile agent applications, and present mechanisms that achieve agent-to-host authentication, restriction of agent privileges, and secure delegation of authority during the spawning of new agents.
 
Article
Research on the use of conceptual information in database queries has primarily focused on semantic query optimization. Studies on the important aspects of conceptual query formulation are currently not as extensive. Only a relatively small number of works exist in this area. The existing concept-based query languages are similar in the sense that they require the user to specify the entire query path in formulating a query. In this study, we present the Conceptual Query Language (CQL), which does not require entire query paths to be specified but only their terminal points. CQL is an abbreviated concept-based query language that allows for the conceptual abstraction of database queries and exploits the rich semantics of semantic data models to ease and facilitate query formulation. CQL was developed with the aim of providing typical end-users like secretaries and administrators an easy-to-use database query interface for querying and report generation. A CQL prototype has been implemented and currently runs as a front-end to an underlying relational DBMS. A statistical experiment conducted to probe end-users' reaction to using CQL vis-à-vis SQL as a database query language indicates that end-users perform better with CQL and have a better perception of it than of SQL. This paper discusses the design of CQL, the strategies for CQL query processing, and the comparative study between CQL and SQL.
 
Article
Cluster systems, built from commercially available personal computers connected in a loosely coupled fashion, can provide high levels of availability. To improve the availability of personal computer-based Active/Standby cluster systems, we have conducted a study of software rejuvenation, which follows a proactive fault-tolerant approach to handle software-origin system failures. In this paper, we model software rejuvenation and switchover states as a semi-Markov process and obtain mathematical steady-state solutions of the chain. We calculate the availability and the downtime of Active/Standby cluster systems using these solutions and find that software rejuvenation can be used to improve the availability of Active/Standby cluster systems.
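As a hedged illustration of how availability drops out of such a state model, the sketch below solves the steady state of a small continuous-time Markov chain with numpy and sums the probabilities of the "up" states. The paper uses a semi-Markov model of the Active/Standby cluster; the four states and all rates here are invented for illustration.

# Steady-state availability of a toy 4-state model (illustrative rates only):
# 0 = healthy, 1 = degraded, 2 = rejuvenating (planned, short), 3 = failed.
import numpy as np

Q = np.array([                      # infinitesimal generator, rows sum to 0
    [-0.02,  0.02,  0.00,  0.000],  # healthy -> degraded
    [ 0.00, -0.13,  0.10,  0.030],  # degraded -> rejuvenate or fail
    [ 2.00,  0.00, -2.00,  0.000],  # rejuvenation completes quickly
    [ 0.50,  0.00,  0.00, -0.500],  # repair after failure is slower
])

# Solve pi @ Q = 0 together with the normalisation sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]        # states 0 and 1 still serve requests
print(f"steady-state availability ~ {availability:.4f}")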
 
Article
An abstract program is a formal specification that models the valid behavior of a concurrent program without describing particular implementation mechanisms that achieve this behavior. Valid behavior can be modeled as the possible sequences of events that may be observed of a conforming concrete implementation of the abstract program. In this article, we address the problem of how to select event sequences from an abstract program to test its concrete implementation. Sequencing constraints make explicit certain types of required properties that are expressed only implicitly by the abstract program itself. The sequencing constraints derived from an abstract program can be used to guide the selection of event sequences during testing: sequences are selected to check the implementation for conformance to the required properties. We describe a constraint notation called CSPE and formally define CSPE constraints in the propositional modal μ-calculus. CSPE constraints can be automatically derived from abstract CCS and Lotos programs, and test sequences can be generated to cover the constraints. We describe a test sequence generation tool that can be used to partially automate this process. The test sequence generator inputs an abstract program and a list of constraints, and outputs a list of test sequences. The test sequence generator was used in an experiment to measure the effectiveness of the test sequences. We created mutations of a nontrivial concurrent Ada program in order to determine the mutation adequacy of a set of test sequences generated from an abstract program. The abstract program was a specification of the Sliding Window Protocol. The results of the experiment are reported.
 
Article
The growth of the Internet has generated Web pages that are rich in media and that incur significant rendering latency when accessed through slow communication channels. The technique of Web-object prefetching can potentially expedite the presentation of Web pages by utilizing the current Web page's view time to acquire the Web objects of likely future Web pages. The performance of the Web object prefetcher is contingent on the predictability of future Web pages and on quickly determining which Web objects to prefetch during the limited view time interval of the current Web page. The proposed Markov–Knapsack method combines a Multi-Markov, Web-application-centric prefetch model with a Knapsack Web object selector to enhance Web page rendering performance. The Markov Web page model ascertains the most likely next Web page set based on the current Web page, and the Web object Knapsack selector determines the premium Web objects to request from these Web pages. The results presented in the paper show that the proposed methods can be effective in improving a Web browser cache-hit percentage while significantly lowering Web page rendering latency.
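The two stages described above can be sketched directly: a first-order Markov model ranks the likely next pages from the current page, and a 0/1 knapsack over the objects of those pages picks what to prefetch within the available view-time budget. The transition table, object sizes, and benefit scores below are invented for illustration; this is not the paper's implementation.

# Sketch of Markov next-page prediction plus 0/1 knapsack object selection.
# All numbers are hypothetical; benefit = page probability, weight = bytes.
transitions = {"home": {"news": 0.6, "shop": 0.3, "help": 0.1}}
page_objects = {                    # (object, size in KB) per candidate page
    "news": [("news.css", 20), ("headline.jpg", 120)],
    "shop": [("shop.css", 25), ("banner.png", 200)],
    "help": [("faq.html", 15)],
}

def knapsack(items, budget):
    """items: (name, weight, value); classic 0/1 DP over integer weights."""
    best = [({}, 0.0)] * (budget + 1)            # (chosen set, total value)
    for name, w, v in items:
        for cap in range(budget, w - 1, -1):
            cand_set, cand_val = best[cap - w]
            if cand_val + v > best[cap][1]:
                best[cap] = ({**cand_set, name: w}, cand_val + v)
    return best[budget][0]

current = "home"
items = [(obj, size, prob)
         for page, prob in transitions[current].items()
         for obj, size in page_objects[page]]
print(knapsack(items, budget=250))   # objects to prefetch within ~250 KB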
 
Article
Component-based development (CBD) techniques have been widely used to enhance productivity and reduce the cost of software systems development. However, applying CBD techniques to embedded software development faces additional challenges. For embedded systems, it is crucial to consider quality of service (QoS) attributes, such as timeliness, memory limitations, output precision, and battery constraints. Frequently, multiple components implementing the same functionality with different QoS properties (measurements in terms of QoS attributes) can be used to compose a system. Also, software components may have parameters that can be configured to satisfy different QoS requirements. Composition analysis, which is used to determine the most suitable component selections and parameter settings to best satisfy the system QoS requirements, is very important in the embedded software development process. In this paper, we present a model and the methodologies to facilitate composition analysis. We define QoS requirements as constraints and objectives. Composition analysis is performed based on the QoS properties and requirements to find solutions (component selections and parameter settings) that optimize the QoS objectives while satisfying the QoS constraints. We use a multi-objective concept to model the composition analysis problem and use an evolutionary algorithm to determine the Pareto-optimal solutions efficiently.
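A small hedged sketch of the composition-analysis core follows (not the authors' evolutionary algorithm): enumerate candidate compositions, drop those violating QoS constraints, and keep the Pareto-optimal set with respect to the remaining objectives. The component catalogue and QoS numbers are invented for illustration.

# Sketch: constraint filtering + Pareto filter over candidate compositions.
# Each candidate is (name, latency_ms, memory_kb); both objectives minimised.
candidates = [
    ("fast_decoder + big_buffer",   12, 900),
    ("fast_decoder + small_buffer", 15, 400),
    ("slow_decoder + small_buffer", 40, 250),
    ("slow_decoder + big_buffer",   35, 800),
]
MAX_MEMORY_KB = 850                       # hypothetical QoS constraint

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

feasible = [c for c in candidates if c[2] <= MAX_MEMORY_KB]
pareto = [c for c in feasible
          if not any(dominates(o[1:], c[1:]) for o in feasible if o is not c)]
print(pareto)   # the non-dominated, constraint-satisfying compositions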
 
Article
In this paper we propose a new cache management scheme for online analytical processing (OLAP) systems based on the usability of query results in rewriting and processing other queries. For effective admission and replacement of OLAP query results, we consider the benefit of query results not only for recently issued queries but also for the expected future queries following the current query. We exploit semantic relationships between successive queries in an OLAP session, which are derived from the interactive and navigational nature of OLAP query workloads, in order to classify and predict subsequent future queries. We present a method for estimating the usability of query results for the representative future queries using a probability model for them. Experimental evaluation shows that our caching scheme, using the past and future usability of query results, can reduce the cost of processing OLAP query workloads effectively with only a small cache size, and outperforms previous caching strategies for OLAP systems.
 
Article
A software system prototype is an operational model that exhibits the behavioral and structural characteristics of the desired software product. We describe a prototyping system that automatically generates compilable prototypes by transforming an abstract data type specification into a program. The prototyping system consists of two versions: a compiler on Multics that generates PL/1 code, and a compiler on UNIX that generates Ada code. The proposed approach allows the specification developer to investigate the behavior of the specifications and define implementation models.
 
Article
Different forms of parallelism have been extensively investigated over the last few years in logic programs, and a number of systems have been proposed. The Or/And System (OASys) is an experimental parallel Prolog system that exploits and-or parallelism and comprises a computational model, a compiler, an abstract machine, and an emulator. The OASys computational model combines the two types of parallelism by considering each alternative path as a totally independent computation consisting of a conjunction of determinate subgoals. It is based on distributed scheduling and supports recomputation of paths as well as stack copying. The system features modular design, high distribution, and minimal inter-processor communication. This paper briefly presents the computational model and describes the abstract machine, discussing data representation, memory organization, the instruction set, operation, and synchronization. Finally, performance results obtained from the single-processor implementation and the multiple-processor emulation are discussed.
 
(Table: Wilcoxon test of imputation accuracy differences between MINI, k-NN, and CMI)
Article
Effort prediction is a very important issue for software project management. Historical project data sets are frequently used to support such prediction. But missing data are often contained in these data sets, and this makes prediction more difficult. One common practice is to ignore the cases with missing data, but this makes the originally small software project database even smaller and can further decrease the accuracy of prediction. The alternative is missing data imputation. There are many imputation methods. Software data sets are frequently characterised by their small size, but unfortunately sophisticated imputation methods prefer larger data sets. For this reason we explore using simple methods to impute missing data in small project effort data sets. We propose a class mean imputation (CMI) method based on the k-NN hot deck imputation method (MINI) to impute both continuous and nominal missing data in small data sets. We use an incremental approach to increase the variance of the population. To evaluate MINI (with the k-NN and CMI methods as benchmarks) we use data sets with 50 cases and 100 cases sampled from a larger industrial data set, with 10%, 15%, 20% and 30% missing data percentages respectively. We also simulate Missing Completely at Random (MCAR) and Missing at Random (MAR) missingness mechanisms. The results suggest that the MINI method outperforms both CMI and the k-NN methods. We conclude that this new imputation technique can be used to impute missing values in small data sets.
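As a rough Python illustration of the baseline techniques the paper builds on (class mean imputation and k-NN hot-deck imputation; this is not the MINI method itself, and the tiny data set is invented), the sketch below fills a missing effort value first with the mean of projects in the same size class and, alternatively, with the value donated by the nearest complete case.

# Illustrative class-mean and 1-NN hot-deck imputation for one missing value.
# Columns: (size_class, team_size, effort); None marks the missing effort.
data = [
    ("small", 3, 120.0),
    ("small", 4, 150.0),
    ("large", 9, 610.0),
    ("large", 8, None),     # effort to impute
    ("large", 10, 700.0),
]

def class_mean_impute(rows, target_class):
    vals = [e for c, _, e in rows if c == target_class and e is not None]
    return sum(vals) / len(vals)

def knn_hot_deck_impute(rows, incomplete):
    donors = [r for r in rows if r[2] is not None]
    nearest = min(donors, key=lambda r: abs(r[1] - incomplete[1]))
    return nearest[2]       # donate the nearest complete case's effort

missing = data[3]
print("class mean :", class_mean_impute(data, missing[0]))   # (610+700)/2
print("1-NN donor :", knn_hot_deck_impute(data, missing))    # 610.0 (team_size 9)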
 
Article
Most extant debugging aids force their users to think about errors in programs from a low-level, unit-at-a-time perspective. Such a perspective is inadequate for debugging large complex systems, particularly distributed systems. In this paper, we present a high-level approach to debugging that offers an alternative to the traditional techniques. We describe a language, edl, developed to support this high-level approach to debugging and outline a set of tools that has been constructed to effect this approach. The paper includes an example illustrating the approach and discusses a number of problems encountered while developing these debugging tools.
 
Article
In this paper, a new approach for detecting previously unencountered malware targeting mobile devices is proposed. In the proposed approach, time-stamped security data is continuously monitored within the target mobile device (i.e., smartphones, PDAs) and then processed by the knowledge-based temporal abstraction (KBTA) methodology. Using KBTA, continuously measured data (e.g., the number of sent SMSs) and events (e.g., software installation) are integrated with a mobile device security domain knowledge-base (i.e., an ontology for abstracting meaningful patterns from raw, time-oriented security data), to create higher level, time-oriented concepts and patterns, also known as temporal abstractions. Automatically generated temporal abstractions are then monitored to detect suspicious temporal patterns and to issue an alert. These patterns are compatible with a set of predefined classes of malware as defined by a security expert (or the owner) employing a set of time and value constraints. The goal is to identify malicious behavior that other defensive technologies (e.g., antivirus or firewall) failed to detect. Since the abstraction derivation process is complex, the KBTA method was adapted for mobile devices that are limited in resources (i.e., CPU, memory, battery). To evaluate the proposed modified KBTA method, a lightweight host-based intrusion detection system (HIDS), combined with central management capabilities for Android-based mobile phones, was developed. Evaluation results demonstrated the effectiveness of the new approach in detecting malicious applications on mobile devices (detection rate above 94% in most scenarios) and the feasibility of running such a system on mobile devices (CPU consumption was 3% on average).
 
Article
An approach to functional testing is described in which the design of a program is used to generate functional test data. The approach depends on the use of design methods that model the abstract functional structure of a program as well as the abstract structure of the data on which the program operates. An example of the use of the method is given and a discussion of its effectiveness is included.
 
Article
During design or maintenance, software developers often use intuition, rather than an objective set of criteria, to determine or recapture the design structure of a software system. A decision process based on intuition alone can miss alternative design options that are easier to implement, test, maintain, and reuse. The concept of design-level cohesion can provide both visual and quantitative guidance for comparing alternative software designs. The visual support can supplement human intuition; an ordinal design-level cohesion measure provides objective criteria for comparing alternative design structures. The process for visualizing and quantifying design-level cohesion can be readily automated and can be used to re-engineer software.
 
Article
This paper outlines the organization of an MSc in Software Engineering that has been set up as a specialist conversion course for graduates who have had some experience of computer programming. The most distinctive feature of the program is that this degree involves the participation of an industrial partner in providing some of the teaching and a period of industrial placement. Our experiences with the academic and practical aspects of such a structure have been included. († This paper is an extension of an earlier report [1] and, as we did then, we should explain to readers in the U.S.A. that the British tend to use the term course when referring to both a degree programme and a course unit.)
 
(Figures: example of an elliptic curve over the real numbers visualizing point addition; arithmetic hierarchy; VHDL Generator; generic architecture of the CryptoProcessor)
Article
This paper addresses public key cryptosystems based on elliptic curves, which are aimed at high-performance digital signature schemes. Elliptic curve algorithms are characterized by the fact that one can work with considerably shorter keys compared to the RSA approach at the same level of security. A general and highly efficient method for mapping the most time-critical operations to a configurable co-processor is proposed. By means of real-time measurements, the resulting performance values are compared to previously published state-of-the-art hardware implementations. A generator-based approach is advocated for that purpose, which supports application-specific co-processor configurations in a flexible and straightforward way. Such a configurable CryptoProcessor has been integrated into a Java-based digital signature environment, resulting in a considerable increase in its performance. The outlined approach combines in a unique way the advantages of mapping functionality to either hardware or software, and it results in high-speed cryptosystems which are both portable and easy to update according to future security requirements.
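For orientation, the time-critical operations mentioned above are point addition, doubling, and scalar multiplication on the curve. The following hedged Python sketch implements them in affine coordinates over a small prime field; it uses a textbook toy curve, far from the co-processor-sized parameters discussed in the paper.

# Toy elliptic-curve arithmetic over GF(p): y^2 = x^3 + a*x + b (mod p).
# Illustrative only; real signature schemes use much larger parameters.
p, a, b = 17, 2, 2
O = None                                   # point at infinity

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def scalar_mult(k, P):                     # double-and-add
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (5, 1)                                 # a point on the toy curve
assert (G[1] ** 2 - G[0] ** 3 - a * G[0] - b) % p == 0
print([scalar_mult(k, G) for k in range(1, 5)])   # G, 2G, 3G, 4G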
 
Article
There has been a slight surge in the study of technology adoption in developing countries. However, little attention has been paid to the adoption of biometric security systems. This paper reports a study that analyzed the adoption of biometric technology in a developing country from an institutional point of view. The results show that job positions (managerial and operational) could influence perceptions of innovation characteristics (especially ease of use and usefulness) in the decision to adopt biometrics. However, the unified organizational analyses indicate that ease of use, communication, size and type of organizations have significant impacts on the decision to adopt biometrics.
 
Article
Access control within an application during its execution prevents information leakage. The prevention can be achieved through information flow control. Many information flow control models have been developed, based on discretionary access control (DAC), mandatory access control (MAC), label-based approaches, or role-based access control (RBAC). Most existing models are for object-oriented systems. Since the procedural C language is still heavily used, offering a model to control information flows for C applications should be fruitful. Although we identified information flow control models that can be applied to procedural languages, they do not offer the features we need. We thus developed a model to control information flows for C applications. Our model is based on access control lists (ACLs) and is named CACL. It offers the following features: (a) controlling both read and write access, (b) preventing indirect information leakage, (c) detailing the control granularity to variables, (d) avoiding improper function calls, (e) controlling function calls through argument sensitivity, and (f) preventing change of an application when the access rights of the application's real-world users change. This paper presents CACL.
 
Article
In this paper, we present an improved demand-paging algorithm called PDPAF (pinned demand paging based on the access frequency of video files) to efficiently utilize the limited buffer space in a VOD (video-on-demand) server. It overcomes the disk bandwidth limitation and raises the hit ratio of video pages in the buffer, thereby increasing the total number of concurrent clients. Furthermore, we also propose an admission control algorithm to decide whether a new request can be admitted. Finally, we conduct extensive experiments to compare PDPAF with other algorithms in terms of average waiting time and the maximal number of concurrent requests, and the simulation results validate the superiority of our approach.
 
Article
The role-based access control (RBAC) approach has been recognized as useful in information security, and many RBAC models have been proposed. Current RBAC research focuses on developing new models or enhancing existing models. In our research, we developed an RBAC model that can be embedded in object-oriented systems to control information flows (i.e. to protect privacy) within the systems. This paper proposes the model. The model, which is named OORBAC, is an extension of RBAC96. OORBAC offers the following features: (a) precisely controlling information flows among objects, (b) controlling method invocation through argument sensitivity, (c) allowing purpose-oriented method invocation and preventing leakage within an object, (d) precisely controlling write access, and (e) avoiding Trojan horses. We implemented a prototype for OORBAC using Java as the target language. The implementation resulted in a language named OORBACL, which can be used to implement secure applications. We evaluated OORBAC through experiments. The evaluation results are also shown in this paper.
 
Article
The elliptic curve cryptosystem is considered to be the strongest public-key cryptosystem known today and is preferred over the RSA cryptosystem because the key length for secure RSA has increased over recent years, which has put a heavier processing load on its applications. An efficient key management and derivation scheme based on the elliptic curve cryptosystem is proposed in this paper to solve the hierarchical access control problem. Each class in the hierarchy is allowed to select its own secret key. The problem of efficiently adding or deleting classes can be solved without the necessity of regenerating keys for all the users in the hierarchy, as was the case in previous schemes. The scheme is shown to be much more efficient and flexible than previously proposed schemes.
 
Article
This paper presents the Modify-on-Access (Mona) file system, which provides extensibility through transformations applied to streams of data. Mona overcomes two limitations of prior extensible file systems. First, the Mona file system offers two levels of extensions (kernel and user) that share a common interface. It allows performance-critical operations to execute with modest overhead in the kernel and untrusted or more complex operations to safely execute in user space. Second, Mona enables fine-grained extensions which allow an application to customize the file system at runtime. This paper discusses the implementation of the Mona file system. Our implementation adds a modest overhead of 0–3% to file system operations. This overhead has even less effect on net system performance for several benchmarks. Moreover, this paper describes applications that achieve 4–5 times speedup using custom transformations. This paper also describes several transformations that increase functionality. Among these are the ftp transformation, which allows a user to browse a remote file as though it were local, and the command transformation, which invokes an arbitrary executable (even a shell script) on a data stream.
 
Article
Tools to support computer supported collaborative writing (CSCWriting) allow multiple distributed users to collaborate over a wide area network on constructing a shared document. Prior research on computer supported collaborative work (CSCW) in general has predominantly focused on synchronous collaboration. Network latency becomes a bottleneck in maintaining shared artifacts during synchronous collaboration. Besides, to enable truly cooperative work asynchronous modes need to be supported as well, so that mobile users can switch between synchronous and asynchronous modes while they disconnect and reconnect to the network. These two considerations motivated the development of a distributed version control system for CSCWriting described in this paper. The most important contribution of our work is the proposal of an activity identification (AID) tag as the fundamental mechanism to support distributed management of multiple versions of a document. The AID tag facilitates the design and implementation of an integrated approach that includes differencing, merging and role-based access control at different levels of granularity, maintaining and visualizing the version structure, and group awareness of document status and operations. The AID tag leads to simple and effective differencing and merging schemes. Its unique address scheme eliminates the need for large storage capacity for version maintenance. Role-based access control can be implemented by associating the access right table and role assignment capabilities with the AID tag. Information for providing group awareness of the changing document is available from the AID tag. In addition, since the system maintains a user-browsable version structure of the evolving document that incorporates AID tag information, any user collaborating in the authoring of a document can easily visualize the historical evolution and current context of the document.
 
Article
The various kinds of access decision dependencies within a predicate-based model of database protection are classified according to cost of enforcement. Petri nets and some useful extensions are described. Extended Petri nets are used to model the flow of messages and data during protection enforcement within MULTISAFE, a multimodule system architecture for secure database management. The model demonstrates that some of the stated criteria for security are met within MULTISAFE. Of particular interest is the modeling of data dependent access conditions with predicates at Petri net transitions. Tokens in the net carry the intermodule messages of MULTISAFE. Login, authorization, and database requests are traced through the model as examples. The evaluation of complex access condition predicates is described for the enforcement process. Queues of data and queues of access condition predicates are cycled through the net so that each data record is checked against each predicate. Petri nets are shown to be a useful modeling tool for database security.
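To make the modeling device concrete, here is a minimal hedged sketch of a predicate-extended Petri net in Python (an illustration of the general idea, not the MULTISAFE model; the places, tokens, and predicate are invented): a transition fires only when its input place holds a token and its guard predicate over the token data evaluates to true, mimicking a data-dependent access condition.

# Minimal Petri net with a guard predicate on a transition (illustrative).
# Tokens carry data; the guard models a data-dependent access condition.
class Net:
    def __init__(self):
        self.places = {"request": [], "granted": []}

    def fire(self, src, dst, guard):
        """Move one token from src to dst if present and guard(token) holds."""
        for token in list(self.places[src]):
            if guard(token):
                self.places[src].remove(token)
                self.places[dst].append(token)
                return token
        return None

net = Net()
net.places["request"].append({"user": "alice", "clearance": 3, "record_level": 2})
net.places["request"].append({"user": "bob",   "clearance": 1, "record_level": 2})

# Access condition predicate attached to the transition: clearance must
# dominate the sensitivity level of the requested record.
grant = net.fire("request", "granted",
                 guard=lambda t: t["clearance"] >= t["record_level"])
print(grant)                 # alice's request is granted
print(net.places)            # bob's request stays pending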
 
Article
The results of a considerable number of works addressing various features of real-time database systems (RTDBSs) have recently appeared in the literature. An issue that has not received much attention yet is the performance of the communication network configuration in a distributed RTDBS. In this article, we examine the impact of underlying network architecture on the performance of a distributed RTDBS. In particular, we evaluate the real-time performance of distributed transactions in terms of the fraction of satisfied deadlines under various network access strategies. We also critically examine the common assumption of constant network delay for each communication message exchanged in a distributed RTDBS.
 
Article
Distributed simulation has emerged as an important instrument for studying large-scale complex systems. Such systems inherently consist of a large number of components, which operate in a large shared state space interacting with it in highly dynamic and unpredictable ways. Optimising access to the shared state space is crucial for achieving efficient simulation executions. Data accesses may take two forms: locating data according to a set of attribute value ranges (range query) or locating a particular state variable from the given identifier (ID query and update). This paper proposes two alternative routing approaches, namely the address-based approach, which locates data according to their address information, and the range-based approach, whose operation is based on looking up attribute value range information along the paths to the destinations. The two algorithms are discussed and analysed in the context of PDES-MAS, a framework for the distributed simulation of multi-agent systems, which uses a hierarchical infrastructure to manage the shared state space. The paper introduces a generic meta-simulation framework which is used to perform a quantitative comparative analysis of the proposed algorithms under various circumstances.
 
Article
Software reuse has long been touted as an effective means to develop software products. But reuse technologies for software have not lived up to expectations. Among the barriers are high costs of building software repositories and the need for effective tools to help designers locate reusable software. Although many design-for-reuse and software classification efforts have been proposed, these methods are cost-intensive and cannot effectively take advantage of large stores of design artifacts that many development organizations have accumulated. Methods are needed that take advantage of these valuable resources in a cost-effective manner. This article describes an approach to the design of tools to help software designers build repositories of software components and locate potentially reusable software in those repositories. The approach is investigated with a retrieval tool, named CodeFinder, which supports the process of retrieving software components when information needs are ill-defined and users are not familiar with vocabulary used in the repository. CodeFinder uses an innovative integration of tools for the incremental refinement of queries and a retrieval mechanism that finds information associatively related to a query. Empirical evaluation of CodeFinder has demonstrated the effectiveness of the approach.
 
Article
The problem of access control in a hierarchy is present in many application areas. Since computing resources have grown tremendously, access control is more frequently required in areas such as computer networks, database management systems, and operating systems. Many schemes based on cryptography have been proposed to solve this problem. However, previous schemes need large values associated with each security class. In this paper, we propose a new scheme to solve this problem achieving the following two goals. One is that the number of keys is reduced without affecting the security of the system. The other goal is that when a security class is added to the system, we need only update a few keys of the related security classes with simple operations.
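The underlying idea of key derivation in a hierarchy can be illustrated with a very small, hedged Python sketch that is not the scheme proposed in the paper: a parent class derives each child key by hashing its own key together with the child's identifier, so higher classes can recompute (and thus access) the keys of their descendants while the reverse direction is infeasible. The hierarchy and master secret below are hypothetical.

# Generic top-down key derivation for a class hierarchy (illustration only;
# the paper proposes a different, more efficient cryptographic scheme).
import hashlib

hierarchy = {"root": ["hr", "eng"], "hr": ["payroll"], "eng": []}

def derive(parent_key: bytes, child_id: str) -> bytes:
    return hashlib.sha256(parent_key + child_id.encode()).digest()

def keys_below(class_id: str, class_key: bytes, out=None):
    """A class can recompute the keys of all classes beneath it."""
    out = {} if out is None else out
    out[class_id] = class_key
    for child in hierarchy.get(class_id, []):
        keys_below(child, derive(class_key, child), out)
    return out

root_key = b"secret-root-key"             # hypothetical master secret
all_keys = keys_below("root", root_key)
print(sorted(all_keys))                   # root reaches every class's key
print(keys_below("hr", all_keys["hr"]))   # hr reaches only its own subtree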
 
Top-cited authors
Barbara Kitchenham
  • Keele University
Mark Turner
  • Keele University
Mohamed Khalil
  • University of Khartoum
Paris Avgeriou
  • University of Groningen
Lionel C. Briand
  • Simula Research Laboratory