Journal of Systems and Software

Published by Elsevier BV

Print ISSN: 0164-1212

Articles


Figures and tables (selection): Table 3 — Project models, diagrams and elements; Table 5 — Analysis of variance (ANOVA) of the treatment and the experience of the subject on the dependent variables; Traceability life cycle for a project; Distinction of traceability relations into incoming and outgoing relative to the selected element within a model; Changing an element in one model can have a different impact on related elements and on the existing traceability relations.

Towards automated traceability maintenance
Article · Full-text available

October 2012 · 838 Reads

Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided.

Testing and Validating Machine Learning Classifiers by Metamorphic Testing

April 2011 · 572 Reads

Machine Learning algorithms have provided core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms that support such applications. Our approach is based on the technique of "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective at killing mutants, and that observing the expected cross-validation result alone is not sufficient to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program.
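As an illustration of the metamorphic-testing idea described above, the sketch below checks a k-NN classifier against a single metamorphic relation: permuting the training instances must not change the predictions. The relation, the synthetic data, and the use of scikit-learn are illustrative assumptions, not the specific relations or subject programs studied in the paper.

```python
# Minimal sketch of metamorphic testing for a classifier (illustrative only).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(20, 4))

def predict(X, y, X_test):
    return KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(X_test)

# Source test case: predictions from the original training set.
source = predict(X_train, y_train, X_test)

# Follow-up test case: a permutation of the training instances should not
# change the predictions of a correct k-NN implementation (the metamorphic
# relation acts as a partial oracle where no exact expected output exists).
perm = rng.permutation(len(X_train))
follow_up = predict(X_train[perm], y_train[perm], X_test)

assert np.array_equal(source, follow_up), "Metamorphic relation violated"
```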

A posteriori operation detection in evolving software models

February 2013 · 219 Reads

Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic-operation detection approaches that also detects composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that the specifications already available for executing composite operations can be reused to detect applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment.

An enhanced zero-one optimal path set selection method

January 1996 · 41 Reads

The optimal path set selection problem is a crucial issue in structural testing. The zero-one optimal path set selection method is a generalized method that can be applied to all coverage criteria. Its only drawback is that, for a large program, the computation may take ten or more hours, because the computation time grows exponentially with the number of candidate paths and proportionally with the number of components to be covered. To alleviate this drawback, this paper enhances the method by defining five reduction rules and by reusing previously selected path set(s) to reduce both the number of candidate paths and the number of components to be covered. Since both quantities are reduced, the computation time can be greatly reduced.
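For illustration, the sketch below brute-forces the underlying zero-one selection problem: choose the smallest subset of candidate paths whose union covers all components. The coverage matrix is hypothetical, and the five reduction rules and path-set reuse described in the paper are not implemented here.

```python
# Minimal sketch of zero-one optimal path set selection: pick the smallest
# subset of candidate paths whose union covers all components (e.g. branches).
from itertools import combinations

# coverage[p] = set of component ids covered by candidate path p (hypothetical)
coverage = {
    "p1": {1, 2, 3},
    "p2": {3, 4},
    "p3": {1, 4, 5},
    "p4": {2, 5},
}
components = set().union(*coverage.values())

def optimal_path_set(coverage, components):
    paths = list(coverage)
    for size in range(1, len(paths) + 1):          # try smallest subsets first
        for subset in combinations(paths, size):
            if set().union(*(coverage[p] for p in subset)) >= components:
                return subset                      # first hit is size-optimal
    return None

print(optimal_path_set(coverage, components))      # ('p1', 'p3')
```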

A uniform approach to software and hardware fault tolerance
In recent years, various attempts have been made to combine software and hardware fault tolerance in critical computer systems. In these systems, software and hardware faults may occur in many different sequences. The problem of how to incorporate these sequences in a fault-tolerant system design has been largely neglected in previous work. In this paper, a uniform software-based fault tolerance method is proposed to distinguish and tolerate various sequences of software and hardware faults. This method can be based on the recovery block or the n-version programming scheme, two fundamental software fault tolerance schemes. The concept of a fault identification and system reconfiguration (FISR) tree is proposed to represent the procedure of fault identification and system reconfiguration in a systematic way.

A layout tool for Glotos

October 1992 · 37 Reads

Glotos is a visual representation of Lotos, and the two are semantically equivalent. In this paper, a Glotos layout tool is described, which takes either the Lotos or the edited Glotos specification as input and generates an aesthetic Glotos layout as output. In both cases, a Glotos syntax tree is created. A bottom-up procedure is then used to calculate the boundary for each Glotos constructor. Finally, a top-down procedure determines the x, y coordinates of each visual constructor. Unlike other layout tools, the Glotos layout tool makes full use of the syntactic and semantic information of Glotos during layout, which greatly improves its efficiency.

Modularity Analysis of Use Case Implementations

October 2010 · 113 Reads

Component-based decomposition can result in implementations with use case code tangled and scattered across components. Modularity techniques such as aspects, mixins, and virtual classes have recently been proposed to address this problem. One can use such techniques to group together code related to a single use case. This paper analyzes qualitatively and quantitatively the impact of this kind of use case modularization. We apply one specific technique, Aspect-Oriented Programming, to modularize the use case implementations of the Health Watcher system. We extract traditional and contemporary metrics, including cohesion, coupling and separation of concerns, and analyze modularity in terms of quality attributes such as changeability, support for parallel development, and pluggability. Our findings indicate that the results of modularity analysis depend on factors beyond the chosen system, the metrics, and the applied technique.

Experimental evaluation of a simulation environment for information systems design

January 1996 · 29 Reads

This article presents an experiment assessing the decision support value of a simulation environment for the information systems (IS) design process. We have implemented a prototype simulation environment that uses data flow diagrams (DFDs) augmented with the performance rates of system components to specify the structure and dynamics of IS designs. The DFD-based representation is automatically mapped to a stochastic queuing network simulation. Knowledge-based help supports formulation of simulation run parameters and interpretation of output. We measure the prototype's impact on system dynamics assessment via the accuracy of IS professionals' responses to questions about the dynamics of four IS cases. The prototype's simulation capability has a significant positive effect on accuracy scores for questions involving waiting times, system times, and queue lengths of jobs or customers. Subjects choosing to conduct more and longer simulation runs provide significantly more accurate assessments than less-active users of simulation. The findings suggest that IS professionals can make use of simulation technology within a compact time frame if the proper supports are provided; on-line help or other methods should be used to encourage users to conduct sufficiently long simulation runs; and embedding the prototype's capabilities in a computer-aided software engineering workbench would result in better designed information systems.

An empirical evaluation of the ISO/IEC 15504 assessment model

October 2001 · 97 Reads

The emerging international standard ISO/IEC 15504 (Software Process Assessment) includes an exemplar assessment model (known as Part 5). Thus far, the majority of users of ISO/IEC 15504 employ the exemplar model as the basis for their assessments. This paper describes an empirical evaluation of the exemplar model. Questionnaire data was collected from the lead assessors of 57 assessments world-wide. Our findings are encouraging for the developers and users of ISO/IEC 15504 in that they indicate that the current model can be used successfully in assessments. However, they also point out some weaknesses in the rating scheme that need to be rectified in future revisions of ISO/IEC 15504.

DoS-resistant ID-based password authentication scheme using smart cards. Journal of Systems and Software, 83(1), 163-172

January 2010 · 99 Reads

In this paper, we provide a defense mechanism to Kim–Lee–Yoo’s ID-based password authentication scheme, which is vulnerable to impersonation attacks and resource exhaustion attacks. Mutual authentication and communication privacy are regarded as essential requirements in today’s client/server-based architecture; therefore, a lightweight but secure mutual authentication method is introduced in the proposed scheme. Once the mutual authentication is successful, the session key will be established without any further computation. The proposed defense mechanism not only accomplishes the mutual authentication and the session key establishment, but also inherits the security advantages of Kim–Lee–Yoo’s scheme, e.g. it is secure against password guessing attacks and message replay attacks.

Figure: Overview of the types of articles
Software engineering article types: An analysis of the literature. Journal of Systems and Software, 81(10), 1694-1714

October 2008 · 4,434 Reads

The software engineering (SE) community has recently recognized that the field lacks well-established research paradigms and clear guidance on how to write good research reports. With no comprehensive guide to the different article types in the field, article writing and reviewing heavily depend on the expertise and the understanding of the individual SE actors. In this work, we classify and describe the article types published in SE with an emphasis on what is required for publication in journals and conference proceedings. Theoretically, we consider article types as genres, because we assume that each type of article has a specific function and a particular communicative purpose within the community, which the members of the community can recognize. We draw on the written sources available, i.e. the instructions to authors/reviewers of major SE journals, the calls for papers of major SE conferences, and previous research published on the topic. Despite the fragmentation and limitations of the sources studied, we are able to propose a classification of different SE article types. Such a classification helps in guiding the reader through the SE literature, and in making the researcher reflect on directions for improvement.

Software developer perceptions about software project failure: A case study. Journal of Systems and Software, 49(2-3), 177-192

December 1999 · 457 Reads

Software development project failures have become commonplace. With almost daily frequency, these failures are reported in newspapers, journal articles, or popular books. These failures are defined in terms of cost and schedule over-runs, project cancellations, and lost opportunities for the organizations that embark on the difficult journey of software development. Rarely do these accounts include perspectives from the software developers that worked on these projects. This case study provides an in-depth look at software development project failure through the eyes of the software developers. The researcher used structured interviews, project documentation reviews, and survey instruments to gather a rich description of a software development project failure. The results of the study identify a large gap between how a team of software developers defined project success and the popular definition of project success. This study also revealed that a team of software developers maintained a high level of job satisfaction despite their failure to meet the schedule and cost goals of the organization.

Editor's corner: An assessment of systems and software engineering scholars and institutions, 1993 and 1994

October 1995 · 18 Reads

This paper is the second in an annual series whose goal is to answer the questions: Who are the most published authors in the field of systems and software engineering (SSE)? From which institutions do the most published systems and software engineering papers emerge? The first paper in the series (Glass, 1994) was published a year ago. The current study reports on the top scholars and institutions for the years 1993 and 1994. The methodology of the study (including the journals surveyed) and its limitations are discussed later in the paper. This study focuses on systems and software engineering and not, for example, on computer science or information systems. The findings are as follows.

Table 1: Top scholars in the field of systems and software engineering
Table 2: Top scholar keywords describing research focus
Table 3: Top institutions in the field of systems and software engineering
Table 4: Top institutions and top scholars
An Assessment of Systems and Software Engineering Scholars and Institutions (2000–2004)

October 2005 · 1,198 Reads

This paper presents the findings of a five-year study of the top scholars and institutions in the Systems and Software Engineering field, as measured by the quantity of papers published in the journals of the field in 2000–2004. The top scholar is Hai Zhuge of the Chinese Academy of Sciences, and the top institution is Korea Advanced Institute of Science and Technology. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period.

An efficient nonuniform index in the wireless broadcast environments. Journal of Systems and Software, 81, 2091-2103

November 2008 · 44 Reads

Data broadcast is an efficient dissemination method to deliver information to mobile clients through the wireless channel. It allows a huge number of mobile clients to simultaneously access data in wireless environments. In real-life applications, more popular data may be accessed by clients more frequently than less popular data. Under such scenarios, Acharya et al.'s Broadcast Disks algorithm (BD) allocates more popular data so that it appears more times in a broadcast period than less popular data (i.e., a nonuniform broadcast), and provides good performance in reducing client waiting time. However, mobile devices must constantly tune in to the wireless broadcast channel to examine data, which consumes a lot of energy. Using index technologies on the broadcast file can greatly reduce the energy consumption of mobile devices without significantly increasing client waiting time. In this paper, we propose an efficient nonuniform index, called the skewed index (SI), over BD. The proposed algorithm builds an index tree according to the skewed access patterns of clients, and allocates index nodes for popular data more times than those for less popular data in a broadcast cycle. Our experimental study shows that the proposed algorithm outperforms the flexible index and the flexible distributed index.
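A rough sketch of the nonuniform broadcast idea the skewed index builds on is shown below: Acharya et al.'s Broadcast Disks flattening places hotter pages on higher-frequency "disks" so they recur more often per broadcast cycle. The page names and frequencies are invented for illustration, and the SI index tree itself is not reproduced.

```python
# Minimal sketch of a Broadcast Disks-style nonuniform schedule: hotter disks
# get higher relative frequencies, so their pages recur more often per cycle.
from math import lcm

# (pages sorted by popularity, relative broadcast frequency) -- illustrative
disks = [
    (["A"], 4),                  # hottest page, broadcast 4x per cycle
    (["B", "C"], 2),
    (["D", "E", "F", "G"], 1),   # coldest pages, broadcast once per cycle
]

def broadcast_schedule(disks):
    max_chunks = lcm(*(freq for _, freq in disks))
    chunked = []
    for pages, freq in disks:
        n = max_chunks // freq                       # chunks per disk
        size = -(-len(pages) // n)                   # ceil division
        chunked.append([pages[k:k + size] for k in range(0, len(pages), size)])
    schedule = []
    for minor in range(max_chunks):                  # one minor cycle per pass
        for chunks in chunked:
            schedule.extend(chunks[minor % len(chunks)])
    return schedule

print(broadcast_schedule(disks))
# ['A', 'B', 'D', 'A', 'C', 'E', 'A', 'B', 'F', 'A', 'C', 'G']
```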

Requirements specification for Ada software under DoD-STD-2167A

May 1991 · 61 Reads

The design of Ada software is best initiated from a set of specifications that complements Ada software engineering. Under DoD-STD-2167A, the specification writer may inadvertently impede Ada design methods through the over-specification of program structure before the problem domain is fully understood in the context of Ada implementation. This paper identifies some of the potential pitfalls in expressing requirements under the DoD-STD-2167A Software Requirements Specification format and gives recommendations on how to write compliant requirements that facilitate Ada design.

Availability-based noncontiguous processor allocation policies for 2D mesh-connected multicomputers

July 2008 · 98 Reads

Various contiguous and noncontiguous processor allocation policies have been proposed for mesh-connected multicomputers. Contiguous allocation suffers from high external processor fragmentation because it requires that the processors allocated to a parallel job be contiguous and have the same topology as the multicomputer. The goal of lifting the contiguity condition in noncontiguous allocation is reducing processor fragmentation. However, this can increase the communication overhead because the distances traversed by messages can be longer, and messages from different jobs can interfere with each other by competing for communication resources. The extra communication overhead depends on how the allocation request is partitioned and mapped to free processors. In this paper, we investigate a new class of noncontiguous allocation schemes for two-dimensional mesh-connected multicomputers. These schemes are different from previous ones in that request partitioning is based on the submeshes available for allocation. The available submeshes selected for allocation to a job are such that a high degree of contiguity among their processors is achieved. The proposed policies are compared to previous noncontiguous policies using detailed simulations, where several common communication patterns are considered. The results show that the proposed policies can reduce the communication overhead and improve performance substantially.

A lossy 3D wavelet transform for high-quality compression of medical video

March 2009 · 66 Reads

In this paper, we present a lossy compression scheme based on the application of the 3D fast wavelet transform to code medical video. This type of video has special features, such as its representation in gray scale, its very few interframe variations, and the quality requirements of the reconstructed images. These characteristics, as well as the social impact of the desired applications, demand a design and implementation of coding schemes especially oriented to exploit them. We analyze different parameters of the codification process, such as the utilization of different wavelet functions, the number of steps the wavelet function is applied to, the way the thresholds are chosen, and the selected methods in the quantization and entropy encoder. In order to enhance our original encoder, we propose several improvements in the entropy encoder: 3D-conscious run-length, hexadecimal coding and the application of arithmetic coding instead of Huffman. Our coder achieves a good trade-off between compression ratio and quality of the reconstructed video. We have also compared our scheme with MPEG-2 and EZW, obtaining compression ratios up to 119% and 46% better, respectively, for the same PSNR.
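For reference, the quality metric used in the comparison (PSNR) can be computed as in the sketch below; the random frame and noise level are purely illustrative.

```python
# Minimal sketch of the PSNR metric, computed on a random 8-bit "frame" and a
# noisy reconstruction for illustration only.
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(frame, noisy):.1f} dB")   # roughly 34 dB for sigma ~ 5
```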

Secure agent computation: X.509 Proxy Certificates in a multi-lingual agent framework

February 2005 · 39 Reads

Mobile agent technology presents an attractive alternative to the client–server paradigm for several network and real-time applications. However, for most applications, the lack of a viable agent security model has limited the adoption of the agent paradigm. This paper describes how the security infrastructure for computational Grids using X.509 Proxy Certificates can be extended to facilitate security for mobile agents. Proxy Certificates serve as credentials for Grid applications, and their primary purpose is the temporary delegation of authority. We exploit the similarities between Grid applications and mobile agent applications and motivate the use of Proxy Certificates as credentials for mobile agents. Further, we propose extensions to Proxy Certificates to accommodate the characteristics of mobile agent applications, and present mechanisms that achieve agent-to-host authentication, restriction of agent privileges, and secure delegation of authority during spawning of new agents.

An abbreviated concept-based query language and its exploratory evaluation

July 2002 · 23 Reads

Research on the use of conceptual information in database queries has primarily focused on semantic query optimization. Studies on the important aspects of conceptual query formulation are currently not as extensive. Only a relatively small number of works exist in this area. The existing concept-based query languages are similar in the sense that they require the user to specify the entire query path in formulating a query. In this study, we present the Conceptual Query Language (CQL), which does not require entire query paths to be specified but only their terminal points. CQL is an abbreviated concept-based query language that allows for the conceptual abstraction of database queries and exploits the rich semantics of semantic data models to ease and facilitate query formulation. CQL was developed with the aim of providing typical end-users like secretaries and administrators an easy-to-use database query interface for querying and report generation. A CQL prototype has been implemented and currently runs as a front-end to an underlying relational DBMS. A statistical experiment conducted to probe end-users' reaction to using CQL vis-à-vis SQL as a database query language indicates that end-users perform better with CQL and have a better perception of it than of SQL. This paper discusses the design of CQL, the strategies for CQL query processing, and the comparative study between CQL and SQL.

Exploiting and-or parallelism in Prolog: The OASys computational model and abstract architecture

October 1998 · 28 Reads

Different forms of parallelism in logic programs have been extensively investigated over the last few years, and a number of systems have been proposed. Or/And System (OASys) is an experimental parallel Prolog system that exploits and-or parallelism and comprises a computational model, a compiler, an abstract machine and an emulator. The OASys computational model combines the two types of parallelism, considering each alternative path as a totally independent computation which consists of a conjunction of determinate subgoals. It is based on distributed scheduling and supports recomputation of paths as well as stack copying. The system features modular design, high distribution and minimal inter-processor communication. This paper briefly presents the computational model and describes the abstract machine, discussing data representation, memory organization, instruction set, operation and synchronization. Finally, performance results obtained by the single-processor implementation and the multiple-processor emulation are discussed.

Table 10: Wilcoxon test of imputation accuracy differences between MINI and k-NN and CMI
A new imputation method for small software project data sets

January 2007 · 285 Reads

Effort prediction is a very important issue for software project management. Historical project data sets are frequently used to support such prediction. But missing data are often contained in these data sets and this makes prediction more difficult. One common practice is to ignore the cases with missing data, but this makes the originally small software project database even smaller and can further decrease the accuracy of prediction. The alternative is missing data imputation. There are many imputation methods. Software data sets are frequently characterised by their small size but unfortunately sophisticated imputation methods prefer larger data sets. For this reason we explore using simple methods to impute missing data in small project effort data sets. We propose a class mean imputation (CMI) method based on the k-NN hot deck imputation method (MINI) to impute both continuous and nominal missing data in small data sets. We use an incremental approach to increase the variance of population. To evaluate MINI (and k-NN and CMI methods as benchmarks) we use data sets with 50 cases and 100 cases sampled from a larger industrial data set with 10%, 15%, 20% and 30% missing data percentages respectively. We also simulate Missing Completely at Random (MCAR) and Missing at Random (MAR) missingness mechanisms. The results suggest that the MINI method outperforms both CMI and the k-NN methods. We conclude that this new imputation technique can be used to impute missing values in small data sets.
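To make the two benchmark ideas concrete, the sketch below applies class mean imputation and k-NN (hot-deck style) imputation to a toy effort data set; the data, the distance measure, and the choice of k are illustrative assumptions, and the MINI combination itself is not reproduced.

```python
# Minimal sketch of the benchmark imputation ideas: class mean imputation (CMI)
# and k-NN imputation on a made-up effort data set.
import numpy as np

# columns: [team_size, effort]; one effort value is missing (np.nan)
data = np.array([
    [2.0, 10.0],
    [2.0, 12.0],
    [5.0, 30.0],
    [5.0, np.nan],   # case with missing effort, "class" team_size == 5
    [8.0, 55.0],
])

def class_mean_impute(data, class_col=0, target_col=1):
    filled = data.copy()
    for i in np.argwhere(np.isnan(filled[:, target_col])).ravel():
        same_class = filled[filled[:, class_col] == filled[i, class_col]]
        filled[i, target_col] = np.nanmean(same_class[:, target_col])
    return filled

def knn_impute(data, target_col=1, k=2):
    filled = data.copy()
    complete = data[~np.isnan(data[:, target_col])]
    for i in np.argwhere(np.isnan(filled[:, target_col])).ravel():
        dist = np.abs(complete[:, 0] - filled[i, 0])   # distance on team_size
        donors = complete[np.argsort(dist)[:k]]        # k nearest complete cases
        filled[i, target_col] = donors[:, target_col].mean()
    return filled

print(class_mean_impute(data)[3, 1])   # 30.0 (mean of its class)
print(knn_impute(data)[3, 1])          # 20.0 (mean of the 2 nearest donors)
```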

Testing abstract distributed programs and their implementations: A constraint-based approach

June 1996 · 22 Reads

An abstract program is a formal specification that models the valid behavior of a concurrent program without describing particular implementation mechanisms that achieve this behavior. Valid behavior can be modeled as the possible sequences of events that may be observed of a conforming concrete implementation of the abstract program. In this article, we address the problem of how to select event sequences from an abstract program to test its concrete implementation. Sequencing constraints make explicit certain types of required properties that are expressed only implicitly by the abstract program itself. The sequencing constraints derived from an abstract program can be used to guide the selection of event sequences during testing: sequences are selected to check the implementation for conformance to the required properties. We describe a constraint notation called CSPE and formally define CSPE constraints in the propositional modal μ-calculus. CSPE constraints can be automatically derived from abstract CCS and Lotos programs, and test sequences can be generated to cover the constraints. We describe a test sequence generation tool that can be used to partially automate this process. The test sequence generator inputs an abstract program and a list of constraints, and outputs a list of test sequences. The test sequence generator was used in an experiment to measure the effectiveness of the test sequences. We created mutations of a nontrivial concurrent Ada program in order to determine the mutation adequacy of a set of test sequences generated from an abstract program. The abstract program was a specification of the Sliding Window Protocol. The results of the experiment are reported.

Availability analysis and improvement of Active/Standby cluster systems using software rejuvenation

March 2002 · 48 Reads

Cluster systems, built from commercially available personal computers connected in a loosely coupled fashion, can provide high levels of availability. To improve the availability of personal computer-based Active/Standby cluster systems, we have conducted a study of software rejuvenation, which follows a proactive fault-tolerant approach to handling software-origin system failures. In this paper, we map software rejuvenation and switchover states with a semi-Markov process and obtain mathematical steady-state solutions of the chain. We calculate the availability and the downtime of Active/Standby cluster systems using these solutions and find that software rejuvenation can be used to improve the availability of Active/Standby cluster systems.
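In the spirit of the availability analysis above, the sketch below solves a simple three-state continuous-time Markov chain (up, rejuvenating, failed) for its steady-state probabilities. The states and rates are made up and far simpler than the semi-Markov model of the Active/Standby cluster used in the paper.

```python
# Minimal sketch of a steady-state availability computation (illustrative only).
import numpy as np

# Generator matrix Q (rows sum to 0); rates in events per hour (hypothetical).
lam_fail, lam_rejuv = 0.001, 0.01      # failure rate, rejuvenation trigger rate
mu_repair, mu_rejuv = 0.5, 2.0         # repair rate, rejuvenation completion rate
Q = np.array([
    [-(lam_fail + lam_rejuv), lam_rejuv, lam_fail],   # state 0: up
    [mu_rejuv, -mu_rejuv, 0.0],                       # state 1: rejuvenating (down)
    [mu_repair, 0.0, -mu_repair],                     # state 2: failed (down)
])

# Solve pi * Q = 0 subject to sum(pi) = 1 for the steady-state distribution.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]                                  # only state 0 is "up"
downtime_min_per_year = (1 - availability) * 365 * 24 * 60
print(f"availability = {availability:.6f}, downtime ~ {downtime_min_per_year:.0f} min/year")
```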

Generation of ADA and PL/1 prototypes from abstract data type specifications

November 1991 · 117 Reads

A software system prototype is an operational model that exhibits the behavioral and structural characteristics of the desired software product. We describe a prototyping system that automatically generates compilable prototypes by transforming an abstract data type specification into a program. The prototyping system consists of two versions: a compiler on MULTICS that generates PL/1 code, and a compiler on UNIX that generates ADA code. The proposed approach allows the specification developer to investigate the behavior of the specifications and define implementation models.

High-level debugging of distributed systems: The behavioral abstraction approach

December 1983 · 63 Reads

Most extant debugging aids force their users to think about errors in programs from a low-level, unit-at-a-time perspective. Such a perspective is inadequate for debugging large complex systems, particularly distributed systems. In this paper, we present a high-level approach to debugging that offers an alternative to the traditional techniques. We describe a language, edl, developed to support this high-level approach to debugging and outline a set of tools that has been constructed to effect this approach. The paper includes an example illustrating the approach and discusses a number of problems encountered while developing these debugging tools.

Intrusion detection for mobile devices using the knowledge-based, temporal abstraction method

August 2010 · 193 Reads

In this paper, a new approach for detecting previously unencountered malware targeting mobile devices is proposed. In the proposed approach, time-stamped security data is continuously monitored within the target mobile device (i.e., smartphones, PDAs) and then processed by the knowledge-based temporal abstraction (KBTA) methodology. Using KBTA, continuously measured data (e.g., the number of sent SMSs) and events (e.g., software installation) are integrated with a mobile device security domain knowledge-base (i.e., an ontology for abstracting meaningful patterns from raw, time-oriented security data), to create higher-level, time-oriented concepts and patterns, also known as temporal abstractions. Automatically generated temporal abstractions are then monitored to detect suspicious temporal patterns and to issue an alert. These patterns are compatible with a set of predefined classes of malware as defined by a security expert (or the owner) employing a set of time and value constraints. The goal is to identify malicious behavior that other defensive technologies (e.g., antivirus or firewall) have failed to detect. Since the abstraction derivation process is complex, the KBTA method was adapted for mobile devices that are limited in resources (i.e., CPU, memory, battery). To evaluate the proposed modified KBTA method, a lightweight host-based intrusion detection system (HIDS), combined with central management capabilities for Android-based mobile phones, was developed. Evaluation results demonstrated the effectiveness of the new approach in detecting malicious applications on mobile devices (detection rate above 94% in most scenarios) and the feasibility of running such a system on mobile devices (CPU consumption was 3% on average).

Functional testing and design abstractions

December 1980 · 37 Reads

An approach to functional testing is described in which the design of a program is used to generate functional test data. The approach depends on the use of design methods that model the abstract functional structure of a program as well as the abstract structure of the data on which the program operates. An example of the use of the method is given and a discussion of its effectiveness is included.

Using design abstractions to visualize, quantify, and restructure software

August 1998 · 26 Reads

During design or maintenance, software developers often use intuition, rather than an objective set of criteria, to determine or recapture the design structure of a software system. A decision process based on intuition alone can miss alternative design options that are easier to implement, test, maintain, and reuse. The concept of design-level cohesion can provide both visual and quantitative guidance for comparing alternative software designs. The visual support can supplement human intuition; an ordinal design-level cohesion measure provides objective criteria for comparing alternative design structures. The process for visualizing and quantifying design-level cohesion can be readily automated and can be used to re-engineer software.

Academic/Industrial collaboration in a Postgraduate MSc Course in Software Engineering

November 1989 · 14 Reads

This paper outlines the organization of an MSc in Software Engineering that has been set up as a specialist conversion course for graduates who have had some experience of computer programming. The most distinctive feature of the program is that this degree involves the participation of an industrial partner in providing some of the teaching and a period of industrial placement. Our experiences with the academic and practical aspects of such a structure are included. (This paper is an extension of an earlier report [1]; as we did then, we should explain to readers in the U.S.A. that the British tend to use the term "course" when referring to both a degree programme and a course unit.)

Figure 1: Example of an elliptic curve over the real numbers visualizing the point addition. 
Figure 2: Arithmetic hierarchy
Figure 3: VHDL Generator
Figure 4: Generic architecture of the CryptoProcessor
FPGA based hardware acceleration for elliptic curve public key cryptosystems

March 2004 · 491 Reads

This paper addresses public key cryptosystems based on elliptic curves, which are aimed at high-performance digital signature schemes. Elliptic curve algorithms are characterized by the fact that one can work with considerably shorter keys compared to the RSA approach at the same level of security. A general and highly efficient method for mapping the most time-critical operations to a configurable co-processor is proposed. By means of real-time measurements, the resulting performance values are compared to previously published state-of-the-art hardware implementations. A generator-based approach is advocated for this purpose, which supports application-specific co-processor configurations in a flexible and straightforward way. Such a configurable CryptoProcessor has been integrated into a Java-based digital signature environment, resulting in a considerable increase of its performance. The outlined approach combines in a unique way the advantages of mapping functionality to either hardware or software, and it results in high-speed cryptosystems which are both portable and easy to update according to future security requirements.
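Since Figure 1 illustrates elliptic-curve point addition, the sketch below shows the group operation such a co-processor accelerates: affine point addition/doubling and double-and-add scalar multiplication over a small prime field. The toy curve is illustrative; real designs use standardized curves and usually projective coordinates to avoid the modular inversion.

```python
# Minimal sketch of affine point arithmetic on y^2 = x^3 + ax + b over GF(p).
p, a, b = 97, 2, 3                      # toy curve, illustrative only

def ec_add(P, Q):
    if P is None: return Q              # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):                       # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

P = (3, 6)                              # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(ec_mul(5, P))
```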

Empirical analysis of biometric technology adoption and acceptance in Botswana

September 2009 · 374 Reads

There has been a slight surge in the study of technology adoption in developing countries. However, little attention has been paid to the adoption of biometric security systems. This paper reports a study that analyzed the adoption of biometric technology in a developing country from an institutional point of view. The results show that job positions (managerial and operational) could influence perceptions of innovation characteristics (especially ease of use and usefulness) in the decision to adopt biometrics. However, the unified organizational analyses indicate that ease of use, communication, size and type of organizations have significant impacts on the decision to adopt biometrics.

Cryptanalysis of Hwang–Yang scheme for controlling access in large partially ordered hierarchies

February 2005 · 15 Reads

Recently, Hwang and Yang [J. Syst. Software 67 (2003) 99] proposed a cryptographic key assignment scheme for access control in large partially ordered hierarchies. In this paper, we show that their scheme is insecure against the collusion attack whereby some security classes conspire to derive the secret keys of other leaf security classes, which is not allowed according to the scheme.

Pinned demand paging based on the access frequency of video files in video servers

December 2005 · 89 Reads

In this paper, we present an ameliorative demand-paging algorithm called PDPAF (pinned demand paging based on the access frequency of video files) to efficiently utilize the limited buffer space in a VOD (video-on-demand) server. It avoids being constrained by the disk bandwidth and raises the hit ratio of video pages in the buffer, thereby increasing the total number of concurrent clients. Furthermore, we also propose an admission control algorithm to decide whether a new request can be admitted. Finally, we conduct extensive experiments to compare PDPAF with other algorithms on the average waiting time and the maximal number of concurrent requests, and the simulation results validate the superiority of our approach.

An information flow control model for C applications based on access control lists

October 2005 · 31 Reads

Access control within an application during its execution prevents information leakage. The prevention can be achieved through information flow control. Many information flow control models have been developed, which may be based on discretionary access control (DAC), mandatory access control (MAC), label-based approaches, or role-based access control (RBAC). Most existing models are for object-oriented systems. Since the procedural C language is still heavily used, offering a model to control information flows for C applications should be fruitful. Although we identified information flow control models that can be applied to procedural languages, they do not offer the features we need. We thus developed a model to control information flows for C applications. Our model is based on access control lists (ACLs) and is named CACL. It offers the following features: (a) controlling both read and write access, (b) preventing indirect information leakage, (c) detailing the control granularity to variables, (d) avoiding improper function calls, (e) controlling function calls through argument sensitivity, and (f) preventing change of an application when the access rights of the application's real-world users change. This paper presents CACL.

Streaming extensibility in the Modify-on-Access file system

November 2001 · 24 Reads

This paper presents the Modify-on-Access (Mona) file system that provides extensibility through transformations applied to streams of data. Mona overcomes two limitations of prior extensible file systems. First, the Mona file system offers two levels of extensions (kernel and user) that share a common interface. It allows performance-critical operations to execute with modest overhead in the kernel and untrusted or more complex operations to safely execute in user space. Second, Mona enables fine-grained extensions which allow an application to customize the file system at runtime. This paper discusses the implementation of the Mona file system. Our implementation adds a modest overhead of 0–3% to file system operations. This overhead has even less effect on net system performance for several benchmarks. Moreover, this paper describes applications that achieve a 4–5 times speedup using custom transformations. This paper also describes several transformations that increase functionality, among them the ftp transformation, which allows a user to browse a remote file as though it were local, and the command transformation, which invokes an arbitrary executable (even a shell script) on a data stream.

Information access tools for software reuse

September 1995 · 120 Reads

Software reuse has long been touted as an effective means to develop software products. But reuse technologies for software have not lived up to expectations. Among the barriers are high costs of building software repositories and the need for effective tools to help designers locate reusable software. Although many design-for-reuse and software classification efforts have been proposed, these methods are cost-intensive and cannot effectively take advantage of large stores of design artifacts that many development organizations have accumulated. Methods are needed that take advantage of these valuable resources in a cost-effective manner. This article describes an approach to the design of tools to help software designers build repositories of software components and locate potentially reusable software in those repositories. The approach is investigated with a retrieval tool, named CodeFinder, which supports the process of retrieving software components when information needs are ill-defined and users are not familiar with vocabulary used in the repository. CodeFinder uses an innovative integration of tools for the incremental refinement of queries and a retrieval mechanism that finds information associatively related to a query. Empirical evaluation of CodeFinder has demonstrated the effectiveness of the approach.

Controlling access in large partially ordered hierarchies using cryptographic keys

August 2003 · 45 Reads

The problem of access control in a hierarchy is present in many application areas. Since computing resources have grown tremendously, access control is more frequently required in areas such as computer networks, database management systems, and operating systems. Many schemes based on cryptography have been proposed to solve this problem. However, previous schemes need large values associated with each security class. In this paper, we propose a new scheme to solve this problem achieving the following two goals. One is that the number of keys is reduced without affecting the security of the system. The other goal is that when a security class is added to the system, we need only update a few keys of the related security classes with simple operations.
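As a generic illustration of key assignment in a hierarchy (not the scheme proposed in the paper), the sketch below derives each child class's key from its parent's key with a one-way hash, so keys can be derived downward in the partial order but not upward or sideways. All identifiers and keys here are hypothetical.

```python
# Generic illustration of hierarchical key derivation (NOT the paper's scheme):
# a parent class derives a child's key by hashing its own key with the child's
# identifier; the one-way hash prevents derivation in the other direction.
import hashlib

def derive(parent_key: bytes, child_id: str) -> bytes:
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# A small hierarchy: root -> {dept_a, dept_b}, dept_a -> {team_a1}
root_key = b"\x01" * 32                 # hypothetical top-level secret
dept_a = derive(root_key, "dept_a")
dept_b = derive(root_key, "dept_b")
team_a1 = derive(dept_a, "team_a1")

# The root can recompute team_a1's key (root -> dept_a -> team_a1) ...
assert derive(derive(root_key, "dept_a"), "team_a1") == team_a1
# ... but dept_b cannot: it lacks dept_a's key, and SHA-256 is one-way, so keys
# cannot be derived upward or across branches of the hierarchy.
```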

DPE/PAC: Decentralized process engine with product access control

June 2005 · 35 Reads

This paper proposes a process engine called DPE/PAC (decentralized process engine with product access control). It can be embedded in a PSEE (process-centered software engineering environment) to decentralize the PSEE. In a decentralized PSEE, every site can enact process programs and therefore the workload of the PSEE's sites is balanced (i.e., no site will become a bottleneck). Moreover, when a site is down, other sites can still enact process programs. Therefore, a decentralized PSEE overcomes the following drawbacks of traditional client/server PSEEs: (a) the server may become a bottleneck and (b) when the server is down, the entire process must be suspended. In addition to decentralizing PSEEs, DPE/PAC offers an additional function, which is ensuring secure product access (including the enforcement of separation-of-duty constraints). This function is essential for a software process that develops sensitive software systems.

A model of enforcement relationships among database access control dependencies

September 1983 · 8 Reads

The various kinds of access decision dependencies within a predicate-based model of database protection are classified according to cost of enforcement. Petri nets and some useful extensions are described. Extended Petri nets are used to model the flow of messages and data during protection enforcement within MULTISAFE, a multimodule system architecture for secure database management. The model demonstrates that some of the stated criteria for security are met within MULTISAFE. Of particular interest is the modeling of data dependent access conditions with predicates at Petri net transitions. Tokens in the net carry the intermodule messages of MULTISAFE. Login, authorization, and database requests are traced through the model as examples. The evaluation of complex access condition predicates is described for the enforcement process. Queues of data and queues of access condition predicates are cycled through the net so that each data record is checked against each predicate. Petri nets are shown to be a useful modeling tool for database security.

An efficient key-management scheme for hierarchical access control based on elliptic curve cryptosystem

August 2006 · 34 Reads

The elliptic curve cryptosystem is considered to be the strongest public-key cryptosystem known today and is preferred over the RSA cryptosystem because the key length for secure RSA has increased over recent years, and this has put a heavier processing load on its applications. An efficient key management and derivation scheme based on the elliptic curve cryptosystem is proposed in this paper to solve the hierarchical access control problem. Each class in the hierarchy is allowed to select its own secret key. The problem of efficiently adding or deleting classes can be solved without regenerating keys for all the users in the hierarchy, as was the case in previous schemes. The scheme is shown to be much more efficient and flexible than previously proposed schemes.

Embedding role-based access control model in object-oriented systems to protect privacy

April 2004 · 49 Reads

The role-based access control (RBAC) approach has been recognized as useful in information security and many RBAC models have been proposed. Current RBAC researches focus on developing new models or enhancing existing models. In our research, we developed an RBAC model that can be embedded in object-oriented systems to control information flows (i.e. to protect privacy) within the systems. This paper proposes the model. The model, which is named OORBAC, is an extension of RBAC96. OORBAC offers the following features: (a) precisely control information flows among objects, (b) control method invocation through argument sensitivity, (c) allow purpose-oriented method invocation and prevent leakage within an object, (d) precisely control write access, and (e) avoid Trojan horses. We implemented a prototype for OORBAC using JAVA as the target language. The implementation resulted in a language named OORBACL, which can be used to implement secure applications. We evaluated OORBAC using experiments. The evaluation results are also shown in this paper.

An evaluation of network access protocols for distributed real-time database systems

April 1997 · 7 Reads

The results of a considerable number of works addressing various features of real-time database systems (RTDBSs) have recently appeared in the literature. An issue that has not received much attention yet is the performance of the communication network configuration in a distributed RTDBS. In this article, we examine the impact of underlying network architecture on the performance of a distributed RTDBS. In particular, we evaluate the real-time performance of distributed transactions in terms of the fraction of satisfied deadlines under various network access strategies. We also critically examine the common assumption of constant network delay for each communication message exchanged in a distributed RTDBS.

Table 4
Benchmark program description
Summary of benchmark execution
Local variable access behavior of a hardware-translation based Java virtual machine

November 2008 · 32 Reads

Hardware bytecode translation is a technique to improve the performance of the Java virtual machine (JVM), especially on portable devices for which the overhead of dynamic compilation is significant. However, since the translation is done on a single-bytecode basis, a naive implementation of the JVM generates frequent memory accesses for local variables, which can be not only a performance bottleneck but also an obstacle to instruction folding. A solution to this problem is to add a small register file to the data path of the microprocessor that is dedicated to storing local variables. However, the effectiveness of such a local variable register file depends on the size and the local variable access behavior of the applications. In this paper, we analyze the local variable access behavior of various Java applications. In particular, we investigate the fraction of local variable accesses that are covered by a register file of varying size, which determines the chip area overhead and the operation speed. We also evaluate the effectiveness of the sliding register window for parameter passing in the context of the JVM, and of on-the-fly optimization of the local-variable-to-register-file mapping. With two types of exceptions, a 16-entry register file achieves coverages of up to 98%. The first type of exception is represented by the SAXON XSLT processor, for which the effect of cold misses is significant. Adding the sliding window feature to the register file for parameter passing turns 6.2–13.3% of total accesses from misses into hits for SAXON with XSLTMark. The second type of exception is represented by the FFT, which accesses more than 16 local variables in most method invocations. In this case, on-the-fly profiling is effective. The hit ratio of a 16-entry register file for the FFT is increased from 44% to 83% by an array of 8-bit counters.
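The coverage question studied above can be sketched as follows: given a trace of local-variable slot indices accessed by bytecodes, count the fraction that fall within an n-entry register file caching the first n slots. The trace is invented, and the sliding-window and on-the-fly remapping variants are not modeled.

```python
# Minimal sketch of register-file coverage of local-variable accesses.
from collections import Counter

def register_file_coverage(access_trace, n_entries=16):
    hits = sum(1 for slot in access_trace if slot < n_entries)
    return hits / len(access_trace)

# Hypothetical trace of local-variable slot indices touched by iload/istore etc.
trace = [0, 1, 2, 1, 0, 3, 17, 2, 1, 20, 0, 4, 5, 1, 2, 0]
print(f"16-entry coverage: {register_file_coverage(trace):.0%}")   # 88%
print(Counter(slot >= 16 for slot in trace))                        # misses vs. hits
```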

Fault coverage of Constrained Random Test Selection for access control: A formal analysis

December 2010 · 34 Reads

A probabilistic model of fault coverage is presented. This model is used to analyze the variation in fault detection effectiveness associated with the use of two test selection strategies: heuristics-based and Constrained Random Test Selection (CRTS). These strategies arise in the context of conformance test suite generation for Role Based Access Control (RBAC) systems. The proposed model uses a coverage-matrix-based approach for fault coverage analysis. First, two boundary instances of fault distribution are considered and then generalized. The fault coverages of the test suites generated using the heuristics-based and the CRTS strategies, applied to a sample RBAC policy, are then compared through simulation. Finally, the simulation results are correlated with a case study.

On accessing data in high-dimensional spaces: A comparative study of three space partitioning strategies

September 2004 · 75 Reads

While experience shows that contemporary multi-dimensional access methods perform poorly in high-dimensional spaces, little is known about the underlying causes of this important problem. One of the factors that has a profound effect on the performance of a multi-dimensional structure in high-dimensional situations is its space partitioning strategy. This paper investigates the partitioning strategies of KDB-trees, the Pyramid Technique, and a new point access method called the Θs Technique. The paper reveals important dimensionality problems associated with these strategies and shows how each strategy affects the retrieval performance across a range of spaces with varying dimensionalities. The Pyramid Technique, which is frequently regarded as the state-of-the-art access method for high-dimensional data, suffers from numerous problems that become particularly severe with highly skewed data in heavily sparse spaces. Although the partitioning strategy of KDB-trees incurs several problems in high-dimensional spaces, it exhibits a remarkable adaptability to the changing data distributions. However, the experimental evidence gathered on both simulated and real data sets shows that the Θs Technique generally outperforms the other two schemes in high-dimensional spaces, usually by a significant margin.

Introduction of accounting capabilities in future service architectures

November 2002 · 14 Reads

This paper proposes enhancements to the accounting support capabilities of legacy, standardised service architectures. The TINA service architecture is used as a reference, even though our approach is applicable to other models as well. The key points in this paper are the following. First, the definition of minimal extensions to the standard service architecture components, so as to enable them to offer accounting information. Second, the definition of the interface among the extended service architecture components and the new components that will undertake the accounting functionality. Third, the definition of the functionality of the new components that will undertake the accounting functionality. New components are defined so as to minimally impact the specified service architecture.

Predictive accuracy comparison of fuzzy models for software development effort of small programs

June 2008 · 41 Reads

Regression analysis to generate predictive equations for software development effort estimation has recently been complemented by analyses using less common methods such as fuzzy logic models. On the other hand, unless engineers have the capabilities provided by personal training, they cannot properly support their teams or consistently and reliably produce quality products. In this paper, an investigation aimed at comparing personal Fuzzy Logic Models (FLM) with a Linear Regression Model (LRM) is presented. The evaluation criteria were based mainly on the magnitude of error relative to the estimate (MER) as well as on the mean MER (MMER). One hundred five small programs were developed by thirty programmers. From these programs, three FLM were generated to estimate the effort of developing twenty programs by seven programmers. Both verification and validation of the models were performed. Results show slightly better predictive accuracy for the FLM than for the LRM when estimating development effort at the personal level for small programs.
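For clarity, the evaluation criteria named above (MER and MMER) are computed in the sketch below on made-up actual and predicted effort values.

```python
# Minimal sketch of MER (magnitude of error relative to the estimate) and MMER.
def mer(actual, predicted):
    return abs(actual - predicted) / predicted       # error relative to the estimate

def mmer(actuals, predictions):
    return sum(mer(a, p) for a, p in zip(actuals, predictions)) / len(actuals)

actual_effort    = [12.0, 30.0, 7.5, 22.0]   # hypothetical person-hours
predicted_effort = [10.0, 33.0, 9.0, 20.0]

print(f"MMER = {mmer(actual_effort, predicted_effort):.3f}")
# per-case MERs: 0.200, 0.091, 0.167, 0.100 -> MMER ~ 0.139
```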

How accurate should early design stage power/performance tools be? A case study with statistical simulation

September 2004 · 9 Reads

To cope with the widening design gap, the ever increasing impact of technology, reflected in increased interconnect delay and power consumption, and the time-consuming simulations needed to define the architecture of a microprocessor, computer engineers need techniques to explore the design space efficiently in an early design stage. These techniques should be able to identify a region of interest with desirable characteristics in terms of performance, power consumption and cycle time. In addition, they should be fast since the design space is huge and the design time is limited. In this paper, we study how accurate early design stage techniques should be to make correct design decisions. In this analysis we focus on relative accuracy which is more important than absolute accuracy at the earliest stages of the design flow. As a case study we demonstrate that statistical simulation is capable of making viable microprocessor design decisions efficiently in early stages of a microprocessor design while considering performance, power consumption and cycle time.

Achieving requirements reuse: A domain-specific approach from avionics

September 1997 · 14 Reads

The reuse of requirements has received little attention in the literature, especially in relation to genuine industrial experience. This paper describes the efforts being made to promote the reuse of requirements for engine control systems at Rolls-Smith Engine Controls Limited (RoSEC). Our approach is based on existing domain analysis techniques, and we relate our experience of applying these techniques to develop a core set of generic requirements. A forms-based tool that we have prototyped which supports the reuse of requirements during the development of new systems is also presented, and is compared with existing reuse tools. The paper concludes with a discussion on the possible impact of this work on RoSEC's current requirements engineering process.
