Recently, adaptive random testing (ART) has been introduced to improve the fault-detection effectiveness of random testing for non-point types of failure patterns. However, ART requires additional computations to ensure an even spread of test cases, which may render ART less cost-effective than random testing. This paper presents a new technique, namely mirror ART, to reduce these computations. It is an integration of the technique of mirroring and ART. Our simulation results clearly show that mirror ART does improve the cost-effectiveness of ART.
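The idea behind mirror ART can be sketched in a few lines: run ART only in one source subdomain and translate ("mirror") each generated test case into the remaining subdomains, saving the distance computations there. The sketch below is a minimal, hypothetical illustration over a one-dimensional input domain, using a fixed-size-candidate-set flavor of ART; it is not the authors' implementation.

```python
import random

def fscs_art_next(executed, lo, hi, k=10):
    """Fixed-size-candidate-set ART: pick the candidate farthest
    from all previously executed tests in [lo, hi)."""
    if not executed:
        return random.uniform(lo, hi)
    candidates = [random.uniform(lo, hi) for _ in range(k)]
    return max(candidates,
               key=lambda c: min(abs(c - e) for e in executed))

def mirror_art(n_tests, n_partitions=2):
    """Generate n_tests over [0, 1): run ART only in the first
    subdomain, then translate ('mirror') each test into the others,
    avoiding ART's distance computations in the mirror subdomains."""
    width = 1.0 / n_partitions
    source = []          # tests generated in the source subdomain [0, width)
    tests = []
    while len(tests) < n_tests:
        t = fscs_art_next(source, 0.0, width)
        source.append(t)
        for p in range(n_partitions):   # mirror into every partition
            if len(tests) < n_tests:
                tests.append(t + p * width)
    return tests
```

With two partitions, each ART computation yields two test cases, roughly halving the distance calculations relative to plain ART.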
Back-to-back testing involves cross-comparison of all responses obtained from functionally equivalent software components. Whenever a difference is observed, it is investigated and, if necessary, a correction is applied. Events associated with the back-to-back testing process are defined and examined. The process is first modeled assuming failure independence, and then assuming failure correlation. It is shown that multiversion testing involving more than four versions offers rapidly diminishing returns in terms of failure-detection effectiveness, unless additional versions reduce the span of correlated failures. It is also shown that back-to-back testing can remain an efficient way of detecting failures even when the probability of identical and wrong responses from all participating versions is very close to one.
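The cross-comparison at the core of back-to-back testing can be sketched as follows. This is a toy illustration with hypothetical absolute-value versions and a seeded fault; the functions and names are assumptions for demonstration, not drawn from the paper.

```python
def back_to_back(versions, inputs):
    """Run functionally equivalent versions on the same inputs and
    report every input whose responses disagree, for investigation."""
    discrepancies = []
    for x in inputs:
        responses = [v(x) for v in versions]
        if len(set(responses)) > 1:     # at least two versions differ
            discrepancies.append((x, responses))
    return discrepancies

# Hypothetical example: three independently written absolute-value
# routines; version_c contains a seeded fault at x == 0.
version_a = abs
version_b = lambda x: x if x >= 0 else -x
version_c = lambda x: -1 if x == 0 else abs(x)   # seeded fault

found = back_to_back([version_a, version_b, version_c], range(-3, 4))
# The only disagreement is at x == 0, where version_c returns -1.
```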
A path analysis approach to concurrent program testing is proposed. A concurrent path model for modeling the execution behavior of a concurrent program is presented. In the model, an execution of a concurrent program is seen as involving a concurrent path (which comprises the paths of all concurrent tasks), and the tasks' synchronizations are modeled as a concurrent route to traverse the concurrent path involved in the execution. Accordingly, testing is a process of examining the correctness of each concurrent route along all concurrent paths of concurrent programs. On the basis of the model, the test format is defined and a path analysis testing methodology is presented. Also, several coverage criteria, extended from coverage criteria for sequential programs, are proposed. Some practical issues of path analysis testing, namely test path generation, test data generation, and design of the test execution control mechanism, are also discussed.
Pointer analysis is an important part of source code analysis. Many programs that manipulate source code take points-to sets as part of their input. Points-to related data collected from 27 mid-sized C programs (ranging in size from 1168 to 53131 lines of code) is presented. The data shows the relative sizes and the complexities of computing points-to sets. Such data is useful in improving algorithms for the computation of points-to sets, as well as algorithms that make use of this information in other operations. Several uses of the data are discussed.
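One common way points-to sets are computed is inclusion-based (Andersen-style) analysis, which iterates set-inclusion constraints to a fixpoint. The toy sketch below handles only two statement forms and is purely illustrative; the paper does not prescribe this particular algorithm.

```python
def points_to(statements):
    """Flow-insensitive, inclusion-based points-to analysis sketch for
    two statement forms: ('addr', p, x) for p = &x and ('copy', p, q)
    for p = q. Iterates the constraints to a fixpoint."""
    pts = {}
    changed = True
    while changed:
        changed = False
        for stmt in statements:
            if stmt[0] == 'addr':               # p = &x
                _, p, x = stmt
                if x not in pts.setdefault(p, set()):
                    pts[p].add(x)
                    changed = True
            elif stmt[0] == 'copy':             # p = q: pts(q) ⊆ pts(p)
                _, p, q = stmt
                new = pts.get(q, set()) - pts.setdefault(p, set())
                if new:
                    pts[p] |= new
                    changed = True
    return pts

# p = &x; p = &y; q = p  — q inherits everything p may point to.
prog = [('addr', 'p', 'x'), ('addr', 'p', 'y'), ('copy', 'q', 'p')]
result = points_to(prog)
```

The sizes of the resulting sets (here, both `p` and `q` may point to two variables) are exactly the kind of quantity the paper's measurements characterize across real C programs.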
Data are used in the delivery of many products and services, and so data quality is an important component of customers' perceptions of the quality of these products and services. The paper describes efforts initiated by AT&T to control and improve the quality of data it uses to operate its worldwide intelligent network, to conduct its day-to-day operations, and to manage its businesses smoothly. These efforts stem from the observation that it is extremely difficult to fix faulty data once they are in a database. Therefore attention must be directed at processes that introduce, modify, and transform data. Only when these processes have been put into a state of statistical control can sustainable improvements in data quality be expected. The report describes AT&T's four-part data quality improvement program:
• Develop the technical foundations for understanding and measuring data quality. Four objective dimensions of data quality (accuracy, completeness, consistency, and currency) are defined.
• Extend and apply process management techniques to information management.
• Extend and apply methods of statistical process control. In particular, data tracking, a method to evaluate quantitatively the processes by which data are introduced into a database, is described. Data tracking provides ongoing process control and helps identify improvement opportunities.
• Develop and apply methods that help ensure data quality in a data-processing environment. Collectively, such methods are referred to as data engineering and are particularly useful in ensuring data consistency.
Taken together, these efforts form the basis of a comprehensive, overall approach to improving data quality and sustaining the improvements.
It is suggested that the multi-perspective nature of information systems (IS) quality, representing the manifold interest groups involved, is the very reason why attempts to develop any general-purpose quality model for information systems tend to be fruitless. This paper develops the concept of IS work quality by utilising the existing SOLE (Software Library Evolution) quality model by Eriksson and Torn (1991), and builds upon the elements which particularly address the quality of IS work practices by discussing the management issues that affect those elements and are necessary to assure and maintain the quality of systems evolution and use. The model of information systems work quality proposed in this article provides a framework that allows the consideration of different work contexts and the specific needs of an organisation when evaluating the quality of the information system at hand and its benefits to the organisation. The IS work quality construct broadens the software quality concepts as it caters for the diverse needs of organisations and work contexts.
A human factors (HF) evaluation, carried out as part of the development of a set of computer-aided software engineering (CASE) tools, is presented and is used as an example of the processes and products of typical HF evaluation practice. The role of HF evaluation as a part of software quality assurance is identified, and typical current practice of HF evaluation is characterized. The details of the particular evaluation are then reported. First, its processes are described; these are determined by relating features of the system under development to the desired focus, actual context, and possible methods of the evaluation. Then the products of the evaluation are described; these products or outcomes are formulated as the user-computer interaction difficulties that were identified, grouped into three types (termed task, presentation, and device difficulties). The characteristics of each type of difficulty are discussed, in terms of their ease of identification, their generality across application domains, the HF knowledge that they draw on, and their relationship to redesign. The conclusion considers the usefulness of the evaluation, the inadequacies of system development practice it implies, and how to incorporate HF evaluation into an improved system development practice.
In May 1989 the UK Ministry of Defence issued Interim Defence Standard 00–55 ‘Requirements for the procurement of safety critical software in defence equipment’ for comment. The standard sets stiff requirements on the development of safety-critical software in the defence arena. The paper looks at the scope of the new standard and examines its methodological implications, giving commentary on the standard's requirements.
Context: Diagnosing processes in a small company requires process assessment practices which give qualitative and quantitative results; these should offer an overall view of the process capability. The purpose is to obtain relevant information about the running of processes, for use in their control and improvement. However, small organizations have some problems in running process assessment, due to their specific characteristics and limitations.
Objective: This paper presents a methodology for assessing software processes which assists the activity of software process diagnosis in small organizations. There is an attempt to address issues such as the fact that: (i) process assessment is expensive and typically requires major company resources, and (ii) many light assessment methods do not provide information that is detailed enough for diagnosing and improving processes.
Method: To achieve all this, the METvalCOMPETISOFT assessment methodology was developed. This methodology: (i) incorporates the strategy of internal assessments known as rapid assessment, meaning that these assessments do not take up too much time or use an excessive quantity of resources, nor are they too rigorous, and (ii) meets all the requirements described in the literature for an assessment proposal which is customized to the typical features of small companies.
Results: This paper also describes the experience of applying this methodology in eight small software organizations that took part in the COMPETISOFT project. The results obtained show that this approach allows us to obtain reliable information about the strengths and weaknesses of software processes, along with information to companies on opportunities for improvement.
Conclusion: The assessment methodology proposed sets out the elements needed to assist with diagnosing the process in small organizations step-by-step, while seeking to make its application economically feasible in terms of resources and time. From this initial application it may be seen that the assessment methodology can be useful, practical and suitable for diagnosing processes in this type of organization.
The objective of this work is to trace the financial profile of firms in the software industry between 1980 and 1994, using data from COMPUSTAT. Our results are useful both to academics, who strive to link theory to empirical observation, and to practitioners trying to better understand the environment in which they operate. Our analysis suggests that (1) the software industry has been steadily expanding over the sample period, (2) the relative market power of the industry leaders has remained fairly stable, (3) the median firm operating in the industry has become smaller through time, (4) firms have been spending increasingly more on R&D and less on capital investment through time, (5) profitability was declining over the first half of the sample period, stabilizing in the second half, and (6) the risk of bankruptcy for the median firm has similarly declined over the sample period.
The Standish Group reported in their 1994 CHAOS report that the average cost overrun of software projects was as high as 189%. This figure for cost overrun is referred to frequently by scientific researchers, software process improvement consultants, and government advisors. In this paper, we review the validity of the Standish Group's 1994 cost overrun results. Our review is based on a comparison of the 189% cost overrun figure with the cost overrun figures reported in other cost estimation surveys, and an examination of the Standish Group's survey design and analysis methods. We find that the figure reported by the Standish Group is much higher than those reported in similar estimation surveys and that there may be severe problems with the survey design and methods of analysis, e.g. the population sampling method may be strongly biased towards ‘failure projects’. We conclude that the figure of 189% for cost overruns is probably much too high to represent typical software projects in the 1990s and that a continued use of that figure as a reference point for estimation accuracy may lead to poor decision making and hinder progress in estimation practices.
Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 1999. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to actually show the most cited articles, and second, to invite the authors of the most cited articles in 1999 to contribute to a special issue of Information and Software Technology. Five invited authors have accepted the invitation and their articles are appearing in this special issue. Moreover, the research topics and methods of the most cited articles in 1999 are compared with those from the most cited articles in 1994 to provide a picture of similarities and differences between the years.
Nowadays, UML is the de facto standard for object-oriented analysis and design. Unfortunately, the deficiency of its dynamic semantics limits the possibility of early specification analysis. UML 2.0 attempts to make this semantics more precise and complete, but it remains informal and still lacks tools for automatic validation. The main purpose of this study is to automate the formal validation, according to a value-oriented approach, of the behavior of systems expressed in UML. The marriage of Petri nets with temporal logics seems a suitable formalism for translating and then validating UML state-based models. The contributions of the paper are threefold. First, we consider how UML 2.0 activity partitions can be transformed into Object Petri Nets to formalize the object dynamics in an object-oriented context. Second, we develop an approach based on the object and sequence diagram information to initialize the derived Petri nets in terms of objects and events. Finally, to thoroughly verify whether the UML model meets the system's required properties, we suggest using the OCL invariants, exploiting their association end constructs. The verification is performed on a predicate/transition net explored by model checking. A case study is given to illustrate this methodology throughout the paper.
Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 2000. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed by the research community as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to identify the most cited articles, and second, to invite the authors of the most cited articles in 2000 to contribute to a special issue of Information and Software Technology. Five authors have accepted the invitation and their articles appear in this special issue. Moreover, an analysis of the most cited software engineering journal articles in the last 20 years is presented. The presentation includes both the most cited articles in absolute numbers and the most cited articles when looking at the average number of citations per year. The article describing the SPIN model checker by G.J. Holzmann published in 1997 is first on both these lists.
Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 2001. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed by the research community as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to identify the most cited articles, and second, to invite the authors of the most cited articles in 2001 to contribute to a special section of Information and Software Technology. Three authors have accepted the invitation and their articles appear in this special section. Moreover, an analysis has been conducted regarding which authors are most productive in terms of software engineering journal publications. The latter analysis focuses on the publications in the last 20 years, which is intended as a complement to last year’s analysis focusing on the most cited articles in the last 20 years [C. Wohlin, An Analysis of the Most Cited Articles in Software Engineering Journals – 2007, Information and Software Technology 49 (1) 2–11]. The most productive author in the last 20 years is Professor Victor Basili.
While practically nobody would dispute that there is a Year 2000 (Y2K) problem with software, and by extension with computers and communications at large, there is a wide range of opinions on how critical that problem will be. Opinions vary widely, and there is no way to check how reasonable they are because there is no precedent and no facts against which to gauge them. The focal point of this article is litigation, which will, in all likelihood, go well beyond the year 2000. This is the first event for which industrial companies and their lawyers, as well as insurers, bankers and other professionals, know in advance when it is going to happen but not what will happen and what might be its most likely magnitude. The article reviews the current state of preparedness in terms of Y2K and offers some suggestions about what might take place after January 1, 2000.
In this paper we propose a virtual office environment that integrates natural communication and secure private space. The features of this system are the following. (1) This system has a virtual shared room based on the idea of the “shared room metaphor”, and 3D graphics on an SGI workstation are used for this system. It uses Ethernet media (i.e. real-time audio/video streams). (2) This system implements the field of view of a human by using our “around view” technique. This provides more natural communication between members. (3) “Sound effects” are used to help users feel the presence of other members. For instance, members hear the sound of a door opening when someone logs into our system and the sound of footsteps when someone is walking around our virtual room. (4) At times our system limits the flow of awareness information. A person concentrating on his/her work may not want to perceive excessive awareness of others. To support such situations, we define an “awareness space” which restricts the field where other members' awareness is transmitted. The awareness space changes in size with the degree of concentration, which is measured through two factors: the movement of a chair and the frequency of keyboard typing. (5) “Headphone metaphor”: a picture of a headphone is attached above a person's image and changes color depending on the degree of concentration. This enables other members to recognize his/her state and can be a criterion as to whether he/she is available to communicate. (6) In the virtual space, users are represented as avatars built of 3D polygons and still pictures. The avatars change shape automatically according to the users' actions.
For many companies, business processes have become the focal point of attention. As a result, many tools have been developed for business process engineering and the actual deployment of business processes. Typical examples of these tools are Business Process Reengineering (BPR) tools, Enterprise Resource Planning (ERP) systems, and Workflow Management (WFM) systems. Some of the leading products, e.g. SAP R/3 (ERP/WFM) and ARIS (BPR), use Event-driven Process Chains (EPCs) to model business processes. Although the EPCs have become a widespread process modeling technique, they suffer from a serious drawback: neither the syntax nor the semantics of an EPC are well defined. In this paper, this problem is tackled by mapping EPCs (without connectors of type ∨) onto Petri nets. The Petri nets have formal semantics and provide an abundance of analysis techniques. As a result, the approach presented in this paper gives formal semantics to EPCs. Moreover, many analysis techniques are available for EPCs. To illustrate the approach, it is shown that the correctness of an EPC can be checked in polynomial time by using Petri-net-based analysis techniques.
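The formal semantics that Petri nets contribute is the standard token game: a transition is enabled when its input places hold enough tokens, and firing it moves tokens to the output places. The sketch below illustrates only that firing rule on a hypothetical two-step event/function chain; the actual EPC-to-Petri-net mapping in the paper is considerably richer.

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds
    at least the required number of tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places and
    produce tokens on output places (standard Petri net semantics)."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical chain 'start' -> receive -> 'received' -> handle -> 'done',
# mirroring an EPC's alternation of events and functions.
transitions = {
    'receive': ({'start': 1},    {'received': 1}),
    'handle':  ({'received': 1}, {'done': 1}),
}
m = {'start': 1}
for name, (pre, post) in transitions.items():
    if enabled(m, pre):
        m = fire(m, pre, post)
```

Because this semantics is fully mechanical, reachability and soundness questions about the translated model become ordinary Petri net analysis problems.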
In this paper, we examine the state of software engineering (SE) research from the point of view of the following research questions:
1. What topics do SE researchers address?
2. What research approaches do SE researchers use?
3. What research methods do SE researchers use?
4. On what reference disciplines does SE research depend?
5. At what levels of analysis do SE researchers conduct research?
To answer those questions, we examined 369 papers in six leading research journals in the SE field, answering those research questions for each paper. From that examination, we conclude that SE research is diverse regarding topic, narrow regarding research approach and method, inwardly focused regarding reference discipline, and technically focused (as opposed to behaviorally focused) regarding level of analysis. We pass no judgment on the SE field as a result of these findings. Instead, we present them as groundwork for future SE research efforts.
The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.
Context: Business processes are an important source for the engineering of customized software systems and are constantly gaining attention in the area of software engineering as well as in the area of information and system security. While the need to integrate processes and role-based access control (RBAC) models has been repeatedly identified in research and practice, standard process modeling languages do not provide corresponding language elements.
Objective: In this paper, we are concerned with the definition of an integrated approach for modeling processes and process-related RBAC models – including roles, role hierarchies, statically and dynamically mutual exclusive tasks, as well as binding of duty constraints on tasks.
Method: We specify a formal metamodel for process-related RBAC models. Based on this formal model, we define a domain-specific extension for a standard modeling language.
Results: Our formal metamodel is generic and can be used to extend arbitrary process modeling languages. To demonstrate our approach, we present a corresponding extension for UML2 activity models. The name of our extension is Business Activities. Moreover, we implemented a library and runtime engine that can manage Business Activity runtime models and enforce the different policies and constraints in a software system.
Conclusion: The definition of process-related RBAC models at the modeling level is an important prerequisite for the thorough implementation and enforcement of corresponding policies and constraints in a software system. We identified the need for modeling support of process-related RBAC models from our experience in real-world role engineering projects and case studies. The Business Activities approach presented in this paper is successfully applied in role engineering projects.
Background: Software Process Improvement (SPI) is intended to improve software engineering, but can only be effective if used. To improve SPI’s uptake, we should understand why organizations adopt SPI. CMM-based SPI approaches are widely known and studied. Objective: We investigated why organizations adopt CMM-based SPI approaches, and how these motivations relate to organizations’ size. Method: We performed a systematic review, examining reasons reported in more than forty primary studies. Results: Reasons usually related to product quality and project performance, and less commonly, to process. Organizations reported customer reasons infrequently and employee reasons very rarely. We could not show that reasons related to size. Conclusion: Despite its origins in helping to address customer-related issues for the USAF, CMM-based SPI has mostly been adopted to help organizations improve project performance and product quality issues. This reinforces a view that the goal of SPI is not to improve process per se, but instead to provide business benefits.
Business process modeling is heavily applied in practice, but important quality issues have not been addressed thoroughly by research. A notorious problem is the low level of modeling competence that many casual modelers in process documentation projects have. Existing approaches towards model quality might be of benefit, but they suffer from at least one of the following problems. On the one hand, frameworks like SEQUAL and the Guidelines of Modeling are too abstract to be applicable for novices and non-experts in practice. On the other hand, there are collections of pragmatic hints that lack a sound research foundation. In this paper, we analyze existing research on relationships between model structure on the one hand and error probability and understanding on the other hand. As a synthesis we propose a set of seven process modeling guidelines (7PMG). Each of these guidelines builds on strong empirical insights, yet they are formulated to be intuitive to practitioners. Furthermore, we analyze how the guidelines are prioritized by industry experts. In this regard, the seven guidelines have the potential to serve as an important tool of knowledge transfer from academia into modeling practice.
Internal quality system audits are a compliance requirement of ISO 9001/2. Requirements of the internal quality audit clause define quality audit outputs in terms of audit planning and scheduling, recording of results, and follow-up of audit activities. ISO 10011-1 provides general guidance on the conduct of audits. Where domain-specific guidance is required on the audit, little support is available from ISO and national standards bodies. In particular, there is no freely available checklist of support questions which probe the compliance requirements of ISO 9001/2, or provide industry-specific guidance questions for probing the effectiveness of the implementation of the ISO 9001/2 process clauses. This paper reviews a national project to develop a checklist to probe ISO 9001 requirements for the field of software, and to offer guidance for examining the effectiveness of the implementation of the process clauses in the software domain. It is believed that this project is important for two reasons: first, the ISO 9001 compliance questions in the checklist are generically applicable; and secondly, the structure of the checklist has been devised to be tailorable to a wide range of application domains.
A software quality model acts as a framework for the evaluation of attributes of an application that contribute to the software quality. In this paper, a quality model is presented for evaluation of B2B applications. First, the most well-known quality models are studied, and reasons for using ISO 9126 quality model as the basis are discussed. This model, then, is customized in accordance with special characteristics of B2B applications. The customization is done by extracting the quality factors from web applications and B2B e-commerce applications, weighting these factors from the viewpoints of both developers and end users, and adding them to the model. Finally, as a case study, ISACO portal is evaluated by the proposed model.
Priority inversion is any situation where low priority tasks are served before higher priority tasks. It is recognized as a serious problem for real-time systems. In this paper, we describe the fundamental mechanisms in Ada 95 to reduce uncontrolled priority inversion in real-time scheduling. We implemented the priority inheritance protocol (PIP) in Ada 95 to better illustrate Ada's new usefulness for real-time programming. A detailed discussion of this protocol and other related issues are presented.
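The priority inheritance protocol the paper implements in Ada 95 can be illustrated independently of language: while a task holds a lock, it runs at the highest priority of any task blocked on that lock, which bounds the duration of priority inversion. The Python sketch below is a simplified model with hypothetical `Task` and `InheritanceLock` classes and no real scheduler; it shows only the priority bookkeeping.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority      # assigned priority
        self.active_priority = priority    # may be raised by inheritance

class InheritanceLock:
    """Priority inheritance protocol sketch: the lock holder inherits
    the priority of the highest-priority task blocked on the lock."""
    def __init__(self):
        self.holder = None
    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Caller would block; the holder inherits its priority if higher.
        if task.active_priority > self.holder.active_priority:
            self.holder.active_priority = task.active_priority
        return False
    def release(self):
        # Holder drops back to its base priority on exit.
        self.holder.active_priority = self.holder.base_priority
        self.holder = None

low, high = Task('low', 1), Task('high', 10)
lock = InheritanceLock()
lock.acquire(low)   # low-priority task enters the critical section
lock.acquire(high)  # high-priority task blocks; low now runs at priority 10
```

Without inheritance, a medium-priority task could preempt `low` indefinitely while `high` waits; with it, `low` finishes the critical section at priority 10 and then `high` proceeds.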
A key aspect of resource management is efficient and effective deployment of available resources whenever needed. The issue typically covers two areas: monitoring of resources used by software systems and managing the consumption of resources. A key aspect of each monitoring system is its reconfigurability – the ability of a system to limit the number of resources monitored at a given time to those that are really necessary at any particular moment. The authors of this article propose a fully dynamic and reconfigurable monitoring system based on the concept of Adaptable Aspect-Oriented Programming (AAOP) in which a set of AOP aspects is used to run an application in a manner specified by the adaptability strategy. The model can be used to implement systems that are able to monitor an application and its execution environment and perform actions such as changing the current set of resource management constraints applied to an application if the application/environment conditions change. Any aspect that implements a predefined interface may be used by the AAOP-based monitoring system as a source of information. The system utilizes the concept of dynamic AOP, meaning that the aspects (which are sources of information) may be dynamically enabled/disabled.
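The key mechanism described, dynamically enabling and disabling monitoring aspects without modifying the application, can be sketched with a switchable decorator. This is a hypothetical, minimal stand-in for the AAOP system, not its actual API.

```python
import functools

class MonitoringAspect:
    """Sketch of a dynamically switchable monitoring aspect: the advice
    records calls only while the aspect is enabled, so an adaptability
    strategy can turn monitoring on and off at runtime without
    touching the application code it wraps."""
    def __init__(self):
        self.enabled = False
        self.log = []
    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if self.enabled:                 # advice is active only when enabled
                self.log.append(('call', func.__name__, args))
            return func(*args, **kwargs)
        return wrapper

monitor = MonitoringAspect()

@monitor
def allocate(size):
    return bytearray(size)

allocate(16)              # aspect disabled: nothing recorded
monitor.enabled = True    # strategy switches monitoring on
allocate(32)              # this call is logged
```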
A purely functional description of the Abracadabra protocol and a verification of its correct transmission under appropriate conditions is given. The description is based mainly on algebraic specifications and a special form of stream processing functions also introduced by algebraic specifications. It starts from an informal description and then proceeds to a formal modelling.
The Fortran Abstract Data (FAD) project encourages and enforces the encapsulation of generic abstract data types (ADTs) in a Fortran programming environment, using a preprocessor, database, and object library. The FAD database contains information about ADT subroutines and functions; the FAD library contains the compiled bodies of the ADT subprograms. Once ADT implementations have been installed in the FAD database and library, Fortran programmers can declare and use ADT variables. The FAD preprocessor, use ADT, translates a user's ADT references into traditional Fortran and prohibits illegal use of ADT variables. Thereafter, the standard Fortran compiler and linker-loader are used with the FAD Fortran library and other libraries. The FAD database can be created by experienced Fortran implementors with the FAD tool make ADT. A parser-generator system is employed to generate the parse tables and substantial program fragments of use ADT and make ADT. This approach facilitates extensions to FAD and porting to new dialects of Fortran. The paper contends that the Fortran programming language should be extended to include better data abstraction facilities, and demonstrates that this extension can be accomplished without severe run-time efficiency penalties. The intended audience is programmers and managers who regularly use Fortran.
In the process of software design, both ‘structured’ diagrams and mathematical formalisms can provide useful ways of expressing a designer's ideas about the solution to a problem. The paper describes a transformation tool that generates executable specifications in the CSP/me too notation, taking as its input a high-level MASCOT design. The resulting specifications can then be used to ‘execute’ the design, so that the designer can explore the dynamic behaviour of the intended system. The design modelling strategies of the two forms are discussed, and the ways in which their use can be combined to support the development of a system design are examined.
This paper gives a comprehensive introduction to the B Abstract Machine Notation (AMN), a formal method which is based on Z and which is supported by an industrial quality toolset. The paper describes development techniques for AMN, including the formalization of requirements, specification construction, design and implementation. Results from a large-scale safety-critical development using the method are also given.
The concepts of abstract and virtual machines have been used for many different purposes to obtain diverse benefits such as code portability, compiler simplification, interoperability, distribution and direct support of specific paradigms. Despite these benefits, the main drawback of virtual machines has always been execution performance. Consequently, there has been considerable research aimed at improving the performance of application execution on virtual machines compared to native execution. Techniques like adaptive Just-In-Time compilation and efficient, sophisticated garbage collection algorithms have reached such a point that Microsoft and Sun Microsystems identify these kinds of platforms as appropriate for implementing commercial applications. What we have noticed in our research work is that these platforms have heterogeneity, extensibility, platform-porting and adaptability limitations caused by their monolithic designs. Most designs of common abstract machines focus on supporting a fixed programming language, and the computation model they offer is tied to the one employed by that language. We have identified reflection as a basis for designing an abstract machine capable of overcoming the previously mentioned limitations. Reflection is a mechanism that gives our platform the capability to adapt the abstract machine to different computation models and heterogeneous computing environments, without needing to modify its implementation. In this paper we present the reflective design of our abstract machine, example code extending the platform, a reference implementation, and a comparison between our implementation and other well-known platforms.
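As a rough illustration of the reflective idea, not the paper's actual design, the following Python sketch models an abstract machine whose instruction set is itself a runtime-modifiable table, so the computation model can be extended without changing the machine's implementation:

```python
class ReflectiveMachine:
    """A toy stack machine whose primitives live in a mutable table."""
    def __init__(self):
        self.ops = {
            "push": lambda st, v: st.append(v),
            "add": lambda st: st.append(st.pop() + st.pop()),
        }

    def define(self, name, fn):
        """Reflective extension point: install a new primitive at runtime."""
        self.ops[name] = fn

    def run(self, program):
        stack = []
        for op, *args in program:
            self.ops[op](stack, *args)   # dispatch through the op table
        return stack

vm = ReflectiveMachine()
vm.define("dup", lambda st: st.append(st[-1]))    # extend the model at runtime
print(vm.run([("push", 3), ("dup",), ("add",)]))  # [6]
```

A production reflective machine would expose far more of itself (types, dispatch, memory model) for introspection and modification; the mutable op table stands in for that whole mechanism here.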
Jackson's problem frames are an approach to describing recurring software problems. It is presumed that some knowledge of the application domain and context has been gathered so that an appropriate problem frame can be determined. However, the identification of aspects of the problem, and its appropriate ‘framing’, is recognised as a difficult task. One way to describe a software problem context is through process modelling. Once contextual information has been elicited and explicitly described, an understanding of what problems need to be solved should emerge. However, this use of process models to inform requirements is often rather ad hoc; the traceability from business process to software requirement is not always as straightforward as it ought to be. Hence, this paper proposes an approach for deriving and contextualising software requirements from business process models through use of the problem frames approach. We apply the approach to a live industrial e-business project, in which we assess the relevance and usefulness of problem frames as a means of describing the requirements context. We found that the software problem did not always match easily with Jackson's five existing frames. Where no frame was identified, however, we found that Jackson's problem diagrams did couch the requirements in their right context, and thus application of the problem frames approach was useful. This implies a need for further work in adapting the problem frames approach to the context of e-business systems.
The provision of graphical representations of data types used within nongraphical applications has proved to be not only a valuable technique for increasing usability and user comprehension, but also a time-consuming and error-prone task. The methods most commonly used (simply hardwiring a particular representation into the application) can result in a good interface component, but have the disadvantage that representations are fixed and cannot be modified without major effort. The paper describes a technique used with much success in the Papillon project, which provides tools for the automatic management of representations associated with abstract data types, allowing both designer and user to redefine the display formats using a graphical editor.
Since a query language is used as a handy tool to obtain information from a database, users want more user-friendly and fault-tolerant query interfaces. When a query search condition does not match the underlying database, users would rather receive approximate answers, obtained by relaxing the condition, than null information. They also prefer a less rigid querying structure, one which allows for vagueness in composing queries, and want the system to understand the intent behind a query. This paper presents a data abstraction approach to facilitate the development of such a fault-tolerant and intelligent query processing system. It specifically proposes a knowledge abstraction database that adopts a multilevel knowledge representation scheme called the knowledge abstraction hierarchy. Furthermore, the knowledge abstraction database extracts semantic data relationships from the underlying database and supports query relaxation using query generalization and specialization steps. Thus, it can broaden the search scope of original queries to retrieve neighborhood information and help users to pose conceptually abstract queries. Specifically, four types of vague queries are discussed, including approximate selection, approximate join, conceptual selection and conceptual join.
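A toy sketch of the generalization/specialization idea, with an invented hierarchy and data rather than the paper's knowledge abstraction database:

```python
# Toy knowledge abstraction hierarchy: abstract concepts map to more
# specific concepts or concrete values beneath them.
HIERARCHY = {
    "sedan": ["camry", "accord"],
    "suv": ["rav4", "crv"],
    "car": ["sedan", "suv"],
}

DATABASE = [
    {"model": "accord", "price": 25000},
    {"model": "rav4", "price": 30000},
]

def specialize(concept):
    """Expand an abstract concept into all concrete values beneath it."""
    children = HIERARCHY.get(concept)
    if children is None:          # a leaf: already a concrete value
        return {concept}
    values = set()
    for child in children:
        values |= specialize(child)
    return values

def relaxed_select(concept):
    """Answer a conceptual query by specializing over the hierarchy."""
    values = specialize(concept)
    return [row for row in DATABASE if row["model"] in values]

print(relaxed_select("sedan"))   # matches 'accord' via the hierarchy
```

Query generalization would walk the hierarchy in the other direction: replace a failed condition (say, `model = "camry"`) by its parent concept, then specialize again to retrieve neighbourhood answers.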
The basis for measuring many attributes in the physical world, such as size and mass, is fairly obvious when compared to the measurement of software attributes. Software has a very complex structure, and this makes it difficult to define meaningful measures that actually quantify attributes of interest. Program slices provide an abstraction that can be used to define important software attributes that can serve as a basis for measurement. We have successfully used program slices to define objective, meaningful, and valid measures of cohesion. Previously, cohesion was viewed as an attribute that could not be objectively measured; cohesion assessment relied on subjective evaluations. In this paper we review the original slice-based cohesion measures defined to measure functional cohesion in the procedural paradigm as well as the derivative work aimed at measuring cohesion in other paradigms and situations. By viewing software products at differing levels of abstraction or granularity, it is possible to define measures which are available at different points in the software life cycle and/or suitable for varying purposes.
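Two classic slice-based measures (tightness and coverage, in the style of the original functional-cohesion work) can be sketched as follows; the module and its slices are hypothetical:

```python
def tightness(slices, length):
    """Fraction of statements common to every slice of the module."""
    common = set.intersection(*slices)
    return len(common) / length

def coverage(slices, length):
    """Mean fraction of the module covered by each slice."""
    return sum(len(s) for s in slices) / (len(slices) * length)

# Hypothetical module of 6 statements; each output variable's backward
# slice is given as a set of statement numbers.
slices = [{1, 2, 3, 5}, {1, 2, 4, 5}]
print(tightness(slices, 6))   # 3/6 = 0.5
print(coverage(slices, 6))    # 8/12 ≈ 0.667
```

High tightness means the slices for the module's outputs largely share the same statements, which is exactly the intuition behind functional cohesion: the module's parts work on one task.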
Data abstraction is one of the most fundamental principles of software engineering. The increasing realization of this is reflected in the design of programming languages, from the tentative user-defined data types of Pascal through to the more extensive facilities provided today by languages such as Ada and Modula-2. This tutorial paper examines how the data abstraction facilities provided by Modula-2 can be used in a sophisticated application. We demonstrate that data abstraction facilitates the production of code which is highly modularized, easy to write and easy to read. The application, garbage collection, is an important part of system software, and is particularly interesting because it highlights the conflict between efficiency and expressive power. We show the extent to which a functional style of programming can be utilized in solving a problem which is inherently concerned with ‘state’, and we also show how formal semantics can be provided for the abstract data types.
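As a minimal illustration of the garbage-collection application, here is a mark-and-sweep sketch expressed through small abstract operations in Python (the paper's version is in Modula-2; the heap representation and names here are invented):

```python
def mark(roots, heap):
    """Return the set of cells reachable from the root set."""
    reachable, stack = set(), list(roots)
    while stack:
        cell = stack.pop()
        if cell not in reachable:
            reachable.add(cell)
            stack.extend(heap.get(cell, ()))   # follow the cell's references
    return reachable

def sweep(heap, reachable):
    """Free every cell the mark phase did not reach."""
    return {c: refs for c, refs in heap.items() if c in reachable}

heap = {"a": ["b"], "b": [], "c": ["a"]}   # "c" is garbage if roots = {"a"}
live = mark({"a"}, heap)
print(sorted(live))               # ['a', 'b']
print(sorted(sweep(heap, live)))  # ['a', 'b']
```

Hiding the heap behind `mark`/`sweep` operations is the data-abstraction point: clients never touch the cell representation directly, so it can be changed for efficiency without affecting them.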
A software process is a problem-solving process with human cognitive characteristics. This paper presents a cognitive-based problem-solving framework consisting of a problem-solving cognitive space, a category-based representation and a set of problem-solving control strategies. As an application of the framework, a cognitive-based software process model is proposed to unify the software process and the developer's cognition. The proposed model provides a new way to improve the software process by enhancing the developer's cognitive skill. The development process of management information systems (MIS) has been used to demonstrate the proposed model.
One of the main reasons for the failure of many software projects is the late discovery of a mismatch between the customers’ expectations and the pieces of functionality implemented in the delivered system. At the root of such a mismatch is often a set of poorly defined, incomplete, under-specified, and inconsistent requirements. Test driven development has recently been proposed as a way to clarify requirements during the initial elicitation phase, by means of acceptance tests that specify the desired behavior of the system. The goal of the work reported in this paper is to empirically characterize the contribution of acceptance tests to the clarification of the requirements coming from the customer. We focused on Fit tables, a way to express acceptance tests, which can be automatically translated into executable test cases. We ran two experiments with students from the University of Trento and the Politecnico di Torino, to assess the impact of Fit tables on the clarity of requirements. We considered whether Fit tables actually improve requirement understanding and whether this requires any additional comprehension effort. Experimental results show that Fit helps in the understanding of requirements without requiring a significant additional effort.
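A Fit table pairs input columns with an expected-output column, and each row becomes an executable test case. The following Python sketch conveys the idea with an invented table and function; real Fit uses HTML tables bound to fixture classes:

```python
# A Fit-style decision table: input columns, then an expected-output column.
fit_table = [
    # a, b, expected result of add(a, b)
    (2, 3, 5),
    (10, -4, 6),
]

def add(a, b):          # hypothetical code under test
    return a + b

def run_fit(table, func):
    """Execute each row and report 'right'/'wrong', as Fit colours cells."""
    results = []
    for *inputs, expected in table:
        actual = func(*inputs)
        results.append("right" if actual == expected else "wrong")
    return results

print(run_fit(fit_table, add))   # ['right', 'right']
```

The requirements-clarification benefit studied in the paper comes from the table itself: customers can read and author concrete input/output rows long before any fixture code exists.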
Context: The technology acceptance model (TAM) was proposed in 1989 as a means of predicting technology usage. However, it is usually validated using a measure of behavioural intention to use (BI) rather than actual usage. Objective: This review examines the evidence that the TAM predicts actual usage, using both subjective and objective measures of actual usage. Method: We performed a systematic literature review based on a search of six digital libraries, along with a vote-counting meta-analysis to analyse the overall results. Results: The search identified 79 relevant empirical studies in 73 articles. The results show that BI is likely to be correlated with actual usage. However, the TAM variables perceived ease of use (PEU) and perceived usefulness (PU) are less likely to be correlated with actual usage. Conclusion: Care should be taken using the TAM outside the context in which it has been validated.
Pressures are increasing on organisations to take an early and more systematic approach to security. A key to enforcing security is to restrict access to valuable assets. We regard access policies as security requirements that specify such restrictions. Current requirements engineering methods are generally inadequate for eliciting and analysing these types of requirements, because they do not allow the complex organisational structures and procedures that underlie policies to be represented adequately. This paper discusses roles and why they are important in the analysis of security. It relates roles to organisational theory and shows how they can be employed to define access policies. A framework based on these concepts is presented for analysing access policies.
Access control (AC) is a mechanism for achieving confidentiality and integrity in software systems. Access control policies (ACPs) express rules concerning who can access what information, and under what conditions. ACP specification is not an explicit part of the software development process and is often isolated from requirements analysis activities, leaving systems vulnerable to security breaches because policies are specified without ensuring compliance with system requirements. In this paper, we present the Requirements-based Access Control Analysis and Policy Specification (ReCAPS) method for deriving and specifying ACPs, and discuss three validation efforts. The method integrates policy specification into the software development process, ensures consistency across software artifacts, and provides prescriptive guidance for how to specify ACPs. It also improves the quality of requirements specifications and system designs by clarifying ambiguities and resolving conflicts across these artifacts during the analysis, making a significant step towards ensuring that policies are enforced in a manner consistent with a system’s requirements specifications. To date, the method has been applied within the context of four operational systems. Additionally, we have conducted an empirical study to evaluate its usefulness and effectiveness. A software tool, the Security and Privacy Requirements Analysis Tool (SPRAT), was developed to support ReCAPS analysis activities.
One of the most significant difficulties with developing Service-Oriented Architecture (SOA) involves meeting its security challenges, since responsibility for SOA security rests with both the service providers and the consumers. In recent years, many solutions to these challenges have been implemented, such as the Web Services Security Standards, including WS-Security and WS-Policy. However, those standards are insufficient for the new generation of Web technologies, including Web 2.0 applications. In this research, we propose an intelligent SOA security framework by introducing its two most promising services: the Authentication and Security Service (NSS), and the Authorization Service (AS). The suggested autonomic and reusable services are constructed as an extension of WS-∗ security standards, with the addition of intelligent mining techniques, in order to improve performance and effectiveness. In this research, we apply three different mining techniques: the Association Rules, which helps to predict attacks; the Online Analytical Processing (OLAP) Cube, for authorization; and clustering mining algorithms, which facilitate access control rights representation and automation. Furthermore, a case study is explored to depict the behavior of the proposed services inside an SOA business environment. We believe that this work is a significant step towards achieving dynamic SOA security that automatically controls the access to new versions of Web applications, including analyzing and dropping suspicious SOAP messages and automatically managing authorization roles.
While relational database technology has dominated the database field for more than a decade, object-oriented database (OODB) technology has recently gained a lot of attention in the database community. Many researchers are concerned about the performance of OODBs. This paper proposes an OODB design methodology called fragmented hash-indexed (FHIN) that is aimed at improving the operating performance of OODBs. The FHIN model's storage structure contains an Instances–Classes Table (ICT) with a two-segment data design. Query processing is done by accessing data segments through the ICT with an algorithm introduced here. The FHIN model uses three access methods: hashing, indexing, or hash-indexing. The database performance of FHIN is compared to two previous access methods using 1050 simulation runs. Results indicate that the FHIN model is 43% better than either of the other models in smaller databases, 65% better in larger databases, 50% better under conditions of high updating, and 72% better under conditions of low updating. These results suggest that the FHIN methodology is promising and worthy of further exploration in OODB software development.
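A toy Python sketch of hash-indexed access through an instances-classes table; the structure and names are illustrative and do not reproduce the FHIN model's actual two-segment storage layout:

```python
class HashIndexedStore:
    """Toy store: a class-to-instances table pointing into flat storage."""
    def __init__(self):
        self.segments = []   # flat storage of instance records
        self.ict = {}        # class name -> {key: segment offset}

    def insert(self, cls, key, record):
        self.ict.setdefault(cls, {})[key] = len(self.segments)
        self.segments.append(record)

    def lookup(self, cls, key):
        """Hash on class, then on key, then a single segment read."""
        return self.segments[self.ict[cls][key]]

store = HashIndexedStore()
store.insert("Employee", 42, {"name": "Ada"})
print(store.lookup("Employee", 42)["name"])   # Ada
```

The point of such a structure is that instance lookup costs a constant number of hash probes regardless of database size, which is consistent with the performance gains the paper reports for hash-indexed access.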
This paper presents the schema and component architectures of a prototype multidatabase management system, EDDS, which is used for research and teaching purposes in the authors' laboratories. Many of the features are shared with other distributed database management systems but some novel features, such as the gateway feature which allows personal computers to share distributed database functionality, are particularly attractive in many of the specific real-world scenarios for which the system was designed. The system is written in C on Unix, and in PASCAL on DEC VMS, and several PC ports have been implemented.
Specifying, enforcing and evolving access control policies is essential to prevent security breaches and unavailability of resources. These access control design concerns impose requirements that allow only authorized users to access protected computer-based resources. Addressing these concerns in a design results in the spreading of access control functionality across several design modules. The pervasive nature of access control functionality makes it difficult to evolve, analyze, and enforce access control policies. To tackle this problem, we propose using an aspect-oriented modeling (AOM) approach for addressing access control concerns. In the AOM approach, functionality that addresses a pervasive access control concern is localized in an aspect. Other functional design concerns are addressed in a model of the application referred to as a primary model. Composing access control aspects with a primary model results in an application model that addresses access control concerns. We illustrate our approach using a form of Role-Based Access Control.
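The idea of localizing the access-control concern so that business logic stays policy-free can be sketched in Python with a decorator standing in for an aspect (the policy and names are invented, and the paper works at the level of UML models rather than code):

```python
import functools

# Hypothetical role-based policy, kept in one place rather than scattered.
ROLE_PERMISSIONS = {"admin": {"delete"}, "clerk": {"read"}}

def requires(permission):
    """Decorator localizing the access-control concern, aspect-style,
    so the business function below contains no policy code."""
    def deco(func):
        @functools.wraps(func)
        def wrapper(role, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"{role} lacks {permission}")
            return func(*args, **kwargs)
        return wrapper
    return deco

@requires("delete")
def delete_record(record_id):
    return f"deleted {record_id}"

print(delete_record("admin", 7))   # allowed
# delete_record("clerk", 7) would raise PermissionError
```

Because the check lives only in `requires`, evolving the policy means editing one module, which is the maintainability argument the AOM approach makes at the design level.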
The unprecedented increase in the availability of information, due to the success of the World Wide Web, has generated an urgent need for new and robust methods that simplify the querying and integration of data. In this research, we investigate a practical framework for data access to heterogeneous data sources. The framework utilizes the extensible markup language (XML) Schema as the canonical data model for the querying and integration of data from heterogeneous data sources. We present algorithms for mapping relational and network schemas into XML schemas using the relational mapping algorithm. We also present the library system of databases (libSyD), a prototype system for heterogeneous database access.
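A minimal sketch of mapping a relational table description to an XML Schema fragment; the type mapping and generated structure here are illustrative and not the paper's algorithm:

```python
# Illustrative SQL-to-XSD type mapping.
SQL_TO_XSD = {"INTEGER": "xs:integer", "VARCHAR": "xs:string"}

def table_to_xsd(table, columns):
    """Emit an XML Schema element for one relational table:
    the table becomes a complex type, each column a typed sub-element."""
    lines = [f'<xs:element name="{table}">',
             "  <xs:complexType><xs:sequence>"]
    for name, sqltype in columns:
        lines.append(
            f'    <xs:element name="{name}" type="{SQL_TO_XSD[sqltype]}"/>')
    lines.append("  </xs:sequence></xs:complexType>")
    lines.append("</xs:element>")
    return "\n".join(lines)

print(table_to_xsd("book", [("isbn", "VARCHAR"), ("year", "INTEGER")]))
```

A full mapping would also carry keys and foreign keys over (e.g. as `xs:key`/`xs:keyref`), which is where the integration value of a canonical XML Schema model comes from.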
Primarily used in military communications in the past, code division multiple access (CDMA) has recently been found to be attractive for personal communications as well. As a large number of mobile hosts are supported within a cell and a wide range of services are provided, one of the most important issues in a CDMA personal communication network is how to control the uplink access to the shared wireless spectrum. In this paper, we address this issue in a realistic situation where the receiver-oriented transmission protocol is employed and the packet loss due to multiple access interference (MAI) cannot be ignored. A medium access control protocol for voice and data integration is proposed. It solves the problems of code assignment and MAI control at the same time. A Markov chain model is used to analyze the protocol, and the analytical results are shown to be very close to simulations. Based on the modeling, the effectiveness of the protocol's MAI control is demonstrated and some system design issues are investigated.
To provide an alternative way of accessing the WWW, this paper proposes the FaxWeb system, which enables access to WWW services through a fax machine. With FaxWeb, users who do not have computers, or who are unable to use them, can access WWW services using fax machines. In this way, people can use not only computers but also fax machines, which are traditional consumer/office electronics and are gradually becoming less expensive, to access the WWW at client sites. The user spectrum of the WWW can thus be expanded to people who lack the ability to use computers. For convenient use, the FaxWeb system includes a voice response and touch-tone WWW browsing facility for accessing the WWW via fax machines. The system architecture and development of FaxWeb are presented in the paper.
Mesodata modelling is a recently developed approach for enhancing a data model’s capabilities by allowing more advanced semantics to be associated with the domain of an attribute. Mesodata supplies both an inter-value structure for the domain and a set of operations applicable to that structure, which may be used to provide additional functionality in a database. We argue that conceptual modelling methodologies would be semantically richer if they could express the semantics of complex data types for attribute domains. This paper investigates the accommodation of mesodata within entity-relationship and object role modelling, presenting the Mesodata Entity-Relationship (MDER) model and Mesodata Object Role Modelling (MDORM), which show how the mesodata concept can be incorporated into conceptual modelling methodologies to include the semantics of complex domain structures.
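The idea of a domain carrying both an inter-value structure and operations over it can be sketched in Python; the graph-structured domain and distance operation below are invented examples, not MDER/MDORM constructs:

```python
import collections

class GraphDomain:
    """A toy mesodata-style domain: values related by a graph, plus an
    operation over that structure usable in queries."""
    def __init__(self, edges):
        self.adj = collections.defaultdict(set)
        for a, b in edges:
            self.adj[a].add(b)
            self.adj[b].add(a)

    def distance(self, a, b):
        """Domain-level operation: shortest-path hops between two values."""
        seen, frontier, d = {a}, {a}, 0
        while frontier:
            if b in frontier:
                return d
            frontier = {n for v in frontier for n in self.adj[v]} - seen
            seen |= frontier
            d += 1
        return None   # values not connected in the domain structure

cities = GraphDomain([("a", "b"), ("b", "c")])
print(cities.distance("a", "c"))   # 2
```

A query could then filter rows by `cities.distance(row.city, "a") <= 1`, which is exactly the kind of domain-aware functionality an ordinary flat attribute domain cannot express.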