Information and Software Technology

Published by Elsevier BV

Print ISSN: 0950-5849

Articles


Mirror adaptive random testing (Journal version)

December 2004 · 329 Reads

T. Y. Chen · [...] · S. P. Ng
Recently, adaptive random testing (ART) has been introduced to improve the fault-detection effectiveness of random testing for non-point types of failure patterns. However, ART requires additional computations to ensure an even spread of test cases, which may render ART less cost-effective than random testing. This paper presents a new technique, namely mirror ART, to reduce these computations. It is an integration of the technique of mirroring and ART. Our simulation results clearly show that mirror ART does improve the cost-effectiveness of ART.
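As a rough illustration of the idea summarized in this abstract, the sketch below combines a simple fixed-size-candidate-set ART generator with translation-style mirroring over a one-dimensional input domain. The subdomain count, candidate-set size, and the use of translation as the mirror function are illustrative assumptions, not details taken from the paper.

```python
import random

def fscs_art_candidate(executed, low, high, k=10):
    """Fixed-size-candidate-set ART (one possible ART variant): from k random
    candidates, pick the one farthest from all previously executed test cases
    in a one-dimensional input domain."""
    candidates = [random.uniform(low, high) for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates, key=lambda c: min(abs(c - e) for e in executed))

def mirror_art(num_tests, low=0.0, high=1.0, m=4):
    """Mirror ART sketch: run ART only in the first of m equal subdomains and
    translate ('mirror') each generated test case into the other subdomains,
    so the expensive distance computations are confined to one subdomain."""
    width = (high - low) / m          # width of each subdomain
    source = []                       # ART test cases in the source subdomain
    tests = []
    while len(tests) < num_tests:
        t = fscs_art_candidate(source, low, low + width)
        source.append(t)
        for i in range(m):            # the source test case and its mirror images
            if len(tests) < num_tests:
                tests.append(t + i * width)
    return tests

if __name__ == "__main__":
    print(mirror_art(8))
```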

Back-to-back testing

February 1990 · 145 Reads

Back-to-back testing involves cross-comparison of all responses obtained from functionally equivalent software components. Whenever a difference is observed, it is investigated and, if necessary, a correction is applied. Events associated with the back-to-back testing process are defined and examined. The process is first modeled assuming failure independence, and then assuming failure correlation. It is shown that multiversion testing involving more than four versions offers rapidly diminishing returns in terms of failure-detection effectiveness, unless additional versions reduce the span of correlated failures. It is also shown that back-to-back testing can remain an efficient way of detecting failures even when the probability of identical and wrong responses from all participating versions is very close to one.
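The cross-comparison step described in this abstract can be illustrated with a minimal sketch: run every test input through all functionally equivalent versions and report any disagreement for investigation. The three absolute-value routines are hypothetical examples, not from the paper.

```python
def back_to_back(versions, test_inputs):
    """Back-to-back testing sketch: run every test input through all
    functionally equivalent versions and report any cross-comparison
    discrepancy for investigation."""
    discrepancies = []
    for x in test_inputs:
        responses = [v(x) for v in versions]
        if len(set(responses)) > 1:      # at least two versions disagree
            discrepancies.append((x, responses))
    return discrepancies

# Hypothetical example: three 'independently developed' absolute-value
# routines, one of which is faulty for negative inputs.
v1 = lambda x: abs(x)
v2 = lambda x: x if x >= 0 else -x
v3 = lambda x: x                         # faulty version

print(back_to_back([v1, v2, v3], [-2, -1, 0, 1, 2]))
# -> [(-2, [2, 2, -2]), (-1, [1, 1, -1])]
```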

A path analysis approach to concurrent program testing

April 1990 · 25 Reads

A path analysis approach to concurrent program testing is proposed. A concurrent path model for modeling the execution behavior of a concurrent program is presented. In the model, an execution of a concurrent program is seen as involving a concurrent path (which comprises the paths of all concurrent tasks), and the tasks' synchronizations are modeled as a concurrent route to traverse the concurrent path involved in the execution. Accordingly, testing is a process of examining the correctness of each concurrent route along all concurrent paths of a concurrent program. On the basis of the model, the test format is defined and a path analysis testing methodology is presented. Also, several coverage criteria, extended from coverage criteria for sequential programs, are proposed. Some practical issues of path analysis testing, namely test path generation, test data generation, and the design of the test execution control mechanism, are also addressed.

Flow insensitive points-to sets

February 2001 · 21 Reads

Pointer analysis is an important part of source code analysis. Many programs that manipulate source code take points-to sets as part of their input. Points-to related data collected from 27 mid-sized C programs (ranging in size from 1168 to 53131 lines of code) is presented. The data shows the relative sizes and the complexities of computing points-to sets. Such data is useful in improving algorithms for the computation of points-to sets, as well as algorithms that make use of this information in other operations. Several uses of the data are discussed.
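For readers unfamiliar with points-to sets, the following minimal sketch computes flow-insensitive points-to sets for a toy language of C-like assignments by iterating to a fixed point. It is only meant to show what the collected data describes; it is not the analysis used to gather the paper's measurements, and the statement forms and example program are assumptions for illustration.

```python
from collections import defaultdict

def flow_insensitive_points_to(statements):
    """Minimal sketch of a flow-insensitive points-to analysis over three
    C-like assignment forms, iterated to a fixed point:
        ('addr', p, x)  means  p = &x;
        ('copy', p, q)  means  p = q;
        ('load', p, q)  means  p = *q;
    Statement order is ignored, which is what 'flow insensitive' means."""
    pts = defaultdict(set)
    changed = True
    while changed:
        changed = False
        for kind, lhs, rhs in statements:
            if kind == 'addr':
                new = {rhs}
            elif kind == 'copy':
                new = pts[rhs]
            else:  # 'load': everything pointed to by anything rhs points to
                new = set().union(*(pts[t] for t in pts[rhs])) if pts[rhs] else set()
            if not new <= pts[lhs]:
                pts[lhs] |= new
                changed = True
    return dict(pts)

# Hypothetical fragment:  p = &x;  q = p;  r = &q;  s = *r;
prog = [('addr', 'p', 'x'), ('copy', 'q', 'p'),
        ('addr', 'r', 'q'), ('load', 's', 'r')]
print(flow_insensitive_points_to(prog))
# -> {'p': {'x'}, 'q': {'x'}, 'r': {'q'}, 's': {'x'}}
```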

Data quality

October 1990 · 35 Reads

Data are used in the delivery of many products and services, and so data quality is an important component of customers' perceptions of the quality of these products and services. The paper describes efforts initiated by AT&T to control and improve the quality of data it uses to operate its worldwide intelligent network, to conduct its day-to-day operations, and to manage its businesses smoothly. These efforts stem from the observation that it is extremely difficult to fix faulty data once they are in a database. Therefore, attention must be directed at processes that introduce, modify, and transform data. Only when these processes have been put into a state of statistical control can sustainable improvements in data quality be expected. The report describes AT&T's four-part data quality improvement program:
  • Develop the technical foundations for understanding and measuring data quality. Four objective dimensions of data quality (accuracy, completeness, consistency, and currency) are defined.
  • Extend and apply process management techniques to information management.
  • Extend and apply methods of statistical process control. In particular, data tracking, a method to evaluate quantitatively the processes by which data are introduced into a database, is described. Data tracking provides ongoing process control and helps identify improvement opportunities.
  • Develop and apply methods that help ensure data quality in a data-processing environment. Collectively, such methods are referred to as data engineering and are particularly useful in ensuring data consistency.
Taken together, these efforts form the basis of a comprehensive, overall approach to improving data quality and sustaining the improvements.

Information systems work quality

December 1997 · 17 Reads

It is suggested that the multi-perspective nature of information systems (IS) quality, representing the manifold interest groups involved, is the very reason why attempts to develop any general purpose quality model for information systems tend to be fruitless. This paper develops the concept of IS work quality by utilising the existing SOLE (Software Library Evolution) quality model by Eriksson and Torn (1991), and builds upon the elements which particularly address the quality of IS work practices by discussing the management issues that affect those elements and are necessary to assure and maintain the quality of systems evolution and use. The model of information systems work quality proposed in this article provides a framework that allows the consideration of different work contexts and the specific needs of an organisation when evaluating the quality of the information system at hand and its benefits to the organisation. The IS work quality construct broadens the software quality concepts as it caters for the diverse needs of organisations and work contexts.

Case study in human factors evaluation

July 1992 · 33 Reads

A human factors (HF) evaluation, carried out as part of the development of a set of computer-aided software engineering (CASE) tools, is presented and is used as an example of the processes and products of typical HF evaluation practice. The role of HF evaluation as a part of software quality assurance is identified, and typical current practice of HF evaluation is characterized. The details of the particular evaluation are then reported. First, its processes are described; these are determined by relating features of the system under development to the desired focus, actual context, and possible methods of the evaluation. Then the products of the evaluation are described; these products or outcomes are formulated as the user-computer interaction difficulties that were identified, grouped into three types (termed task, presentation, and device difficulties). The characteristics of each type of difficulty are discussed, in terms of their ease of identification, their generality across application domains, the HF knowledge that they draw on, and their relationship to redesign. The conclusion considers the usefulness of the evaluation, the inadequacies of system development practice it implies, and how to incorporate HF evaluation into an improved system development practice.

Software development under Def Stan 00–55: a guide

April 1990 · 35 Reads

In May 1989 the UK Ministry of Defence issued Interim Defence Standard 00–55 ‘Requirements for the procurement of safety critical software in defence equipment’ for comment. The standard sets stiff requirements on the development of safety-critical software in the defence arena. The paper looks at the scope of the new standard and examines its methodological implications, giving commentary on the standard's requirements.

Assessment methodology for software process improvement in small organizations. Information and Software Technology, 52(10), 1044-1061

October 2010 · 803 Reads

Context: Diagnosing processes in a small company requires process assessment practices which give qualitative and quantitative results; these should offer an overall view of the process capability. The purpose is to obtain relevant information about the running of processes, for use in their control and improvement. However, small organizations have some problems in running process assessment, due to their specific characteristics and limitations. Objective: This paper presents a methodology for assessing software processes which assists the activity of software process diagnosis in small organizations. There is an attempt to address issues such as the fact that: (i) process assessment is expensive and typically requires major company resources; and (ii) many light assessment methods do not provide information that is detailed enough for diagnosing and improving processes. Method: To achieve all this, the METvalCOMPETISOFT assessment methodology was developed. This methodology: (i) incorporates the strategy of internal assessments known as rapid assessment, meaning that these assessments do not take up too much time or use an excessive quantity of resources, nor are they too rigorous; and (ii) meets all the requirements described in the literature for an assessment proposal which is customized to the typical features of small companies. Results: This paper also describes the experience of applying this methodology in eight small software organizations that took part in the COMPETISOFT project. The results obtained show that this approach allows us to obtain reliable information about the strengths and weaknesses of software processes, along with information for companies on opportunities for improvement. Conclusion: The assessment methodology proposed sets out the elements needed to assist with diagnosing the process in small organizations step by step, while seeking to make its application economically feasible in terms of resources and time. From the initial application it may be seen that this assessment methodology can be useful, practical and suitable for diagnosing processes in this type of organization.

The financial profile of the software industry between 1980 and 1994

August 2000 · 75 Reads

The objective of this work is to trace the financial profile of firms in the software industry between the years 1980–1994 using data from COMPUSTAT. Our results are useful both to academics, who strive to link theory to empirical observation, and to practitioners trying to better understand the environment in which they operate. Our analysis suggests that (1) the software industry has been steadily expanding over the sample period, (2) the relative market power of the industry leaders has remained fairly stable, (3) the median firm operating in the industry has become smaller through time, (4) firms have been spending increasingly more on R&D and less on capital investment through time, (5) profitability was declining over the first half of the sample period, stabilizing in the second half, and (6) the risk of bankruptcy for the median firm has similarly declined over the sample period.

How large are software cost overruns? A review of the 1994 CHAOS report

April 2006 · 299 Reads

The Standish Group reported in their 1994 CHAOS report that the average cost overrun of software projects was as high as 189%. This figure for cost overrun is referred to frequently by scientific researchers, software process improvement consultants, and government advisors. In this paper, we review the validity of the Standish Group's 1994 cost overrun results. Our review is based on a comparison of the 189% cost overrun figure with the cost overrun figures reported in other cost estimation surveys, and an examination of the Standish Group's survey design and analysis methods. We find that the figure reported by the Standish Group is much higher than those reported in similar estimation surveys and that there may be severe problems with the survey design and methods of analysis, e.g. the population sampling method may be strongly biased towards ‘failure projects’. We conclude that the figure of 189% for cost overruns is probably much too high to represent typical software projects in the 1990s and that a continued use of that figure as a reference point for estimation accuracy may lead to poor decision making and hinder progress in estimation practices.
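To make the headline figure concrete, the sketch below applies the usual definition of cost overrun as the percentage by which actual cost exceeds the estimate. The definition is a common convention rather than a formula quoted from the paper, and the example projects are hypothetical.

```python
def cost_overrun_percent(estimated, actual):
    """Cost overrun as commonly defined in estimation surveys:
    the percentage by which actual cost exceeds the estimate."""
    return (actual - estimated) / estimated * 100

# A 189% overrun means the project cost almost three times its estimate:
print(cost_overrun_percent(100, 289))   # -> 189.0
# A hypothetical project with a far more modest overrun:
print(cost_overrun_percent(100, 135))   # -> 35.0
```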

An analysis of the most cited articles in software engineering journals - 1999

December 2005 · 61 Reads

Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 1999. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to actually show the most cited articles, and second, to invite the authors of the most cited articles in 1999 to contribute to a special issue of Information and Software Technology. Five invited authors have accepted the invitation and their articles are appearing in this special issue. Moreover, the research topics and methods of the most cited articles in 1999 are compared with those from the most cited articles in 1994 to provide a picture of similarities and differences between the years.

An object-oriented approach to formally analyze the UML 2.0 activity partitions

September 2007 · 42 Reads

Nowadays, UML is the de facto standard for object-oriented analysis and design. Unfortunately, the deficiency of its dynamic semantics limits the possibility of early specification analysis. UML 2.0 aims to make this semantics more precise and complete, but it remains informal and still lacks tools for automatic validation. The main purpose of this study is to automate the formal validation, according to a value-oriented approach, of the behavior of systems expressed in UML. The marriage of Petri nets with temporal logics seems a suitable formalism for translating and then validating UML state-based models. The contributions of the paper are threefold. First, we consider how UML 2.0 activity partitions can be transformed into Object Petri Nets to formalize the object dynamics in an object-oriented context. Second, we develop an approach based on the object and sequence diagram information to initialize the derived Petri nets in terms of objects and events. Finally, to thoroughly verify whether the UML model meets the system's required properties, we suggest using OCL invariants, exploiting their association-end constructs. The verification is performed on a predicate/transition net explored by model checking. A case study is given to illustrate this methodology throughout the paper.

An analysis of the most cited articles in software engineering journals - 2000

January 2007 · 25 Reads

Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 2000. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed by the research community as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to identify the most cited articles, and second, to invite the authors of the most cited articles in 2000 to contribute to a special issue of Information and Software Technology. Five authors have accepted the invitation and their articles appear in this special issue. Moreover, an analysis of the most cited software engineering journal articles in the last 20 years is presented. The presentation includes both the most cited articles in absolute numbers and the most cited articles when looking at the average number of citations per year. The article describing the SPIN model checker by G.J. Holzmann published in 1997 is first on both these lists.

An analysis of the most cited articles in software engineering journals – 2001

January 2008 · 31 Reads

Citations and related work are crucial in any research to position the work and to build on the work of others. A high citation count is an indication of the influence of specific articles. The importance of citations means that it is interesting to analyze which articles are cited the most. Such an analysis has been conducted using the ISI Web of Science to identify the most cited software engineering journal articles published in 2001. The objective of the analysis is to identify and list the articles that have influenced others the most as measured by citation count. An understanding of which research is viewed by the research community as most valuable to build upon may provide valuable insights into what research to focus on now and in the future. Based on the analysis, a list of the 20 most cited articles is presented here. The intention of the analysis is twofold. First, to identify the most cited articles, and second, to invite the authors of the most cited articles in 2001 to contribute to a special section of Information and Software Technology. Three authors have accepted the invitation and their articles appear in this special section. Moreover, an analysis has been conducted regarding which authors are most productive in terms of software engineering journal publications. The latter analysis focuses on the publications in the last 20 years, which is intended as a complement to last year’s analysis focusing on the most cited articles in the last 20 years [C. Wohlin, An Analysis of the Most Cited Articles in Software Engineering Journals – 2007, Information and Software Technology 49 (1) 2–11]. The most productive author in the last 20 years is Professor Victor Basili.

How long into the 21st century will the aftermath of the millennium bug last?

November 1999 · 11 Reads

While practically nobody would dispute that there is a Year 2000 (Y2K) problem with software, and by extension with computers and communications at large, there is a wide range of opinions on how critical that problem will be. Opinions vary widely, and there is no way to check how reasonable they are, because there is no precedent and no facts against which to gauge them. The focal point of this article is litigation, which will, in all likelihood, go well beyond the year 2000. This is the first event for which industrial companies and their lawyers, as well as insurers, bankers and other professionals, know in advance when it is going to happen, but not what will happen or what its most likely magnitude might be. The article reviews the current state of preparedness in terms of Y2K and offers some suggestions about what might take place after January 1, 2000.

A company-office system “Valentine” providing informal communication and personal space based on 3D virtual space and avatars

April 1999 · 25 Reads

Shinkuro Honda · Hironari Tomioka · Takaaki Kimura · [...] · Yutaka Matsushita
In this paper we propose a virtual office environment that integrates natural communication and secure private space. The features of this system are the following. (1) The system has a virtual shared room based on the idea of a “shared room metaphor”, and 3D graphics on an SGI workstation are used to render it; Ethernet is used for media (i.e. real-time audio/video streams). (2) The system implements the field of view of a human by using our “around view” technique. This provides more natural communication between members. (3) “Sound effects” are used to help users feel the presence of other members. For instance, members hear the sound of a door opening when someone logs into our system and the sound of footsteps when someone is walking around our virtual room. (4) At times our system limits the flow of awareness information. A person concentrating on his/her work may not want to perceive excessive awareness of others. To support such situations, we define an “awareness space” which restricts the field where other members' awareness is transmitted. The awareness space changes in size with the degree of concentration, which is measured through two factors: the movement of a chair and the frequency of keyboard typing. (5) A “headphone metaphor”: a picture of a headphone is attached above a person's image and changes color depending on the degree of concentration. This enables other members to recognize his/her state and can be a criterion as to whether he/she is available to communicate or not. (6) In the virtual space, users are represented as avatars built of 3D polygons and still pictures. The avatars change shape automatically according to the users' actions.

[Figure previews from the following article: modeling of a business process using event-driven process chains (Fig. 1); mapping connectors onto places and transitions (Fig. 4); the event-driven process chain of Fig. 1 mapped onto a Petri net (Fig. 6); an erroneous event-driven process chain (Fig. 7); well-structuredness based on the distinction between good and bad constructs (Fig. 8)]

Aalst, W.M.P.: Formalization and Verification of Event-driven Process Chains. Information and Software Technology 41, 639-650

July 1999 · 1,849 Reads

For many companies, business processes have become the focal point of attention. As a result, many tools have been developed for business process engineering and the actual deployment of business processes. Typical examples of these tools are Business Process Reengineering (BPR) tools, Enterprise Resource Planning (ERP) systems, and Workflow Management (WFM) systems. Some of the leading products, e.g. SAP R/3 (ERP/WFM) and ARIS (BPR), use Event-driven Process Chains (EPCs) to model business processes. Although the EPCs have become a widespread process modeling technique, they suffer from a serious drawback: neither the syntax nor the semantics of an EPC are well defined. In this paper, this problem is tackled by mapping EPCs (without connectors of type ∨) onto Petri nets. The Petri nets have formal semantics and provide an abundance of analysis techniques. As a result, the approach presented in this paper gives formal semantics to EPCs. Moreover, many analysis techniques are available for EPCs. To illustrate the approach, it is shown that the correctness of an EPC can be checked in polynomial time by using Petri-net-based analysis techniques.
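The following sketch gives a flavor of the kind of mapping the abstract describes, under simplifying assumptions: events and XOR connectors become places, functions and AND connectors become transitions, and auxiliary nodes are inserted to keep the resulting net bipartite. The example EPC and node naming are hypothetical, and the full construction in the paper handles connectors more carefully.

```python
def epc_to_petri_net(epc):
    """Rough sketch of an EPC-to-Petri-net mapping (a simplification of the
    construction the abstract refers to): events and XOR connectors become
    places, functions and AND connectors become transitions; a fresh place or
    silent transition is inserted wherever two transitions or two places would
    otherwise be directly connected, so the result stays bipartite."""
    kind = epc['nodes']
    is_place = lambda n: kind[n] in ('event', 'xor')
    places = {n for n in kind if is_place(n)}
    transitions = {n for n in kind if not is_place(n)}
    arcs = []
    for i, (a, b) in enumerate(epc['arcs']):
        if is_place(a) != is_place(b):       # already place <-> transition
            arcs.append((a, b))
        elif is_place(a):                    # place -> place: add silent transition
            t = f'tau_{i}'
            transitions.add(t)
            arcs += [(a, t), (t, b)]
        else:                                # transition -> transition: add place
            p = f'p_{i}'
            places.add(p)
            arcs += [(a, p), (p, b)]
    return {'places': places, 'transitions': transitions, 'arcs': arcs}

# Hypothetical EPC: an event triggers a function whose outcome is split by an
# AND connector into two follow-up events.
epc = {
    'nodes': {'order received': 'event', 'check order': 'function',
              'and1': 'and', 'order ok': 'event', 'stock reserved': 'event'},
    'arcs': [('order received', 'check order'), ('check order', 'and1'),
             ('and1', 'order ok'), ('and1', 'stock reserved')],
}
print(epc_to_petri_net(epc))
```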

Research in software engineering: An analysis of the literature. Information and Software Technology 44(8): 491-506

June 2002 · 2,418 Reads

In this paper, we examine the state of software engineering (SE) research from the point of view of the following research questions:
1. What topics do SE researchers address?
2. What research approaches do SE researchers use?
3. What research methods do SE researchers use?
4. On what reference disciplines does SE research depend?
5. At what levels of analysis do SE researchers conduct research?
To answer those questions, we examined 369 papers in six leading research journals in the SE field, answering those research questions for each paper. From that examination, we conclude that SE research is diverse regarding topic, narrow regarding research approach and method, inwardly-focused regarding reference discipline, and technically focused (as opposed to behaviorally focused) regarding level of analysis. We pass no judgment on the SE field as a result of these findings. Instead, we present them as groundwork for future SE research efforts.

Wohlin, C.: The Fundamental Nature of Requirements Engineering Activities as a Decision Making Process. Information and Software Technology 45, 945-954

November 2003 · 257 Reads

The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.

Mendling, J.: Modeling process-related RBAC models with extended UML activity models. Inf. Softw. Technol. 53, 456-483

May 2011 · 112 Reads

Context: Business processes are an important source for the engineering of customized software systems and are constantly gaining attention in the area of software engineering as well as in the area of information and system security. While the need to integrate processes and role-based access control (RBAC) models has been repeatedly identified in research and practice, standard process modeling languages do not provide corresponding language elements. Objective: In this paper, we are concerned with the definition of an integrated approach for modeling processes and process-related RBAC models – including roles, role hierarchies, statically and dynamically mutual exclusive tasks, as well as binding of duty constraints on tasks. Method: We specify a formal metamodel for process-related RBAC models. Based on this formal model, we define a domain-specific extension for a standard modeling language. Results: Our formal metamodel is generic and can be used to extend arbitrary process modeling languages. To demonstrate our approach, we present a corresponding extension for UML2 activity models. The name of our extension is Business Activities. Moreover, we implemented a library and runtime engine that can manage Business Activity runtime models and enforce the different policies and constraints in a software system. Conclusion: The definition of process-related RBAC models at the modeling-level is an important prerequisite for the thorough implementation and enforcement of corresponding policies and constraints in a software system. We identified the need for modeling support of process-related RBAC models from our experience in real-world role engineering projects and case studies. The Business Activities approach presented in this paper is successfully applied in role engineering projects.
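To make the constraint types listed in this abstract concrete, here is a minimal sketch of a process-related RBAC policy with role-to-task assignment, mutually exclusive tasks (enforced here per process instance), and binding of duty. It is not the paper's Business Activities metamodel; the roles, tasks, and four-eyes example are hypothetical.

```python
class ProcessRBAC:
    """Minimal sketch (not the Business Activities metamodel) of the kinds of
    constraints the abstract lists: role-to-task assignment, mutually
    exclusive tasks, and binding of duty (BoD)."""
    def __init__(self):
        self.role_tasks = {}   # role -> set of tasks the role may perform
        self.sme = set()       # task pairs no single subject may both perform
        self.bod = set()       # task pairs the same subject must perform
        self.performed = {}    # task -> subject who executed it (per instance)

    def can_execute(self, subject, roles, task):
        if not any(task in self.role_tasks.get(r, set()) for r in roles):
            return False       # no assigned role permits this task
        for a, b in self.sme:  # mutual exclusion within this process instance
            other = b if task == a else a if task == b else None
            if other is not None and self.performed.get(other) == subject:
                return False
        for a, b in self.bod:  # BoD: both tasks bound to the same subject
            other = b if task == a else a if task == b else None
            if other in self.performed and self.performed[other] != subject:
                return False
        return True

    def execute(self, subject, roles, task):
        assert self.can_execute(subject, roles, task), "policy violation"
        self.performed[task] = subject

rbac = ProcessRBAC()
rbac.role_tasks = {'clerk': {'enter invoice'}, 'manager': {'approve invoice'}}
rbac.sme = {('enter invoice', 'approve invoice')}   # four-eyes principle
rbac.execute('alice', ['clerk'], 'enter invoice')
print(rbac.can_execute('alice', ['clerk', 'manager'], 'approve invoice'))  # False
print(rbac.can_execute('bob', ['manager'], 'approve invoice'))             # True
```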

Niazi, M.: Systematic Review of Organizational Motivations for Adopting CMM-based SPI. Inform. & Softw. Technol. 50(7/8), 605-620

June 2008 · 434 Reads

Background: Software Process Improvement (SPI) is intended to improve software engineering, but can only be effective if used. To improve SPI’s uptake, we should understand why organizations adopt SPI. CMM-based SPI approaches are widely known and studied. Objective: We investigated why organizations adopt CMM-based SPI approaches, and how these motivations relate to organizations’ size. Method: We performed a systematic review, examining reasons reported in more than forty primary studies. Results: Reasons usually related to product quality and project performance, and less commonly, to process. Organizations reported customer reasons infrequently and employee reasons very rarely. We could not show that reasons related to size. Conclusion: Despite its origins in helping to address customer-related issues for the USAF, CMM-based SPI has mostly been adopted to help organizations improve project performance and product quality issues. This reinforces a view that the goal of SPI is not to improve process per se, but instead to provide business benefits.

Seven Process Modeling Guidelines (7PMG)

February 2010

·

9,047 Reads

Business process modeling is heavily applied in practice, but important quality issues have not been addressed thoroughly by research. A notorious problem is the low level of modeling competence that many casual modelers in process documentation projects have. Existing approaches towards model quality might be of benefit, but they suffer from at least one of the following problems. On the one hand, frameworks like SEQUAL and the Guidelines of Modeling are too abstract to be applicable for novices and non-experts in practice. On the other hand, there are collections of pragmatic hints that lack a sound research foundation. In this paper, we analyze existing research on relationships between model structure on the one hand and error probability and understanding on the other hand. As a synthesis we propose a set of seven process modeling guidelines (7PMG). Each of these guidelines builds on strong empirical insights, yet they are formulated to be intuitive to practitioners. Furthermore, we analyze how the guidelines are prioritized by industry experts. In this regard, the seven guidelines have the potential to serve as an important tool of knowledge transfer from academia into modeling practice.
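A few of the guidelines lend themselves to simple automated checks, as the sketch below illustrates for element count, OR connectors, start/end events, and activity labels. The guideline numbering follows our reading of the published 7PMG list, and the model representation, thresholds, and label heuristic are illustrative assumptions.

```python
def check_7pmg_subset(model):
    """Sketch of automated checks for a few of the seven guidelines
    (element count, OR connectors, start/end events, verb-object labels);
    the heuristics here are illustrative only."""
    findings = []
    nodes = model['nodes']                        # node label -> node kind
    if len(nodes) > 50:
        findings.append('G7: more than 50 elements - consider decomposing')
    if any(kind == 'or' for kind in nodes.values()):
        findings.append('G5: avoid OR routing elements')
    starts = [n for n, k in nodes.items() if k == 'start']
    ends = [n for n, k in nodes.items() if k == 'end']
    if len(starts) != 1 or len(ends) != 1:
        findings.append('G3: use exactly one start and one end event')
    for n, k in nodes.items():
        if k == 'task' and len(n.split()) < 2:    # crude verb-object heuristic
            findings.append(f'G6: label "{n}" is not verb-object style')
    return findings

# Hypothetical process model with an OR connector, two end events,
# and a noun-only activity label.
model = {'nodes': {'s': 'start', 'check order': 'task', 'invoicing': 'task',
                   'or1': 'or', 'e1': 'end', 'e2': 'end'}}
print(check_7pmg_subset(model))
```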

Improving the quality of ISO 9001 audits in the field of software

December 1998

·

34 Reads

Internal quality system audits are a compliance requirement of ISO 9001/2. Requirements of the internal quality audit clause define quality audit outputs in terms of audit planning and scheduling, recording of results and follow-up of audit activities. ISO 10011-1 provides general guidance on the conduct of audits. Where domain-specific guidance is required on the audit, little support is available from ISO and national standards bodies. In particular, there is no freely available checklist of support questions which probe the compliance requirements of ISO 9001/2, or provide industry-specific guidance questions for probing the effectiveness of the implementation of the ISO 9001/2 process clauses. This paper reviews a national project to develop a checklist to probe ISO 9001 requirements for the field of software, and to offer guidance for examining the effectiveness of the implementation of the process clauses in the software domain. It is believed that this project is important for two reasons: first, the ISO 9001 compliance questions in the checklist are generically applicable; and secondly, the structure of the checklist has been devised to be tailorable to a wide range of application domains.

Customizing ISO 9126 quality model for evaluation of B2B applications

March 2009

·

2,007 Reads

A software quality model acts as a framework for the evaluation of attributes of an application that contribute to the software quality. In this paper, a quality model is presented for evaluation of B2B applications. First, the most well-known quality models are studied, and reasons for using ISO 9126 quality model as the basis are discussed. This model, then, is customized in accordance with special characteristics of B2B applications. The customization is done by extracting the quality factors from web applications and B2B e-commerce applications, weighting these factors from the viewpoints of both developers and end users, and adding them to the model. Finally, as a case study, ISACO portal is evaluated by the proposed model.
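The weighting of quality factors from different stakeholder viewpoints, mentioned in this abstract, can be sketched as a simple weighted score. The factors, scores, and weights below are hypothetical and not taken from the paper's evaluation of the ISACO portal.

```python
def weighted_quality_score(scores, weights):
    """Sketch of a weighted evaluation: each quality factor gets a score and a
    stakeholder-assigned weight; the overall score is the weight-normalized sum."""
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_weight

scores = {'functionality': 4, 'usability': 3, 'security': 5}   # 1-5 ratings
developer_weights = {'functionality': 0.5, 'usability': 0.2, 'security': 0.3}
end_user_weights = {'functionality': 0.3, 'usability': 0.5, 'security': 0.2}

print(round(weighted_quality_score(scores, developer_weights), 2))  # -> 4.1
print(round(weighted_quality_score(scores, end_user_weights), 2))   # -> 3.7
```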
