IEEE Transactions on Software Engineering

Published by Institute of Electrical and Electronics Engineers
Online ISSN: 0098-5589
Article
Performance is a nonfunctional software attribute that plays a crucial role across a wide range of application domains, from safety-critical systems to e-commerce applications. Software risk can be quantified as a combination of the probability that a software system may fail and the severity of the damage caused by the failure. In this paper, we devise a methodology for estimating a performance-based risk factor, which originates from violations of performance requirements (namely, performance failures). The methodology elaborates annotated UML diagrams to estimate the performance failure probability and combines it with a failure severity estimate obtained using functional failure analysis. We are thus able to determine risky scenarios as well as risky software components, and the analysis feedback can be used to improve the software design. We illustrate the methodology on an e-commerce case study using a step-by-step approach, and then provide a brief description of a case study based on a large real system.
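
A minimal sketch of the combination step the abstract describes, ranking scenarios by probability times severity. The scenario names, probabilities, and severity weights below are invented for illustration; they are not the paper's data.

```python
# Risk factor as the combination of performance-failure probability and
# failure severity; all numbers here are illustrative placeholders.

scenarios = {
    # scenario: (P(performance failure), severity in [0, 1])
    "browse_catalog":  (0.02, 0.3),
    "checkout":        (0.08, 0.9),
    "payment_gateway": (0.05, 1.0),
}

def risk_factor(p_fail: float, severity: float) -> float:
    """Risk as the product of failure probability and severity."""
    return p_fail * severity

ranked = sorted(
    ((name, risk_factor(p, s)) for name, (p, s) in scenarios.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, risk in ranked:
    print(f"{name:16s} risk = {risk:.3f}")  # riskiest scenario first
```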
 
Article
In the above-titled paper (ibid., vol. 33, no. 8, pp. 526-543, Aug. 2007), there were several mistakes. The corrections are presented here.
 
Article
A typographic error in rule 10 in the above-titled paper (see ibid., vol. 18, no. 12, pp. 1053-1064, Dec. 1992) is corrected.
 
Article
The authors correct several typographical errors and misinterpretations in their above-mentioned paper (see ibid., vol. SE-12, no. 1, pp. 3-11, Jan. 1986).
 
Article
Software risk management can be defined as an attempt to formalize the risk-oriented correlates of development success into a readily applicable set of principles and practices. Using a survey instrument, we investigate this claim further. The investigation addresses the following questions: 1) What are the components of software development risk? 2) How does risk management mitigate these risk components? 3) What environmental factors, if any, influence them? Using principal component analysis, we identify six software risk components: 1) scheduling and timing risks, 2) functionality risks, 3) subcontracting risks, 4) requirements management risks, 5) resource usage and performance risks, and 6) personnel management risks. Using one-way ANOVA with multiple comparisons, we examine how risk management (or the lack of it) and environmental factors (such as development methods and manager experience) influence each risk component. The analysis shows that awareness of the importance of risk management and systematic practices to manage risks have an effect on scheduling risks, requirements management risks, and personnel management risks. Environmental contingencies were observed to affect all risk components. This suggests that software risks can best be managed by combining specific risk management considerations with a detailed understanding of the environmental context and with sound managerial practices, such as relying on experienced and well-educated project managers and launching correctly sized projects.
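
A sketch of the analysis style the study uses: principal component analysis over a respondents-by-items survey matrix. The data here is random placeholder input, so the extracted components are meaningless; the snippet only shows the mechanics, not the study's results.

```python
# PCA on survey responses to group risk items into components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 120 respondents rating 20 risk items on a 1-5 Likert scale (random stand-in data)
responses = rng.integers(1, 6, size=(120, 20)).astype(float)

pca = PCA(n_components=6)            # six components, as in the paper
scores = pca.fit_transform(responses)
print("variance explained per component:",
      np.round(pca.explained_variance_ratio_, 3))
```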
 
Article
The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. The model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationships, or links, from design properties to quality attributes are weighted in accordance with their influence and importance. The model is validated by comparing its results against empirical data and expert opinion on several large commercial object-oriented systems. A key attribute of the model is that it can be easily modified to include different relationships and weights, thus providing a practical quality assessment tool adaptable to a variety of demands.
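
A toy sketch of the weighted-link idea: each high-level quality attribute is computed as a weighted sum of normalized design-property scores, with negative weights for properties that detract from an attribute. The property scores and weights below are illustrative placeholders, not the paper's calibrated values.

```python
# Quality attributes as weighted sums of design-property metrics.
design_properties = {"encapsulation": 0.8, "coupling": 0.4,
                     "cohesion": 0.7, "modularity": 0.6}

# attribute -> {property: weight}; e.g., coupling weighs negatively.
links = {
    "reusability": {"coupling": -0.5, "cohesion": 0.5, "modularity": 0.5},
    "flexibility": {"encapsulation": 0.25, "coupling": -0.25, "modularity": 0.5},
}

for attribute, weights in links.items():
    score = sum(w * design_properties[p] for p, w in weights.items())
    print(f"{attribute}: {score:+.2f}")
```

Because the links and weights live in plain data, swapping in different relationships or weights, as the abstract emphasizes, requires no change to the evaluation logic.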
 
Article
A technology for automatically assembling large software libraries which promote software reuse by helping the user locate the components closest to her/his needs is described. Software libraries are automatically assembled from a set of unorganized components by using information retrieval techniques. The construction of the library is done in two steps. First, attributes are automatically extracted from natural language documentation by using an indexing scheme based on the notions of lexical affinities and quantity of information. Then a hierarchy for browsing is automatically generated using a clustering technique which draws only on the information provided by the attributes. Due to the free-text indexing scheme, tools following this approach can accept free-style natural language queries.
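
A sketch of the two-step pipeline in modern terms: index the free-text documentation of each component, then cluster components hierarchically on the extracted attributes. TF-IDF stands in here for the paper's lexical-affinity indexing scheme; the components and documentation strings are invented.

```python
# Step 1: index free-text documentation; Step 2: build a browsing hierarchy.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage

docs = {
    "qsort": "sorts an array in place using quicksort",
    "msort": "sorts a list using stable merge sort",
    "grep":  "searches text lines matching a pattern",
}
X = TfidfVectorizer().fit_transform(docs.values()).toarray()

# Agglomerative clustering yields the hierarchy used for browsing;
# each row of the linkage matrix merges two clusters.
tree = linkage(X, method="average")
print(tree)
```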
 
Article
A description and analysis of concurrent systems, such as communication systems, whose behavior is dependent on explicit values of time is presented. An enumerative method is proposed in order to exhaustively validate the behavior of P. Merlin's time Petri net model (1974). This method allows formal verification of time-dependent systems. It is applied to the specification and verification of the alternating bit protocol as a simple illustrative example.
 
Article
Software models typically contain many inconsistencies, and consistency checkers help engineers find them. Even if engineers are willing to tolerate inconsistencies, they are better off knowing about their existence to avoid follow-on errors and unnecessary rework. However, current approaches do not detect or track inconsistencies fast enough. This paper presents an automated approach for detecting and tracking inconsistencies in real time (while the model changes). Engineers only need to define consistency rules, in any language, and our approach automatically identifies how model changes affect these consistency rules. It does this by observing the behavior of consistency rules to understand how they affect the model. The approach is quick, correct, scalable, fully automated, and easy to use, as it does not require any special skills from the engineers using it. We evaluated the approach on 34 models with model sizes of up to 162,237 model elements and 24 types of consistency rules. Our empirical evaluation shows that our approach requires only 1.4 ms, on average, to reevaluate the consistency of the model after a change; its performance is not noticeably affected by the model size or common consistency rules, but only by the number of consistency rules, at the expense of a quite acceptable, linearly increasing memory consumption.
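
A minimal sketch of the observation idea: while a rule evaluates, record which model elements it reads (its scope); when an element changes, re-evaluate only the rules whose scope contains it. The model shape and rule format are invented for illustration, not the paper's implementation.

```python
# Scope-based incremental consistency checking.
class Model:
    def __init__(self, elements):
        self.elements = elements      # element id -> attribute dict
        self.reads = None             # scope recorder, active during evaluation

    def get(self, elem_id, attr):
        if self.reads is not None:
            self.reads.add(elem_id)   # record every element the rule touches
        return self.elements[elem_id][attr]

class Checker:
    def __init__(self, model, rules):
        self.model, self.rules = model, rules
        self.scopes = {}              # rule name -> set of element ids read
        for name, rule in rules.items():
            self.evaluate(name, rule)

    def evaluate(self, name, rule):
        self.model.reads = set()
        ok = rule(self.model)
        self.scopes[name] = self.model.reads
        self.model.reads = None
        print(f"rule {name}: {'consistent' if ok else 'INCONSISTENT'}")

    def on_change(self, elem_id):
        for name, scope in self.scopes.items():
            if elem_id in scope:      # only affected rules are re-run
                self.evaluate(name, self.rules[name])

model = Model({"msg1": {"name": "getData"},
               "db":   {"methods": ["getData", "close"]}})
rules = {"message-exists":
         lambda m: m.get("msg1", "name") in m.get("db", "methods")}
checker = Checker(model, rules)

model.elements["msg1"]["name"] = "fetchData"  # a model change...
checker.on_change("msg1")                     # ...triggers only affected rules
```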
 
Article
The major components of the MAP 2.1 conformance test system are described. Protocol conformance testing and program testing are compared, and the scope and process of dynamic conformance testing are reviewed. The architecture of the test system is presented, first in terms of the developing ISO (International Organization for Standardization) framework, and then in terms of run-time components. Several specific tools which comprise the test system are described. These tools include test engines, a high-level control tool, monitoring and analysis tools, and document handling tools. Benefits and limitations of the test tools are examined. Conclusions and suggestions for future efforts are provided.
 
Article
The International Symposium on Software Testing and Analysis (ISSTA) is the leading research conference in software testing and analysis. ISSTA brings together academics, industrial researchers, and practitioners to present and discuss the most promising approaches for using testing and analysis to assess and improve software and the processes by which it is engineered. ISSTA 2004 was held at the Parker House Hotel in Boston, Massachusetts, July 11-14, 2004. On this occasion the conference was co-located with the 16th Computer-Aided Verification Conference (CAV 2004), with the two conferences sharing a day of sessions. The conference was also preceded by two workshops: the Workshop on Testing, Analysis, and Verification of Web Services and the Workshop on Empirical Research in Software Testing. The number of papers submitted to ISSTA, and the ratio of that number to accepted papers, are a good indication of ISSTA's health and technical reputation as a conference. For ISSTA 2004, 93 research papers (a near record) and eight tools papers were submitted. Submissions were reviewed by at least three reviewers and discussed in a Program Committee meeting. The result was a high-quality technical program with 26 research papers and two tools papers. This issue of IEEE Transactions on Software Engineering presents a special section containing papers based on five of the best papers of ISSTA 2004. These papers were part of a set of seven papers selected by the ISSTA Program Committee from among the papers presented at the conference. The selected set, following revisions and enhancements by the authors, went through the standard TSE review process involving at least three anonymous reviewers per paper, overseen by myself as guest editor (or, in the case of one paper on which I had a conflict, by John Knight). The result is a selection of five excellent papers.
 
Article
The four papers in this special section are extended versions of selected papers from the 16th ACM International Symposium on the Foundations of Software Engineering, held in Atlanta, Georgia, 11-13 November 2008.
 
Article
In the past, a number of methods have been proposed to model and validate communication protocols that have already been designed. However, design criteria and design aids are still lacking for designing correct protocols. The objective of developing automated protocol synthesizers is to provide a systematic way of designing new communication protocols such that their correctness can be ensured. This paper presents an implementation of an automated protocol synthesizer (APS) which automatically generates the peer entity model from the given model of a single local entity. The generated protocol is guaranteed to be well behaved if the given entity model possesses certain desirable properties. These properties are very natural and can be checked by APS from the structure of the state transition graph of the given entity model. This paper also presents the application of APS to a modified CCITT X.21 Recommendation.
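
A toy sketch of the core synthesis idea: derive the peer's state machine by mirroring the local entity's transitions, turning each send ("!msg") into the peer's receive ("?msg") and vice versa. The transition encoding and the example machine are invented for illustration; APS itself checks additional structural properties the sketch omits.

```python
# Derive a peer entity by mirroring send/receive events of the local entity.
local_entity = {
    # (state, event) -> next state; "!" marks a send, "?" a receive
    ("idle", "!connect"):    "waiting",
    ("waiting", "?accept"):  "connected",
    ("connected", "!data"):  "connected",
}

def mirror(event: str) -> str:
    """A send on one side is a receive on the other, and vice versa."""
    return ("?" + event[1:]) if event.startswith("!") else ("!" + event[1:])

peer_entity = {(state, mirror(event)): nxt
               for (state, event), nxt in local_entity.items()}

for (state, event), nxt in peer_entity.items():
    print(f"{state} --{event}--> {nxt}")
```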
 
Article
We describe a number of results from a quantitative study of faults and failures in two releases of a major commercial software system. We tested a range of basic software engineering hypotheses relating to: the Pareto principle of distribution of faults and failures; the use of early fault data to predict later fault and failure data; metrics for fault prediction; and benchmarking fault data. For example, we found strong evidence that a small number of modules contain most of the faults discovered in prerelease testing and that a very small number of modules contain most of the faults discovered in operation. We found no evidence to support previous claims relating module size to fault density, nor did we find evidence that popular complexity metrics are good predictors of either fault-prone or failure-prone modules. We confirmed that the number of faults discovered in prerelease testing is an order of magnitude greater than the number discovered in 12 months of operational use. The most important result was strong evidence of a counterintuitive relationship between pre- and postrelease faults: those modules which are the most fault-prone prerelease are among the least fault-prone postrelease, while, conversely, the modules which are most fault-prone postrelease are among the least fault-prone prerelease. This observation has serious ramifications for the commonly used fault density measure. Our results provide data points in building up an empirical picture of the software development process.
 
Article
One of the most important features of modeling tools is the generation of output. The output may be documentation, source code, a net list, or any other presentation of the system being constructed. The process of output generation may be considered as the automatic creation of a target model from a model in the source modeling domain. This translation does not need to be accomplished in a single step. Instead, a tool may generate multiple intermediate models as other views of the system. These models may be used either as better descriptions of the system or as a descent down the abstraction levels of the user-defined model, gradually leading to the desired implementation. If the modeling domains have their metamodels defined in terms of object-oriented concepts, the models consist of instances of the abstractions from the metamodels and links between them. A new technique for specifying the mapping between different modeling domains is proposed in the paper. It uses UML object diagrams that show the instances and links of the target model that should be created during automatic translations. The diagrams are extended with the proposed concepts of conditional, repetitive, parameterized, and polymorphic model creation, implemented by the standard UML extensibility mechanisms. Several examples from different engineering domains are provided, illustrating the applicability and benefits of the approach. The first experimental results show that the specifications may lead to better reuse and shorter production time when developing customized output generators.
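
A very small sketch of the mapping idea in imperative form, not the paper's UML notation: for each source-model instance matching a pattern, the prescribed target-model instances are created. The loop corresponds to repetitive creation and the test to conditional creation; models and element kinds are invented.

```python
# Source-to-target model mapping with repetitive and conditional creation.
source_model = [
    {"kind": "Entity", "name": "Order", "persistent": True},
    {"kind": "Entity", "name": "TempView", "persistent": False},
]

target_model = []
for elem in source_model:                                  # repetitive creation
    if elem["kind"] == "Entity" and elem["persistent"]:    # conditional creation
        target_model.append({"kind": "Table", "name": elem["name"].lower()})

print(target_model)   # [{'kind': 'Table', 'name': 'order'}]
```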
 
Article
We have located the results of empirical studies on elicitation techniques and aggregated these results to gather empirically grounded evidence. Our chosen surveying methodology was systematic review, whereas we used an adaptation of comparative analysis for aggregation because meta-analysis techniques could not be applied. The review identified 564 publications from the SCOPUS, IEEEXPLORE, and ACM DL databases, as well as Google. We selected and extracted data from 26 of those publications. The selected publications contain 30 empirical studies. These studies were designed to test 43 elicitation techniques and 50 different response variables. We obtained 100 separate results from the experiments. The aggregation generated 17 pieces of knowledge about the interviewing, laddering, sorting, and protocol analysis elicitation techniques. We provide a set of guidelines based on the gathered pieces of knowledge.
 
[Figure captions from the article below: graphical model of MR interval for (a) Department A and (b) Department B; proportion of multiple-site MRs over time.]
Article
Global software development is rapidly becoming the norm for technology companies. Previous qualitative research suggests that distributed development may increase development cycle time for individual work items (modification requests). We use both data from the source code change management system and survey data to model the extent of delay in a distributed software development organization and explore several possible mechanisms for this delay. One key finding is that distributed work items appear to take about two and one-half times as long to complete as similar items where all the work is colocated. The data strongly suggest a mechanism for the delay, i.e., that distributed work items involve more people than comparable same-site work items, and the number of people involved is strongly related to the calendar time to complete a work item. We replicate the analysis of change data in a different organization with a different product and different sites and confirm our main findings. We also report survey results showing differences between same-site and distributed social networks, testing several hypotheses about characteristics of distributed social networks that may be related to delay. We discuss implications of our findings for practices and collaboration technology that have the potential for dramatically speeding distributed software development.
 
Article
A new model of the software development process is presented and used to derive the form of the resource consumption curve of a project over its life cycle. The function obtained differs in detail from the Rayleigh curve previously used in fitting actual project data. The main advantage of the new model is that it relates the rate of progress which can be achieved in developing software to the structure of the system being developed. This leads to a more testable theory, and it also becomes possible to predict how the use of structured programming methods may alter patterns of life cycle resource consumption.
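
For reference, the Rayleigh curve the paper takes as its point of comparison has effort rate E'(t) = 2*K*a*t*exp(-a*t^2), where K is total project effort and a controls when staffing peaks. The parameter values below are illustrative, not fitted to any project.

```python
# Rayleigh staffing curve: effort rate over the project life cycle.
import math

K, a = 100.0, 0.02   # total effort (person-months) and shape parameter

def rayleigh_rate(t: float) -> float:
    """dE/dt = 2*K*a*t*exp(-a*t^2); cumulative E(t) = K*(1 - exp(-a*t^2))."""
    return 2 * K * a * t * math.exp(-a * t * t)

for month in range(0, 25, 4):
    print(f"month {month:2d}: {rayleigh_rate(month):6.2f} person-months/month")
```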
 
Article
This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. We find three broad categories of software failures: error handling (ERH), program control or logic (CTL), and hardware related (HS); more than 25 percent of software failures are found to occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. For example, it is shown that the risk of a software failure increases in a nonlinear fashion with the amount of interactive processing, as measured by parameters such as the paging rate and the amount of overhead (operating system CPU time). The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. The paper discusses possible reasons for the observed workload failure dependency based on detailed investigations of the failure data.
 
Article
In advanced service-oriented systems, complex applications, described as abstract business processes, can be executed by invoking a number of available Web services. End users can specify different preferences and constraints, and service selection can be performed dynamically, identifying the best set of services available at runtime. In this paper, we introduce a new modeling approach to the Web service selection problem that is particularly effective for large processes and when QoS constraints are severe. In the model, the Web service selection problem is formalized as a mixed integer linear programming problem, loop peeling is adopted in the optimization, and constraints posed by stateful Web services are considered. Moreover, negotiation techniques are exploited to identify a feasible solution of the problem if one does not exist. Experimental results compare our method with other solutions proposed in the literature and demonstrate the effectiveness of our approach toward the identification of an optimal solution to the QoS-constrained Web service selection problem.
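
A small sketch of the formulation style, assuming a generic 0-1 linear program rather than the paper's full model: pick exactly one candidate service per abstract task, minimizing cost subject to an end-to-end response-time budget. Task names, costs, and times are invented for illustration.

```python
# Service selection as a 0-1 (mixed integer) linear program, using PuLP.
import pulp

candidates = {            # task -> {service: (cost, response_time_ms)}
    "search":  {"s1": (2, 120), "s2": (5, 40)},
    "payment": {"p1": (4, 200), "p2": (9, 80)},
}
MAX_TIME = 250            # end-to-end response-time budget (ms)

prob = pulp.LpProblem("service_selection", pulp.LpMinimize)
x = {(t, s): pulp.LpVariable(f"x_{t}_{s}", cat="Binary")
     for t, svcs in candidates.items() for s in svcs}

# Objective: total cost of the selected services.
prob += pulp.lpSum(x[t, s] * candidates[t][s][0] for t, s in x)
# QoS constraint: total response time within budget.
prob += pulp.lpSum(x[t, s] * candidates[t][s][1] for t, s in x) <= MAX_TIME
# Exactly one concrete service per abstract task.
for t, svcs in candidates.items():
    prob += pulp.lpSum(x[t, s] for s in svcs) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([key for key, var in x.items() if var.value() == 1])
```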
 
Article
This paper describes ASSIST-V, a software tool designed for use in the teaching of operating systems, file management, and machine architecture courses. ASSIST-V is a program that provides an environment for the implementation, testing, and evaluation of systems software for the IBM 360 series machines. This capability is achieved by simulating all relevant aspects of the machine's architecture. In particular, ASSIST-V simulates interrupts, I/O channels, and I/O devices, as well as all IBM 360 machine instructions. In addition, ASSIST-V provides extensive debugging and statistics-gathering aids.
 
Article
While interactive multimedia animation is a very compelling medium, few people are able to express themselves in it. There are too many low-level details that have to do not with the desired content (e.g., shapes, appearance, and behavior) but rather with how to get a computer to present the content. For instance, behaviors such as motion and growth are generally gradual, continuous phenomena, and many such behaviors go on simultaneously. Computers, on the other hand, cannot directly accommodate either of these basic properties, because they do their work in discrete steps rather than continuously, and they only do one thing at a time. Graphics programmers have to spend much of their effort bridging the gap between what an animation is and how to present it on a computer. We propose that this situation can be improved by a change of language, and present Fran, synthesized by complementing an existing declarative host language, Haskell, with an embedded domain-specific vocabulary for modeled animation. As demonstrated in a collection of examples, the resulting animation descriptions are not only relatively easy to write, but also highly composable.
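
Fran itself is a Haskell vocabulary; the toy class below only mimics its central idea in Python terms: a behavior is a value varying continuously with time, and behaviors compose pointwise, with discrete sampling deferred to presentation. Names like `wiggle` are illustrative, not Fran's API.

```python
# Behaviors as functions of continuous time, composed pointwise.
import math

class Behavior:
    def __init__(self, f):
        self.f = f                                  # time -> value

    def __call__(self, t):
        return self.f(t)

    def __add__(self, other):
        return Behavior(lambda t: self(t) + other(t))

    def __mul__(self, other):
        return Behavior(lambda t: self(t) * other(t))

const = lambda v: Behavior(lambda t: v)
wiggle = Behavior(lambda t: math.sin(2 * math.pi * t))  # oscillates in [-1, 1]

# A radius that breathes around 1.0; the model itself is continuous.
radius = const(1.0) + wiggle * const(0.5)

for step in range(5):            # only presentation samples discretely
    t = step / 4
    print(f"t={t:.2f}  radius={radius(t):+.3f}")
```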
 
Article
The authors discuss the upper and lower bounds on the accuracy of the time synchronization achieved by the algorithm implemented in TEMPO, the distributed service that synchronizes the clocks of the University of California, Berkeley, UNIX 4.3BSD systems. The accuracy is shown to be a function of the network transmission latency; it depends linearly upon the drift rate of the clocks and the interval between synchronizations. TEMPO keeps the clocks of the VAX computers in a local area network synchronized with an accuracy comparable to the resolution of single-machine clocks. Comparison with other clock synchronization algorithms shows that TEMPO, in an environment with no Byzantine faults, can achieve better synchronization at a lower cost.
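
A back-of-the-envelope rendering of the stated relationship: worst-case clock error grows linearly with the drift rate and the synchronization interval, plus a term for latency uncertainty. All numbers are illustrative, not TEMPO's measured figures.

```python
# Worst-case skew between synchronization rounds.
drift_rate = 1e-5           # seconds of drift per second (10 ppm clock)
sync_interval = 240.0       # seconds between synchronization rounds
latency_uncertainty = 0.01  # seconds of unmodeled network transmission delay

worst_case_error = latency_uncertainty + drift_rate * sync_interval
print(f"worst-case skew between syncs: {worst_case_error * 1000:.1f} ms")
```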
 
Article
Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement.
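
Two of the suite's metrics can be computed mechanically from a class hierarchy: depth of inheritance tree (DIT) and number of children (NOC). The toy hierarchy below is invented for illustration; here DIT counts the path up to Python's implicit root class, a convention choice the sketch makes explicit.

```python
# Computing DIT and NOC over an illustrative Python class hierarchy.
class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Square(Polygon): pass

def dit(cls) -> int:
    """Depth of inheritance tree: longest path of base classes to the root."""
    return max((dit(base) for base in cls.__bases__), default=-1) + 1

def noc(cls) -> int:
    """Number of children: count of immediate subclasses."""
    return len(cls.__subclasses__())

for c in (Shape, Polygon, Triangle):
    print(f"{c.__name__}: DIT={dit(c)}, NOC={noc(c)}")
```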
 
Article
While empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behaviour. The paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular, how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout.
 
Article
Many critical real-time applications are implemented as time-triggered systems. We present a systematic way to derive such time-triggered implementations from algorithms specified as functional programs (in which form their correctness and fault-tolerance properties can be formally and mechanically verified with relative ease). The functional program is first transformed into an untimed synchronous system and then to its time-triggered implementation. The first step is specific to the algorithm concerned, but the second is generic, and we prove its correctness. This proof has been formalized and mechanically checked with the PVS verification system. The approach provides a methodology that can ease the formal specification and assurance of critical fault-tolerant systems.
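
A sketch of the target shape of the derivation, under the assumption that the algorithm is a pure step function: the untimed synchronous system is a function from state and inputs to new state and outputs, and the time-triggered layer simply invokes it on a fixed schedule. The step function and slot period here are trivial placeholders.

```python
# Untimed synchronous step function driven by a time-triggered schedule.
def step(state, inputs):
    """Pure synchronous system: no notion of real time inside."""
    new_state = state + sum(inputs)
    return new_state, {"out": new_state}

SLOT_MS = 10                      # fixed schedule period of the executive
state = 0
for slot in range(3):             # three rounds of the time-triggered loop
    inputs = [1, 2]               # values latched at the slot boundary
    state, outputs = step(state, inputs)
    print(f"slot {slot} (t={slot * SLOT_MS} ms): {outputs}")
```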
 
Article
Many organizations want to predict the number of defects (faults) in software systems before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this, numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state of the art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the “quality” of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly, many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points, the Goldilocks Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of “software decomposition” in order to test hypotheses about defect introduction and help construct a better science of software engineering.
 
Article
An in-depth analysis of the 80x86 processor families identifies architectural properties that may have unexpected, and undesirable, results in secure computer systems. In addition, reported implementation errors in some processor versions render them undesirable for secure systems because of potential security and reliability problems. We discuss the imbalance in scrutiny for hardware protection mechanisms relative to software, and why this imbalance is increasingly difficult to justify as hardware complexity increases. We illustrate this difficulty with examples of architectural subtleties and reported implementation errors.
 
Article
Petri nets in which random delays are associated with atomic transitions are defined in a comprehensive framework that contains most of the models already proposed in the literature. To include generally distributed firing times in the model, one must specify the way in which the next transition to fire is chosen and how the model keeps track of its past history; this set of specifications is called an execution policy. A discussion is presented of the impact that different execution policies have on the semantics of the model, as well as the characteristics of the stochastic process associated with each of these policies. When the execution policy is completely specified by the transition with the minimum delay (race policy) and the firing distributions are of the phase type, an algorithm is provided that automatically converts the stochastic process into a continuous-time homogeneous Markov chain. An execution policy based on the choice of the next transition to fire independently of the associated delay (preselection policy) is introduced, and its semantics is discussed together with possible implementation strategies.
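
A minimal simulation of the race policy described above: every enabled transition samples a firing delay from its distribution, and the one with the minimum delay fires. The two-transition net and its exponential distributions are invented for illustration.

```python
# Race policy: the enabled transition with the minimum sampled delay fires.
import random

random.seed(1)
transitions = {                   # transition name -> firing-delay sampler
    "t_fast": lambda: random.expovariate(2.0),   # mean delay 0.5
    "t_slow": lambda: random.expovariate(0.5),   # mean delay 2.0
}

def race(enabled):
    delays = {name: sample() for name, sample in enabled.items()}
    winner = min(delays, key=delays.get)
    return winner, delays[winner]

wins = {"t_fast": 0, "t_slow": 0}
for _ in range(10_000):
    winner, _delay = race(transitions)
    wins[winner] += 1
# For competing exponentials, t_fast should win with probability
# 2.0 / (2.0 + 0.5) = 0.8.
print(wins)
```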
 
Article
Model building is identified as the most important part of the analysis and design process for software systems. A set of primitives to support this process is presented, along with a formal language, MSG.84, for recording the results of analysis and design. The semantics of the notation is defined in terms of the actor formalism, which is based on a message passing paradigm. The automatic derivation of a graphical form of the specification for user review is discussed. Potentials for computer-aided design based on MSG.84 are indicated.
 
Article
In a distributed computing system a modular program must have its modules assigned among the processors so as to avoid excessive interprocessor communication while taking advantage of specific efficiencies of some processors in executing some program modules. In this paper we show that this program module assignment problem can be solved efficiently by making use of the well-known Ford–Fulkerson algorithm for finding maximum flows in commodity networks as modified by Edmonds and Karp, Dinic, and Karzanov. A solution to the two-processor problem is given, and extensions to three and n-processors are considered with partial results given without a complete efficient solution.
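
A sketch of the reduction for the two-processor case, in the Stone style the paper builds on: edges from the source carry a module's execution cost on processor B, edges to the sink its cost on A, and inter-module edges carry communication costs that are paid only when the cut separates the modules. Module names and costs are invented for illustration.

```python
# Two-processor module assignment via minimum cut (max-flow), using networkx.
import networkx as nx

G = nx.DiGraph()
exec_cost = {"parser": (2, 6), "solver": (9, 3), "ui": (1, 8)}  # (cost on A, cost on B)
for m, (on_a, on_b) in exec_cost.items():
    G.add_edge("A", m, capacity=on_b)   # cut edge iff m assigned to B: pay B's cost
    G.add_edge(m, "B", capacity=on_a)   # cut edge iff m assigned to A: pay A's cost
for u, v, comm in [("parser", "solver", 4), ("solver", "ui", 1)]:
    G.add_edge(u, v, capacity=comm)     # communication cost, paid only if split
    G.add_edge(v, u, capacity=comm)

total_cost, (side_a, side_b) = nx.minimum_cut(G, "A", "B")
print("total cost:", total_cost)
print("run on A:", sorted(side_a - {"A"}), " run on B:", sorted(side_b - {"B"}))
```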
 
Article
Girgis and Woodward [1] implemented a system for FORTRAN-77 programs that integrates weak mutation and data flow analysis. The system instruments a source program to collect program execution histories. It is then able to report on the completeness of the test data with respect to the weak mutation and data flow testing criteria. Of the five elementary program components considered by Howden, variable definitions, variable references, arithmetic expressions, and relational expressions are monitored by that system. It applies three types of mutations: wrong-variable, off-by-a-constant, and wrong-relational-operator. Experiments have been carried out using that system to compare the error-exposing ability of the weak mutation, data flow, and control flow testing strategies. The results of these experiments were reported in [2].
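
A toy illustration of weak mutation at the expression level, not the FORTRAN system above: a mutant of a single component (here, a relational operator) is "killed" if the component itself yields a different value on some test input, without running the whole program to completion. The component and test inputs are invented.

```python
# Weak mutation of a relational operator: compare component-level results.
import operator

original = operator.lt                       # the component under test: x < y
mutants = {"<=": operator.le, ">": operator.gt}

tests = [(1, 2), (2, 2), (3, 2)]             # test inputs reaching the component

for name, mutant in mutants.items():
    killed = any(original(x, y) != mutant(x, y) for x, y in tests)
    print(f"mutant '{name}': {'killed' if killed else 'survived'}")
```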
 
Article
The ABE multilevel architecture for developing intelligent systems addresses the key problems of intelligent systems engineering: large-scale applications and the reuse and integration of software components. ABE defines a virtual machine for module-oriented programming and a cooperative operating system that provides access to the capabilities of that virtual machine. On top of the virtual machine, ABE provides a number of system design and development frameworks, which embody such programming metaphors as control flow, blackboards, and dataflow. These frameworks support the construction of capabilities, including knowledge processing tools, which span a range from primitive modules to skeletal systems. Finally, applications can be built on skeletal systems. In addition, ABE supports the importation of existing software, including both conventional and knowledge processing tools.
 
Article
In a recent article by P.G. Frankl and E.J. Weyuker (see ibid., vol. 19, no. 3, pp. 962-975, 1993), results are reported that appear to establish a hierarchy of software test methods based on their respective abilities to detect faults. The methods used by Frankl and Weyuker to obtain this hierarchy constitute a new and important addition to their arsenal of tools. These tools were developed specifically to establish simple, useful comparisons of test data generation methods. This is the latest step in an ambitious test method classification program undertaken by Frankl and Weyuker and their collaborators. The article discusses the method and goes on to present a reply to the critique.
 
Article
The problem of authentication of mutually suspicious parties is one that is becoming more and more important with the proliferation of distributed systems. In this paper we describe a protocol, based on the difficulty of finding discrete logarithms over finite fields, by which users can verify whether they have matching credentials without revealing their credentials to each other unless there is a match. This protocol requires a trusted third party, but does not require it to be available to the users except when they sign up for the system. Thus it is useful in situations in which a trusted third party exists but is not available to all users at all times.
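
A toy illustration of the commutative-exponentiation idea underlying such discrete-log protocols, not the paper's exact protocol: each party raises a hashed credential to a private exponent modulo a prime, and because (g^a)^b = (g^b)^a (mod p), matching credentials produce matching final values without either credential being revealed in the clear. The parameters are tiny and deliberately NOT cryptographically secure.

```python
# Credential matching via commutative modular exponentiation (toy example).
import hashlib
import secrets

p = 2**127 - 1                     # toy prime modulus (far too small for real use)

def h(credential: str) -> int:
    """Hash a credential onto the group (illustrative encoding)."""
    return int.from_bytes(hashlib.sha256(credential.encode()).digest(), "big") % p

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

alice_blinded = pow(h("clearance:secret"), a, p)   # what Alice sends
bob_blinded = pow(h("clearance:secret"), b, p)     # what Bob sends

# Each side exponentiates the other's blinded value with its own secret;
# equal results reveal a match, and nothing else.
print(pow(bob_blinded, a, p) == pow(alice_blinded, b, p))   # True: match
```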
 
Article
A catalog of quick closed-form bounds on set intersection and union sizes is presented; the bounds can be expressed as rules and managed by a rule-based system architecture. These methods use a variety of statistics precomputed on the data, and they exploit homomorphisms (onto mappings) of the data items onto distributions that can be more easily analyzed. The methods can be used at any time, but they tend to work best when there are strong or complex correlations in the data, a circumstance that is poorly handled by the standard independence-assumption and distributional-assumption estimates.
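
The simplest entries in such a catalog need only the two set sizes and the universe size as precomputed statistics; the example sizes below are invented.

```python
# Closed-form bounds on intersection and union sizes from set cardinalities.
def intersection_bounds(n_a: int, n_b: int, n_universe: int):
    low = max(0, n_a + n_b - n_universe)   # inclusion-exclusion lower bound
    high = min(n_a, n_b)                   # a subset of both sets
    return low, high

def union_bounds(n_a: int, n_b: int, n_universe: int):
    low = max(n_a, n_b)                    # the union contains each set
    high = min(n_a + n_b, n_universe)      # no double counting beyond |U|
    return low, high

print(intersection_bounds(600, 700, 1000))   # (300, 600)
print(union_bounds(600, 700, 1000))          # (700, 1000)
```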
 
Article
The concept of abstract data types is extended to associate performance information with each abstract data type representation. The resulting performance abstract data type contains a functional part which describes the functional properties of the data type and a performance part which describes the performance characteristics of the data type. The performance part depends upon 1) the algorithms and data representation selected to represent the data type, 2) the particular machine on which the software realization of the data type is realized, and 3) the statistical properties of the actual data represented by the data objects involved in the data type. Methods for determining the necessary information to specify the performance part of the representation are discussed.
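
A minimal sketch of the pairing the abstract describes, with an invented class shape: a data-type representation that carries a functional part (the operations) alongside a performance part (a cost annotation per operation for a chosen representation, machine, and data profile). The cost figures are illustrative placeholders.

```python
# An abstract data type bundled with a performance part.
class PerformanceStack:
    # Performance part: expected cost per operation for this representation
    # on a given machine and data profile (illustrative figures).
    cost_model = {"push": "O(1), ~40 ns", "pop": "O(1), ~35 ns"}

    # Functional part: the usual stack semantics.
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

s = PerformanceStack()
s.push(1); s.push(2)
print(s.pop(), "| cost:", PerformanceStack.cost_model["pop"])
```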
 
Article
Software maintenance tools for program analysis and refactoring rely on a metamodel capturing the relevant properties of programs. However, what is considered relevant may change when the tools are extended with new analyses, refactorings, and new programming languages. This paper proposes a language-independent metamodel and an architecture to construct instances thereof, which is extensible for new analyses, refactorings, and new front-ends of programming languages. Due to the loose coupling between analysis, refactoring, and front-end components, new components can be added independently and can reuse existing ones. Two maintenance tools implementing the metamodel and the architecture, VIZZANALYZER and X-DEVELOP, serve as proof of concept.
 
Article
Programming languages have traditionally had more data types than database systems. The flexibility of abstract types could make a database system more useful in supporting application development. Abstract types allow users to think about and manipulate data in terms and structures that they are familiar with. This paper proposes that databases have a type system interface and describes a representation of a type system in terms of relations. The type system model supports a variety of programming language constructs, such as user-defined parameterized data types and user-defined generic operations. The efficiency of the type system is compared to the access time of the database system.
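
A small sketch of the representational idea under an invented schema: types and their operations become ordinary relations (here, lists of tuples), so the type system itself is queryable data rather than compiler-internal structure.

```python
# A type system represented as relations.
types = [            # relation: (type_id, name)
    (1, "int"),
    (2, "list[int]"),
]
operations = [       # relation: (op_name, arg_type_id, result_type_id)
    ("head", 2, 1),
    ("length", 2, 1),
]

def result_type(op_name: str, arg_type_id: int):
    """Query the operations relation, joining back to the types relation."""
    for name, arg, res in operations:
        if name == op_name and arg == arg_type_id:
            return dict(types)[res]
    return None

print(result_type("head", 2))   # int
```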
 
Article
An abstract requirements specification states system requirements precisely without describing a real or a paradigm implementation. Although such specifications have important advantages, they are difficult to produce for complex systems and hence are seldom seen in the "real" programming world. This paper introduces an approach to producing abstract requirements specifications that applies to a significant class of real-world systems, including any system that must reconstruct data that have undergone a sequence of transformations. It also describes how the approach was used to produce a requirements document for SCP, a small but nontrivial Navy communications system. The specification techniques used in the SCP requirements document are introduced and illustrated with examples.
 
Top-cited authors
Lionel C. Briand
  • Simula Research Laboratory
Chris F. Kemerer
  • University of Pittsburgh
Elaine J. Weyuker
  • University of Central Florida
Gerard Holzmann
  • Nimble Research
Mark Harman
  • University College London