Chapter

A Review of Software Quality Methodologies

Abstract

Pervasive systems and increased reliance on embedded systems require that the underlying software is properly tested and has built-in high quality. The approaches often adopted to realize software systems have inherent weaknesses that have resulted in less robust software applications. The need for reliable software suggests that quality must be instilled at all stages of the software development paradigm, especially at the testing stages of the development cycle, ensuring that quality attributes and parameters are taken into account when designing and developing software. In this respect, numerous tools, techniques, and methodologies have been proposed. In this chapter, the authors present and review different methodologies employed to improve software quality during the software development lifecycle.

References
Article
Full-text available
Probably everyone has an idea about the meaning of quality. However, when it comes to quality in the real world, i.e. in conjunction with a software development project, disagreements between the persons involved often lead to further problems. Especially in the case of customer complaints about faults in a software product, it seems to be unclear not only what the requirements are, but also whether the software has the "right" characteristics with regard to these requirements. This article aims to reduce the confusion that has arisen about quality, requirements, and characteristics.
Article
Full-text available
Knowledge is too problematic a concept to make the task of building a dynamic knowledge-based theory of the firm easy. We must also distinguish the theory from the resource-based and evolutionary views. The paper begins with a multitype epistemology which admits both the pre- and subconscious modes of human knowing and, reframing the concept of the cognizing individual, the collective knowledge of social groups. While both Nelson and Winter, and Nonaka and Takeuchi, successfully sketch theories of the dynamic interactions of these types of organizational knowledge, neither indicates how they are to be contained. Callon and Latour suggest knowledge itself is dynamic and contained within actor networks, so moving us from knowledge as a resource toward knowledge as a process. To simplify this approach, we revisit sociotechnical systems theory, adopt three heuristics from the social constructionist literature, and make a distinction between the systemic and component attributes of the actor network. The result is a very different mode of theorizing, less an objective statement about the nature of firms ‘out there’ than a tool to help managers discover their place in the firm as a dynamic knowledge-based activity system.
Article
Full-text available
Given assumptions about the characteristics of knowledge and the knowledge requirements of production, the firm is conceptualized as an institution for integrating knowledge. The primary contribution of the paper is in exploring the coordination mechanisms through which firms integrate the specialist knowledge of their members. In contrast to earlier literature, knowledge is viewed as residing within the individual, and the primary role of the organization is knowledge application rather than knowledge creation. The resulting theory has implications for the basis of organizational capability, the principles of organization design (in particular, the analysis of hierarchy and the distribution of decision-making authority), and the determinants of the horizontal and vertical boundaries of the firm. More generally, the knowledge-based approach sheds new light upon current organizational innovations and trends and has far-reaching implications for management practice.
Article
Full-text available
Object-oriented software systems present a particular challenge to the software testing community. This review of the problem points out the particular aspects of object-oriented systems that make them costly to test. The flexibility and reusability of such systems are considered from the negative side: they imply that there are many ways to use such systems, and all of these ways need to be tested. The solution to this challenge lies in automation. The review emphasizes the role of test automation in achieving adequate test coverage at both the unit and the component level. The system testing level is independent of the underlying programming technology.
Article
Full-text available
Peer-to-peer (P2P) offers good solutions for many applications such as large-scale data sharing and collaboration in social networks. Thus, it appears as a powerful paradigm for developing scalable distributed applications, as reflected by the increasing number of emerging projects based on this technology. However, building trustworthy P2P applications is difficult because they must be deployed on a large number of autonomous nodes, which may refuse to answer some requests and even leave the system unexpectedly. This volatility of nodes is a common behavior in P2P systems and may be interpreted as a fault during tests (i.e., a failed node). In this work, we present a framework and a methodology for testing P2P applications. The framework is based on the individual control of nodes, allowing test cases to precisely control the volatility of nodes during their execution. We validated this framework through implementation and experimentation on an open-source P2P system. The experimentation tests the behavior of the system under different conditions of volatility and shows how the tests were able to detect complex implementation problems. Keywords: Software testing, Peer-to-peer systems, Distributed hash tables (DHT), Testing methodology, Experimental procedure.
Article
Full-text available
This paper describes a study of 14 software companies and how they initiate and pre-plan software projects. The aim was to obtain an indication of the range of planning activities carried out. The study, using a convenience sample, was carried out through structured interviews with questions about early software project planning activities. The study offers evidence that an iterative and incremental development process presents extra difficulties in the case of fixed-contract projects. The authors also found evidence that feasibility studies were common, but generally informal in nature. Documentation of the planning process, especially for project scoping, was variable. For incremental and iterative development projects, an upfront decision on software architecture was shown to be preferred over allowing the architecture to simply 'emerge'. There is also evidence that risk management is recognised but often performed incompletely. Finally, appropriate future research arising from the study is described.
Conference Paper
Full-text available
This paper describes the application of risk-based testing for a software product evaluation in a real case study. Risk-based testing consists of a set of activities regarding the identification of risk factors related to software requirements. Once identified, the risks are prioritized according to their likelihood and impact, and test cases are designed based on the strategies for treatment of the identified risk factors. Thus, test efforts are continuously adjusted according to risk monitoring. The paper also briefly reviews available risk-based approaches, describes the benefits that are likely to accrue from the growing body of work in this area, and provides a set of problems, challenges, and future work.
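As a rough illustration of the prioritization step described above, the following sketch computes risk exposure as likelihood multiplied by impact and orders test effort accordingly. The requirements, the 1-5 scoring scale, and the scores themselves are hypothetical, not taken from the cited study.

```python
# Minimal sketch of risk-based test prioritization: risk exposure is
# computed as likelihood x impact, and requirements with the highest
# exposure receive test cases first. All factors and scores are illustrative.

risks = [
    {"requirement": "payment processing", "likelihood": 4, "impact": 5},
    {"requirement": "report export",      "likelihood": 2, "impact": 2},
    {"requirement": "user login",         "likelihood": 3, "impact": 4},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Highest exposure first: these requirements get the earliest and largest
# share of the test effort, and are re-ranked as risk monitoring updates them.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["requirement"]}: exposure {r["exposure"]}')
```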
Article
Full-text available
Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function, and a set of solutions in the search space is evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising because exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work on non-functional search-based software testing (NFSBST). We are interested in the types of non-functional testing targeted using metaheuristic search techniques, the different fitness functions used in different types of search-based non-functional testing, and the challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles, obtained after a multi-stage selection process and published in the time span 1996–2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing, including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on the different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security, along with a discussion of possible challenges in the application of metaheuristic search techniques.
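To make the idea of a fitness function concrete, the sketch below uses a generic hill climber (not any specific algorithm from the review) to find an input that covers a hypothetical branch; the adequacy criterion is encoded as a branch-distance-style fitness to be minimised. The function under test and the distance encoding are assumptions for illustration only.

```python
import random

# Sketch of search-based test data generation: the goal is to cover the branch
# "if x == 100 and y > 200" in a hypothetical function under test.

def branch_distance(x, y):
    # Branch-distance style fitness: 0 means the branch is taken.
    return abs(x - 100) + max(0, 201 - y)

def hill_climb(max_iterations=10_000):
    x, y = random.randint(-1000, 1000), random.randint(-1000, 1000)
    best = branch_distance(x, y)
    for _ in range(max_iterations):
        nx, ny = x + random.randint(-10, 10), y + random.randint(-10, 10)
        d = branch_distance(nx, ny)
        if d < best:            # accept only improving neighbours
            x, y, best = nx, ny, d
        if best == 0:           # branch covered: (x, y) is the generated test input
            break
    return x, y, best

print(hill_climb())
```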
Conference Paper
Full-text available
Testing with randomly generated test inputs, namely Random Testing, is a strategy that has been applied successfully in many cases. Recently, some new adaptive approaches to the random generation of test cases have been proposed. Whereas there are many comparisons of Random Testing with Partition Testing, a systematic comparison of random testing techniques is still missing. This paper presents an empirical analysis and comparison of all random testing techniques from the field of Adaptive Random Testing (ART). The ART algorithms are compared for effectiveness using the mean F-measure, obtained through simulation and mutation analysis, and the P-measure. An interesting connection between the testing effectiveness measures F-measure and P-measure is described. The spatial distribution of test cases is determined to explain the behavior of the methods and identify possible shortcomings. Besides this, both the theoretical asymptotic runtime and the empirical runtime for each method are given.
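The following sketch shows one well-known ART variant, fixed-size-candidate-set ART (FSCS-ART), which spreads test inputs out by always executing the random candidate farthest from all previously executed inputs. The one-dimensional numeric input domain is an assumption made purely to keep the example short.

```python
import random

# Sketch of fixed-size-candidate-set Adaptive Random Testing (FSCS-ART).
# Each new test input is the candidate whose nearest executed input is
# farthest away, giving an even spread over the input domain.

def fscs_art(n_tests, candidates_per_step=10, lo=0.0, hi=1.0):
    executed = [random.uniform(lo, hi)]          # first test is purely random
    while len(executed) < n_tests:
        candidates = [random.uniform(lo, hi) for _ in range(candidates_per_step)]
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed

print(fscs_art(5))
```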
Conference Paper
Full-text available
Progress in testing requires that we evaluate the effectiveness of testing strategies on the basis of hard experimental evidence, not just intuition or a priori arguments. Random testing, the use of randomly generated test data, is an example of a strategy that the literature often deprecates because of such preconceptions. This view is worth revisiting since random testing otherwise offers several attractive properties: simplicity of implementation, speed of execution, absence of human bias. We performed an intensive experimental analysis of the efficiency of random testing on an existing industrial-grade code base. The use of a large-scale cluster of computers, for a total of 1500 hours of CPU time, allowed a fine-grain analysis of the individual effect of the various parameters involved in the random testing strategy, such as the choice of seed for a random number generator. The results provide insights into the effectiveness of random testing and a number of lessons for testing researchers and practitioners.
Conference Paper
Full-text available
Reactive real-time systems have to react to external events within time constraints: Triggered tasks must execute within deadlines. The goal of this article is to automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within the system. We refer to that testing activity as stress testing. We have developed a method based on genetic algorithms and implemented it in a tool. Case studies were run and results show that the tool may actually help testers identify test cases that will likely stress the system to such an extent that some tasks may miss deadlines.
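The sketch below conveys the general idea of searching for stressful arrival patterns: a tiny genetic algorithm evolves arrival offsets for a set of hypothetical triggered tasks, and the fitness rewards schedules whose simulated completion times come closest to, or exceed, their deadlines. The task parameters, the naive FIFO "scheduler", and the GA settings are all assumptions for illustration, not the cited tool's method.

```python
import random

TASKS = [  # (execution_time, relative_deadline) - illustrative values only
    (3, 10), (2, 8), (4, 12), (1, 5),
]

def lateness(offsets):
    # Simulate naive FIFO execution of tasks released at the given offsets
    # and sum how far each task finishes past its deadline.
    t, total = 0, 0.0
    for (exe, deadline), off in sorted(zip(TASKS, offsets), key=lambda p: p[1]):
        t = max(t, off) + exe
        total += max(0, t - (off + deadline))
    return total

def evolve(pop_size=30, generations=50):
    pop = [[random.uniform(0, 20) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lateness, reverse=True)       # higher lateness = fitter
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]              # one-point crossover
            child[random.randrange(len(TASKS))] += random.uniform(-2, 2)  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=lateness)
    return best, lateness(best)

print(evolve())   # arrival offsets that maximise simulated deadline misses
```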
Article
Full-text available
Component-based development has emerged as a system engineering approach that promises rapid software development with fewer resources. Yet, improved reuse and reduced cost benefits from software components can only be achieved in practice if the components provide reliable services, thereby rendering component analysis and testing a key activity. This paper discusses various issues that can arise in component testing by the component user at the stage of its integration within the target system. The crucial problem is the lack of information for analysis and testing of externally developed components. Several testing techniques for component integration have recently been proposed. These techniques are surveyed here and classified according to a proposed set of relevant attributes. The paper thus provides a comprehensive overview which can be useful as introductory reading for newcomers in this research field, as well as to stimulate further investigation. Copyright © 2006 John Wiley & Sons, Ltd.
Article
Full-text available
In the IT industry, customers want vendor organisations to deliver defect-free software within an agreed and negotiated timeframe. This task can be achieved through a combination of two approaches: a process-oriented approach and quantitative control of the process. The first deals with defining a process for a project to follow, and the second deals with quantitative project management techniques used to control various parameters to bring the process under control. Through these quantitative techniques, quality goals are set at the start of the project during the project initiation phase, and these goals are monitored at different milestones as defined in the project plan. Through this approach, an early signal can be detected if the process goes out of control. Thus, a combination of process management and quantitative management improves the delivered quality. In this paper, practical, real-time development project experiences have been used to come up with a systematic approach to quantitative project management for better quality of deliverables. The paper discusses different tools and monitoring mechanisms that can be employed during project execution and the possible benefits obtained from this approach. The tone of the paper is prescriptive and is based on best practices across different organisations.
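A minimal sketch of the quantitative-control idea follows: a quality metric (here, defect density per milestone) is compared against control limits set at project initiation, and a value outside the limits is the early out-of-control signal described above. The goal values, limits, and milestone data are illustrative assumptions.

```python
# Sketch of quantitative process control against goals set at project initiation.

GOAL = 0.8                 # defects per KLOC, the planned quality goal
LOWER, UPPER = 0.4, 1.2    # control limits around the goal (illustrative)

milestone_defect_density = {
    "requirements sign-off": 0.7,
    "design complete": 0.9,
    "code complete": 1.5,   # outside the limits: triggers causal analysis
}

for milestone, density in milestone_defect_density.items():
    status = "within limits" if LOWER <= density <= UPPER else "OUT OF CONTROL"
    print(f"{milestone}: {density:.1f} defects/KLOC ({status})")
```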
Article
Full-text available
Conventional wisdom and anecdote suggest that testing takes between 30% and 50% of a project's effort. However, testing is not a monolithic activity, as it consists of a number of different phases such as unit testing, integration testing, and finally system and acceptance testing. Unit testing has received a lot of criticism in terms of the amount of time it is perceived to take and its perceived costs. However, it remains an important verification activity, being an effective means to test individual software components for boundary-value behavior and to ensure that all code has been exercised adequately. We examine the available data from three safety-related, industrial software projects that made use of unit testing. Using this information, we argue that the perceived costs of unit testing may be exaggerated and that the likely benefits in terms of defect detection are quite high in relation to those costs. We also discuss the different issues that have been found applying the technique at different phases of development and using different methods to generate the tests. We also compare results we have obtained with empirical results from the literature and highlight some possible weaknesses of research in this area.
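For readers unfamiliar with the kind of boundary-value unit test referred to above, the sketch below uses Python's standard unittest module. The function under test (a hypothetical percentage validator) and its boundaries are assumptions chosen only to show the pattern of exercising values on, just inside, and just outside each boundary.

```python
import unittest

def is_valid_percentage(value):
    # Hypothetical unit under test: accepts values in the closed range [0, 100].
    return 0 <= value <= 100

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # Exercise values on, just inside, and just outside each boundary.
        self.assertFalse(is_valid_percentage(-1))
        self.assertTrue(is_valid_percentage(0))
        self.assertTrue(is_valid_percentage(1))
        self.assertTrue(is_valid_percentage(99))
        self.assertTrue(is_valid_percentage(100))
        self.assertFalse(is_valid_percentage(101))

if __name__ == "__main__":
    unittest.main()
```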
Article
Many real-time systems are developed and maintained through the use of commercial software products, such as Matlab and MatrixX, that automatically generate source code based on graphical control system models. Testing these real-time models and the real-time software generated from them presents special problems during maintenance not faced with other forms of software. Very importantly, many of the models and software systems have to be tested through the use of simulations. Huge input and output data sets, the need for testing over a long duration of time (weeks or months), and computationally intensive requirements are just a few of the difficulties. For testing during maintenance in such situations, this paper draws upon field experience to present a set of test types and a strategy for selecting test types used to create series of input values to serve as test cases. This paper also presents strategies for applying these test types, using the assistance of a free, widely available testing tool that automates test case generation, executes the simulations, and supports the analysis of the test results. Copyright (C) 2000 John Wiley & Sons, Ltd.
Article
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented, including a reassessment of its initial architecture.
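The goal/question/metric (GQM) paradigm mentioned above refines each measurement goal into questions and each question into metrics collected during the project. The sketch below shows that structure as plain data; the concrete goal, questions, and metrics are hypothetical examples, not taken from TAME itself.

```python
# Illustrative GQM tree: goal -> questions -> metrics.

gqm = {
    "goal": "Improve defect detection effectiveness of system testing",
    "questions": {
        "How many defects escape to the field?": [
            "post-release defect count",
            "defect density per KLOC",
        ],
        "How efficient is the test phase?": [
            "defects found per test hour",
            "test coverage achieved",
        ],
    },
}

for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  metric:", metric)
```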
Article
Functional testing is used to find disagreement between the specification and the actual implementation of a software system. The method of representing the specification can help to detect inconsistency and incompleteness in it. The various specification representation schemes are outlined in the paper. The basic technique of functional testing of software systems is the black-box technique. This technique generates test data using the information contained in the program's specification, independent of the implemented program's code. Black-box testing cannot discover errors contained in functions that are not mentioned explicitly in the specification. Therefore, program-dependent testing is necessary to discover this type of error. The paper surveys the different methods of generating test data for both techniques: the black-box and the program-dependent techniques.
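A short sketch of specification-based (black-box) test data generation follows: equivalence classes and boundary values are derived from the specified input range alone, with no reference to the implementation. The specification used here (an integer "age" field valid from 18 to 65) is a hypothetical example.

```python
# Derive black-box test data (equivalence class representatives and
# boundary values) from a specified valid range only.

LOW, HIGH = 18, 65   # hypothetical specification: valid age range

def black_box_test_data(low, high):
    return {
        "valid class representative": (low + high) // 2,
        "invalid below": low - 10,
        "invalid above": high + 10,
        "boundaries": [low - 1, low, low + 1, high - 1, high, high + 1],
    }

print(black_box_test_data(LOW, HIGH))
```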
Article
Object-oriented software development is an evolutionary process, and hence the opportunities for integration are abundant. Conceptually, classes are encapsulations of data attributes and their associated functions. Software components are amalgamations of logically and/or physically related classes. A complete software system is also an aggregation of software components. All of these integration levels warrant contemporary integration techniques. Traditional integration techniques applied towards the end of the software development process no longer suffice. Integration strategies are needed at the class level, component level, sub-system level, and system level. Classes require integration of methods. Various types of class interaction mechanisms demand different testing strategies. Integration of classes into components poses its own integration requirements. Finally, system integration demands different types of integration testing strategies. This paper discusses the various integration levels prevalent in object-oriented software development and suggests a solution for the integration requirements of each level. An integration framework for integrating classes into a system is also proposed.
Article
The present approach to productivity estimation, although useful, is far from being optimized. Based on the results of the variable analysis described in this paper, and supplemented by the results of the continued investigation of additional variables related to productivity, an experimental regression model has been developed. Preliminary results indicate that the model reduces the scatter. Further work is being done to determine the potential of regression as an estimating tool, as well as to extend the analyses of the areas of computer usage, documentation volume, duration, and staffing.
Conference Paper
The paper attempts to provide a comprehensive view of the field of software testing. The objective is to put all the relevant issues into a unified context, although admittedly the overview is biased towards my own research and expertise. In view of the vastness of the field, for each topic problems and approaches are only briefly tackled, with appropriate references provided to dive into them. I do not mean to give here a complete survey of software testing. Rather I intend to show how an unwieldy mix of theoretical and technical problems challenge software testers, and that a large gap exists between the state of the art and of the practice.
Conference Paper
Classical Design of Experiment (DOE) techniques have been in use for many years to aid in the performance testing of systems. In particular fractional factorial designs have been used in cases with many numerical factors to reduce the number of experimental runs necessary. For experiments involving categorical factors, this is not the case; experimenters regularly resort to exhaustive (full factorial) experiments. Recently, D-optimal designs have been used to reduce numbers of tests for experiments involving categorical factors because of their flexibility, but not necessarily because they can closely approximate full factorial results. In commonly used statistical packages, the only generic alternative for reduced experiments involving categorical factors is afforded by optimal designs. The extent to which D-optimal designs succeed in estimating exhaustive results has not been evaluated, and it is natural to determine this. An alternative design based on covering arrays may offer a better approximation of full factorial data. Covering arrays are used in software testing for accurate coverage of interactions, while D-optimal and factorial designs measure the amount of interaction. Initial work involved exhaustive generation of designs in order to compare covering arrays and D-optimal designs in approximating full factorial designs. In that setting, covering arrays provided better approximations of full factorial analysis, while ensuring coverage of all small interactions. Here we examine commercially viable covering array and D-optimal design generators to compare designs. Commercial covering array generators, while not as good as exhaustively generated designs, remain competitive with D-optimal design generators.
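To clarify what a covering array guarantees in the categorical-factor setting described above, the sketch below checks that every pair of factor values appears in at least one test. The factors and the candidate design are illustrative; a real covering array generator would construct a reduced design, whereas here a full factorial design is only verified for pairwise coverage.

```python
from itertools import combinations, product

factors = {  # hypothetical categorical factors
    "os":      ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "locale":  ["en", "de"],
}

def uncovered_pairs(design):
    # Return every (factor, value) pair combination that no test row covers.
    names = list(factors)
    missing = []
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(factors[a], factors[b]):
            if not any(row[i] == va and row[j] == vb for row in design):
                missing.append(((a, va), (b, vb)))
    return missing

full_factorial = list(product(*factors.values()))   # 12 runs, trivially pairwise-covering
print(len(full_factorial), "runs, uncovered pairs:", uncovered_pairs(full_factorial))
```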
Conference Paper
The idea of conducting usability tests remotely emerged ten years ago. Since then, it has been studied empirically, and some software organizations employ remote methods. Yet there are still few comparisons involving more than one remote method. This paper presents results from a systematic empirical comparison of three methods for remote usability testing and a conventional laboratory-based think-aloud method. The three remote methods are a remote synchronous condition, where testing is conducted in real time but the test monitor is separated spatially from the test subjects, and two remote asynchronous conditions, where the test monitor and the test subjects are separated both spatially and temporally. The results show that the remote synchronous method is virtually equivalent to the conventional method. Thereby, it has the potential to conveniently involve broader user groups in usability testing and support new development approaches. The asynchronous methods are considerably more time-consuming for the test subjects and identify fewer usability problems, yet they may still be worthwhile.
Conference Paper
Poor performance of Web-based systems can adversely impact the profitability of enterprises that rely on them. As a result, effective performance testing techniques are essential for understanding whether a Web-based system will meet its performance objectives when deployed in the real world. The workload of a Web-based system has to be characterized in terms of sessions; a session being a sequence of inter-dependent requests submitted by a single user. Dependencies arise because some requests depend on the responses of earlier requests in a session. To exercise application functions in a representative manner, these dependencies should be reflected in the synthetic workloads used to test Web-based systems. This makes performance testing a challenge for these systems. In this paper, we propose a model-based approach to address this problem. Our approach uses an application model that captures the dependencies for a Web-based system under study. Essentially, the application model can be used to obtain a large set of valid request sequences representing how users typically interact with the application. This set of sequences can be used to automatically construct a synthetic workload with desired characteristics. The application model provides an indirection which allows a common set of workload generation tools to be used for testing different applications. Consequently, less effort is needed for developing and maintaining the workload generation tools and more effort can be dedicated towards the performance testing process.
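The sketch below illustrates one simple way an application model can drive session-based workload generation: a probabilistic state machine over request types preserves inter-request dependencies (for example, a "checkout" can only follow "add_to_cart"). The model, states, and probabilities are hypothetical and stand in for whatever model-capture approach the cited work actually uses.

```python
import random

# Hypothetical application model: state -> list of (next_request, probability).
APP_MODEL = {
    "start":       [("browse", 1.0)],
    "browse":      [("browse", 0.5), ("add_to_cart", 0.3), ("exit", 0.2)],
    "add_to_cart": [("browse", 0.4), ("checkout", 0.4), ("exit", 0.2)],
    "checkout":    [("exit", 1.0)],
}

def generate_session(max_len=20):
    state, session = "start", []
    while state != "exit" and len(session) < max_len:
        transitions = APP_MODEL[state]
        r, acc = random.random(), 0.0
        for nxt, p in transitions:
            acc += p
            if r <= acc:
                state = nxt
                break
        else:
            state = transitions[-1][0]   # guard against floating-point rounding
        if state != "exit":
            session.append(state)
    return session

# Each generated session is a valid request sequence a load driver can replay.
print(generate_session())
```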
Conference Paper
The lack of information prevents component consumers from understanding candidate components sufficiently to check whether a given component fulfills its goal. Thus, this paper presents an approach to support component testing, aiming to reduce the information gap between component producers and component consumers. Additionally, the approach is supported by a CASE tool integrated into the development environment. An experimental study was performed in order to evaluate its efficiency and the difficulties of its use. The experimental study indicates that the approach is viable and that the tool support reduces effort for component producers and component consumers.
Conference Paper
This contribution outlines a cognitive-oriented approach to construct test systems that can "partially" imitate several cognitive paradigms of skilled human testers, for example learning, reasoning, and optimization. Hence, a reasonable portion of the workload done by a human tester would be shifted to the test system itself. This consequently leads to a substantial reduction in development time and cost, yet test efficiency is not sacrificed.
Conference Paper
Simulation is proven to be an effective way of testing complex real-time hardware/software systems. However, simulation test data generation is a challenging issue that determines the efficiency of such testing. This paper proposes a data-driven, mutation-based approach to test data generation to address the unique requirements of real-time operation, input domain coverage, adaptability, and reliability. The architecture is designed to simulate the process for the data sampling, transforming, packaging, and transmitting segments in an aerospace ground control system. Various mutant operators are defined for signal generation and data package generation to produce fault-sensitive inputs based on the system's fault pattern analysis. Mechanisms are defined to generate a large number of test cases by combination and composition of various data outputs from different dimensions to achieve high coverage. A configuration mechanism is introduced to enable re-composition and re-combination of simulation components. The selective simulation testing method is discussed to improve test effectiveness. A case study shows that the proposed approach can achieve high fault coverage with a small number of effective test cases.
Article
Numerous studies have been performed in supply chain environments to determine the performance required to achieve a near 100% read rate. However, the literature on testing in the library environment is sparse. This study examines the operational efficiency of an RFID library reader. The factors examined included angle directionality sensitivity, read distance, and tag location. The findings suggested that the performance of the inventory system degrades significantly as the angle directionality moves from 0 to 60 degrees. The read distance varied from the vendor specification. The findings provide empirical insight into the performance of an RFID reader in an operating environment.
Article
If software cannot be tested exhaustively, it must be tested selectively. But on what should selection be based in order to maximise test effectiveness? It seems sensible to concentrate on the parts of the software where the risks are greatest, but what risks should be sought and how can they be identified and analysed? 'Risk-based testing' is a term in current use, for example in an accredited test practitioners' course syllabus, but there is no broadly accepted definition of the phrase and no literature or body of knowledge to underpin the subject implied by it. Moreover, there has so far been no suggestion that it requires an understanding of the subject of risk. This paper examines what is implied by risk-based testing, shows that its practice requires an understanding of risk, and points to the need for research into the topic and the development of a body of knowledge to underpin it.
Article
Regression testing is an expensive testing process used to revalidate software as it evolves. Various methodologies for improving regression testing processes have been explored, but the cost-effectiveness of these methodologies has been shown to vary with characteristics of regression test suites. One such characteristic involves the way in which test inputs are composed into test cases within a test suite. This article reports the results of controlled experiments examining the effects of two factors in test suite composition---test suite granularity and test input grouping---on the costs and benefits of several regression-testing-related methodologies: retest-all, regression test selection, test suite reduction, and test case prioritization. These experiments consider the application of several specific techniques, from each of these methodologies, across ten releases each of two substantial software systems, using seven levels of test suite granularity and two types of test input grouping. The effects of granularity, technique, and grouping on the cost and fault-detection effectiveness of regression testing under the given methodologies are analyzed. This analysis shows that test suite granularity significantly affects several cost-benefit factors for the methodologies considered, while test input grouping has limited effects. Further, the results expose essential tradeoffs affecting the relationship between test suite design and regression testing cost-effectiveness, with several implications for practice.
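Test case prioritization, one of the regression methodologies studied above, is illustrated by the sketch below: tests are ordered greedily by the additional (not yet covered) statements they exercise. The coverage data is an invented example, and greedy additional coverage is only one of several prioritization techniques the study compares.

```python
# Greedy "additional coverage" test case prioritization.

coverage = {              # test id -> set of covered statement ids (illustrative)
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5, 6, 7, 8},
    "t4": {2, 3, 4, 5},
}

def prioritize(coverage):
    remaining, covered, order = dict(coverage), set(), []
    while remaining:
        # Pick the test adding the most new coverage (ties broken by name).
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(prioritize(coverage))   # ['t3', 't1', 't2', 't4']
```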
Article
The effect of testing location on usability test elements such as stress levels and user experience is not clear. A comparison between traditional lab testing and synchronous remote testing was conducted. The present study investigated two groups of users in remote and traditional settings. Within each group, participants completed two tasks, a simple task and a complex task. The dependent measures were task time taken, number of critical incidents reported, and user-reported anxiety score. Task times differed significantly between the physical location conditions; this difference was not meaningful for real-world application and was likely introduced by the overhead of synchronous remote testing methods. Critical incident reporting counts did not differ in any condition. No significant differences were found in user-reported stress levels. Subjective assessments of the study and interface also did not differ significantly. Study findings suggest that a similar user testing experience exists for remote and traditional laboratory usability testing.
Article
In this paper we 1) review industry acceptance testing practices and 2) present a systematic approach to scenario analysis and its application to acceptance testing with the aim of improving current practice. The paper summarizes existing practice into categories and identifies serious weaknesses. Then, a new approach based on formal scenario analysis is presented. It is systematic and easily applicable to any software or system. A simple, yet realistic example is used to illustrate its effectiveness. Finally, its benefits and applicability are summarized.
Article
With the advancement in network bandwidth and computing power, multimedia systems have become a popular means for information delivery. However, general principles of system testing cannot be directly applied to testing of multimedia systems on account of their stringent temporal and synchronization requirements. In particular, few studies have been made on the stress testing of multimedia systems with respect to their temporal requirements under resource saturation. Stress testing is important because erroneous behavior is most likely to occur under resource saturation. This paper presents an automatable method of test case generation for the stress testing of multimedia systems. It adapts constraint solving techniques to generate test cases that lead to potential resource saturation in a multimedia system. Coverage of the test cases is defined upon the reachability graph of a multimedia system. The proposed stress testing technique is supported by tools and has been successfully applied to a real-life commercial multimedia system. Although our technique focuses on the stress testing of multimedia systems, the underlying issues and concepts are applicable to other types of real-time systems.
Conference Paper
Testing is regarded as one of the most resource consuming tasks of an average software project. A common goal of testing related activities is to make sure that requirements are satisfied by the implementation. Although existing tools are often effective in functional testing, emerging nonfunctional requirements set new demands. Aspect-oriented techniques offer a promising approach for capturing such issues under verification. However, prior to industrial adoption more pragmatic guidelines on applying aspects are required. In this paper, we evaluate aspect-oriented techniques in testing non-functional requirements of an industrial system. In addition, we discuss the types of requirements that lend themselves for more efficient testing using aspects than conventional techniques.
Article
High-profile disasters and the ensuing debates in the press are alerting more people to the crucial nature of software quality in their everyday lives. This should prompt software professionals to take a second look at how they define software quality. In this task of assessing 'adequate' quality in a software product, context is important. Errors tolerated in word-processing software may not be acceptable in control software for a nuclear power plant. Thus, the meanings of 'safety-critical' and 'mission-critical' must be reexamined in the context of software's contribution to the larger functionality and quality of products and businesses. At the same time, software professionals must ask themselves who is responsible for setting quality goals, and make sure they are achieved.
Article
One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representative of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program as well as the "soft content" variation of Halstead's model suggested by Gaffney [7].
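A worked sketch of the weighted sum described above follows, using the commonly cited average-complexity weights for Albrecht's four element types (inputs 4, outputs 5, inquiries 4, master files 10); the counts themselves are illustrative assumptions.

```python
# Unadjusted function point count as a weighted sum of element counts.

WEIGHTS = {          # commonly cited average-complexity weights
    "inputs": 4,
    "outputs": 5,
    "inquiries": 4,
    "master_files": 10,
}

counts = {           # illustrative element counts for a hypothetical system
    "inputs": 12,
    "outputs": 8,
    "inquiries": 5,
    "master_files": 6,
}

unadjusted_fp = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
print("Unadjusted function points:", unadjusted_fp)   # 48 + 40 + 20 + 60 = 168
```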
Article
This paper introduces a systematic and defined process called "comparison of design methodologies" (CDM) for objectively comparing software design methodologies (SDMs). We believe that using CDM will lead to detailed, traceable, and objective comparisons. CDM uses process modeling techniques to model SDMs, classify their components, and analyze their procedural aspects. Modeling the SDMs entails decomposing their methods into components and analyzing the structure and functioning of the components. The classification of the components illustrates which components address similar design issues and/or have similar structures. Similar components may then be further modeled to aid in more precisely understanding their similarities and differences. The models of the SDMs are also used as the bases for conjectures and analyses about the differences between the SDMs. This paper describes three experiments that we carried out in evaluating CDM. The first uses CDM to compare Jackson System Development (JSD) and Booch's (1986) object-oriented design. The second uses CDM to compare two other pairs of SDMs. The last one compares some of our comparisons with other comparisons done in the past using different approaches. The results of these experiments demonstrate that process modeling is valuable as a powerful tool in the analysis of software development approaches.
Article
A method is presented for functional testing of microprocessors. First, a control fault model at the RTL (register transfer language) level is developed. Based on this model, the authors establish testing requirements for control faults. They present two test procedures to verify the write and read sequences, and use the write and read sequences to test each instruction in the microprocessor. By utilizing k-out-of-m codes, they use fewer tests to cover more faults, thereby reducing the test generation time.
Article
After a year in Sunnyvale, California, doing test management, the author is moving to Virginia, where he's been hired by Reliable Software Technologies, a company known for tools and consulting dedicated to achieving very high quality software. He'll spend most of his time in a consulting role, which is a very different reality from software management. But it will be an interesting one, nonetheless, especially since he's never worked in an environment that puts reliability ahead of marketability, or even abreast of it. He's known for his Good Enough quality model, so it may seem strange that RST recruited him. But as it turns out, RST believes that a very high reliability standard is, for them, just part of good enough quality. The same thinking about quality applies no matter where one places oneself on the quality scale: at every point in the life cycle, one must compare the present quality of the product against the cost and value of further improvement.
Article
Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. The processes involved in RE include domain analysis, elicitation, specification, assessment, negotiation, documentation, and evolution. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of utmost importance in software engineering research and practice. The paper presents a brief history of the main concepts and techniques developed to date to support the RE task, with a special focus on modeling as a common denominator to all RE processes. The initial description of a complex safety-critical system is used to illustrate a number of current research trends in RE-specific areas such as go...
Bertolino, A. Software testing research: Achievements, challenges, dreams. Paper presented at 2007 Future of Software Engineering (FOSE '07).
Bright, D. I. A practical worst-case methodology for software testing.
Lazic, L. The software testing challenges and methods.
Park, S. S., & Maurer, F. The benefits and challenges of executable acceptance testing. Paper presented at the 2008 International Workshop on Scrutinizing Agile Practices or Shoot-out at the Agile Corral (APOS '08).