April 1998 · 6 Reads · 12 Citations · IEEE Software
November 1997 · 20 Reads · 15 Citations · Journal of Systems and Software
This paper presents our strategy, implementation, and experience with tool support for software measurement, analysis, and quality improvement in a commercial software development environment. The activities that need to be supported include: 1) gathering software product and process measurements, 2) analyzing measurement data, and 3) using the results to assess and improve software quality. We employed existing internal tools, commercial off-the-shelf tools, and new tools that we developed to gather and analyze data and to present results. These tools were tailored to fit the application environments and were integrated through common rules, multi-purpose tools, and utility programs. This approach has been used successfully to support software measurement and quality improvement for several large commercial software products developed in the IBM Software Solutions Toronto Laboratory.
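To make the kind of analysis step such a tool chain performs concrete, here is a minimal sketch in Python rather than the paper's actual tooling; the file name and column names are illustrative assumptions, not the paper's data format. It reads per-build failure and test-run counts gathered elsewhere and derives a normalized failure-rate trend.

    # Hypothetical sketch of one analysis step such a tool chain might perform;
    # the CSV name and columns ("build_id", "test_runs", "failures") are
    # illustrative assumptions, not the paper's actual data format.
    import csv

    def failure_rate_trend(path="build_measurements.csv"):
        """Return (build_id, failures per 1000 test runs) pairs in build order."""
        trend = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                runs = int(row["test_runs"])
                if runs > 0:
                    trend.append((row["build_id"],
                                  1000.0 * int(row["failures"]) / runs))
        return trend

    if __name__ == "__main__":
        for build, rate in failure_rate_trend():
            print(f"{build}: {rate:.2f} failures per 1000 test runs")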
January 1997 · 13 Reads · 11 Citations · Annals of Software Engineering
This paper characterizes the testing environment for large commercial software systems, matches reliability model assumptions with the application environment, examines alternative test workload measurements that capture software usage information during testing, and uses two such measurements, test runs and transactions, as our usage-dependent time measurements in reliability modeling. Our previous research using test runs, execution time, and test input information for reliability analysis and improvement is extended to ensure better test workload measurements for reliability assessment and prediction. This paper also identifies conditions under which different test workload measurements are appropriate, and presents reliability modeling results using these measurements for several products developed in the IBM Software Solutions Toronto Laboratory.
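As a rough illustration of using test runs as the usage-dependent time measurement, the sketch below fits the standard Goel-Okumoto NHPP growth model with cumulative test runs on the time axis; the model choice and the sample data are our assumptions, not taken from the paper.

    # Sketch: fit the Goel-Okumoto NHPP model m(t) = a * (1 - exp(-b * t))
    # using cumulative test runs, not calendar time, as the time measurement.
    # The data points below are illustrative, not from the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Expected cumulative failures after t units of usage (test runs)."""
        return a * (1.0 - np.exp(-b * t))

    runs = np.array([100, 300, 600, 1000, 1500, 2100, 2800], dtype=float)
    failures = np.array([12, 30, 48, 62, 71, 77, 80], dtype=float)

    (a, b), _ = curve_fit(goel_okumoto, runs, failures, p0=(100.0, 1e-3))
    print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.2e}")
    print(f"predicted failures by 4000 runs: {goel_okumoto(4000.0, a, b):.1f}")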
January 1996 · 5 Reads · 9 Citations
The paper presents an approach to software reliability modeling using data partitions derived from tree-based models. We use these data-sensitive partitions to group data into clusters with similar failure intensities. The series of data clusters associated with different time segments forms a piecewise-linear model for the assessment and short-term prediction of reliability. Long-term prediction can be provided by the dual model that fits these grouped data to failure-count variations of the traditional software reliability growth models. These partition-based reliability models can be used effectively to measure and predict the reliability of software systems and can be readily integrated into our strategy of reliability assessment and improvement using tree-based modeling.
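A minimal sketch of the partitioning idea, using scikit-learn's regression trees in place of the tree models the authors built in S-PLUS; the synthetic data and parameter choices are assumptions for illustration only.

    # Sketch: a regression tree groups testing periods into clusters with
    # similar failure intensity; within each cluster the cumulative-failure
    # curve is roughly linear, giving the piecewise-linear model described.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    days = np.arange(1, 31).reshape(-1, 1)          # 30 days of testing
    intensity = np.concatenate([
        rng.poisson(9, 10),    # early testing: high failure intensity
        rng.poisson(5, 10),    # mid testing
        rng.poisson(2, 10),    # late testing: reliability growth
    ]).astype(float)           # failures observed per day

    tree = DecisionTreeRegressor(max_leaf_nodes=3, min_samples_leaf=5)
    tree.fit(days, intensity)

    # each leaf is one time segment; its prediction is the cluster intensity
    for t in (5.0, 15.0, 25.0):
        print(f"day {t:.0f}: ~{tree.predict([[t]])[0]:.1f} failures/day")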
June 1995 · 11 Reads · 39 Citations · IEEE Transactions on Software Engineering
The paper studies practical reliability measurement and modeling for large commercial software systems based on test execution data collected during system testing. The application environment and the goals of reliability assessment were analyzed to identify appropriate measurement data. Various reliability growth models were applied to failure data normalized by test case executions to track testing progress and provide reliability assessment. Practical problems in data collection, reliability measurement and modeling, and modeling result analysis were also examined. The results demonstrated the feasibility of reliability measurement in a large commercial software development environment and provided a practical comparison of various reliability measurements and models in such an environment.
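The normalization step is simple to state; the sketch below, with assumed numbers rather than the paper's data, shows failures normalized by test case executions so that reliability growth is tracked per unit of testing effort rather than per calendar week.

    # Sketch with illustrative numbers: normalize weekly failure counts by
    # test case executions before feeding them to a reliability growth model.
    executions = [200, 450, 500, 480, 520]   # test case executions per week
    failures = [18, 30, 22, 12, 6]           # failures observed per week

    for week, (e, f) in enumerate(zip(executions, failures), start=1):
        print(f"week {week}: {f / e:.3f} failures per execution")
    # a falling normalized rate indicates reliability growth even when raw
    # weekly failure counts fluctuate with the testing workload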
January 1993 · 9 Reads · 8 Citations
This paper studies the collection of appropriate data for software reliability analysis and modeling. Our approach satisfies the data requirements for various reliability models under the constraints imposed by our project environment. These data-model relations and data-environment constraints are characterized to provide a tentative roadmap for data collection. In the process of collecting data for a group of projects in the system testing stage, we encountered various problems and devised solutions and improvement initiatives to deal with them. We summarize our experience in this paper so that similar quality improvement initiatives in comparable environments can be implemented more effectively.
... Detailed information regarding the study and the initiatives can be found in [8,9]. Tool Support for SM [23]: This initiative uses a set of integrated tools to support software measurement and quality improvement. A tool that supports tree-modeling analysis (S-PLUS) is the central analysis tool. ...
November 1997 · Journal of Systems and Software
... The models fall into two basic classes, namely failures per time period and time between failures. A software reliability growth model provides a systematic way of assessing and predicting software reliability based on certain assumptions about the faults in the software and fault exposure in a given usage environment [12]. Reliability growth for software is the positive improvement of software reliability over time, accomplished through the systematic removal of software faults. ...
January 1993
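As a concrete textbook instance of the failures-per-time-period class mentioned in the excerpt above (our illustration, not necessarily a model used in the citing paper), the Goel-Okumoto model assumes a nonhomogeneous Poisson process with mean value function

    % Goel-Okumoto NHPP model: m(t) is the expected cumulative number of
    % failures by time t; a is the expected total fault content and b the
    % per-fault detection rate. The failure intensity lambda(t) decays as
    % faults are removed, which is the "reliability growth" the excerpt defines.
    m(t) = a\left(1 - e^{-bt}\right), \qquad
    \lambda(t) = m'(t) = a b\, e^{-bt}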
... The problem of testing and benchmarking large-scale software is a key part of the software engineering discipline. Here, the main question to be answered is what makes a good testing or benchmarking environment [8, 23, 24]. The work of Tian and Palma [23] presents insights into the characteristics of a workload used to test large commercial software products. ...
January 1997 · Annals of Software Engineering
... As a result, once a failure is observed, usually a series of related test runs are conducted to help isolate the cause of failure. Overall, testing of software systems uses a mixture of [32]: structured testing (centered around scenarios), clustered testing (focused on fault localization), and random testing. The dependence of successive software runs also depends on the extent to which the internal state of the software has been affected and on the nature of operations undertaken for execution resumption, i.e., whether or not they involve state cleaning [16]. ...
January 1996
... DTs also make good candidates for combining because they are structurally unstable classifiers and produce diversity in classifier decision boundaries. DTs have also been one of the tools of choice for building classification models in the SE field [29,30,46-48,58,64-66,70-72]. We assume that the readers are familiar with DT learning [4,49]. ...
April 1998 · IEEE Software
... Because of the dependence of testing effort on multiple factors, testing time may not enable accurate measurement of testing effort. Hence, SRGMs that implicitly relate fault detection to testing effort according to elapsed time during testing may fail to adequately characterize fault detection [7-10]. To overcome this possible limitation, testing-effort models [11-14] were proposed, which model fault discovery according to the effort dedicated to testing activities. ...
June 1995 · IEEE Transactions on Software Engineering