Article (PDF available)

Detection of Reliable Software Using SPRT on Interval Domain Data

Authors:

Abstract

In classical hypothesis testing, large volumes of data must be collected before conclusions can be drawn, which may take considerable time. By contrast, sequential analysis can be adopted to decide very quickly whether the developed software is reliable or unreliable. The procedure adopted for this purpose is the Sequential Probability Ratio Test (SPRT). In the present paper we evaluate the performance of the SPRT on interval domain data using the Weibull model and analyze the results by applying it to 5 data sets. The parameters are estimated using Maximum Likelihood Estimation.
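To make the procedure concrete, the following sketch applies Wald's SPRT boundaries to a failure count observed by time t under an NHPP whose mean value function follows the Weibull model m(t) = a(1 − e^(−b·t^c)). The function names, the use of a spread factor δ to separate the two hypothesized means, and all numeric parameter values are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import math

def weibull_mean_value(t, a, b, c):
    """NHPP mean value function for the Weibull model: m(t) = a * (1 - exp(-b * t**c))."""
    return a * (1.0 - math.exp(-b * t**c))

def sprt_decision(n_failures, t, a, b, c, delta=0.1, alpha=0.05, beta=0.05):
    """Wald's SPRT decision after observing n_failures by time t.

    H0 (software reliable) uses mean m0 = m(t)*(1-delta);
    H1 (software unreliable) uses mean m1 = m(t)*(1+delta).
    alpha, beta are the tolerated type-I and type-II error rates.
    """
    log_A = math.log((1 - beta) / alpha)   # upper (rejection) boundary
    log_B = math.log(beta / (1 - alpha))   # lower (acceptance) boundary
    m = weibull_mean_value(t, a, b, c)
    m0, m1 = m * (1 - delta), m * (1 + delta)
    denom = math.log(m1 / m0)
    accept_line = (log_B + m1 - m0) / denom
    reject_line = (log_A + m1 - m0) / denom
    if n_failures <= accept_line:
        return "accept (reliable)"
    if n_failures >= reject_line:
        return "reject (unreliable)"
    return "continue testing"
```

With illustrative parameters a=30, b=0.1, c=1.0 at t=10, a low count falls below the acceptance line, a high count crosses the rejection line, and intermediate counts keep testing going, which is exactly the three-outcome behaviour that lets the SPRT stop early compared with a fixed-sample test.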
... The SPRT required the smallest average sample size when the other test conditions were the same [24]. However, the application of the SPRT in fault diagnosis is still a relatively new research direction. A Kalman filter and the SPRT algorithm were used by Wei et al. to detect soft faults in aeroengine system sensors. ...
Article
Full-text available
Conventional frequency/time decompositions such as the fast Fourier transform, principal component analysis, and independent component analysis neglect the spatial information of the signal. Framing the data as a three-way array indexed by channel, frequency, and time allows the application of parallel factor analysis, which is known as a unique multi-way decomposition. Parallel factor analysis was used to decompose the wavelet-transformed ongoing diagnostic channel–frequency–time signal, and each atom was trilinearly decomposed into spatial, spectral, and temporal signatures. The time–frequency–space characteristics of the single-source fault signal were extracted from the multi-source dynamic feature recognition of mechanical nonlinear multi-failure modes, and the corresponding relationship between the nonlinear variables, multi-fault modes, and multi-source fault features in time, frequency, and space was obtained. In this article, a new method for multi-fault condition monitoring of slurry pumps based on parallel factor analysis and the continuous wavelet transform was developed to meet the requirements of automatic monitoring and fault diagnosis on industrial process production lines. The multi-scale parallel factorization theory was studied, and a three-dimensional time–frequency–space model reconstruction algorithm for multi-source feature factors was proposed that improves the accuracy and intelligence of mechanical fault detection.
Article
Full-text available
A non-homogeneous Poisson process whose mean value function is generated by the cumulative distribution function of the half logistic distribution is considered. It is used to model the failure phenomenon of developed software. When the failure data are in the form of the number of failures in a given interval of time, the model parameters are estimated by the maximum likelihood method. The performance of the model is compared with two standard models [Goel, A. L., and Okumoto, K., "A Time Dependent Error Detection Rate Model for Software Reliability and Other Performance Measures," IEEE Trans. Reliab., Vol. 28(3), 1979, pp. 206-211; Yamada et al., "S-Shaped Reliability Growth Modeling for Software Error Detection," IEEE Trans. Reliab., Vol. 32(5), 1983, pp. 475-484] using two data sets. The release time of the software subject to a minimum expected cost is worked out and exemplified through illustrations.
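The abstract above combines two ingredients that can be sketched briefly: an NHPP mean value function built from the half-logistic CDF, and a log-likelihood for interval (grouped) failure counts that maximum likelihood estimation would maximize. The function names, the single-scale parameterization of the half-logistic CDF, and the data values below are illustrative assumptions, not the cited paper's exact notation.

```python
import math

def half_logistic_mvf(t, a, sigma):
    """Mean value function m(t) = a * F(t), where F is the half-logistic CDF
    F(t) = (1 - exp(-t/sigma)) / (1 + exp(-t/sigma)) and a is the expected
    total number of faults."""
    e = math.exp(-t / sigma)
    return a * (1.0 - e) / (1.0 + e)

def interval_log_likelihood(counts, times, a, sigma):
    """Poisson log-likelihood of grouped failure counts: counts[i] failures
    observed in the interval (times[i-1], times[i]], with times[0] > 0.
    MLE would maximize this over (a, sigma)."""
    ll = 0.0
    prev = 0.0  # m(t_0) with t_0 = 0
    for n, t in zip(counts, times):
        m_t = half_logistic_mvf(t, a, sigma)
        ll += n * math.log(m_t - prev) - math.lgamma(n + 1)  # lgamma(n+1) = ln(n!)
        prev = m_t
    ll -= prev  # subtract m(t_k), the total expected failures by the last time
    return ll
```

As a sanity check, a parameter choice whose expected total fault count roughly matches the observed total should score a higher log-likelihood than one that badly underestimates it; in practice the maximization over (a, sigma) would be done numerically.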
Chapter
This paper discusses improvements to conventional software reliability analysis models by making the assumptions on which they are based more realistic, for example by allowing for the mutual dependency of errors in a program.
Article
Methods proposed for software reliability prediction are reviewed. A case study is then presented of the analysis of failure data from a Space Shuttle software project to predict the number of failures likely during a mission, and the subsequent verification of these predictions.
Article
If "classical" testing strategies are used (no usage testing), the application of software reliability growth models may be difficult and reliability predictions can be misleading. Nevertheless, statistical methods can be successfully applied to failure data. This paper presents an approach that allows the detection of unreliable software components and the comparison of the reliability of different software versions, even if testing is done in a classical manner. A simple-to-use graphical method, based mainly on the sequential test of Wald (1947), is described. The methodology was successfully applied to a software system for tax consultants.
Article
Software reliability growth models (SRGMs) incorporating the imperfect debugging and learning phenomena of developers have recently been developed by many researchers to estimate software reliability measures such as the number of remaining faults and software reliability. However, the model parameters of the fault content rate function and the fault detection rate function of these SRGMs are often considered to be independent of each other. In practice, this assumption may not hold, and it is worth investigating what happens if it does not. In this paper, we undertake such a study and propose a software reliability model connecting the imperfect debugging and learning phenomena through a parameter common to the two functions, called the imperfect-debugging fault-detection dependent-parameter model. Software testing data collected from real applications are used to illustrate both the descriptive and predictive power of the proposed model by determining the non-zero initial debugging process.
Article
Critical business applications require reliable software, but developing reliable software is one of the most difficult problems facing the software industry. After the software is shipped, software vendors receive customer feedback about software reliability. However, by then it is too late; software vendors need to know whether their products are reliable before they are delivered to customers. Software reliability growth models help provide that information. Unfortunately, very little real data and few models from commercial applications have been published, possibly because of the proprietary nature of the data. Over the past few years, the author and his colleagues at Tandem have experimented with software reliability growth models. At Tandem, a major software release consists of substantial modifications to many products and may contain several million lines of code. Major software releases follow a well-defined development process and involve a coordinated quality assurance effort. We applied software reliability modeling to a subset of products for four major releases. The article reports on what was learned.
Author Profile:

First Author: Mr. G. Krishna Mohan is working as a Reader in the Department of Computer Science, P.B. Siddhartha College, Vijayawada. He obtained his M.C.A. degree from Acharya Nagarjuna University in 2000, his M.Tech. from JNTU, Kakinada, and his M.Phil. from Madurai Kamaraj University, and is pursuing a Ph.D. at Acharya Nagarjuna University. His research interests lie in Data Mining and Software Engineering.

Second Author: Dr. R. Satya Prasad received his Ph.D. degree in Computer Science from the Faculty of Engineering of Acharya Nagarjuna University, Andhra Pradesh, in 2007. He received a gold medal from Acharya Nagarjuna University for his outstanding performance in his Master's degree. He is currently working as Associate Professor and H.O.D. in the Department of Computer Science & Engineering, Acharya Nagarjuna University. His current research is focused on Software Engineering. He has published several papers in national and international journals.