Abstract
This paper presents the state of the art in designing, implementing, and testing web software, and assesses the reliability of web applications using software reliability growth models. It discusses the relevant reliability issues and best practices for applying growth models to web software reliability.
This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling that considers both testing effort and the fault detection rate (FDR). In general, software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing effort is expended. Practical field studies suggest that the testing-effort consumption pattern can be estimated and FDR trends predicted. A set of time-variable, testing-effort-based FDR models is developed with the inherent flexibility to capture a wide range of possible fault detection trends: increasing, decreasing, and constant. The scheme has a flexible structure and can model a wide spectrum of software development environments under various testing-effort profiles. The paper describes how the FDR can be obtained from historical records of previous releases or of similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of the model and the related parametric decomposition methods is demonstrated on several real data sets from various software projects. The evaluation results show that the proposed framework for incorporating testing effort and FDR into SRGM has fairly accurate prediction capability and depicts real-life situations more faithfully. The technique can be applied to a wide range of software systems.
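To make the modeling idea concrete, the following minimal Python sketch evaluates an NHPP mean value function driven by a testing-effort function. It assumes a Weibull-type cumulative effort curve W(t) and the constant-FDR special case m(t) = a(1 - exp(-r W(t))); the function names and parameter values are illustrative assumptions, not the paper's exact formulation, which also admits time-varying FDRs.

    import math

    def cumulative_testing_effort(t, N=100.0, b=0.05, c=1.2):
        """Weibull-type cumulative testing-effort function W(t).
        N: total expected effort; b, c: scale/shape parameters.
        (Illustrative values; in practice these are fit to effort data.)"""
        return N * (1.0 - math.exp(-b * t ** c))

    def expected_faults(t, a=500.0, r=0.03):
        """Mean value function m(t) = a * (1 - exp(-r * W(t))).
        a: total expected faults; r: fault detection rate per unit of
        testing effort.  Constant-FDR special case only."""
        return a * (1.0 - math.exp(-r * cumulative_testing_effort(t)))

    if __name__ == "__main__":
        for week in (1, 5, 10, 20, 40):
            print(f"week {week:2d}: ~{expected_faults(week):6.1f} faults expected")

In a real study the parameters a, r, N, b, and c would be estimated from failure and effort data (e.g., by least squares or maximum likelihood) rather than fixed by hand as above.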
Genetic algorithms have been used in science and engineering as adaptive algorithms for solving practical problems and as computational models of natural evolutionary systems. This brief, accessible introduction describes some of the most interesting research in the field and also enables readers to implement and experiment with genetic algorithms on their own. It focuses in depth on a small set of important and interesting topics—particularly in machine learning, scientific modeling, and artificial life—and reviews a broad span of research, including the work of Mitchell and her colleagues.
The descriptions of applications and modeling projects stretch beyond the strict boundaries of computer science to include dynamical systems theory, game theory, molecular biology, ecology, evolutionary biology, and population genetics, underscoring the exciting "general purpose" nature of genetic algorithms as search methods that can be employed across disciplines.
An Introduction to Genetic Algorithms is accessible to students and researchers in any scientific discipline. It includes many thought and computer exercises that build on and reinforce the reader's understanding of the text. The first chapter introduces genetic algorithms and their terminology and describes two provocative applications in detail. The second and third chapters look at the use of genetic algorithms in machine learning (computer programs, data analysis and prediction, neural networks) and in scientific models (interactions among learning, evolution, and culture; sexual selection; ecosystems; evolutionary activity). Several approaches to the theory of genetic algorithms are discussed in depth in the fourth chapter. The fifth chapter takes up implementation, and the last chapter poses some currently unanswered questions and surveys prospects for the future of evolutionary computation.
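As a concrete illustration of the basic algorithm the book introduces, here is a minimal generational GA in Python applied to the standard OneMax toy problem (maximize the number of 1 bits in a string). The representation, operators, and parameter values are conventional textbook choices, not ones taken from the book itself.

    import random

    def run_ga(bits=20, pop_size=30, generations=50,
               crossover_rate=0.7, mutation_rate=0.01):
        """Minimal generational GA maximizing OneMax (count of 1 bits)."""
        fitness = lambda ind: sum(ind)
        pop = [[random.randint(0, 1) for _ in range(bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # Fitness-proportionate (roulette-wheel) selection.
            weights = [fitness(ind) + 1e-9 for ind in pop]
            parents = random.choices(pop, weights=weights, k=pop_size)
            nxt = []
            for i in range(0, pop_size, 2):
                p1, p2 = parents[i], parents[(i + 1) % pop_size]
                if random.random() < crossover_rate:   # single-point crossover
                    cut = random.randint(1, bits - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    c1, c2 = p1[:], p2[:]
                for child in (c1, c2):                 # bit-flip mutation
                    for j in range(bits):
                        if random.random() < mutation_rate:
                            child[j] ^= 1
                    nxt.append(child)
            pop = nxt[:pop_size]
        return max(pop, key=fitness)

    if __name__ == "__main__":
        best = run_ga()
        print("best:", "".join(map(str, best)), "fitness:", sum(best))

The same loop of selection, crossover, and mutation underlies the machine-learning and scientific-modeling applications the book surveys; only the encoding and the fitness function change.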
A set of linear combination software reliability models that combine the results of single, or component, models is presented. It is shown that, as measured by statistical methods for determining a model's applicability to a set of failure data, a combination model tends to have more accurate short-term and long-term predictions than a component model. These models were evaluated using both historical data sets and data from recent Jet Propulsion Laboratory projects. The computer-aided software reliability estimation (CASRE) tool, which automates many reliability measurement tasks and makes it easier to apply reliability models and to form combination models, is described.
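To illustrate the combination idea, the sketch below forms a statically weighted linear combination of two hypothetical component NHPP models (Goel-Okumoto and delayed S-shaped, with made-up parameters); equal weights give an equally weighted linear combination. This is an assumption-laden sketch of the general approach, not CASRE's implementation.

    import math

    def goel_okumoto(t, a=120.0, b=0.10):
        """Goel-Okumoto NHPP mean value function (illustrative parameters)."""
        return a * (1.0 - math.exp(-b * t))

    def delayed_s_shaped(t, a=120.0, b=0.18):
        """Yamada delayed S-shaped NHPP mean value function."""
        return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

    def linear_combination(t, models, weights=None):
        """Statically weighted linear combination of component-model
        predictions; equal weights correspond to the equally weighted case."""
        if weights is None:
            weights = [1.0 / len(models)] * len(models)
        return sum(w * m(t) for w, m in zip(weights, models))

    if __name__ == "__main__":
        models = [goel_okumoto, delayed_s_shaped]
        for t in (5, 10, 20):
            print(f"t={t:2d}: combined ~{linear_combination(t, models):.1f} failures")

In practice the weights can also be chosen from each component model's recent predictive accuracy on the failure data, which is the intuition behind the dynamically weighted combinations the paper evaluates.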