Requirements for Integrating
Defect Prediction and Risk-based Testing
Rudolf Ramler
Software Competence Center Hagenberg
Softwarepark 21, A-4232 Hagenberg, Austria
rudolf.ramler@scch.at
Michael Felderer
University of Innsbruck
Technikerstrasse 21a, A-6020 Innsbruck, Austria
michael.felderer@uibk.ac.at
Abstract—Defect prediction is a powerful method that provides information about the likely defective parts in a software system and can be applied to improve the effectiveness and efficiency
of software quality assurance. This makes defect prediction a
perfect candidate to be combined with risk-based testing to opti-
mally guide testing activities towards risky parts of software. As
a first step towards a successful combination, this paper presents
requirements that have to be fulfilled for enabling the synergies
between defect prediction and risk-based testing.
Keywords—risk-based testing; defect prediction; software test-
ing; software test process.
I. INTRODUCTION
The prediction of defects in software systems has become a
frequently and widely addressed topic in research. The driving
scenario is usually summarized as follows: “Software testing
activities play a critical role in the production of dependable
systems, and account for a significant amount of resources in-
cluding time, money, and personnel. If testing can be more
precisely focused on the places in a software system where
faults are likely to be, then available resources will be used
more effectively and efficiently, resulting in more reliable sys-
tems produced at decreased cost.” [11]
The rationale of risk-based testing is along the same lines.
Increasing effectiveness and efficiency (by focusing testing on
the “most critical” parts of the software system under test) is
the main motivation for adopting a risk-based approach in
software testing. As reported by a previous study on risk-based
testing in industry, “risk information is used in testing in two
ways: (1) As a suggestion to extend the scope of testing to-
wards risky areas where critical problems can be found, and
(2) as a guideline to optimally adjust the focus of testing to
risky areas where most critical problems are located” [2].
The shared objectives and the congruent underlying ra-
tionale of both approaches suggest that defect prediction and
risk-based testing are promising candidates for a combination
where the whole may be greater than the sum of its parts. De-
fect prediction is a method that provides information about the
likely defective components in a future version of a software
system. Risk-based testing uses risk information for guiding
the activities in the software test process. The goal of this paper
is to outline how the information provided by defect prediction
can be leveraged in risk-based testing.
The remainder of the paper is structured as follows. Section
II provides an overview of defect prediction and shows exam-
ples of successful applications of defect prediction in practice.
Section III explains the concepts of risk-based testing including
open issues to be addressed by the integration of defect predic-
tion. Section IV describes how defect prediction and risk-based
testing can be combined. The section also discusses require-
ments that have to be fulfilled by defect prediction in order to
enable exploiting synergies. Section V concludes with an out-
look on our plans for future work.
II. DEFECT PREDICTION
Defect prediction has been proposed as a way to indicate
the likely defective parts (e.g., modules, components, files, or
classes) in a future version of a software system [9][10][11].
These likely defective parts are predicted based on software
metrics, which provide quantitative descriptions of software
modules. The metrics are, for example, derived from static
properties of the source code of the analyzed system (e.g.,
[10]), or they capture various aspects of the project’s develop-
ment process and change history (e.g., [9]). Supervised learn-
ing techniques (classification trees, neural networks, etc.) are
commonly used to construct prediction models from these
software metrics [7]. The resulting prediction models define
the relationship between the software metrics as independent
variables and the modules’ defectiveness in terms of defect
count or binary classification as dependent variable.
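As a toy illustration of this supervised-learning setup, the following sketch trains a minimal classifier (a one-level decision stump) on per-module metrics; real studies use richer learners such as classification trees or neural networks, and the metric names and data here are invented for illustration.

```python
# Each row: ((lines_of_code, cyclomatic_complexity, past_changes), label)
# where label 1 means the module turned out to be defective.
TRAIN = [
    ((120,  4,  2), 0),
    ((950, 31, 17), 1),
    ((300,  9,  5), 0),
    ((780, 25, 12), 1),
    ((210,  6,  3), 0),
    ((640, 19,  9), 1),
]

def train_stump(rows):
    """Pick the (feature, threshold) split with the fewest misclassifications."""
    best = None
    for f in range(len(rows[0][0])):
        for thr in sorted({x[f] for x, _ in rows}):
            errors = sum((x[f] > thr) != bool(y) for x, y in rows)
            if best is None or errors < best[2]:
                best = (f, thr, errors)
    return best[0], best[1]

def predict(stump, metrics):
    """Classify a module as defective (1) or defect-free (0)."""
    f, thr = stump
    return int(metrics[f] > thr)

stump = train_stump(TRAIN)
print(predict(stump, (820, 27, 14)))  # large, complex, frequently changed -> 1
print(predict(stump, (150,  5,  1)))  # small, simple, stable -> 0
```

The metrics act as independent variables and the binary defectiveness label as the dependent variable, exactly as in the prediction models described above, only drastically simplified.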
Numerous studies have shown that a relationship can be es-
tablished between some software metrics and the defectiveness
of the modules of a software system and, that this relationship
can be exploited for making predictions. For example, 208
such studies have been analyzed in a systematic literature re-
view by Hall et al. [5]. The objective of most of these studies
has been to explore and investigate specific aspects relevant for
constructing defect prediction models in order to refine and
advance defect prediction techniques. Empirical methods in-
cluding data and artifacts from real-world projects are used for
evaluating the results. However, only a few studies report on
the application of defect prediction in practice and share in-
sights in how the prediction results have been used to support
software testing activities.
The following cases are examples of empirical studies that
provide evidence about the use of defect prediction to support
software testing activities in an industrial context. The common
goals were to increase the efficiency and effectiveness of soft-
ware testing. However, the cases also document that defect
prediction and the results it generates have to be integrated in
the wider context of the test process in order to become usable
and useful for practitioners. Predicting the likelihood of defects
is a means to an end rather than an end in itself.
Ostrand et al. [11] describe the application of defect predic-
tion for a large software system developed by AT&T. They
analyzed a total of seventeen successive releases related to
more than four years of continuous field use. The authors used
defect prediction to support testers by providing information, prior to testing, on where in their software releases faults were likely to be.
Their goal was to provide testers with a practical and reasona-
bly accurate measure of which files are most likely to contain
the largest number of faults, so the testers can direct their ef-
forts to these fault-prone files. The prediction approach they
used assigned a predicted fault count to each file of a software
release, based on the file’s structure and its history over the
previous releases. The prediction results were used to rank the
files from most to least fault-prone, without choosing an arbi-
trary cutoff point for fault-proneness. The ranking allows test-
ers to focus their efforts and by so doing, to find defects more
quickly and to conduct testing more efficiency.
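The ranking step described above can be sketched in a few lines; the file names and predicted fault counts below are invented for illustration.

```python
# Hypothetical predicted fault counts per file for an upcoming release.
predicted_faults = {
    "parser.c": 12.4,
    "ui/dialog.c": 1.3,
    "net/socket.c": 7.9,
    "util/str.c": 0.2,
}

def rank_files(predictions):
    """Order files from most to least predicted faults (no cutoff point)."""
    return sorted(predictions, key=predictions.get, reverse=True)

print(rank_files(predicted_faults))
# -> ['parser.c', 'net/socket.c', 'ui/dialog.c', 'util/str.c']
```

Because the whole list is ranked rather than cut off, testers can simply work down from the top until the testing budget is exhausted.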
Li et al. [8] report experiences and initial empirical results
from using defect prediction for initiating risk management
activities at ABB. The authors examined data from two real-
time commercial software systems, a monitoring system and a
controller management system, by analyzing releases spanning
about 5 years and 9 years of development. The results were
used to improve testing, to support maintenance planning, and
to initiate process improvement. Regarding testing, their goal
was to increase the effectiveness of testing in order to detect
(and remove) defects that customers may encounter. Prediction
had been used to determine the fault proneness of different
areas, i.e., application groupings (sub-systems) and operating
system groupings, to prioritize product testing. Making pre-
dictions for identifying defect-prone sub-systems was not in-
tended to replace expert knowledge. The results have been
used to complement expert intuition by providing quantitative
evidence. It allowed test engineers to back their decisions and
recommendation with quantitative data. When applied in test-
ing, the prediction results helped to uncover additional defects
in a sub-system previously thought to be low-defect.
Taipale et al. [17] present a pilot study where they devel-
oped a defect prediction model and explored different ways of
making the prediction results usable for the practitioners, e.g.,
via a commit hotness ranking and the visualization of interac-
tions among teams through errors. The project that has been
used in the study was a software component of a mission-
critical embedded system project with about 60 developers
working on it. The research was initiated from the perspective
of defect prediction, and then extended into finding ways of
presenting the data collected throughout the process and the
outcomes of the predictions for use of the developers. The au-
thors were able to construct prediction models with good per-
formance, in the range of related studies. The constructed mod-
els provided accurate information about the most error prone
parts of the software. However, the feedback from the practitioners showed that these results alone do not suffice to have an impact on their daily work. Additional effort was necessary to create practical representations from the prediction results in order for them to become of value for the project.
III. RISK-BASED TESTING
Risk-based testing (RBT) is a testing approach which con-
siders risks of the software product as the guiding factor to
support decisions in all phases of the test process [4]. In this
section we present the concept of risk in software testing as
well as a process for risk-based testing, which implements the
current software testing standard ISO/IEC/IEEE 29119. This
standard explicitly involves risks as an integral part of the test-
ing process but lacks concrete implementation guidelines.
A. Concept of Risk in Software Testing
A risk is a factor that could result in future negative conse-
quences and is usually expressed by its probability and impact
[6]. In the context of testing, probability is typically determined by
the likelihood that a failure assigned to a risk occurs, and im-
pact is determined by the cost or severity of a failure if it oc-
curs in operation. The resulting risk value or risk exposure is
assigned to a risk item. A risk item is anything of value (i.e., an
asset) associated with the system under test, for instance, a
requirement, a feature, or a component. Risk items can be pri-
oritized based on their risk exposure values and assigned to
risk levels. Risk levels are often defined in the form of a risk ma-
trix combining probability and impact values. Figure 1 shows a
3×3 risk matrix visualizing the risks in terms of the estimated
probability and impact associated with four risk items (A to D).
The fields of the matrix correspond to five risk levels, i.e., level
I (low probability and low impact) to level V (high probability
and high impact). However, even though the risk items B and C
are at the same risk level (level III), risk item B with high im-
pact and low probability may be considered more critical and
may therefore be treated with a different testing approach than
risk item C, which has a high probability and low impact.
Fig. 1. Example 3×3 risk matrix defining five risk levels (I-V) and showing four risk items (A-D); the matrix axes are estimated probability and estimated impact, each on a low/medium/high scale.
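The matrix lookup of Fig. 1 can be sketched as follows, assuming the common additive scheme in which the risk level is the sum of the probability and impact indices; this scheme reproduces the example in the text, where items B and C land on the same level III despite opposite probability/impact profiles.

```python
# Assumed low/medium/high scales and the five levels I-V of the 3x3 matrix.
SCALE = {"low": 0, "medium": 1, "high": 2}
LEVELS = ["I", "II", "III", "IV", "V"]

def risk_level(probability, impact):
    """Map an item's probability/impact estimates to a risk level of the matrix."""
    return LEVELS[SCALE[probability] + SCALE[impact]]

print(risk_level("low", "high"))   # item B: high impact, low probability
print(risk_level("high", "low"))   # item C: low impact, high probability
print(risk_level("high", "high"))  # level V corner of the matrix
```

Both B and C evaluate to "III", illustrating why items at the same level may still warrant different testing approaches.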
B. Process for Risk-based Testing
The risk matrix is an abstraction layer. It decouples collect-
ing and computing risk information (i.e., raw probability and
impact values) from operational testing activities concerned
with concrete test scenarios and test cases. The risk infor-
mation compiled in the form of the risk matrix is used as a basis for
many of the consecutive decisions in the testing process and,
eventually, for driving testing activities to generate successful
results including revealing new defects and beyond that
creating valuable insights for decision makers. The information
in the risk matrix can be adopted in the test process to support
decisions in all of its phases, i.e., test planning, test design, test
implementation, test execution and test evaluation [6]. Figure 2
shows the central role of the risk matrix in the risk-based test-
ing process. It links probability and impact estimates to the
phases of the test process. Since the risk information is provid-
ed for individual risk items, each item can be treated according
to its risk level by selecting appropriate testing measures and
adjusting the testing intensity.
Fig. 2. Risk matrix and risk-based testing process.
C. Probability Estimation
Probability values are estimated for each risk item. In the
context of testing the probability value expresses the likelihood
of defectiveness of a risk item, i.e., the likelihood that a fault
exists in a specific module that may lead to a failure. In prac-
tice, probability estimation often relies on data from defect
classification and the software system's defect history [14].
The estimation of the fault probability is usually performed in
an informal way based on experience and/or heuristics, e.g., the
number of bug reports in the past is applied as basic predictor
for the number of future faults. In [14] an approach to estimate
the risk probability of components by counting their assigned
defects weighted according to their severity is presented. A
recent study [13] on informal manual risk estimation based on
expert opinions highlights problems with regard to the timely
availability of experts, the proneness to estimation bias, as well
as the reliability of the results. Furthermore, the study shows
that it requires several experts working together, as in large
systems there is often no single person with sufficient
knowledge to estimate the probability for all parts. Finally,
manual methods are time consuming, which is particularly
noticeable when repeated estimation is required as in the case
of regression testing or continuous delivery.
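The severity-weighted counting idea attributed to [14] above can be sketched as follows; the severity weights and defect data are invented for illustration, not taken from that work.

```python
# Assumed severity weights: more severe past defects contribute more
# to a component's estimated fault probability.
SEVERITY_WEIGHT = {"minor": 1, "major": 3, "critical": 9}

# Hypothetical defect history: (component, severity) per reported defect.
defects = [
    ("billing", "critical"),
    ("billing", "major"),
    ("reporting", "minor"),
    ("reporting", "minor"),
]

def weighted_counts(defect_list):
    """Sum severity-weighted defect counts per component."""
    scores = {}
    for component, severity in defect_list:
        scores[component] = scores.get(component, 0) + SEVERITY_WEIGHT[severity]
    return scores

print(weighted_counts(defects))  # {'billing': 12, 'reporting': 2}
```

Such a score is only a proxy for future defectiveness, which is why the text argues for complementing it with automatically measurable metric data and learned predictors.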
To overcome these problems, manual estimation approach-
es should be complemented by including automatically meas-
urable metric data. For instance, [1] uses a Factor-Criteria-
Metrics approach to integrate manually and automatically
measured metrics for estimating the likelihood of defective-
ness. More concretely, in this approach the criteria code com-
plexity and functional complexity, which are measured auto-
matically via static analysis and manually via expert estima-
tion, respectively, are integrated to estimate the factor proba-
bility for the risk item type system components. The setup,
maintenance, and interpretation of such integrated approaches
soon become complex and costly. Enhancing them by estab-
lished and mature learning-based defect prediction approaches
can further improve efficiency and accuracy of probability es-
timation for testing purposes.
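A minimal sketch of such a Factor-Criteria-Metrics integration is given below: one criterion measured automatically (code complexity from static analysis) and one estimated manually (functional complexity from an expert) are normalized and combined into a probability factor. The normalization ranges and the equal weighting are assumptions for illustration, not the concrete scheme of [1].

```python
def normalize(value, lo, hi):
    """Map a raw metric value onto [0, 1], clamping to the assumed range."""
    return (min(max(value, lo), hi) - lo) / (hi - lo)

def probability_factor(cyclomatic, expert_rating):
    """Combine an automatic and a manual criterion into one probability factor."""
    auto = normalize(cyclomatic, 1, 50)    # static-analysis value, assumed range 1..50
    manual = normalize(expert_rating, 1, 5)  # expert estimate on an assumed 1..5 scale
    return (auto + manual) / 2

print(round(probability_factor(26, 4), 2))  # moderately complex code, high expert rating
```

Even this tiny example hints at the maintenance cost noted above: every range, scale, and weight has to be defined and kept up to date, which is where learning-based prediction can help.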
IV. COMBINING DEFECT PREDICTION AND
RISK-BASED TESTING
Defect prediction provides valuable information about the
likely defectiveness of the individual parts of a software sys-
tem. The predicted values are, for example, the number of de-
fects in a module or its classification as defective/defect-free.
In risk-based testing, this information can be used as basis for
estimating the risk probability associated with the system parts.
It expresses the likelihood that the software system will fail,
which is related to defectiveness (i.e., a defect occurs), even
though the nature of this relationship can be complex [12].
Risk-based testing provides a framework where the predic-
tion results can be integrated in a form in which they become appli-
cable for human testers. Furthermore, testing activities incorpo-
rate several influence factors in prioritization and decision
making. The likelihood of defectiveness is one of these factors.
Although it is an important factor, it has to be combined with
other factors such as the impact of the defect for customers and
end users. In risk-based testing this combination is supported
by risk visualizations, incorporating the knowledge of the hu-
man testers, and interactive assignment of risk items to the risk
levels in a risk matrix.
To realize the benefits of defect prediction for risk-based
testing, an interface between the two processes has to be estab-
lished and the provided prediction information needs to be
brought into alignment with the information needs of the testing
process. The following requirements for making defect predic-
tions have been derived from our experience with the risk-
based testing process described in [2][3][14] as well as practi-
cal experiences with defect prediction in industry projects [15].
Prediction results must be associated with risk items. A
central task in any test process is eliciting the “big picture” of
the system under test and systematically identifying the indi-
vidual system parts and aspects that need to be tested. The re-
sult of this step in risk-based testing is the list of risk items.
The risk items are the entities the testers are familiar with (e.g.,
features, sub-systems, configurations). They are at the right
level of granularity to be handled in testing and for associating
them with the different risk factors. Thus, defect prediction has
to provide information that can be related to the risk items used
in a specific project, e.g., directly or by aggregation in case of a
more complex, hierarchical relationship.
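The aggregation case mentioned above can be sketched as follows: fine-grained file-level predictions are rolled up to the coarser risk items testers work with. The file-to-item mapping and the worst-file-wins aggregation are invented assumptions for illustration.

```python
# Hypothetical file-level predictions (likelihood of defectiveness, 0..1).
file_predictions = {
    "auth/login.c": 0.8,
    "auth/token.c": 0.4,
    "report/pdf.c": 0.2,
}

# Assumed mapping of files to the risk items used in the project.
item_of_file = {
    "auth/login.c": "Authentication",
    "auth/token.c": "Authentication",
    "report/pdf.c": "Reporting",
}

def aggregate(predictions, mapping, combine=max):
    """Roll file-level predictions up to risk items (here: worst file wins)."""
    items = {}
    for f, p in predictions.items():
        item = mapping[f]
        items[item] = combine(items.get(item, p), p)
    return items

print(aggregate(file_predictions, item_of_file))
# -> {'Authentication': 0.8, 'Reporting': 0.2}
```

Other combine functions (mean, sum of predicted defect counts) may fit other risk-item granularities; the point is that prediction output must be expressible at the level of the project's risk items.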
Predictions must be made for different types of risk.
Testing needs to cover various functional aspects (e.g., correct-
ness, completeness, appropriateness) as well as non-functional
aspects (e.g., performance, security, usability) depending on
the system under test and its application domain. These differ-
ent aspects of software quality form a separate dimension that
has to be considered throughout all steps of the testing process
starting from defect reporting to selecting testing methods and
techniques. Adjusting the testing activities to these quality as-
pects is an important step in risk-based testing. Consequently,
defect prediction has to distinguish different risk types when
predicting the probability of defectiveness of an item (e.g.,
[18]). Prediction results should provide information to deter-
mine the overall risk associated with a quality aspect (e.g., to
decide how much load testing will be necessary in the upcom-
ing release) and how this type of risk relates to the different
parts of the system under test (e.g., to decide where to test).
Predictions must be about the future. In order to be of
practical use, predictions have to provide information about the
defectiveness before this information becomes available via
other sources like inspection or testing. Thus, for example, predictions have to be made at the end of the development phase to estimate the expected number and location of defects in support of test management decisions; with the advent of continuous delivery, predictions furthermore have to be continuously updated and incorporated into release decisions.
Prediction results must be compatible with risk levels.
The risk items under test are assigned to risk levels (Fig. 1),
which partition the spectrum of risk values and cluster risk
items with similar impact and probability estimates for testing.
Risk items at a particular risk level are considered equally risky
and are subject to the same intensity of testing. Defect predic-
tion applies regression models predicting the number of de-
fects, classifiers that categorize risk items in bins, etc. The type
and granularity of the prediction results have to be compatible
with the definition of the risk levels as the values expressing
the likely defectiveness of the risk items have to be mapped to
the levels in the risk matrix. Defining and adjusting risk levels
to appropriately cluster risk items is often a manual step that
should be supported by fine-grained yet aggregable probability
information coming from predictions.
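The compatibility requirement above amounts to a binning step: fine-grained predicted values must be mapped onto the probability scale of the risk matrix. The bin boundaries below are invented for illustration.

```python
def probability_bin(predicted_defects):
    """Map a predicted defect count to an assumed low/medium/high matrix class."""
    if predicted_defects < 1:
        return "low"
    if predicted_defects < 5:
        return "medium"
    return "high"

print([probability_bin(n) for n in (0.3, 2.0, 8.5)])
# -> ['low', 'medium', 'high']
```

Keeping the raw predicted values alongside the bins preserves the fine-grained, aggregable information the text asks for, so that risk levels can be re-cut when the matrix definition changes.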
Prediction results must be useful for humans. In the end,
there are human decision makers who need to understand and
interpret the (automatically produced) predictions to (manual-
ly) make timely and sound decisions in context of software
testing. The decision makers are usually experts in their field
such as test managers or quality engineers with detailed
knowledge about the system under test and profound experi-
ence acquired over many years. Defect prediction is not in-
tended to replace their expert knowledge. The prediction re-
sults should be a useful complement by providing easily acces-
sible, up-to-date, quantitative evidence and new insights at an
acceptable cost-benefit ratio.
V. CONCLUSIONS AND FUTURE WORK
Making accurate predictions about the defectiveness of the
parts in a future version of a system is a complex endeavor.
Making the results applicable for human testers requires addi-
tional steps and their integration in a test process amenable to
incorporating defectiveness information. Risk-based testing
provides such a process and can serve as a framework for lev-
eraging prediction results into successful test results.
In this paper we showed how defect prediction and risk-
based testing fit together, and we discussed requirements for
the interface of the two processes. These requirements have
been derived from our experience with both defect prediction
and risk-based testing, gained over several years and from ap-
plications in industry projects. These requirements represent
the starting point for future work. We plan to investigate the
different requirements in a literature study on defect prediction
issues and by studying real-world applications where defect
prediction provides input for risk-based testing.
REFERENCES
[1] M. Felderer, C. Haisjackl, R. Breu, and J. Motz, “Integrating manual and automatic risk assessment for risk-based testing,” Software Quality Days (SWQD), LNBIP 94, Springer, 2012.
[2] M. Felderer and R. Ramler, “A multiple case study on risk-based testing in industry,” International Journal on Software Tools for Technology Transfer (STTT), 16(5), pp. 609-625, October 2014.
[3] M. Felderer and R. Ramler, “Risk orientation in software testing processes of small and medium enterprises: an exploratory and comparative study,” Software Quality Journal, DOI:10.1007/s11219-015-9289-z, 2015.
[4] M. Felderer and I. Schieferdecker, “A taxonomy of risk-based testing,” International Journal on Software Tools for Technology Transfer (STTT), 16(5), pp. 559-568, 2014.
[5] T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, “A systematic literature review on fault prediction performance in software engineering,” IEEE Transactions on Software Engineering, 38(6), pp. 1276-1304, 2012.
[6] ISTQB, “Standard glossary of terms used in software testing. Version 2.1,” International Software Testing Qualifications Board, 2010.
[7] S. Lessmann, B. Baesens, C. Mues, and S. Pietsch, “Benchmarking classification models for software defect prediction: a proposed framework and novel findings,” IEEE Transactions on Software Engineering, 34(4), pp. 485-496, July-Aug. 2008.
[8] P. L. Li, J. Herbsleb, M. Shaw, and B. Robinson, “Experiences and results from initiating field defect prediction and product test prioritization efforts at ABB Inc.,” 28th Int. Conf. on Software Engineering (ICSE), 2006.
[9] L. Madeyski and M. Jureczko, “Which process metrics can significantly improve defect prediction models? An empirical study,” Software Quality Journal, 23(3), pp. 393-422, 2015.
[10] T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, “Defect prediction from static code features: current results, limitations, new approaches,” Automated Software Engineering, 17(4), pp. 375-407, 2010.
[11] T. J. Ostrand, E. J. Weyuker, and R. M. Bell, “Where the bugs are,” ACM SIGSOFT Int. Symposium on Software Testing and Analysis (ISSTA), 2004.
[12] R. Ramler, “The impact of product development on the lifecycle of defects,” Workshop on Defects in Large Software Systems (co-located with ISSTA), 2008.
[13] R. Ramler and M. Felderer, “Experiences from an initial study on risk probability estimation based on expert opinion,” Joint Conf. of 23rd Int. Workshop on Software Measurement and 8th Int. Conf. on Software Process and Product Measurement (IWSM-MENSURA), 2013.
[14] R. Ramler and M. Felderer, “A process for risk-based test strategy development and its industrial evaluation,” Product-Focused Software Process Improvement (PROFES), LNCS 9459, Springer, 2015.
[15] R. Ramler, K. Wolfmaier, E. Stauder, F. Kossak, and T. Natschläger, “Key questions in building defect prediction models in practice,” Product-Focused Software Process Improvement (PROFES), LNBIP 32, Springer, 2009.
[16] F. Redmill, “Theory and practice of risk-based testing,” Software Testing, Verification and Reliability, 15(1), pp. 3-20, Wiley, 2005.
[17] T. Taipale, M. Qvist, and B. Turhan, “Constructing defect predictors and communicating the outcomes to practitioners,” ACM/IEEE Int. Symposium on Empirical Software Engineering and Measurement (ESEM), 2013.
[18] T. Zimmermann, N. Nagappan, and L. Williams, “Searching for a needle in a haystack: predicting security vulnerabilities for Windows Vista,” 3rd Int. Conf. on Software Testing, Verification and Validation (ICST), 2010.
... It is important to highlight that as developers feel that their requirements are clear (i8), they will not opt to interact with their co-workers unless needed [22]. However, in agile development is common that user stories (requirements) are not always clear enough [61]. Thus, it could trigger constant interactions among developers and product owners. ...
Article
Full-text available
Occupational stress refers to job‐related uneasiness and anxiety, which affect people's emotional or physical health. Although occupational stress has been studied in several industries, it has remained largely unexplored in software developers, particularly in emerging economies such as Mexico. In this work, we propose a set of measures for supporting the assessment of occupational stress in software developers, which concurs with the types of tasks performed by software developers daily. For this, we first identified several stressors found in the literature. Then, we carried out 10 semi‐structured interviews with novice software developers to further understand stressors at the workplace. Afterwards, we conducted a study with 30 novice software developers for associating stress with workplace measures. From our work, we identified the workload, mental work fatigue, and work distraction as relevant measures associated with a certain level of stress in software developers. Our results suggest that some indicators can be used to monitor measures that are associated with occupational stress in software developers.
Chapter
Agile software development (agile SD) has evolved and originated to solve the issues of industries due to several changes in the requirements. Industries have also recognized and acknowledged this reality. In addition, with the suitable process alignment and control, changing or evolving requirements could be efficiently solved and managed for satisfying the project stakeholders through agile method. Hence, this study attempts to review requirement management (RM) in agile environment which is a significant part that confirms the successful accomplishment of goals pertaining to the product development. Requirement prioritization (RP) is important to assure RM as it improvises planning, scheduling and budget control. This study analyses RP in agile. The study also discusses the uses of agile requirement engineering. A comparative analysis is carried out to analyse various RM steps pursued in different agile software development (SD) methods and quality requirement (QR) management practices relying on agile environment. Several agile SD methods considered for analysis include SCRUM, lean software development (LSD), Kanban, dynamic system development method (DSDM), extreme programming (XP), feature driven development (FDD), adaptive software development (ASD) and agile unified process (AUP). On the other hand, various QR management practices considered for analysis include client engagement, direct interaction, provisioning user stories, uninterrupted planning, combining the requirement analysis, review meetings, etc. This study also discusses the RM challenges in agile methodology by analysing various existing methods. Hence, this study affords detailed information associated with current traditional RM practices that will assist the software practitioners to choose appropriate methods in agile SD for handling persistent requirement changes.KeywordsRequirement managementSoftware developmentAgileRequirement prioritization (RP)Quality requirement (QR)
Chapter
Software requirements changes become necessary due to changes in customer requirements and changes in business rules and operating environments; hence, requirements development, which includes requirements changes, is a part of a software process. Previous studies have shown that failing to manage software requirements changes well is a main contributor to project failure. Given the importance of the subject, there is a plethora of efforts in academia and industry that discuss the management of requirements change in various directions, ways, and means. This chapter provided information about the current state-of-the-art approaches (i.e., Disciplined or Agile) for RCM and the research gaps in existing work. Benefits, risks, and difficulties associated with RCM are also made available to software practitioners who will be in a position of making better decisions on activities related to RCM. Better decisions can lead to better planning, which will increase the chance of project success.
Article
Non-functional requirements (NFR), which include performance, availability, and maintainability, are vitally important to overall software quality. However, research has shown NFRs are, in practice, poorly defined and difficult to verify. Continuous software engineering practices, which extend agile practices, emphasize fast paced, automated, and rapid release of software that poses additional challenges to handling NFRs. In this multi-case study we empirically investigated how three organizations, for which NFRs are paramount to their business survival, manage NFRs in their continuous practices. We describe four practices these companies use to manage NFRs, such as offloading NFRs to cloud providers or the use of metrics and continuous monitoring, both of which enable almost real-time feedback on managing the NFRs. However, managing NFRs comes at a cost—as we also identified a number of challenges these organizations face while managing NFRs in their continuous software engineering practices. For example, the organizations in our study were able to realize an NFR by strategically and heavily investing in configuration management and infrastructure as code, in order to offload the responsibility of NFRs; however, this offloading implied potential loss of control. Our discussion and key research implications show the opportunities, trade-offs, and importance of the unique give-and-take relationship between continuous software engineering and NFRs. Research artifacts may be found at https://doi.org/10.5281/zenodo.3376342 .
Conference Paper
The insertion of agile practices in software development has increased exponentially. Both industry and academia constantly face challenges related to requirements specification and testing in this context. In this study, we conducted a systematic mapping of the literature to investigate the practices, strategies, techniques, tools, and challenges encountered in the association of Requirements Engineering with Software Testing (REST) in the agile context. By searching seven major bibliographic databases, we identified 1,099 papers related to Agile REST. Based on the systematic mapping guidelines, we selected 14 of them for a more specific analysis. In general, the main findings include the fact that weekly meetings should be held to establish frequent communication with stakeholders. Also, most projects adopt use cases as conceptual models and perform use case detailing. Test cases are an important artifact, with test case design as a software testing practice. For the automation of test cases, FIT tables have been recommended as an artifact. Finally, proper project documentation constitutes a critical basis in Agile REST.
Conference Paper
Full-text available
Risk-based testing has a high potential to improve the software test process as it helps to optimize the allocation of resources and provides decision support for the management. But for many organizations the integration of risk-based testing into an existing test process is a challenging task. An essential first step when introducing risk-based testing in an organization is to establish a risk-based test strategy which considers risks as the guiding factor to support all testing activities in the entire software lifecycle. In this paper we address this issue by defining a process for risk-based test strategy development and refinement. The process has been created as part of a research transfer project on risk-based testing that provided the opportunity to get direct feedback from industry and to evaluate the ease of use, usefulness and representativeness of each process step together with five software development companies. The findings are that the process is perceived as useful and moderately easy to use, i.e., some steps involve noticeable effort. For example, the effort for impact estimation is considered high, whereas steps that can be based on existing information are perceived as easy, e.g., deriving probability estimates from established defect classifications. The practical application of the process in real-world settings supports the representativeness of the outcome.
Article
Full-text available
Risk orientation in testing is an important means to balance quality, time-to-market, and cost of software. Especially for small and medium enterprises (SME) under high competitive and economic pressure, risk orientation can help to focus testing activities on critical areas of a software product. Although several risk-based approaches to testing are available, the topic has so far not been investigated in the context of SME, where risks are often associated with business critical issues. This article fills the gap and explores the state of risk orientation in testing processes of SME. Furthermore, it compares the state of risk-based testing in SME to the situation in large enterprises. The article is based on a multiple case study conducted with five SME. A previous study on risk-based testing in large enterprises is used as reference for investigating the differences between risk orientation in SME and large enterprises. The findings of our study show that a strong business focus, the use of informal risk concepts, as well as the application of risk knowledge to reduce testing cost and time are key differences of risk-based testing in SME compared to large enterprises.
Article
Full-text available
Software testing has often to be done under severe pressure due to limited resources and a challenging time schedule facing the demand to assure the fulfillment of the software requirements. In addition, testing should unveil those software defects that harm the mission-critical functions of the software. Risk-based testing uses risk (re-)assessments to steer all phases of the test process to optimize testing efforts and limit risks of the software-based system. Due to its importance and high practical relevance, several risk-based testing approaches were proposed in academia and industry. This paper presents a taxonomy of risk-based testing providing a framework to understand, categorize, assess, and compare risk-based testing approaches to support their selection and tailoring for specific purposes. The taxonomy is aligned with the consideration of risks in all phases of the test process and consists of the top-level classes risk drivers, risk assessment, and risk-based test process. The taxonomy of risk-based testing has been developed by analyzing the work presented in available publications on risk-based testing. Afterwards, it has been applied to the work on risk-based testing presented in this special section of the International Journal on Software Tools for Technology Transfer.
Article
Full-text available
The knowledge about the software metrics which serve as defect indicators is vital for the efficient allocation of resources for quality assurance. It is the process metrics, although sometimes difficult to collect, which have recently become popular with regard to defect prediction. However, in order to rightly identify the process metrics which are actually worth collecting, we need evidence validating their ability to improve product metric-based defect prediction models. This paper presents an empirical evaluation in which several process metrics were investigated in order to identify the ones which significantly improve defect prediction models based on product metrics. Data from a wide range of software projects (both industrial and open source) were collected. The predictions of the models that use only product metrics (simple models) were compared with the predictions of the models which used product metrics as well as one of the process metrics under scrutiny (advanced models). To decide whether the improvements were significant or not, statistical tests were performed and effect sizes were calculated. The advanced defect prediction models trained on a data set containing product metrics and additionally the Number of Distinct Committers (NDC) were significantly better than the simple models without NDC; the effect size was medium and the probability of superiority (PS) of the advanced models over the simple ones was high (p = .016, r = -.29, PS = .76), which is a substantial finding useful in defect prediction. A similar result with slightly smaller PS was achieved by the advanced models trained on a data set containing product metrics and additionally all of the investigated process metrics (p = .038, r = -.29, PS = .68). The advanced models trained on a data set containing product metrics and additionally the Number of Modified Lines (NML) were significantly better than the simple models without NML, but the effect size was small (p = .038, r = .06). Hence, it is reasonable to recommend the NDC process metric in building defect prediction models.
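As a rough illustration of the kind of comparison the study describes, the sketch below fits a "simple" model on product metrics only and an "advanced" one that additionally uses an NDC-like process metric, then compares both on AUC. The data, metric ranges, and the choice of logistic regression are all invented for illustration; the study's actual data sets and modelling details are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 400
loc = rng.integers(50, 2000, n)    # product metric: lines of code
cplx = rng.integers(1, 40, n)      # product metric: cyclomatic complexity
ndc = rng.integers(1, 12, n)       # process metric: number of distinct committers

# synthetic ground truth in which defect-proneness partly depends on NDC
logit = 0.001 * loc + 0.05 * cplx + 0.4 * ndc - 4.0
defective = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# simple model: product metrics only; advanced model: product metrics + NDC
simple_X = np.column_stack([loc, cplx])
advanced_X = np.column_stack([loc, cplx, ndc])
simple = LogisticRegression(max_iter=5000).fit(simple_X, defective)
advanced = LogisticRegression(max_iter=5000).fit(advanced_X, defective)

auc_simple = roc_auc_score(defective, simple.predict_proba(simple_X)[:, 1])
auc_advanced = roc_auc_score(defective, advanced.predict_proba(advanced_X)[:, 1])
```

On data where the process metric genuinely carries signal, as constructed here, the advanced model separates defective from clean modules better than the simple one, which is the effect the study quantifies with significance tests and effect sizes.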
Article
Full-text available
In many development projects, testing has to be conducted under severe pressure due to limited resources and a challenging time schedule. Risk-based testing, which utilizes identified risks of the system for testing purposes, has a high potential to improve testing as it helps to optimize the allocation of resources and provides decision support for management. But for many organizations, the integration of a risk-based approach into established testing activities is a challenging task, and there are several options to do so. In this article, we analyze how risk is defined, assessed, and applied to support and improve testing activities in projects, products, and processes. We investigate these questions empirically by a multiple case study of currently applied risk-based testing activities in industry. The case study is based on three cases from different backgrounds, i.e., a test project in context of the extension of a large Web-based information system, product testing of a measurement and diagnostic equipment for the electrical power industry, as well as a test process of a system integrator of telecommunication solutions. By analyzing and comparing these different industrial cases, we draw conclusions on the state of risk-based testing and discuss possible improvements.
Conference Paper
Full-text available
Background: Determining the factor probability in risk estimation requires detailed knowledge about the software product and the development process. Basing estimates on expert opinion may be a viable approach if no other data is available. Objective: In this paper we analyze initial results from estimating the risk probability based on expert opinion to answer the questions: (1) Are expert opinions consistent? (2) Do expert opinions reflect the actual situation? (3) How can the results be improved? Approach: An industry project serves as the case for our study. In this project, six members provided initial risk estimates for the components of a software system. The resulting estimates are compared to each other to reveal the agreement between experts, and they are compared to the actual risk probabilities derived in an ex-post analysis from the released version. Results: We found moderate agreement between the ratings of the individual experts. We found significant accuracy when compared to the risk probabilities computed from the actual defects. We identified a number of lessons learned useful for improving the simple initial estimation approach applied in the studied project. Conclusions: Risk estimates have successfully been derived from subjective expert opinions. However, additional measures should be applied to triangulate and improve expert estimates.
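The abstract does not state how inter-expert agreement was measured; one common consistency check for such per-component ratings is pairwise rank correlation. A minimal sketch with made-up expert names and ratings (not the study's data):

```python
from itertools import combinations
from statistics import mean
from scipy.stats import spearmanr

# per-component risk-probability ratings (1 = low ... 5 = high) for six
# components of a system; experts and values are invented for illustration
ratings = {
    "expert_A": [5, 4, 3, 2, 2, 1],
    "expert_B": [4, 5, 3, 2, 1, 1],
    "expert_C": [5, 3, 4, 1, 2, 2],
}

# Spearman rank correlation for every pair of experts
pairwise = {
    (a, b): spearmanr(ratings[a], ratings[b])[0]
    for a, b in combinations(ratings, 2)
}
mean_agreement = mean(pairwise.values())
```

A mean correlation near 1 indicates the experts rank the components consistently; values near 0 suggest their opinions diverge and should be triangulated with other data, as the paper concludes.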
Article
Full-text available
Background: The accurate prediction of where faults are likely to occur in code can help direct test effort, reduce costs and improve the quality of software. Objective: We investigate how the context of models, the independent variables used and the modelling techniques applied influence the performance of fault prediction models. Method: We used a systematic literature review to identify 208 fault prediction studies published from January 2000 to December 2010. We synthesise the quantitative and qualitative results of 36 studies which report sufficient contextual and methodological information according to the criteria we develop and apply. Results: The models that perform well tend to be based on simple modelling techniques such as Naïve Bayes or Logistic Regression. Combinations of independent variables have been used by models that perform well. Feature selection has been applied to these combinations when models are performing particularly well. Conclusion: The methodology used to build models seems to be influential to predictive performance. Although there are a set of fault prediction studies in which confidence is possible, more studies are needed that use a reliable methodology and which report their context, methodology and performance comprehensively.
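A minimal sketch of the kind of model the review finds effective, Naïve Bayes combined with feature selection, using scikit-learn on synthetic stand-in data (all parameters are illustrative and not drawn from the reviewed studies):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# stand-in data: per-module metric vectors and a fault-prone label
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=0)

# feature selection in front of a simple Naïve Bayes classifier,
# mirroring the combination the review associates with good performance
model = make_pipeline(SelectKBest(f_classif, k=4), GaussianNB())
model.fit(X, y)
train_acc = model.score(X, y)
```

Keeping the classifier simple and pruning uninformative metrics first is exactly the pattern the review reports: the modelling methodology, not model complexity, tends to drive predictive performance.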
Conference Paper
Full-text available
In this paper we define a model-based risk assessment procedure that integrates automatic risk assessment by static analysis, semi-automatic risk assessment, and guided manual risk assessment. In this process, probability and impact criteria are determined by metrics which are combined to estimate the risk of specific system development artifacts. The risk values are propagated to the assigned test cases, providing a prioritization of test cases. This helps to optimize the allocation of limited testing time and budget in a risk-based testing methodology. Therefore, we embed our risk assessment process into a generic risk-based testing methodology. The calculation of probability and impact metrics is based on system and requirements artifacts which are formalized as model elements. Additional time metrics consider the temporal development of the system under test and take, for instance, the bug and version history of the system into account. The risk assessment procedure integrates several stakeholders and is explained by a running example.
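The probability-and-impact scheme with propagation to test cases can be sketched as follows; the artifact names, scores, and the max-propagation rule are invented for illustration and are not the paper's concrete metrics:

```python
# per-artifact probability and impact estimates (0..1), combined into risk
artifact_prob = {"login": 0.8, "report": 0.3, "export": 0.5}
artifact_impact = {"login": 0.9, "report": 0.6, "export": 0.4}
risk = {a: artifact_prob[a] * artifact_impact[a] for a in artifact_prob}

# which artifacts each test case exercises
coverage = {
    "TC1": ["login"],
    "TC2": ["report", "export"],
    "TC3": ["login", "export"],
}

# propagate: a test case inherits the highest risk among covered artifacts
test_risk = {tc: max(risk[a] for a in arts) for tc, arts in coverage.items()}
prioritized = sorted(test_risk, key=test_risk.get, reverse=True)
```

Taking the maximum is only one plausible propagation rule; a sum or weighted average would instead reward test cases that touch many moderately risky artifacts.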
Conference Paper
Quantitatively-based risk management can reduce the risks associated with field defects for both software producers and software consumers. In this paper, we report experiences and results from initiating risk-management activities at a large systems development organization. The initiated activities aim to improve product testing (system/integration testing), to improve maintenance resource allocation, and to plan for future process improvements. The experiences we report address practical issues not commonly addressed in research studies: how to select an appropriate modeling method for product testing prioritization and process improvement planning, how to evaluate accuracy of predictions across multiple releases in time, and how to conduct analysis with incomplete information. In addition, we report initial empirical results for two systems with 13 and 15 releases. We present prioritization of configurations to guide product testing, field defect predictions within the first year of deployment to aid maintenance resource allocation, and important predictors across both systems to guide process improvement planning. Our results and experiences are steps towards quantitatively-based risk management.