Article

Estimates, uncertainty, and risk

Authors: Barbara Kitchenham and Stephen Linkman

Abstract

The authors discuss the sources of uncertainty and risk, their implications for software organizations, and how risk and uncertainty can be managed. Specifically, they assert that uncertainty and risk cannot be managed effectively at the individual project level. These factors must be considered in an organizational context.


... In the proposed approach, the cost of risk exposure is calculated according to the procedure defined by Kitchenham and Linkman [9] in 1997. They suggested that the uncertainties in the software development process cause inaccuracies in the software effort estimate irrespective of the effort estimation technique being used. ...
... This assumption error is the risk which creeps into the project when project cost factor values do not meet the assumed level. Kitchenham and Linkman [9] have linked the risk exposure of the project to the error in assumption of the project cost factor values. They suggested collecting alternative cost values of the project cost factors along with the probabilities of not meeting the initial cost factor values. ...
... According to Kitchenham and Linkman [9], for a project with a set of cost factors, the initial effort estimate required to develop the project is calculated using the assumed project cost factor values at the beginning of the project. To calculate the risk exposure of the project, the risk exposure due to each project cost factor is added. ...
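To make the procedure concrete, here is a minimal sketch of the risk-exposure calculation as these excerpts describe it: for each cost factor, the difference between the effort at the alternative level and at the assumed level is weighted by the probability of not meeting the assumed level, and the weighted differences are summed. The effort model and all factor values below are hypothetical placeholders, not taken from the paper.

```python
def risk_exposure(estimate_effort, assumed, alternatives):
    """Sum, over all cost factors, of p * (alternative effort - assumed effort).

    estimate_effort: callable mapping a dict of cost-factor values to effort
    assumed:         dict of assumed cost-factor values
    alternatives:    dict factor -> (alternative value, probability that the
                     assumed value is not met)
    """
    base = estimate_effort(assumed)
    exposure = 0.0
    for factor, (alt_value, prob) in alternatives.items():
        scenario = dict(assumed, **{factor: alt_value})  # vary one factor
        exposure += prob * (estimate_effort(scenario) - base)
    return exposure

# Hypothetical multiplicative effort model and factor levels (illustrative only).
model = lambda f: 100.0 * f["complexity"] * f["team_inexperience"]
assumed = {"complexity": 1.0, "team_inexperience": 1.0}
alternatives = {"complexity": (1.3, 0.2), "team_inexperience": (1.15, 0.4)}

print(risk_exposure(model, assumed, alternatives))  # 0.2*30 + 0.4*15 = 12.0
```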
... A common characteristic of these data-driven approaches is the fact that they produce point estimates of the dependent variable without providing any further information about uncertainty, which arises in many forms [32]. Regarding this, many researchers point out potential sources of uncertainty and their associated error types (see indicatively [14-16,33,34]). Firstly, uncertainty may arise from erroneous model fit (model error), since a PS provides only an approximation of the true relationship between the response variable and a set of predictors [33,34]. ...
... Regarding this, many researchers point out potential sources of uncertainty and their associated error types (see indicatively [14-16,33,34]). Firstly, uncertainty may arise from erroneous model fit (model error), since a PS provides only an approximation of the true relationship between the response variable and a set of predictors [33,34]. In addition, the quality of the dataset used in the fitting phase plays a significant role, since measurement errors and noise may exist in the data, affecting, in turn, the quality of the derived estimates [15]. ...
... Based on the above considerations, the assessment of one of the most common sources of uncertainty, i.e., inaccurate cost estimates [3], is a key part of project management, since it is directly associated with the identification, quantification and prioritization of risks that can potentially threaten the success of a project [3,13,14,33]. As we have already mentioned, the evidence from historical past O&G projects reveals significant cost overruns [3,4,36] that may, among other reasons, result from misguided managerial decisions based solely on point estimates of cost [3,13]. ...
Article
Full-text available
Nowadays, the Oil and Gas (O&G) industry faces significant challenges due to the relentless pressure for rationalization of project expenditure and cost reduction, the demand for greener and renewable energy solutions and the recent outbreak of the pandemic and geopolitical crises. Despite these barriers, the O&G industry still remains a key sector in the growth of the world economy, requiring huge capital investments in critical megaprojects. On the other hand, O&G projects traditionally experience cost overruns and delays with damaging consequences to both industry stakeholders and policy-makers. Regarding this, there is an urgent necessity for the adoption of innovative project management methods and tools facilitating the timely delivery of projects with high quality standards complying with budgetary restrictions. Certainly, the success of a project is intrinsically associated with the ability of the decision-makers to estimate, in a compelling way, the monetary resources required throughout the project’s life cycle, an activity that involves various sources of uncertainty. In this study, we focus on the critical management task of evaluating project cost performance through the development of a framework aiming at handling the inherent uncertainty of the estimation process based on well-established data-driven concepts, tools and performance metrics. The proposed framework is demonstrated through a benchmark experiment on a publicly available dataset containing information related to the construction cost of natural gas pipeline projects. The findings derived from the benchmark study showed that the applied algorithm and the adoption of a different feature scaling mechanism presented an interaction effect on the distribution of loss functions, when used as point and interval estimators of the actual cost. Regarding the evaluation of point estimators, Support Vector Regression with different feature scaling mechanisms achieved superior performances in terms of both accuracy and bias, whereas both K-Nearest Neighbors and Classification and Regression Trees variants indicated noteworthy prediction capabilities for producing narrow interval estimates that contain the actual cost value. Finally, the evaluation of the agreement between the performance rankings for the set of candidate models, when used as point and interval estimators, revealed a moderate agreement.
... The proposed approach gives a method for determining this extra effort that goes into risk management based upon the project cost factor values. The risk exposure of the project is determined as the extra effort that will go into the risk mitigation and control when the assumed level of the project cost factor value is not met during the execution of the project (Kitchenham and Linkman 1997). This is essentially the error in assumption of the project cost factor values in the software effort estimation process which leads to inaccuracies in the software effort estimate. ...
... Thus, the effort which goes into controlling such risks can be calculated by taking the difference in effort at the assumed level and the alternative level for each project cost factor and then multiplying it by the corresponding probability of attaining that alternative level. The sum of the risk exposures for the individual project cost factors was termed the risk exposure of the project, which needs to be controlled and mitigated for successful project delivery (Kitchenham and Linkman 1997). ...
... Since the first two errors relate to the uncertainties at the organisational level, this research paper considers only assumption error at the project level. This research uses the formula for calculation of risk exposure due to uncertainties given in Kitchenham and Linkman (1997). The proposed approach calculates the integrated effort estimate (IE) of software projects by adding the risk exposure to the initial effort estimate of the projects. ...
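Continuing the hypothetical numbers from the sketch above, the integrated estimate described in these excerpts is simply the initial estimate plus the project risk exposure:

```python
initial_estimate = 100.0       # effort at the assumed cost-factor levels (hypothetical)
risk_exposure_total = 12.0     # sum of per-factor risk exposures, as computed above
integrated_estimate = initial_estimate + risk_exposure_total  # IE = initial + RE
print(integrated_estimate)     # 112.0
```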
Article
Context: Risks associated with software projects play a significant role in the delivery of software projects within a given budget. These risks are due to volatility in project requirements, availability of experienced personnel, ever-changing technology and many more project cost factors. Effort spent on managing the risks is termed the risk exposure of the project. In this research, this risk exposure has been added to the effort estimate of a software project. This total effort is termed the integrated effort estimate. Objective: To improve the accuracy of software effort estimates by integrating the risk exposure with the initial effort estimate of the project. Method: A formula to calculate the integrated effort estimate of a software project has been proposed in the paper. This proposed formula has been tested on two datasets collected from industry, one for waterfall projects and another for agile projects. Initial effort estimates for waterfall projects are calculated using COCOMO II and for agile projects using the story point approach by Ziauddin. Results: The integrated effort estimates were more accurate than their corresponding initial effort estimates on all four parameters: MMRE, SA, effect size and R2. Conclusion: Integrated effort estimates are more comprehensive, reliable, and accurate than the initial effort estimates for the project.
... There are several sources of uncertainty in the context of SEE. For instance, when using machine learning for creating SEE models, uncertainty may arise from the effort estimation model limitations and from the noise (e.g., data collection mistakes) in the dataset used for creating the SEE models [36,41,43,50]. Uncertainty leads to many difficulties in SEE, especially in the early project development phases [49]. ...
... For decades, software estimation experts have pointed out that besides point estimates, effective project management also requires information about effort-related project risk [49]. In particular, they have suggested that effort estimation for a predicted software project should be a range of values (e.g., prediction interval) with a specific probability (e.g., confidence level) within which the software development can be completed, rather than a single-point estimation [43]. ...
... The problem of providing only a point estimate is that there is no feasible way to manage risks and uncertainty based on the point effort estimate. If a point prediction had to ensure against all possible risks and uncertainties, the price of constructing such a model would be prohibitive [43]. Therefore, relying on point estimation may ignore uncertain factors and lead project managers to wrong decision making. ...
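As a minimal illustration of the range-plus-probability idea in these excerpts, the sketch below computes a prediction interval at a given confidence level, assuming (purely for illustration) a normal predictive distribution with a hypothetical point estimate and spread:

```python
from scipy import stats

# Hypothetical point estimate and predictive standard deviation for a project.
point_estimate, sigma = 1500.0, 400.0   # person-hours (illustrative)
confidence_level = 0.80

# Two-sided interval assuming a normal predictive distribution (an assumption).
lo, hi = stats.norm.interval(confidence_level, loc=point_estimate, scale=sigma)
print(f"{confidence_level:.0%} PI: [{lo:.0f}, {hi:.0f}] person-hours")
```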
Article
Software effort estimation (SEE) usually suffers from inherent uncertainty arising from predictive model limitations and data noise. Relying on point estimation only may ignore the uncertain factors and lead project managers (PMs) to wrong decision making. Prediction intervals (PIs) with confidence levels (CLs) present a more reasonable representation of reality, potentially helping PMs to make better-informed decisions and enable more flexibility in these decisions. However, existing methods for PIs either have strong limitations or are unable to provide informative PIs. To develop a “better” effort predictor, we propose a novel PI estimator called Synthetic Bootstrap ensemble of Relevance Vector Machines (SynB-RVM) that adopts Bootstrap resampling to produce multiple RVM models based on modified training bags whose replicated data projects are replaced by their synthetic counterparts. We then provide three ways to assemble those RVM models into a final probabilistic effort predictor, from which PIs with CLs can be generated. When used as a point estimator, SynB-RVM can either significantly outperform or have similar performance compared with other investigated methods. When used as an uncertain predictor, SynB-RVM can achieve significantly narrower PIs compared to its base learner RVM. Its hit rates and relative widths are no worse than the other compared methods that can provide uncertain estimation.
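SynB-RVM itself combines Bootstrap resampling, synthetic replacement of replicated projects, and RVM base learners; the sketch below illustrates only the generic bootstrap-ensemble idea behind such prediction intervals, using plain linear base models on synthetic data rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic project data: effort grows with size, plus noise (illustrative only).
size = rng.uniform(1, 100, 60)
effort = 12 * size + rng.normal(0, 80, 60)

# Bootstrap ensemble of simple linear models (a stand-in for the RVM base
# learners of SynB-RVM, which this sketch does not implement).
new_size, preds = 50.0, []
for _ in range(1000):
    idx = rng.integers(0, len(size), len(size))   # resample with replacement
    slope, intercept = np.polyfit(size[idx], effort[idx], 1)
    preds.append(slope * new_size + intercept)

# Point estimate and an 80% interval from the ensemble spread. Note this
# captures model uncertainty only; SynB-RVM also models data noise.
print(np.mean(preds), np.percentile(preds, [10, 90]))
```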
... It firstly presents the mathematical formulation used for both attribute measurement and model errors, since EEV addresses these two error sources [5]. Thereafter, it describes the steps of the process of EEV. EEV aims at providing an effort distribution of a new project whatever the effort estimation technique used. ...
... Attribute measurement error is caused by accuracy limitations of the input variables corresponding to attributes X_j [5]. In fact, attribute uncertainty is caused by attribute bias and therefore depends on attribute information rather than attribute value. ...
... It concerns the inherent limitation of a theoretical approach [5]. Since the model error is related to effort estimation, the absolute error is used to calculate the effort deviation of estimates from the actual effort values. Equation 1 defines the effort deviation Δeff_i for project P_i. ...
... As effort estimation techniques have not succeeded in giving reliable estimates in all situations (Kitchenham et al. 1997), estimates always go hand in hand with risks (Patil 2007). Thus, estimation error assessment is a challenging and complex task, as error sources are varied and inherent to the effort estimation process. ...
... Thus, estimation error assessment is a challenging and complex task, as error sources are varied and inherent to the effort estimation process. Kitchenham et al. (1997) have identified four different sources of error: (1) attribute measurement error, which concerns the input variables; (2) model error, which corresponds to the inherent limitation of a theoretical approach; ...
... Furthermore, Kitchenham et al. (1997) suggest that managing error in SDEE should be investigated at an organizational level and not for a single project. In fact, a portfolio offers more possibilities in terms of risks management in comparison with a single project. ...
Article
Full-text available
Over the last decades, software development effort estimation has integrated new approaches dealing with uncertainty. However, effort estimates are still plagued with errors limiting their reliability. Thus, estimate error management at the organization level provides a promising alternative to the classical approaches dealing with single projects, as a portfolio can afford more flexibility and opportunities in terms of risk management. The most widely used approaches in risk management were mainly based on the Gaussian approximation, which shows its limits when facing the “ruin” risk associated with unusual events. The aim of this paper is to propose a Multi-Projects Error Modeling framework to characterize error at a portfolio level using bootstrapping, a mixture of Gaussians and a power law to emphasize the tail behavior, respectively.
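As a rough illustration of characterizing estimation error at the portfolio level without a Gaussian assumption, the sketch below bootstraps the portfolio-level mean relative error from hypothetical per-project errors; the paper's mixture-of-Gaussians and power-law tail modelling are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical relative estimation errors for a 30-project portfolio
# (heavy right tail to mimic occasional severe overruns).
errors = rng.lognormal(mean=-0.2, sigma=0.6, size=30) - 1.0

# Bootstrap the portfolio-level mean error to characterize its distribution
# without assuming Gaussianity.
boot_means = [rng.choice(errors, size=len(errors), replace=True).mean()
              for _ in range(5000)]

print("mean portfolio error:", np.mean(boot_means))
print("95th percentile of portfolio mean error:", np.percentile(boot_means, 95))
```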
... The research expands to explore how different parameters of software projects are represented, and identified that single-value estimates of software project parameters are uncertain [KI97]. This research work showed how the single-value estimates could be mapped onto a probability distribution, which helps to model the estimated parameters using random variables with an underlying probability distribution. ...
... The research work presented that the estimated parameters should be sub-additive, and hence they are uncertain [KI97]. Further, the measured risk is not sub-additive when applied to a portfolio of software projects or to a project that consists of multiple development tasks. ...
... Hence, these models possess a static behavior, which is not suitable for dynamic projects [KI97]. The domain of software engineering has moved into the realm of probabilistic modeling, and researchers are now using these models to represent the parameters of software projects as random variables with underlying probability distributions [FA95][KI03]. ...
Article
This research presents and validates a simulation model for the strategic management process of software projects. The proposed simulation model maps strategic decisions to parameters of strategic importance and links them to project management plans. Hence, the proposed simulation model is a complete framework for the analysis and the selection of strategic decisions for the development of software projects. The proposed framework integrates critical elements of software development projects, i.e. risk assessment, cost estimation and project management planning, for the analysis of strategic decisions, which helps in choosing a strategic decision, among various strategic alternatives for the project, that best suits the requirements of an organization. The simulation model captures the effects of strategic decisions on parameters of software projects in dynamic settings during the simulation of different phases of the development. The dynamic variations in project parameters affect project management plans. Capturing these variations of strategic parameters in dynamic settings brings out critical information about strategic decisions for effective project management planning. This research work explains that the measure of risk and contingency estimates are fundamental, in addition to risk assessment and cost estimation, for the strategic management of software development projects. Therefore, risk measure and contingency estimation models are developed for software projects. The proposed simulation model is generic, i.e. having generic components with plug-and-play interfaces; hence, it is independent of any risk assessment, cost estimation, risk measurement and contingency estimation models and project management tools for software development projects. This research presents a successful case study which shows that different strategic management decisions produce different sets of risk and cost options, as well as different project management plans for the development of software projects.
... However, these existing SDEE techniques are not sufficient because there are various sources of uncertainty that make error inherent to the estimates. Four sources have been identified by Kitchenham and Linkman [3]: ...
... Then, it seems more appropriate and realistic to provide an interval estimate with a probability distribution over the interval. Moreover, the issue of error estimation becomes crucial for organizations [3]. In fact, error control can help improve project performance by capturing uncertainty and assessing it more efficiently. ...
... Other methods are equally used, at 8% each: risk analysis cost sensitivity and classifier ensemble methods, genetic algorithms, data partitioning, and augmented case point estimation. Furthermore, we notice that there is no technique that covers all error sources proposed by Kitchenham and Linkman (measurement, model, scope, assumptions) [3]. Further work should be performed in order to categorize the error approaches based on the four error sources. ...
... Kitchenham et al. [KITCHENHAM97] refer to several sources of error, i.e. errors found in the measurements (measurement error), produced by unsuitable mathematical models (model error), wrongly assumed input values (assumption error), or inadequacy of the projects chosen for building such estimates (scope error). Errors can be represented by (stochastic) variables that we can study and even try to predict, but we cannot avoid. ...
... By choosing a lower confidence level, the error prediction interval would get smaller (i.e., narrower) and more useful. The decision to fix this threshold should be made at the highest management level of the organization based on the organization's strategy [KITCHENHAM97]. The rate at which an organization earns contracts should be tracked for prospective evaluation [BASILI92B] and encapsulated into an experience package in the experience base of the organization [BASILI92B]. ...
... (9), (10), or (11). Those formulas can also be considered as a sort of correction used to make the estimates unbiased [KITCHENHAM97]. For instance, with respect to Exn. (9), if the error bias was zero (i.e., RE = 0), then ... To calculate a two-tailed (1-α)% confidence interval for the mean there is Exn. ...
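The two-tailed (1-α)% confidence interval for the mean error referred to in this excerpt can be computed with a standard t interval; a minimal sketch using hypothetical relative errors:

```python
import numpy as np
from scipy import stats

# Hypothetical relative errors (RE) observed on past projects.
re = np.array([0.10, -0.05, 0.20, 0.03, -0.12, 0.08, 0.15, -0.02])

alpha = 0.10  # a two-tailed (1 - alpha)% = 90% confidence interval
n, mean, sem = len(re), re.mean(), stats.sem(re)
lo, hi = stats.t.interval(1 - alpha, n - 1, loc=mean, scale=sem)
print(f"90% CI for mean RE: [{lo:.3f}, {hi:.3f}]")
```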
Book
Full-text available
This work shows the mathematical reasons why parametric estimation models fall short of providing correct estimates and define an approach that overcomes the causes of these shortfalls. The approach aims at improving parametric estimation models when any regression model assumption is violated for the data being analyzed. Violations can be that, the errors are x-correlated, the model is not linear, the sample is heteroscedastic, or the error probability distribution is not Gaussian. If data violates the regression assumptions and we do not deal with the consequences of these violations, we cannot improve the model and estimates will be incorrect forever. The novelty of this work is that we define and use a variety of feed-forward multi-layer neural networks to estimate prediction intervals (i.e. evaluate uncertainty), make estimates, and detect improvement needs. This approach has proved to be successful in many areas with a full validation in the field of software engineering and risk management. This book is suitable for Ph.D/PostDoc Students, Practitioners, and Scholars interested in the field of Bayesian Learning and non-linear Prediction Models.
... • Consideration of the impact of Category 2 risk, i.e. the amount of contingency required to allow for unplanned tasks and delays. The approach suggested by Kitchenham and Linkman (1997) and Kansala (1997), which is based on assessing the impact of invalid assumptions, is a suitable technique for Category 2 risks. ...
... We refer to this set of projects as a company's project portfolio. We take the view that failure to consider the portfolio viewpoint can damage the bidding process and result in unexpected knock-on effects on other projects (Kitchenham and Linkman, 1997). ...
... It would be based on an assessment of the implication of systematic bias (towards underestimation) in the original effort estimates. Kitchenham and Linkman (1997) suggest the extent of unplanned effort could be estimated by considering the impact of initial estimating assumptions being invalid. Thus, we have one estimate of staff effort based on the most likely assumptions, which we use for planning and pricing (the planning estimate), and a second estimate of staff effort based on an alternative set of assumptions (the alternative estimate), which we use for contingency planning. ...
... This work is part of a research project "Managing Software Risks across a Portfolio of Projects". The goal of this project is to improve the critical area of software bid management by extending the work of Kitchenham and Linkman [Kitchenham97] on portfolio-based software risk management, which suggests holding project contingency (arising from risk exposure) in a central fund and administering such a fund centrally. This requires an investigation into the extent to which risk management incorporated into portfolio management is relevant to software companies during the bidding process, and into approaches to modelling risk in other industries. ...
... We refer to this set of projects as a company's project portfolio. We take the view that failure to consider the portfolio viewpoint can damage the bidding process and result in unexpected knockon effects on other projects [Kitchenham97]. ...
... Research associated with assessing risks at the bidding stage has been limited. Kitchenham and Linkman's [Kitchenham97] ideas about uncertainty and risk are the basis of our approach to bidding risk. In addition, two earlier researchers briefly mentioned the issue [Carter94] and [McFarlan81]. ...
Article
Full-text available
We report the results of a literature survey that reviewed bidding processes and portfolio management processes in a variety of different industries including the make-to-order engineering industry, the pharmaceutical industry, the finance industry, insurance industry, the oil industry, shipbuilding, and construction. As a starting point we reviewed the bidding process and portfolio management procedures used in the software industry. The goal of the wider survey was to identify whether any of the models or methods used in other industries to improve the bidding process and manage portfolios could be used to assist bidding and portfolio management in the custom-built software industry. We found some industries in which bidding and pricing are the most significant issues and others where portfolio management is the most important issue. Only the oil industry seemed equally concerned about both issues. Each industry developed models and methods based on its own specific circumstances, so we found no methods or models that can be used “as-is ” for software projects. However, we found useful general approaches and strategies in all industries. From the viewpoint of bidding and pricing practices, software practices
... The Gamma distribution is used to model waiting time or service time in queuing theory. Kitchenham et al. pointed out that this characteristic is appropriate for modeling the time to complete a task, and they discussed how to apply the Gamma distribution to effort estimation [15]. The Gamma distribution always takes positive values, and its standard deviation is proportional to its mean. ...
... Because the Gamma distribution is skewed right, the mean response is greater than the median of the Gamma distribution. That is, Gamma regression models predict a planned effort [26,15]. Since MRE penalizes overestimates more severely, it favored lognormal regression over Gamma regression. ...
Conference Paper
Effort estimation models are widely investigated because they have an advantage over expert judgment in terms of objectivity and repeatability. Linear regression models are the most common methods used in past studies. While these studies carefully determined predictor variables and model formulation, error distributions have received less consideration. Furthermore, the characteristics of linear regression models using different error distributions have not been studied with actual datasets. This study compared lognormal and Gamma regressions for effort estimation in terms of their predictive performance. Both regressions were examined with multiple datasets and two formulation approaches. As a result, it was found that lognormal and Gamma regressions have contrasting characteristics, though the difference diminishes when the uncertainty of effort is well explained by predictor variables. Furthermore, it was found that which error distribution is favored depends on what one wants to estimate. These results offer some suggestions for effort estimation model construction.
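The comparison described above can be reproduced with standard GLM tooling: lognormal regression as OLS on the log of effort, and Gamma regression with a log link so both models share the same mean structure. The data below are synthetic and purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic data: effort as a noisy power function of size (illustrative only).
size = rng.uniform(10, 500, 100)
effort = 5 * size ** 0.9 * rng.lognormal(0, 0.3, 100)

X = sm.add_constant(np.log(size))

# Lognormal regression: ordinary least squares on log(effort).
lognormal_fit = sm.OLS(np.log(effort), X).fit()

# Gamma regression with a log link, matching the mean form of the model above.
gamma_fit = sm.GLM(effort, X, family=sm.families.Gamma(sm.families.links.Log())).fit()

print(lognormal_fit.params, gamma_fit.params)
```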
... They aim to produce responses that come as close as possible to a real behavior F, which may be complex. As a result, they constitute sources of uncertainty for the estimation [Kitchenham+1997]. ...
... These imperfections are associated with, among other things, the measurement method and tools, as well as errors made by the expert when applying the measurement procedure. An example illustrating measurement errors is given by [Kitchenham+1997] for the case of measuring size in Function Points (FP). In fact, FP measurement is assumed to carry at least 12% error. ...
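A quick way to see the consequence of the cited 12% Function Point measurement error is to propagate it through a parametric model by Monte Carlo; the model coefficients and the uniform error band below are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parametric model: effort = a * size^b (coefficients illustrative).
a, b = 3.0, 1.05
measured_fp = 450  # measured size in Function Points

# Assume the FP measurement carries roughly 12% error, as cited above,
# modelled here (as an assumption) as a +/-12% uniform band around the measurement.
true_fp = rng.uniform(measured_fp * 0.88, measured_fp * 1.12, 10_000)
effort = a * true_fp ** b

print(np.percentile(effort, [5, 50, 95]))  # spread induced by measurement error alone
```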
Thesis
Full-text available
Software development effort estimation is one of the most important tasks in software project management. It forms the basis for planning, control and decision-making. Producing reliable estimates in the early phases of a project is a complex and difficult activity owing, among other things, to a lack of information about the project and its future, rapid changes in software-related methods and technologies, and a lack of experience with similar projects. Many estimation models exist, but it is difficult to identify a model that performs well for all types of projects and is applicable to all companies (different levels of experience, mastered technologies and project management practices). Overall, these models make the strong assumptions that (1) the collected data are complete and sufficient, (2) the laws relating the parameters characterizing the projects are perfectly identifiable and (3) the information about the new project is certain and deterministic. In practice, this is difficult to ensure. Two research questions emerge from these observations: how to select an estimation model for a specific company? And how to carry out an estimation for a new project subject to uncertainty? This thesis addresses these questions by proposing a general estimation approach. The approach covers two phases: a phase of building the estimation system and a phase of using the system to estimate new projects. The construction phase comprises three processes: 1) reliable evaluation and comparison of different estimation models, and selection of the most suitable estimation model, 2) construction of a realistic estimation system from the selected model and 3) use of the estimation system to estimate the effort of new projects characterized by uncertainty. This approach serves as a decision-support tool for project managers, helping them produce realistic estimates of the effort, cost and schedule of their software projects. The implementation of the processes and practices developed in this work has resulted in an open-source software prototype. The results of this thesis were produced within the ProjEstimate FUI13 project.
... Probabilistic models represent random values where each random value has a probability; hence, the impact of risk events can be mapped to random cost impacts. Kitchenham et al. (1997, 2003) assume that the cost of software projects is a random variable with an underlying continuous probability distribution [Appendix A]. When the underlying distribution is gamma, the cost is represented with the gamma distribution, ...
... Uzzafer (2013a) explained that a single estimate of cost can be represented with a gamma distribution by mapping it to the expectation of the gamma distribution. Then, following (Kitchenham and Linkman, 1997; Guo, 2010), the parameters of the distribution can be estimated from this identity (Uzzafer, 2013a). ...
Article
Full-text available
The risk of software projects is measured in terms of the cost needed to abate the risk. Traditional practice measures the risk of software projects using risk exposure; however, this risk measure cannot quantify risk beyond the expected value of the cost. Software project managers are keen to quantify risk at a certain probability beyond the expected value of the cost. This research work presents a model to measure risk at a certain probability beyond the expectation. A case study validates that the proposed model improves the measurement of risk for real software projects compared with their actual risk.
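The idea of quantifying risk at a probability beyond the expected cost can be illustrated with a gamma-distributed cost, in line with the gamma representation discussed in the excerpts above; the mean and standard deviation below are hypothetical.

```python
from scipy import stats

# Hypothetical gamma-distributed project cost: mean 100, standard deviation 30.
mean, sd = 100.0, 30.0
shape = (mean / sd) ** 2   # gamma shape k from mean and sd (k = mean^2 / var)
scale = sd ** 2 / mean     # gamma scale theta (theta = var / mean)

cost = stats.gamma(a=shape, scale=scale)
print(cost.mean())           # 100.0, the expected cost (risk exposure stops here)
print(cost.ppf(0.95))        # cost not exceeded with 95% probability
print(1 - cost.cdf(150.0))   # probability that cost exceeds 150
```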
... Despite the evolutionary introduction of many prediction methodologies ranging from expert judgement techniques to algorithmic and machine learning models, the findings are associated with inconsistencies regarding the superiority of one technique over another (see for example [3]). Although researchers strive to identify the factors behind the lack of convergence in experimental studies, it seems that they do not take into account an inherent limitation of prediction systems that produce estimates expressed as single numbers (point estimates) without considering the uncertainty and risk when estimating a single value of cost [4]. ...
... Forecast practitioners in other applied areas often face a similar quandary, and so most managers do realize the importance of providing PIs instead of a single estimate value. Although there is an imperative need for the construction of reliable and accurate PIs in SCE ([4], [5] and [6]), the topic of comparing PIs has not yet attracted much interest from the research community. ...
Article
The task of accurately predicting the cost required for the completion of a new software project is a challenging issue in the Software Cost Estimation area, since it is closely related to the activities of project management and the wise decision-making of organizations in order to bid, plan and budget a forthcoming system. However, the prediction of the cost is often subject to great uncertainty, and for this reason a lack of convergence has been noted in experimental studies. The main reason for the discrepancy can be traced to an inherent characteristic of prediction methodologies, since they produce point estimates without taking into account the risk covering the whole process. In this study, we propose a statistical framework focused on the construction of Prediction Intervals, which provide an "optimistic" and a "pessimistic" guess for the true magnitude of the cost. The proposed framework, which incorporates different accuracy indicators, formal hypothesis testing and graphical inspection of the predictive performance, is applied on a dataset with real software projects.
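Two standard indicators for comparing such prediction intervals are the hit rate (how often the actual cost falls inside the interval) and the relative width; a minimal sketch with made-up bounds and actuals:

```python
import numpy as np

# Hypothetical "optimistic"/"pessimistic" bounds and actual costs for 5 projects.
lower  = np.array([ 80, 150, 300,  60, 200])
upper  = np.array([140, 260, 520, 110, 380])
actual = np.array([120, 280, 450,  90, 210])

hit_rate = np.mean((actual >= lower) & (actual <= upper))  # coverage of the PIs
rel_width = np.mean((upper - lower) / actual)              # narrowness vs. actual

print(f"hit rate: {hit_rate:.2f}, mean relative width: {rel_width:.2f}")
```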
... In order to satisfy the third criterion of Soft Computing, Fuzzy Analogy must be able to handle and incorporate the uncertainty factor when estimating the cost or development effort of the new project. The need for managing uncertainty in Fuzzy Analogy is justified by three reasons: (1) an estimate is a probabilistic assessment of a future condition [16]; (2) we cannot include all the factors that affect the cost in the identification step of Fuzzy Analogy. ...
... Thus, the software project managers in an organization may use this cost possibility distribution for risk assessment of the estimation results. Kitchenham and Linkman have examined four sources of estimate uncertainty and risk: measurement error, model error, assumption error, and scope error [16]. The first two sources relate respectively to the accuracy of the measurements of the model's inputs and the accuracy of the model itself. ...
Article
Software cost estimation has been the subject of intensive investigations in the field of software engineering. As a result, numerous software cost estimation techniques have been proposed and investigated. To our knowledge, currently there are no cost estimation techniques that can incorporate and/or tolerate the aspects of imprecision, vagueness, and uncertainty in their predictions. However, software projects are often described by vague information. Furthermore, an estimate is only a probabilistic assessment of a future condition. Consequently, cost estimation models must be able to deal with imprecision and uncertainty, the two principal components of soft computing. To estimate the cost of software projects when they are described by vague and imprecise attributes, in an earlier study we proposed an innovative approach referred to as Fuzzy Analogy. In this paper, we investigate the uncertainty of cost estimates generated by the Fuzzy Analogy approach. The primary aim is to generate a set of possible values for the actual software development cost. This set can then be used to deduce, for practical purposes, a point estimate for the cost, and to analyze the risks associated with all possible estimates.
... As an example, the model of [30] shows that the estimation problem is in fact "already solved", provided that it is restricted to estimating projects similar to others already completed. ...
Article
Full-text available
A critical synthesis of the most representative models for software development project effort estimation is provided. This work is a basis for a discussion about the methodological and practical challenges entailed by the effort estimation field, especially in its mathematical/statistical modelling fundamentals and its empirical verification in the software industry.
... However, focusing research efforts entirely on it adopts a myopic view. It assumes that software estimation is a rational prediction task only, a view we refer to as software estimation as a prediction task, inspired by previous definitions of software estimation (Kitchenham and Linkman 1997). The focus is entirely on the technicalities of estimating as the main tool to reach accuracy: the effort predictors to consider (such as size), adding technical steps to the estimation process (such as the use of historical data), or even accounting for expected events that can lead to a higher need for effort (such as changes to requirements or scope). ...
Article
Full-text available
Traditionally, Software Effort Estimation (SEE) has been portrayed as a technical prediction task, for which we seek accuracy through improved estimation methods and a thorough consideration of effort predictors. In this article, our objective is to make explicit the perspective of SEE as a behavioral act, bringing attention to the fact that human biases and noise are relevant components in estimation errors, acknowledging that SEE is more than a prediction task. We employed a thematic analysis of factors affecting expert judgment software estimates to satisfy this objective. We show that estimators do not necessarily behave entirely rationally given the information they have as input for estimation. The reception of estimation requests, the communication of software estimates, and their use also impact the estimation values, something unexpected if estimators were solely focused on SEE as a prediction task. Based on this, we also matched SEE interventions to behavioral ones from Behavioral Economics, showing that, although we are already adopting behavioral insights to improve our estimation practices, there are still gaps to build upon. Furthermore, we assessed the strength of evidence for each of our review findings to derive recommendations for practitioners on the SEE interventions they can confidently adopt to improve their estimation processes. Moreover, in assessing the strength of evidence, we adopted the GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) approach. It enabled us to point out concrete research paths to strengthen the existing evidence about SEE interventions based on the dimensions of the GRADE-CERQual evaluation scheme.
... For example, someone is going to build a bridge but does not know the cost. These uncertainties relate to quantities, rates, and unit prices, and they focus on components of base estimates [14,21]. Single-event uncertainty relates to the probability and consequences of a possible event [9]. ...
Article
Full-text available
To succeed with projects, we need to understand and manage uncertainty. Uncertainties impact a project’s cost, time, and quality performance. The project’s front end is challenging for decision makers due to the high level of uncertainty. This paper identifies the most common uncertainties and their origin in the pre-project phase of large road projects. It also analyses the changes in these factors over 20 years. Document studies collected information about uncertainty factors identified in the early phase of 90 large road projects. The research strategy was explanatory, and data were collected from quality assurance reports from a population of large Norwegian road projects. The project costs vary between USD 30 million and over USD 2 billion. Then, 15 factor groups were established for categorising uncertainties. This study shows a rise in uncertainty factors with operational origins and a decrease in uncertainty factors with strategic and contextual origins over the last 20 years. Identifying and understanding common uncertainties and their origins provides policymakers, practitioners, and researchers with useful insights for policy revision and investment decision making and facilitates a proper focus regarding uncertainty analyses in the front end of road projects.
... Next, the patterns of how the causes and effects of community smells occur and the mapping between the community smells and the critical factors for effective teamwork can motivate further research. For example, researchers can apply software risk assessment [65,66] to extend our work. In this case, quantitative risk assessment could examine the impact of such events by estimating probabilities. ...
Preprint
Full-text available
Context: Social debt describes the accumulation of unforeseen project costs (or potential costs) from sub-optimal software development processes. Community smells are sociotechnical anti-patterns and one source of social debt that impact software teams, development processes, outcomes, and organizations. Objective: To provide an overview of community smells based on published literature, and describe future research. Method: We conducted a systematic literature review (SLR) to identify properties, understand origins and evolution, and describe the emergence of community smells. This SLR explains the impact of community smells on teamwork and team performance. Results: We include 25 studies. Social debt describes the impacts of poor socio-technical decisions on work environments, people, software products, and society. For each of the 30 identified community smells, we provide a description, management approaches, organizational strategies, and mitigation effectiveness. We identify five groups of management approaches: organizational strategies, frameworks, models, tools, and guidelines. We describe 11 properties of community smells. We develop the Community Smell Stages Framework to concisely describe the origin and evolution of community smells. We describe the causes and effects for each community smell. We identify and describe 8 types of causes and 11 types of effects for community smells. Finally, we provide 8 Sankey diagrams that offer insights into threats the community smells pose to teamwork factors and team performance. Conclusion: Community smells explain the influence work conditions have on software developers. The literature is scarce and focuses on a small number of community smells. Thus, community smells still need more research. This review organizes the state of the art about community smells and provides motivation for future research along with educational material.
... Furthermore, project interdependence makes risk identification and assessment processes more complicated. Insufficient resources and project complexity are other challenges that influence the analysis of potential problems and their consequences (Kitchenham and Linkman 1997). ...
Article
Full-text available
Project risks are widespread in the construction industry because of the uncertainty and complexity involved in construction activities. Many factors directly or indirectly affect a project’s successful completion. Therefore, a project’s success largely depends on how it manages risks. This study examines the effect of risk management practices on project success. In addition, it explores the mechanism that leads to project success. Understanding risk management practices and processes that influence project performance is vital for success. This study proposes that three risk management practices (e.g. risk identification, monitoring, and prevention) influence project success. Furthermore, this study maintains that the risk coping capacity mediates, and risk transparency moderates the relationship between risk management practices and project success. We tested the hypotheses using survey questionnaire data provided by 320 project managers. It reveals that risk management practices significantly influence project success. It also finds that risk coping capacity mediates the relationship between risk management practices and project success. Furthermore, risk transparency moderates the relationship between risk coping capacity and project success. This study contributes to the risk management literature by explaining the mechanism that leads to project success. It also discussed theoretical contributions and practical implementations.
... It was highlighted that estimation models must be created by taking such factors into account, and it was stated that assumptions must be made for unidentified risk factors. It was mentioned that assessment models can be derived by taking into account the impact values of erroneous assumptions [14]. Kwan and Leung addressed the risk management methodology and project risk dependencies in their study. ...
Article
Full-text available
Many projects that end in failure, erroneously managed processes, failure to deliver products and projects on time, excessive cost increases, and an inability to analyze customer requests correctly pave the way for the use of agile processes in software development methods and cause the importance of test processes to increase day by day. In particular, the inability to properly handle testing processes and risks under time and cost pressures, the differentiation of software development methods between projects, and the failure to integrate risk management and risk analysis studies, conducted within a company/institution, with software development methods further complicate this situation. It is recommended to use agile process methods and test maturity model integration (TMMI), with risk-based testing techniques and user scenario testing techniques, to eliminate such problems. In this study, the agile process transformation of a company operating in factory automation systems in the field of industry was followed for two and a half years. This study has been prepared to close the gap in the literature on the integration of TMMI Level 2, TMMI Level 3, and TMMI Level 4 with the SAFe methodology and agile processes. Our research covers the use of all TMMI Level sub-steps with both agile process practices and some test practices (risk-based testing techniques, user scenario testing techniques). TMMI coverage percentages were determined as 92.85% for TMMI Level 2, 92.9% for TMMI Level 3, and 100% for TMMI Level 4. In addition, agile process adaptation metrics and their measurements between project versions are shown, and their contribution to quality is discussed.
... In the typical confidence scenario, the potential efforts are characterized by right-skewed triangular distributions, in which the team's estimates correspond to the most likely value of the distribution, meaning the realization of many features will take about what was estimated, some will take more and a few could take less. The right skewness of the typical estimate distributions is predicated on our tendency to estimate based on imagining success [9], behaviors like Parkinson's Law and the Student Syndrome, which limit the potential for completing development with less effort usage than estimated, and the fact that the number of things that can go wrong is practically unlimited [10,11]. Although many distributions fit this pattern, e.g. ...
Chapter
Full-text available
This article analyzes the performance of the MoSCoW method to deliver all features in each of its categories: Must Have, Should Have and Could Have using Monte Carlo simulation. The analysis shows that under MoSCoW rules, a team ought to be able to deliver all Must Have features for underestimations of up to 100% with very high probability. The conclusions reached are important for developers as well as for project sponsors to know how much faith to put on any commitments made.
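The simulation idea is straightforward to sketch: sample each feature's effort from a right-skewed triangular distribution with the estimate as the mode, and count how often the Must Haves fit within the available capacity. The estimates, the skew limits, and the assumption that Must Haves are sized at roughly 60% of capacity are illustrative stand-ins, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical Must Have estimates (days). Capacity is sized so Must Haves
# take ~60% of it, with Should/Could Haves as droppable buffer (an assumption).
estimates = np.array([5, 8, 3, 13, 8, 5, 2, 8])
capacity = estimates.sum() / 0.60

# Right-skewed triangular effort per feature: mode at the estimate, little
# room below it, and up to 100% underestimation above it.
n_runs = 100_000
samples = rng.triangular(left=estimates * 0.9,
                         mode=estimates,
                         right=estimates * 2.0,
                         size=(n_runs, len(estimates)))

p_deliver = np.mean(samples.sum(axis=1) <= capacity)
print(f"P(all Must Haves fit within capacity): {p_deliver:.3f}")
```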
... In the typical confidence scenario, the potential efforts are characterized by right-skewed triangular distributions, in which the team's estimates correspond to the most likely value of the distribution, meaning the realization of many features will take about what was estimated, some will take more and a few could take less. The right skewness of the typical estimate distributions is predicated on our tendency to estimate based on imagining success [9], behaviors like Parkinson's Law and the Student Syndrome, which limit the potential for completing development with less effort usage than estimated, and the fact that the number of things that can go wrong is practically unlimited [10,11]. Although many distributions fit this pattern, e.g. ...
Preprint
Full-text available
This article analyzes the performance of the MoSCoW method to deliver all features in each of its categories: Must Have, Should Have and Could Have using Monte Carlo simulation. The analysis shows that under MoSCoW rules, a team ought to be able to deliver all Must Have features for underestimations of up to 100% with very high probability. The conclusions reached are important for developers as well as for project sponsors to know how much faith to put on any commitments made. TO CITE THIS PAPER: Miranda, E. (2022). Moscow Rules: A Quantitative Exposé. In: Stray, V., Stol, KJ., Paasivaara, M., Kruchten, P. (eds) Agile Processes in Software Engineering and Extreme Programming. XP 2022. Lecture Notes in Business Information Processing, vol 445. Springer, Cham. https://doi.org/10.1007/978-3-031-08169-9_2
... A third observation is that most software initiatives are characterised by uncertainty [1,48] and this means that values cannot be represented in a deterministic way. A probabilistic distribution is a more suitable measure for some kinds of objective [11,39,50]. Our proposal is that a set of objectives may be modelled by a set of values, the type of value for each depending upon the nature of the objective. ...
Preprint
There is growing acknowledgement within the software engineering community that a theory of software development is needed to integrate the myriad methodologies that are currently popular, some of which are based on opposing perspectives. We have been developing such a theory for a number of years. In this paper, we overview our theory and report on a recent ontological analysis of the theory constructs. We suggest that, once fully developed, this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood. Such understanding would inevitably lead to greater predictability with respect to outcomes.
... A third observation is that most software initiatives are characterised by uncertainty (Atkinson et al., 2006;Perminova et al., 2007) and this means that values cannot be represented in a deterministic way. A probabilistic distribution is a more suitable measure for some kinds of objective (Connor, 2007;Kitchenham and Linkman, 1997;Rao et al., 2008). Our proposal is that a set of objectives may be modelled by a set of values, the type of value for each depending upon the nature of the objective. ...
Preprint
There is growing acknowledgement within the software engineering community that a theory of software development is needed to integrate the myriad methodologies that are currently popular, some of which are based on opposing perspectives. We have been developing such a theory for a number of years. In this position paper, we overview our theory along with progress made thus far. We suggest that, once fully developed, this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood. Such understanding would inevitably lead to greater predictability with respect to outcomes.
... Yet, expert effort estimation remains the most widely used technique [48]. Another area of the research is focused on estimation error measurement [49] and the search for reliable metrics, as well as on the comparison of different models [50], [51]. Besides simply relating the absolute values of estimated and actual effort, as standardly used by the industry [52], researchers have developed a number of more or less reliable measures of estimation error that indicate the accuracy of implemented models [53], [54]. ...
Article
Full-text available
Turnover of personnel represents a serious issue for the management of software projects. The buildup of competences and the phasing in of people into the project require both time and effort. This paper presents a case study of a large in-house agile software development project. The research goal was to determine the effects that turnover has on expert effort estimation. To do this, the paper examines relations across empirical data on the studied project. The study findings are the following: a) it is necessary to distinguish types of turnover, b) general and planned turnover do not necessarily have a negative effect on estimation accuracy, and c) unplanned turnover can have a significant negative impact on the reliability of the estimates and therefore should be treated with special attention. The results suggest that these facts should be taken into account both by management and human resources.
... Previous research does not clearly estimate reserves for unidentified risks, because unknown-unknown risks were either excluded by the author's assumption (Baccarini, 2006) or treated as unmanageable (Chapman, 2000). Furthermore, contingency resources estimated to handle unknown risk events cannot be justified because these events could not be identified and estimated (Kitchenham & Linkman, 1997). In addition, these are events not known to the project team before they occur, or viewed as impossible in the specific project situation. ...
Article
Full-text available
Project risk is a critical factor in estimating project budget. Previous studies on this topic have only addressed estimation methods that consider project budget reserves against identified risks. As a result, project managers still face the challenge of completing projects within given budgets but without the relevant tools to deal with unidentified risks. This study proposes an approach for estimating reserves for both identified and unidentified risks separately. The study also suggests using the three-point estimation technique and R-value determination for estimating risk costs, which can improve budget accuracy and precision. The construction of residential building projects in South Korea demonstrates the advantages of the proposed approach compared with previous methods.
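The three-point estimation technique the study suggests is commonly realized as a PERT estimate; a minimal sketch (the R-value determination step is not reproduced here, and the figures are illustrative):

```python
# Three-point (PERT) estimate for a single risk cost. All figures are
# hypothetical; the study's R-value step is omitted.
optimistic, most_likely, pessimistic = 20.0, 35.0, 80.0  # cost units

expected = (optimistic + 4 * most_likely + pessimistic) / 6  # PERT mean
std_dev = (pessimistic - optimistic) / 6                     # PERT spread

print(expected, std_dev)  # 40.0 10.0
```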
... Probability models are used for activity costs and durations in a single project, which is modelled as a stochastic network, to evaluate the overall uncertainty about the cost and the duration of a project [2]. A Gamma distribution is used in [3] to model the overall completion time of a project and evaluate the risk of late completions in a portfolio of projects. Similarly, a probability distribution of the project Net Present Value over the project lifecycle is obtained in [4] to obtain a better balancing of the overall portfolio of projects for a company operating in the Engineering and Contracting industry. ...
Conference Paper
Full-text available
Project size, as measured by the amount of investment required, is a relevant parameter to be used in project selection. The evaluation of a project portfolio must consider the variety of project sizes that may be met, so that a proper model should be adopted to describe that variety, especially for its use in simulation. In this paper, a log-normal probability model is suggested to describe the dispersion of project sizes within a project portfolio. The model is obtained on the basis of two real datasets spanning over ten years of observations, and after comparison with competing Gamma and Pareto models. The parameters of the log-normal model are provided as resulting from the best-fit procedure, and indications are also given for the values to use in a simulation study.
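A minimal sketch of the suggested workflow, fitting a log-normal model to a portfolio of project sizes and then sampling from it for simulation; the data here are synthetic stand-ins rather than the paper's two real datasets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical portfolio of project sizes (investment, in millions; illustrative).
sizes = rng.lognormal(mean=2.0, sigma=0.8, size=120)

# Fit the log-normal model with location fixed at zero, as is usual for sizes.
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
print(f"sigma={shape:.2f}, median size={scale:.1f}M")

# Draw simulated project sizes from the fitted model for a portfolio simulation.
simulated = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=1000, random_state=6)
```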
... But it is inherently more difficult to understand and estimate a product or process that cannot be seen and touched [4], [5]. Estimation is a prediction of a metric for software yet to be developed, and thus results in uncertainty [6]. Every software product is unique; its development process, requirements and customer demands all vary from one product to another. ...
... As discussed by Kitchenham and Linkman (1997) and Stamelos and Angelis (2001), any estimation method may lead to some uncertainty in establishing project return distributions, so the risk analysis may be mistaken. In fact, Atkinson et al. (2006) mention that one of the causes of uncertainty about estimates is bias that may be exhibited by estimators. ...
Article
Although a variety of models have been studied for project portfolio selection, many organizations still struggle to choose a potentially diverse range of projects while ensuring the most beneficial results. The use of the Mean-Gini framework and stochastic dominance to select portfolios of research and development (R&D) projects has been gaining attention in the literature despite the fact that such approaches do not consider uncertainty regarding the projects' parameters. This paper discusses, with relation to project portfolio selection through a Mean-Gini approach and stochastic dominance, the impact of uncertainty on project parameters. In the process, Monte Carlo simulation is considered in evaluating the impact of parametric uncertainty on project selection. The results show that the influence of uncertainty is significant enough to mislead managers. A more robust selection policy using the Mean-Gini approach and Monte Carlo simulation is proposed.
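A rough sketch of combining Monte Carlo simulation with a Mean-Gini score: simulate each project's return distribution and penalize the expected return by Gini's mean difference. The return distributions are hypothetical, and this simplified score stands in for the paper's full Mean-Gini and stochastic-dominance machinery.

```python
import numpy as np

rng = np.random.default_rng(7)

def gini_mean_difference(x):
    """Gini's dispersion measure: E|X1 - X2| / 2 over all sample pairs."""
    diffs = np.abs(x[:, None] - x[None, :])
    return diffs.sum() / (2 * len(x) * (len(x) - 1))

# Monte Carlo samples of two hypothetical projects' returns (illustrative).
project_a = rng.normal(100, 15, 2000)   # modest return, low dispersion
project_b = rng.normal(105, 40, 2000)   # higher return, high dispersion

# Mean-Gini score: expected return penalized by the Gini dispersion measure.
for name, returns in [("A", project_a), ("B", project_b)]:
    print(name, returns.mean() - gini_mean_difference(returns))
```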
... The first thing to consider is what an estimate is. Although it can easily be forgotten, it must be stressed that an estimate is a probabilistic statement (DeMarco 1982), (Kitchenham and Linkman 1997); consequently, reporting an estimate as a point value masks important information. As an example, if a project manager predicts that Integration Testing will take 150 person-hours, we do not know with what confidence he or she makes this statement; it could be made with near certainty, or it could be a wild guess with almost no certainty. ...
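The 150 person-hour example can be made concrete: instead of the bare point value, the estimator states a distribution and reports an interval. The sketch below uses a triangular distribution with invented bounds.

```python
# Sketch: the same 150 person-hour prediction stated as a probabilistic
# estimate. The triangular bounds are invented for illustration.
from scipy import stats

low_b, mode, high_b = 120.0, 150.0, 260.0   # hypothetical spread (hours)
effort = stats.triang(c=(mode - low_b) / (high_b - low_b),
                      loc=low_b, scale=high_b - low_b)

low, high = effort.ppf([0.05, 0.95])        # 90% prediction interval
print(f"point estimate 150 h -> 90% interval: [{low:.0f}, {high:.0f}] h")
```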
Article
Full-text available
This chapter reviews the background and extent of the software project cost prediction problem. Given the importance of the topic, there has been a great deal of research activity over the past 40 years, most of which has focused on developing formal cost prediction systems. The problem is that presently there is limited evidence to suggest formal methods outperform experts, therefore detailed consideration is given to the available empirical evidence concerning expert performance. This shows that software professionals tend to be biased (optimistic) and over-confident, and there are a number of deep cognitive biases which help us understand why this is so. Finally, the chapter describes how this might best be tackled through a range of simple, practical and evidence-based methods.
... A third observation is that most software initiatives are characterised by uncertainty (Atkinson et al., 2006;Perminova et al., 2007) and this means that values cannot be represented in a deterministic way. A probabilistic distribution is a more suitable measure for some kinds of objective (Connor, 2007;Kitchenham and Linkman, 1997;Rao et al., 2008). Our proposal is that a set of objectives may be modelled by a set of values, the type of value for each depending upon the nature of the objective. ...
Conference Paper
Full-text available
There is growing acknowledgement within the software engineering community that a theory of software development is needed to integrate the myriad methodologies that are currently popular, some of which are based on opposing perspectives. We have been developing such a theory for a number of years. In this position paper, we overview our theory along with progress made thus far. We suggest that, once fully developed, this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood. Such understanding would inevitably lead to greater predictability with respect to outcomes.
... The majority of existing methods are elaborated from old historical data and for specific organizations, thus it is difficult to adapt them to new project contexts and environments. Therefore, in order to accurately estimate the effort, an organization requires estimation methods that are based on its own performance, working practices and software experience (Kitchenham and Linkman, 1997). ...
Conference Paper
Software effort estimation is a crucial task in software project management. It is the basis for subsequent planning, control, and decision-making. Reliable effort estimation is difficult to achieve, especially because of the inherent uncertainty arising from noise in the dataset used for model construction and from the model's limitations. This paper proposes a software effort estimation method that provides realistic effort estimates by taking uncertainty in the estimation process into account. To this end, an approach to introducing uncertainty into a Neural Network based effort estimation model is presented, using the bootstrap resampling technique. The proposed method generates a probability distribution of effort estimates from which the Prediction Interval associated with a confidence level can be computed. This is a more faithful representation of reality, helping project managers to make well-founded decisions. The proposed method has been applied to a dataset from the International Software Benchmarking Standards Group and has shown better results than traditional effort estimation based on linear regression.
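A minimal sketch of the bootstrap idea follows, with synthetic data and a small scikit-learn MLP standing in for the ISBSG dataset and the paper's network: refitting on resampled data yields a distribution of estimates from which an interval can be read off.

```python
# Sketch of the bootstrap idea: refit the model on resampled data to obtain
# a distribution of estimates for a new project. Synthetic data and a small
# scikit-learn MLP stand in for the ISBSG dataset and the paper's network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
size = rng.uniform(50, 500, size=120)               # e.g. functional size
effort = 0.5 * size + rng.normal(0, 15, size=120)   # synthetic person-days
X, y = size.reshape(-1, 1), effort

new_project = np.array([[300.0]])
estimates = []
for _ in range(100):                                # bootstrap resamples
    idx = rng.integers(0, len(y), size=len(y))
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0),
    ).fit(X[idx], y[idx])
    estimates.append(model.predict(new_project)[0])

low, high = np.percentile(estimates, [5, 95])
print(f"90% interval for the new project: [{low:.0f}, {high:.0f}] person-days")
```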
... Effective software project estimation is one of the most challenging activities in software development. Almost all participants involved in software estimation, whether they are software managers, researchers, consultants, or tool vendors, would agree that in order to obtain good estimates one needs estimation methods and models based on the organization's performance, working practices, and software experience (Kitchenham & Linkman, 1997). The importance of accurate estimation has led to the investment of extensive research efforts in the development of software estimation methods. ...
Article
The complexity of software development derives from the probabilistic assessment of future conditions and from unclear requirements and project implications. Information technology project portfolio management is more complex still and presents program managers with the challenge of dynamic decision-making in the context of target revisions and project selection (continuing active projects at various budgeting levels, terminating other active projects, and launching new projects). The authors propose a simulation-based decision support system for managing information technology project portfolios. At each control (decision) point, this decision support system enables 'optimizing' each project plan subject to alternative time and cost chance constraints, and evaluating the up-to-date chance of alternative project selections staying within the project portfolio budget. These 'optimizations' and evaluations are provided by a stochastic procedure based on Monte Carlo simulations. In contrast to deterministic procedures, which are highly inadequate because they mostly produce erroneous results, the stochastic procedure analyses a higher level of information and provides more accurate results. The proposed system enables the program manager to decipher highly complex simulation results and make decisions based on up-to-date data. It can be easily implemented even by non-experts.
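A sketch of the chance-constraint evaluation at the heart of such a system: simulate the total cost of a candidate project selection and report the probability of staying within budget. The cost distributions and budget figure below are hypothetical.

```python
# Sketch: Monte Carlo evaluation of the chance that a candidate project
# selection stays within the portfolio budget. Cost distributions and the
# budget figure are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_runs = 50_000

# Three selected projects with uncertain costs (log-normal, in $M).
total_cost = (rng.lognormal(np.log(4.0), 0.25, n_runs)
              + rng.lognormal(np.log(2.5), 0.40, n_runs)
              + rng.lognormal(np.log(6.0), 0.15, n_runs))

budget = 14.0
print(f"P(total cost <= budget) = {(total_cost <= budget).mean():.3f}")
```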
... Various studies have been conducted so far concerning the comparison and evaluation of different cost estimation techniques [3], [9], [17], [1]. In particular, some of them suggest the estimation of intervals [10], [12], [22] and one the estimation of predefined intervals [21]. Machine learning techniques such as Neural Networks [8], CART [20] have also been implemented but all producing point estimates. ...
Article
Full-text available
Defining the productivity required to complete a software development project successfully, within time and budget constraints, is a reasoning problem that should be modelled under uncertainty. One way of achieving this is to estimate an interval accompanied by a probability instead of a single value. In this paper we compare traditional methods that produce point estimates, methods that produce both point and interval estimates, and methods that produce only predefined interval estimates. In the case of predefined intervals, software cost estimation becomes a classification problem. All of the above methods are applied to two different datasets, namely the COCOMO81 dataset and the Maxwell dataset. The ability of classification techniques to resolve one particular classification problem in cost estimation, namely determining the software development mode from project attributes, is also assessed and compared to reported results.
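The predefined-interval formulation can be sketched as follows: effort values are binned into classes and a standard classifier is trained. The data, bin boundaries, and classifier below are illustrative assumptions, not the COCOMO81 or Maxwell setups.

```python
# Sketch: cost estimation recast as classification over predefined effort
# intervals. Data, bin boundaries, and classifier are illustrative, not the
# COCOMO81 or Maxwell setups.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
size = rng.uniform(10, 400, size=300)
effort = 12 * size * rng.lognormal(0, 0.3, size=300)  # synthetic person-hours

boundaries = [1000, 3000]                  # predefined interval edges
labels = np.digitize(effort, boundaries)   # 0: low, 1: medium, 2: high

X_train, X_test, y_train, y_test = train_test_split(
    size.reshape(-1, 1), labels, test_size=0.3, random_state=3)
clf = DecisionTreeClassifier(max_depth=3, random_state=3).fit(X_train, y_train)
print(f"interval classification accuracy: {clf.score(X_test, y_test):.2f}")
```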
Article
Context Social debt describes the accumulation of unforeseen project costs (or potential costs) from sub-optimal software development processes. Community smells are socio-technical anti-patterns and one source of social debt. Because community smells affect software teams, development processes, outcomes, and organizations, we seek to understand their impact on software engineering. Objective To provide an overview of community smells in social debt, based on published literature, and to describe future research. Method We conducted a systematic literature review (SLR) to identify properties, understand origins and evolution, and describe the emergence of community smells. This SLR explains the impact of community smells on teamwork and team performance. Results We include 25 studies. Social debt describes the impacts of poor socio-technical decisions on work environments, people, software products, and society. For each of the 30 community smells identified as sources of social debt, we provide a detailed description, management approaches, organizational strategies, and mitigation effectiveness. We identify five groups of management approaches: organizational strategies, frameworks, models, tools, and guidelines. We describe 11 common properties of community smells. We develop the Community Smell Stages Framework to concisely describe the origin and evolution of community smells. We then describe the causes and effects of each community smell, identifying 8 types of causes and 11 types of effects. Finally, we provide 8 comprehensive Sankey diagrams that offer insights into the threats community smells pose to teamwork factors and team performance. Conclusion Community smells explain the influence that work conditions have on software developers. The literature is scarce and focuses on a small number of community smells, so community smells still need more research. This review helps by organizing the state of the art about community smells. Our contributions motivate future research and provide educational material for software engineering professionals.
Chapter
The primary objective of software development is to deliver a high quality product at low cost. Testing is inherent in each phase of development, as the deliverables of each phase must be tested to produce a better quality artifact before proceeding to the next phase. Software testing exposes the discrepancies between the software deliverables and the customer's expectations. The software testing life cycle covers test selection, test classification, test execution, and quality estimation. The quality of the deliverable produced may not always match the expected outcome or fall within a probabilistic range. The outcome of testing may be error prone and uncertain because of inadequate techniques for the estimation, selection, classification, and execution of test cases. Hence, there is a need to model uncertainties after the completion of each phase of development. Mechanisms are needed to address uncertainty in each of the deliverables produced during the software development process. Uncertainty metrics can help in assessing the degree of uncertainty, and effective techniques for modelling uncertainty are needed at each phase of development.
Chapter
Uncertainty and inaccuracy are inherent properties of estimation, in particular predictive estimation. Measuring and managing uncertainty and inaccuracy lies at the heart of good estimation. Flyvbjerg (2006) distinguished three categories of reasons for inaccuracies in project forecasts:
Conference Paper
The precision of estimates may be applied to communicate the uncertainty of required software development effort. The effort estimates 1000 and 975 work-hours, for example, communicate different levels of expected estimation accuracy. Through observational and experimental studies we found that software professionals (i) sometimes, but not in the majority of the examined projects, used estimate precision to convey effort uncertainty, (ii) tended to interpret overly precise, inaccurate effort estimates as indicating low developer competence and low trustworthiness of the estimates, while too narrow effort prediction intervals had the opposite effect. This difference remained even when the actual effort was known to be outside the narrow effort prediction interval. We identified several challenges related to the use of the precision of single value estimates to communicate effort uncertainty and recommend that software professionals use effort prediction intervals, and not the preciseness of single value estimates, to communicate effort uncertainty.
Conference Paper
There is growing acknowledgement within the software engineering community that a theory of software development is needed to integrate the myriad methodologies that are currently popular, some of which are based on opposing perspectives. We have been developing such a theory for a number of years. In this paper, we overview our theory and report on a recent ontological analysis of the theory constructs. We suggest that, once fully developed, this theory, or one similar to it, may be applied to support situated software development, by providing an overarching model within which software initiatives might be categorised and understood. Such understanding would inevitably lead to greater predictability with respect to outcomes.
Article
State of the practice in software engineering economics often focuses exclusively on cost issues and technical considerations for decision making. Value-based software engineering (VBSE) expands the cost focus by also considering benefits, opportunities, and risks. Of central importance in this context is valuation, the process for determining the economic value of a product, service, or a process. Uncertainty is a major challenge in the valuation of software assets and projects. This chapter first introduces uncertainty along with other significant issues and concepts in valuation, and surveys the relevant literature. Then it discusses decision tree and options-based techniques to demonstrate how valuation can help with dynamic decision making under uncertainty in software development projects.
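A minimal decision-tree valuation sketch follows, rolling expected values back from chance nodes to compare "build now" against "prototype first, keep the option to abandon". All probabilities and payoffs are hypothetical.

```python
# Sketch: rolling back a small decision tree to value a software investment
# under uncertainty. All probabilities and payoffs are hypothetical.

def expected_value(outcomes):
    """Expected value of a chance node given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Option 1: build now (cost 100); the market may be good or bad.
build_now = expected_value([(0.5, 300), (0.5, -80)]) - 100

# Option 2: spend 20 on a prototype that reveals the market first; build
# only on a good signal, abandon (value 0) on a bad one.
build_after_good = expected_value([(0.9, 300), (0.1, -80)]) - 100
prototype_first = expected_value([(0.5, build_after_good), (0.5, 0)]) - 20

print(f"build now: {build_now:.0f}, prototype first: {prototype_first:.0f}")
```

With these invented numbers the prototype route is worth 61 against 10 for committing immediately; that gap is the value of flexibility that options-based valuation tries to capture.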
Article
Effort estimation models are widely investigated because they have an advantage over expert judgment in terms of objectivity and repeatability. Linear regression models are the most common methods used in past studies. While those studies carefully determined predictor variables and model formulation, error distributions have received less consideration. Furthermore, the characteristics of linear regression models using different error distributions have not been studied with actual datasets. This study compared log-normal and Gamma regressions for effort estimation in terms of their predictive performance. Both regressions were examined with multiple datasets and two formulation approaches. As a result, it was found that log-normal and Gamma regressions have contrasting characteristics, though the difference diminishes when the uncertainty of effort is well explained by predictor variables. Furthermore, which error distribution is favored depends on what one wants to estimate. These results contribute suggestions for effort estimation model construction.
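The two error assumptions the study compares can be sketched with statsmodels on synthetic data: a log-normal model fitted as ordinary least squares on log effort, and a Gamma GLM with a log link on raw effort. Neither the data nor the exact formulations below are the study's own.

```python
# Sketch: the two error assumptions fitted with statsmodels on synthetic
# data. Neither the data nor the exact formulations are the study's own.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
size = rng.uniform(20, 400, size=150)
effort = 15 * size * rng.lognormal(0, 0.35, size=150)  # multiplicative noise

X = sm.add_constant(np.log(size))

# Log-normal assumption: ordinary least squares on log(effort).
lognormal_fit = sm.OLS(np.log(effort), X).fit()

# Gamma assumption: GLM with a log link on the raw effort values.
gamma_fit = sm.GLM(effort, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print("log-normal coefficients:", lognormal_fit.params)
print("Gamma coefficients:     ", gamma_fit.params)
```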
Conference Paper
Full-text available
Software Risk Evaluation (SRE) is a process for identifying, analyzing, and developing mitigation strategies for risks in a software intensive system while it is in development. Risk assessment incorporates risk analysis and risk management, i.e. it combines systematic processes for identifying risks, determining their consequences, and deciding how to deal with them. Many risk assessment methodologies exist, focusing on different types of risk or different areas of concern. Risk evaluation determines the level of each risk, prioritizes the risks, and categorizes them. In this paper we propose a Software Risk Assessment and Evaluation Process (SRAEP) using a model-based approach, which requires a correct description of the target system, its context, and all security features. In SRAEP, we use the software fault tree (SFT) to identify risks. Finally, we compare the weaknesses of the existing Software Risk Assessment and Estimation Model (SRAEM) with the proposed SRAEP in order to show the importance of the software fault tree.
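A fault tree reduces to probability arithmetic once the gate types are fixed. The sketch below propagates basic-event probabilities through AND/OR gates assuming independent events; the tree structure and the numbers are hypothetical, not the SRAEP example.

```python
# Sketch: propagating basic-event probabilities through a small software
# fault tree with AND/OR gates, assuming independent events. The structure
# and numbers are hypothetical, not the SRAEP example.
from math import prod

def and_gate(probs):
    """All child events must occur."""
    return prod(probs)

def or_gate(probs):
    """At least one child event occurs: 1 - P(none occur)."""
    return 1 - prod(1 - p for p in probs)

# Top event: authentication failure.
weak_password_policy = 0.10
missing_rate_limiting = 0.05
brute_force_succeeds = and_gate([weak_password_policy, missing_rate_limiting])

session_token_leak = 0.02
p_top = or_gate([brute_force_succeeds, session_token_leak])
print(f"P(authentication failure) = {p_top:.4f}")
```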
Article
The meaning of an effort or cost estimate should be understood and communicated consistently and clearly to avoid planning and budgeting mistakes. Results from two studies, one of 42 software companies and one of 423 individual software developers, suggest that this is far from being the case. In both studies we found a large variety in what was meant by an effort estimate and that the meaning was frequently not communicated. To improve the planning and budgeting of software projects we recommend that the meaning of effort estimates is understood and communicated using a probability-based terminology.
Article
Previous research has found supporting evidence of a positive relationship between project risk management and project success, but literature on how risk management is applied to and integrated with project portfolios has been scarce. Based on a literature review, a comprehensive conceptual model is developed, which highlights the three components of portfolio risk management: organization, process, and culture. This study investigates their linkage to portfolio success, mediated through risk management quality, and, therefore, provides principles for more effective portfolio risk management. The developed framework can be used for further empirical research on the influence of portfolio risk management and its success.
J.D. Edgar, "How to Budget for Program Risk," Concepts, Summer 1982, pp. 6-73.
Function and Disfunction," <i>Proc. European Software Control and Measurement Conf.,</i&gt
  • T Demarco
Function and Disfunction
  • T Demarco
T. DeMarco, "Function and Disfunction," Proc. European Software Control and Measurement Conf., ESCOM Science Publishers B.V., Leiden, Netherlands, 1995.
S.D. Conte, H.E. Dunsmore, and V.Y. Shen, Software Engineering Metrics and Models, Chap. 3, Sect. 3.5, Benjamin/Cummings, Menlo Park, Calif., 1986.