Article

The Impact of Irrelevant and Misleading Information on Software Development Effort Estimates: A Randomized Controlled Field Experiment

Simula Research Laboratory, University of Oslo, Lysaker, Norway
IEEE Transactions on Software Engineering, 11/2011. DOI: 10.1109/TSE.2010.78
Source: IEEE Xplore

ABSTRACT Studies in laboratory settings report that software development effort estimates can be strongly affected by effort-irrelevant and misleading information. To increase our knowledge about the importance of these effects in field settings, we paid 46 outsourcing companies from various countries to estimate the effort required for the same five software development projects. The companies were randomly allocated to either the original requirement specification or a manipulated version of it. The manipulations were as follows: 1) a reduced-length requirement specification with no change of content, 2) added information about the low effort spent on developing the old system to be replaced, 3) added information about the client's unrealistic expectations of low cost, and 4) a restriction to a short development period, with start-up only a few months ahead. We found that the effect sizes in the field settings were much smaller than those found for similar manipulations in laboratory settings. Our findings suggest that we should be careful about generalizing to field settings the effect sizes found in laboratory settings. While laboratory settings can be useful for demonstrating the existence of an effect and understanding it better, field studies may be needed to study the size and importance of these effects.
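
The study's headline comparison is between effect sizes observed in the laboratory and in the field. As a minimal sketch of how such an effect size can be quantified, the Python snippet below computes Cohen's d between a control group (original specification) and a treatment group (manipulated specification). All estimate values and group sizes are hypothetical, not data from the study.

    # Minimal sketch: quantifying a manipulation's effect on effort estimates
    # with Cohen's d. All numbers are hypothetical, not data from the study.
    import statistics

    control = [320, 410, 290, 500, 360, 440]    # estimates (hours), original spec
    treatment = [300, 380, 310, 460, 330, 400]  # estimates (hours), manipulated spec

    def cohens_d(a, b):
        """Cohen's d using the pooled sample standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2)
        return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

    print(f"Cohen's d: {cohens_d(control, treatment):.2f}")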

  • ABSTRACT: Context: Being able to select the essential, non-negotiable product features is a key skill for stakeholders of software projects. Such selection relies on human judgment, sometimes supported by structured prioritization techniques and associated tools. Goal: Our goal was to investigate whether certain attributes of prioritization techniques affect stakeholders' threshold for judging product features as essential. The four investigated techniques reflect the four combinations of granularity (low, high) and cognitive support (low, high). Method: In one experiment, 94 subjects in four treatment groups indicated the features (from a list of 16) that would be essential in their decision to buy a new cell phone. With a similar setup in a controlled field experiment, 44 domain experts indicated the software product features that were essential for the fulfillment of the project's vision. The effects of granularity and cognitive support on the number of essential ratings were analyzed and compared between the experiments. Results: With lower granularity, significantly more features were rated as essential. The effect was large in the first experiment and extreme (Cohen's d = 2.40) in the second. Added cognitive support had a medium effect (Cohen's d = 0.43 and 0.50) but worked in opposite directions in the two experiments and was not statistically significant in the second. Implications: The results imply that software projects should avoid taking stakeholders' judgments of essentiality at face value. Practices and tools should be designed to counteract potentially harmful biases; however, more empirical work is needed to gain insight into the causes of these biases.
    01/2012;
  • ABSTRACT: Context: The effort estimates of software development work are, on average, too low. A possible reason for this tendency is that software developers, perhaps unconsciously, assume ideal conditions when they estimate the most likely use of effort. In this article, we propose and evaluate a two-step estimation process that may induce more awareness of the difference between idealistic and realistic conditions and, as a consequence, more realistic effort estimates. The proposed process differs from traditional judgment-based estimation processes in that it starts with an effort estimate that assumes ideal conditions, before the most likely use of effort is estimated.
    Information & Software Technology. 01/2011; 53:1382-1390.
  • ABSTRACT: Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored to this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. Depending on how differently different performance measures behave for SEE, they could be used as a natural way of creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the trade-off among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, sometimes even presenting opposite trends. They are then used as a source of diversity for creating SEE ensembles. A good trade-off among different measures can be obtained by using an ensemble of MOEA solutions; this ensemble performs similarly to or better than a model that does not consider these measures explicitly. Moreover, MOEA is flexible, allowing emphasis of a particular measure if desired. In conclusion, MOEA can be used to better understand the relationship among performance measures and has been shown to be very effective in creating SEE models. (A minimal illustrative sketch of the Pareto-based ensemble idea follows this list.)
    ACM Transactions on Software Engineering and Methodology (TOSEM). 10/2013; 22(4).
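
The multiobjective ensemble idea in the last item can be pictured with a small, self-contained sketch: candidate models are scored on two assumed performance measures (MAE and MMRE), the Pareto non-dominated models are kept, and their predictions are averaged as an ensemble. This is an illustration of the general concept, not the paper's MOEA implementation; the synthetic data, the random "population" of linear models standing in for an evolved population, and the choice of measures are all assumptions.

    # Illustrative sketch only: Pareto-based ensemble selection for SEE.
    # Not the paper's MOEA; random linear models stand in for an evolved population.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical project data: 3 size/complexity features and actual effort.
    X = rng.uniform(1, 10, size=(40, 3))
    y = X @ np.array([4.0, 2.0, 1.0]) + rng.normal(0, 5, size=40)

    def mae(pred, actual):
        return float(np.mean(np.abs(pred - actual)))

    def mmre(pred, actual):
        # Mean magnitude of relative error, a common SEE measure.
        return float(np.mean(np.abs(pred - actual) / np.maximum(np.abs(actual), 1e-6)))

    # Score each candidate model on both measures (lower is better for both).
    population = rng.uniform(0, 5, size=(30, 3))
    scores = [(mae(X @ w, y), mmre(X @ w, y)) for w in population]

    def dominates(b, a):
        # b dominates a: no worse on every measure, strictly better on at least one.
        return all(bi <= ai for bi, ai in zip(b, a)) and any(bi < ai for bi, ai in zip(b, a))

    pareto = [i for i, s in enumerate(scores)
              if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

    # Ensemble: average the predictions of the non-dominated models.
    ensemble = np.mean([X @ population[i] for i in pareto], axis=0)
    print(f"Pareto front size: {len(pareto)}")
    print(f"Ensemble MAE: {mae(ensemble, y):.2f}, MMRE: {mmre(ensemble, y):.3f}")

Averaging over the whole Pareto front is one simple way to combine the trade-off solutions; a real implementation could instead weight models or emphasize a particular measure, as the abstract notes.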
