Article

Measuring where it matters: Determining starting points for metrics collection


Abstract

Defining useful metrics to measure the goals of a software organisation is difficult. Defining useful metrics to measure the causes of (failure to) fulfil those organisational goals is even more difficult, as the diversity of potential causes makes their measurement elusive. In this article, we describe a method to select useful software metrics based on findings from qualitative research. In a case study, we apply this method to a previously conducted study of project post-mortem reviews to assess the validity of our prior claims. For this we collected data on 109 new software projects in the organisation in which we conducted the previous case study.


... In the context of metrics evaluation, very limited research has been carried out on conformance to organizational goals. We conclude that recent studies on metrics models have focused on software performance (Alkhattabi, Neagu, & Cullen, 2011; Ordonez & Garcia-Garcia, 2008; Schalken & Vliet, 2008; Zivkovic et al., 2010) and organizational performance (Barnes & Vidgen, 2006; Goel & Chengalur-Smith, 2010; Petkova et al., 2000). No study has addressed a metrics model focused on the relationship between data and conformance to organizational goals. ...
... Conceptual data modelling, metrics, measurement theory, structural properties.
Ordonez and Garcia-Garcia (2008): referential integrity issues are found in database integration, data quality assurance, data warehousing and data modelling (quality metric; univariate and bivariate statistics; database integration).
Schalken and Vliet (2008): defining useful metrics to measure the causes of (failure to) fulfill organizational goals is difficult, as the diversity of potential causes makes their measurement elusive (statistical analysis; exploratory cycle; confirmatory cycle; validation of insights; threats to validity).
Goel and Chengalur-Smith (2010): limited metrics for measuring the effectiveness of security policy tools (information quality; security policies).
Alkhattabi et al. (2011): improved technologies could mean faster and easier access to information but do not necessarily ensure the quality of the information; for this reason it is essential to develop valid and reliable methods of quality measurement and carry out careful information quality evaluations (GQM; e-learning system; multi-element analysis; Java 2 Standard Edition (J2SE) software development kit). ...
... Metrics are frequently used to evaluate data; in the case study, the evaluation proposed the percentage of student satisfaction, which is important for the La Trobe Student Support Services to measure student satisfaction with these services. In contrast to previous approaches to metrics (Alkhattabi et al., 2011; Goel & Chengalur-Smith, 2010; Rao et al., 2012; Schalken & Vliet, 2008), the metrics proposed for the solution in this paper apply the measurement of organizational data in relation to the organizational goals. Data is evaluated and assists the decision-making process in relation to achieving the organizational goals. ...
Article
Full-text available
Data is important in assisting decision-making in relation to the organizational goals. However, the trustworthiness of organizational data in relation to achieving the organizational goals is often questioned because of the vast amount of organizational data available. This paper advances the understanding of the organizational goals model based on ontology. This refers to the importance of assisting the organization to utilize relevant organizational data for decision-making in relation to the organizational goals. Therefore, domain experts and entrepreneurs can assess to what extent the organizational goals are achieved. The results show that ontology supports the relationship between the organizational goal elements as an effort to measure organizational data in relation to the organizational goals.
... The use of metrics in ESE has been asserted as invaluable in facilitating rational decision making during software development and maintenance (Mazinanian et al. 2012; Schalken and van Vliet 2008), with the expectation that this will in turn lead to positive outcomes such as increased development productivity, reduced deployment cycle time, and improved quality of the software product (Daskalantonakis 1992). Although the in-principle benefits of metrics to software engineering are not in doubt, the in-practice benefits have been questioned increasingly in recent years due to growing concerns over the quality of the data being collected and used in the building of models to predict characteristics such as software size and development effort. ...
Article
Data is a cornerstone of empirical software engineering (ESE) research and practice. Data underpin numerous process and project management activities, including the estimation of development effort and the prediction of the likely location and severity of defects in code. Serious questions have been raised, however, over the quality of the data used in ESE. Data quality problems caused by noise, outliers, and incompleteness have been noted as being especially prevalent. Other quality issues, although also potentially important, have received less attention. In this study, we assess the quality of 13 datasets that have been used extensively in research on software effort estimation. The quality issues considered in this article draw on a taxonomy that we published previously based on a systematic mapping of data quality issues in ESE. Our contributions are as follows: (1) an evaluation of the “fitness for purpose” of these commonly used datasets and (2) an assessment of the utility of the taxonomy in terms of dataset benchmarking. We also propose a template that could be used to both improve the ESE data collection/submission process and to evaluate other such datasets, contributing to enhanced awareness of data quality issues in the ESE community and, in time, the availability and use of higher-quality datasets.
... The subject area also includes the quantitative aspects of quality control and assurance -and this covers activities such as recording and monitoring defects during development and testing [10]. The use of software metrics is generally accepted as a means of supporting rational decision making during software development and maintenance [11] [12], with broader goals of increased productivity and quality and reduced cycle time [13]. Metrics have been designed and are used to measure a diverse set of product, process and resource characteristics, including system size, software quality, development schedule, developer effort and code complexity [12]. ...
Conference Paper
Full-text available
Reliable empirical models such as those used in software effort estimation or defect prediction are inherently dependent on the data from which they are built. As demands for process and product improvement continue to grow, the quality of the data used in measurement and prediction systems warrants increasingly close scrutiny. In this paper we propose a taxonomy of data quality challenges in empirical software engineering, based on an extensive review of prior research. We consider current assessment techniques for each quality issue and proposed mechanisms to address these issues, where available. Our taxonomy classifies data quality issues into three broad areas: first, characteristics of data that mean they are not fit for modeling, second, data set characteristics that lead to concerns about the suitability of applying a given model to another data set, and third, factors that prevent or limit data accessibility and trust. We identify this latter area as of particular need in terms of further research.
... Software organizations have been aware of the significance of measurement processes in making informed decisions to better manage software processes, products and projects. Measurement is itself regarded as a driving force for software process improvement [1]. It also facilitates effective communication between software organizations and customers [2]. ...
Article
Software organizations face challenges in managing and sustaining their measurement programs over time. The complexity of measurement programs increases with an exploding number of goals and metrics to collect. At the same time, organizations usually have limited budget and resources for metrics collection. It has been recognized for quite a while that there is a need to prioritize goals, which then ought to drive the selection of metrics. On the other hand, the dynamic nature of organizations requires measurement programs to adapt to changes in the stakeholders, their goals, information needs and priorities. Therefore, it is crucial for organizations to use structured approaches that provide transparency, traceability and guidance in choosing an optimum set of metrics that addresses the highest-priority information needs given limited resources. This paper proposes a decision support framework for metrics selection (DSFMS) which is built upon the widely used Goal Question Metric (GQM) approach. The core of the framework includes an iterative goal-based metrics selection process incorporating decision-making mechanisms in metrics selection, a pre-defined Attributes/Metrics Repository, and a Traceability Model among GQM elements. We also discuss alternative prioritization and optimization techniques for organizations to tailor the framework according to their needs. The evaluation of the GQM-DSFMS framework was done through a case study in a CMMI Level 3 software company.
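The abstract does not specify the framework's optimization technique, but the selection problem it describes (covering the highest-priority information needs within a limited collection budget) is knapsack-like. The sketch below illustrates one plausible greedy heuristic; the Metric fields, costs and priorities are invented for illustration and are not the GQM-DSFMS repository.

```python
# Hypothetical sketch of goal-driven metrics selection under a budget.
# Structures and numbers are illustrative, not the GQM-DSFMS repository.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    cost: float       # estimated collection effort (person-hours)
    priority: float   # aggregated priority of the questions it answers

def select_metrics(candidates: list[Metric], budget: float) -> list[Metric]:
    """Greedy value-density heuristic for the knapsack-like selection:
    favour metrics that answer high-priority questions cheaply."""
    chosen, spent = [], 0.0
    for m in sorted(candidates, key=lambda m: m.priority / m.cost, reverse=True):
        if spent + m.cost <= budget:
            chosen.append(m)
            spent += m.cost
    return chosen

if __name__ == "__main__":
    candidates = [
        Metric("defect density", cost=8, priority=9),
        Metric("review coverage", cost=3, priority=6),
        Metric("cycle time", cost=5, priority=8),
        Metric("code churn", cost=10, priority=4),
    ]
    for m in select_metrics(candidates, budget=15):
        print(m.name)   # review coverage, cycle time
```

An exact integer-programming formulation would also fit here; the greedy version is simply the shortest way to show priorities and costs driving the choice.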
... Despite the recommendations from Chirinos et al. to use their measurement framework (MOSME), the ISO/IEC standard was more applicable due to its wider adoption in industry, the fact that it is standardized by an international standardization body, and the easy coupling of theory and practice. The use of a standardized view of measurement processes also provided the possibility of future benchmarking with other organizations (e.g. as indicated in [36]). ...
Article
Context: Predicting the number of defects to be resolved in large software projects (the defect backlog) usually requires complex statistical methods and is thus hard to use on a daily basis by practitioners in industry. Making predictions in a simpler and more robust way is often required by practitioners in the software engineering industry.
Objective: The objective of this paper is to present a simple and reliable method for forecasting the level of defect backlog in large, lean-based software development projects.
Method: The new method was created as part of an action research project conducted at Ericsson. In order to create the method we evaluated multivariate linear regression, expert estimations and analogy-based predictions w.r.t. their accuracy and ease of use in industry. We also evaluated the new method in a live project at one of the units of Ericsson during a period of 21 weeks (from the beginning of the project until the release of the product).
Results: The method for forecasting the level of defect backlog uses an indicator of the trend (an arrow) as the basis of the forecast. Forecasts are based on a moving average which, combined with the current level of defect backlog, was found to be the best prediction method (Mean Magnitude of Relative Error of 16%) for the level of future defect backlog.
Conclusion: We found that ease of use and accuracy are the main aspects for practitioners who use predictions in their work. In this paper it is concluded that using the simple moving average provides sufficiently good accuracy (much appreciated by the practitioners involved in the study). We also conclude that using the indicator (forecasting the trend) instead of the absolute number of defects in the backlog increases confidence in our method compared to our previous attempts (regression, analogy-based, and expert estimates).
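To make the forecasting approach concrete, here is a minimal sketch of a moving-average backlog forecast with a trend indicator and an MMRE check. The window size and the weekly numbers are invented; the paper's exact formula for combining the moving average with the current level may differ.

```python
# Illustrative sketch of a moving-average defect-backlog forecast.
# Window size and data are made up; the paper's exact method may differ.

def forecast_next(backlog: list[int], window: int = 4) -> float:
    """Forecast next week's defect backlog as the mean of the last
    `window` observed levels (a simple moving average)."""
    recent = backlog[-window:]
    return sum(recent) / len(recent)

def trend_arrow(backlog: list[int], window: int = 4) -> str:
    """Indicator used instead of an absolute number: does the forecast
    point up, down, or sideways relative to the current level?"""
    delta = forecast_next(backlog, window) - backlog[-1]
    return "up" if delta > 0 else "down" if delta < 0 else "flat"

def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error over paired observations."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

weekly_backlog = [120, 135, 128, 140, 150, 146]
print(forecast_next(weekly_backlog), trend_arrow(weekly_backlog))

# Backtest on the weeks we can forecast with a 4-week window.
actuals = weekly_backlog[4:]
preds = [forecast_next(weekly_backlog[:i]) for i in range(4, len(weekly_backlog))]
print(f"MMRE = {mmre(actuals, preds):.0%}")
```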
... Despite the recommendations from Chirinos et al. to use their measurement framework (MOSME), the ISO/IEC standard was more applicable due to its wider adoption in industry, the fact that it is standardized by an international standardization body, and the easy coupling of theory and practice. The use of a standardized view of measurement processes also provided the possibility of future benchmarking with other organizations (e.g. as indicated in [18]). ...
Article
As in every engineering discipline, metrics play an important role in software development, with the difference that almost all software projects need to customize the metrics they use. In other engineering disciplines the notion of a measurement system (i.e. a tool used to collect, calculate, and report quantitative data) is well known and defined, whereas it is not as widely used in software engineering. In this paper we present a framework for developing custom measurement systems and its industrial evaluation in a software development unit within Ericsson. The results include the framework for designing measurement systems and its evaluation in real-life projects at the company. The results show that, with the help of ISO/IEC standards, measurement systems can be used effectively in the software industry and that the presented framework improves the way of working with metrics. This paper contributes a presentation of how automation of metrics collection and processing can be successfully introduced into a large organization, and shows its benefits: increased efficiency of metrics collection, increased adoption of metrics in the organization, independence from individuals, and a standardized nomenclature for metrics in the organization.
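As a rough illustration of what such a measurement system automates, the sketch below follows the ISO/IEC 15939 chain from collected base measures through a derived measure to an indicator with decision criteria. The measure names and thresholds are hypothetical, not Ericsson's.

```python
# Minimal sketch of a measurement system in the ISO/IEC 15939 sense:
# base measures are collected, a derived measure is calculated, and an
# indicator with decision criteria is reported. Names/thresholds invented.

base_measures = {"defects_found": 42, "kloc": 17.5}   # collected raw data

def derived_defect_density(bm: dict) -> float:
    """Derived measure: defects per thousand lines of code."""
    return bm["defects_found"] / bm["kloc"]

def indicator(density: float) -> str:
    # Decision criteria turn the number into information for stakeholders.
    if density > 3.0:
        return "red: investigate quality of recent deliveries"
    if density > 1.5:
        return "yellow: monitor"
    return "green"

density = derived_defect_density(base_measures)
print(f"defect density = {density:.2f}/KLOC -> {indicator(density)}")
```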
... Despite the recommendations from Chirinos et al. to use their measurement framework (MOSME), the ISO/IEC standard was more applicable due to its wider adoption in industry. The use of a standardized view of measurement processes also provided the possibility of future benchmarking with other organizations (e.g. as indicated in [24]). The standards are also based on the state of the art in measurement theory, which can be found in [25], [26], [27], [28]. ...
Article
The process of measuring in software engineering has already been standardized in the ISO/IEC 15939 standard, where activities related to identifying, creating, and evaluating measures are described. In the process of measuring software entities, however, an organization usually needs to create custom measurement systems, which are intended to collect, analyze, and present data for a specific purpose. In this paper, we present a proven industrial process for developing measurement systems, including the artifacts and deliverables important for a successful deployment of measurement systems in industry. The process was elicited during a case study at Ericsson and had been in use in the organization for over three years when the paper was written. The process is supported by a framework that simplifies the implementation of measurement systems and shortens the time from the initial idea to a working measurement system by a factor of 5 compared with using a standard development process not tailored for measurement systems. Copyright © 2010 John Wiley & Sons, Ltd.
...
• risk transfer, by using risk ownership transfer, or by using insurance, guarantees or contractual clauses;
• risk reduction, by changing the risk exposure (through impact and/or probability mitigation).

Risk acceptance criteria depend on the policies, objectives and interests of the parties involved in the organization. Organizations define their own classification of risk acceptance levels, taking into account the following [4]:
• risk acceptance criteria include multiple thresholds, each associated with a risk level, present in the risk treatment plan;
• risk acceptance criteria must be expressed as a percentage of estimated profit (or other benefits to the organization) associated with the estimated risk;
• different risk acceptance criteria apply to different classes of risks, for example non-acceptance of non-compliance risks, while high risks may be accepted as a contractual requirement;
• risk acceptance criteria for high risks include additional treatments, such as commitments and approvals that will be undertaken to reduce the risk to an acceptable level in a defined time period.

Risk acceptance criteria also differ according to the period in which the risk exists (long or short term). ...
Article
Full-text available
The purpose of this paper is to present some directions for performing risk management for information security. The article describes practical methods: a questionnaire that assesses internal control, and an evaluation based on existing controls as part of vulnerability assessment. The methods presented contain all the key elements involved in risk management: the elements proposed for the evaluation questionnaire, a list of threats, resource classification and evaluation, the correlation between risks and controls, and residual risk computation.
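The residual-risk computation mentioned in the abstract can be illustrated with a small sketch. The five-point scales, the multiplicative risk score, and the control-effectiveness factor below are common conventions assumed for illustration; the paper's actual scoring scheme may differ.

```python
# Hedged illustration of residual-risk computation: risk is scored from
# probability and impact, and existing controls reduce the exposure.
# Scales, thresholds and effectiveness values are hypothetical.

def inherent_risk(probability: int, impact: int) -> int:
    """Risk level on a 1-25 scale from 1-5 probability and impact scores."""
    return probability * impact

def residual_risk(probability: int, impact: int, control_effectiveness: float) -> float:
    """Residual risk after controls; effectiveness in [0, 1] is the fraction
    of exposure the existing controls are judged to remove."""
    return inherent_risk(probability, impact) * (1.0 - control_effectiveness)

# Example: likely threat (4/5), major impact (4/5), controls rated 60% effective.
print(inherent_risk(4, 4))        # 16 -> above a hypothetical acceptance threshold of 8
print(residual_risk(4, 4, 0.6))   # 6.4 -> falls under the threshold, risk accepted
```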
Article
Context: Software measurement programs (MPs) are an important means for understanding, evaluating, managing, and improving software processes, products and resources. However, implementing successful MPs still remains a challenge.
Objectives: To make a comprehensive review of the studies on MPs, bringing to light the existing measurement planning models and tools used for implementing MPs, the accumulated knowledge on the success/failure factors of MPs, and mitigation strategies to address their challenges.
Methods: A Systematic Literature Review (SLR) was conducted. In total, 65 primary studies were reviewed and analyzed.
Results: We identified 35 measurement planning models and 11 associated tools, most of which either proposed extensions or improvements for goal-based approaches. The identified success factors include (a) organizational adoption of the MP, (b) integration of the MP with the SDLC, (c) synchronization of the MP with SPI and (d) design of the MP. The most frequently mentioned mitigation strategies for addressing challenges are effective change management and measurement stakeholder management, automated tool support, incorporation of engineering mechanisms for designing sustainable, effective, scalable and extendible MPs, and measurement expertise and standards development.
Conclusion: Most of the success factors and mitigation strategies have interdependencies. Therefore, for successful MP implementation, software organizations should consider these factors in combination and make a feasibility study at the very beginning.
Thesis
Full-text available
Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It has been agreed that poor data quality will impact the quality of results of analyses and that it will therefore impact on decisions made on the basis of these results. Empirical software engineering has neglected the issue of data quality to some extent. This fact poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition for data quality describes it as ‘fitness for purpose’, and the issue of poor data quality can be addressed by either introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter with the special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected. The techniques were first evaluated in two different investigations by applying them to a large real world software engineering data set. In the first investigation the techniques’ ability to improve predictive accuracy in differing noise levels was tested. All three techniques improved predictive accuracy in comparison to the do-nothing approach. The filtering and polish was the most successful technique in improving predictive accuracy. The second investigation utilising the large real world software engineering data set tested the techniques’ ability to identify instances with implausible values. These instances were flagged for the purpose of evaluation before applying the three techniques. Robust filtering and predictive filtering decreased the number of instances with implausible values, but substantially decreased the size of the data set too. The filtering and polish technique actually increased the number of implausible values, but it did not reduce the size of the data set. Since the data set contained historical software project data, it was not possible to know the real extent of noise detected. This led to the production of simulated software engineering data sets, which were modelled on the real data set used in the previous evaluations to ensure domain specific characteristics. These simulated versions of the data set were then injected with noise, such that the real extent of the noise was known. After the noise injection the three noise handling techniques were applied to allow evaluation. This procedure of simulating software engineering data sets combined the incorporation of domain specific characteristics of the real world with the control over the simulated data. This is seen as a special strength of this evaluation approach. The results of the evaluation of the simulation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly, and based on the results of this evaluation they would not be recommended for the task of noise reduction. The predictive filtering technique was the best performing technique in this evaluation, but it did not perform significantly well either. 
An exhaustive systematic literature review has been carried out investigating to what extent the empirical software engineering community has considered data quality. The findings showed that the issue of data quality has been largely neglected by the empirical software engineering community. The work in this thesis highlights an important gap in empirical software engineering. It provided clarification and distinctions of the terms noise and outliers. Noise and outliers are overlapping, but they are fundamentally different. Since noise and outliers are often treated the same in noise handling techniques, a clarification of the two terms was necessary. To investigate the capabilities of noise handling techniques a single investigation was deemed as insufficient. The reasons for this are that the distinction between noise and outliers is not trivial, and that the investigated noise cleaning techniques are derived from traditional noise handling techniques where noise and outliers are combined. Therefore three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques. Each investigation should be seen as a part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process where the input of domain knowledge and the replicability of the data cleaning process are ensured.
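As an illustration of the predictive-filtering idea evaluated in the thesis (training and test sets differ, and instances the learned tree misclassifies are treated as noise candidates), here is a minimal cross-validated sketch on synthetic data. The data generation and the 10% injected label noise are invented for the example.

```python
# Sketch of "predictive filtering": a decision tree trained on one part of
# the data flags instances it misclassifies in the other part as potentially
# noisy. Data is synthetic; the thesis applies this to software project records.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # stand-in project attributes
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # stand-in class label
y[rng.choice(200, 20, replace=False)] ^= 1    # inject 10% label noise

flagged = np.zeros(len(y), dtype=bool)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    tree = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    pred = tree.predict(X[test_idx])
    flagged[test_idx] = pred != y[test_idx]   # disagreement -> noise candidate

print(f"{flagged.sum()} of {len(y)} instances flagged as potentially noisy")
```

Robust filtering would instead train and test on the same set, and "filtering and polish" would replace the flagged labels with the tree's predictions rather than discarding the instances.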
Article
In the context of software evolution, many useful activities are involved, such as evaluating the design quality of an evolving system, both to locate the parts that need particular refactoring or reengineering effort and to identify parts that are well designed. This paper aims to provide support for evaluating the code and design quality of a system; in particular, we suggest using metrics computation and antipattern detection together. We propose metrics computation based on particular kinds of micro-structures, and the detection of structural and object-oriented antipatterns, with the aim of identifying areas for design improvement. We can evaluate the quality of a system according to different issues, for example by understanding its global complexity, analyzing the cohesion and coupling of system modules, and locating the most critical and complex components that need particular refactoring or maintenance.
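A minimal sketch of the metrics-plus-antipatterns idea: compute simple per-class metrics and flag classes that cross thresholds as candidates for an antipattern. The class model, metric choice and thresholds are invented stand-ins for the paper's micro-structure-based metrics.

```python
# Hedged sketch of combining metric computation with antipattern detection:
# classes exceeding both size and coupling thresholds are flagged as
# god-class candidates. Model and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class ClassInfo:
    name: str
    methods: int        # crude size metric
    dependencies: int   # outgoing coupling to other classes

def god_class_candidates(classes, max_methods=40, max_deps=15):
    """Flag classes whose metrics cross both thresholds: a crude stand-in
    for structural antipattern detection on real micro-structures."""
    return [c.name for c in classes
            if c.methods > max_methods and c.dependencies > max_deps]

system = [ClassInfo("OrderManager", 55, 22), ClassInfo("Invoice", 12, 3)]
print(god_class_candidates(system))   # ['OrderManager']
```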
Conference Paper
Most measurement programs fail to achieve their targeted goals. This paper presents the outcomes of a systematic review of software measurement programs. The aim of the study is to analyse goal-based measurement models/frameworks, applications/tools and success factors. Initially, 1579 research studies were reviewed, and on the basis of predefined criteria 28 studies were chosen for analysis. The selection of research studies followed the structured procedure of a systematic review. The outcome of this study consists of observations and suggestions based on the analysis of the selected studies.
Article
Full-text available
"This paper advocates a validational process utilizing a matrix of intercorrelations among tests representing at least two traits, each measured by at least two methods. Measures of the same trait should correlate higher with each other than they do with measures of different traits involving separate methods. Ideally, these validity values should also be higher than the correlations among different traits measure by the same method." Examples from the literature are described as well as problems in the application of the technique. 36 refs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Article
Full-text available
Objective: To improve the qualitative data obtained from software engineering experiments by gathering feedback during experiments. Rationale: Existing techniques for collecting quantitative and qualitative data from software engineering experiments do not provide sufficient information to validate or explain all our results. Therefore, we would like a cost-effective and unobtrusive method of collecting feedback from subjects during an experiment to augment other sources of data. Design of study: We formulated a set of qualitative questions that might be answered by collecting feedback during software engineering experiments. We then developed a tool to collect such feedback from experimental subjects. This feedback-collection tool was used in four different experiments and we evaluated the usefulness of the feedback obtained in the context of each experiment. The feedback data was triangulated with other sources of quantitative and qualitative data collected for the experiments. Results: We have demonstrated that the collection of feedback during experiments provides useful additional data to: validate the data obtained from other sources about solution times and quality of solutions; check process conformance; understand problem-solving processes; identify problems with experiments; and understand subjects' perception of experiments. Conclusions: Feedback collection has proved useful in four experiments and we intend to use the feedback-collection tool in a range of other experiments to further explore the cost-effectiveness and limitations of this technique. It is also necessary to carry out a systematic study to more fully understand the impact of the feedback-collection tool on subjects' performance in experiments.
Article
Full-text available
Reporting on the SWEBOK project, the authors, who represent the project's editorial team, discuss the three-phase plan to characterize a body of knowledge, a vital step toward developing software engineering as a profession.
Conference Paper
Full-text available
Our objective is to describe how software engineering might benefit from an evidence-based approach and to identify the potential difficulties associated with the approach. We compared the organisation and technical infrastructure supporting evidence-based medicine (EBM) with the situation in software engineering. We considered the impact that factors peculiar to software engineering (i.e. the skill factor and the lifecycle factor) would have on our ability to practice evidence-based software engineering (EBSE). EBSE promises a number of benefits by encouraging integration of research results with a view to supporting the needs of many different stakeholder groups. However, we do not currently have the infrastructure needed for widespread adoption of EBSE. The skill factor means software engineering experiments are vulnerable to subject and experimenter bias. The lifecycle factor means it is difficult to determine how technologies will behave once deployed. Software engineering would benefit from adopting what it can of the evidence approach provided that it deals with the specific problems that arise from the nature of software engineering.
Article
Full-text available
Increasingly, organisations are forgoing an ad hoc approach to metrics in favour of complete metrics programs. The authors identify consensus requirements for metrics program success and examine how programs in two organisations measured up.
Article
Full-text available
Measurement programs in software organizations are an important source of control over quality and cost in software development. The findings of the research presented here are based on an industry-wide survey conducted to examine the factors that influence success in software metrics programs. Our approach is to go beyond the anecdotal information on metrics programs that exists in the literature and use the industry-wide survey data to rigorously test for the effects of various factors that affect metrics program success. We measure success in metrics programs using two variables: use of metrics information in decision-making, and improved organizational performance. The various determinants of metrics program success are divided into two sets: organizational variables and technical variables. The influence of these variables on metrics program success is tested using regression analysis. Our results support some of the factors discussed in the anecdotal literature, such as management support, goal alignment, and communication and feedback. Certain other factors, such as metrics quality and the ease of data collection, are not as strongly influential on success. We conclude the paper with a detailed discussion of our results and suggestions for future work.
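The regression-based test of success factors can be sketched as follows. The factor names mirror those discussed in the abstract, but the survey data here is simulated and the coefficients are not the study's results.

```python
# Sketch of regressing a metrics-program success measure on organizational
# and technical factors. Survey data is simulated; the study's actual
# variables and coding are not reproduced here.

import numpy as np

rng = np.random.default_rng(2)
n = 150
mgmt_support = rng.uniform(1, 7, n)     # Likert-style factor scores
goal_alignment = rng.uniform(1, 7, n)
metrics_quality = rng.uniform(1, 7, n)

# Simulated outcome: success driven mainly by support and alignment.
success = (0.6 * mgmt_support + 0.5 * goal_alignment
           + 0.1 * metrics_quality + rng.normal(scale=1.0, size=n))

X = np.column_stack([np.ones(n), mgmt_support, goal_alignment, metrics_quality])
coef, *_ = np.linalg.lstsq(X, success, rcond=None)
for name, b in zip(["intercept", "mgmt_support", "goal_alignment", "metrics_quality"], coef):
    print(f"{name:>16}: {b:+.2f}")
```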
Despite significant progress in the last 15 years, implementing a successful measurement program for software development is still a challenging undertaking. Most problems are not of theoretical but of methodological or practical nature. In this article, we present lessons learned from experiences with goal-oriented measurement. We structure them into practical guidelines for efficient and useful software measurement aimed at process improvement in industry. Issues related to setting measurement goals, defining explicit measurement models, and implementing data collection procedures are addressed from a practical perspective. In addition, guidelines for using measurement in the context of process improvement are provided.
Conference Paper
Post-mortem project reviews often yield useful lessons learned. These project reviews are mostly recorded in plain text. This makes it difficult to derive useful overall findings from a set of such post-mortem reviews, for example to monitor and guide a software process improvement program. We have developed a five-step method to transform the qualitative, natural-language information present in those reports into quantitative information. This quantitative information can be analyzed statistically and related to other types of quantitative project-specific information. In this paper we discuss the method, and show the results of applying it in the setting of a large industrial software process improvement initiative.
We suggest that empirical studies of maintenance are difficult to understand unless the context of the study is fully defined. We developed a preliminary ontology to identify a number of factors that influence maintenance. The purpose of the ontology is to identify factors that would affect the results of empirical studies. We present the ontology in the form of a UML model. Using the maintenance factors included in the ontology, we define two common maintenance scenarios and consider the industrial issues associated with them. Copyright © 1999 John Wiley & Sons, Ltd.
Article
Project evaluation is essential to understand and assess the key aspects of a project that make it either a success or a failure. The latter is influenced by a large number of factors, and it is often hard to measure them objectively. This paper addresses this by introducing a new method for identifying and assessing key project characteristics, which are crucial for a project's success. The method consists of a number of well-defined steps, which are described in detail. The method is applied to two case studies from different application domains and continents. It is concluded that patterns can be detected in the data sets. Further, the analysis of the two data sets shows that the proposed method using subjective factors is useful, since it provides an increased understanding, insight and assessment of which project factors might affect project success.
Postmortem project reviews often yield useful lessons learned. These project reviews are mostly recorded in plain text. This makes it difficult to derive useful overall findings from a set of such postmortem reviews, e.g. to monitor and guide a software process improvement program. We have developed a five-step method to transform the qualitative, natural-language information present in those reports into quantitative information. This quantitative information can be analyzed statistically and related to other types of quantitative project-specific information. In this article, we discuss the method, and show the results of applying it in the setting of a large industrial software process improvement initiative. Through the application of the analysis method in the case study, improved questions for a new evaluation procedure were discovered. The analysis also showed that in this organization team cooperation and the architecture of the infrastructure had a major impact on project performance. Copyright © 2006 John Wiley & Sons, Ltd.
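As a toy illustration of turning qualitative review text into quantitative data, the sketch below counts occurrences of indicator terms per category. The real five-step method relies on systematic human coding of lessons learned; this naive keyword codebook is invented purely to show the shape of the output.

```python
# Toy illustration of converting qualitative post-mortem text into counts
# that can be analyzed statistically. Categories and terms are invented;
# the actual method uses systematic coding, not keyword matching.

import re
from collections import Counter

CODEBOOK = {
    "cooperation": ["team", "communication", "cooperation"],
    "architecture": ["architecture", "infrastructure", "interface"],
    "planning": ["schedule", "estimate", "deadline"],
}

def code_review(text: str) -> Counter:
    """Count how often each category's indicator terms occur in a review."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for category, terms in CODEBOOK.items():
        counts[category] = sum(words.count(t) for t in terms)
    return counts

review = ("Communication within the team was poor and the infrastructure "
          "architecture changed late, which wrecked the schedule.")
print(code_review(review))
# Counter({'cooperation': 2, 'architecture': 2, 'planning': 1})
```

Per-project counts like these can then be related to quantitative project outcomes, which is the step where the case study found team cooperation and infrastructure architecture to matter most.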
Book
Most writing on sociological method has been concerned with how accurate facts can be obtained and how theory can thereby be more rigorously tested. In The Discovery of Grounded Theory, Barney Glaser and Anselm Strauss address the equally important enterprise of how the discovery of theory from data, systematically obtained and analyzed in social research, can be furthered. The discovery of theory from data (grounded theory) is a major task confronting sociology, for such a theory fits empirical situations, and is understandable to sociologists and laymen alike. Most important, it provides relevant predictions, explanations, interpretations, and applications. In Part I of the book, "Generating Theory by Comparative Analysis," the authors present a strategy whereby sociologists can facilitate the discovery of grounded theory, both substantive and formal. This strategy involves the systematic choice and study of several comparison groups. In Part II, "The Flexible Use of Data," the generation of theory from qualitative, especially documentary, and quantitative data is considered. In Part III, "Implications of Grounded Theory," Glaser and Strauss examine the credibility of grounded theory. The Discovery of Grounded Theory is directed toward improving social scientists' capacity for generating theory that will be relevant to their research. While aimed primarily at sociologists, it will be useful to anyone interested in studying social phenomena (political, educational, economic, industrial), especially if their studies are based on qualitative data.
Article
A practical view of software measurement that formed the basis for a companywide software metrics initiative within Motorola is described. A multidimensional view of measurement is provided by identifying different dimensions (e.g., metric usefulness/utility, metric types or categories, metric audiences, etc.) that were considered in this companywide metrics implementation process. The definitions of the common set of Motorola software metrics, as well as the charts used for presenting these metrics, are included. The metrics were derived using the goal/question metric approach to measurement. A distinction is made between the use of metrics for process improvement over time across projects and the use of metrics for in-process project control. Important experiences in implementing the software metrics initiative within Motorola are also included
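The goal/question/metric derivation mentioned above is essentially a tree from one goal through questions to metrics. A minimal sketch, with an invented goal and metric set rather than Motorola's actual definitions:

```python
# Minimal illustration of the goal/question/metric derivation.
# The goal, questions, and metrics are invented examples.

gqm = {
    "goal": "Improve delivered software quality",
    "questions": {
        "How many defects escape to the field?": ["post-release defect density"],
        "Where are defects introduced?": ["defects by phase of origin"],
        "Are we finding defects early?": ["pre-/post-release defect ratio"],
    },
}

# Walk the tree: each question justifies the metrics that answer it.
for question, metrics in gqm["questions"].items():
    print(f"{question} -> {', '.join(metrics)}")
```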
Article
Success factors for measurement programs as identified in the literature typically focus on the 'internals' of the measurement program: incremental implementation, support from management, a well-planned metrics framework, and so on. However, for a measurement program to be successful within its larger organizational context, it has to generate value for the organization. This implies that attention should also be given to the proper mapping of some identifiable organizational problem onto the measurement program, as well as the translation back of measurement results to organizational actions. In this paper, we present a generic process model for measurement-based improvement, which does cover the latter issues as well. We describe a number of common uses for measurement programs in software organizations, from which we derive additional 'external' success factors. In addition, we propose a number of activities that organizations can use to implement value-generating measurement programs.
References

Abran, A., Moore, J.W. (Eds.), 2004. SWEBOK: Guide to the Software Engineering Body of Knowledge, 2004 version ed. IEEE Computer Society Press, Washington, DC, USA.

Basili, V.R., Caldiera, G., Rombach, H.D., 1994. Experience factory. In: Marciniak, J.J. (Ed.), Encyclopedia of Software Engineering, vol. 1. John Wiley & Sons, New York, NY, USA.

Cook, T.D., Campbell, D.T., 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Rand McNally College Publishing Company, Chicago, IL, USA.

Daskalantonakis, M.K., 1992. A practical view of software measurement and implementation experiences within Motorola. IEEE Transactions on Software Engineering 18 (11), 998–1010.

DeMarco, T., 1982. Controlling Software Projects: Management, Measurement and Estimation. Yourdon Computing Series. Prentice-Hall, Englewood Cliffs, NJ, USA.

Glaser, B.G., Strauss, A.L., 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Observations. Weidenfeld and Nicolson, London, UK.

Kitchenham, B.A., et al., 1999. Towards an ontology of software maintenance. Journal of Software Maintenance: Research and Practice 11 (6), 365–389.

Paulk, M.C., Weber, C.V., Curtis, B., Chrissis, M.B., 1995. The Capability Maturity Model: Guidelines for Improving the Software Process. The SEI Series in Software Engineering. Addison-Wesley, Reading, MA, USA.

Strauss, A.L., Corbin, J.M., 1990. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Sage Publications, Newbury Park, CA, USA.

van Solingen, R., Berghout, E., 1999. The Goal/Question/Metric Method: A Practical Method for Quality Improvement of Software Development. McGraw-Hill, New York, NY, USA.