Software Process Improvement and Practice

Published by Wiley
Online ISSN: 1099-1670
Print ISSN: 1077-4866
Publications
The main objective of this paper is to put forward a software process model for high-performance systems (HPS), and to present a formal framework to describe software design methodologies (SDMs) for those systems. The framework consists of two main parts: the software process activities which characterise the development of HPS, and the components of the SDM (concepts, artifacts, representation and actions) which are essential for any methodology. The framework relates these two parts by identifying generic components of each activity in the software process that can be used to classify and evaluate SDMs for HPS. The framework has been formally specified using the language Z and used to derive formal specifications of SDMs. This is illustrated in the paper by presenting part of the specification of ODM (an Occam design method).
 
Software component reuse is the key to significant gains in productivity. However, a major problem is identifying and developing potentially reusable components. This paper concentrates on our approach to the development of reusable software components. A prototype tool has been developed, known as the Reuse Assessor and Improver System (RAIS), which can interactively identify, analyse, assess, and modify abstractions, attributes and architectures that support reuse. Practical and objective reuse guidelines are used to represent reuse knowledge and to perform domain analysis. RAIS takes existing components, provides systematic reuse assessment based on reuse advice and analysis, and produces components that are improved for reuse. Our work on guidelines has been extended to a large-scale industrial application.
 
Workflows emphasize the partial order of activities and the flow of data between activities. In contrast, cooperative processes emphasize the sharing of an artefact and its gradual evolution toward the final product under the cooperative and concurrent activities of all the involved actors. This paper contrasts workflow and cooperative processes and shows that they are more complementary than conflicting and that, with some extensions, both approaches can fit into a single tool and formalism. The paper presents Celine, a concurrent engineering tool that also allows classic workflows and software processes to be defined and supported. We claim that the availability of both classes of features allows for the modelling and support of very flexible processes, closer to software engineering reality.
 
During phase two of the SPICE trials, the Proposed Draft Technical Report version of ISO/IEC 15504 is being empirically evaluated. This document set is intended to become an international standard for Software Process Assessment. One thread of evaluations being conducted during these trials concerns the reliability of assessments based on ISO/IEC PDTR 15504. In this paper we present the first evaluation of the reliability of assessments based on the PDTR version of the emerging international standard. In particular, we evaluate the interrater agreement of assessments. Our results indicate that interrater agreement is considerably high, both for individual ratings at the capability attribute level and for the aggregated capability levels. In general, these results are consistent with those obtained using the previous version of the Software Process Assessment document set (known as SPICE version 1.0), where capability ratings were also found to have generally high interrater agreement.
 
This article describes the work currently being undertaken to help ISO/IEC TR 15504 progress to the status of a full International Standard, and outlines the changes in design that are to be incorporated in the revision. It describes the inputs for the design decisions that were taken; identifies the fundamental changes in the architecture of the Standard; and briefly describes the current status of the development of the Standard. Copyright © 2004 John Wiley & Sons, Ltd.
 
In this article, we describe the results of CMMI software process appraisal work with six small- to medium-sized software development companies. Our analysis of six CMMI process areas appraised within each of these organisations is presented. Commonly practiced or not practiced elements of the model are identified, leading to the notion of perceived value associated with each specific CMMI practice. A finer-grained framework, which encompasses the notion of perceived value within specific practices, is presented. We argue that such a framework provides incentive to small- to medium-sized enterprises starting process improvement programmes. Copyright © 2005 John Wiley & Sons, Ltd.
 
In their own ways, both ISO 9001 and the Capability Maturity Model (CMM) have provided guidance and urgency to the improvement of software processes. However, there is a point where each calls for statistical analysis of the performance of the processes in use, treating software development as a manufacturing activity. This paper argues that this is an unhelpful demand that should be reconsidered by the writers of standards. © 1996 by John Wiley & Sons Ltd and Gauthier-Villars
 
The increasingly prevalent use of Commercial-Off-The-Shelf (COTS) components in software development has attracted a huge capital pool to the industry. The result is an industry characterized by strong forces of change and weak resistance. In such an environment, weaker players are constantly displaced by stronger players, and older technologies are constantly displaced by emerging technologies. This phenomenon has brought a new class of risk, namely vendor business factors, to the COTS acquisition community. However, the existing COTS vendor evaluation taxonomies remain product-centric, focusing only on product and cost-related factors. This article extends the taxonomies to incorporate vendor business factors into the COTS selection process. The resulting model is named VERPRO (Vendor Economics and Risk Profiler). The foundation of VERPRO is a measurement-based vendor evaluation taxonomy that categorizes the evaluation criteria into four main factors: product, cost, service, and business. VERPRO enables the acquisition community to incorporate business factors into the vendor selection process and provides a revolutionary approach that merges the knowledge of financial analysis into software engineering. Copyright © 2006 John Wiley & Sons, Ltd.
 
Meta-level architectures combined with domain-specific languages serve as a powerful tool to build and maintain a software product line: Meta-level architectures lead to adaptable software systems. Executable descriptions capture expert knowledge. We have developed a meta-level architecture for a software product line of legal expert systems. Four meta-level mechanisms support both variability and evolution of the product line. Domain analysis had shown that separation of expert knowledge from technical code was essential. Descriptions written in domain-specific languages reside in the meta level, and serve as specification, code, and documentation. Technical code finds its place in interpreting machines in the base level. We discuss how meta-level architectures influence the qualities of software product lines and how properties and patterns of the problem space can guide the design of domain-specific languages.
 
There is often a misconception that adopting and tailoring agile methods is straightforward, resulting in improved products and increasingly satisfied customers. However, the empirical nature of agile methods means that potential practitioners need to carefully assess whether they are exposed to the risks that can make agile method adoption problematic. This is particularly the case for small software companies, which are less able to absorb the impact of failed experimentation. This study describes a minimally intrusive assessment approach for small software companies preparing for agile method adoption and tailoring in the light of key risks. The approach has been conducted with six small software companies, three of which are presented to show the evolution of the approach, describe the resource commitment that companies have to make, and highlight the type of information generated from an assessment. The contribution of this study is that small software companies have an alternative to ‘mere experimentation’ with agile methods and can take reasoned steps towards their adoption and tailoring. Copyright © 2008 John Wiley & Sons, Ltd.
 
Abstraction is the key concept. End-users need to be isolated as much as possible from problems arising with the addition of significantly more users. Workflow systems can address this by hierarchically abstracting the workflow and its representations. In order to maintain consistent and coherent project data across large or increasing numbers of people, details that may embody large, multi-person workflows themselves can be encapsulated.
 
We present an agent-based simulation model developed to study how size, complexity and effort relate to each other in the development of open source software (OSS). In the model, many developer agents generate, extend, and re-factor code modules independently and in parallel. This accords with empirical observations of OSS development. To our knowledge, this is the first model of OSS evolution that includes the complexity of software modules as a limiting factor in productivity, the fitness of the software to its requirements, and the motivation of developers. Validation of the model was done by comparing the simulated results against four measures of software evolution (system size, proportion of highly complex modules, level of complexity control work, and distribution of changes) for four large OSS systems. The simulated results resembled the observed data, except for system size: three of the OSS systems showed alternating patterns of super-linear and sub-linear growth, while the simulations produced only super-linear growth. However, the fidelity of the model for the other measures suggests that developer motivation and the limiting effect of complexity on productivity have a significant effect on the development of OSS systems and should be considered in any model of OSS development.
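
A minimal sketch of the mechanism described above, assuming nothing beyond what the abstract states: developer agents create, extend, and refactor modules, with a module's complexity throttling the chance that work on it succeeds. All parameters, probabilities, and thresholds here are illustrative assumptions, not values from the authors' model.

```python
import random

random.seed(42)
modules = [1]                    # complexity score of each code module
N_DEVELOPERS, N_STEPS = 20, 500

for _step in range(N_STEPS):
    for _dev in range(N_DEVELOPERS):
        m = random.randrange(len(modules))
        # Complexity limits productivity: work on a module succeeds
        # with probability 1/complexity.
        if random.random() > 1.0 / modules[m]:
            continue
        action = random.random()
        if action < 0.1:
            modules.append(1)                        # generate a new module
        elif action < 0.8:
            modules[m] += 1                          # extend: complexity grows
        else:
            modules[m] = max(1, modules[m] - 2)      # refactor: complexity drops

highly_complex = sum(c > 5 for c in modules) / len(modules)
print(f"{len(modules)} modules, {highly_complex:.0%} highly complex")
```

The refactoring branch is what keeps this toy system productive: as complexity grows, the success probability 1/complexity falls, mirroring the paper's idea of complexity as a limiting factor.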
 
One of the enduring issues being evaluated during the SPICE trials is the reliability of assessments. One type of reliability is the extent to which different assessors produce similar ratings when assessing the same organization and presented with the same evidence. In this paper we report on a study that was conducted to start answering this question. Data were collected from an assessment of 21 process instances covering 15 processes. In each of these assessments two independent assessors performed the ratings. We found that six of the fifteen processes do not meet our minimal benchmark for interrater agreement. Three of these were due to systematic biases by either an internal or external assessor. Furthermore, for eight processes specific rating scale adjustments were identified that could improve their reliability. The findings reported in this paper provide guidance for assessors using the SPICE framework.
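
Interrater agreement of the kind benchmarked here is typically quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal illustration, assuming two assessors rating the same process instances on the four-point N/P/L/F adequacy scale; the ratings are invented, not data from the study.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten process instances on the N/P/L/F scale.
internal = ["F", "L", "L", "P", "F", "L", "N", "L", "P", "F"]
external = ["F", "L", "P", "P", "F", "L", "P", "L", "P", "L"]
print(f"kappa = {cohen_kappa(internal, external):.2f}")  # ~0.57 here
```

A benchmark of the sort the study mentions would then be a threshold on kappa (values near 1 mean near-perfect agreement, values near 0 mean chance-level agreement).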
 
The OOSPICE project aims at creating a capability assessment package for component-based development (CBD), which complements and extends the ISO 15504 technical report (SPICE) and its pending standard. In addition, it will offer support for process definition, creation and tailoring in the CBD domain. The major deliverables are a process reference model for CBD, a process assessment methodology, a new CBD methodology and supporting toolset, and an underpinning CBD process metamodel. Here, we evaluate the various terminologies in current usage in OO, CBD, capability assessment and the ISO standards, and then describe the important architectural framework of the project, which brings together the two previously disparate fields of process and methodology metamodeling with capability assessment and improvement. Additionally, we show how elements of preexisting standards, such as ISO 15504 and ISO 12207, as well as elements of existing OO process metamodels can be successfully reused to create a standard for this new focus on component-based systems development. Copyright © 2004 John Wiley & Sons, Ltd.
 
In trying to understand the architecture of the processes governing the development of a large software product, we used various techniques for describing, analyzing and visualizing that process system. A "big picture" visualization of the software development processes yielded a number of cogent observations: I/O mismatches, large fan-ins/fan-outs, no clear path through the project, inconsistency in the level of detail, and no clear architectural organizational structure. We report the results of a quality improvement team (QIT) put together 1) to determine how the process architecture got to this state, 2) to delineate the base measures by which we plan to measure architectural improvement, 3) to establish surface and root causes for the current state of the architecture and define their interrelationships, and 4) to derive primary and secondary process architecture drivers and to establish countermeasures that will yield a more coherent and appropriate process architecture.
 
Software process assessment is an essential activity for improving software development in an organization. It is very difficult to put together an efficient and effective improvement plan unless it is based on the results of a preparatory assessment. This will determine the current status of the organization's software process and will identify the areas or points that need improvement. There is a need for a rigorous method of assessment that encompasses the factors that affect software production. By rigorous we mean an evaluation based on Evaluation Theory. This theoretical foundation must ensure that the evaluation carried out is comprehensive and reliable. Evaluation Theory, defined by Scriven and other authors, describes the components for each type of evaluation method. Six guidelines can be deduced by generalising these components, according to which any evaluation method should be developed: target, criteria, yardstick, assessment techniques, synthesis techniques, and evaluation process. In this paper, we present a software process assessment method based on Evaluation Theory. This theoretical foundation was one of the things that compelled us to reflect on the factors to be included in the evaluation. As a result, the proposed method jointly assesses four essential factors in software production: processes, technology resources, human resources, and organizational design. The aim behind this method is to provide a remedy for the main shortcomings of current software process assessment methods: partiality of the evaluated factors, because most centre on the assessment of management processes; and non-rigorousness of the evaluation processes. The applicability of the proposed method has been corroborated by means of experiments conducted at small and medium-sized Spanish businesses. Copyright © 2000 John Wiley & Sons Ltd
 
Many software process assessment models have been developed based on, or inspired by, the Software Engineering Institute's Capability Maturity Model (CMM). The outputs arising from such models can be voluminous and complex, requiring tool support to comprehend and analyse them. The paper describes some tools developed in order to visualise the results of SPICE-conformant assessments. The tools, which are equally useful to software producers and to software procurers, also have a valuable research role and have been used to summarise the results of the worldwide trials of the SPICE reference model. This work has itself led to the enhancement of the tools, based partly on experience of their use and partly on revisions to the SPICE model.

Keywords: process assessment, process improvement, tool support
 
In 1987, the SEI released a software process maturity framework and maturity questionnaire to support organizations in improving their software process. Four years later, the SEI released the Capability Maturity Model℠ for Software (SW-CMM℠). The SW-CMM has influenced software process improvement worldwide to a significant degree. More recently, the SEI has become involved in developing additional capability maturity models that impact software. This paper discusses the problems these CMMs are trying to address, our goals in developing these CMMs, the objectives and status of each of these models, and our current plans for the 1996–1997 time frame. We then briefly turn to topics that address the usability of the SW-CMM in certain situations: in small organizations and in challenging application domains. We then describe the SEI's involvement in an international standards effort to create a standard for software process assessment. Finally, to gain perspective on how the CMMs might impact the community in the future, we look at the growing use of the SW-CMM and some benefits associated with its use.
 
Self-customizable systems must adapt themselves to evolving user requirements or to their changing environment. One way to address this problem is through automatic component composition, systematically (re-)building systems according to the current requirements by composing reusable components. Our work addresses requirements-driven composition of multi-flow architectures. This article presents the central element of our automated runtime customization approach, the concept of composable components: the internal configuration of a composable component is not fixed, but is variable within the limits of its structural constraints. In this article, we introduce the mechanism of structural constraints as a way of managing the variability of customizable systems. Composition is performed in a top–down stepwise refinement manner, while recursively composing the internal structures of the composable components according to external requirements over the invariant structural constraints. The final section of the article presents our cases of practical validation.
 
Detecting and fixing defects are key activities in a testing process, and they require two different skill sets. Unfortunately, many leading software estimation methods, such as COCOMO II, estimate effort mainly from the size of the software and allocate testing effort proportionally among the various activities. Effort for detecting defects and effort for fixing them are simply lumped into the software testing process/phase and cannot be estimated or managed satisfactorily. In fact, the activities of detecting defects and fixing them are quite different and need differently skilled people. Inadequate effort estimation makes test process management difficult and is a main cause of software project delays. In this article, we propose a method for Quantitatively Managing the Testing (TestQM) process, which includes identifying performance objectives, establishing a performance baseline, establishing a process-performance model for fixing effort, and establishing a process-performance model for the fixing schedule; this supports the high-level process management described in Capability Maturity Model Integration (CMMI). In our method, the defect injection distribution (DID) is used to derive estimates of fixing effort and schedule. The TestQM method has been successfully applied in a software organization for quantitative management of its testing process and proved helpful in estimating and controlling the defects, effort and schedule of the testing process. Copyright © 2008 John Wiley & Sons, Ltd.
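
To make the role of the defect injection distribution concrete, here is a small sketch of how such a distribution might drive a fixing-effort estimate: defects injected in earlier phases typically cost more to fix once they reach testing. The phase shares, per-defect fix costs, and expected defect count are hypothetical, not TestQM's calibrated values.

```python
# Hypothetical defect injection distribution: share of defects injected
# in each phase, and average hours to fix one such defect during testing.
did = {"requirements": 0.15, "design": 0.30, "coding": 0.55}
fix_hours = {"requirements": 8.0, "design": 4.0, "coding": 1.5}

# Expected defect count, e.g. from a performance baseline
# (size x historical defect density).
expected_defects = 200

fixing_effort = sum(
    expected_defects * share * fix_hours[phase]
    for phase, share in did.items()
)
print(f"estimated fixing effort: {fixing_effort:.0f} person-hours")
# 200 * (0.15*8 + 0.30*4 + 0.55*1.5) = 645 person-hours
```

Separating this fixing-effort estimate from detection effort is the point the abstract makes: the two draw on different skills and should be planned as distinct quantities.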
 
Nowadays, distributed development is common in software development. Despite its many advantages, research in the last decade has consistently found that distribution has a negative impact on collaboration in general, and on communication delay and time to complete tasks in particular. Adapted processes, practices, and tools are needed to overcome these challenges. We report on an empirical study of communication structures and delay in IBM's distributed development project Jazz. The Jazz project explicitly focuses on distributed collaboration and has adapted processes and tools to overcome known challenges. We explore the effect of distance on communication and task completion time and use social network analysis to obtain insights about the collaboration in the Jazz project. We discuss our findings in the light of existing literature on distributed collaboration and delays.
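
The social-network-analysis step can be pictured as building a graph of who communicates with whom over work items and inspecting centrality measures. The sketch below uses the networkx library; the edge list is invented for illustration, whereas the study mined actual Jazz repository and communication data.

```python
import networkx as nx

# Hypothetical (commenter, work-item owner) pairs standing in for
# mined communication events.
comments = [
    ("ana", "bo"), ("ana", "chen"), ("bo", "chen"),
    ("chen", "dia"), ("dia", "ana"), ("erik", "chen"),
]
g = nx.Graph()
g.add_edges_from(comments)

# Developers with high betweenness sit on many shortest paths and act
# as communication brokers, e.g. across sites.
for dev, score in sorted(nx.betweenness_centrality(g).items(),
                         key=lambda kv: -kv[1]):
    print(f"{dev:5s} betweenness = {score:.2f}")
```

On real project data, correlating such centrality scores with task completion times is one way to probe the delay effects the study discusses.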
 
This paper addresses the problem of reorganizing inspection, i.e. how to build an inspection process which promotes the understanding of artifacts while optimizing the time of face-to-face meetings and still guaranteeing consistent information between designers and inspectors. Instead of face-to-face meetings, more emphasis is given here to individual and public inspections. Insufficient understanding and communication are reduced by justifying design decisions through design and inspection rationales and through collaborative tool environments. The perceived benefits of our approach are improved understandability, improved communication between designers and inspectors, easier scheduling and optimized inspection time.
 
The usage of data-intensive web applications raises problems concerning consistency, navigation, and data duplication. Content management systems (CMSs) can overcome these problems. In this research, we focus on special types of web content management systems—web-based CMS applications. Currently, no generally available methods exist for implementing and configuring these applications. In this research, an assembly-based situational method engineering approach is proposed for constructing an implementation method for web-based CMS applications. The approach consists of four steps: (a) identification of implementation situations, (b) selection of candidate methods, (c) analysis and storage of relevant fragments in the method base, and (d) assembly of the new method using route maps to obtain situationality. This method engineering approach is supported by a meta-modeling technique, resulting in a process-data diagram, which integrates UML (Unified Modeling Language) activity diagrams and class diagrams. To validate the method, two case studies were performed at a large health insurance organization and a telecommunication organization in the Netherlands. The new implementation method performed well in both case studies, and the project workers were satisfied with the associated templates and instructions. Copyright © 2006 John Wiley & Sons, Ltd.
 
Method engineering has emerged from the need to adapt methods to better fit the development task at hand. Its aim is to provide techniques for retrieving reusable method components, adapting them, and assembling them to form a new method. The paper provides a survey of the main results obtained for the two issues of defining and assembling components. It argues, thereafter, that the full power of method components can be fully exploited by moving to the notion of method services. The paper finally outlines a possible approach towards Method as a Service (MaaS), and illustrates it. Copyright © 2009 John Wiley & Sons, Ltd.
 
Created in 1993, the European Software Institute (ESI) is a foundation launched at the initiative of the European Commission, with the support of leading European companies and the Basque government. The primary objective of ESI is to contribute to the competitiveness of European industry through the promotion and continuous improvement of knowledge in information and communication technologies. To this end, ESI identifies, validates, packages, and disseminates good software development and management practices.
 
The paper describes a proposed method (MinimalEDoc—Minimal Documents for Software Evolution) for managing the documents required during the evolution phase of information systems. The method applies a minimalist approach to documentation and the concept of ‘Total Cost of Ownership’. It is composed of definitions, principles, a document model, a management process and a supporting tool. Documents are classified according to the document types ‘maps’, ‘aspects’, ‘components’ and ‘critical points’. For specific classes of applications, reusable document schemes and patterns can be defined. The documents are organized into a common knowledge base subject to multiple views for different classes of users. The implementation technology is based on a wiki tool integrating external specialized CASE tools. The methodology is supported by an empirical case study involving the information system of a large retail company. Copyright © 2009 John Wiley & Sons, Ltd.
 
A Correction to this article has been published in Software Process: Improvement and Practice 2005; 10(3):355.
In this article, we present an integrated framework for software process improvement according to the Capability Maturity Model (CMM). The framework is integrated in two ways. First, it is based on the systematic integration of dynamic modules to build dynamic models that model and simulate each maturity level proposed in the reference model. As a consequence, a hierarchical set of dynamic models is developed following the same hierarchy of levels suggested in the CMM. Second, the dynamic models of the framework are integrated with different static techniques commonly used in planning, control, and process evaluation. The paper describes the reasons for following this approach, the integration process of models and techniques, and the implementation of the framework, and shows an example of how it can be used in software process improvement concerning the cost of software quality. Copyright © 2004 John Wiley & Sons, Ltd.
 
Many organizations have turned towards globally distributed software development (GSD) in their quest for cheap, higher-quality software with a short development cycle. However, this kind of development has often been reported as being problematic and complex to manage. There are indications that trust is a fundamental factor in determining the success or failure of GSD projects. This article studies the key factors that cause a lack of trust and the effects of lacking trust, and presents data from four projects in which problems with trust were experienced. We found the key factors to be poor socialization and socio-cultural fit; increased monitoring; inconsistency and disparities in work practices; reduction of and unpredictability in communication; and a lack of face-to-face meetings, language skills, conflict handling, and cognitive-based trust. The effects of lacking trust were a decrease in productivity, quality, information exchange, feedback, and morale among the employees, and an increase in relationship conflicts. In addition, the employees tended to self-protect, to prioritize individual goals over group goals, and to doubt negative feedback from the manager. Further, the managers increased monitoring, which reduced the level of trust even more. These findings have implications for software development managers and practitioners involved in GSD. Copyright © 2008 John Wiley & Sons, Ltd.
 
In this paper we monitor and measure the capability of experienced software developers to predict software change caused by new requirements to an existing software system (i.e. impact analysis) at different levels of granularity. The study shows a general tendency to underpredict, and that prediction accuracy deteriorated with increasing level of detail. The results were non-intuitive for the practitioners participating in the study and are therefore important to consider when revising current processes and cost models based on such estimates. The study is a good example of how the software industry can cooperate with university researchers in monitoring, measuring and evaluating current practices. The results will be used by the organisation as a baseline for further studies of subsequent releases of the product.

Keywords: process monitoring, change management, impact analysis, empirical studies, traceability
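
A sketch of how such a prediction-accuracy comparison might be computed at two levels of granularity, under the assumption that predictions and outcomes are recorded as sets of affected items; all names and sets below are hypothetical.

```python
def accuracy(predicted: set, actual: set):
    recall = len(predicted & actual) / len(actual)   # changes correctly foreseen
    size_ratio = len(predicted) / len(actual)        # < 1 signals underprediction
    return recall, size_ratio

# Coarse granularity: modules. Fine granularity: files within modules.
pred_modules, real_modules = {"ui", "db"}, {"ui", "db", "net"}
pred_files = {"ui/a.c", "db/x.c"}
real_files = {"ui/a.c", "ui/b.c", "db/x.c", "db/y.c", "net/s.c"}

for level, (p, a) in {"modules": (pred_modules, real_modules),
                      "files": (pred_files, real_files)}.items():
    rec, ratio = accuracy(p, a)
    print(f"{level:8s} recall={rec:.2f} predicted/actual={ratio:.2f}")
```

In this invented example, both recall and the predicted/actual ratio drop at the finer granularity, the same qualitative pattern the study reports.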
 
Too many improvement and innovation projects fail. We have studied the characteristics of successful and failed projects. From this study, we derived 20 parameters that influence success and failure. We used the parameters to build the ImprovAbility Model, which is a model that can be used to measure an organization's or a project's ability to succeed with improvement. In this article, we elaborate on selected parameters that have been shown to be important and/or difficult, particularly in low-maturity organizations. Copyright © 2008 John Wiley & Sons, Ltd.
 
This paper describes ten factors that affect organizational change in software process improvement initiatives based on the Capability Maturity Model or the ISO 9000 quality standards. It also assesses the relative importance of these factors and compares the findings with the results of previous research into organizational change in software process improvement. The paper is based on an analysis of published experience reports and case studies of 56 software organizations that have implemented an ISO 9000 quality system or that have conducted a CMM-based process improvement initiative.
 
Software composition relies heavily on the ability to reuse software within the context of a complex target system. When components are built or sourced from third party suppliers, some form of design material is reused that embodies a variety of artifacts that differ in both granularity and abstraction. The more concrete the design material, particularly if it is in the form of a fully realized reusable component, the more likely it has evolved from a distinct supplier development path that will cause interoperability problems in the composite design. Unfortunately, recognizing integration problems occurs late in a typical design process, often disrupting the process of choosing alternative components. We express the design reuse process using a concept map whose nodes represent sources of design material. Guidance for consolidating and analyzing reuse alternatives is shown via design moves between concepts culminating in the choice of components for the target system. We use isolation as the determinant for interoperability assessment. Leveraging a biological analogy, we define distinct integration levels where isolating mechanisms occur to add assessment clarity. This is illustrated with an example showing conflicts between reusable components from different suppliers. Copyright © 2008 John Wiley & Sons, Ltd.
 
Many organizations provide information technology services, either to external or internal customers. They maintain software, operate information systems, manage and maintain workstations, networks or mainframes or provide contingency services. A widely used framework on IT service quality (at least in the Netherlands) is the Information Technology Infrastructure Library (ITIL), which seeks to publish a standard of service quality and best practices. However, it does not provide organizations with the methodology needed to assess and improve their service processes based on assessments. We propose an Information Technology Service Capability Maturity Model (IT Service CMM) that can be used to assess the maturity of IT service processes and identify directions for improvement. This IT Service CMM originates from our efforts to develop a quality improvement framework that was targeted at helping service organizations to improve service quality. Case studies which introduced parts of our...
 
Software process redesign (SPR) is concerned with the development and application of concepts, techniques and tools for dramatically improving or optimizing software processes. This paper introduces how software process modeling, analysis and simulation may be used to support software process redesign. This includes an approach to explicitly modeling process redesign knowledge, analyzing processes for redesign, and simulating processes before, during and after redesign. A discussion follows which identifies a number of topic areas that require further study in order to make SPR a subject of software process research and practice.
 
This paper presents a first attempt to produce a dynamical simulation model for the development process of open source software projects. First, a general framework for such models is introduced. Then, a specific simulation model is described and demonstrated. The model equations are based on available literature case studies whenever possible, or on reasonable assumptions when literature data are not adequate. The model is demonstrated against data obtained from a recent case study by A. Mockus, R. Fielding and J. Herbsleb (‘A case study of open source software development: the Apache server’) on the Apache www server software, so as to reproduce real quantitative results as closely as possible. Computer simulation results based on the calibrated model are then presented and analysed. OSS dynamic simulation models could serve as generic tools for predicting key OSS project factors such as project failure/success as well as time-dependent factors such as the evolution of source code, defect density, number of programmers and distribution of work effort to distinct project modules and tasks. Copyright © 2003 John Wiley & Sons, Ltd.
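
As a rough illustration of what such a dynamical model looks like, the sketch below steps simple difference equations for code size, defect count, and programmer pool over a year. The equations and coefficients are illustrative assumptions in the general spirit of the framework, not the paper's Apache-calibrated model.

```python
# State variables: lines of code, open defects, active programmers.
loc, defects, programmers = 10_000.0, 50.0, 10.0

# Hypothetical rates: contributor attraction, daily productivity per
# programmer (LOC), defects injected per LOC, daily defect-fix fraction.
ATTRACT, PROD, INJECT, FIX = 0.002, 30.0, 0.01, 0.05

for _day in range(365):
    added = PROD * programmers              # new code written per day
    loc += added
    defects += INJECT * added - FIX * defects
    programmers += ATTRACT * programmers    # project slowly attracts people

print(f"after 1 year: {loc:,.0f} LOC, "
      f"defect density {1000 * defects / loc:.2f}/KLOC, "
      f"{programmers:.0f} programmers")
```

Calibration would then mean fitting such rates to observed trajectories (as the paper does against the Apache case-study data) before using the model predictively.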
 
Software engineering focuses on producing quality software products through quality processes. The attention to processes dates back to the early 1970s, when software engineers realized that the desired qualities (such as reliability, efficiency, evolvability, ease of use, etc.) could only be injected into the products by following a disciplined flow of activities. Such a discipline would also make the production process more predictable and economical. Most of the software process work, however, remained in an informal stage until the late 1980s. From then on, the software process was recognized by researchers as a specific subject that deserved special attention and dedicated scientific investigation, the goal being to understand its foundations, develop useful models, identify methods, provide tool support, and help manage its progress. This paper will try to characterize the main approaches to software processes that were followed historically by software engineering, to identify the strengths and weaknesses, the motivations and the misconceptions that led to the continuous evolution of the field. This will lead us to an understanding of where we are now and will be the basis for a discussion of a research agenda for the future.
 
One of the major problems with software development processes is their complexity. Hence, one of the primary motivations in process improvement is the simplification of these complex processes. We report a set of studies to explore various simplification approaches and techniques. We used the available process documentation, questionnaires and interviews, and a set of process visualization tool fragments (pfv) to gain an understanding of the process under examination. We then used three basic analysis techniques to locate candidates for simplification and improvement: value-added analysis, time usage analysis, and alternatives analysis. All three approaches proved effective in isolating problem areas for improvement. The proposed simplifications resulted in savings of 20% in cost, 20% in human effort and 40% in elapsed time, and a 30% reduction in the number of activities.
 
We have conducted an empirical study with 23 Vietnamese software practitioners to determine Software Process Improvement (SPI) demotivators. We have compared the demotivators identified by the Vietnamese practitioners with the demotivators identified by UK practitioners. The main objective of this study is to provide SPI managers with insight into the nature of factors that can hinder the success of an SPI program, so that they can better manage those demotivators and maximize practitioners' support for an SPI program. We used face-to-face questionnaire-based survey sessions for gathering data. We also asked the participants to rank each identified SPI demotivator on a five-point scale (high, medium, low, zero or do not know) to determine the perceived importance of each demotivator. From this, we proposed the notion of ‘perceived value’ associated with each identified demotivator. Our findings identify the ‘high’ and ‘medium’ perceived-value demotivators that can undermine SPI initiatives. The findings also show that there are differences in SPI demotivators across practitioner groups (i.e. developers and managers) and across organisational sizes (i.e. large and small-to-medium). Moreover, our results reveal the similarities and differences between SPI demotivators as perceived by practitioners in Vietnam and the United Kingdom. The findings are expected to provide SPI managers with insight to design and implement suitable strategies to deal with the identified SPI demotivators.
 
The standard ISO/IEC 15504 has been developed for the purpose of performing assessments of software and systems processes. The result is a capability-level rating for each assessed process on a rating scale from 0 to 5. The capability levels defined in the ISO/IEC 15504 measurement framework have been designed to be applicable universally to all types of processes. Aside from level 1 (the performed level), where the observable indicators differ from process to process, all the attributes from levels 2 to 5 should be common to, and observable for, all types of processes. A practical application of this feature is the possibility of performing assessments on processes from other industrial sectors (e.g. manufacturing processes) using the same measurement framework, as long as the process is defined according to specific rules. An industrial experiment in this area was carried out by DNV on some critical processes deployed by a large company in charge of managing the Italian national gas distribution network (SNAM Rete Gas). Specific process definitions were developed with the purpose and outcomes required by the Standard for a Process Reference Model. In order to build the Process Assessment Model, performance indicators were also identified, while capability indicators for levels 2 to 5 were taken straight from the Standard (part 5). The assessment was successfully performed on two different processes, providing the organization with valuable insights into their processes. This article illustrates, step by step, how this industrial experiment was carried out and what its results were.
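
For readers unfamiliar with the measurement framework, the sketch below shows how process-attribute (PA) ratings on the usual N/P/L/F scale aggregate into a 0–5 capability level, following the commonly stated ISO/IEC 15504 rule that a level's own attributes must be at least Largely achieved and all lower-level attributes Fully achieved. The example ratings are invented; this is a simplified reading of the standard, not an authoritative implementation.

```python
# Process attributes grouped by capability level, per the standard's numbering.
PAS_BY_LEVEL = {
    1: ["PA1.1"], 2: ["PA2.1", "PA2.2"], 3: ["PA3.1", "PA3.2"],
    4: ["PA4.1", "PA4.2"], 5: ["PA5.1", "PA5.2"],
}

def capability_level(ratings):
    """Ratings map PA id -> 'N', 'P', 'L' or 'F'; missing PAs count as 'N'."""
    level = 0
    for lvl in sorted(PAS_BY_LEVEL):
        own = [ratings.get(pa, "N") for pa in PAS_BY_LEVEL[lvl]]
        if not all(r in ("L", "F") for r in own):
            break                 # this level's attributes not largely achieved
        level = lvl
        if not all(r == "F" for r in own):
            break                 # must be Fully achieved to count higher levels
    return level

ratings = {"PA1.1": "F", "PA2.1": "F", "PA2.2": "L", "PA3.1": "L"}
print(capability_level(ratings))  # 2: level-2 PAs at least L, and PA1.1 is F
```

The point of the experiment described above is that this aggregation logic is process-agnostic, which is what makes it transferable to non-software processes once a suitable Process Reference Model is defined.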
 
The emerging international standard ISO/IEC 15504 on ‘Information Technology – Software Process Assessment’ aims to harmonize the various assessment approaches used in software process improvement (SPI). While approaches based on the organization-focused ‘staged model’ (SW-CMM etc.) provide a ‘roadmap’ generally true for most organizations, the process-focused ‘continuous model’ of ISO/IEC 15504 does not prescribe any particular improvement path. SPI projects using the ISO/IEC 15504 approach are thus considered to face increased complexity in improvement planning. Targeting this problem, we present the basic ideas behind a generalized system dynamics model of ‘a set of improving software processes’, currently under development, to support SPI action planning at a tactical level. The basic intention is to determine the impact of a set of scheduled improvement actions on the strategic target variables of SPI (time-to-market, cost, quality). The approach integrates the two main ‘mental models’ behind ISO/IEC 15504: the process model, described textually as a network of processes and work products, and the model of maturing single processes. The development of the model is oriented toward organizations with lower capability and medium-sized development projects. The results of preceding software process assessments are used as a major source for model initialization. How simulation can support software process improvement is discussed, and insights from sample applications are presented. Copyright © 2000 John Wiley & Sons Ltd
 
A software process assessment based on the emerging international standard ISO/IEC 15504 (Software Process Assessment) can be considered a subjective measurement procedure, since assessors assign ratings to measures (i.e. PAs: process attributes) to measure the capability of processes. Since measurement always includes some amount of random measurement error, evaluating the reliability of empirical measurement in ISO/IEC 15504–based process assessment is crucial to give confidence in the assessment results. This study estimated the internal consistency (reliability) of process attributes by utilizing Cronbach's alpha. For this purpose, we analyzed 364 process instances from 29 assessments performed on the basis of ISO/IEC 15504 in Korea between February 1999 and December 2001. Our results show that Cronbach's alpha has a value of 0.88, which is sufficiently high for use in practice. This is the same value as obtained in the data from the Phase 2 SPICE Trials (between September 1996 and June 1998), where SPICE denotes Software Process Improvement and Capability dEtermination. In addition, this study explored whether changing the current four-category PA rating scale would increase the internal consistency. Our findings indicate that the current four-category rating scale gives higher internal consistency of process capability measures than a two- or a three-category rating scale. Copyright © 2004 John Wiley & Sons, Ltd.
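
As an illustration of the statistic, here is a minimal sketch of Cronbach's alpha computed over a matrix of process-attribute ratings, with the N/P/L/F scale coded numerically (N=0, P=1, L=2, F=3). The five process instances below are invented; the study's value of 0.88 came from 364 real instances.

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                                   # number of items (PAs)
    item_vars = [pvariance(col) for col in zip(*rows)]
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical PA ratings (9 attributes) for five process instances.
instances = [
    [3, 3, 2, 2, 1, 1, 0, 0, 0],
    [3, 2, 2, 2, 2, 1, 1, 0, 0],
    [3, 3, 3, 2, 2, 2, 1, 1, 0],
    [2, 2, 1, 1, 1, 0, 0, 0, 0],
    [3, 3, 3, 3, 2, 2, 2, 1, 1],
]
print(f"alpha = {cronbach_alpha(instances):.2f}")
```

High alpha means the attribute ratings move together across instances, i.e. they behave as a consistent measurement instrument, which is the sense in which the study reports 0.88 as reliable.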
 
This paper describes the evolution of the structure and representation of Capability Maturity Models℠ and various components of the ISO/IEC 15504 (PDTR) product set, formerly known as ‘SPICE’ (Software Process Improvement and Capability dEtermination). ‘15504’ will be used as shorthand for the product set encompassed by the 15504 project. The paper focuses on the historical, structural, and conceptual evolution of the two product types. © 1997 John Wiley & Sons Ltd
 
Top-cited authors
Simon Helsen
Krzysztof Czarnecki
  • University of Waterloo
Mahmood Niazi
  • King Fahd University of Petroleum and Minerals
Didar Zowghi
  • University of Technology Sydney
David Wilson
  • University of Technology Sydney