The main objective of this paper is to put forward a software
process model for high-performance systems (HPS), and to present a
formal framework to describe software design methodologies (SDMs) for
those systems. The framework consists of two main parts: the software
process activities which characterise the development of HPS, and the
components of the SDM (concepts, artifacts, representation and actions)
which are essential for any methodology. The framework relates these two
parts by identifying generic components of each activity in the software
process that can be used to classify and evaluate SDMs for HPS. The
framework has been formally specified using the language Z and used to
derive formal specifications of SDMs. This is illustrated in the paper
by presenting part of the specification of ODM (an Occam design method).
Software component reuse is the key to significant gains in productivity.
However, a major problem is the difficulty of identifying and developing
potentially reusable components. This paper concentrates on our approach to the
development of reusable software components. A prototype tool has been
developed, known as the Reuse Assessor and Improver System (RAIS) which can
interactively identify, analyse, assess, and modify abstractions, attributes
and architectures that support reuse. Practical and objective reuse guidelines
are used to represent reuse knowledge and to perform domain analysis. It takes
existing components, provides systematic reuse assessment which is based on
reuse advice and analysis, and produces components that are improved for reuse.
Our work on guidelines has been extended to a large-scale industrial ...
Workflows emphasize the partial order of activities and the flow of data between activities. In contrast, cooperative processes
emphasize the sharing of artefacts and their gradual evolution toward the final product, under the cooperative and concurrent
activities of all the involved actors.
This paper contrasts workflow and cooperative processes and shows that they are more complementary than conflicting and that,
given some extensions, both approaches can fit into a single tool and formalism.
The paper presents Celine, a concurrent engineering tool that also allows classic workflows and software processes to be
defined and supported. We claim that the availability of both classes of features allows for the modelling and support of very
flexible processes, closer to software engineering reality.
During phase two of the SPICE trials, the Proposed Draft Technical Report (PDTR) version of ISO/IEC 15504 is being empirically evaluated. This document set is intended to become an international standard for Software Process Assessment. One thread of evaluation being conducted during these trials concerns the reliability of assessments based on ISO/IEC PDTR 15504. In this paper we present the first evaluation of the reliability of assessments based on the PDTR version of the emerging international standard. In particular, we evaluate the interrater agreement of assessments. Our results indicate that interrater agreement is considerably high, both for individual ratings at the capability attribute level and for the aggregated capability levels. In general, these results are consistent with those obtained using the previous version of the Software Process Assessment document set (known as SPICE version 1.0), where capability ratings were also found to have generally high interrater agreem...
Meta-level architectures combined with domain-specific languages serve as a powerful tool to build and maintain a software product line: meta-level architectures lead to adaptable software systems, and executable descriptions capture expert knowledge.
We have developed a meta-level architecture for a software product line of legal expert systems. Four meta-level mechanisms support both variability and evolution of the product line. Domain analysis had shown that separation of expert knowledge from technical code was essential. Descriptions written in domain-specific languages reside in the meta level, and serve as specification, code, and documentation. Technical code finds its place in interpreting machines in the base level.
We discuss how meta-level architectures influence the qualities of software product lines and how properties and patterns of the problem space can guide the design of domain-specific languages.
Abstraction is the key concept. End-users need to be isolated as much as possible from problems arising with the addition of significantly more users. Workflow systems can address this by hierarchically abstracting the workflow and its representations. In order to maintain consistent and coherent project data across large or increasing numbers of people, details that may embody large, multi-person workflows themselves can be encapsulated.
We present an agent-based simulation model developed to study how size, complexity and effort relate to each other in the development of open source software (OSS). In the model, many developer agents generate, extend, and re-factor code modules independently and in parallel. This accords with empirical observations of OSS development. To our knowledge, this is the first model of OSS evolution that includes the complexity of software modules as a limiting factor in productivity, the fitness of the software to its requirements, and the motivation of developers.
Validation of the model was done by comparing the simulated results against four measures of software evolution (system size, proportion of highly complex modules, level of complexity control work, and distribution of changes) for four large OSS systems. The simulated results resembled the observed data, except for system size: three of the OSS systems showed alternating patterns of super-linear and sub-linear growth, while the simulations produced only super-linear growth. However, the fidelity of the model for the other measures suggests that developer motivation and the limiting effect of complexity on productivity have a significant effect on the development of OSS systems and should be considered in any model of OSS development.
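The core mechanism described (developer agents extending and refactoring modules, with module complexity limiting productivity) can be sketched as follows. All names, probabilities, and thresholds here are invented for illustration; they are not the paper's calibrated model.

```python
import random

random.seed(1)

# Each module is represented only by its complexity; parameters invented.
modules = [1.0]           # complexity of each code module
N_DEVELOPERS = 20
REFACTOR_THRESHOLD = 5.0  # developers refactor modules above this complexity

def step():
    """One simulated work period: every developer acts once, in parallel."""
    for _ in range(N_DEVELOPERS):
        m = random.randrange(len(modules))
        # Productivity falls as complexity rises (the limiting factor).
        productivity = 1.0 / modules[m]
        if modules[m] > REFACTOR_THRESHOLD and random.random() < 0.5:
            modules[m] *= 0.6                       # refactor: cut complexity
        elif random.random() < productivity:
            modules[m] += random.uniform(0.1, 0.5)  # extend existing module
            if random.random() < 0.2:
                modules.append(1.0)                 # occasionally add a module

sizes = []
for t in range(200):
    step()
    sizes.append(len(modules))

print("final system size:", sizes[-1])
print("highly complex modules:", sum(c > REFACTOR_THRESHOLD for c in modules))
```

Even this toy version exhibits the qualitative behaviour the paper discusses: unchecked extension drives complexity up and productivity down, while refactoring work trades short-term growth for renewed productivity.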
One of the enduring issues being evaluated during the SPICE trials is the reliability of assessments. One type of reliability is the extent to which different assessors produce similar ratings when assessing the same organization and presented with the same evidence. In this paper we report on a study that was conducted to start answering this question. Data was collected from an assessment of 21 process instances covering 15 processes. In each of these assessments two independent assessors performed the ratings. We found that six of the fifteen processes do not meet our minimal benchmark for interrater agreement. Three of these were due to systematic biases by either an internal or external assessor. Furthermore, for eight processes specific rating scale adjustments were identified that could improve their reliability. The findings reported in this paper provide guidance for assessors using the SPICE framework.
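Interrater agreement of the kind studied here is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, with hypothetical attribute ratings on the N/P/L/F scale (not data from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two raters were statistically independent.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of the same process instances by two assessors.
assessor1 = ["F", "F", "L", "L", "P", "F", "L", "N"]
assessor2 = ["F", "L", "L", "L", "P", "F", "F", "N"]
print(round(cohens_kappa(assessor1, assessor2), 3))  # -> 0.636
```

A kappa of 1.0 means perfect agreement and 0 means chance-level agreement; a study like this one would fix a minimal benchmark (e.g. a kappa threshold) below which a process's ratings are flagged as unreliable.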
In trying to understand the architecture of the processes governing the development of a large software product, we used various techniques for describing, analyzing and visualizing that process system. A "big picture" visualization of the software development processes yielded a number of cogent observations: I/O mismatches, large fan ins/outs, no clear path through the project, inconsistency in the level of detail, no clear architectural organizational structure. We report the results of a quality improvement team (QIT) put together 1) to determine how the process architecture got to this state, 2) to delineate the base measures by which we plan to measure architectural improvement, 3) to establish surface and root causes for the current state of the architecture and define their interrelationships, and 4) to derive primary and secondary process architecture drivers and to establish counter measures that will yield a more coherent and appropriate process architecture. As a r...
Many software process assessment models have been developed based on, or inspired by, the Software Engineering Institute's Capability Maturity Model (CMM). The outputs arising from such models can be voluminous and complex, requiring tool support to comprehend and analyse them. The paper describes some tools developed in order to visualise the results of SPICE conformant assessments. The tools, which are equally useful to software producers and to software procurers, also have a valuable research role and have been used to summarise the results of the worldwide trials of the SPICE reference model. This work has itself led to the enhancement of the tools based partly on experience of their use and partly on revisions to the SPICE model. Keywords: Process assessment, process improvement, tool support
In 1987, the SEI released a software process maturity framework and maturity questionnaire to support organizations in improving their software process. Four years later, the SEI released the Capability Maturity Model for Software (SW-CMM). The SW-CMM has influenced software process improvement worldwide to a significant degree. More recently, the SEI has become involved in developing additional capability maturity models that impact software. This paper discusses the problems these CMMs are trying to address, our goals in developing these CMMs, the objectives and status of each of these models, and our current plans for the 1996–1997 time frame. We then briefly turn to topics that address the usability of the SW-CMM in certain situations: in small organizations and in challenging application domains. We then describe the SEI's involvement in an international standards effort to create a standard for software process assessment. Finally, to gain perspective on how the CMMs might impact the community in the future, we look at the growing use of the SW-CMM and some benefits associated with its use.
Self-customizable systems must adapt themselves to evolving user requirements or to their changing environment. One way to address this problem is through automatic component composition, systematically (re-)building systems according to the current requirements by composing reusable components. Our work addresses requirements-driven composition of multi-flow architectures.
This article presents the central element of our automated runtime customization approach, the concept of composable components: the internal configuration of a composable component is not fixed, but is variable in the limits of its structural constraints. In this article, we introduce the mechanism of structural constraints as a way of managing the variability of customizable systems. Composition is performed in a top–down stepwise refinement manner, while recursively composing the internal structures of the composable components according to external requirements over the invariant structural constraints.
The final section of the article presents our cases of practical validation.
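The idea of a component whose internal configuration is variable within the limits of a structural constraint, filled in by top-down stepwise refinement against external requirements, can be sketched as follows. All names here (`Composable`, the repository, the constraint predicate) are illustrative and are not taken from the article.

```python
# Leaf components available for composition: name -> capabilities provided.
REPOSITORY = {
    "Encoder":   {"encode"},
    "Encryptor": {"encrypt"},
    "Logger":    {"log"},
}

class Composable:
    """A component whose internal configuration is not fixed: its children
    may vary, within the limits of a structural constraint."""
    def __init__(self, name, constraint, max_children=3):
        self.name = name
        self.constraint = constraint   # predicate over the child list
        self.max_children = max_children
        self.children = []

    def compose(self, requirements):
        """Top-down stepwise refinement: add children until every external
        requirement is covered, rejecting any configuration that violates
        the structural constraint."""
        for req in requirements:
            candidates = [n for n, caps in REPOSITORY.items() if req in caps]
            if not candidates:
                raise ValueError(f"no component satisfies {req!r}")
            trial = self.children + [candidates[0]]
            if len(trial) > self.max_children or not self.constraint(trial):
                raise ValueError(f"structural constraint violated at {req!r}")
            self.children = trial
        return self

# Structural constraint: at most one Logger inside this component.
pipeline = Composable("Pipeline", lambda cs: cs.count("Logger") <= 1)
pipeline.compose(["encode", "encrypt", "log"])
print(pipeline.children)   # -> ['Encoder', 'Encryptor', 'Logger']
```

In the article's full approach this refinement recurses: a chosen child may itself be a composable component whose internal structure is composed against the requirements it inherits, something the flat sketch above omits.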
Nowadays, distributed development is common in software engineering. Despite its many advantages, research in the last decade has consistently found that distribution has a negative impact on collaboration in general, and increases communication delay and task completion time in particular. Adapted processes, practices, and tools are needed to overcome these challenges.
We report on an empirical study of communication structures and delay in IBM's distributed development project Jazz. The Jazz project explicitly focuses on distributed collaboration and has adapted processes and tools to overcome known challenges. We explore the effect of distance on communication and task completion time and use social network analysis to obtain insights about the collaboration in the Jazz project. We discuss our findings in the light of existing literature on distributed collaboration and delays.
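Social network analysis of this kind typically starts from a who-communicated-with-whom graph built from project artifacts such as work-item comments. A minimal sketch, with invented data and site assignments (not the Jazz study's dataset):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical comment records: (work_item, developer, site).
comments = [
    (1, "ana", "EU"), (1, "bob", "US"), (1, "ana", "EU"),
    (2, "bob", "US"), (2, "cho", "US"),
    (3, "ana", "EU"), (3, "cho", "US"), (3, "dia", "EU"),
]

site = {dev: s for _, dev, s in comments}
participants = defaultdict(set)
for item, dev, _ in comments:
    participants[item].add(dev)

# Two developers are linked if they commented on the same work item.
edges = set()
for devs in participants.values():
    for a, b in combinations(sorted(devs), 2):
        edges.add((a, b))

degree = defaultdict(int)   # degree centrality: number of distinct partners
cross_site = 0              # ties that span sites (where delay is expected)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
    cross_site += site[a] != site[b]

print("degree centrality:", dict(degree))
print(f"cross-site ties: {cross_site}/{len(edges)}")
```

From such a graph one can compare the prevalence of cross-site versus same-site ties, and relate a developer's centrality to the completion time of the tasks they participate in.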
This paper addresses the problem of reorganizing inspection, i.e. how to build an inspection process which promotes the understanding of artifacts while optimizing the time of face-to-face meetings and still guaranteeing consistent information between designers and inspectors. Instead of face-to-face meetings, more emphasis is given here to individual and public inspections. Insufficient understanding and communication are reduced by justifying design decisions by means of design and inspection rationales, and by means of collaborative tool environments. The perceived benefits of our approach are recognized in terms of improved understandability, improved communication between designers and inspectors, easier schedules and optimized inspection time.
Created in 1993, the European Software Institute (ESI) is a foundation launched by the initiative of the European Commission and with the support of leading European companies and the Basque government. The primary objective of ESI is to contribute to the competitiveness of European industry through the promotion and continuous improvement of knowledge in information and communication technologies. To this end, ESI identifies, validates, packages, and disseminates good software development and management practices.
In this paper we monitor and measure the capability of experienced software developers to predict software change caused by new requirements to an existing software system (i.e. impact analysis) at different levels of granularity. The study shows a general underprediction, and that the accuracy of the prediction deteriorated with increasing level of detail. The results were non-intuitive for the practitioners participating in the study and are therefore important to consider when revising current processes and cost models based on such estimates. The study is a good example of how the software industry can cooperate with university researchers in monitoring, measuring and evaluating current practices. The results will be used by the organisation as a baseline for further studies of subsequent releases of the product. Keywords: Process monitoring, change management, impact analysis, empirical studies, traceability
This paper describes ten factors that affect organizational change in software process improvement initiatives based on the Capability Maturity Model or the ISO 9000 quality standards. It also assesses the relative importance of these factors and compares the findings with the results of previous research into organizational change in software process improvement. The paper is based on an analysis of published experience reports and case studies of 56 software organizations that have implemented an ISO 9000 quality system or that have conducted a CMM-based process improvement initiative.
Many organizations provide information technology services, either to external or internal customers. They maintain software, operate information systems, manage and maintain workstations, networks or mainframes or provide contingency services. A widely used framework on IT service quality (at least in the Netherlands) is the Information Technology Infrastructure Library (ITIL), which seeks to publish a standard of service quality and best practices. However, it does not provide organizations with the methodology needed to assess and improve their service processes based on assessments. We propose an Information Technology Service Capability Maturity Model (IT Service CMM) that can be used to assess the maturity of IT service processes and identify directions for improvement. This IT Service CMM originates from our efforts to develop a quality improvement framework that was targeted at helping service organizations to improve service quality. Case studies which introduced parts of our...
Software process redesign (SPR) is concerned with the development and application of concepts, techniques and tools for dramatically improving or optimizing software processes. This paper describes how software process modeling, analysis and simulation may be used to support software process redesign. This includes an approach to explicitly modeling process redesign knowledge, analyzing processes for redesign, and simulating processes before, during and after redesign. A discussion follows which identifies a number of topic areas that require further study in order to make SPR a subject of software process research and practice.
Software engineering focuses on producing quality software products through quality processes. The attention to processes dates back to the early 1970s, when software engineers realized that the desired qualities (such as reliability, efficiency, evolvability, ease of use, etc.) could only be injected into the products by following a disciplined flow of activities. Such a discipline would also make the production process more predictable and economical. Most of the software process work, however, remained in an informal stage until the late 1980s. From then on, the software process was recognized by researchers as a specific subject that deserved special attention and dedicated scientific investigation, the goal being to understand its foundations, develop useful models, identify methods, provide tool support, and help manage its progress. This paper will try to characterize the main approaches to software processes that were followed historically by software engineering, to identify the strengths and weaknesses, the motivations and the misconceptions that led to the continuous evolution of the field. This will lead us to an understanding of where we are now and will be the basis for a discussion of a research agenda for the future.
One of the major problems with software development processes is their complexity. Hence, one of the primary motivations in process improvement is the simplification of these complex processes. We report a set of studies to explore various simplification approaches and techniques. We used the available process documentation, questionnaires and interviews, and a set of process visualization tool fragments (pfv) to gain an understanding of the process under examination. We then used three basic analysis techniques to locate candidates for simplification and improvement: value added analysis, time usage analysis, and alternatives analysis. All three approaches proved effective in isolating problem areas for improvement. The proposed simplifications resulted in savings of 20% in cost, 20% in human effort, 40% in elapsed time, and a 30% reduction in the number of activities.
We have conducted an empirical study with 23 Vietnamese software practitioners to determine Software Process Improvement (SPI) demotivators. We have compared the demotivators identified by the Vietnamese practitioners with those identified by UK practitioners. The main objective of this study is to provide SPI managers with insight into the nature of factors that can hinder the success of an SPI program, so that they can better manage those demotivators and maximize practitioners' support for an SPI program.
We used face-to-face questionnaire-based survey sessions for gathering data. We also asked the participants to rank each identified SPI demotivator on a five-point scale (high, medium, low, zero or do not know) to determine the perceived importance of each demotivator. From this, we proposed the notion of ‘perceived value’ associated with each identified demotivator.
Our findings identify the ‘high’ and ‘medium’ perceived value demotivators that can undermine SPI initiatives. The findings also show that there are differences in SPI demotivators across practitioner groups (i.e., developers and managers) and across organisational sizes (i.e., large and small-to-medium). Moreover, our results reveal the similarities and differences between SPI demotivators as perceived by practitioners in Vietnam and the United Kingdom. The findings are expected to provide SPI managers with insight to design and implement suitable strategies to deal with the identified SPI demotivators.
The standard ISO/IEC 15504 has been developed for the purpose of performing assessments of software and systems processes. The result is a capability-level rating for each assessed process on a rating scale from 0 to 5. The capability levels defined in the ISO/IEC 15504 measurement framework have been designed to be applicable universally to all types of processes. Aside from level 1 (the performed level), where the observable indicators differ from process to process, all the attributes from level 2 to 5 should be common and observable for all types of processes. A practical application of this feature is the possibility to perform assessments on processes from other industrial sectors (e.g. manufacturing processes), using the same measurement framework, as long as we have the process defined according to specific rules.
An industrial experiment in this area was carried out by DNV on some critical processes deployed by a large company in charge of managing the Italian national gas distribution network (SNAM Rete Gas).
Specific process definitions were developed with the purpose and outcome as required by the Standard for a Process Reference Model. In order to build the Process Assessment Model, performance indicators were also identified while capability indicators from levels 2 to 5 were taken straight from the Standard (part 5).
The assessment was successfully performed on two different processes providing the organization with valuable insights on their process.
This article illustrates, step by step, how this industrial experiment was carried out and what its results were.
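In the ISO/IEC 15504 measurement framework, each process attribute is rated on the N/P/L/F scale (Not, Partially, Largely, Fully achieved), and a process attains a capability level when that level's attributes are at least Largely achieved and all lower-level attributes are Fully achieved. A sketch of that derivation rule, with illustrative ratings:

```python
ORDER = {"N": 0, "P": 1, "L": 2, "F": 3}

def capability_level(ratings):
    """ratings: one list of N/P/L/F attribute ratings per capability level,
    index 0 = level 1. Returns the capability level achieved (0-len(ratings))."""
    achieved = 0
    for i, attrs in enumerate(ratings):
        at_least_largely = all(ORDER[r] >= ORDER["L"] for r in attrs)
        fully = all(r == "F" for r in attrs)
        if not at_least_largely:
            break
        achieved = i + 1
        if not fully:
            break  # this level counts, but no higher level can be reached
    return achieved

# Level 1 (PA 1.1) Fully; level 2 (PA 2.1, 2.2) Fully/Largely;
# level 3 (PA 3.1, 3.2) Largely/Partially:
print(capability_level([["F"], ["F", "L"], ["L", "P"]]))  # -> 2
```

Because the level 2 to 5 attributes are defined independently of the process content, exactly this rule can be reused for a non-software process (such as the gas-network processes assessed here) once the process itself is defined with a proper purpose and outcomes.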