
Iterative rework: The good, the bad, and the ugly

Authors:
  • Richard E. Fairley, Oregon Health and Science University
  • Mary Jane Willshire, Colorado Technical University

Abstract

Iterative development can take many forms, depending on the project's goals: iterative prototyping can help evolve a user interface. Agile development is a way to closely involve a prototypical customer in a process that might repeat daily. Incremental build lets developers produce weekly builds of an evolving product. A spiral model can help the team confront and mitigate risk in an evolving product. Each iteration involves a certain amount of rework to enhance and fix existing capabilities (the good). However, excessive rework could indicate problems in the requirements, the developers' skills and motivation, the development processes or technology used, or all of the above (the bad). Exorbitant levels of rework result in truly untenable situations (the ugly). On the other hand, too little rework could indicate insufficient review and testing or too little anticipation of the product features needed to support the next version (bad that can turn ugly). Understanding and correcting the root causes of problems that result from too much or too little rework can significantly increase productivity, quality, developer morale, and customer satisfaction.
0018-9162/05/$20.00 © 2005 IEEE
COVER FEATURE
Published by the IEEE Computer Society
Iterative Rework:
The Good, the Bad,
and the Ugly

In software development, some rework is both inevitable and beneficial. But how much is too much and how little is too little? Understanding the nature of rework and why it occurs can help answer these questions.
Because the creative processes in developing
and modifying software are subject to
myriad external and changeable forces, it
is impossible to get all but the simplest
products right in one pass. Long experi-
ence has shown the advantages of iterative devel-
opment, in which each iteration subsumes the pre-
vious iteration’s software and adds capabilities to
the evolving product to produce a next version. As
developers build and validate the next version’s
capabilities, they rework the previous version to
enhance its capabilities and fix defects discovered
while integrating the new and old versions.
Iterative development can take many forms,
depending on the project’s goals: Iterative proto-
typing can help evolve a user interface. Agile devel-
opment is a way to closely involve a prototypical
customer in a process that might repeat daily.
Incremental build lets developers produce weekly
builds of an evolving product. A spiral model can
help the team confront and mitigate risk in an evolv-
ing product.
Each iteration involves a certain amount of
rework to enhance and fix existing capabilities (the
good). However, excessive rework could indicate
problems in the requirements, the developers’ skills
and motivation, the development processes or tech-
nology used, or all of the above (the bad). Exorbitant
levels of rework result in truly untenable situations
(the ugly).
On the other hand, too little rework could indi-
cate insufficient review and testing or too little
anticipation of the product features needed to sup-
port the next version (bad that can turn ugly).
Understanding and correcting the root causes of
problems that result from too much or too little
rework can significantly increase productivity,
quality, developer morale, and customer satisfac-
tion.
ITERATIVE DEVELOPMENT MODELS
All forms of iterative development provide a way to

  • continuously integrate and validate the evolving product,
  • frequently demonstrate progress,
  • alert developers early on about problems,
  • deliver subset capabilities early on, and
  • systematically incorporate the inevitable rework that occurs in software development.
Agile and incremental-build are two commonly
used iterative development models. In both mod-
els, significant changes to requirements, design con-
straints, or environmental factors such as changes
to middleware application programming interfaces
(APIs) or hardware features can require significant
rework of the design and existing code.
Agile development
Although the agile theme has several variations,
most agile process models adhere to five principles
(http://agilemanifesto.org/principles.html):
  • Continuously involve a prototypical customer.
  • Develop test cases and test scenarios before implementing the next version.
  • Implement and test the next version.
  • Demonstrate each version to the customer and elicit the next requirements.
  • Periodically deliver evolving subsets of the product into the operational environment.
The customer’s role is to review progress and pro-
vide the story line that determines the requirements
for new capabilities and revisions to demonstrated
capabilities (www.martinfowler.com/articles/newMethodology.html).
Figure 1 depicts an iterative model for agile devel-
opment. Some agile process models require devel-
opers to produce a next version of a running system
within one workday. Some use pair programming,
in which pairs of developers share a computer
terminal to develop their software.
Experience with agile models indicates that the
resulting products rate high in customer satisfac-
tion and have low defect levels. Customer satisfac-
tion, however, depends critically on involving a
knowledgeable prototypical customer. Some crit-
ics maintain that products from an agile process
may have functional structures, which are hard to
modify, and no design documentation. Others note
the approach’s lack of scalability.
Incremental build
In contrast to agile models, in which require-
ments and architecture evolve, the incremental-
build model is based on stable requirements and an
architectural design. As Figure 2a shows, the model
partitions requirements and architecture into a pri-
oritized sequence of builds. Each build adds capa-
bilities to the incrementally growing product.
Developers typically produce a next version of a
demonstrable system each week. Each version inte-
grates, tests, and demonstrates the progress that all
developers have made.
The incremental-build process works well when
each team consists of two to six developers plus a
team leader. Team members can work as individu-
als or perhaps in pairs using an agile process. Each
individual or pair will typically produce several
unofficial builds during each development cycle
using a copy of the current official version as a test-
bed.
Figure 1. Steps in the agile development process: hear the customer story, specify requirements, write test scenarios, add new features (review, test, and rework), and demonstrate capabilities to the customer. The process from customer story to demonstrated capabilities is iterative in nature, with iterations typically occurring in cycles of one day or less. Developers may periodically deliver versions to users.

Figure 2. An incremental-build model: (a) partitioning of the requirements specification and architectural design into a prioritized sequence of builds (feature sets 1 through N) and (b) build-validate-demonstrate iterations, with incremental verification, validation, and rework of each build, culminating in demonstration and delivery of feature sets FS1 through FSN to the customer.

The incremental-build model scales well to large projects. Developers partition the architecture into well-defined subsystems and allocate requirements and interfaces to each. The team can independently test and demonstrate subsystems, perhaps using
stubs and drivers as interfaces, or perhaps using
early, incremental versions of other evolving sub-
systems. System integration can proceed incre-
mentally as intermediate versions of the various
subsystems become operational.
As Figure 2b shows, successive incremental
builds can overlap. Developers could, for example,
start the next version’s detailed design while vali-
dating the present version. In contrast to the agile
approach, the customer is not in the loop, except
perhaps to observe some of the weekly demon-
strations.
TYPES OF REWORK
Both software development and maintenance
involve new work and rework. New work is con-
cerned with adding capabilities to the previous ver-
sion, while rework involves modifying a previous
version.
The basic premise of iterative development is that
rework is inevitable because events that occur dur-
ing software development make once-through
development impossible in all but the simplest
cases. Acknowledging and accommodating this
premise avoids the well-known pitfalls of massive
integration, testing, and rework that proponents
of the waterfall model—a once-through, sequen-
tial process—encounter. Iterative processes, such
as agile development and incremental build, pro-
vide the ability to gracefully modify the current ver-
sion of an evolving product while adding
capabilities to produce the next version.
As Figure 3 shows, four dimensions character-
ize software: functionality, structure, behavior, and
quality attributes. Software exists in an environ-
mental context from which it receives stimuli and
to which it delivers responses.
The software’s computational features provide
functionality. Structure is both static—programs
and data as designed and coded—and dynamic—
the interconnections among software entities that
come and go at runtime. The behavioral dimension
reflects how a system’s state changes over time in
response to computations and external events.
Quality attributes include performance, reliability,
safety, security, dependability, ease of modification,
and reusability.
Rework results in changes to one or more of
these four dimensions. A well-known form of
rework is refactoring, in which developers perform
semantics-preserving structural transformations,
usually in small steps.1 The “What Is Refactoring?”
sidebar gives an example of this practice. Software
developers must take care to ensure that structural
transformations do not alter functionality or
behavior or degrade necessary quality attributes.
What Is Refactoring?
Refactoring improves the structure of software so that developers can
more easily understand, modify, evolve, document, and test it. It can
also improve the quality attributes of software components and sub-
systems or enhance their potential for reuse.
The goal is to make it easier to incorporate reusable elements, add
new elements, or accommodate future changes. Developers perform
refactoring in small steps. At each step, they perform tests to ensure that
structural transformations do not alter functionality or behavior or
degrade necessary quality attributes.
Figure A illustrates refactoring in object-oriented development, which can involve

  • moving an attribute or a method from one class to another;
  • consolidating common attributes or methods in two different classes into a parent class;
  • splitting a class into two classes;
  • adding an adapter to allow two components with incompatible interfaces to work together; or
  • modifying the relationships among classifiers (for example, changing an inheritance relationship into a composition relationship).
Figure A. Refactoring to avoid inheritance of unneeded attributes and operations in Class B (Attrs2 and Ops2( )). By placing shared capabilities (Attrs1 and Ops1( )) in an abstract parent class, developers can avoid inheriting unneeded attributes and operations in subclasses. In the diagram, Class B originally inherits from Class A (Attrs A1, Attrs A2, Ops A1( ), Ops A2( )); after refactoring, Class A and Class B both inherit from an abstract parent Class AB that holds the shared members (Attrs A1, Ops A1( )).

Figure 3. Four dimensions of software: functionality, structure, behavior, and quality attributes. All four dimensions exist within the software's environmental context.
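As a concrete illustration of the parent-class extraction that Figure A depicts, the sketch below (a minimal Python example with hypothetical class and member names) moves the members shared by two classes into an abstract parent so that subclasses no longer inherit attributes and operations they do not need; the assertion at the end checks that the observable behavior is preserved, in the spirit of semantics-preserving refactoring.

from abc import ABC


# Before refactoring: ClassB inherits from ClassA and therefore drags in
# attrs_a2 and ops_a2, which it never uses.
class ClassA:
    def __init__(self):
        self.attrs_a1 = "shared state"
        self.attrs_a2 = "state only ClassA needs"

    def ops_a1(self):
        return "shared behavior"

    def ops_a2(self):
        return "behavior only ClassA needs"


class ClassB(ClassA):  # inherits attrs_a2 and ops_a2 unnecessarily
    def ops_b(self):
        return self.ops_a1() + ", used by ClassB"


# After refactoring: the shared members move to an abstract parent ClassAB;
# the refactored ClassA keeps its specific members, and the refactored ClassB
# inherits only what it needs.
class ClassAB(ABC):
    def __init__(self):
        self.attrs_a1 = "shared state"

    def ops_a1(self):
        return "shared behavior"


class RefactoredClassA(ClassAB):
    def __init__(self):
        super().__init__()
        self.attrs_a2 = "state only ClassA needs"

    def ops_a2(self):
        return "behavior only ClassA needs"


class RefactoredClassB(ClassAB):
    def ops_b(self):
        return self.ops_a1() + ", used by ClassB"


# Externally visible behavior is unchanged: the refactoring is semantics-preserving.
assert ClassB().ops_b() == RefactoredClassB().ops_b()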
As Table 1 shows, rework can be either evolutionary or avoidable,2 and avoidable rework can be retrospective or corrective.
Evolutionary rework
Evolutionary rework adds value to an evolving
product by modifying one or more of the current
version’s four dimensions (structure, functionality,
behavior, and quality) to provide new capabilities
in the next version. Evolutionary rework typically
occurs in response to changes in user requirements,
design constraints, environmental factors, or other
conditions that the developers were not aware of
or could not have foreseen when developing the
software’s previous version. Evolutionary rework is
unavoidable because the developers could not have
known about or foreseen the changes that necessi-
tate it.
Avoidable rework
In principle, avoidable rework is work that no
one would have to do had the previous work been
correct, complete, and consistent. That means the
previous work satisfies its requirements, is fit for
its intended use, and does not contain defects
(unlike hardware, software does not break from
repetitive use; all defects in software are traceable
to human error).
In practice, some amount of avoidable rework is
inevitable—even desirable—because insufficient
rework could mean that the software developers are
not doing enough refactoring, reviewing, or testing.
An excessive amount of avoidable rework, however,
reduces productivity, increases costs, delays sched-
ules, and demoralizes the development team. It also
erodes customer satisfaction because customers tend
to doubt the quality of software that has high lev-
els of rework, thinking it might mean the delivered
software will have too many defects.
Because a certain percentage of defects will
escape the development process, high defect levels
during development indicate the likelihood of high
levels of customer-discovered defects (one of soft-
ware engineering’s many counterintuitive results).
Retrospective rework. In retrospective rework,
developers modify the previous version’s function-
ality, structure, behavior, or quality attributes, or
some combination of these because the developers
of that version (likely themselves) failed to imple-
ment it in a manner that provided foreseeable capa-
bilities needed in the next version. In contrast to
evolutionary rework, retrospective rework occurs
because developers knew about the needs but
didn’t accommodate them for whatever reason
(perhaps because of excessive schedule pressure).
Suppose, for example, that rework involved
adding an interface to the next version. The rework
would be evolutionary if the customers added the
requirement for such an interface after the devel-
opers had finished the previous version. The
rework would be retrospective (and thus avoidable)
if the developers knew the capabilities the new ver-
sion would require and did not add the interface in
anticipation of needing it later on.
Corrective rework. Corrective rework occurs
when developers correct defects that result from
mistakes previous developers (most likely them-
selves) made. Defects can be due to mistakes of
commission—doing something incorrectly—or
mistakes of omission—not doing something they
or others should have done. Defects encountered
at runtime can result in failures, which could cause
system crashes or produce incorrect results or
unexpected behavior.
Corrective rework fixes defects detected in the
new capabilities or defects in a previous version
that the new capabilities expose. In iterative devel-
opment, mistakes count as defects if developers find
them

  • while integrating the new capabilities of the next version with the current version or
  • while verifying and validating the next version.

Table 1. An iterative rework taxonomy.

Evolutionary rework
  Characteristics: work performed on a previous version of an evolving software product or system to enhance and add value to it.
  Good if it adds value without violating a cost or schedule constraint.
  Bad if it violates a cost or schedule constraint.
  Ugly if it smacks of gold plating.

Avoidable rework, retrospective
  Characteristics: work performed on a previous version of an evolving software product or system that developers should have performed previously.
  Good: small amounts are inevitable; better now than later.
  Bad if it occurs routinely.
  Ugly if excessive; it indicates a need to revise work processes.

Avoidable rework, corrective
  Characteristics: work performed to fix defects in the current and previous versions of an evolving software product or system.
  Good if total rework is within control limits.
  Bad if it results in patterns of special-cause effects.
  Ugly if it results in an out-of-control development process.
Verification is the process of determining
that a work product is complete, consistent,
and correct with respect to its requirements.
Validation determines if a work product
is suitable for its intended use. In iterative
development, common verification tech-
niques include traceability, peer reviews, and
testing, while testing and demonstrations are
common validation techniques.
Mistakes that developers make and correct while
developing the next versions of their work prod-
ucts do not count as defects. Defects count only in
work products that have satisfied their acceptance
criteria and are then checked into the official build
directory.
IS IT GOOD OR BAD?
In many cases, rework is obviously done in
response to changes in user requirements, design
constraints, or the operational environment (evo-
lutionary rework). In other cases, rework addresses
the failure of the previous version to provide the
capabilities needed in the next version (retrospec-
tive rework). In still others, it is in response to
detected defects (corrective rework).
Fuzzy lines
The line between evolutionary and retrospective
rework—and even whether one is good and the
other bad—can be fuzzy.
Suppose, for example, that unforeseeable refac-
toring is required to modify the previous version’s
structure to make it a suitable basis for the capa-
bilities of the next version; the rework effort is
evolutionary and thus unavoidable (good). If devel-
opers had foreseen future needs but chose not to
include them in the previous version, the rework
effort is retrospective and thus avoidable (bad).
The fuzzy part is in the interpretation. Customers
might say that they are only clarifying requirements
the developers should have understood (retrospec-
tive rework), while the developers might maintain
that the customer changed the requirements (evo-
lutionary rework).
In other cases, retrospective rework might be the
better choice. Suppose reworking version 5 would
take less effort than attempting to accommodate
version 5’s needs while building version 4. In this
case, rework to version 4 while developing version
5 is arguably delayed evolution, not retrospective
rework.
Collecting and analyzing rework data
The development team and organization must
understand the kinds of rework that occur and why,
which requires collecting and analyzing rework
data. This activity is possible even in ambiguous
situations. Because the goal is to identify major
trends and to determine the rework’s root causes,
the rework data does not have to be highly accu-
rate to indicate problem areas.
Collected rework data fits the three categories in Table 1:

  • Evolutionary. Rework caused by external factors, such as changed requirements, design constraints, environmental factors, or other unforeseeable external events.
  • Retrospective. Rework to improve the structure, functionality, behavior, or quality attributes of a previous version to accommodate the needs of the current version while building the current version.
  • Corrective. Rework to fix defects discovered in the current version and previous versions during reviews, tests, and demonstrations of the current version.
Four common techniques—informal anecdotes,
observation, record keeping, and automated ver-
sion control—are suitable for collecting rework
data.
Informal anecdotes and observation are useful
indicators for small agile development teams and
incremental-build subsystem teams. Record keep-
ing is essential, although software developers tend
to avoid it because they feel it takes time that
detracts from their productivity and because they
fear that the organization will tie rework to per-
formance reviews.
Publishing sanitized rework data in summary
form and basing process improvements on the data
will show developers that the time spent in record
keeping results in more efficient and effective work
processes, thus increasing productivity. Organiza-
tions can ameliorate the fear of tying rework to per-
formance by having a nonthreatening person such
as a clerical worker ask each developer what per-
centage of the previous day was spent on new work
and evolutionary, retrospective, and corrective
rework. That person then records the composite
data for the team or project, thus preserving indi-
vidual anonymity.
Most iterative development projects require some
kind of automated version-control system because
of the many interrelated, ongoing work activities
and work products. With automated version con-
trol, an organization can easily collect data on the
effort spent (new work and each kind of rework)
for each product version by adding electronic forms
that developers complete when they check in the
next version’s components. The collected data is
not associated with the person who enters it, thus
ensuring anonymity.
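A minimal sketch of how such check-in data might be aggregated appears below; the record layout, field names, and numbers are hypothetical illustrations rather than the article's instrument, but they show how per-period rework percentages can be computed without storing developer identities.

from collections import defaultdict

# Each anonymous check-in record: (reporting period, hours of new work, and hours
# of evolutionary, retrospective, and corrective rework). No developer identity is
# stored, which preserves anonymity.
checkin_records = [
    ("2005-W01", 30.0, 3.0, 1.0, 2.0),
    ("2005-W01", 26.0, 2.0, 2.0, 3.0),
    ("2005-W02", 28.0, 1.0, 4.0, 6.0),
]

totals = defaultdict(lambda: [0.0, 0.0])  # period -> [total effort, total rework]
for period, new_work, evolutionary, retrospective, corrective in checkin_records:
    rework = evolutionary + retrospective + corrective
    totals[period][0] += new_work + rework
    totals[period][1] += rework

for period, (effort, rework) in sorted(totals.items()):
    pct = 100.0 * rework / effort
    flag = "" if 10.0 <= pct <= 20.0 else "  <-- outside the 10-20 percent band"
    print(f"{period}: {pct:.1f} percent rework{flag}")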
Developers should be willing to participate in
collecting rework data once they receive compos-
ite, sanitized feedback, see improvements based on
that feedback, and learn through experience that
managers will not use rework data in performance
evaluations (which is impossible if the data is
anonymous). Training classes, mentoring, and dis-
cussion groups can help developers learn to cate-
gorize rework data consistently.
HOW MUCH IS ACCEPTABLE?
For several years, our rule of thumb has been that
total rework (evolutionary plus both types of avoid-
able) is acceptable at 10 to 20 percent of the total
effort for each reporting period in an iterative devel-
opment process. The reporting period typically
varies from a week to a month. Weekly analysis of
rework data is desirable in a project’s early stages.
Less frequent reporting and analysis is appropriate
once rework stabilizes and remains within the
desired range.
In agile development, 10 to 20 percent of the effort
is about one to two hours per workday for each
developer (or developer pair in paired program-
ming). Rework of 10 to 20 percent in a weekly incre-
mental-build process is roughly a half to a full day
of effort per developer during integration, review,
testing, and demonstration of the weekly builds.
Spending more than 20 percent of total effort on
rework (evolutionary and avoidable) indicates
problems in the work processes, work products, or
both. Excessive evolutionary rework typically indi-
cates problems in requirements, design, or envi-
ronmental factors, such as an unstable operating
system or changing hardware specifications.
Excessive avoidable rework indicates problems in
the development process, the tools, the methods,
the design strategy, or the developers’ skills or moti-
vation. It could also be the result of excessive sched-
ule pressure.
Less than 10 percent rework, on the other hand,
could indicate insufficient reviewing, revising, and
testing. Robert Cringely addresses the possibility
of too much refactoring (www.pbs.org/cringely/pulpit/pulpit20030508.html), but Martin Fowler
and Kendall Scott advise organizations to be sus-
picious if developers refactor less than 10 percent
of the code during each iteration.3
The purpose of reviewing and testing is to find
defects. Software developers are only human;
humans make mistakes that result in defects that
must be corrected. As a result, the best organiza-
tions rarely achieve less than 5 percent corrective
rework, so it is reasonable to be suspicious if evo-
lutionary, retrospective, and corrective rework total
less than 10 percent of the overall effort.
DEALING WITH THE UGLY
The “How Control Charts Work” sidebar
describes the control-chart methodology that qual-
ity initiatives such as total quality management and
six sigma use. The methodology suggests conduct-
ing a root-cause analysis and initiating corrective
action when software rework in two of three or
three of five successive reporting periods exceeds
20 percent of total effort or is less than 10 percent
of total effort.4,5
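The decision rule above is simple enough to automate. The following minimal sketch (in Python, with hypothetical sample data) flags a reporting history for root-cause analysis when two of three or three of five successive periods fall outside the 10 to 20 percent band.

def outside_band(rework_pct, low=10.0, high=20.0):
    return rework_pct < low or rework_pct > high


def needs_root_cause_analysis(history):
    """history: rework percentages for successive reporting periods, oldest first."""
    for window, threshold in ((3, 2), (5, 3)):  # two of three, three of five
        for start in range(len(history) - window + 1):
            violations = sum(outside_band(p) for p in history[start:start + window])
            if violations >= threshold:
                return True
    return False


print(needs_root_cause_analysis([12, 15, 18, 14]))      # False: every period within 10-20 percent
print(needs_root_cause_analysis([12, 22, 15, 24, 14]))  # True: two of three successive periods exceed 20 percent
print(needs_root_cause_analysis([8, 14, 9, 16, 12]))    # True: two of three successive periods fall below 10 percent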
How Control Charts Work
A control chart is a plot of a variable of interest versus time in which
the variable has specified control limits. In the 1920s, Walter Shewhart
developed control charts as a statistical method for analyzing manu-
facturing defects in the mechanical switching relays used in telephone
switching centers. Shewhart made the distinction between common- and
special-cause effects.
When plotted against time, common-cause effects exhibit random
fluctuations around a mean value. In software development, common-
cause effects of rework are the noise in the development process that
results from factors such as variations in developers’ skill levels and moti-
vations, fluctuations in requirements, lack of familiarity with the appli-
cation domain, and product complexity. One of the goals of quality
improvement is to reduce the mean and deviation of common-cause vari-
ations.
Special-cause effects are the result of exceptional situations that pro-
duce spikes and troughs in a variable of interest that cause the value to
lie outside the band defined by an upper control limit (UCL) and a lower
control limit (LCL). A process that is in control exhibits random fluc-
tuations about a mean value, and all variations are within the UCL and
LCL. An out-of-control process exhibits patterns that violate the UCLs
and LCLs for variables of interest.
For manufacturing processes, Shewhart used three standard deviations from the mean as the UCL and LCL (μ ± 3σ) and two criteria to indicate patterns of special-cause effects worth investigating:

  • two of three successive measurements on the same side of the mean and more than 3σ beyond it, or
  • four of five successive measurements on the same side of the mean and more than 2σ beyond it.
Determining the UCL and LCL of a control chart for a particular vari-
able of interest is based on pragmatic considerations. We have found
that, for most organizations, a UCL of 20 percent and an LCL of 10
percent are both desirable and achievable for software rework.
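For teams that prefer to derive limits from their own data rather than use the fixed 10 and 20 percent band, the following minimal sketch computes Shewhart-style control limits (mean plus or minus three standard deviations) from baseline periods judged to be in control and flags later periods that fall outside them; all numbers are hypothetical.

import statistics

baseline = [14.0, 16.5, 13.0, 15.5, 17.0, 14.5, 15.0, 16.0]  # rework percent per period, judged in control
recent = [15.5, 27.0, 14.0]                                   # newer reporting periods to check

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)              # sample standard deviation of the baseline
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # Shewhart-style control limits

print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
for period, value in enumerate(recent, start=len(baseline) + 1):
    status = "possible special cause" if (value > ucl or value < lcl) else "common-cause variation"
    print(f"period {period}: {value:.1f} percent rework -> {status}")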
Figure 4 shows situations in which rework is
excessive and insufficient. In Figure 4a, rework is
excessive in three of five successive reporting peri-
ods, indicating that corrective action is warranted,
while Figure 4b shows insufficient rework in two of
three reporting periods. The anomaly in Figure 4b
might be the result of insufficient review and test-
ing, which warrants corrective action. It might also
be the result of exceptional quality in those ver-
sions, which warrants determining root causes so
that the organization can emulate causative factors
in other projects.

Figure 4. Rework control charts illustrating (a) excessive rework and (b) insufficient rework. Each chart plots rework per reporting period against the 20 percent upper and 10 percent lower control limits.
PROCESS IMPROVEMENT
Avoidable rework is the bane of software devel-
opment and modification. Vic Basili and Scott
Green reported that 40 to 50 percent of the work
on clean-room projects at the NASA Goddard
Space Flight Center was avoidable rework.6 Other
reports indicate that rework can amount to as
much as 80 percent of total work.7-9 An analysis of
Cocomo II data indicates that most of the savings
in effort from improved software process matu-
rity, software architectures, and software risk
management came from reducing avoidable
rework.2
Clearly, identifying and remedying the causes of
avoidable rework offers outstanding opportunities
for higher quality, improved productivity, increased
worker morale, and increased customer satisfac-
tion. Figure 5 illustrates the effort saved by reduc-
ing defects (mistakes made), which reduces
avoidable rework and thus reduces total effort. A
well-known example is Raytheon’s reduction of
rework from 41 to 11 percent over four years, for
a net savings of $16 million.10
Figure 5 also illustrates the additional effort that
organizations must expend on quality enhancement
to meet the stringent requirements of safety-critical
and mission-critical systems, which demand the low-
est possible defect levels. Process improvements that
reduce defects can help offset that additional effort.

Figure 5. Effort saved by reducing defects, thus reducing avoidable rework. Total effort, which includes 10 to 20 percent rework, is plotted against defect density: defect reduction reduces rework and total effort, while quality enhancement increases total effort.
The fundamental tenet of process improvement
is that superior products result from superior
processes. From Walter Shewhart’s seminal work on
quality improvement to process improvements
based on the current SEI Capability Maturity
Model Integration and ISO Software Process
Improvement and Capability Determination
process maturity models, investments in process
improvement have repeatedly yielded significant
returns.11-13 Process improvement involves changing
work activities, organizational structures, roles,
relationships, methods, tools, and techniques to
improve quality, productivity, morale, profits, and
customer satisfaction.
Identifying the root causes of avoidable rework
can provide key indicators of the processes most in
need of improvement. Improvements in require-
ments-based testing and increased attention to inter-
face design (two of the most common causes of
avoidable rework) might, for example, cut avoidable
rework in half. If avoidable rework were 40 percent
of total work (not untypical), this reduction would
boost organizational performance by 20 percent.
Avoidable rework is unnecessarily high in most
software organizations. Identifying and cor-
recting its root causes is a high-leverage mech-
anism for improving quality, productivity, morale,
profits, and customer satisfaction. Organizations
seeking to improve these factors should try to
answer questions in six areas:
  • How can we collect rework data without intruding into and disrupting the developers’ creative efforts more than is necessary?
  • What percentage of avoidable rework is retrospective, and for which product attributes does it occur?
  • How can we better anticipate the capabilities we will need in the next version so that we can reduce retrospective rework?
  • What kinds of mistakes do we make and when do we make them? Could we detect them sooner? How could we prevent them?
  • How much corrective rework does it take to fix our mistakes?
  • How can we improve our processes to reduce or eliminate these kinds of mistakes?
The answers to some of the questions about
avoidable rework may be idiosyncratic to particu-
lar projects and organizations, but most are com-
mon to all types of organizations and software-
intensive systems and products. Whether avoidable
rework comes from local circumstances or the fail-
ure to apply well-known practices, understanding
and correcting its root causes is the key to cost-
effective process improvement.
References

 1. M. Fowler et al., Refactoring: Improving the Design of Existing Code, Addison Wesley, 1999.
 2. B. Boehm and V. Basili, “Software Defect Reduction Top 10 List,” Computer, Jan. 2001, pp. 135-137.
 3. M. Fowler and K. Scott, UML Distilled, 2nd ed., Addison Wesley, 1999, p. 28.
 4. W. Shewhart, Statistical Methods from the Viewpoint of Quality Control, Dover, 1956.
 5. S. Sytsma and K. Manley, “Common Control Chart Cookbook,” www.sytsma.com/tqmtools/charts.html.
 6. V. Basili and S. Green, “Software Process Evolution at the SEL,” IEEE Software, July 1994, pp. 58-66.
 7. C. Jones, Applied Software Measurement, McGraw-Hill, 1996.
 8. NSF Center for Empirically Based Software Engineering (CeBase), “Summary of the First CeBase e-Workshop,” Mar. 2001; www.cebase.org/www/researchActivities/defectReduction/eworkshop1/.
 9. S.E. Cross, “Message from the Director,” 2002 Annual Report, Carnegie Mellon Univ., Software Eng. Inst., p. 3; www.sei.cmu.edu/pub/documents/misc/annual-report/2002.pdf.
10. R. Dion, “Process Improvement and the Corporate Balance Sheet,” IEEE Software, July/Aug. 1993, pp. 28-35.
11. P. Crosby, Quality Is Free, Penguin Books, 1980.
12. P. Crosby, Quality Is Still Free, McGraw-Hill, 1996.
13. Carnegie Mellon Software Eng. Inst., “Demonstrating the Impact and Benefits of CMMI: An Update and Preliminary Report,” Oct. 2003; www.sei.cmu.edu/publications/documents/03.reports/03sr009.html.
Richard E. (Dick) Fairley is a professor of com-
puter science and director of the software engi-
neering program in the OGI School of Science and
Engineering of the Oregon Health and Science Uni-
versity. His research interests include software sys-
tems engineering, analysis and design, project
management, and practical software process
improvement. Fairley received a PhD in computer
science from the University of California, Los
Angeles. He is a member of the IEEE Computer
Society. Contact him at d.fairley@computer.org.
Mary Jane Willshire is dean of computer science
for the Colorado Technical University system. Her
research interests include software engineering,
human-computer interaction, and database man-
agement systems. Willshire received a PhD in
computer science from Georgia Tech. She is a
member of the IEEE, IEEE Computer Society,
SWE, and the ACM. Contact her at mjwillshire@
coloradotech.edu.