Modularity-in-Design:
An Analysis Based on the Theory of Real Options
by
Carliss Y. Baldwin and Kim B. Clark
First Draft: March 1992
Revised Draft: December 1992
This Draft: July 1994
We thank Robert Hayes and Robert C. Merton for stimulating discussions that contributed to the
ideas found in this work. We also thank Kent Bowen, Sugato Bhattacharyya, Alfred Chandler,
Clayton Christensen, Dana Clyman, Benjamin Gomes-Casseres, Marco Iansiti, Michael Jensen,
Tarun Khanna, Bruce Kogut, Joshua Lerner, Daniel Levinthal, Michael Long, Anita McGahan,
Andre Perold, Dan Raff, Richard Tedlow, Karen Wruck and seminar participants at Harvard
Business School, Boston College, Boston University, and the TIMS/ORSA and Financial
Management Association annual meetings for insights that led us to clarify and extend the
model. We especially thank the Editor, Abraham Seidmann, the Associate Editor (name if
permitted) and three referees (name if permitted) for detailed and constructive comments on a
previous draft. The financial support of the Harvard Business School Division of Research is
gratefully acknowledged. Any mistakes or omissions are ours alone.
Modularity-in-Design: An Analysis Based on the Theory of Real Options
by
Carliss Y. Baldwin and Kim B. Clark
Abstract
We investigate the economic role of modularity in the design of new products and
processes. “Modularity-in-design” is an intermediate stage in a spectrum that ranges from
“modularity-in-production” to “modularity-in-use.” In this paper, we describe each stage of
modularity, give examples of products in each stage, and show how designs can -- but do not
have to -- move from one stage to the next. Although the three stages are linked in an
evolutionary sense, each type of modularity involves a unique set of benefits and costs, and thus
requires a separate economic analysis.
The remainder of the paper focuses on modularity-in-design. We construct a formal model
of the design process based on the financial theory of options. We analyze the benefits, costs and
managerial implications of breaking apart a complex design into independent modules linked by
pre-established interfaces. The model provides a rigorous framework for dealing with the
economic effects of uncertainty, and shows how modular designs, independent experiments and
testing technologies interact to create economic value. The model also highlights specific ways in
which modularity rests on and contributes to the knowledge base and organizational structure of a
company. The structure of the model and insights gleaned from the analysis may be useful in
formulating design strategy, organizing development programs, or supervising investments in
design capabilities and knowledge.
Modularity-in-Design: An Analysis Based on the Theory of Real Options
by
Carliss Y. Baldwin and Kim B. Clark
1. Introduction
The design of a product -- its form, its function, its aesthetics, the details of its
engineering -- has a powerful effect on its economic value. Its effect on customer perception and
experience is, of course, direct. Design can also have a dramatic effect on how a product is made
and maintained, and thus on its cost, its quality and its reliability. Therefore, in the last decade,
the concept of “design performance” has been expanded beyond what the customer sees to
include the product’s ease of manufacturability, assembly, and servicing (Nevins and Whitney,
1989). But the impact of design goes even deeper. As increasing numbers of firms face
competition centered on rapid product development, a number of researchers have turned their
attention to the effects of design on the development process itself.1 From these studies,
“modularity” has emerged as a pervasive theme and recommended best practice.
A “modular design approach” breaks apart a complex system into components that
function independently. The components are linked in ways that minimize incidental
interactions, often using standard interfaces. For example, the familiar modular phone plug is a
standard mechanical interface for connecting devices to the telephone system. Devices must also
conform to electrical and informational interfaces (current and code requirements) in order to
send or receive intelligible signals over the wires.
The economic benefits of modularity are numerous and complex. In manufacturing,
modularity contributes to scale and scope economies in production. In software, it makes code
more readable and hence easier to maintain. In product development, it makes pieces of an overall design reusable, resulting in greater efficiency and higher speed in the development process. For
users, modularity makes possible customized designs, lower maintenance costs and more efficient upgrading over the lifetime of the product.

1 See, for example, Von Hippel (1990), Clark and Fujimoto (1991), Eppinger (1991) and Ulrich (1992).
However, a single design approach cannot obtain all of these benefits all the time. Indeed,
the existence of such diverse benefits suggests that there are in fact different kinds of modularity,
with different sources of value, and different implications in the market. Furthermore,
numerous studies of engineering design show clearly that modularity is not an inherent feature of
the product, but must be created through specific design and engineering choices, and through
specific investments. And while the benefits of modularity seem, in theory, to be quite
significant, there is wide variation in firms' ability to realize those benefits in practice.
Part of the challenge of realizing the value in modularity lies in the fact that an effective
modular design requires several kinds of knowledge -- component knowledge (how the pieces of
a system work); architectural knowledge (how components may be brought together to make a
functioning system); interface knowledge (the details of interconnections and how they affect
performance); and testing knowledge (how to calibrate performance at both component and the
system level). Prior research has shown that it is quite easy for certain types of knowledge to
“fall between the cracks” of an organization. For example, it is not uncommon for organizations
to pursue component knowledge, but to neglect knowledge of architecture or interfaces.
(Henderson and Clark, 1990; Christensen, 1994). Such firms may have modular designs, but
system performance will suffer and potential value will be lost. Moreover, firms that fall into
this trap may be vulnerable to competitive attack by firms that are able to combine existing
elements of a design in new ways. The upshot is that unless a firm invests in a coherent set of
supporting capabilities, a modular design will have far less impact than it could.
Modularity is thus attractive, but complicated. Managers charged with evaluating
potential investments in modularity must ask: What is the purpose of this investment? Will it
reduce manufacturing cost? Will it make the design process more efficient? Will it be valued by
consumers? What else must be done to make modularity effective? These questions in turn
establish a broad agenda for research.
The goal of this paper is to understand the economic benefits and costs of modularity in
the design of new products and processes. Drawing on many lines of research in engineering,
technology and design management, we construct a formal model of the design process based on
the financial theory of options. We use the model to analyze the effect of modularity on the
expected performance, speed and efficiency of the development process. The model provides a
rigorous framework for dealing with the economic effects of uncertainty, and shows how
modular designs, independent experiments and testing technologies interact to create economic
value. The structure of the model, and quantitative insights gleaned from the analysis, should be
useful to managers who are responsible for formulating design strategy, organizing development
programs, or supervising investments in design capabilities and knowledge.
The plan of the rest of the paper is as follows. In section 2, we review the literature,
showing how three types of modularity with different economic implications arise at different
stages in the rationalization of a product design. We then focus on modularity as a design
practice that affects new product and process development. Section 3 develops the basic model,
and considers the effect of different degrees of modularity on economic value. Section 4
analyzes the interaction of modularity with other investments to create additional value through
experimentation, variety and speed. Section 5 examines the cost of modularity, independent
experiments and testing technologies. Section 6 relates modularity to the evolving knowledge
base of a firm and describes organizational implications. Section 7 concludes.
2. Stages of Modularity
Discussions of modularity can be found in technical journals and textbooks in a number
of fields.2 Because of the diversity of applications, it is difficult to integrate the many separate
views of modularity into a single over-arching concept. However, much of the diversity in the
literature can be explained by the fact that different types of modularity arise at different stages
of a product’s evolution. For our purposes it is useful to distinguish three stages: modularity-in-production, modularity-in-design, and modularity-in-use. As shown in Figure 1, these stages follow a logical sequence: modularity-in-design implies some degree of modularity-in-production, and modularity-in-use presupposes some degree of modularity-in-design. However, each type of modularity offers a distinct set of economic benefits.

2 A partial list would include: in architecture, Alexander (1964); in software, Parnas (1972), Stevens, Myers and Constantine (1974), Myers (1975) and Kernighan and Plauger (1976); in mechanical engineering, Rinderle and Suh (1982); in industrial design, Ulrich (1992); in manufacturing, Nevins and Whitney (1989) and Han, Hitami and Yoshida (1985); in automotive design, Clark and Fujimoto (1991) and Eppinger et al. (1994); and for general product development, Rothwell and Gardiner (1988) and Von Hippel (1990).
Modularity-in-production rationalizes a product into components and allows parts to be
standardized (e.g., all screws the same size). Modularity-in-design goes a step further: the
product is decomposed into a set of independent subunits, which can be mixed and matched to
design a complete system. Finally, a product becomes “modular-in-use” if consumers can mix
and match components to arrive at a functioning whole. A significant degree of control over the
design is thereby transferred from the firm to its customers.
___________________________________________________________________________

Figure 1
Stages of modularity and associated economic benefits

Stage                    Economic Benefits
Modular-in-Production    Manufacturing efficiencies: economies of scale and
                         specialization; "variety reduction" through component
                         swapping and sharing; inventory and flow line economies
Modular-in-Design        Innovation efficiencies: mix and match of design
                         components; independent experimentation; testing and
                         selection at the component level
Modular-in-Use           Consumer options: mix and match at or after purchase;
                         component selection by the user; flexible upgrading
___________________________________________________________________________

Although each stage of modularity builds on the one before, progression from one stage to the next is not inevitable. Products whose designs are difficult to improve, such as some foods3
or simple tools,4 may benefit from modularity-in-production, but gain very little from the mix
and match flexibility of modularity-in-design. In other cases, elements that are modular in the
design phase become irreversibly fused during production. For example, a jet aircraft or a ship
may be designed as a set of modular units, with a high degree of flexibility and customization.5
But once the parts have been welded together, the finished products cannot be reconfigured
except at great cost. Ships and aircraft must sacrifice modularity-in-use to structural integrity,
and in this respect differ from computer systems or living rooms, whose elements can be
reconfigured and upgraded at the user’s whim.6
But in many products the nature of modularity does develop over time, moving in an
evolutionary pattern from manufacturing to design, and from design to use. However, each
degree of modularity must be “purchased” through an investment in knowledge and supporting
systems. The returns to those investments -- the benefits of different degrees of modularity -- are
different in different stages.
In the earliest stages, manufacturing efficiencies are the primary source of gain. The
focus of design efforts is often to reduce the total number of different parts used in a product
family. Benefits of this type of modularization include economies of scale in parts manufacture;
inventory savings; and a reduction of complexity in materials flow and scheduling. In addition,
because of the reduction in complexity, a given facility may be able to handle much higher levels
of product variety than before. Group technology, variety reduction programs and design-for-manufacturing all use modularity-in-production as a technique for reducing manufacturing costs.7

3 An Oreo cookie has built-in modularity but, many would argue, no room for improvement. In contrast, mix-and-match “designer pizzas” are a staple at many restaurants.

4 Most hand tools have simple and hence very stable designs. Cooper Industries made a virtue of simplicity: its strategy was to purchase hand tool companies, and invest in process technology, advertising and distribution, while cutting back on variety and new product R&D. For these products, the potential benefits of design improvement were judged not worth the effort of R&D. (Stuart and Collis, 1991)

5 Rothwell and Gardiner (1988); Nevins and Whitney (1989).

6 Reality is naturally quite a bit more complicated than Figure 1 or the above examples indicate. In any complex design, different parts will exhibit different degrees of modularity. For example, most automobile engines are modular-in-production; that is, their designs have been rationalized to take advantage of economies of scale in casting and machining, and they exploit economies of scope by sharing components within product families. However, the tires of a car are relatively easy to remove and replace, and hence are modular-in-use. Despite this complexity, we still think it is useful to describe a design as having an “average” degree of modularity.
In the course of rationalizing parts for production, firms must collect and codify
information about the product’s basic structure.8 At this point the product develops an
architecture, i.e., a hierarchical organization of elements that defines the product and its variants.
Modules (or components) and the interfaces that connect them are a central feature of the
product's architecture and make possible innovation through variation of specific modules within
the overall architecture. Such modularity-in-design -- coupled with experimentation and testing -
- can improve the efficiency and speed of the development process. The nature of these
“innovation efficiencies” and investments needed to realize them are the primary focus of this
paper.
Once a firm has achieved modularity-in-design, moving to modularity-in-use transfers
choices that were once the designer's to the user. This transfer of “decision rights” is very
valuable to consumers who prize variety (What will I want to wear tomorrow?), or who face
uncertainty about future needs (What are the information needs of my company?).9 Therefore
consumers may be willing to pay a premium for products that are modular-in-use. However, there are risks in offering such products, for once the consumer has control over what goes into the
system, he or she may add or subtract from it in ways not contemplated by the original designers.
Thus a firm introducing a product that is modular-in-use must be prepared to face competition
focused on subelements of the system.10
7 For details on these techniques, see Han, Hitami and Yoshida (1985); Nevins and Whitney (1989); Suzue and
Kohdate (1990). For models and additional examples, see Rothwell and Gardiner (1988), Shirley (1990) and
Sanderson (1991).
8 For example, Shirley (1990) describes how product analysis aimed at reducing production costs led to the creation of
a “library” of designs organized by module.
9 On the value of decision rights generally, see Jensen and Meckling (1992). Wruck and Jensen (1994) describe the
efficiency gains from decentralizing decision rights within an integrated chemical plant. In effect, they describe the
modularization of a production process (as opposed to a product design).
10 The classic example of competitive entry focused on a subset of a modular system is the entry of plug-compatible peripheral manufacturers in the wake of IBM’s System/360 (Dorfman, 1986; Pugh et al., 1991; Christensen, 1994). Other studies that focus on the effects of modularity-in-use on the development of particular markets include: Langlois and Robertson, 1992 (stereos and microcomputers); Langlois, 1992 (cluster tools); Garud and Kumaraswamy, 1993 (workstations); and Gomes-Casseres, 1994 (microprocessors). In industrial economics, several recent papers examine theoretical equilibria in markets where consumers can “mix and match” components of a complete system. See, for example, Matutes and Regibeau (1991); Economides and Salop (1991); and Farrell, Monroe and Saloner (1993).
Overall the evidence in the literature points to important advantages in modularity, but
the creation of modularity and particularly the evolution from one degree to another is not
inherent in the technology, nor is it inevitable. The pursuit of modularity depends on choices the
firm makes, and yet the literature leaves open many questions about how modularity creates
economic value and how much modularity to pursue in production, design or use. On the one
hand, the economic benefits of modularity-in-production are fairly well understood. By
permitting parts sharing and swapping, modularity delivers classic scale and scope economies;
by reducing the number of basic building blocks, modularity also reduces the costs of
complexity.
On the other hand, less is known about the economics of modularity-in-design. Design
theorists in a number of fields have stressed the advantages of breaking apart designs into
flexible and reusable elements.11 Writing about development processes in general, Von Hippel
(1990) argued that the way in which tasks are subdivided can have a large impact on the speed,
efficiency and final outcome of the design effort. He drew on the early contributions of Simon
(1969), Marples (1961) and Alexander (1964) in framing his arguments, but, although he
discussed the conditions that ought to govern task partitioning, he did not attempt to develop a
modeling framework to analyze development choices.
In a recent seminal paper, Ulrich (1992) defined the concept of a product architecture and
argued that the choice of a “modular architecture” over an “integrated architecture” will affect
the performance, variety and efficiency of development efforts.12 Complementing Ulrich’s
taxonomic approach, Eppinger and his colleagues have mapped the actual tasks involved in
particular design processes, and shown how task ordering and partitioning can enhance or
impede the overall efficiency of development.13 Meanwhile, a large body of economic research has shown that product development strategy and performance -- including development efficiency, design quality and time-to-market -- dramatically affect competitive outcomes and long-term industry structure.14 However, the economic studies tend to view modularity as a technological “given,” not as a choice made by engineers and managers.15 (This is a major difference between the economic and engineering literatures on modularity.)

11 Alexander (1964); Simon (1969); Myers (1975); Kernighan and Plauger (1976); Suh (1990).

12 Ulrich proposes that there are three basic modular architectures: (1) slot architecture (each component has its own slot); (2) bus architecture (everything connects to a common central component); and (3) sectional architecture (all interfaces the same; no single central element). Ulrich’s taxonomy focuses on interfaces, while others have stressed the hierarchical relationships among components of the system (see Marples, 1961; Parnas, Clements and Weiss, 1985; Henderson and Clark, 1990; and Christensen and Rosenbloom, 1994). As yet, there is no robust, yet parsimonious taxonomy of modular systems.
For companies competing on the basis of timely and innovative products, modularity is
an important strategic choice that demands the attention of managers overseeing research, design
and development processes. Technology creates possibilities for modularity, but where to split a
design, how to assign tasks, how to link components, and, especially, how much to spend on
modularity are choices that managers must make. Their decisions in turn should be based on
informed judgment about the efficiency and other competitive advantages that may be obtained
via a modular design.
3. The Model
In practice, creating a modular design that delivers value in the market requires both
technical creativity and careful consideration of the financial costs and benefits associated with
the design. Although our model is not couched within a specific technology or engineering
discipline, it does attempt to capture the technical context in which modularity choices are made,
and to link the technical choices to financial costs and benefits. As a result, the model draws on
multiple disciplines and attempts to communicate results of interest to several different kinds of
scholars and practitioners. Our challenge in this section is to communicate what the model is
doing, and to build a common language and basis for analysis and evaluation. To meet that challenge we have found it useful to lay out our basic assumptions, and to discuss them in some detail before presenting the model’s formal results.

13 Eppinger (1991); Eppinger and McCord (1993); Eppinger et al. (1994). For a synthesis of Ulrich’s and Eppinger’s work on product design and architecture, see Ulrich and Eppinger (1994).

14 A very partial list of economic studies would include: Foster (1986), Henderson and Clark (1990), Trajtenberg (1990), Cusumano (1991), Clark and Fujimoto (1991), Ferguson and Morris (1993), Christensen (1994), Utterback (1994) and Gomes-Casseres (1994).

15 Richard Langlois’ work on the microcomputer industry is a noteworthy exception (Langlois, 1992; Langlois and Robertson, 1992).
We take as our basic starting point a design team (who act as one) faced with the problem
of designing a complex, technologically uncertain product for a market of identical customers.
Customers make purchase decisions based on product performance. The product’s performance
is in turn the result of a large number of individual decisions made by the design team, but the
precise relationship between individual design decisions and ultimate performance cannot be
predicted in advance.16
The detailed assumptions used to construct the model fall into three basic categories: (1)
the organization of the design process; (2) the impact of design decisions on performance and
value; and (3) the nature of uncertainty. For each category, we list our formal assumptions first,
and then discuss the reasoning behind our choices.
3.1 The Organization of the Design Process
We assume:
(1.1) The design process consists of a set of tasks, numbered 1 to N. Each task requires a
design decision and results in the specification of a design parameter. The design
parameters in turn affect the performance of the product (as viewed by consumers), but
the precise impact of a particular decision on performance is uncertain. There is a
“current model” of the product which represents a feasible combination of the N design
parameters.
(1.2) At the outset of the design process, tasks are partitioned, i.e., divided up among different
design groups. There are obviously a very large number of ways to partition N tasks. A modular partition divides the tasks into two or more subsets so that the detailed decisions within each subset can be made without reference to those in other subsets.

(1.3) Purely as an analytical device, we assume that the total number of tasks, N, is divisible by whole numbers up to j: N = j! A symmetric modular partition divides the N tasks into independent subsystems, each with an equal number of decisions. Below we will evaluate the effects of symmetric and asymmetric modular partitions on the speed, efficiency and performance of the design process.

16 In the interest of transparency, we have had to suppress many important and interesting features of the real world. For example, in reality, design processes typically involve many people with different (and sometimes conflicting) objectives. Customers have widely varying tastes. Price is not the same as willingness to pay, but is set through the interaction of demand, cost, volume and competitive behavior. All of these important concerns are omitted from the model so that we may focus on the linkages between design and value. Fortunately, the model’s results are not very sensitive to the precise specification, and thus insights gleaned from it can be used to guide strategy and evaluate investments in development capabilities in a wide range of contexts.
Discussion. The characterization of a design process as a set of tasks and/or decisions is common
in the literature and not controversial.17 However, we focus on a situation where the design
team confronts both complexity and uncertainty, so that the design process is unstructured. In
practice, designers approach unstructured design processes in a hierarchical fashion. Initially, the
overall task is broken up into large blocks, and then smaller tasks are defined and assigned
within each block. This procedure continues and results in a multi-layered hierarchy of design
(Marples, 1961; Clark, 1985). Then as individual design decisions are made they are linked up
and tested for mutual consistency.18
A modular design approach is essentially a strategy for attacking an unstructured design
problem. The basic idea is to create “modules” within the overall design, which can be
segregated and addressed as independent subproblems. Designers divide their tasks so as to
maximize the interaction among elements within a module and minimize the interaction of
elements across modules (Alexander, 1964; Myers, 1975; Suh, 1990; Ulrich, 1992). But an
effective system solution to the overall design problem requires that designers also develop ways
to link modules together. Therefore, to make modularity work, designers must find a way to achieve module independence and module integration at the same time. Software development illustrates the dilemma and its solution.

17 Eppinger and his colleagues have actually managed to list the tasks involved in several highly structured design efforts. However, a question arises as to whether comparable lists can be constructed when the underlying processes are not well understood (Eppinger et al., 1994). In framing our assumptions we rely on the fact that practitioners and design theorists across a range of fields are united in saying that modular partitions can be found or constructed for most complex designs.

18 The process of integrating individual design choices is complex and costly. Repeated cycles of problem-solving may be necessary to get the design to work, and the end product may be an inelegant compromise. Over the last ten years, a huge literature has developed around the theme of integrating design processes. See, for example, Nevins and Whitney (1989); Clark and Fujimoto (1991); Iansiti (1993).
A key development in the application of modularity to software was the principle of
“information hiding” initially put forward by Parnas (1972a,b). Parnas reasoned that if the details
of a particular software module’s design were hidden from other modules, then changes to that
module could be made without changing the rest of the system. The idea of separating function
(what the module does) and structure (how it accomplishes its function) was then adopted by
Stevens, Myers and Constantine (1974) as a tenet of the “structured” or “composite” approach to
software development.
The concept of information hiding implicitly requires that well-defined interfaces exist
between different subelements of a design. In software engineering the ideal interface consists
of a functional name, and a minimal set of data transferred to and from the module. The details
of how the module accomplishes its function should not be “visible” to the program that “calls”
it. Equally important, the module should operate on a general data set, and not have to "know” in
detail how the data on which it operates were stored, organized and addressed (Myers, 1975;
Schach, 1993).
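To make the principle concrete, here is a minimal sketch in Python (our illustration; the SymbolTable example and all names in it are hypothetical, not drawn from the sources cited above) of a module that hides its storage format behind a two-function interface:

```python
# A minimal sketch of "information hiding" (in the spirit of Parnas, 1972):
# callers see only a functional interface; the storage format stays private.
# The SymbolTable example and all names here are hypothetical illustrations.

class SymbolTable:
    """Interface: store(key, value) and lookup(key).

    Calling code never learns how entries are kept. The dict-based
    implementation below could be replaced by a sorted list or a B-tree
    without changing a single caller.
    """

    def __init__(self) -> None:
        self._entries = {}           # hidden detail: hash-based storage

    def store(self, key: str, value: float) -> None:
        self._entries[key] = value   # hidden detail: direct dict assignment

    def lookup(self, key: str) -> float:
        return self._entries[key]    # hidden detail: hash lookup


# Callers depend only on the interface:
table = SymbolTable()
table.store("x1", 3.7)
print(table.lookup("x1"))            # 3.7 -- unaffected by implementation swaps
```

Because the caller touches only store and lookup, the module can be redesigned internally -- the essence of a modular partition -- without disturbing the rest of the system.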
More generally an interface defines the rules that a given subsystem design team needs to
follow to ensure that the results of their work can be integrated effectively with the work of other
teams. The interface thus embodies knowledge about the interactions between subsystems, and is
critical to achieving overall system performance.19 In the final section of this paper, we discuss
the knowledge that is needed to arrive at a modular partition, and describe how such knowledge
is likely to emerge and evolve over time. At this point, however, we simply posit the existence of modular partitions and related interfaces that allow the design to be divided up into independent subsystems.

19 Eppinger et al. (1994) observed that “decoupling” tasks in a large design (which is the same as modularization) can be accomplished by inserting another set of tasks earlier in the design process. The earlier tasks, in addition to dividing the effort in a sensible way, also establish the interfaces that subsystems must adhere to. The effort of designing interfaces is thus an investment that must be recouped later in the process. In practice, interfaces are usually not perfect. Effective integration then requires modifications and mutual adjustments in components to respond to unanticipated interactions. This is particularly true when the new design extends beyond established practice; integration then requires joint problem solving and close interaction between specialized component groups (Clark and Fujimoto, 1991; Bacon et al., 1993; Iansiti, 1993). All of this underscores the fact that modularization is an evolutionary process that is based on, and in turn supports, knowledge building in an enterprise.
3.2 The Effect of Design Decisions on Value
We assume:
(2.1) All consumers evaluate products in the same way: faced with two versions, all will agree
as to which is better. The “total performance” of the product, denoted X, reflects the
consumers’ willingness to pay: higher X translates into higher revenues and a higher
return to the development effort. Without loss of generality, the performance of the
“current model” is set to zero.
(2.2) The product has many attributes. Fundamentally the design parameters (hence the
designers’ collective decisions) determine the product’s attributes and the consumers’
willingness to pay. For simplicity, we assume that each design parameter contributes to
total performance in a specific and separable way, and thus X may be decomposed into a
sum of the shadow values corresponding to individual design parameters:
X = \sum_{i=1}^{N} x_i ;
(2.3) The shadow values of specific design parameters are unknown to both the consumers and
the designers. Designers do not know the exact effect of any single design decision on the
product’s overall performance and value. Consumers in turn can only evaluate realized
products. Given two versions of the product, consumers can express their preferences, but
they cannot explain why a given set of design parameters has a particular value.
(2.4) At a specific point in time, T, the design team fixes the N design parameters and builds a
prototype. The prototype’s overall performance is measurable: the design team can
observe X, and compare its performance to that of the current model. In addition, if the
design was initially partitioned into modules, the design team can evaluate modular
combinations, that is, it can mix and match modules of the prototype with complementary
modules of the current product.
(2.5) Building and evaluating a prototype involves a cost, which we call “the cost of testing.”
Discussion. Compared to the rich set of descriptions of the design process, much less has been
written about how individual design decisions get translated into economic value. For a complex
product, the chain that connects design to value is long. At one end, technical factors may restrict
performance and slow the rate of problem solving. At the other end, consumer perceptions
(which may be evolving along with the product), pricing and competition all affect the value
captured by the company.20 In general, technical hurdles are hard to measure, consumer desires
are hard to understand and competitors’ actions are hard to predict, and yet the value of a new
product depends on the interaction of all three!21
Faced with this complexity, we have elected to map design decisions onto value in a very
simple way that still preserves the essential structure of design processes. At the core of our
assumptions are three basic observations: (1) design affects product performance, which in turn
affects consumers’ willingness to pay; and (2) products have multiple attributes (many
dimensions of performance), which consumers weigh in some fashion to arrive at a final
valuation; and (3) the design process takes time and candidate designs must be tested. We think
these observations are self-evident for many design processes, and hence should not be
controversial.
We go on to assume that (1) product attributes have a one-to-one correspondence with
design parameters; (2) consumers’ perceptions can be decomposed into several dimensions, with
overall value being a simple sum of the “shadow value” of each dimension; (3) the design
process takes a fixed amount of time and testing involves a fixed cost. These specific additional assumptions were made to simplify the model. They should not be taken as literally true, but viewed as approximations of a more complex reality.22

20 On the evolution of consumer perceptions in response to new designs, see Clark (1985) on the “hierarchy of concept.” Evidence of changing preferences based on developing knowledge about a product can be found in Trajtenberg’s (1990) study of the market for medical imaging technology.

21 Recently there have been a few attempts to map technical, consumer and competitive parameters into financial value using modern management science and financial modeling techniques: see, for example, Cohen et al., 1993 (new food products) or Nichols, 1993 (drug development at Merck). Mostly, however, research, design and development efforts have not been subjected to rigorous financial valuation (Baldwin and Clark, 1992). This may change as a result of increased capital market oversight (Hall, 1990; Jensen, 1993).
3.3 The Nature of Uncertainty
We assume:
(3.1) A change in any design parameter will have an uncertain impact on the value of the
system as a whole.
(3.2) The probability distribution of aggregate value is unchanged by modularization.
(3.3) (To simplify the analysis) (a) every modularization results in a normal distribution of
value for each module; (b) the expectation of value for each module is zero; and (c) the
variance, denoted σ_n^2, for a module comprising n design parameters, increases linearly with time (T) and the complexity (n) of the design problem: σ_n^2 = σ^2 nT, where σ^2 is a
parameter of the design process.
Discussion. The observation that design choices have uncertain effects on the performance of
complex systems pervades the literature and is a prime rationale given for decomposing designs
into modular -- independent -- subunits.23 Uncertainty in the relationship between parameters of
design and system value complicates the design problem, but in so doing creates new sources of
value. Uncertainty arises because of incomplete knowledge, either about technical relationships
(i.e. the design parameter-system performance connection), or customer response (i.e. the system
performance-system value connection).

22 In general, the qualitative results of the model carry over to more complex specifications. For example, the mapping from design parameters to value can be many-to-many (many design decisions affecting many functions); can involve intermediate variables (functions the consumer does not perceive directly); and can be nonlinear. The time horizon and the cost of testing may be uncertain, or triggered by an internal event (like attaining a certain performance goal). Finally, the development process may have several well-defined stages and include multiple rounds of designing, screening and testing. Any of these extensions can be incorporated within the basic structure of the model, although notational transparency would be lost with each additional complexity.

23 The research of Eppinger and colleagues (1991; 1993; 1994) into the fine structure of design processes has shown that there is indeed great uncertainty about the impact of individual design parameters on even well-understood designs. Numerous empirical studies in finance and new product development show the larger effects of uncertainty in high failure rates for new products and new ventures.
Choices designers make get translated into value once the product is introduced in the
market. But before introduction, the value of a particular design is uncertain (and could even be
worse than the current product). Thus, anything the designers can do to create and act on
information in advance of the market will have significant value. Said another way, given the
uncertainty we have assumed, building options into the design process and acting on them as
information develops can have a huge impact on technological outcomes and the value of the
effort.
Of course, one can imagine a world of no uncertainty. In that world designers would
know exactly how their choices mapped into value and there would be no possibility of
introducing a design of less value than the current product. Designers would simply restrict their
choices to the set of value-increasing parameters, and would not need to build options into their
designs. That is a world of no surprises, no setbacks, no failures, no breakthroughs. It is not the
world we model. What we are after is a model that sheds light on the halting, groping, and
searching that characterizes research, design and development processes for complex products
ranging from new medical procedures to the latest in consumer electronics.24 A modular design
is essentially an options-based approach to a complex, uncertain design problem, and thus we use
an options-based model to investigate the value of modularity and how best to exploit it.
Assumptions 3.2 and 3.3 put some additional structure on the problem in order to
simplify and focus the model. The first of these deals with the effect of modularization on the
distribution of outcomes. The question is this: Aside from option value, are designs likely to be
better or worse as a result of a modularization? Design theorists and engineers are divided on
this issue, and a full answer clearly depends on the physical properties of the object being
designed, as well as the designers’ skills both in perceiving opportunities to modularize, and in designing interfaces.25 In our analysis, we assume that the distribution of value is unchanged by modularization. This assumption is made for transparency and analytical convenience -- in reality, modularization may make design outcomes better or worse, but the direction of the effect will vary from case to case. But assuming modularity is neutral allows us to separate the value of options created by modularity from its other effects on the design.26

24 From the outside, these halting, groping processes may be indistinguishable from progress under certainty, because the sequence of products actually introduced may still be marked by continuous improvement. “Bad draws” will be evident only in terms of slippage in the schedule, and if managers have built extra time into the schedule, even these delays would not be apparent to anyone outside the process. But even though the two processes might look the same to an external observer, they must be managed very differently. In the first instance, there is no need to worry about creating or exploiting options. In the second, options built into the process and exercised by managers create significant economic value.
It is also helpful if the distributions have a common shape and location under alternate
modularizations. Thus we assume a normal, zero-mean distribution for the value of the overall
system and all modules. In essence, this assumption suppresses what is going on within modules
(by assuming all modules are intrinsically alike), and allows us to focus on different ways of
dividing and combining modules.27
If module values are independent, additive, and normally distributed (assumptions 1.2,
2.2 and 3.3), and if modularization leaves the overall distribution unchanged (assumption 3.2),
then it follows that the variance of a module is an increasing function of the number of design
decisions in that module. We impose a slightly more restrictive functional form, assuming that a
module’s variance is linear in both the number of design decisions (n) and time (T). In essence,
we are saying that complex projects are more risky, and that to arrive at a dramatically different
design requires more time than to make a few superficial changes.28
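As an aside, the linear variance assumption in (3.3c) is easy to check by simulation. The following Python sketch (entirely our own illustration, with arbitrary parameter values) models a module's value as the sum of nT small, independent design steps:

```python
# Sketch: the sum of n*T independent steps, each with standard deviation
# sigma, has variance sigma^2 * n * T, as assumed in (3.3c).
# All parameter values below are arbitrary illustrations.
import random

sigma, n, T, trials = 0.5, 20, 4, 50_000
samples = [sum(random.gauss(0.0, sigma) for _ in range(n * T))
           for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(round(var, 2), sigma ** 2 * n * T)   # simulated variance vs. 20.0
```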
25 For a balanced discussion of this issue, see Von Hippel (1990); Ulrich (1992); Schach (1993); and Eppinger et al.
(1994). Alexander (1964), Myers (1975) and Suh (1982) are ardent proponents of modularity. On the other side,
Brooks (1975) takes the position that a “great” design must be the product of a single mind, i.e., any division of tasks
detracts from quality.
26 For example, Eppinger and McCord (1993) hypothesize that modularity reduces the number of iterations needed to
complete a design. This efficiency gain would be represented in our model as a reduction in the time needed to
complete the project, and is not an option value. In real world applications, the value of higher quality and/or reduced
cycle time can be added to the value of mix and match flexibility in estimating the total economic impact of a
modular design approach.
27 If outcomes (at both the system and module level) can be viewed as the sum of a large number of independent
actions, their distribution will be asymptotically normal by the Central Limit Theorem.
28 Many readers will recognize that our assumptions correspond to the definition of classical Brownian motion.
Indeed, the development of a new design can be modeled as a stochastic search process. Diffusion-type searches (for
which Brownian motion is the paradigmatic case) commence with a known design, and then proceed in small steps;
at each step there is uncertainty about where the next step will lead. Diffusion processes have a wide range of
application in the physical sciences, operations research and finance, and their properties have been intensively
investigated. See Harrison (1985) or Merton (1990) on the characteristics of these processes and for references to
related work.
Taken as a whole, our assumptions about uncertainty work best for complex products or
systems, where learning takes place at many points, and where the final design is the result of a
large number of incremental modifications.29 The assumptions are not a good characterization of
processes where the design is simple, or where improvement rests on a radical
reconceptualization of all elements of the design. Thus our model may provide insight into
development processes for computers, automobiles, aircraft and large software systems, but will
not shed much light on how best to design shoes, apparel or furniture.
3.4 The Value of a Modular System
Given the assumptions listed above, we can very quickly arrive at an options-based value
of a modular system.30 We begin by defining a standard normal variate for X and all modular
partitions of X. Let {x_i} be the values of the design decisions in a module of size αN, where α is an integer divided by N, and αN is the number of design decisions in the module. It follows from the assumptions that

z_\alpha = \frac{\sum_{i=1}^{\alpha N} x_i}{\sigma (\alpha N T)^{1/2}}

is distributed N(0,1).
Let V1 denote the present value of a process with only one module containing all N
decisions. The final outcome of this process, X, is the sum of the individual xi, and is normally
distributed with mean zero and variance σ^2 NT. The present value of X, conditional on its being superior to the current model, is

V_1 = e^{-rT} \sigma (NT)^{1/2} \int_0^\infty z \, n(z) \, dz ;   (1)

where e^{-rT} is a discount factor applied at time 0 to values obtained at time T, and \int_0^\infty z \, n(z) \, dz is the expectation of the right tail of a standard normal distribution and equals .399.

29 Processes exhibiting both incremental and radical change can be modeled through the theoretical device of a “jump-diffusion” process. Fundamental results of our analysis would be similar under such a process, although the exact solutions would be different.

30 Options theory, a field pioneered by Black and Scholes (1973) and Merton (1973), explicitly deals with contingencies, that is, optimizing actions taken in response to future events. Classical option theory also provides for the valuation of complex payoffs via dynamic replication. However, the options that we model may not have corresponding traded securities, and for this reason, we do not invoke arbitrage in valuing a modular system’s expected performance. Instead, we assume that future levels of performance are priced using a known discount rate. This assumption suppresses many complexities of real product markets, which, if incorporated, would change the details of the model but not the basic results. For a detailed discussion of dynamic replication vs. dynamic programming approaches to real options valuation, see Dixit and Pindyck (1994).
Intuitively, equation (1) says that the development process may result in a product that is
better (X>0) or worse (X<0) than the current model. But on seeing the outcome (i.e., after testing
the design to determine X), the design team has the option to introduce the new version or
continue with the current one. Clearly it will replace the current model if and only if X is
positive. The integral is the conditional expectation of the upper half of a standard normal
distribution; σ(NT)^{1/2} scales that expectation for the risk of the particular development process; and e^{-rT} discounts future cash flows to arrive at a present value.31
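As a numerical illustration, equation (1) can be evaluated in a few lines of Python; the parameter values below are our own arbitrary assumptions, and the right-tail integral is the constant 1/sqrt(2*pi) = .399 quoted above:

```python
# Sketch: evaluating equation (1) for one illustrative set of parameters.
# r, T, sigma and N are assumed values, not taken from the paper.
import math

r, T, sigma, N = 0.05, 2.0, 1.0, 12
tail = 1.0 / math.sqrt(2.0 * math.pi)   # integral of z*n(z) over (0, inf) = .399
V1 = math.exp(-r * T) * sigma * math.sqrt(N * T) * tail
print(round(V1, 3))                     # value of the one-module design option
```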
The essence of modularity lies in the firm’s ability to value subsets of the system and
select for introduction the best of any subset. For a one-module development process, the
company has only two choices: introduce the new product or stick with the current model. If the
product is split into two modules, with a well-defined interface, then the company has four
choices: introduce an entirely new system, keep the old system, or combine one old module with
one new one. Similarly, three modules generate eight candidates, four modules sixteen, etc.32
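A short Python sketch makes the combinatorics concrete (the realized module values are hypothetical; keeping a current module contributes zero, by assumption 2.1):

```python
# Sketch: with j modules, mixing and matching new vs. current modules yields
# 2^j candidate systems; the team introduces the best. Values are hypothetical.
from itertools import product

new_module_values = [0.8, -0.5, 1.2]   # realized value of each new module
# For each module, choose the current version (0.0) or the new version.
candidates = [sum(combo)
              for combo in product(*[(0.0, v) for v in new_module_values])]
print(len(candidates))   # 8 = 2^3 candidate systems
print(max(candidates))   # 2.0 -- the best mix keeps the weak module's old version
```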
The combinatorial properties of modules clearly present a new set of challenges for the
design team, since measuring the performance of subsystems requires additional resources. We
shall return to this issue in the sections on testing and knowledge-building below. For now,
though, we only want to know what the option to select the best subsystem does to the value of the outcomes, not what it costs to find the best. Basically, as the design team adds modules, it adds options -- one for each module. But the value of each option falls in proportion to the decline in its standard deviation.33

31 Strictly speaking, if the x_i are instantaneously uncorrelated with any traded security, and outcomes are small relative to the economy as a whole, then the appropriate discount rate is the risk-free rate (Mason and Merton, 1985).

32 Implicitly we are assuming that the act of modularization partitions the design of both the old and the new product. This assumption is most likely to be true when the older product’s design has been partially rationalized for production (cf. Shirley, 1990). However, there are cases in which the existing product design cannot be partitioned: for example, software engineers speak of the impossibility of breaking apart “spaghetti code” (Korson, 1986). The company must then either fund two modular efforts (see the discussion of experimentation below), or wait a generation to realize the option value inherent in a modular design.
Now let the process be partitioned into a finite number of independent modules of sizes (αN, βN, ...), where α, β, etc. are integers divided by N, and α + β + ... = 1. Figure 2 shows how a
design task of complexity N may be partitioned into both symmetric and asymmetric modules.
By the same reasoning used in equation (1), the value of this effort, denoted V_α, is:

V_\alpha = e^{-rT} \sigma (NT)^{1/2} \left( \alpha^{1/2} + \beta^{1/2} + \cdots \right) \int_0^\infty z \, n(z) \, dz ;   (2)
We may compare equations (1) and (2), and summarize their relationship via a simple rule of
thumb:
Figure 2
Modular Partitions

No. of Modules        Decisions per Module

1                     N

Asymmetric Modular Partitions
2                     αN, βN
3                     αN, βN, γN

Symmetric Modular Partitions
2                     N/2, N/2
3                     N/3, N/3, N/3

33 It may seem counter-intuitive that value declines as risk goes down, but such a relationship is a standard result in options theory.
Proposition 1. Let a design problem of complexity N be partitioned into independent modules of
sizes (αN, βN, ...) as defined above. Then under the assumptions of symmetry of distribution, normality and the conservation of value, the modular partition has value:

V_\alpha = \left( \alpha^{1/2} + \beta^{1/2} + \cdots \right) V_1 ;   (3)
relative to an unmodular design effort.
Proof. Immediate from equations (1) and (2).
From the fact that (α, β, ...) are fractions that sum to one, it follows that the sum of their
square roots is greater than one. Thus, if we ignore the cost of designing interfaces and testing
outcomes (which may be substantial), a modular design is always more valuable than a
monolithic design of the same degree of complexity. Moreover, further modularizations within
an existing design increase its value: if the α-module is split into sub-modules α_1 and α_2, then its contribution to overall value will rise because α_1^{1/2} + α_2^{1/2} > α^{1/2} (for example, splitting a module of size α = 1/2 into two of size 1/4 gives 1/2 + 1/2 = 1 > (1/2)^{1/2} ≈ 0.71).
In general, there is a square root rule at work here. To see this, define V_{N/j} as the value of a symmetric modular partition of a design into j independent modules of size N/j (see Figure 2). Substituting 1/j for (α, β, ...) in equation (3), and collecting terms, we see that in theory, symmetric modularizations obey a simple square root rule:

V_{N/j} = \sqrt{j} \, V_1   (4)
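Proposition 1 and the square root rule are easy to verify by Monte Carlo simulation. In the Python sketch below (parameter values are arbitrary assumptions), exercising one option per module is equivalent to picking the best of the 2^j modular combinations, and the value ratio converges to sqrt(j):

```python
# Sketch: Monte Carlo check of the square root rule (equation 4).
# One option on the whole design vs. one option per module; keeping a module's
# current version contributes zero. Parameter values are arbitrary.
import math
import random

sigma, N, T, j, trials = 1.0, 12, 1.0, 4, 200_000
s_whole = sigma * math.sqrt(N * T)          # std. dev. of the whole design
s_module = sigma * math.sqrt(N / j * T)     # std. dev. of each of j modules

v1 = vj = 0.0
for _ in range(trials):
    v1 += max(random.gauss(0.0, s_whole), 0.0)       # option on the whole
    vj += sum(max(random.gauss(0.0, s_module), 0.0)  # one option per module
              for _ in range(j))

print(round(vj / v1, 3), math.sqrt(j))      # ratio converges to sqrt(4) = 2.0
```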
Equations (3) and (4) are special cases of Merton's (1973) result, that for any distribution
of underlying value, a portfolio of options is more valuable than an option on a portfolio. In
terms of value, complex products may be likened to a portfolio of stocks. In effect, a modular
design creates options that exploit the underlying independence of subsystems within the
product, just as options within a stock portfolio exploit the independence of individual securities.
The options permit the design team to combine the best of the old and the new design, thereby
enhancing the value of the overall effort.34
4. Exploiting Modularity
Modularity creates value by expanding the set of options related to a design. Under simplified
assumptions Proposition 1 offers a simple rule of thumb for estimating how much value might be
captured by modularizing a planned development program. However, much of the power of
modularity lies in its impact on other design investments. Thus exploiting the full potential of a
modular design often means that the whole development program must be reconceptualized.
New design capabilities may suddenly take on value and resources may need to be directed into
entirely new channels. In this section we explore the interactions of modular designs with other
investments in development capability.
4.1 Experimentation.
In the preceding section, we assumed that the design team would create only one new
design per module. In principle, however, the team could decide to run several experiments in
parallel. There is great diversity in practice and corresponding disagreement about the value of
parallel experimental programs. On the one hand, parallel programs are redundant, and much
cost cutting is aimed at eliminating such redundancy. On the other hand, researchers have
pointed to the presence of redundant programs at successful innovative firms, and recommended
the practice to companies seeking to emulate their success (Quinn, 1985).
In general, the value of "redundant" efforts hinges on the uncertainty of the process and
the degree of independence among the experimental results. Loosely speaking, the value of
redundancy increases with both uncertainty and independence. Rene Stulz (1982) provided a general analysis of redundancy in his valuation of the option to select the maximum of two risky assets. Ron Sanchez (1991) later applied Stulz's framework to the special case of redundant R&D.

34 With a change in the definition of certain variables, our model can be extended to options that arise “in the field.” For example, performance in the field below a threshold x* may constitute a malfunction. “Field service modularity” allows such malfunctions to be found and fixed without tearing down and rebuilding the whole system. “Modular upgradability” allows a similar degree of flexibility with respect to subsystem improvements, which the customer may not want (or which may not be available) at the time the system is purchased. However, field options are the first step towards modularity-in-use, and (as was indicated in section 2) the original system designer may not be able to maintain control once modularity-in-use is widespread. For example, modularizing a part that breaks frequently will enhance the efficiency of field service personnel, but may also stimulate entry by competitors into the replacement market.
With a symmetric, joint normal distribution, such as we have assumed, redundant
experiments affect value in a particularly transparent way. Assume that on a given module, the
design team pursues k parallel, independent design efforts, and selects the best of these for the
final design. The distribution of the maximum of the k experiments equals the distribution of the
highest component of the order statistic of a sample of k independent draws from a standard
normal distribution. Its expectation conditional on being greater than zero, denoted Q(k), is:
Q(k) = k \int_0^\infty z \, [N(z)]^{k-1} \, n(z) \, dz   (5)
where N(z) and n(z) are respectively the standard normal distribution and density functions
(Lindgren, 1968).
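Q(k) has no simple closed form for general k, but it is straightforward to evaluate numerically. The sketch below uses SciPy's quad integrator and normal distribution routines (our choice of tools; any quadrature scheme would serve):

```python
# Sketch: numerical evaluation of Q(k) from equation (5).
# Q(1) reproduces the .399 of equation (1); Q(k) grows at a decreasing rate.
from scipy.integrate import quad
from scipy.stats import norm

def Q(k: int) -> float:
    integrand = lambda z: k * z * norm.cdf(z) ** (k - 1) * norm.pdf(z)
    value, _error = quad(integrand, 0.0, 10.0)  # upper limit 10 stands in for infinity
    return value

for k in (1, 2, 5, 10):
    print(k, round(Q(k), 3))   # approximately: 1 -> 0.399, 2 -> 0.681, 10 -> 1.539
```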
Q(k) replaces the integral terms in equations (1) and (2), and the previous results now
appear as special cases. We can immediately generalize Proposition 1 to allow for different
levels of experimentation within each module:
Proposition 2. Let a design problem of complexity N be partitioned into independent modules numbered 1 to j. Let α_i denote the size (degree of complexity) of the i-th module. (As in Proposition 1, the α_i's are integers divided by N, and their sum equals one.) Let k_i denote the number of independent experiments with respect to the i-th module (the k_i's are positive integers). Under the assumptions of symmetry of distribution, normality and the conservation of value, the value of the overall program is:

V(\alpha_1, \ldots, \alpha_j; k_1, \ldots, k_j) = e^{-rT} \sum_{i=1}^{j} \sigma (\alpha_i N T)^{1/2} \, Q(k_i)   (6)

Proof. Immediate by substitution of (5) into (2) and rearranging terms.
Equation (6) condenses many different ways of organizing the design process into a
simple valuation equation. It focuses attention, first, on the overall complexity of the design task,
and second, on the independence of modules and of experiments within modules. If such
independence is a good approximation of reality (a question that can be answered by looking at
the diversity of thinking across different design groups), then the value of the whole can be
modeled as the sum of the values of the individual modules. The value of each module in turn
depends on uncertainty, which we posited was a function of time and per-module complexity,
and on the number of independent experiments.
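To make the summation concrete, the following sketch (our own illustration, with hypothetical
module sizes and experiment counts) evaluates equation (6) relative to the benchmark program,
so that σ, N, T and the discount factor all cancel:

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def Q(k):  # equation (5), as in the earlier sketch
        f = lambda z: z * norm.cdf(z) ** (k - 1) * norm.pdf(z)
        return k * quad(f, 0.0, np.inf)[0]

    # V / V(1,1) = sum_i sqrt(alpha_i) Q(k_i) / Q(1), from equation (6).
    alphas = [0.50, 0.25, 0.25]  # hypothetical module sizes (they sum to one)
    ks = [4, 2, 1]               # more experiments on the largest, most uncertain module
    ratio = sum(np.sqrt(a) * Q(k) for a, k in zip(alphas, ks)) / Q(1)
    print(round(ratio, 2))       # ~3.2 times the value of the benchmark program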
Although it should not be taken literally, equation (6) can guide decisions as to where and
how much to experiment within a given design. It shows that experiments are most valuable
where module uncertainty is greatest, and when experimental outcomes are independent (i.e.,
when there are several quite different lines to pursue). Modularity also enhances the value of
each experiment. Intuitively, it decouples the results of subsystem experiments and allows the
designers to select the best solution for each subsystem. The overall design then has the
properties of the "best of the best."
The interactions of modularity and experimentation can be visualized for the special case
of a symmetric modular partition.35 If all modules have the same degree of uncertainty and
complexity, then, ignoring costs, it will be optimal to run the same number of experiments on
each module. The 2j arguments of equation (6) collapse to two, and the value of the overall
effort, denoted V(j,k) is:
V(j,k) = e^(-rT) σ (TNj)^(1/2) Q(k)    (7)
35 Any modularization must be guided by the internal structure of the design problem itself. To achieve independence
across modules, interfaces must be defined and tasks partitioned according to the natural "fault lines" of the design
and the technologies behind it. Except in very special cases (like software), the natural modules embedded in a design
are unlikely to be symmetric. But symmetric modular partitions are still a useful construct that will permit us to delve
into the interactions between modularity, experimentation and testing in sections below.
Figure 3 graphs this function in three dimensions. The x and y axes are respectively the number
of modules and the number of independent experiments per module, while the z axis is the value
of the overall development program from equation (7). Entries have been scaled so that the
value of one development program for an unmodular design is one: V(1,1) = 1. Throughout the
rest of the paper, all magnitudes will be scaled to this value, which we label “the benchmark
program.”
Figure 3 shows the high degree of interaction between modularity and the design team’s
experimental strategy. On each axis, as the number of modules or experiments rises (holding the
other variables constant), overall value increases at a decreasing rate. For example, one
experiment on each of ten modules is worth 3.16 times the value of the benchmark program; ten
experiments on an unmodularized program is worth 3.86 times the benchmark program. The
highest rate of increase, however, is on the 45o line: ten experiments on each of ten modules is
worth 12.20 times the value of the benchmark program. Thus combining modularity with
multiple experiments turns out to be far more valuable than single-mindedly pursuing modularity
alone or experimentation alone.36 This means that companies which are in the process of
changing their approach to design need to be aware of the possibilities along both dimensions.
These values and the costs of attaining them (discussed below) should be part of managers'
comprehensive picture of their company's design capabilities.
36 Technically, V(j,k) is a "supermodular" function in the sense of Milgrom and Roberts (1990).
Figure 3
[The Impact of Modularity and Experimentation on Value -- 3D View. Surface plot: number of
modules (1-9) on the x axis, number of experiments per module (1-9) on the y axis, and the value
of the overall development program from equation (7), scaled so that V(1,1) = 1, on the z axis.]
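The specific figures quoted above are easy to verify with equation (7). A minimal sketch, scaled
so that the benchmark program has unit value:

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def Q(k):  # equation (5), as in the earlier sketch
        f = lambda z: z * norm.cdf(z) ** (k - 1) * norm.pdf(z)
        return k * quad(f, 0.0, np.inf)[0]

    def V(j, k):
        # Equation (7) relative to the benchmark: V(j,k)/V(1,1) = sqrt(j) Q(k)/Q(1).
        return np.sqrt(j) * Q(k) / Q(1)

    print(round(V(10, 1), 2))   # one experiment on each of ten modules: ~3.16
    print(round(V(1, 10), 2))   # ten experiments on an unmodularized design: ~3.86
    print(round(V(10, 10), 2))  # ten experiments on each of ten modules: ~12.20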
4.2 Speed
Increased speed is another important benefit of pursuing a modular, multi-experiment
design strategy. One measure of speed is the expected performance of the final product divided
by the time period of development.37 This value, denoted S, is simply the undiscounted value of
the options summed in equation (6), divided by T. For a symmetric modular partition:
S(j,k) = σ (Nj/T)^(1/2) Q(k)    (8)
37 This is an unusual formulation of speed compared to most empirical studies of lead time in product development,
which use pure calendar time. However, most studies recognize the importance of controlling for differences in
product complexity and project content in evaluating project lead time. We have taken that notion one step further by
controlling for the quality of the outcome. Thus, in terms of the literature on product development, our measure of
speed is the inverse of a quality adjusted time-to-market.
If we scale this value relative to S(1,1), the speed of the benchmark program, the result is
identical to the function graphed in figure 3. Thus the figure can be viewed in two ways. If the
firm’s market power allows it to capture all of the consumer surplus, it shows the direct impact
of modularity and experimentation on firm value. Alternatively, if the firm is competing for
transient rents in a technology race, it shows the impact of modularity and experimentation on a
critical strategic variable, the speed of product improvement.
According to figure 3, the impact of design strategy variables on speed can be dramatic:
running ten experiments on each of ten modules results in an expected rate of product
improvement that is 12 times higher than that of the benchmark one-module, one-experiment
effort. In reality, the cost of designing modules, running experiments and testing outcomes will
make such high levels of modularity and experimentation uneconomic. Furthermore, many
variables besides those modeled here will generally affect time-to-market. But in technology
races, being even a little bit better -- say one-and-a-half times faster in terms of speed -- can be a
dramatic competitive advantage (Wheelwright and Clark, 1992). Our analysis shows that a
design strategy that takes advantage of implicit modules in a complex system, and mounts
multiple experiments on modules with high uncertainty and independence, can serve as the
cornerstone of a competitive strategy based on constant innovation and rapid product
improvement.
4.3 Variety
In addition to increasing speed, modularity greatly increases the potential variety of a
product line. Variety is most often measured by the number of versions of the product that are
available. If a product is divided into j modules, and ki variants (in addition to the existing
version) are developed for module i, then the total number of products in the line, denoted L, is:
L(j; k_1,...,k_j) = ∏_(i=1)^j (k_i + 1)    (9)
This number explodes as the number of modules increases: for example, five versions of each of
five modules yields 5^5 = 3,125 products; five versions of each of ten modules yields 5^10 ≈
9.8 million products.
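A two-line check of equation (9), with hypothetical variant counts (four new variants plus the
existing version of each of five modules, i.e., five versions per module):

    from math import prod

    variants = [4, 4, 4, 4, 4]            # hypothetical k_i for each of five modules
    print(prod(k + 1 for k in variants))  # (4 + 1)**5 = 3125 distinct products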
These numbers are so staggering that they must be questioned. First, it is clear even
without a model that modularity is the only way to attain high variety at a reasonable cost. In
fact, modularity-in-production makes variety attainable in the context of high-speed, mass
production: that is why modularity is a key element in design-for-manufacturing and group
technology programs aimed at reducing production costs (Han, Hitami and Yoshida, 1985;
Nevins and Whitney, 1989; Shirley, 1990; Sanderson, 1991). By the same token, modularity
creates economies of scale in product design: if the goal is to achieve the highest number of
different options at the lowest cost, then the combinatorial efficiencies of modularity-in-design
are indeed compelling.
However, simply counting the number of versions of a product obscures the fact that
sheer numbers can be increased by adding versions that are practically identical. Just as
significant as the impact of modularity on the number of products, is its effect on the range of
products that can be offered. Modeling the “degree of difference” among products is a
challenging task that goes beyond the scope of this paper. However, the model does provide
ways to begin to think about variety that are deeper than the sheer number of products.
Fundamentally, the purpose of variety is to allow for differences in consumer
preferences: different versions of the product can then come closer to each consumer’s ideal.
Differences in consumer taste are often modeled by locating groups of consumers in a subspace
of product attributes (Lancaster, 1966; Matutes and Regibeau, 1992). In this model each version
of the product is a point in an N-dimensional space of attributes. The design process itself
generates new points, representing new feasible versions of the product.
With a modular design, the problem is not how to generate new versions of the product,
for each experiment on a single module generates thousands or millions of “new” versions of the
product. The problem is to discover which versions are economically significant, that is, which
are “close” enough to a subset of consumers (and far enough away from competitors) to have
value. Thus management of product variety in the context of a modular design involves
collecting, collating, organizing and codifying knowledge about the range of feasible products
and the “location” of potential customers.38
5. The Costs of a Modular Design
Of course, the benefits of modularity -- performance, speed and variety -- are not free.
Modular designs require an investment in architecture and interfaces, and experiments are costly
to run. Most importantly, the experimental results and their modular combinations must be
evaluated to arrive at the final design. Thus in deciding how much modularity to pursue, or how
many experiments to run, the values in equation (6) or (7) need to be compared to the relevant
costs.
As a first approximation, it is reasonable to assume that interface design costs will be
roughly proportional to the number of modules, j, and that the cost of experimentation will be
proportional to the number of experiments, k. Define J as the design cost per module and K as
the cost per experiment, and T(j,k) as the cost of testing j modules and k experiments per
module. The cost C(j,k) of a modular, multi-experiment development program is then:
C(j,k) = jJ + kK + T(j,k)    (10)
The cost of testing depends on whether tests are conducted at the module or the system level, and
is the main focus of the next section.
Although they appear symmetrically in equation (10), there is an important difference
between the cost of modularizing (J) and the cost of experiments (K). If a particular
modularization and the associated interface designs survive for more than one generation, the
partitions and interfaces do not have to be designed again. At that point J has the character of a
sunk cost: it disappears from value calculations from that time forward. Because their cost is
sunk, if the partition and interface designs are proprietary, they can serve as a barrier to entry to
the development process (Sutton, 1991; Ferguson and Morris, 1993). In contrast, experimental
costs (K) are incurred each time a different experiment is run, and thus K has the character of a
variable cost.
5.1 Testing
Our analysis shows that the impact of modularity-plus-experimentation on the value,
speed and range of product improvement can be substantial. However, these economic gains
will not materialize if the design team lacks adequate testing technology. The combinatorial
explosion that results when multiple experiments on many modules must all be tested can easily
overcome the benefits of mix-and-match flexibility.
Testing technologies have made giant strides over the last twenty years. One of the
greatest effects of computer-aided-design (CAD), manufacturing (CAM) and software
engineering (CASE) has been to reduce the cost of evaluating trial solutions at many points in
the design process. Where once it was necessary to construct a physical prototype in order to test
the behavior of a module or a system, emulators and simulators now provide virtual prototypes at
a fraction of the cost. Design changes that once required costly handwork or rewiring, can now
be coded in a few keystrokes.39 This drop in the cost and time required for testing constitutes a
profound change in design technology, whose economic implications are only beginning to be
understood.40
39 Bauer, Collar and Tang (1992) vividly describe the difference between debugging a physical prototype, and using a
sophisticated simulation package to test circuit design:
[I]n the past, the back of a debugged board would look like a plate of spaghetti, with 300 or more wires
stretching to and fro. But when we debugged the [computer]-tested board, there were only six -- just six --
wires on the back.... In the past, debugging a system in its entirety usually took us as long as a whole year.
But, thanks to [the simulation], we had a prototype ... up and running just six weeks after debugging began.
(pp. 105-106; italics in original)
40 Of course, the new technology requires investment (a sunk cost) in tools and models that represent the objects
being tested. Sanderson (1991) models the manufacturing efficiency gains from such investments but does not
consider their effect on experimentation.
Testing -- whether by hand or by computer -- can take place at the level of the whole
system or at the level of individual modules. In general, testing at the module level requires
greater depth of knowledge than testing at the system level.41 To predict whether a particular
module design will work without imbedding it in a whole system means that the module's
relationship to the rest of the system must be understood in some detail. Thus the development of
system-level tests usually precedes the development of module-level tests in the evolution of a
product.
41 Depth of knowledge should not be confused with cost. Modules are (by definition) smaller and simpler than the
systems in which they function, and thus it is usually cheaper to construct and carry out module-level tests than
system-level tests. But to calibrate the performance of a module, it is necessary to know what the system-as-a-whole
is supposed to do, and to have some idea of how the module contributes to its overall function. And often an
imperfectly-understood module can be placed in a prototype system to "see if it runs." Of course this approach entails
cycling, rework and debugging after the fact, all of which are extremely costly. But the costs can be avoided only
through greater knowledge about the system and components (Clark, 1985).
5.1.1 System Tests
If all testing takes place at the system level, and the cost per test is c, then the overall cost
T(j,k) of testing j modules and k experiments per module is:
T(j,k) = [(k+1)^j − 1] c    (11)
This number is proportional to the number of potential products (see equation 9), and increases
very rapidly as j, the number of modules, goes up. Therefore unless system-level test costs are
very low indeed (on the order of one-millionth the value of the benchmark program), the value-
maximizing design strategy is to pursue relatively unmodularized development with limited
amounts of experimentation.
Figure 4 graphs the net present value of the overall program (obtained by subtracting
equation 11 from equation 7) for two levels of testing cost. In the first instance, a test costs 10%
of the value of the benchmark program. In the second instance, a test costs 1% of the value of the
benchmark program. To complete the specifications, we assume that the costs of designing
modules and running experiments are each equal to 5% of the value of the benchmark program:
J = K = .05 V(1,1).
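Under these stylized assumptions, the optima reported in Figure 4 can be reproduced by a direct
grid search. The sketch below is our own construction: it subtracts module, experiment and
system-level testing costs (equations 10 and 11) from the scaled value of equation (7):

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def Q(k):  # equation (5), as in the earlier sketch
        f = lambda z: z * norm.cdf(z) ** (k - 1) * norm.pdf(z)
        return k * quad(f, 0.0, np.inf)[0]

    def npv_system(j, k, c, J=0.05, K=0.05):
        value = np.sqrt(j) * Q(k) / Q(1)      # equation (7), scaled so V(1,1) = 1
        tests = ((k + 1) ** j - 1) * c        # equation (11): system-level testing
        return value - j * J - k * K - tests  # equation (10)

    grid = [(j, k) for j in range(1, 11) for k in range(1, 11)]
    for c in (0.10, 0.01):
        print(c, max(grid, key=lambda jk: npv_system(*jk, c=c)))
    # c = 0.10 -> (1, 9): one module, nine experiments
    # c = 0.01 -> (2, 8): two modules, eight experiments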
Figure 4
[Profit with System-Level Testing; two surface plots with number of modules (1-9) and number
of experiments (1-10) on the horizontal axes and value on the vertical axis. First panel: cost per
test = .1 V(1,1); optimum: 1 module, 9 experiments. Second panel: cost per test = .01 V(1,1);
optimum: 2 modules, 8 experiments.]
Figure 4 shows what a heavy burden system-level testing imposes on modular designs. If
system-level test costs are on the order of 10% of V(1,1), as shown in the first panel, the option
value created by modularity is completely swamped by the combinatorial cost of testing. The
optimal configuration of the development program as a whole is to run nine experiments on the
unmodularized system. In fact, in this highly stylized example, for any level of testing cost
higher than 2% of V(1,1), the best economic solution is not to modularize at all. A company
might invest in modularity for other reasons, e.g., to obtain production economies of scale, but as
long as testing costs remained high and system testing was the norm, modularity would not make
the development process more efficient.
The picture changes only slightly when costs drop by an order of magnitude. As the
second panel shows, if the cost per test is 1% of V(1,1), a larger number of modules or
experiments becomes economically feasible. But when one combines modules and experiments
(moving out along the 45o line in the XY plane), the combinatorial explosion of testing costs
once again swamps the value of mix-and-match flexibility. Under our stylized assumptions, the
optimal configuration is to design two modules and run eight experiments.
5.1.2 Module Tests
The fact that testing costs foreclose a set of valuable options creates incentives to invest
in better testing technologies. However, as we have seen, simply lowering the cost per test even
by several orders of magnitude is not enough, for the number of combinations to be tested
increases exponentially with the number of modules. Another approach is to design tests so that
the best module can be selected "at its own level."
To evaluate a subsystem without embedding it in a working prototype requires detailed
knowledge about what the subsystem contributes to the whole, as well as how different
subsystems interact. Potential dysfunctional effects of the subsystem must also be understood,
so that potentially harmful interactions can be identified ahead of time. Thus module testing
demands functional knowledge, pre-set standards and ways of measuring a module’s
performance against the standards. Knowledge of how to test goes beyond the knowledge needed
to partition tasks, design interfaces or conduct experiments, although knowledge gleaned from
these other activities may contribute to the development of testing technology.
Testing is thus another dimension on which a company may usefully invest in knowledge
about a complex system. The benefits of such an investment are twofold. First is a reduction in
direct testing costs. If module testing replaces system testing, the overall cost T(j,k) falls
precipitously for higher levels of j and k. The cost of testing j modules and k experiments per
module becomes:
T(j,k) = j k c    (12)
where c again denotes the cost of one test. It is significant that in moving from system-level
testing to module-level testing, the number of modules, j, now enters as a multiplicative factor
and not as an exponent.
Second and even more important is the “unleashing” of option values implicit in a
modular, multi-experiment design strategy. What module-level testing can mean for management
of a development program is shown in Figure 5, which graphs net present values for the same
single-test costs as before: c = .1 V(1,1) and .01 V(1,1).42 With "high-cost" testing, the optimal
configuration is now to divide the design into 7 modules and run 5 experiments on each. But
truly dramatic changes occur when module testing is combined with low single-test costs. With
test costs on the order of 1% of the value of a program, the optimal program configuration -- 170
modules and 3 experiments per module -- is no longer on the chart. Moreover, the value of the
entire development effort increases substantially. Organizing ten experiments on each of ten
modules now creates ten times as much value as the benchmark unmodularized program. (In
contrast, the highest value achievable with only system-level testing was less than four times the
value of the benchmark program.)
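The same grid search, with the combinatorial cost of equation (11) replaced by the linear
module-level cost of equation (12), reproduces these optima (again a sketch under the stylized
assumptions above):

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def Q(k):  # equation (5), as in the earlier sketch
        f = lambda z: z * norm.cdf(z) ** (k - 1) * norm.pdf(z)
        return k * quad(f, 0.0, np.inf)[0]

    def npv_module(j, k, c, J=0.05, K=0.05):
        value = np.sqrt(j) * Q(k) / Q(1)           # equation (7), scaled
        return value - j * J - k * K - j * k * c   # equation (12): module-level tests

    grid = [(j, k) for j in range(1, 11) for k in range(1, 11)]
    print(max(grid, key=lambda jk: npv_module(*jk, c=0.10)))  # (7, 5)
    print(round(npv_module(10, 10, c=0.01), 1))  # ~10.2: ten times the benchmark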
5.1.3 Testing Costs and Product Evolution.
The dichotomy between system and module testing shown in the figures is too stark to
represent reality. In practice, designers are not restricted to only system or only module tests, and
the optimal strategy is to use whatever testing tools and protocols are available. Firms that design
complex systems must therefore invest in both types of testing technologies.
42 For purposes of comparison, we are assuming that it costs the same amount to test a module as a system, despite the
fact that modules are less complex than systems. In reality, module tests may be cheaper than system tests.
Figure 5
[Profit with Module-Level Testing; two surface plots with number of modules (1-9) and number
of experiments (1-10) on the horizontal axes and value on the vertical axis. First panel: cost per
test = .1 V(1,1); optimum: 7 modules, 5 experiments. Second panel: cost per test = .01 V(1,1);
optimum off the chart.]
At any point in time, however, the balance between system and module testing will be
determined by the company’s recent investments. Specifically, has it organized its development
process around better and cheaper system-level tests and invested in the latest tools of virtual
system modeling? Or, has it placed its bets on module testing and screening off-the-shelf
components? Research conducted in the automobile, computer workstation, and software
industries shows that companies differ substantially on these dimensions of investment
strategy.43 Some differences are dictated by technology and industry circumstances, but among
close competitors, different approaches to testing may arise because of different views about the
product’s likely evolution.
Large investments in system-level testing are appropriate when the product’s boundaries
(the activities that must take place “under one roof”) have stabilized, and the potential payoff to
further modularization is low. Essentially this means that the interdependencies among the
components of the product are both tight and not very well understood. In this case the only valid
test of a system is to “make” it, where “making” it may mean constructing a virtual image. The
ability to construct, test and revise the design rapidly and at low cost can then constitute a major
competitive advantage.
However, large expenditures on elaborate system-level testing tools may prove
unproductive if system modularity itself is changing over time. Remodularizing the system
requires testing tools that make the definition and selection of components easier. Some of these
are similar to those used in system-level testing: for example, a large-scale virtual image of the
system is useful in screening off-the-shelf components as well as in testing inhouse designs. But
there is one critical difference: module-level testing rests on the principle of information hiding.
By definition, a module-level test does not require full knowledge of the structure of the object
being tested, but only knowledge of a subset of relevant parameters. Knowing what facts about
the underlying components do not need to be communicated to the larger system (and hence can
be left out of the system’s virtual image) is the true test of the designers’ understanding of how
the system really works.
Where the cost of testing is very low (as it is now in many industries), our distinction
between system and module-level testing may appear to be hair-splitting. But product designs
may evolve differently depending on which testing strategy is emphasized, and low test costs
tend to magnify the incentives to pursue different routes. Figure 6, which plots values for testing
costs on the order of one-millionth of the value of the benchmark program (.000001 V(1,1)),
illustrates how testing technology may influence the evolution of a product’s design over several
generations.
The upper panel of figure 6 shows the payoffs to modularity as perceived by a firm that is
focused on system-level tests. In effect, its testing technology sets limits on where its designs can
go: this is what is meant by the sharp dropoff in the function that occurs as more modules are
combined with more experiments. Even with extremely low costs per test, at some point the
combinatorial explosion takes hold and it becomes uneconomic to test all possible variants of the
system. As long as all industry participants perceive the design in the same way, the architecture
of the product will stabilize in this configuration.44 At this point, those companies that were first
to invest in system testing and other design capabilities for this configuration will enjoy a
relatively long-term competitive advantage.
But suppose some industry participants perceive that further modularization is possible
through investment in task partitioning, interfaces and, importantly, module-level tests. The
second panel suggests a very different evolutionary pattern. Here the testing technology does not
limit the amount of modularity that is “manageable,” and as a result, for any given level of
modularity, there are incentives to invest in still more modularity. This means that the design
may not stabilize very quickly nor very close to its current configuration. In these circumstances,
investments in inhouse system testing capabilities that do not envision further modularization
will be a risky and possibly misguided allocation of resources.
44 This stabilization corresponds to the emergence of a dominant design. See Abernathy and Utterback (1978) and
Utterback (1994).
Figure 6
[Two surface plots, each with cost per test = .000001 V(1,1), number of modules (1-17) and
number of experiments (1-50) on the horizontal axes, and value on the vertical axis. First panel:
Profit with System-Level Testing; optimum: 4 modules, 20 experiments. Second panel: Profit
with Module-Level Testing.]
6. Modularity, Knowledge and Organization
Beyond its impact on value, speed or variety of an individual development program, the
pursuit of modularity-in-design changes the way in which an enterprise creates, transmits and
stores knowledge. Therefore, a modular approach to product design is both a strategic choice,
and a commitment to a different organizational structure and pattern of evolution.
To see how modularity affects an organization through its knowledge base, consider what
happens in a firm where design is not modular. Generally, a design team taking a “monolithic”
approach will not attempt to break down the product to see what principles govern its
performance. As a result, such teams achieve only a relatively superficial understanding of the
product and its production system. Moreover, within the product and the production system
nothing will be standard: everything will be unique and idiosyncratic to the particular design.
Monolithic products and production systems may be well-adapted to a particular set of external
conditions, but in general they will not support rapid change or high variety. Nevertheless, most
complex products and production systems begin with fairly monolithic designs, because to
decompose a task requires knowledge.
A modular design approach breaks down a product into discrete subsystems and
components, organizes these via an architecture, and links the individual modules using standard
interfaces. The goal of modularization is always to achieve functionality of the whole, but
independence across the modules and subsystems. To achieve this goal, the design team must
have three types of knowledge: (1) “architectural knowledge” of how components can be
assembled into a working system; (2) “interface knowledge” of how to connect modules and how
modules affect one another; and (3) “component knowledge” of how to design and build specific
components. These three types of knowledge must be “in place” prior to the modularization
itself. To the extent that there are gaps in architectural, interface or component knowledge, the
initial modularization may fall short: either the system-as-a-whole will not function, or, as is
more common, the modules will not be fully independent and significant effort will be needed to
integrate the system after the fact.
In addition, to achieve the economic benefits inherent in modularity-in-design, the
designers must have testing technology that permits modules to be selected “at their own level.”
Knowledge of what to test and how to test is different from (and may be somewhat deeper than)
architectural, interface or component knowledge, but is just as critical to the goal of efficient
design improvement. Testing technology also needs to be in place early in the design process:
ideally test standards should be set at the time of the initial task partition so that subsystem
design groups will know what targets to shoot for.
Thus in very specific ways -- in terms of architecture, interfaces, components and tests --
a modular design rests on greater knowledge of how the product works than a monolithic
design. Attempts to modularize beyond the boundaries of existing knowledge (i.e., splitting
components whose functional interdependencies are not well understood) will lead to costly
iterations when the time comes to integrate the modules into a functioning system. Therefore, at
the outset of a development effort, a key managerial question is: how much modularity will the
enterprise’s knowledge base support? To make an informed judgment on this issue, managers
must understand the architectural and technological tradeoffs that underlie the basic design, and
appreciate the potential benefits and costs of achieving a given degree of modularity.
Knowledge is prerequisite for implementing a modular design approach, but further
knowledge is gained through the process. First, designers will be asked to partition their
information, with each group focusing on one specific component of the whole system. At the
same time, the designers must agree among themselves on the specific architecture, interfaces
and test procedures that will govern the system. Co-ordinating the task partition and overseeing
what can be complex negotiations over interfaces or tests, consumes time and effort early in the
process. But the result is a design that makes tacit knowledge explicit, and that provides for the
acquisition of more knowledge later on. Knowledge acquired late in the process can be
accommodated into the design precisely because it has been “hidden” within specific modules.45
45 Indeed, a modular design can accommodate new knowledge (involving hidden structures) even after the end of the
development process. This is the principle that underlies field upgradability. Modular upgradability tends to blur the
time boundaries of the development process (Iansiti, 1994; Cusumano, 1994). It can also be of great value to users,
but, as with any kind of modularity-in-use, may stimulate entry and competition.
Of course, a design team pursuing a non-modular strategy must also create some
breakdown of the effort into specific tasks. However, the motivation of task partitions in such
cases is merely to establish a workable division of labor, and thus assignments may follow a
simple “equalize the workload” rule or some other natural division (e.g., by number of parts). In
contrast to an ad hoc partition, a modular partition creates a framework that illuminates the
product’s internal functions and clarifies the relationship of the parts to the whole. Because it
corresponds to the way the product works, a modular partition is not only a useful way to divide
tasks, but is also a useful way to organize and store knowledge obtained in the design process
itself.46 This new knowledge in turn makes possible further refinements (and perhaps higher
degrees of modularity) in subsequent rounds of development.
46 Recall that the design library described in Shirley (1990) was organized by module.
Greater depth of knowledge will make possible improved system design, better module
performance, expanded variety and more rapid technical change, over and above what is
attainable through pure mix-and-match flexibility. Thus if a firm uses modularity not just as a
one-time design approach, but as a template for its evolving knowledge base, modularity will not
be neutral (as was assumed) but will in fact result in better designs in later rounds of
development.
The link between modularity and the firm’s knowledge base highlights the connection
between a modular design strategy and the organization of the firm itself. The way a firm defines
subsystems and modules within a product not only sets up targets of opportunity within a design
program, but also determines a pattern of focus and specialization for the company’s human
resources. Moreover, as Henderson and Clark (1990) have argued, the interfaces that govern the
relationship between modules have counterparts in the organizational routines and processes that
link the people engaged in subsystem and module development work to one another. Thus the
modular structure of the product comes to be reflected in the organizational structure of the firm
making it.
But there is a risk. When modularization occurs as a result of “natural” product evolution
(the process described in section 2), much of the framework may be tacit and embedded in
organizational routines (Nelson and Winter, 1982). There will then be a tendency to “see” and
“store” new information about components (including ways to test them) in the specific
organizational units charged with responsibility for those components. In contrast, the
architecture and interfaces may not be the responsibility of any particular person or group. In
this case there will be no natural collection point or repository for information about how to
combine existing components in new ways or how to streamline interactions among components.
Therefore unless managers take explicit steps to define and preserve knowledge about
architectures and interfaces, such knowledge can easily “fall through the cracks” of the
organization.47
47 The failure of otherwise innovative firms to recognize the potential benefits of certain "architectural" product
reconfigurations is a persistent theme in the literature on innovation (Tushman and Anderson, 1986; Henderson and
Clark, 1990; Christensen, 1994; Langlois, 1994).
Its connections with knowledge and organizational structure give modularity the
character of an organizational capability. It is a capability with great potential, but it requires
investment and is difficult to execute. Indeed, without major investment of time, resources and
creativity in testing technology, and without skill and ingenuity in its application, the power of
modularity will be held captive to the tyranny of combinatorial testing costs. However, the firm
that understands how to create effective modules, how to set up effective interfaces, how to
design clever parallel experiments, how to do module level testing, and how to do low cost
system level testing, may reap substantial rewards. But, if it is to be successful, modularity must
become deeply embedded in what the firm knows, how it is organized, and how it gets work
done.
7. Conclusion
In this paper we have tried to build a framework for understanding modularity as a
strategy of design. In the technical realm, modularity has been implemented in many industry
settings, and, accordingly, it has been the focus of much discussion and analysis in the
engineering literature. But our understanding of the economic incentives that lie behind
modularity, and its consequences in different competitive settings is limited. In developing our
framework we have tried to put modularity-in-design in context by comparing it to modularity-
in-production and modularity-in-use. In addition, we constructed a model that captures the
option value inherent in modularity, as well as the costs of pursuing this approach. Although the
model is relatively simple and stylized, it does suggest some important connections between
modularity-in-design and the management and organization of the firm. From an academic
perspective, it begins the process of understanding the economic reasons to pursue modularity-
in-design, and its implications for practice and for future research.
From a managerial perspective, our analysis of modularity takes choices of a technical
nature -- product structure, interfaces, testing technology, and associated investments -- and puts
them squarely on the agenda of a firm’s senior management. It shows how design decisions,
many of which may be buried deep within the organization, can have a major impact on value
created for consumers and on a firm’s ability to compete. It also shows how seemingly unrelated
technical decisions -- task assignments and testing technologies, for example -- are intimately
connected in the way they affect value, speed and variety. Because they are inter-related and have
a large potential impact on value and competitiveness, such issues ought not to be decided in
isolation, nor by rote application of precedent. Our analysis provides a way to frame these
decisions in business terms, and highlights the central issues to be considered in resolving them.
REFERENCES
Abernathy, William J. and James M. Utterback (1978) “Patterns of Industrial Innovation,”
Technology Review, 80(7):40-47.
Alexander, Christopher (1964) Notes on the Synthesis of Form, Harvard University Press,
Cambridge, MA.
Baldwin, Carliss Y. and Kim B. Clark (1992) “Capabilities and Capital Investment: New
Perspectives on Capital Budgeting,” Journal of Applied Corporate Finance, 5(2):67-82.
Bauer, Roy A., Emilio Collar and Victor Tang (1992) The Silverlake Project, Oxford University
Press, New York, NY.
Black, Fischer and Myron Scholes (1973), "The Pricing of Options and Corporate Liabilities,"
Journal of Political Economy, 81: 637-654.
Brooks, Frederick P. (1975) The Mythical Man-Month, Addison-Wesley, Reading, MA.
Christensen, Clayton M. (1994) “The Rigid Disk Drive Industry: A History of Commercial and
Economic Turbulence,” Business History Review, 67(Winter):531-588.
Christensen, Clayton M. (1994) “Industry Maturity and the Vanishing Rationale for Industrial
R&D,” Harvard Business School Working Paper 94-059, Boston, MA, (March).
Christensen, Clayton M. and Richard S. Rosenbloom (1994) “Explaining the Attacker’s
Advantage: Technological Paradigms, Organizational Dynamics and the Value Network,”
Research Policy 22(forthcoming).
Clark, Kim B. (1985), "The Interaction of Design Hierarchies and Market Concepts in
Technological Evolution," Research Policy, 14, 5: 235-251.
Clark, Kim B. (1988), "Knowledge, Problem Solving and Innovation in the Evolutionary Firm,"
Harvard Business School Working Paper.
Clark, Kim B. and Takahiro Fujimoto (1991), Product Development Performance: Strategy,
Organization and Management in the World Auto Industry, Harvard Business School
Press, Boston, MA.
Cohen, Morris A., Jehoshua Eliashberg and Teck H. Ho, "New Product Development: Performance,
Timing and the Marketing-Manufacturing Interface,” Working Paper 92-04-01, Wharton
School, University of Pennsylvania, Rev. April 1993.
Cusumano, Michael A. (1991) Japan’s Software Factories: A Challenge to U.S. Management,
Oxford University Press, Oxford, UK.
Cusumano, Michael A. and Stanley A. Smith (1994) "Beyond the Waterfall: A Comparison of
'Classic' versus PC Software Development," mimeo, Massachusetts Institute of
Technology (July).
Dixit, Avinash K. and Robert S. Pindyck, (1994) Investment Under Uncertainty, Princeton
University Press, Princeton, NJ.
Dorfman, Nancy S. (1986) Innovation and Market Structure: Lessons from the Computer and
Semiconductor Industries, Ballinger, Cambridge, MA.
Economides, Nicholas and Steven C. Salop (1992) “Competition and Integration among
Complements and Network Market Structure,” Journal of Industrial Economics,
40(1):105-123.
Eppinger, Steven D. (1991), "Model-based Approaches to Managing Concurrent Engineering,”
Journal of Engineering Design, 2 (4).
Eppinger, Steven D. and Kent R. McCord (1993) “Managing the Iteration Problem in Concurrent
Engineering,” MIT Working Paper 3594-93-MSA, August.
Eppinger, Steven D., Daniel E. Whitney, Robert P. Smith and David Gebala (1994) “A Model-
Based Method for Organizing Tasks in Product Development,” Research in Engineering
Design, forthcoming.
Farrell, Joseph, Hunter K. Monroe and Garth Saloner (1993) “Order Statistics, Interface
Standards and Open Systems,” mimeo, University of California, Berkeley, CA.
Ferguson, Charles H. and Charles R. Morris (1993) Computer Wars: How the West Can Win in a
Post-IBM World, Times Books, New York, NY.
Foster, Richard (1986) Innovation: The Attacker's Advantage, Summit Books, New York, NY.
Freeman, Christopher (1982) The Economics of Industrial Innovation, 2nd Ed., MIT Press, Cambridge, MA.
Garud, Raghu and Arun Kumaraswamy (1993) “Changing Competitive Dynamics in Network
Industries: An Exploration of Sun Microsystems’ Open Systems Strategy,” Strategic
Management Journal, 14:351-369.
Gomes-Casseres, Benjamin (1994) Collective Competition: International Alliances in High
Technology, Harvard University Press, Cambridge, MA.
Hall, Bronwyn H. (1990) “The Impact of Corporate Restructuring on Industrial Research and
Development,” Brookings Papers on Economic Activity, 1990(1):85-133.
Han, I., K. Hitami, and T. Yoshida (1985), Group Technology Applications to Product
Management, Kluwer-Nijhoff Publishers, Boston, MA.
Harrison, J.M. (1985) Brownian Motion and Stochastic Flow Systems, John Wiley and Sons, New
York.
Henderson, Rebecca M. and Kim B. Clark (1990), "Architectural Innovation: The
Reconfiguration of Existing Product Technologies and the Failure of Established Firms,"
Administrative Science Quarterly, 35: 9-30.
Iansiti, Marco (1994) “Technology Integration: Managing Technological Evolution in a Complex
Environment,” Research Policy (forthcoming).
Iansiti, Marco (1994) “Shooting the Rapids: System Focused Product Development in the
Computer and Multimedia Environment,” mimeo, Harvard Business School, Boston, MA
(July).
Jensen, Michael C. and William H. Meckling (1992) "Specific and General Knowledge, and
Organizational Structure," Contract Economics, Lars Werin and Hans Wijkander, eds.,
Basil Blackwell, Ltd., Oxford, UK, pp. 251-274.
Jensen, Michael C. (1993) “The Modern Industrial Revolution, Exit and the Failure of Internal
Control Systems,” Journal of Finance, 48(3):831-880.
Kernighan, Brian and P.J. Plauger (1976) Software Tools, Addison-Wesley, Reading, MA.
Korson, Timothy D. (1986) “An Empirical Study of the Effect of Modularity on Program
Modifiability,” unpublished PhD dissertation, Georgia State University, Atlanta, GA.
Lancaster, K.J. (1966), "A New Approach to Consumer Theory," Journal of Political Economy,
74: 132-157.
Langlois, Richard N. (1992a) “External Economies and Economic Progress: The Case of the
Microcomputer Industry,” Business History Review, 66:---.
Langlois, Richard N. (1992b) “Capabilities and Vertical Disintegration in Process Technology:
The Case of Semiconductor Fabrication Equipment,” mimeo, Univeristy of Connecticut,
Storrs, CT (November).
Langlois, Richard N. (1994) “Cognition and Capabilities: Opportunities Seized and Missed In the
History of the Computer Industry,” mimeo, University of Connecticut, Storrs, CT (March)
Langlois, Richard N. and Paul L. Robertson (1992) "Networks and Innovation in a Modular
System: Lessons from the Microcomputer and Stereo Component Industries," Research
Policy, 21: 297-313.
Lindgren, B.W. (1968), Statistical Theory, MacMillan Publishing Co., New York, NY.
Matutes, Carmen and Pierre Regibeau (1992) “Compatibility and Bundling of Complementary
Goods in a Duopoly,” Journal of Industrial Economics, 40(1):37-54.
Marples, David L. (1961), "The Decisions of Engineering Design," IRE Transactions on
Engineering Management 2: 55-71.
Mason, Scott P. and Robert Merton (1985) “The Role of Contingent Claims Analysis in
Corporate Finance,” Recent Advances in Corporate Finance, E. Altman and M.
Subrahmanyam, Richard D. Irwin, Homewood IL.
Merton, Robert C. (1973), "Theory of Rational Option Pricing," Bell Journal of Economics and
Management Science, 4: 141-183.
Merton, Robert C. (1990), Continuous-Time Finance, Basil Blackwell, Cambridge, MA.
Milgrom, Paul and John Roberts (1990), "The Economics of Manufacturing: Technology,
Strategy and Organization," American Economic Review, 80(3): 511-528.
Myers, Glenford J. (1975) Reliable Software through Composite Design, Van Nostrand Reinhold
Company, New York.
Nelson, Richard R., and Sidney G. Winter (1982), An Evolutionary Theory of Economic Change,
Harvard University Press, Cambridge, MA.
Nevins, James L. and Daniel E. Whitney (1989) Concurrent Design of Products and Processes,
McGraw-Hill Publishing Company, New York, NY.
Nichols, Nancy A. (1994) "Scientific Management at Merck: An Interview with CFO Judy
Lewent," Harvard Business Review, 72(1):88-99 (January-February).
Parnas, D.L. (1972a) “A Technique for Software Module Specification with Examples,”
Communications of the ACM, 15:330-336, May.
Parnas, D.L. (1972b) “On the Criteria to Be Used in Decomposing Systems into Modules,”
Communications of the ACM, 15:1053-1058, December.
Parnas, D.L., P.C. Clements and D.M. Weiss (1985) "The Modular Structure of Complex
Systems,” IEEE Transactions on Software Engineering, SE-11:259-266, March.
Pindyck, Robert S. (1991), "Irreversibility, Uncertainty and Investment," Journal of Economic
Literature, 29: 1110-1152.
Pugh, Emerson W., Lyle R. Johnson and John H. Palmer (1991) IBM’s 360 and Early 370
Systems, MIT Press, Cambridge, MA.
Quinn, James Brian (1985) “Managing Innovation: Controlled Chaos,” Harvard Business Review,
63(3):73-84.
Rinderle, J.R. and N.P. Suh, (1982) “Measures of Functional Coupling in Design,” ASME Journal
of Engineering for Industry, 104:383-388, November.
Rothwell, Roy and Paul Gardiner (1988) Journal of Marketing Management, 3(3):372-387.
Sanchez, Ronald A. (1991), "Strategic Flexibility, Real Options and Product-based Strategy,"
unpublished Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA.
Sanderson, Susan Walsh (1991) "Cost Models for Evaluating Virtual Design Strategies in
Multicycle Product Families," Journal of Engineering and Technology Management, 8:
339-358.
Schach, Stephen R. (1993) Software Engineering, 2nd Ed., Irwin, Burr Ridge, IL.
Shirley, Gordon V. (1990), "Models for Managing the Redesign and Manufacture of Product
Sets," Journal of Manufacturing and Operations Management, 3: 85-104.
Simon, Herbert A. (1969) The Sciences of the Artificial, MIT Press, Cambridge, MA.
Stevens, W.P., G.J. Myers and L.L. Constantine (1974) "Structured Design," IBM Systems
Journal, 13(2):115-139.
Stuart, Toby and David Collis, “Cooper Industries’ Corporate Strategy,” Harvard Business School
Publishing Division, Boston, MA.
Stulz, Rene M. (1982), "Options on the Minimum or the Maximum of Two Risky Assets,"
Journal of Financial Economics, 10: 161-185.
Suh, Nam P. (1990) The Principles of Design, Oxford University Press, Oxford, England.
Sutton, John (1991) Sunk Cost and Market Structure: Price Competition, Advertising and the
Evolution of Concentration, MIT Press, Cambridge, MA.
Suzue, Toshio and Akira Kohdate (1990) Variety Reduction Program: A Production Strategy for
Product Diversification, Productivity Press, Cambridge, MA.
Trajtenberg, Manuel (1990) Economic Analysis of Product Innovation: The Case of CT
Scanners, Harvard University Press, Cambridge, MA.
Tushman, Michael L. and Philip Anderson (1986) "Technological Discontinuities and
Organizational Environments," Administrative Science Quarterly, 31:439-465.
Ulrich, Karl T. (1992), "The Role of Product Architecture in the Manufacturing Firm," Working
Paper No. 3483-92-MSA, Sloan School of Management, MIT, Cambridge, MA.
Ulrich, Karl T. and Steven D.Eppinger (1994) Methodologies for Product Design and
Development, McGraw-Hill, New York, NY.
Utterback James (1994) Mastering the Dynamics of Innovation: How Companies Can Seize
Opportunities in the Face of Technological Change, Harvard Business School Press,
Boston, MA.
Von Hippel, Eric (1990), "Task Partitioning: An Innovation Process Variable," Research Policy,
19: 407-418.
Wheelwright, Steven C. and Kim B. Clark (1992), Revolutionizing Product Development, Free
Press, New York, NY.
Wruck, Karen H. and Michael C. Jensen (1993) "Science, Specific Knowledge and Total Quality
Management," mimeo, Harvard Business School (July).