MAIN ARTICLE
Determining intervention thresholds that
change output behavior patterns
Bob Walrave*
Abstract
This paper details a semi-automated method that can calculate intervention thresholds—that is,
the minimum required intervention sizes, over a given timeframe, that result in a desired change
in a system's output behavior pattern. The method exploits key differences in atomic behavior
profiles that exist between classifiable pre- and post-intervention behavior patterns. An automated
process of systematic adjustment of the intervention variable, while monitoring the key difference,
identifies the intervention thresholds. The results, in turn, can be studied and presented in
intervention thresholds graphs in combination with final runtime graphs. Overall, this method
allows modelers to move beyond ad hoc experimentation and develop a better understanding of
intervention dynamics. This article presents an application of the method to the well-known
World 3 model, which helps demonstrate both the procedure and its benefits.
Copyright © 2017 System Dynamics Society
Syst. Dyn. Rev. 32, 261–278 (2016)
Additional Supporting Information may be found online in the supporting information tab for this
article.
Introduction
Because so many systems and problems are characterized by dynamic
complexity, the number of studies that apply system dynamics (SD) has
increased accordingly (e.g., Repenning, 2001; Romme et al., 2010; Van
Oorschot et al., 2013). Applications of SD range from global-level analyses
(Meadows et al., 2004) to studies on the firm (Walrave et al., 2015) or
individual (Repenning, 2001) levels. Of particular interest are intervention
studies, which "explore the degree of change in model behavior as a result
of alternative policies or scenarios" (Yücel and Barlas, 2015, p. 173). In other
words, intervention studies pertain to how an issue or problem can be
corrected (Forrester, 1961) and rely on model-based experimentation. Such
explorations, often referred to as "what-if" experiments (Morecroft, 1988),
typically are conducted through ad hoc adjustments of key model parameters
(e.g., Repenning, 2001; Walrave et al., 2011). Yet such a manually conducted
approach implies that most modelers work with a very limited number of
experiments and evaluations, simply due to time constraints, which in turn
limits the policy formulation and analysis phase of SD (Sterman, 2000).
* Correspondence to: Bob Walrave, School of Industrial Engineering, Eindhoven University of Technology.
E-mail: b.walrave@tue.nl
Accepted by Markus Schwaninger, Received 9 November 2015; Revised 24 May 2016, 7 December 2016 and
1 February 2017; Accepted 3 February 2017
System Dynamics Review vol 32, No 3-4 (July–December 2016): 261–278
Published online in Wiley Online Library
(wileyonlinelibrary.com) DOI: 10.1002/sdr.1564
Although scholars have made significant progress with automating various
parts of the SD modeling process—including advances in automated sensitivity
analyses (e.g., Ford, 1990; Pruyt and Islam, 2016), the inclusion of different
statistical approaches for rigorous parameter estimation (e.g., Oliva, 2003;
Peterson, 1980), and parameter specification based on automated behavior
pattern feature recognition (e.g., Yücel and Barlas, 2011)—modelers still lack
a focused method to automate what-if experiments. In particular, no specifically
designed approach exists to determine "intervention thresholds," defined here as
the minimum required intervention sizes, over a given time span, to achieve a
change in output behavior pattern that corrects the problem. Some methods
potentially could be customized to determine such intervention thresholds
(e.g., Kwakkel and Pruyt, 2015; Yücel and Barlas, 2011), but it would require
complex manipulations. Perhaps, as a result, many system dynamicists refrain
from moving beyond ad hoc experimentation, which in turn limits the
development of our understanding of intervention dynamics.
In response, this article presents a semi-automated method designed
specifically to calculate intervention thresholds, by monitoring a key
difference between classifiable pre- and post-intervention behavior patterns,
in terms of their atomic behavior, while systematically adjusting the
intervention variable of interest. The method is of value to modelers who want
to go beyond ad hoc experimentation and conduct systematic analyses of
intervention thresholds and how they change over time. The latter question
has long been subject to calls for increased attention, at least in organization
science settings (e.g., Hannan and Freeman, 1984). In addition, in proposing
an intervention thresholds graph, in combination with final runtime graphs,
this article suggests a means to illustrate and study intervention dynamics.
The next section provides the building blocks for the development of the
method, including a brief review of behavior patterns and key characteristics
of atomic behavior. The steps detailed thereafter specify the process that
results in intervention thresholds graphs. To illustrate the method, this
process is applied to the well-known World 3 model (Meadows et al., 2004,
2008). This article concludes with a discussion of some benefits and
limitations of the method, including suggestions for further research.
Toward intervention thresholds analyses in intervention studies
Even the simplest SD models can exhibit complex nonlinear behavior, due to
combinations of feedback loops, delays, and shifts in loop dominance. As a
result, the SD community started to explore automated model configuration
and analyses techniques, to better cope with the dynamic complexity
exhibited by many SD models. Perhaps the best-known contributions are
automated sensitivity analyses methods, such as those that rely on random
univariate sampling or multivariate Monte Carlo sampling, which are now
widely incorporated into SD software packages (e.g., Ford et al., 1983; Ford,
1990). Barlas and Kanar (1999) and Yücel and Barlas (2011, 2015) advance a
method for the automatic recognition of behavior pattern features that allows
for, among other things, the automatic specification of model parameters. To
make the parameter estimation more rigorous, scholars have also suggested
integrating various statistical approaches into the SD modeling process for
calibration (e.g., full-information maximum likelihood via optimal filtering,
model reference optimization; Oliva, 2003; Peterson, 1980).
Beyond model calibration and validation, system dynamicists frequently
seek to determine the effect that parameter changes have on system behavior,
through what-if experiments. Such input manipulations often appear in the
context of intervention studies (e.g., Romme et al., 2010; Repenning, 2001;
Walrave et al., 2011). For example, explorations might address which
intervention size, at which moment in time, can break a reinforcing behavior
that has manifested itself as an unanticipated side effect, as exemplified
by "fixes that fail" structures (Senge, 1990). In this respect, interventions
often aim to result in some particular change in output behavior patterns,
such as inducing a shift from exponential decline to goal-seeking growth in
firm performance. For such efforts, the intervention thresholds underlying
such pattern change represent highly pertinent information. Walrave et al.
(2015) calculate the intervention thresholds (i.e., months of managerial
commitment) required to counteract an unanticipated self-reinforcing
phenomenon (i.e., a "success trap") for all possible intervention moments (all t
in the model). When an intervention size at a given moment is smaller than
the intervention threshold, the outcome behavior is reinforced decline, but
when the intervention size increases above the threshold the outcome
behavior shifts to goal-seeking growth. Therefore, the method proposed herein
considers the intervention threshold size, relative to its timing, that is required
to achieve an anticipated change in the output behavior pattern.
Such an approach requires many unique simulation runs (possible
intervention sizes × permissible timeframe), so resource constraints likely
prevent the manual discovery of intervention thresholds. Intervention
thresholds also can rarely be deduced analytically (cf. Rudolph and
Repenning, 2002). Instead, an exploratory approach is necessary to assess all
(theoretically) possible intervention sizes over a permissible timeframe, as
might be achieved by customizing existing methods. For example, the
exploratory modeling and analysis workbench (Kwakkel et al., 2013; Kwakkel
and Pruyt, 2015) incorporates pattern classication and clustering features,
which can be used, among other things, to automatically determine output
behavior patterns. The pattern-oriented parameter specifier discussed by
Yücel and Barlas (2011, 2015) also can be applied to determine a parameter
value that yields a specific output behavior pattern. Yet the adaptation of these
methods requires rather complex manipulations. Instead, this article details a
specifically designed, semi-automated process, building on work by Barlas
and Kanar (1999). Note that the paper by Yücel and Barlas (2015) talks more
explicitly about atomic behavior modes. Barlas and Kanar (1999), on the other
hand, discuss atomic behavior implicitly.
Automated pattern recognition refers to "the automatic discovery of
regularities [in datasets] through the use of computer algorithms and the use
of these regularities to take actions" (Yücel and Barlas, 2015, p. 176). Such
an approach has been successfully applied in various research domains, such
as economics, medicine, marketing, and biology (Angstenberger, 2001;
Corduas and Piccolo, 2008).
The proposed method exploits the tendency for dynamics to reflect a limited
set of behavior patterns. Based on Sterman (2000), Barlas and Kanar (1999), and
Yücel and Barlas (2011, 2015), I recognize seven main modes of behavior, as
outlined in Table 1: (1) zero/constant behavior; (2) linear growth/decline;
(3) exponential growth/decline; (4) goal seeking growth/decline; (5) S-shaped
growth/decline; (6) growth and decline or decline and growth; and
(7) oscillation with/without growth/decline. These main modes can be further
subdivided into 15 behavior patterns that possess distinctive (sequences of)
first derivatives (i.e., slope), second derivatives (i.e., curvature), and means.
In other words, every behavior pattern has a distinctive atomic behavior profile.
These atomic behavior profiles effectively identify output behavior patterns
(Yücel and Barlas, 2015). To develop a parsimonious approach to identify
intervention thresholds, the current study proposes a custom approach for
systems that show classifiable pre- and post-intervention output behavior
patterns, such that there is only a need to distinguish between two behavior
profiles, rather than identify them, which can be achieved by comparing a
key difference in their atomic behavior.
For example, a comparison of S-shaped growth against growth and decline (see
No. 5a and No. 6a in Table 1) reveals that the latter, at some point, displays a
negative slope. The two behavior patterns can thus be distinguished by
monitoring the first derivative of the output variable: the first derivative of
S-shaped growth will always be positive, but the first derivative of growth and
decline will become negative at some particular moment in time. This difference
can be monitored by making the derivative an indicator variable (in the model),
with a cut-off value of zero. The sign change in this indicator variable, in turn,
points to a change in the output behavior pattern. By monitoring it, while
systematically adjusting the intervention size between a lower and an upper
bound and over a permissible timeframe, it is possible to identify the intervention
thresholds. The first intervention size—for every moment in the timeframe—that
causes the indicator variable to switch sign is the intervention threshold.
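This check can be sketched in a few lines of Python (a minimal illustration operating on a sampled output series; the function names are hypothetical and not part of the article's toolchain):

```python
def min_slope(series, dt=1.0):
    """Smallest first derivative (slope) over a run, approximated
    by first differences of the sampled output series."""
    return min((b - a) / dt for a, b in zip(series, series[1:]))

def pattern_changed(series, cutoff=0.0, dt=1.0):
    """Indicator check: True if the slope ever falls below the cut-off,
    i.e., the run shows growth and decline rather than S-shaped growth."""
    return min_slope(series, dt) < cutoff

s_shaped = [0.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.5]   # slope stays positive
grow_decline = [0.0, 2.0, 5.0, 7.0, 6.0, 4.0]    # slope turns negative
```

Here `pattern_changed(s_shaped)` is False while `pattern_changed(grow_decline)` is True, mirroring an indicator variable with a cut-off value of zero.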
While some changes in behavior patterns can be captured by simply observing
the first or second derivative, identifying other pattern changes may require a
more sophisticated approach. Consider, for example, a change from growth
and decline to oscillation. Table 1 shows an identical atomic behavior profile
for these two behavior patterns, yet only in the case of oscillation does this
behavior profile unfold more than once. As such, one should introduce an
indicator variable that counts the number of switches between a positive and
negative first derivative over the full model run. While the first derivative of
growth and decline only switches once (from positive to negative), the first
derivative of oscillation switches more than once, implying that the correct
cut-off value for the aforementioned indicator variable equals 2.

Table 1. Output behavior patterns and key characteristics of atomic behavior

No. | Output behavior pattern | First derivative (slope) | Second derivative (curvature)
1a | Zero [1] | 0 | 0
1b | Constant [1] | 0 | 0
2a | Linear growth | + | 0
2b | Linear decline | − | 0
3a | Exponential growth | + | +
3b | Exponential decline | − | −
4a | Goal seeking growth | + | −
4b | Goal seeking decline | − | +
5a | S-shaped growth (exp. gr. → goal seeking gr.) | +, + | +, −
5b | S-shaped decline (exp. decl. → goal seeking decl.) | −, − | −, +
6a | Growth and decline [2] (exp. gr. → goal seek. gr. → exp. decl. → goal seek. decl.) | +, +, −, − | +, −, −, +
6b | Decline and growth [2] (exp. decl. → goal seek. decl. → exp. gr. → goal seek. gr.) | −, −, +, + | −, +, +, −
7a | Oscillation [2,3] | Multiple episodes of +, +, −, − | Multiple episodes of +, −, −, +
7b | Oscillation with growth [2,3] | |
7c | Oscillation with decline [2,3] | |

This table reveals key differences among output behavior patterns. Modelers should first assess
any difference in slope; if no difference in slope exists (e.g., linear vs. exponential growth), they
should check for any difference in curvature; if no such difference exists (e.g., oscillation vs.
oscillation with growth), they should evaluate differences in the mean.
[1] The mean effectively discriminates between the zero and the constant output behavior patterns.
[2] The atomic behavior profile for growth and decline or decline and growth and oscillation (with or
without growth/decline) might be identical. Yet only in the case of oscillation does the profile
unfold more than once.
[3] The different types of oscillation are best discriminated by the first derivative of their moving averages.
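A switch-counting indicator of this kind can be sketched as follows (again a hypothetical Python helper operating on a sampled series, not the in-model formulation):

```python
def slope_sign_switches(series, dt=1.0):
    """Count switches between positive and negative first derivative
    over a full run; zero slopes are ignored."""
    slopes = ((b - a) / dt for a, b in zip(series, series[1:]))
    signs = [1 if s > 0 else -1 for s in slopes if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

A growth-and-decline series such as `[0, 2, 4, 3, 1]` yields one switch, whereas an oscillating series such as `[0, 2, 0, 2, 0]` yields three, so a cut-off value of 2 separates the two patterns.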
Table 1 suggests means to select an appropriate indicator variable and cut-off
value for distinguishing between various behavior patterns. Note that this
approach requires the modeler to be able to anticipate the nature of the output
behavior pattern change, due to an intervention, because two specific patterns
need to be compared. The proposed method is thus not (yet) fit to
accommodate unpredictable output behavior.
Intervention thresholds analysis
Figure 1 outlines the workflow for the intervention thresholds analysis, which
consists of five main steps, such that a modeler should:
1. Determine pre- and post-intervention output behavior patterns. Table 1
serves to identify these output behavior patterns.
2. Determine the indicator variable and its cut-off value. Table 1, columns 3
and 4, aid in selecting an indicator variable and the correct cut-off value
on the basis of a key difference in atomic behavior that best discriminates
between two output behavior patterns.
3. Determine the boundaries for intervention size and timing. The modeler
should decide on a (theoretically informed) lower and upper bound for
intervention size and a permissible intervention timeframe. It is the
responsibility of the modeler, who should be familiar with the model's
structure and dynamic behavior, to make these decisions with great care.
Carefully designed experiments to uncover the post-intervention output
behavior pattern can assist modelers in this decision-making process.
4. Run the automated intervention thresholds analysis. This automated fourth
step involves systematic IF-THEN experiments, as shown in Figure 1. A
script is instructed to start a FOR loop[1] (Loop 1), which iterates through all
possible intervention moments. A second FOR loop (Loop 2) then starts,
which operates within Loop 1 and is directed to iterate through all possible
intervention sizes until it either identifies an intervention threshold or
reaches the upper bound intervention size (i.e., no intervention threshold
found). Specifically, the script instructs the model to run a simulation with
the two inherited parameters (i.e., size and timing), after which the
simulation output is saved. The script then assesses the indicator variable
for an intervention threshold. If the cut-off value is not exceeded, no
intervention threshold is identified. The script then determines whether all
possible intervention sizes were assessed. If not, the intervention size is
adjusted by one increment, and the analysis repeats. If an intervention
threshold is identified (i.e., cut-off value is exceeded) or all possible
intervention sizes are assessed, the script exits the second loop and
determines whether the entire timeframe was considered. If not, the
script adjusts the intervention timing by one increment; otherwise, the
script ends.

[1] This FOR loop refers to a conditional loop used in programming, not to the traditional feedback loops used by system dynamicists.
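The loop structure of this step can be outlined generically in Python (a sketch under the assumption of a `simulate(size, time)` callable and an `indicator_exceeded` test; both are placeholders for the model-specific machinery, which the article implements in VBA with Vensim DSS):

```python
def find_thresholds(simulate, indicator_exceeded, times, sizes):
    """For each intervention moment, return the smallest intervention
    size whose simulated output trips the indicator (None if none does)."""
    thresholds = {}
    for t in times:                      # Loop 1: intervention moments
        thresholds[t] = None
        for size in sizes:               # Loop 2: intervention sizes
            output = simulate(size, t)   # run the simulation, keep output
            if indicator_exceeded(output):
                thresholds[t] = size     # first tripping size = threshold
                break                    # exit Loop 2, move to next moment
    return thresholds
```

With a toy model whose output trips once size + t reaches 10, `find_thresholds(lambda s, t: s + t, lambda o: o >= 10, [0, 1, 2], range(12))` returns `{0: 10, 1: 9, 2: 8}`: the threshold falls as the intervention moment shifts.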
Fig. 1. Determining intervention thresholds that change output behavior patterns

Alternatively, modelers may generate the required data through sensitivity
analyses. By modeling a STEP function on the intervention variable, where
the step size denotes the intervention size and the step time indicates the
intervention timing, then conducting a sensitivity analysis on these two
inputs, a modeler can generate the required raw data for the next step. When
applying this approach, the modeler must identify the actual intervention
thresholds from the raw data (i.e., by inspecting the indicator variable and
cut-off value in relation to intervention size and timing, perhaps in a
spreadsheet program). This approach circumvents the need for external
macros and may decrease the computational load, but it also limits the
potential for extensions and/or modifications (e.g., investigating two-stage
interventions by including a third FOR loop).
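Recovering thresholds from such raw sweep output amounts to taking, per intervention time, the smallest size that tripped the indicator; a sketch (the `(time, size, tripped)` record layout is an assumption, not the article's spreadsheet format):

```python
def thresholds_from_sweep(records):
    """Smallest intervention size per intervention time for which the
    indicator variable exceeded its cut-off value."""
    thresholds = {}
    for time, size, tripped in records:
        if tripped and (time not in thresholds or size < thresholds[time]):
            thresholds[time] = size
    return thresholds
```

For example, the records `[(1980, 400, False), (1980, 500, True), (1980, 600, True), (1990, 400, True)]` yield `{1980: 500, 1990: 400}`.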
5. Draw an intervention thresholds graph. Using the results of step 4, the
modeler creates an intervention thresholds graph, with the intervention
threshold size on the y-axis and timing on the x-axis. Figure 2 displays
an example. For every analyzed t, the graph shows the intervention
threshold. Rather than depicting a continuous line that unfolds over time,
the intervention thresholds graph presents the minimum intervention
size required to establish an anticipated shift in the output behavior
pattern (y-axis) at every analyzed intervention time during the permissible
timeframe (x-axis). In Figure 2, for example, an intervention at t = 5
requires a minimum intervention size of 12 to prompt the anticipated
output behavior pattern. Any intervention at any particular moment in
time that is equal to or larger than the value in the intervention thresholds
graph thus results in the classified post-intervention behavior pattern. Any
intervention smaller than this value does not. By studying the graph, it is
possible to observe shifts in the model's resistance to change, as a function
of the intervention timing.
Fig. 2. Example of an intervention thresholds graph
Applying the intervention thresholds analysis to World 3
To illustrate the method, I turn to seminal work by the Club of Rome
(Meadows et al., 1972) and more recent updates (Meadows et al., 2004,
2008). The World3-03 model (World3_03_Scenario.vmf, revision date 14
August 2008) features a dynamic system, including population, industrial
growth, food production, and limits to the Earth's ecosystems, resources,
and pollution. The model also describes various scenarios. Scenario 10,
which serves as the starting point for this example, postulates that if a
particular policy package were to have been implemented by society in 1982
(i.e., intervention timing), "we would have been able to maintain our standard
of living and support its improving technologies with no problems" (Meadows
et al., 2008, model tab "Table of Scenarios"). In this scenario, the Earth does not
experience any significant decline in human population, due to the more
sustainable interplay between its population and its carrying capacity.
An important feature of the previously mentioned policy package, which to
some extent drives model behavior, is the industrial output per capita desired
(IOPCD), which represents the desired wealth per capita (in U.S. dollars per person
per year). The higher the IOPCD, the higher the population's desired living
standard, the faster the depletion of non-renewable resources needed to
achieve this level, and the higher the likelihood of population overshoot and
subsequent decline. The following example applies an intervention
thresholds analysis to World 3, with the size of the IOPCD as the focal intervention.
1. Determine pre- and post-intervention output behavior patterns. The description
of Scenario 10 suggests the population should follow an S-shaped growth
pattern. If interventions were introduced sometime after 1982, behavior instead
is increasingly likely to follow the growth and decline pattern. Thus, depending
on the size and timing of the intervention, a change in output behavior pattern
can be expected, from S-shaped growth to growth and decline. Systematic
experimentation reveals that this observation is not strictly true, though.
Figure 3 shows the behavior of Population in eight experiments in which only
the intervention timing varied—from 1980 to 2050, at 10-year increments. That
is, rather than introducing the policy package in 1982, different intervention
years were chosen, with a constant intervention size (i.e., at 500). As
Figure 3 illustrates, all runs show some amount of overshoot, but whereas
some runs overshoot only marginally and then stabilize (runs 1980, 1990,
and 2000), practically approaching S-shaped growth, others oscillate strongly
(runs 2010 and onward), clearly following a growth and decline pattern.
2. Determine the indicator variable and its cut-off value. The categorized
output behavior patterns aid decision making related to the appropriate
indicator variable and its cut-off value. Table 1 indicates that the main
difference between S-shaped growth and growth and decline pertains to
their slopes: always positive for the former; initially positive but then
negative for the latter. The appropriate indicator variable in this case
therefore must relate to the first derivative of Population, with a cut-off value
of zero. As Figure 3 illustrates, though, this heuristic might not be
applicable in a strict manner in this particular example. Table 2 confirms
that the slope of the Population variable becomes negative at least once
during each run. This implies that a cut-off value of zero is not effective
in determining a change in output behavior patterns. However, as
Table 2 shows, the most drastic change in the steepest negative slope
observed (over the full model run) occurs between 2000 and 2010, which
corresponds to a change in output behavior pattern. Further inspection of
the values in Table 2 suggests setting the cut-off value at approximately
−0.001 to differentiate effectively between the two behavior patterns.
From a purely technical point of view, this cut-off value does not perfectly
correspond to the prescriptions from Table 1. That is, we can speak of
S-shaped growth only if the first derivative is never negative. Yet, as this
example serves to illustrate, the approach fits even if the model runs do not
strictly correspond to the fundamental output behavior patterns in Table 1.
These patterns likely are sufficient for many studies, and Table 1 can serve
as an inspiration for choosing indicator variables. Yet, in practical terms, the
preceding numbers clearly indicate a difference between the two sets of runs.

Fig. 3. Behavior for Population in eight experiments

Table 2. Steepest negative slope observed in Population (over the full model run)

Intervention timing | Steepest negative slope (until year 2100) | Behavior pattern (approximated)
1980 | −0.00060 | S-shaped growth
1990 | −0.00053 | S-shaped growth
2000 | −0.00077 | S-shaped growth
2010 | −0.00743 | Growth and decline
2020 | −0.01835 | Growth and decline
2030 | −0.02096 | Growth and decline
2040 | −0.02314 | Growth and decline
2050 | −0.02521 | Growth and decline
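The resulting classification rule is then a one-line comparison (a sketch; the negative cut-off value and the function name are illustrative choices based on the Table 2 figures):

```python
CUTOFF = -0.001  # sits in the gap between the two clusters in Table 2

def approximate_pattern(steepest_negative_slope, cutoff=CUTOFF):
    """Approximate label for a run from the steepest Population slope
    observed over the full model run."""
    if steepest_negative_slope > cutoff:
        return "S-shaped growth"
    return "growth and decline"
```

Applied to the Table 2 values, the 1980–2000 runs (slopes above −0.001) classify as S-shaped growth and the 2010–2050 runs as growth and decline.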
In cases characterized by uncertainty regarding the correct cut-off value,
modelers are advised to conduct a step 2b, which involves a limited
exploration for the purposes of indicator evaluation. First of all, a range of
cut-off values can be selected to assess cut-off value sensitivity; in the current
example, the modeler could choose a set of cut-off values ranging between
−0.001 and −0.005. Furthermore, modelers should visually check the
effectiveness of both indicator variable and cut-off value—see Figure 5 in the
results section—and adjust the indicator variable/cut-off value if necessary.
3. Determine the boundaries for intervention size and timing. For this example,
the value of IOPCD was assigned a lower limit of $200/(person * year) and an
upper limit of $1000/(person * year); values outside of this range are
theoretically unlikely. The model default for Scenario 10 was $350/(person *
year). The period 1980–2050 functions as the permissible intervention
timeframe, and the original model runtime (1900–2100) was maintained.
4. Run the automated intervention thresholds analysis. To automate the
intervention thresholds analysis for the World3-03 example, a custom
script is required. The pseudo-code in Box 1 provides building blocks to
automate the intervention thresholds analysis according to the workflow
outlined in Figure 1. In the online supplementary materials, I provide the code
for Microsoft's Visual Basic for Applications in combination with
Microsoft Excel and Ventana Vensim DSS. Running such a script results
in a table that denotes the minimum required intervention sizes to produce
the anticipated output behavior pattern, in relation to the intervention
timing. This output serves as the input for the next step.
Box 1. Pseudo-code for intervention thresholds analysis
5. Draw an intervention thresholds graph. Finally, as explained, the
intervention thresholds graph displays the intervention threshold sizes
as a function of intervention timing. Figure 4(a) depicts the graph for this
example, based on the output of step 4. The graph should not be read as a
continuous line unfolding over time; rather, the "lines" in Figure 4(a)
depict intervention threshold sizes (y-axis, in absolute values of IOPCD)
that result in the anticipated output behavior pattern, at the indicated
intervention time (x-axis). That is, an IOPCD value lower than the
intervention threshold size (at a particular moment in time) results in
S-shaped growth; a higher IOPCD results in growth and decline.
Building on this graph, it is possible to draw final runtime graphs (at
t = 2100) for the Human Welfare Index (Figure 4b) and Population
(Figure 4c). That is, Figures 4(b, c) denote the final runtime values
(values at t = 2100) for the Human Welfare Index and Population at each
intervention threshold. According to Figure 4(a), the model that contains
an indicator variable with a cut-off value of −0.001 has an intervention
threshold at an IOPCD of 363 (intervention size) at t = 2010 (intervention
timing). Then the final runtime values, at t = 2100, for the Population and
the Human Welfare Index associated with this intervention threshold
equal approximately 8 billion and 82 percent, respectively, as displayed
in Figure 4(b, c).
Results: Validation and interpretation
Figure 4 contains the results for five cut-off values, with the same indicator
variable, to illustrate the sensitivity of the analysis. If the patterns changed
significantly across different cut-off values, further investigation would be
warranted, such as by choosing or constructing a different, more robust indicator
variable and cut-off value. The results in this example instead illustrate that,
though the intervention threshold sizes are higher for higher cut-off values, the
general trend of the results remains constant, which is a sign of robustness.
To illustrate the dynamic behavior of Population that results from different
interventions, Figure 5 presents three model runs. Keeping both the
intervention timing and the cut-off value constant (at 2010 and −0.001,
respectively), three scenarios depicted an intervention size that was (a) 25
percent lower than the intervention threshold, (b) equal to the intervention
threshold, and (c) 25 percent higher than the intervention threshold. The
output of the first run clearly shows S-shaped growth, whereas the output of
the third distinctly exhibits growth and decline. However, the second run
appears to be on the border between S-shaped growth and growth and decline.
As such, Figure 5 visually validates the indicator variable and its cut-off value,
as well as the anticipated pre- and post-intervention output behavior patterns.
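Generating such validation runs only requires bracketing the threshold (a trivial, hypothetical helper):

```python
def bracketing_sizes(threshold, spread=0.25):
    """Intervention sizes 25 percent below, at, and 25 percent above
    a threshold, for visual validation runs."""
    return [threshold * (1 - spread), threshold, threshold * (1 + spread)]
```

For the threshold of 363 found at t = 2010, this gives sizes 272.25, 363, and 453.75.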
Fig. 4. Intervention thresholds graph (a) and final runtime graphs (b, c) for five cut-off values
Going into detail about the implications of the results is beyond the scope of
this paper, but further inspection of the dynamics in Figure 4(a), for a cut-off
value of 0.001, underscores some important observations. As Figure 4(a)
shows, the initial high levels of IOPCD are sustainable, in that they do not
result in an undesired change in output behavior pattern. A healthy balance
between resource demand and availability can be maintained. Yet around
the year 1990, high levels of IOPCD are no longer sustainable, and the
intervention threshold size drops very quickly up to 2018. Thereafter, a limit
is reached; that is, a "threshold" seems to appear within the intervention
thresholds graph. From this point on, even the smallest IOPCD cannot prevent
the undesired change in output behavior patterns; Population always
overshoots a sustainable balance between resource demand and availability,
followed by a significant decline. This point crops up abruptly and is associated
with a big negative step in the final Human Welfare Index. This finding also
serves to illustrate the importance of investigating intervention dynamics.
It is important to note, with respect to the former observation, that the
different interventions, over the permissible timeframe, had dissimilar
incubation times, due to the fixed final runtime. Behavior characteristics thus
may be pushed beyond the simulation horizon. For example, a particular
parameter change might slow down the growth part of a growth and decline
output behavior pattern, thereby pushing the decline part beyond the
simulation horizon. To counteract this potential bias, a relatively long
minimum delay of 50 years was maintained between the intervention and
final run. Nevertheless, the observed threshold does not necessarily exist in
the behavior space of the model. The results, however, represent a tipping
point in a policy context: the moment in time when an intervention is still
able to trigger a particular output behavior pattern, before a given deadline.
More results could also be distilled from this analysis. For example, the
steep decline in the IOPCD, required to prevent the system from overshooting,
clearly illustrates the importance of the timing of the intervention. Further
analysis of these findings, and of other results that can be developed with
intervention thresholds analyses and graphs, represents interesting avenues
for research.

Fig. 5. Behavior for Population resulting from three interventions at t = 2010
Discussion, outlook, and conclusion
Studies that apply SD have steadily increased, largely due to "the increasingly
complex nature of [systems and] common problems faced" by researchers,
policy makers, and practitioners alike (Rahmandad et al., 2015, p. 1). Human
intuition falls short in navigating such situations, and formal models become
indispensable for learning and decision making (Oliva, 2003). As models and
their dynamics become increasingly complex, modelers turn to automated
model configuration and analysis techniques (Ford, 1990; Peterson, 1980;
Pruyt and Islam, 2016; Yücel and Barlas, 2011). Following in this tradition,
this article details a dedicated, semi-automated method to uncover
intervention thresholds that result in changes in output behavior patterns. In
the context of intervention studies, the proposed intervention thresholds
analysis can discover the set of minimally required intervention sizes, over
a permissible timeframe, to address an issue or problem.
The method exploits differences between classifiable pre- and post-intervention
behavior patterns, in terms of a key difference in atomic
behavior profiles (Barlas and Kanar, 1999). Through an automated process of
systematic adjustments of the intervention variable, while simultaneously
monitoring the key difference (i.e., the indicator variable), intervention
thresholds can be calculated. The results of this analysis then can be
presented in an intervention thresholds graph, which denotes the minimum
intervention size (y-axis) in relation to intervention timing (x-axis). This graph
illustrates the change in the system's resistance to interventions as a function
of intervention timing. The method differs from existing frameworks (e.g.,
Kwakkel and Pruyt, 2015; Yücel and Barlas, 2011), in that it is designed
specifically to calculate intervention thresholds (graphs) and is more
lightweight as a result. In turn, system dynamicists can go beyond manually
conducted, ad hoc experiments, as are commonly presented in management
and organization science (e.g., Romme et al., 2010; Walrave et al., 2011),
which should stimulate new studies of intervention dynamics.
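The core of the analysis can be sketched as follows, using a hypothetical toy model in place of World 3 (every equation, parameter value, and function name here is an assumption for illustration, not the paper's actual implementation). For each intervention time, a bisection search finds the largest post-intervention parameter setting whose indicator variable, the minimum first derivative of the output, still signals S-shaped growth; a None result marks interventions that come too late to prevent the decline:

```python
def run(c_base, c_new, t_int, dt=0.1, t_end=100.0):
    """Toy resource model (illustrative, not World 3). At time t_int the
    consumption parameter steps from c_base down to c_new; capacity K
    erodes whenever total demand c*P exceeds the regeneration rate."""
    r, p, k, regen, alpha = 0.5, 1.0, 100.0, 60.0, 0.05
    traj = []
    for i in range(int(t_end / dt)):
        c = c_new if i * dt >= t_int else c_base
        dp = r * p * (1 - p / k) * dt
        dk = -alpha * max(0.0, c * p - regen) * dt
        p, k = p + dp, max(k + dk, 1e-6)
        traj.append(p)
    return traj

def s_shaped(traj, dt=0.1, cutoff=0.001):
    """Indicator variable: S-shaped when the first derivative of the
    output never drops below -cutoff (no decline phase appears)."""
    return min((b - a) / dt for a, b in zip(traj, traj[1:])) >= -cutoff

def threshold(t_int, c_base=0.9, tol=1e-3):
    """Bisect for the largest post-intervention setting that still yields
    S-shaped growth; None when even the strongest intervention cannot
    prevent the decline (the intervention comes too late)."""
    lo, hi = 0.0, c_base
    if not s_shaped(run(c_base, lo, t_int)):
        return None
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if s_shaped(run(c_base, mid, t_int)):
            lo = mid
        else:
            hi = mid
    return lo

# Data for an intervention thresholds graph: minimum required
# intervention (here, the required cut in c) versus intervention timing.
curve = [(t, threshold(t)) for t in (0, 5, 10, 15, 20)]
```

Plotting `curve` with intervention timing on the x-axis and the threshold on the y-axis yields a minimal intervention thresholds graph: in this toy, thresholds sit near 0.6 for early interventions and vanish (None) once the decline is locked in, echoing the limit observed in Figure 4(a).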
This method aims to uncover a change in output behavior patterns, to solve
an issue or problem, but potentially it could be used in research settings in
which no such change is expected. For example, Van Oorschot et al. (2011)
describe the effectiveness of different interventions (decision-making
heuristics) for a particular performance indicator (new product sales) but do
not anticipate any change in output behavior pattern as a result of this
intervention; the performance indicator is always characterized by S-shaped
growth. The method proposed herein could be customized, however, to
identify automatically which intervention, at which moment in time, results
in maximum performance (e.g., based on the magnitude of sales).
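A minimal sketch of that customization follows; the grid, the search space, and the final_sales response surface are all hypothetical stand-ins for actual model runs:

```python
def best_intervention(performance, sizes, times):
    """Exhaustively evaluate each (size, timing) pair and return the one
    that maximizes a scalar performance measure, instead of testing for a
    change in output behavior pattern."""
    return max(((s, t) for s in sizes for t in times),
               key=lambda st: performance(*st))

def final_sales(size, time):
    """Hypothetical stand-in for 'final new product sales' from a model
    run: a smooth response surface peaking at size 0.3, time 10."""
    return 100.0 - (size - 0.3) ** 2 - 0.01 * (time - 10) ** 2

sizes = [i / 10 for i in range(11)]   # candidate intervention sizes
times = range(0, 21, 2)               # candidate intervention timings
best = best_intervention(final_sales, sizes, times)  # (0.3, 10)
```

In practice `final_sales` would wrap a full simulation run, and the exhaustive grid could be replaced by any standard optimizer once the performance measure is scalar.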
Furthermore, this method could be extended to determine intervention
thresholds that underlie tipping points in the behavior space of a model. A
tipping point is an important property of a dynamic system that indicates
the critical size of a variable at which a change in loop dominance occurs, at
a particular moment in time. Some of the system community's most
influential contributions build on tipping point analyses to support their
arguments and insights. For example, Rudolph and Repenning's (2002)
disaster dynamics study demonstrates how the accumulation of interruptions
can drive an organizational system from a self-regulating system to "a fragile,
self-escalating regime" (p. 1). In this context, the tipping point reflects the
particular, critical setting that causes the system to undergo a fundamental
change in behavior, that is, a shift in loop dominance. Some researchers have
been successful at deducing tipping points analytically (typically, because
their models can be described using first- and second-order differential
equations), but it remains a challenge for researchers facing more complex
models. The proposed method and further developments of this approach
could thus prove very valuable in efforts to probe for tipping points. Some
challenges still need to be overcome, though. First, when there is no clear-
cut intervention variable, modelers must identify a parameter that drives the
tipping point or else adapt the method to facilitate multiple parameters.
Second, detecting a shift in loop dominance is more complicated than
identifying an anticipated change in output behavior patterns. Even if a loop
is (and remains) dominant, system behavior might change significantly. Further
research should extend the presented method to address these challenges.
Every method is subject to limitations; the one presented herein is attuned to
systems with low to intermediate behavior complexity, because it requires
classifiable pre- and post-intervention output behavior patterns. In its current
form, the method is not applicable to extremely complex models with
unpredictable output behavior, although it is sufficient for many cases.
Therefore, further work could extend this method to deal with increasing
complexity, such as by means of incorporating automatic pattern recognition to
distinguish more than two output behavior patterns (see Yücel and Barlas, 2015).
Furthermore, as noted, the World 3 example maintains a fixed final runtime
(at t = 2100), while varying intervention timing over a set timeframe. As a
result, the different interventions had dissimilar incubation times, which
could push the behavior characteristics beyond the simulation horizon. If a
modeler is interested in uncovering tipping points in a policy context, this
potential limitation is not really a problem, but in other cases a dynamic final
runtime may be required.
The computational load involved with this method also might be
problematic in some cases, such as analyses that include many possible
intervention sizes and a large permissible intervention window and that are
subject to small increment sizes for both intervention size and timing. To
manage the computational load, modelers might increase or decrease
increment sizes, depending on the computing power available. I recommend
modelers start their intervention thresholds analysis with relatively large
increment sizes, which will decrease the time required to run the analysis
(and perhaps adjust some settings, such as lower and upper bound
intervention sizes). The increment sizes can then be set to smaller values to
render smooth intervention thresholds and final runtime graphs.
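A coarse-to-fine sweep of this kind can be sketched generically; the function and its parameters are illustrative, with keeps_pattern standing in for a full model run plus indicator-variable check:

```python
def find_threshold(keeps_pattern, lo, hi, coarse=0.1, fine=0.001):
    """Two-stage sweep over intervention sizes: a coarse pass brackets the
    threshold, then a fine pass pins it down inside the bracket, which is
    far cheaper than scanning the whole range at the fine increment."""
    runs = 0
    x, bracket = lo, None
    while x <= hi:                       # coarse pass
        runs += 1
        if not keeps_pattern(x):
            bracket = (max(lo, x - coarse), x)
            break
        x += coarse
    if bracket is None:
        return None, runs                # pattern never lost in the range
    a, b = bracket                       # a's coarse neighbor passed above
    x, last_good = a + fine, a
    while x < b:                         # fine pass inside the bracket
        runs += 1
        if not keeps_pattern(x):
            break
        last_good = x
        x += fine
    return last_good, runs               # largest size keeping the pattern
```

With a toy pattern boundary at 0.6 over the range [0, 1], this resolves the threshold to within the fine increment in roughly a hundred model runs, versus a thousand for a uniform fine-grained sweep.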
Note
i. Yücel and Barlas (2015) recognize seven main modes but 25 different
behavior patterns, rather than the 15 in Table 1. This difference arises
because Yücel and Barlas describe more variations of the growth-and-decline
and decline-and-growth patterns. For ease of understanding, I omit these
variations and stay truer to the fundamental modes described by
Sterman (2000).
Acknowledgments
I thank the three anonymous reviewers and the journal editors for their
insightful comments and suggestions. Furthermore, I would like to
acknowledge Sharon Dolmans, Georges Romme, and Kim van Oorschot for
their valuable input. I also gratefully acknowledge the members of the System
Dynamics Community for the feedback provided during the 2014 annual
meeting in Delft, the Netherlands.
References
Angstenberger L. 2001. Dynamic Fuzzy Pattern Recognition with Applications to Finance and Engineering. Kluwer Academic: Dordrecht.
Barlas Y, Kanar K. 1999. A dynamic pattern-oriented test for model validation. In Proceedings of the Fourth System Science European Congress, Valencia, Spain.
Corduas M, Piccolo D. 2008. Time series clustering and classification by the autoregressive metric. Computational Statistics and Data Analysis 52(4): 1860–1872.
Ford A. 1990. Estimating the impact of efficiency standards on the uncertainty of the northwest electric system. Operations Research 38(4): 580–597.
Ford A, Amlin J, Backus G. 1983. A practical approach to sensitivity testing of system dynamics models. In Proceedings of the First International Conference of the System Dynamics Society, Chestnut Hill, MA.
Forrester JW. 1961. Industrial Dynamics. MIT Press: Cambridge, MA.
Hannan MT, Freeman J. 1984. Structural inertia and organizational change. American Sociological Review 49(2): 149–164.
Kwakkel JH, Pruyt E. 2015. Using system dynamics for grand challenges: the ESDMA approach. Systems Research and Behavioral Science 32(3): 358–375.
Kwakkel JH, Auping WL, Pruyt E. 2013. Dynamic scenario discovery under deep uncertainty: the future of copper. Technological Forecasting and Social Change 80(4): 789–800.
Meadows DH, Meadows DL, Behrens WW. 1972. The Limits to Growth. Universe Books: New York.
Meadows DH, Randers J, Meadows DL. 2004. Limits to Growth: The 30-Year Update. Chelsea Green Publishing: White River Junction, VT.
Meadows DH, Randers J, Meadows DL. 2008. Limits to Growth: The 30-Year Update, Model revision 3, 14 August 2008. Chelsea Green Publishing: White River Junction, VT.
Morecroft JDW. 1988. System dynamics and microworlds for policymakers. European Journal of Operational Research 59(3): 9–27.
Oliva R. 2003. Model calibration as a testing strategy for system dynamics models. European Journal of Operational Research 151(3): 552–568.
Peterson DW. 1980. Statistical tools for system dynamics. In Elements of the System Dynamics Method, Randers J (ed). Productivity Press: Cambridge, MA; 143–161.
Pruyt E, Islam T. 2016. On generating and exploring the behavior space of complex models. System Dynamics Review 31(4): 220–249.
Rahmandad H, Oliva R, Osgood N. 2015. Analytical Methods for Dynamic Modelers. MIT Press: Cambridge, MA.
Repenning NP. 2001. Understanding fire fighting in new product development. Journal of Product Innovation Management 18(5): 285–300.
Romme AGL, Zollo M, Berends P. 2010. Dynamic capabilities, deliberate learning and environmental dynamism: a simulation model. Industrial and Corporate Change 19(4): 1271–1299.
Rudolph JW, Repenning NP. 2002. Disaster dynamics: understanding the role of quantity in organizational collapse. Administrative Science Quarterly 47(1): 1–30.
Senge P. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday: New York.
Sterman JD. 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. McGraw Hill: New York.
Van Oorschot KE, Langerak F, Sengupta K. 2011. Escalation, de-escalation, or reformulation: effective interventions in delayed NPD projects. Journal of Product Innovation Management 28(6): 848–867.
Van Oorschot KE, Akkermans H, Sengupta K, Van Wassenhove LN. 2013. Anatomy of a decision trap in complex new product development projects. Academy of Management Journal 56(1): 285–307.
Walrave B, Van Oorschot KE, Romme AGL. 2011. Getting trapped in the suppression of exploration: a simulation model. Journal of Management Studies 48(8): 1727–1751.
Walrave B, Van Oorschot KE, Romme AGL. 2015. How to counteract the suppression of exploration in publicly traded corporations. R&D Management 45(5): 458–473.
Yücel G, Barlas Y. 2011. Automated parameter specification in dynamic feedback models based on behavior pattern features. System Dynamics Review 27(2): 195–215.
Yücel G, Barlas Y. 2015. Pattern recognition for model testing, calibration, and behavior analysis. In Analytical Methods for Dynamic Modelers, Rahmandad H, Oliva R, Osgood N (eds). MIT Press: Cambridge, MA; 173–206.