ADVANCING SIMULATION EXPERIMENTATION CAPABILITIES
WITH RUNTIME INTERVENTIONS
Joon-Seok Kim
Hamdi Kavak
Dept. of Geography & Geoinformation Science
George Mason University
4400 University Drive
Fairfax, VA, USA
{jkim258, hkavak}@gmu.edu
Umar Manzoor
Dept. of Computer Science
Tulane University
6823 St. Charles Ave.
New Orleans, LA, USA
umanzoor@tulane.edu
Andreas Züfle
Dept. of Geography & Geoinformation Science
George Mason University
4400 University Drive
Fairfax, VA, USA
azufle@gmu.edu
SpringSim-ANSS, 2019 April 29-May 2, Tucson, AZ, USA; © 2019 Society for Modeling & Simulation International (SCS)
ABSTRACT
Experimentation is a critical capability of simulations that allows one to test different scenarios safely and
cost-effectively. In particular, agent-based simulations have been used in experimenting with different policy
options to aid decision makers. Highly utilized experimentation methods such as parameter sweeping aim
to explore the relationship between the initial parameter values (i.e., input) and simulation results (i.e.,
outputs). Experimentation, which involves changes of simulation states on-the-fly, is often conducted ad hoc and entails manual code adjustments which are time-consuming and error-prone. In this paper, we
present a framework that facilitates intervening in a running simulation to change simulation states in a
semi-automated manner so that a simulation user can explore alternative worlds. In our framework, such
an intervention is implemented using an injection mechanism. The framework allows the user to weigh
different policy options rapidly with minimal effort. We illustrate its use in an urban agent-based model.
Keywords: injection, intervention, checkpoint, experimentation, what-if analysis
1 INTRODUCTION
One of the grand challenges facing social scientists is the lack of objective knowledge of the actual causes of observed behaviors in the real world. Conducting the necessary experimental work for understanding
causality in social behaviors and systems is often impractical or unethical, while observational modeling,
data mining and machine learning approaches struggle to infer causality from correlation patterns. Planners
and decision-makers often rely upon agent-based simulation to help them understand and forecast a variety
of scenarios that involve complex human social systems and behaviors. In particular, decision-makers often
seek to identify, characterize, and model causal processes at different scales and for different social systems
to help explain or predict certain patterns of behavior for a wide range of applications, including disease
spread (Carley et al. 2006, Perez and Dragicevic 2009, Crooks and Hailegiorgis 2014), crime and riots
(Malleson, Heppenstall, and See 2010, Pires and Crooks 2017), and economic systems (Farmer and Foley
2009).
An agent-based model (ABM) is a computerized simulation of individual entities called agents, which interact through prescribed rules in a well-defined environment. The goal in designing an ABM is to abstract complex real-world human behavior into a finite set of realistic rules leading to a system of socially plausible behavior. This goal can be achieved through careful validation and verification of the model, using
sensitivity analysis (Kleijnen 2005) and model calibration (Lamperti, Roventini, and Sani 2018).
Once an ABM has been implemented, validated, verified and calibrated for a particular application, it can be
used to experimentally evaluate new policies, unexpected events, and disruptive changes to the simulation
environment. For instance, in an economic system ABM, new economic policies can be introduced and evaluated. Such “what-if” scenario explorations allow us to observe outcomes of policy changes in settings that are too complex to describe by analytic models. What-if analysis is a crucial process used to diagnose a model, analyze its features, and pinpoint circumstances for better decision making. In general terms, such scenario explorations are realized by (1) changing the initial conditions of the simulation and (2) making changes during
runtime. While both allow realizing what-if analyses, their implementation and exploration capabilities are
vastly different.
Changing the initial conditions of a simulation, also known as initialization, is a more common approach
to scenario explorations. Grow (2017) proposed the use of a regression metamodel to facilitate the understanding of complex behaviors in agent-based simulations. This type of metamodel is quite effective for computational demography as it is highly accessible and easy to communicate. Thiele, Kurth, and Grimm (2014) proposed the RNetLogo package, which links NetLogo (a widely used toolkit for agent-based modeling) and R (a statistical analysis tool) to apply established statistical methods to parameter estimation and sensitivity analysis in agent-based models. Concurrently, Ligmann-Zielinska et al. (2014) proposed
a simulation framework based on quantitative uncertainty and sensitivity analyses to develop parsimonious
socio-ecological agent-based models to understand model behavior and explore the outcome space.
Further, many popular agent-based modeling platforms, such as NetLogo and AnyLogic, provide standardized ways for users to specify parameter value combinations and replications. This allows users to run multiple scenario alternatives with minimal effort. On the other hand, such runs support only narrow explorations because the changes are limited to the initialization level.
Researchers have developed alternative mechanisms to increase the experimentation capabilities of simulations. One
popular technique for runtime explorations is called “cloning” (Li, Cai, and Turner 2017). Simply put,
cloning makes a copy of a simulation while it is running and allows the cloned instance to run semi-independently (Hybinette and Fujimoto 1997). The majority of contributions in this line of research come from the parallel simulation community, which aims to create massive-scale models and explore runtime changes (Hybinette and Fujimoto 1997, Hybinette and Fujimoto 2001, Yoginath and Perumalla 2018). While these studies provide efficient ways to share computational resources among cloned instances, they rely heavily on running simulations on high-performance computing infrastructures (Yoginath and Perumalla 2018). Many ABM platforms and open-source tools do not support high-performance computing infrastructures, making them inaccessible to a wider community of simulation modelers. That is where our study comes into play.
In this work, we propose a framework to prescribe changes into a running ABM to support semi-automated
“what-if” analyses. Our framework provides an interface for existing ABMs, which allows a user to define
checkpoints in a running simulation. At these checkpoints, changes to model parameters can be prescribed,
and the checkpoint can be loaded at a later time to compare outcomes of different strategies and policies
in alternative simulation timelines that originate from the same checkpoint and differ merely in the prescribed changes. Our framework allows the user to define conditions for simulation checkpoints (such as the occurrence of a disaster or the observed outbreak of a disease) and to define multiple policies for intervening in the running simulation (such as different solutions to prevent the spread of a disease), which may be specified by a human expert based on information available at the time of the simulation checkpoint. Once
defined, the prescribed changes are automatically injected into the simulation. In addition to our injection
mechanism, we describe our injection builder for user-friendly analysis of the current simulation state and
for allowing the user to register prescriptions to the model and their conditions. We further demonstrate
the effectiveness of our framework using a practical use-case based on an ABM that simulates patterns of
life in an urban environment to study the spread of influenza-like disease. We evaluate various intervention
scenarios based on vaccinations, closures of public places, and a stay-home order, all of which we inject into our simulation.
2 FRAMEWORK FOR SIMULATION EXPERIMENTATION
Experimentation is an essential process for collecting scientific evidence to support a hypothesis. It requires, however, tremendous time and effort, not only to 1) conduct the experiments themselves but also to 2) prepare them. This section introduces a framework for simulation experimentation, focusing on how to expedite
the process of experimentation. Fig. 1 shows our proposed process of experimentation. It is an iterative
process of simulation of a model, intervention, and examination. Note that the model in Fig. 1 is a tested
and certified computational model that went through the validation and verification process; thus no change to the model is needed. In this paper, intervention denotes an intended action taken by an analyst on a running simulation to achieve an altered outcome, in hopes of resolving scientific questions. After examining the collected results, the analyst decides whether to proceed with more experimentation. In what follows, the main loop is explained in detail.
Figure 1: Process of experimentation.
2.1 Instruction Injection
Dependency injection is a software design pattern that allows modules to be loosely coupled (Prasanna
2009). It enables a client to configure behaviors without explicit code changes, providing the flexibility
of configuration. Similarly, we adopt an injection mechanism as the implementation of interventions. We interpret the content of an injection as an instruction: a procedure that may affect the simulation and produce different simulation results. Instruction injection is the act of injecting such instructions into the simulation. It allows an analyst to experiment without altering models and simulators at the code level. This mechanism facilitates what-if analysis with less effort so that one can focus on experimentation and analysis. From an analyst's perspective, temporarily closing public places to see how that decision impacts the simulation is an example of an intervention (see Section 3).
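To make this mechanism concrete, the following is a minimal sketch of how an injectable instruction might be expressed in Java; the Instruction interface, the Simulation type, and the closeRandomPlaces method are hypothetical illustrations rather than the framework's actual API.

```java
// Placeholder for the concrete simulation class (see Section 2.3); only the
// methods an instruction needs are assumed here.
interface Simulation {
    void closeRandomPlaces(double fraction, int days);
}

// Hypothetical sketch: an instruction encapsulates a state change so that
// neither the model nor the simulator needs code-level modification.
interface Instruction {
    // Apply this instruction to the running simulation.
    void execute(Simulation simulation);
}

// Example instruction: temporarily close a fraction of public places,
// mirroring the place-closure intervention of Section 3.
class ClosePlacesInstruction implements Instruction {
    private final double fractionToClose; // e.g., 0.30 closes 30% of venues
    private final int closureDays;        // length of the closure period

    ClosePlacesInstruction(double fractionToClose, int closureDays) {
        this.fractionToClose = fractionToClose;
        this.closureDays = closureDays;
    }

    @Override
    public void execute(Simulation simulation) {
        // closeRandomPlaces is an assumed model-level public method that the
        // injection mechanism would invoke reflectively or directly.
        simulation.closeRandomPlaces(fractionToClose, closureDays);
    }
}
```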
2.2 Injection Builder
From an implementation perspective, an instruction manipulates values and invokes a series of functions
dynamically. To inject instructions into a running simulation, it is necessary to define an interface or injection
point. Namely, instruction injection requires a pair of an instruction and a point at which the instruction is triggered. This point can be a specific simulation time or a transition between states, depending on the simulation modeling paradigm. Fig. 2 illustrates an injection point between states s_t and s_{t+1}: the simulation diverges from s_{t+1} under intervention δ.
Figure 2: Influence of intervention.
It is paramount to minimize the effort required of an analyst. For this purpose, we have designed an injection builder, a tool that facilitates the creation of instruction injections in a semi-automated or automated way. In the initial phase of a simulation, an analyst might not recognize interesting phenomena until they see tangible results. Once they identify their experimentation targets, they must define predicates of interest. A predicate is a description of a condition that produces injection points. For instance, checking whether the number of infected agents in a disease spread ABM exceeds five percent of the population can be a predicate. Our inspector module enables users to interact with the simulation and carry out this task. We note that the task involves no code-level changes. If an analyst is aware of targets and injection points beforehand, the inspection process is not required. We formally define an intervention as follows.
Definition 1 (Intervention). An intervention δ is a pair (t, λ), where t is an injection time and λ is a set of instructions.
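As an illustration of Definition 1 and the predicate example above, the following sketch pairs an injection time with a set of instructions. All names here (Intervention, SimulationState, and the ClosePlacesInstruction from the earlier sketch) are hypothetical, and the step granularity is taken from Section 3.2.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Placeholder read-only view of the running simulation's state.
interface SimulationState {
    int infectedCount();
    int populationSize();
}

// Hypothetical sketch of Definition 1: an intervention pairs an injection
// time t with a set of instructions lambda.
final class Intervention {
    final long injectionTime;             // t
    final List<Instruction> instructions; // lambda

    Intervention(long injectionTime, List<Instruction> instructions) {
        this.injectionTime = injectionTime;
        this.instructions = instructions;
    }
}

public class InterventionSketch {
    // One simulation step represents 5 minutes (Section 3.2), so one day
    // corresponds to 288 steps; this granularity is an assumption here.
    static final long STEPS_PER_DAY = 24 * 60 / 5;

    public static void main(String[] args) {
        // A predicate marking injection points: more than five percent of
        // the population is infected.
        Predicate<SimulationState> outbreakDetected =
            state -> state.infectedCount() > 0.05 * state.populationSize();

        // Intervention: close 30% of public places for six days on day 33.
        Intervention delta = new Intervention(
            33 * STEPS_PER_DAY,
            Arrays.asList(new ClosePlacesInstruction(0.30, 6)));
    }
}
```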
2.3 Implementation
This subsection addresses the implementation of our framework including the inspector, injection builder,
and intervention agent (see Fig. 3). We implemented the framework on top of MASON (Luke, Cioffi-Revilla, Panait, Sullivan, and Balan 2005), a multiagent simulation core library written in Java. Although MASON provides inspectors that inspect properties, they are not designed to evaluate predicates during runtime. Thus, we developed modules that dynamically evaluate user-supplied predicates. There is a trade-off between the flexibility and the performance of dynamic evaluation: dynamic evaluation allows checking predicates that were not defined in advance, but it entails subsidiary operations such as parsing predicates and searching for methods. Therefore, handling processed data, such as statistics, at the code level is preferable to specifying detailed predicates. The augmented inspectors are helpful when an analyst wants to pause the simulation at the moment user-defined conditions are met; in this setting, inspectors are used to pinpoint an injection time. If an analyst knows the injection timing, they can create an intervention using the injection builder without explicit inspection. It
also allows creating a checkpoint of the simulation on the fly. A checkpoint is used to restore the simulation to that point for the analysis of alternative timelines. Moreover, it expedites the experimentation process because the simulation does not need to be rerun from the beginning.
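For illustration, MASON supports checkpointing via Java serialization; the sketch below shows how a checkpoint might be written at a prospective injection point and restored later. UrbanModel is a hypothetical stand-in for a concrete SimState subclass, and the checkpoint calls reflect our reading of the MASON API rather than a verbatim excerpt of the framework.

```java
import java.io.File;
import sim.engine.SimState;

// Hypothetical sketch of on-the-fly checkpointing with MASON.
public class CheckpointSketch {
    public static void main(String[] args) {
        SimState model = new UrbanModel(System.currentTimeMillis());
        model.start();
        // Run until day 30 (288 five-minute steps per day, per Section 3.2).
        while (model.schedule.getTime() < 30 * 288) {
            if (!model.schedule.step(model)) break;
        }
        // Save the full simulation state at the prospective injection point.
        model.writeToCheckpoint(new File("day30.checkpoint"));

        // Later: restore the checkpoint to branch an alternative timeline
        // without re-simulating the first 30 days.
        SimState branch =
            SimState.readFromCheckpoint(new File("day30.checkpoint"));
    }
}
```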
Figure 3: Conceptual architecture of the framework.
The injection builder is a library that facilitates performing instructions, namely invoking public methods defined by users. It can generate various combinatorial test cases. As a result of the build, the injection builder produces an intervention file that can be loaded by a simulator. The file contains intervention information such as the time, the methods to invoke, and the parameters corresponding to each method. To implement the injection mechanism, we introduced an intervention agent: an agent that maintains the schedule of interventions without disturbing user-defined agents. When the simulation is restored from a checkpoint, the intervention agent loads the intervention files and schedules the instructions.
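The following sketch shows how such an intervention agent might be realized on top of MASON's Steppable interface; it reuses the hypothetical Intervention and Instruction types from the earlier sketches and assumes the model's Simulation class extends MASON's SimState, so it is an illustration rather than the framework's actual source.

```java
import sim.engine.SimState;
import sim.engine.Steppable;

// Hypothetical sketch of the intervention agent: a MASON Steppable that
// fires at the injection time and executes the prescribed instructions
// without disturbing the model's own user-defined agents.
public class InterventionAgent implements Steppable {
    private final Intervention intervention;

    public InterventionAgent(Intervention intervention) {
        this.intervention = intervention;
    }

    // Typically called right after a checkpoint is restored, so that the
    // instructions fire exactly once at the prescribed injection time.
    public void register(SimState state) {
        state.schedule.scheduleOnce((double) intervention.injectionTime, this);
    }

    @Override
    public void step(SimState state) {
        // Apply every prescribed change; Simulation is the hypothetical
        // model interface from Section 2.1, assumed to be implemented by
        // the concrete SimState subclass.
        for (Instruction instruction : intervention.instructions) {
            instruction.execute((Simulation) state);
        }
    }
}
```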
3 USE CASE
To illustrate the usability of our framework, we created an agent-based urban model that simulates the
patterns of life, with the goal of investigating the spread of influenza-like diseases. In this model, agents are
situated on a synthetic geographic area created based on procedural city generation techniques (Kim, Kavak,
and Crooks 2018). When it comes to behaviors, agents follow simple routines such as eating, working,
commuting, and visiting recreational facilities. In the real world, as well as in our simulated world, such
facilities serve as the hub for spreading contact/proximity-based diseases. Thus, disease spread is a useful
example to illustrate the important capabilities of our framework.
Disease models have been heavily investigated using compartmental mathematical models. In these models,
a population is divided into compartments based on disease stages, each compartment is then represented
as a state, and changes in these states are captured using simple rate-based differential equations. However,
such an approach assumes that all members of a compartment are homogeneous in their characteristics, and it does not capture their spatial locations. This makes it challenging to represent populations realistically and to develop reliable intervention scenarios. Agent-based disease models have the potential to capture the heterogeneity of populations thanks to their bottom-up modeling approach. Such models have been used for many critical purposes, including public health policy development (Apolloni et al. 2009, Kumar et al. 2013), investigation of biological virus evolution (Roche, Drake, and Rohani 2011), and forecasting (Nsoesie et al. 2013). Here our purpose is not to create a novel disease model, but to show the applicability of our
framework in a critical domain.
Figure 4: A depiction of the disease transmission process with transition rate parameters: within-host progression S → E → I → R governed by rates σ and γ, and between-host transmission governed by probability β.
3.1 Disease Model
We based our disease model on a well-known Susceptible-Exposed-Infectious-Recovered (SEIR) approach
where each letter represents a different stage of an epidemic disease (Riley 2007). Here, S indicates that the person is Susceptible to infection but not infected yet, E represents a latent state where the person has been Exposed to the disease but is not infectious yet, I represents the stage where the person is Infectious and can infect other people, and finally R represents Recovered, the stage in which the person is immune to the disease either through vaccination or recovery. Following Barrett et al. (2008), we constructed the disease model in two parts: within-host progression and between-host transmission. A simple
depiction of this model is illustrated in Fig. 4.
The within-host progression part of the model deals with the life-cycle of the disease within an individual. In
other words, disease progression is represented as a finite state machine with specific transition probabilities.
All individuals are assumed to start as Susceptible (S, i.e., prone to getting infected). Unless an individual contracts the virus, the disease stage remains S. Once the virus is in the body, the disease stage immediately becomes Exposed (E) and remains so for a certain time, which is captured using the transition rate σ. Then, the disease progresses to Infectious (I). Similarly, this stage continues for a certain time before the person becomes Recovered (R), which is captured using the transition rate γ. We assume that
people who are Recovered will be immune to the disease regardless of their future contacts with infectious
individuals.
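One plausible reading of this state machine in code is sketched below; it is our illustration rather than the authors' implementation, and the per-step transition probabilities assume exponentially distributed waiting times discretized to the 5-minute step described in Section 3.2.

```java
import java.util.Random;

// Illustrative within-host progression as a finite state machine.
public class DiseaseProgression {
    enum Stage { SUSCEPTIBLE, EXPOSED, INFECTIOUS, RECOVERED }

    static final double SIGMA = 1.0 / 26.4;          // E -> I rate per hour (Table 1)
    static final double GAMMA = 1.0 / 60.0;          // I -> R rate per hour (Table 1)
    static final double HOURS_PER_STEP = 5.0 / 60.0; // one step = 5 minutes

    private Stage stage = Stage.SUSCEPTIBLE;
    private final Random random = new Random();

    // Advance the disease stage by one simulation step.
    public void step() {
        switch (stage) {
            case EXPOSED:
                // Chance of leaving E this step under rate sigma.
                if (random.nextDouble() < 1 - Math.exp(-SIGMA * HOURS_PER_STEP)) {
                    stage = Stage.INFECTIOUS;
                }
                break;
            case INFECTIOUS:
                // Chance of recovering this step under rate gamma.
                if (random.nextDouble() < 1 - Math.exp(-GAMMA * HOURS_PER_STEP)) {
                    stage = Stage.RECOVERED;
                }
                break;
            default:
                // S changes only via between-host transmission; R is absorbing.
                break;
        }
    }

    // Called by the between-host transmission process (Equation (1)).
    public void expose() {
        if (stage == Stage.SUSCEPTIBLE) stage = Stage.EXPOSED;
    }
}
```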
The between-host transmission part of the disease model deals with the infection of a susceptible individual
through meeting with at least one Infectious (I) individual. Such meetings can transmit the virus from infectious individuals to susceptible individuals through contact or close proximity. Following Barrett et al. (2008), we use the between-host transmission probability formulated in Equation (1) for any susceptible individual i:

β = 1 − exp( τ ∑_{k∈K} N_k ln(1 − k s_i ρ) )   (1)

Here, τ is the duration of contact with infectious individuals, N_k is the number of infectious individuals with infectivity k from the set of all infectivities K, s_i is the susceptibility of individual i, and ρ is the probability that one susceptible individual gets infected by one infectious individual during one minute of contact. We compute ρ, a disease-specific property, using the formula R_0 = ρ/γ, where R_0 is the reproduction number of the disease. We assume that all non-infected individuals have the same susceptibility s = 1 and all infectious individuals have the same infectivity k = 1. Thus the probability β simply becomes β = 1 − exp(τ N ln(1 − ρ)), where N is the number of infectious individuals. To use these parameters in a realistic way, we follow Balcan et al. (2009), who estimated the reproduction number R_0 as well as the transition rates σ and γ. All these parameters and their values are summarized in Table 1.
Table 1: Disease parameter values used in the model.
R_0 = 1.75    ρ = 34.2⁻¹ hour⁻¹    σ = 26.4⁻¹ hour⁻¹    γ = 60⁻¹ hour⁻¹
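In its simplified form, Equation (1) is straightforward to compute. The sketch below is a minimal illustration; the class and method names are ours, and converting ρ from Table 1's per-hour value to a per-minute probability is our assumption, since τ is measured in minutes of contact.

```java
// Illustrative computation of the simplified between-host transmission
// probability beta = 1 - exp(tau * N * ln(1 - rho)), i.e., Equation (1)
// with s = 1 and k = 1.
public final class BetweenHostTransmission {
    private BetweenHostTransmission() {}

    /**
     * @param tauMinutes      tau, the contact duration in minutes
     * @param infectiousCount N, the number of infectious individuals present
     * @param rhoPerMinute    rho, the per-minute infection probability
     * @return probability that the susceptible individual becomes infected
     */
    public static double beta(double tauMinutes, int infectiousCount,
                              double rhoPerMinute) {
        return 1.0 - Math.exp(
            tauMinutes * infectiousCount * Math.log(1.0 - rhoPerMinute));
    }

    public static void main(String[] args) {
        // rho = 1/34.2 per hour (Table 1), converted to a per-minute value.
        double rhoPerMinute = 1.0 / (34.2 * 60.0);
        // Example: one hour of contact with two infectious individuals.
        System.out.printf("beta = %.4f%n", beta(60.0, 2, rhoPerMinute));
    }
}
```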
Figure 5: Baseline disease spread progression over ten replications (percentage of the population in the Susceptible, Exposed, Infectious, and Recovered stages by day). Each color represents a different disease stage within the population. The solid lines indicate the average values, while shaded areas show the 95% confidence band.
3.2 Simulation Setup, Intervention Scenarios, and Results
We populated a small-scale synthetic city with 6,000 inhabitants and added various places including workplaces, homes, restaurants, and other venues (e.g., recreational places). Agents follow their daily activities such as working on work days, eating when feeling hungry, and visiting venues. Each simulation step represents 5 minutes of real-world time. We start the simulation at day 0 and run it for over 60 days, where the first 25 days are considered a warm-up period for agents to establish their networks. At the beginning of day 30, we introduce a hypothetical influenza-like virus to 0.5% of the population. Fig. 5 shows our results for the baseline runs with the introduction of the virus. The most critical result is that, around day 36, the population reaches its highest percentage of Infectious people (34%), while the Susceptible, Exposed, and Recovered population compartments appear plausible. We used our framework to create
three intervention scenarios and test the effects of different mitigation options for the epidemic.
Scenario 1 - Vaccination
In this intervention scenario, a certain percentage of the population is vaccinated (on day 26) before the first infection occurs. The vaccinated population is randomly sampled from the entire population and is assumed to be in the recovered stage once vaccinated (i.e., not susceptible to the disease). We tested three different values (10%, 30%, and 50%) for the percentage of vaccinated people, with ten replications each. Fig. 6 shows the comparative results of these interventions against the baseline. It is clear that vaccination, even at a low level, reduces the peak of the epidemic: 10%, 30%, and 50% vaccination reduce the peak by 13%, 37%, and 62%, respectively. Overall, vaccination appears to be a plausible strategy to cope with such epidemics.
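Expressed in the framework's terms, this scenario reduces to a single injected instruction per coverage level. The sketch below continues the hypothetical types from Section 2; VaccinateInstruction is an assumed instruction that moves a random sample of agents directly to the Recovered stage.

```java
import java.util.Arrays;

// Hypothetical sketch: the three vaccination scenarios expressed as
// interventions injected at the beginning of day 26.
public class VaccinationScenarios {
    static final long STEPS_PER_DAY = 288; // one step = 5 simulated minutes

    public static void main(String[] args) {
        for (double coverage : new double[] {0.10, 0.30, 0.50}) {
            // VaccinateInstruction is assumed, not part of the paper's API.
            Intervention delta = new Intervention(
                26 * STEPS_PER_DAY,
                Arrays.asList(new VaccinateInstruction(coverage)));
            // Each intervention would be written to an intervention file and
            // executed by the intervention agent after restoring the same
            // pre-outbreak checkpoint, yielding directly comparable timelines.
        }
    }
}
```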
Figure 6: The impact of different percentages of vaccinated people on the infectiousness of the disease. The shaded areas show the 95% confidence bands, while the lines within the shaded areas indicate the average values.
Figure 7: The impact of closing different percentages of public places on the infectiousness of the disease. Lines indicate the average values. Confidence bands are hidden because they overlap one another, making the figure challenging to read.
Scenario 2 - Place Closure
In this intervention scenario, the city ordered a certain percentage of meeting venues to be closed for a
certain number of days. As a result, people who aim to visit those places spend their time in their current
location for the length of the planned visit. Similar to the vaccination case, the places to be closed are randomly sampled. With ten replications, we tested 10%, 30%, and 50% of places closed for six days, as indicated in Fig. 7. Unlike the vaccination case, place closures do little to mitigate the peak of the infectious population: they reduced the epidemic peak by only 1%, 5%, and 10%, respectively. Further, there were more infectious cases, compared to the baseline, right after the closure period ended. As a result, considering the cost to businesses against the very limited gain, place closures do not appear to be a feasible solution. We tried changing the closure period to three days, but our results were almost identical to the 6-day closure intervention.
Scenario 3 - Stay Home Order
The third and final intervention scenario is an order from the city asking people to stay home when they know they are sick. In this scenario, we randomly sampled 10%, 30%, and 50% of the population to
Figure 8: The impact of 'stay home' orders on the infectious population. Lines indicate the average values. Confidence bands are hidden because they overlap one another, making the figure challenging to read.
follow such orders starting on day 33, as shown in Fig. 8. Again, we conducted ten replications to capture the uncertainty caused by stochastic processes in the model. The results indicate that these orders are even less effective than place closures: when 10%, 30%, and 50% of people follow such orders, the peak of the epidemic is reduced by only 0.6%, 1.5%, and 7.5%, respectively. These results show that asking people to stay home is an even less effective strategy than place closure.
To summarize, vaccination appears to be the most effective method for reducing the peak of the epidemic compared to the other two. Based on our interpretation, stay-home and place-closure measures do not seem to work. We note that our model neither prevents people from working nor closes workplaces.
4 CONCLUSION
In spite of its importance, experimentation is often conducted ad hoc and involves manual code adjustments that are time-consuming and error-prone. In this paper, we introduced a framework that facilitates changing agent-based model states while the simulation is running, to explore alternative worlds in a semi-automated manner. Such a framework allows the simulation user to explore different policy options rapidly with minimal effort. To demonstrate the effectiveness of our framework, we implemented, on top of the framework, an agent-based urban model that simulates the spread of influenza-like diseases. No experimentation code is included in the model itself. Through experimentation, we successfully derived prescriptions for a given problem.
To reach a persuasive conclusion from the outcome of experimentation, numerous runs and substantial
resources such as computing power are required. Our future work is to extend our framework to leverage cloud computing. We further aim to extend our framework to facilitate the automated comparison of different “what-if” instances, supporting users in exploring the relationships and causality between prescribed changes and observable simulation results.
ACKNOWLEDGMENTS
This project is sponsored by the Defense Advanced Research Projects Agency (DARPA) under cooperative
agreement No. HR00111820005. The content of the information does not necessarily reflect the position or
the policy of the Government, and no official endorsement should be inferred.
REFERENCES
Apolloni, A., V. S. A. Kumar, M. V. Marathe, and S. Swarup. 2009, Dec. “Computational Epidemiology in
a Connected World”. Computer vol. 42 (12), pp. 83–86.
Balcan, D., H. Hu, B. Goncalves, P. Bajardi, C. Poletto, J. J. Ramasco, D. Paolotti, N. Perra, M. Tizzoni,
W. Van den Broeck, V. Colizza, and A. Vespignani. 2009, Sep. “Seasonal transmission potential and
activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility”. BMC Medicine vol. 7 (1), pp. 45.
Barrett, C. L., K. R. Bisset, S. G. Eubank, X. Feng, and M. V. Marathe. 2008, Nov. “EpiSimdemics: An
efficient algorithm for simulating the spread of infectious disease over large realistic social networks”.
In SC ’08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, pp. 1–12.
Carley, K. M., D. B. Fridsma, E. Casman, A. Yahja, N. Altman, L.-C. Chen, B. Kaminsky, and D. Nave.
2006. “BioWar: scalable agent-based model of bioattacks”. IEEE Transactions on Systems, Man, and
Cybernetics-Part A: Systems and Humans vol. 36 (2), pp. 252–265.
Crooks, A. T., and A. B. Hailegiorgis. 2014. “An agent-based modeling approach applied to the spread of
cholera”. Environmental Modelling & Software vol. 62, pp. 164–177.
Farmer, J. D., and D. Foley. 2009. “The economy needs agent-based modelling”. Nature vol. 460 (7256),
pp. 685.
Grow, A. 2017. Regression Metamodels for Sensitivity Analysis in Agent-Based Computational Demography, pp. 185–210. Springer International Publishing.
Hybinette, M., and R. Fujimoto. 1997. “Cloning: a novel method for interactive parallel simulation”. In Proceedings of the Winter Simulation Conference, pp. 444–451. IEEE.
Hybinette, M., and R. M. Fujimoto. 2001. “Cloning parallel simulations”. ACM Transactions on Modeling
and Computer Simulation (TOMACS) vol. 11 (4), pp. 378–407.
Kim, J.-S., H. Kavak, and A. Crooks. 2018. “Procedural city generation beyond game development”.
SIGSPATIAL Special vol. 10 (2), pp. 34–41.
Kleijnen, J. P. 2005. “An overview of the design and analysis of simulation experiments for sensitivity
analysis”. European Journal of Operational Research vol. 164 (2), pp. 287–300.
Kumar, S., J. J. Grefenstette, D. Galloway, S. M. Albert, and D. S. Burke. 2013. “Policies to reduce influenza in the workplace: impact assessments using an agent-based model”. American Journal of Public Health vol. 103 (8), pp. 1406–1411.
Lamperti, F., A. Roventini, and A. Sani. 2018. “Agent-based model calibration using machine learning
surrogates”. Journal of Economic Dynamics and Control vol. 90, pp. 366–389.
Li, X., W. Cai, and S. J. Turner. 2017, May. “Cloning Agent-Based Simulation”. ACM Trans. Model. Comput. Simul. vol. 27 (2), pp. 15:1–15:24.
Ligmann-Zielinska, A., D. B. Kramer, K. Spence Cheruvelil, and P. A. Soranno. 2014, Oct. “Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance”. PLOS ONE vol. 9 (10), pp. 1–13.
Luke, S., C. Cioffi-Revilla, L. Panait, K. Sullivan, and G. Balan. 2005. “Mason: A multiagent simulation
environment”. Simulation vol. 81 (7), pp. 517–527.
Malleson, N., A. Heppenstall, and L. See. 2010. “Crime reduction through simulation: An agent-based model of burglary”. Computers, Environment and Urban Systems vol. 34 (3), pp. 236–250.
Nsoesie, E. O., J. S. Brownstein, N. Ramakrishnan, and M. V. Marathe. 2013. “A systematic review of studies on forecasting the dynamics of influenza outbreaks”. Influenza and Other Respiratory Viruses vol. 8 (3), pp. 309–316.
Perez, L., and S. Dragicevic. 2009. “An agent-based approach for modeling dynamics of contagious disease spread”. International Journal of Health Geographics vol. 8 (1), pp. 50.
Pires, B., and A. T. Crooks. 2017. “Modeling the emergence of riots: A geosimulation approach”. Computers, Environment and Urban Systems vol. 61, pp. 66–80.
Prasanna, D. R. 2009. Dependency Injection. 1st ed. Greenwich, CT, USA: Manning Publications Co.
Riley, S. 2007. “Large-Scale Spatial-Transmission Models of Infectious Disease”. Science vol. 316 (5829),
pp. 1298–1301.
Roche, B., J. M. Drake, and P. Rohani. 2011, Mar. “An Agent-Based Model to study the epidemiological
and evolutionary dynamics of Influenza viruses”. BMC Bioinformatics vol. 12 (1), pp. 87.
Thiele, J. C., W. Kurth, and V. Grimm. 2014. “Facilitating Parameter Estimation and Sensitivity Analysis of Agent-Based Models: A Cookbook Using NetLogo and R”. Journal of Artificial Societies and Social Simulation vol. 17 (3), pp. 11.
Yoginath, S. B., and K. S. Perumalla. 2018, January. “Scalable Cloning on Large-Scale GPU Platforms with
Application to Time-Stepped Simulations on Grids”. ACM Trans. Model. Comput. Simul. vol. 28 (1),
pp. 5:11–5:26.
AUTHOR BIOGRAPHIES
JOON-SEOK KIM is a Postdoctoral Research Fellow in the Department of Geography and Geoinformation
Science at George Mason University. His research interests lie in spatial and spatiotemporal databases,
geospatial simulation, and privacy protection. His email address is jkim258@gmu.edu.
HAMDI KAVAK is a Research Associate in the Department of Geography and Geoinformation Science
at George Mason University. His research focuses on data-driven methods to study human behavior in the
areas of cybersecurity and urban systems. His email address is hkavak@gmu.edu.
UMAR MANZOOR is a Postdoctoral Research Fellow in the Department of Computer Science at Tulane
University. His research focuses on multi-agent modelling and simulation. His email address is umanzoor@tulane.edu.
ANDREAS ZÜFLE is an assistant professor at the Department of Geography and Geoinformation Science
at George Mason University (GMU). Dr. Züfle’s research expertise includes big spatial data, spatial data
mining, social network mining, and uncertain database management. His email address is azufle@gmu.edu.
MASON is a fast, easily extensible, discrete-event multi-agent simulation toolkit in Java, designed to serve as the basis for a wide range of multi-agent simulation tasks ranging from swarm robotics to machine learning to social complexity environments. MASON carefully delineates between model and visualization, allowing models to be dynamically detached from or attached to visualizers, and to change platforms mid-run. This paper describes the MASON system, its motivation, and its basic architectural design. It then compares MASON to related multi-agent libraries in the public domain, and discusses six applications of the system built over the past year which suggest its breadth of utility.