A Common Protocol for Agent-Based Social Simulation
Roberto Leombruni, Matteo Richiardi, LABORatorio Revelli Centre for Employment Studies, via Real Collegio 30,
10024 Moncalieri, Torino, Italy. Email: email@example.com
Nicole J. Saam, Institut für Soziologie Ludwig-Maximilians-Universität München, Konradstr. 6 D-80801 München,
Germany. Email: firstname.lastname@example.org
Michele Sonnessa, Department of Computer Science, University of Torino, Corso Svizzera 185, Torino, Italy. Email:
ABSTRACT

Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in the low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations.

Keywords: agent-based, simulations, methodology

A Lagrange fellowship by ISI Foundation is gratefully acknowledged by MR and MS.

1. Introduction

Our starting point is rather disappointing evidence: despite the upsurge in agent-based research witnessed in the past 15 years (see the reviews by Tesfatsion, 2001a,b,c and Wan, 2002), and despite all the expectations it has raised, agent-based simulation has not yet succeeded in finding a place in the standard social scientist's toolbox.

Many people involved in agent-based research1 thought it should have. It is now increasingly recognised that many systems are characterized by the fact that their aggregate properties cannot be deduced simply by looking at how each component behaves, the interaction structure itself playing a crucial role. On one hand, the traditional approach of simplifying everything may often "throw the baby out with the bath water". On the other hand, as soon as one specifies a more detailed interaction structure or more realistic individual behaviour, the system easily becomes analytically intractable, or simply very difficult to manipulate algebraically. By contrast, agent-based modelling (ABM) allows a flexible design of how the individual entities behave and interact, since the results are computed and need not be solved analytically. This certainly comes at a cost (see below), but it may be the only way to proceed with certain research questions.

However, the crude numbers tell a rather different story: for instance, among the top 20 economic journals we were able to find only 7 articles based on ABM2, among the 26,698 articles that were published since the seminal work conducted at the Santa Fe Institute (Anderson et al., 1988).3 Looking even further back in time, we can add only 2 more papers4.
1 and also a limited number of non-practitioners (see
for instance Freeman, 1998)
2 Arifovic (1995), Arifovic (1996), Andreoni (1995),
Arthur (1991), Arthur (1994), Gode (1993) and
3 We looked for journal articles containing the words "agent-based", "multi-agent", "computer simulation", "computer experiment", "microsimulation", "genetic algorithm", "prisoner's dilemma AND simulation" and variations in their title, keywords or abstract in the EconLit database, the bibliography of world economics literature. Note however that EconLit sometimes does not report keywords and abstracts. We have thus integrated the resulting list with the references cited in the review articles cited above. The ranking is provided in .
4 Tullock and Campbell (1970) and Schelling (1969).
If we think of agent-based models that attracted the interest of a wider audience, the list shrinks to Schelling's segregation models, where the simulation is worked out on a sheet of paper, and to the El Farol bar problem by Arthur, which led to a whole stream of literature on minority games. Overall, we should then conclude that agent-based modelling accounts for less than 0.03% of top economic research. It seems to be confined to specialized journals like the Journal of Economic Dynamics and Control5, ranking 23rd, the Journal of Artificial Societies and Social Simulation, and Computational Economics, both of which are not ranked. A notable exception is the Journal of Economic Behavior and Organization, ranked 32nd, which sometimes publishes research in ABM.
Among the top 10 sociological journals we were able to find only 11 articles based on ABM.6 They have been published in four journals: the American Sociological Review (ranking 1st; 4 articles), the American Journal of Sociology (ranking 2nd; 5 articles), the Annual Review of Sociology (ranking 4th; 1 article) and Sociological Methodology (ranking 10th; 1 article).
Agent-based models have solid methodological foundations7. However, the greater freedom they have granted to researchers (in terms of model design) has often degenerated into a sort of anarchy (in terms of design, analysis and presentation). For instance, there is no clear classification of the different ways in which agents can exchange and communicate: every model proposes its own interaction structure. Nor is there a standard way to treat the artificial data stemming from the simulation runs in order to provide a description of the dynamics of the system, and many articles seem to ignore the basics of experimental design. Often, the comparison between artificial and real data is overly naïve, and the parameters' values are chosen without proper discussion. Finally, too often it is not possible to understand the details of the implementation of an agent-based simulation. This makes replication a difficult, sometimes impossible task, thus violating a basic principle of scientific practice and confining the knowledge generated by agent-based simulations to no more than anecdotal evidence.
This has to be contrasted with traditional analytical modelling, which relies on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed.
Think for example about the organization of most
papers. There is generally a detailed reference to the
literature; the model often adopts an existing
framework and extends, or departs from, well-known
models only in limited respects. This allows a concise
description, and saves more space for the results, which
are finally confronted with the empirical data. When estimation is involved, measures of the validity and reliability of the estimates are always presented, in a very standardized way.
Of course, one reason for the lack of a standard protocol for agent-based research is the relatively young age of the methodology. Left on its own, one could say, a best practice will spontaneously emerge.
However, some discussion on the desirability of such a
standard and on its characteristics may help. The
example of the Cowles Commission suggests that this
is indeed a promising direction. The Commission was
founded in 1932 by the businessman and economist
Alfred Cowles in Colorado Springs, moved first to
Chicago in 1939 and finally to Yale in 1955, where it
became established as the Cowles Foundation. As its
motto (“Science is Measurement”) indicates, the
Cowles Commission was dedicated to the pursuit of
linking economic theory to mathematics and statistics.
Its main contributions to economics lie in its "creation"
and consolidation of two important fields – general
equilibrium theory and econometrics. The Commission
focused its attention on some particular problems,
namely the estimation of large, simultaneous equation
models, with a strong concern for identification and
hypothesis testing. Its prestige and influence set the
priorities for theoretical developments elsewhere too,
and its recommendations are generally followed today
in economics (Klevorick, 1983).
The objective of this paper is obviously less ambitious. We simply identify the need for a common protocol for agent-based social simulation. We discuss some methodological pitfalls that are common in papers employing agent-based simulations, distinguishing between four different issues: links with the literature (section 2), structure of the models (section 3), analysis (section 4) and replicability (section 5). We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulation (section 6).
5 JEDC has a section devoted to computational
methods in economics and finance
6 We looked for journal articles containing the same words as in footnote 3, and variations, in their title, keywords or abstract in the Sociological Abstracts database. All abstracts have been checked for subject matter dealing with ABM. We used the 2001 Citation Impact Factors (CIF) ranking for Sociology journals (93 journals).
7 For a brief account of the analogies and differences between agent-based simulations and traditional analytical modelling, see Leombruni and Richiardi (2005).
2. Links with the literature
As we have seen, the advantage of agent-based
simulations over more traditional approaches lies in the
flexibility they allow in model specification. Of course
more freedom means more heterogeneity. While
analytical models generally build on the work of their
predecessors, agent-based simulations often depart
radically from the existing literature. This is a problem
in two respects. First, more space is needed to explain
the model structure: since the overall length of a
published paper in social science journals cannot
generally exceed 25 to 30 pages, this implies that less
space is available for discussing the results.
Considering that the description of the model dynamics
and the estimation procedure also requires more space
than in traditional analytical models (see Leombruni
and Richiardi, 2005), this results in papers that are
often either too dense or too long.
The second problem is that, in departing from the existing literature, the model results become more difficult to assess.
Our position is simple: each article should include references to the theoretical background of the social or economic phenomenon that is investigated. A new model should always refer to the models, if any, with respect to which it is innovating. This holds for incremental and (even more) for radical innovations. All variations should be motivated, either in isolation or jointly. Moreover, since birthrights matter, reference should be made not only to previous agent-based models, if any, but also to the relevant non-simulation literature. After all, the mainstream is not computational, and we have to talk with the mainstream.
3. Structure of the model

There are some basic features that characterize a simulation model. Some are technical: above all, the treatment of time (discrete or continuous8), the treatment of fate (stochastic or deterministic), the representation of space (topology), and the population evolution (birth and death processes). Some are less technical: the treatment of heterogeneity (which variables differ across individuals and how), the interaction structure (localized or non-localized), the coordination structure (centralized or decentralized9), and the type of individual behaviour (optimising, satisficing, etc.).

Too often the reader of a paper using agent-based simulations has to work all these properties out himself. On the contrary, in more traditional papers models are often immediately classified as based on "overlapping generations of intertemporally optimising agents", "Bayesian game with asymmetric information"… We believe that having all the main features of a simulation model clearly and immediately stated would greatly increase the understanding of simulation-based models, and facilitate the comparison of alternative specifications.
4. Analysis of the model

Once a model has been specified, the issue of analysing its behaviour arises. In this regard, simulation models differ in a radical sense from traditional analytical ones. Simulations suffer from the problem of stating general propositions about the dynamics of the model starting only from point observations.10 The point is that, although simulations do consist of a well-defined set of functions that unambiguously define the macro dynamics of the system, they do not offer a compact set of equations together with their algebraic solution (Leombruni and Richiardi, 2005).

Think of the following general characterization of dynamic micro models. Assume that at each time t an individual i, i ∈ {1, …, n}, is well described by a state variable x_{i,t},
and let the evolution of her state variable be specified by the difference equation

x_{i,t+1} = f_i(x_{i,t}, x_{−i,t}; α_i)    (1)

where x_{−i} is the state of all individuals other than i and the α are some structural parameters.
Now, an important decision has to be made concerning the very objective of the analysis. Generally, we are interested in some statistic Y defined over the entire population11:

Y_t = s_t(x_{1,t}, …, x_{n,t})    (2)
8 There is some confusion in the literature in this regard, and it should be an aim of the methodological clarification we are calling for to address it. By discrete-time simulation social scientists generally mean that the state of the system is updated (i.e. observed) only at discrete (generally constant) time intervals. No reference is made to the timing of events within a period; see, for example, Allison (1982).
Conversely, a model is said to be continuous-time
event-driven when the state of the system is updated
every time a new event occurs (Lancaster, 1990;
Lawless, 1982). In this case it is necessary to isolate all
the events and define their exact timing.
Note that discrete-time simulation is a natural option
when continuous, flow variables are modelled, and the
definition of an event becomes more arbitrary. For this
reason (and mainly in the Computer Science literature)
the definitions above are sometimes reversed.
9 Examples of centralized coordination mechanisms
other than the usual, unrealistic Walrasian auctioneer
(the hypothetical market-maker who matches supply
and demand to get a single price for a good) generally
assumed by traditional analytical models include real
auctions, stock exchange books, etc. Examples of decentralized coordination mechanisms include bargaining, barter, etc.
10 Note that this is not equivalent to saying that
simulations are an inductive way of doing science:
induction comes at the moment of explaining the
behaviour of the model (Axelrod, 1997). Epstein
qualifies the agent-based simulation approach as
‘generative’ (Epstein, 1999), while the logic behind it
refers to abduction (Leombruni, 2002; Werker and Brenner, 2004).
11 These statistics can either be a macro aggregate, or a
micro indicator, as in the case of individual strategies.
In both cases, as a general rule all individual actions,
which in turn depend on individual states, matter.
Of course, there may be (possibly infinitely) many aggregate statistics to look at. Traditional analytical models are generally constrained in their choice of which statistics to look at by analytical tractability. Agent-based simulations are not. Thus, as a general rule, full exploration should be performed. Full exploration means that the behaviour of all meaningful individual and aggregate variables is explored, with reference to the results currently available in the literature. For instance, in a model of labour participation, if firm production is defined, aggregate production (business cycles, etc.) should also be investigated. However, in many cases full exploration is not particularly meaningful. This may happen when some parts of the model (e.g. the demand side for firms' output in a model of labour participation) are only sketched. The model is then investigated only with respect to a subset of all defined variables. When such a partial exploration is performed, this should be clearly stated, and the motivations explained.

Regardless of the specification for f_i, we can always solve equation (2) by iteratively substituting each term x_{i,t} using (1):

Y_t = g_t(x_{1,0}, …, x_{n,0}; α)    (3)

The law of motion (3) uniquely relates the value of Y at any time t to the initial conditions of the system and to the values of the parameters12. Traditional models generally assume very simple functional forms for f_i, in order to obtain analytically tractable expressions for g_t. This function, which is also known as an input-output transformation function, can then be investigated by computing derivatives, etc., and its parameters estimated on the real data. In agent-based simulations, on the other hand, g_t easily grows enormous, hindering any attempt at algebraic manipulation. In order to reconstruct it and explain the behaviour of the simulation model, we must therefore rely on the analysis of the artificial data coming out of many different simulation runs, with different values of the parameters.

Before turning to the data, another decision has to be made, and clearly stated: whether the analysis of the model is performed in equilibrium, out of equilibrium, or both. In this regard, a clarification of the very notion of equilibrium is also needed. Since in every micro-model (no matter whether simulated or analytically solved) both the individual and the aggregate scale are defined, two broad definitions can in fact be used. One is a definition of equilibrium at the micro-level, as a state where individual strategies are constant13. The other is a definition of equilibrium at the macro-level, as a state where some relevant (aggregate) statistics of the system are stationary.

Note that we can have equilibrium at the micro-level but disequilibrium at the macro-level (think for instance of population growth in developing countries, or of periods of financial instability), or the opposite (e.g. stable evolutionary models).

Contrary to traditional microeconomic models, sociological theories and agent-based simulations generally refer to the second definition. In ABM individual behaviour is generally less sophisticated, and expectations are sometimes not even defined. Thus, the invariance of some aggregate measure is preferred as a definition of equilibrium. Both cases can be expressed as a convergence of (3) to a function not dependent on t14:

Y_t → Y_e = g(x_{1,0}, …, x_{n,0}; α) as t → ∞    (4)

Traditional analytical models often impose equilibrium conditions from the outset, assuming that they are always met. Equation (4) is then valid right from the start: the system jumps to the equilibrium. This leads to a backward logical situation, since we need to assume the answer to the problem (which equilibrium the economy will reach) in order to analyse the problem itself (what path the economy will follow from its initial endowment to equilibrium). In social and economic agent-based simulations, on the other hand, as in much of evolutionary economics, the focus of interest is on whether an equilibrium will eventually emerge, i.e. be selected by the dynamics of the system. These different definitions and methods of analysis may confuse the non-practitioner. Great attention should therefore be paid to clearly defining which equilibrium concept has been used, and the strategy adopted to identify the equilibria (e.g. evolutionary selection).
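To fix ideas, here is a minimal sketch of this characterization (our own illustration, not part of the original argument; the adjustment rule, the statistic and all names are invented for the example). Each agent partially adjusts towards the mean state of the others, equation (2) is the cross-sectional variance, and the law of motion (3) is never written down: it exists only implicitly, as the composition of the update rule over t steps.

```python
import random

def f_i(x_i, x_others, alpha):
    # Equation (1): the individual update rule. Here, partial adjustment
    # towards the mean state of the other agents, plus a small shock.
    mean_others = sum(x_others) / len(x_others)
    return x_i + alpha * (mean_others - x_i) + random.gauss(0.0, 0.1)

def s(x):
    # Equation (2): the aggregate statistic Y_t. Here, the cross-sectional
    # variance of the individual states.
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def simulate(x0, alpha, T):
    # Iterating (1) and applying (2) traces out Y_t = g_t(x_0; alpha),
    # i.e. the law of motion (3): it is computed, never written down.
    x = list(x0)
    path = [s(x)]
    for _ in range(T):
        x = [f_i(x[i], x[:i] + x[i + 1:], alpha) for i in range(len(x))]
        path.append(s(x))
    return path

random.seed(0)
x0 = [random.uniform(-1.0, 1.0) for _ in range(50)]
print(simulate(x0, alpha=0.2, T=100)[-1])
```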
The function g expresses the behaviour of the model with respect to the variable Y we are interested in. As we have seen, in an agent-based simulation it remains unknown. However, some intuition about its shape can be gained by running many simulations with different parameters, and analysing their relationship with the outcome of interest. There are two scales on which such an exercise can be done: a global level and a local level.
13 This definition applies both to the traditional homo sociologicus and the traditional homo oeconomicus. In the first paradigm individuals follow social norms and hence never change their behaviour. In the latter, individuals with rational expectations maximize their utility.
14 Or even not dependent on the initial conditions: Y_e = g(α).
12 Sometimes we are interested in the relationship between different (aggregate) statistics: e.g. the unemployment rate and the inflation rate in a model with individuals searching on the job market and firms setting prices. The analysis proposed here is still valid, however: once the dynamics of each statistic is known over time, the relationship between them is univocally determined.
In a global investigation, we are interested in how the model behaves in broad regions of the parameter space, i.e. for general values of the initial conditions and the parameters. This is generally the case when the model is built with a theoretical perspective: the relationship between inputs and outputs has to be understood per se, without reference to the real data. In a local investigation, on the other hand, we are interested in the model only in restricted regions of the parameter space. This is generally the case when the model is built with an empirical goal: we want to replicate some empirical phenomenon of interest, and thus we want to explore the dynamics of the model only around the estimated values of the parameters.
A global investigation is generally done by letting all parameters and initial conditions vary (in a random or systematic way), and then imposing a metamodel

Y_t = m_t(x_{1,0}, …, x_{n,0}, α; β)    (5)

on the artificial data, where β are some coefficients to be estimated on the artificial data. Note that this is nothing else than a sensitivity analysis on all the parameters and initial conditions.
Of course, the final choice of a particular specification for the metamodel remains to a certain extent arbitrary. However, there are methodologies that help in solving this (meta)model selection problem (see Hendry and Krolzig, 2001). Moreover, as long as two different specifications provide the same description of the dynamics of the model in the relevant range of the parameters and the exogenous variables, we should not bother too much about which one is closest to the 'true' one15.
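As an illustration of such a global investigation (a sketch under our own assumptions: the toy model and the linear specification of the metamodel are chosen for brevity, not recommended by the argument above), one can sample the parameter space at random, run the model at each sampled point, and estimate the metamodel (5) by ordinary least squares on the artificial data:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(alpha, sigma, T=200, n=50, seed=0):
    # Toy stand-in for an agent-based model: mean reversion of individual
    # states; returns the final cross-sectional variance as the outcome Y.
    r = np.random.default_rng(seed)
    x = r.uniform(-1.0, 1.0, n)
    for _ in range(T):
        x = x + alpha * (x.mean() - x) + r.normal(0.0, sigma, n)
    return x.var()

# Global investigation: sample the parameter space at random...
samples = rng.uniform([0.05, 0.01], [0.95, 0.5], size=(200, 2))
y = np.array([run_model(a, s) for a, s in samples])

# ...and impose a linear metamodel Y = b0 + b1*alpha + b2*sigma,
# estimated by OLS on the artificial data.
X = np.column_stack([np.ones(len(y)), samples])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("metamodel coefficients:", beta)
```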
A local investigation around given values of the
parameters can also be done by keeping all the
parameters constant but one, which is varied. A
graphical (bivariate) description of the dependency of Y_t on that parameter is often reported, without recourse to a metamodel (see the section on sensitivity analysis below).
below). The crucial point for a local investigation is of
course the choice of the values of the parameters. An
obvious option is to choose the values for which the
behaviour of the simulated system is as close to the
behaviour of the real system as possible, i.e. their
estimates in the real data.
Finally, statistical testing of the properties found in the artificial data should always be performed. For instance, the assertion that the model has reached a stationary state (macro-equilibrium) Y_e for given inputs (x_0, α) must be tested for stationarity or, better, for ergodicity16.
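The following sketch illustrates the logic of such tests (our own illustration; in applied work one would use formal stationarity tests rather than the crude window comparison below). It checks whether the aggregate series has settled down within a run, and whether the time average within one long run resembles the ensemble average across independent runs, as required by ergodicity (see footnote 16):

```python
import numpy as np

def run_model(alpha=0.2, T=400, n=50, seed=0):
    # Toy model; returns the time series of the aggregate Y_t.
    r = np.random.default_rng(seed)
    x = r.uniform(-1.0, 1.0, n)
    path = []
    for _ in range(T):
        x = x + alpha * (x.mean() - x) + r.normal(0.0, 0.1, n)
        path.append(x.var())
    return np.array(path)

y = run_model(seed=1)

# Crude stationarity check: split the second half of the series into two
# windows and compare means relative to the sampling variability.
a, b = y[200:300], y[300:400]
t_stat = (a.mean() - b.mean()) / np.sqrt(a.var() / len(a) + b.var() / len(b))
print("window comparison t-statistic:", t_stat)

# Crude ergodicity check: the time average within one long run should be
# close to the ensemble average across independent runs.
time_avg = y[200:].mean()
ensemble_avg = np.mean([run_model(seed=s)[-1] for s in range(30)])
print(time_avg, ensemble_avg)
```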
4.4 Estimation / Calibration
Parameter estimation can be preliminary to a local
investigation (around the estimates), or can follow the
global investigation of the behaviour of the simulated
system. Here, we refer to estimation as the process of
choosing the values of the parameters that maximise
the accordance of the model’s behaviour (somehow
measured) with the real-world system. We thus do not
distinguish between estimation and calibration. Of
course there are relevant examples in the literature17
where the two terms are given (slightly) different
meanings (see for instance Kydland and Prescott,
1996). However, we agree with Hansen and Heckman
(1996, p.91) that
<<the distinction drawn between calibrating and
estimating the parameters of a model is artificial at
best. Moreover, the justification for what is called
“calibration” is vague and confusing. In a profession
that is already too segmented, the construction of such
artificial distinctions is counterproductive.>>
While advocating a convergence towards the adoption of the term "estimation", which seems best suited to foster the dialogue between agent-based simulation practitioners and econometricians, on this point we advance only a weak methodological recommendation: to carefully define any terminology that is used.
Of course not all parameters deserve the same
treatment. Some of them have very natural real
counterparts, and thus their value is known: we know
the concepts which these parameters represent. The
concepts are operationalized. It is possible to collect
empirical data on the indicators which operationalize
the concepts. E.g., the preferences of parties who
participate in negotiations may be measured by using
questionnaires and document analysis. With respect to
these parameters, the simulation is run with empirical
data. Unknown parameters require a different
treatment. The fact that the function gt is not known
implies that it is not possible to use it directly for estimating the values of the parameters. But structural estimation is still possible via simulation-based estimation techniques (Gourieroux and Monfort, 1997; Mariano et al., 2000; Train, 2003). For instance, we can maximise an approximation of the likelihood instead of the likelihood itself (Maximum Simulated Likelihood). The same principle can be applied to (generalised) method of moments estimation, whose moment conditions can be replaced by simulated approximations.
15 Here, the distinction between in-sample and out-of-
sample values, and the objection that two formulations
may fit equally well the first, but not the latter, is not
meaningful. Any value in the relevant range can be
included in the artificial experiments.
16 Ergodicity means that a time average is indeed
representative of the full ensemble. So, if the system is
ergodic, each simulation run gives a good description
of the overall behavior of the system.
17 For an overview on the discussion see Dawkins,
Srinivasan and Walley, 2001, pp. 3661ff.
With the Method of Simulated Moments, one simply needs to generate simulated data according to the model and choose the parameters that make the moments of the simulated data as close as possible to the moments of the true data. A special case of this is the Method of Simulated Scores, where the moments are based on the first order conditions of maximum likelihood.
of Indirect Inference uses a simplified auxiliary model,
and produces parameter estimates such that the
estimates of the auxiliary model based upon the real
data are as close as possible to those based upon
simulated data from the original model. Clearly, a
natural choice for the auxiliary model is our metamodel (5).
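The logic of the Method of Simulated Moments can be sketched as follows (our own illustration: the toy model, the two matched moments and the "observed" values are hypothetical, and the weighting matrix is the identity for simplicity):

```python
import numpy as np

def simulate_moments(alpha, seed=0, T=300, n=50):
    # Simulate the toy model and return the moments we match:
    # mean and variance of the aggregate series after a burn-in.
    r = np.random.default_rng(seed)
    x = r.uniform(-1.0, 1.0, n)
    path = []
    for _ in range(T):
        x = x + alpha * (x.mean() - x) + r.normal(0.0, 0.1, n)
        path.append(x.var())
    y = np.array(path[100:])
    return np.array([y.mean(), y.var()])

observed = np.array([0.012, 1.5e-5])  # hypothetical real-data moments

def distance(alpha):
    # Average over several seeds to reduce simulation noise, then
    # measure the gap to the observed moments (identity weighting).
    sims = np.mean([simulate_moments(alpha, seed=s) for s in range(10)], axis=0)
    d = sims - observed
    return d @ d

# Method of Simulated Moments: pick the parameter that brings the
# simulated moments as close as possible to the observed ones.
grid = np.linspace(0.05, 0.95, 46)
best = min(grid, key=distance)
print("MSM estimate of alpha:", best)
```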
It is important to stress that the estimation stage is
often missing in agent-based models. When the issue of
parameters choice is considered, most agent-based
simulations offer a rough calibration “by hand”. This
adds to the feeling of fuzziness that many non-practitioners have when confronting the methodology. Conversely, we believe that rigorous estimation procedures should be used, and all relevant details reported.
4.5 Sensitivity Analysis
Sensitivity analysis does not refer only to the problem of sampling the parameter space, already described when discussing the global and local investigation of the model's behaviour. The term "sensitivity analysis" is generally used to describe a family of
methods for altering the input values of the model in
various ways. Such analyses are included in the
validation step of almost all technical simulations (see
Law and Kelton 1991, pp. 310ff). In the natural
sciences and engineering, sensitivity analysis is thus a
standard method for verifying simulation models. The
three major purposes of sensitivity analysis are
corroborating the central results of the simulation,
revealing possible variations in the results and guiding
future research by highlighting the most important
processes for further investigation.
A short review of simulation textbooks and other
studies reveals that the term is currently used as a
general catch all for diverse techniques: there is no
precise definition and no special methodology
currently associated with this term. We define
sensitivity analysis as a collection of tools and methods
used for investigating how sensitive the output values
of a model are to changes in the input values (see
Chattoe, Saam and Möhring 2000). A “good”
simulation model (or a “significant” result) is believed
to occur when the output values of interest remain
within an interval (which has to be defined), despite
“significant” changes in the input values (which also
have to be defined). The development of a typology of
sensitivity analyses involves a
consideration of the status of “input” and “output”
along with a range of possible measures of change or
stability (lack of change). The following kinds of
deliberate input variability can all be seen as
commonly used examples of sensitivity analysis:
Random Seed Variation: Testing for the effect of random elements in the model by repeating a simulation using a different sequence of computer-generated random numbers for each run.
Noise Type and Noise Level Variation:
Testing for the effects of variation in
stochastic elements of the model by varying
the distribution of noise (from normal to
uniform errors for example) or its level for a
particular distribution (changing the mean or
variance of a normally distributed error).
Varying the stochastic elements of a model in
this way differs from varying the random seed
because noise distributions and levels are
parameters of the model while the actual set of
random numbers generated comprise variables
for a particular run of the simulation.
Parameter Variation: Although adjustments to
noise type and level are particular cases of
parameter variation, parameters are used to
refer to a much wider range of fixed or quasi-
fixed elements in models. Indeed, parameter
variation is the nearest we have to a
“paradigm case” for sensitivity analysis, if
only because the term parameter is used so
loosely that very few variables under the
control of the simulator definitely fall outside
it. Parameters can be "physical" (the time taken between conception and birth in a demographic simulation), "cognitive" (the rate of forgetting during some decision-making task) and "behavioural" (the rule used by consumers to relate current consumption to income).
Temporal Model Variation: In order to
simplify social processes for simulation, it is
often desirable to make assumptions about the
order of actions and whether these take place
in discrete or continuous time. It has long
been known (Huberman and Glance 1993)
that interesting results from Cellular Automata
are not necessarily robust to changes from
discrete to continuous time or from fixed to
random updating of cells.
Variation in the level of data aggregation: Although not involving simulation, papers by Attanasio and Weber (1993, 1994) suggest another form of sensitivity analysis. Econometric studies of consumption at the aggregate level are forced to make joint hypotheses about both individual rationality and aggregation. Studies making use of consumption data at the household level reveal the instability of the econometric results (output) to changes in the level of data aggregation (input). In particular, there is an important role for microsimulation techniques (Merz 1994) in exploring the effectiveness of econometric modelling at capturing important patterns in individual data: a critique based on microeconomic data cannot be used to criticise macroeconomic models directly, because its aggregate effects cannot easily be explored. Simulation permits econometric estimations based on the aggregate data generated by the model to be compared directly to the distributions at the individual level.
Variation in the decision processes and capabilities of the agents: Most of the types of sensitivity analysis discussed so far make sense only in the context of "traditional" equation-based approaches to modelling. However, agent-based approaches like Evolutionary Game Theory allow us to investigate the aggregate implications of interactions between individual agents with differing decision processes and capabilities. Well known examples are provided by the Evolutionary Game Theory literature (Weibull 1995) and that on evolutionary tournaments (Axelrod 1987, Miller et al. 1994).
Variation of sample size: Testing for the effect of sample size in the model by repeating a simulation using a different sample size for each run. In particular, the model output may vary considerably with small samples. (A minimal sketch of seed and parameter variation is given after this list.)
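The first and third items in this list, for instance, can be mechanised along the following lines (a minimal sketch under our own assumptions; the toy model and the chosen parameter values are purely illustrative):

```python
import numpy as np

def run_model(alpha, seed, T=300, n=50):
    # Toy model; returns the aggregate outcome of one run.
    r = np.random.default_rng(seed)
    x = r.uniform(-1.0, 1.0, n)
    for _ in range(T):
        x = x + alpha * (x.mean() - x) + r.normal(0.0, 0.1, n)
    return x.var()

# Random seed variation: same parameters, different random numbers.
runs = [run_model(alpha=0.2, seed=s) for s in range(30)]
print("across-seed mean and std:", np.mean(runs), np.std(runs))

# Parameter variation: perturb alpha and check whether the output of
# interest stays within the interval considered acceptable.
for a in (0.1, 0.2, 0.3):
    vals = [run_model(alpha=a, seed=s) for s in range(30)]
    print(f"alpha={a}: mean={np.mean(vals):.4f}")
```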
Most social and economic simulators still omit any form of sensitivity analysis. There is also a definite lack of methodological literature on sensitivity analysis in the social sciences (but see Kleijnen 1992, 1995a,b and a few general methodological texts on sensitivity analysis: Deif 1986, Fiacco 1983, 1984, Köhler 1996 and Ríos Insua 1990).

Our position is: the central results of a simulation model should be corroborated, possible variations in the results should be revealed, and future research should be guided by highlighting the most important processes for further investigation. After all, only robust results are important and will be of interest to the mainstream. And highlighting the most important processes for further investigation helps especially, but not only, non-simulation colleagues in coping with complex simulation models.

4.6 Validation

Even an erroneous model can be estimated. For that reason, any model has to be validated. The term "validity" can be formally defined as the degree of homomorphism between one system and a second system that it purportedly represents (Vandierendonck, 1975)18 19.

18 Homomorphism is used as the criterion for validity rather than isomorphism, because the goal of abstraction is to map an n-dimensional system onto an m-dimensional system, where m < n. If m and n are equal, the systems are isomorphic.

19 For a discussion on the confusion that surrounds the basic definition of validity, see Bailey, 1988.

Stanislaw (1986) has developed a framework for understanding the concept of validity and how it applies to simulation research. He considers:

theory validity: the validity of the theory relative to the simuland (the real-world system being simulated);

model validity: the validity of the model relative to the theory; and

program validity: the validity of the simulator (the program that simulates) relative to the model.

For assessing the overall validity of the simulator all three validities have to be considered. However, from an empirical science perspective this definition should also keep in mind that the real-world system is not just given by the theory. Empirical sciences, like sociology and economics, have elaborated further validity concepts:

operational validity: the validity of the theoretical concept (e.g. intelligence) relative to its indicator (e.g. an intelligence test or scale);

empirical validity: the validity of the empirically occurring true value relative to its measurement.

Traditional (i.e., not formalized) empirical sociological research has to consider theory validity, operational validity, and empirical validity. Traditional economic research additionally considers model validity. Simulation studies which are theory-based and data-based will have to consider all five types of validity.

A short review of simulation textbooks and other studies reveals that the term validation is currently used as a general catch-all for diverse techniques: there is no precise definition and no special methodology currently associated with it. Established tests for validation are the Turing test, the test of face validity, and the test of event validity. Each test is suited to measure a particular type of validity (or combination of validities). Sterman (1984: 52) has suggested heuristic questions rather than tests for validation. These questions are interpreted as tests that aid the diagnosis of errors and assist in the confidence-building process in the model. The confidence stems from an appreciation of the structure of the model, its general behaviour characteristics and its ability to generate accepted responses to set policy changes. In the following we present some of his questions. Heuristic questions that address the validity of model structure are:

Structure Verification: Is the model structure consistent with the relevant descriptive knowledge of the system?

Extreme Conditions: Does each equation make sense even when its inputs take on extreme values?
Boundary Adequacy (Structure): Are the
important concepts for addressing the problem
endogenous to the model?
Heuristic questions that address the validity of model behaviour are:

Behaviour Reproduction: Does the model generate the symptoms of the problem, behaviour modes, phasing, frequencies and other characteristics of the behaviour of the real system?

Behaviour Anomaly: Does anomalous behaviour arise if an assumption of the model is changed or deleted?

Family Member: Can the model reproduce the behaviour of other examples of systems in the same class as the model?

Extreme Policy: Does the model behave properly when subjected to extreme policies or test inputs?
This list of questions is not complete. In particular, since validation of simulation models also requires testing the program's validity, in addition to the other measures of validity necessary for traditional analytical models, further questions might be:

Bug tracking: Are the implications of the model (at least those that can be derived without the assistance of the computer) replicated by the computer program used? (A sketch of such checks, written as automated tests, is given below.)

Modifications of the model due to technical / architectural implementation: Are the results of the model robust to modifications in the technical details of the implementation (e.g. the order of events when simultaneous actions are assumed)?

Only once a model has been thoroughly validated can we be confident enough to trust possibly surprising behaviours, which may point to the existence of a previously unrecognised mode of behaviour in the real system.
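Two of these questions lend themselves to automated checks. The sketch below (ours; the model and the invariants are invented for the example) expresses an extreme-conditions check and a bug-tracking check as assertions in the style of a unit-testing suite: both properties can be derived by hand and are then verified against the program.

```python
import numpy as np

def run_model(alpha, sigma, T=200, n=50, seed=0):
    r = np.random.default_rng(seed)
    x = r.uniform(-1.0, 1.0, n)
    for _ in range(T):
        x = x + alpha * (x.mean() - x) + r.normal(0.0, sigma, n)
    return x

def test_extreme_conditions():
    # Extreme Conditions: with no noise and full adjustment, all agents
    # should collapse onto the common mean.
    x = run_model(alpha=1.0, sigma=0.0)
    assert np.allclose(x, x.mean())

def test_bug_tracking_mean_preserved():
    # Bug tracking: with sigma = 0 the update rule preserves the population
    # mean, a property provable from the model without the computer.
    x0_mean = np.random.default_rng(0).uniform(-1.0, 1.0, 50).mean()
    x = run_model(alpha=0.3, sigma=0.0)
    assert abs(x.mean() - x0_mean) < 1e-9

test_extreme_conditions()
test_bug_tracking_mean_preserved()
print("validity checks passed")
```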
However, most social and economic simulation studies still omit any test of validity. There is also a lack of methodological literature on validity in simulation (but see Dijkum, DeTombe and Kuijk, 1999).

Our position is: the results of a simulation model should be validated. Although there are different types of validity, each scientist knows which type of validity he/she claims for his/her model. Therefore, each simulation study should include an appropriate test of the type of validity that the scientist claims for his/her model.

Moreover, validation may be seen as a social process (Sterman 1984: 51), not just as a methodological one. Therefore, a crucial element in validation is the replicability of a simulation model. We turn to this issue in the following section.
5. Replicability

Many aspects of simulation models contribute to determining their degree of replicability: among them are formalisms, development methodologies, languages, tools and representations.
Since agent based models are expressed through
computer programs, the first requirement is their open
source license distribution. But of course an effective
documentation as well as the choice of a standard tool
makes the difference between a “black box” and a
well-documented agent based simulation. Model
documentation should separate
technicalities from the conceptual description, since
simulations are always a mix of conceptual model and
technical choices that depend on the computer
architecture and the operating system.
Computer scientists have long faced the problem of defining a formalism that can document any software implementation in a very general way. Of course, to become useful such a formalism also has to be adopted as a standard. A promising approach has been introduced with UML.
The Unified Modelling Language (UML), developed by the Object Management Group20, is an attempt to create a formalism, independent of development methodology, that can be used to represent both the static application structure of a software implementation and different aspects of its dynamic behaviour. To use an official definition (OMG 2003), «[t]he Unified Modelling Language (UML) is a language for specifying, visualizing, constructing, and documenting the artifacts of software systems».
Even if UML is closely oriented to software design, it is generic enough to be adapted to describe any algorithmic and object-oriented artefact, like an ABM. The principle of UML design is that computer programs cannot be represented with one formalism only: not only the source code, but also graphical diagrams are necessary to give a reader the key to understand, replicate and modify a program. The OMG has defined many standard diagrams. Some of the most relevant for our purposes are:
Class diagrams, which describe on one side
the collection of static model elements, like
classes and types, and on the other side their
contents and relationships.
Use cases diagrams, which specify the required use of a system. Typically, they are used to show what a system is supposed to do and how software users interact with the system.

Activity diagrams, which emphasize the sequence and condition of agents' behaviours. The actions coordinated by activity models can be initiated because other actions finish executing or because events external to the system occur.

State Machine diagrams, which describe discrete behaviours by showing the finite sequence of states during the lifetime of an object.

Sequence diagrams, which focus on the message interchange between a number of objects. Each message is exchanged within a lifeline, a box identifying the duration of a process.

Much effort has been spent on trying to define a subset of UML specifically suitable to represent multi-agent systems (Bauer et al. 2001, Huget 2002 and Odell et al.).
20 The Object Management Group (OMG) is an open
membership, not-for-profit consortium that produces
and maintains computer industry specifications for
interoperable enterprise applications. Among its
members are the leading companies in the computer
industry (see http://www.omg.org).
21 For an agent-based modeller the concept of an actor may create some confusion. According to the UML symbolism, each object or class defined within the software architecture is represented by squared boxes (the class notation), while each external element (like human operators or hardware equipment) interacting with the software is represented by a stylized human symbol.
Even if all documents are potentially useful to improve model unambiguousness, we propose the consistent use of at least two views: a static representation, with a Class diagram, and a dynamic view, showing the sequence of events that characterizes the simulation experiment.
Class diagrams can be used for the definition of model
organization, with particular interest in its static aspects
and the association relationships among entities. Agents
are represented by classes, their characteristics by
attributes, their capabilities by methods.
[Figure 1: An example of a Class diagram, showing classes such as MarketMaker, Market and Seller, with operations like negotiate(): void, coupleInvestors(): void, evaluateAnOffer(price: Float): boolean and placeAnOffer(seller: Seller): void]
In particular Class diagrams can be used to show three
types of relationships:
an association is a generic relationship
between two classes, sometimes indicating
multiplicity rules (e.g. one-to-one, one-to-
many, many-to-many) for the relationship;
a generalization is the equivalent of an
inheritance relationship in object-oriented
terms (an "is-a" relationship);
a dependency points out when a class uses
another class, perhaps as a member variable or
a parameter, and so "depends" on that class.
Figure 1 shows how classes of agents are associated
and which attributes and operations each agent is
characterized by. The full reference to the symbols
used in the diagram can be found in Si Alhir (2003).
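To illustrate how such a diagram maps onto code, consider the following minimal sketch (our own; the class and method names loosely echo the labels of Figure 1, and Python merely stands in for the Java- or C-based platforms discussed below). Agents become classes, attributes and methods carry their characteristics and capabilities, and a one-to-many association becomes a list of references:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Seller:
    # Attributes of the class diagram become fields...
    reservation_price: float = 1.0

    # ...and its operations become methods.
    def place_an_offer(self, market: "Market") -> None:
        market.offers.append(self.reservation_price)

@dataclass
class MarketMaker:
    def evaluate_an_offer(self, price: float) -> bool:
        return price <= 1.5

@dataclass
class Market:
    # An association with one-to-many multiplicity: a market holds
    # references to many sellers.
    sellers: List[Seller] = field(default_factory=list)
    offers: List[float] = field(default_factory=list)

market = Market(sellers=[Seller(0.8), Seller(1.2)])
maker = MarketMaker()
for s in market.sellers:
    s.place_an_offer(market)
print([maker.evaluate_an_offer(p) for p in market.offers])
```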
But a static view of the system is not enough to fully
document a simulation model: a dynamic view has to
be introduced. For a discrete event simulation the
Sequence diagram looks best suited to show how
events affect the objects during the experiment
execution. However, in order to achieve an effective
dynamic representation we propose a custom
utilization of this diagram.
The Time-Sequence diagram (Sonnessa, 2004) extends the UML Sequence diagram by showing on the left-hand side a special actor21: time. From the time line, some single, cyclic or grouped events may be generated. The arrows show the chain of calls originating from any event. As shown in figure 2, the arrow connecting time and the object receiving the event notification is labelled with the @ symbol, which is used to specify when the event is raised and the name of the event. In the case of looped events, the @t..r notation is used, where t is the instant the event is raised for the first time and r is the loop frequency.

[Figure 2: An example of a Time-Sequence diagram]
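As a sketch of how the @t..r notation can be executed (our own illustration; the event names are invented), a simple priority queue raises each event at time t and re-raises it every r time units:

```python
import heapq

def schedule(events, horizon):
    # Each event is (t, r, action): first raised at time t and, if r is
    # not None, re-raised every r time units (the "@t..r" notation).
    queue = [(t, i, r, a) for i, (t, r, a) in enumerate(events)]
    heapq.heapify(queue)
    while queue and queue[0][0] <= horizon:
        t, i, r, action = heapq.heappop(queue)
        action(t)
        if r is not None:
            heapq.heappush(queue, (t + r, i, r, action))

schedule(
    [(0, 10, lambda t: print(f"t={t}: collect statistics")),
     (5, None, lambda t: print(f"t={t}: one-off shock"))],
    horizon=30,
)
```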
Besides stressing the importance of source code availability, we are convinced that the choice of a standard tool, rather than the use of a general-purpose programming language, could facilitate the diffusion and the replicability of agent-based models. In the development of ABM tools two different approaches are emerging. The StarLogo/NetLogo (Resnick 1994) experience is based on the idea of an ABM-specific language, while the Swarm library (Minar et al. 1996) and some of its followers (JAS, RePast22) represent a protocol in the design process, implemented in standard programming languages (Java, C, etc.). These platforms also provide a set of tools, organized in libraries, with the aim of hiding and sharing common functionalities.

22 JAS (http://jaslibrary.sourceforge.net); RePast.
Our opinion is that both approaches are superior to building models from scratch every time by putting together custom, heterogeneous libraries and toolkits.
6. A three-stage process towards a common protocol

In order to advance from simple methodological recommendations to the development of a widely recognized common protocol, we suggest a three-stage process.

First step: Creation of a working group and development of a questionnaire.
We propose that a working group composed of representatives from scientific journals and professional associations (e.g. the European Social Simulation Association) is created. A questionnaire should then be developed by the working group in order to collect data on simulation approaches as well as the model structures, methods of optimisation, estimation, validation etc. of each newly published simulation model. This questionnaire should include a mixture of standardized and non-standardized
questions. Standardized questions will help in
categorizing newly published simulation models. Non-
standardized questions will help in collecting all sorts
of data on the methods applied (e.g., the type of
validity that a paper claims, the method(s) applied for
testing the model’s validity, a reference for each
method). We have created a draft for the proposed
questionnaire in the Appendix.
Second step: The questionnaire is distributed by
professional simulation journals.
Professional simulation journals in sociology and
economics (JASSS, Computational Economics, etc.)
will be asked to send the questionnaire to each author
who submits a simulation model for publication. Each
author will be requested to fill in the questionnaire.
However, his/her answers will have no effect on the
paper being published.
Third step: The working group analyses the data and recommends a voluntary initial methodological standard for agent-based simulations.

The working group analyses the data and recommends a voluntary initial methodological standard for agent-based simulations, defining a minimum of methodological rigour for each type of simulation model. The standard may define sub-standards that depend on the type of simulation model. Finally, the standard will be published, together with a list of references for each recommendation.
Professional simulation journals in sociology and
economics may adopt the standard and send to their
referees a checklist in order to facilitate the evaluation
of newly submitted manuscripts.
7. Conclusions

In this paper we have argued that agent-based modelling in the social sciences needs a more widely shared common methodological protocol.
Traditional analytical modelling practices rely on very well established, although implicit, methodological standards, both with respect to the way the models are presented and to the kinds of analysis that are performed. These standards are useful because (1) they contribute to the creation of a common language among scientists, (2) they can be referred to without detailed discussion, (3) they force model homogeneity and hence comparability, and (4) they increase methodological awareness and guide individual scientists towards better quality research.

Unfortunately, computer-simulated models often lack such a reference to accepted methodological standards. This is one of the main reasons for the scepticism among mainstream social scientists that results in the low acceptance of papers with agent-based methodology in the top journals. We identified some methodological pitfalls that, in our view, are common in papers employing agent-based simulations.
They relate to the following problematic areas: links
with the literature, description of the model structure,
identification of the dimensions along which the model
behaviour is investigated, definition of equilibrium,
interpretation of the model behaviour, estimation of the
parameters, sensitivity analysis, validation, description
of the computer implementation of the model and
replicability of the results.
Although for each issue we discussed the different options available and identified what we consider to be the best practices, we did not intend to propose such a protocol ourselves. Rather, we proposed a three-stage process that could lead to the establishment of methodological standards in social and economic simulations. This process should start from the creation of a working group of representatives from scientific journals and professional associations (e.g. the European Social Simulation Association). This working group should develop a questionnaire (for which we propose a draft copy) that would be distributed by professional simulation journals to their authors. The working group should then analyse the results and publish a list of methodological recommendations, i.e. a protocol.
23 The state of the system is updated (i.e. observed) only at discrete (generally constant) time intervals. No reference is made
to the timing of events within a period.
24 The state of the system is updated every time a new event occurs. All events are isolated and their exact timing defined.
A Common Protocol for Agent-Based Social Simulation – Draft Questionnaire
The objective of this questionnaire is the establishment of methodological standards in social
and economic simulation. Traditional analytical modelling practices in the social sciences rely
on a very well established, although implicit, methodological protocol, both with respect to
the way models are presented and to the kinds of analysis that are performed. Unfortunately,
computer-simulated models often lack such a reference to an accepted methodological
standard. This is a main reason for the scepticism among mainstream social scientists that
results in the low acceptance of papers with agent-based methodology in the top journals. It is
the goal of this initiative to increase the rate of acceptance of such papers.
Please respond to the following questions in order to help us to increase the methodological
rigour in agent-based social and economic simulation. The first part of the questionnaire
should be regarded as a sort of checklist of all the features we think are relevant in an agent-
based model. Please add some notes if you think more information would be useful. The
second part of the questionnaire requests more details on some specific issues.
1. Links with the literature
Is your model based on some existing model in the simulation literature?
Is your model based on some existing model in the non-simulation literature?
Does the paper contain a survey of the theoretical background of the phenomenon that is modelled?
Does the paper contain a survey of the relevant simulation and non-simulation models?
2. Structure of the model
Have you clarified:
the goal of your model (empirical or theoretical)
whether the implications are testable with real data
the evolution of the population (static or dynamic)
o if static: the total number of agents
o if dynamic: birth and death mechanisms
the treatment of time (discrete [23] or continuous [24])
the treatment of fate (deterministic or stochastic)
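Purely as an editorial illustration (this skeleton and all its names are our own invention, not part of the questionnaire), the items above can be made explicit as declared attributes of the model code, which makes the answers easy to audit:

```python
import random

class LabourMarketModel:
    """Toy skeleton that makes the checklist answers explicit."""
    GOAL = "theoretical"     # empirical or theoretical
    POPULATION = "static"    # static or dynamic
    N_AGENTS = 500           # total number of agents (static case)
    TIME = "discrete"        # discrete or continuous
    FATE = "stochastic"      # deterministic or stochastic

    def __init__(self, seed=0):
        self.rng = random.Random(seed)  # explicit seed supports replication
        self.agents = [{"employed": False} for _ in range(self.N_AGENTS)]

    def step(self):
        # one discrete period: each unemployed agent finds a job
        # with a fixed probability
        for agent in self.agents:
            if not agent["employed"] and self.rng.random() < 0.1:
                agent["employed"] = True

model = LabourMarketModel(seed=42)
for _ in range(50):
    model.step()
print(sum(a["employed"] for a in model.agents), "agents employed")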
Have you classified your model with respect to:
the topological space (no space, nD lattices, graphs…)
the type of agent behaviour (optimising, satisficing…)
the interaction structure (localized or non-localized)
the coordination structure (centralized [25] or decentralized [26])
how expectations are formed (rational, adaptive or other)
Have you clarified the objective of the analysis (full exploration [27] or partial exploration [28])?
Have you clarified the focus of the analysis (equilibrium at micro level [29], equilibrium at macro level [30], or out-of-equilibrium)?
[25] auction, book, etc.
[26] bargaining, etc.
[27] The behaviour of all meaningful individual and aggregate variables is explored, with reference to the results currently available in the literature. For instance, in a model of labour participation, if firm production is defined, aggregate production (business cycles, etc.) is also investigated.
[28] The model is investigated only with respect to the behaviour of some variables of interest.
[29] Defined as a state where individual strategies do not change anymore.
[30] Defined as a state where some relevant (aggregate) statistics of the system become stationary.
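A minimal sketch of how the two equilibrium notions in footnotes 29 and 30 could be checked on simulation output; the window size and tolerance are arbitrary illustrative choices, not values prescribed by the protocol:

```python
# Micro-level equilibrium (footnote 29): individual strategies do not
# change anymore between two consecutive observation points.
def micro_equilibrium(strategies_prev, strategies_now):
    return strategies_prev == strategies_now

# Macro-level equilibrium (footnote 30): a relevant aggregate statistic
# becomes stationary. Here, a deliberately crude check: the means of
# the last two non-overlapping windows differ by less than a tolerance.
def macro_equilibrium(series, window=50, tol=0.01):
    if len(series) < 2 * window:
        return False
    older = sum(series[-2 * window:-window]) / window
    recent = sum(series[-window:]) / window
    return abs(recent - older) < tol

print(micro_equilibrium(["hawk", "dove"], ["hawk", "dove"]))  # True
print(macro_equilibrium([1.0] * 200, window=50, tol=0.01))    # True
```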
Has statistical testing of the properties found in the artificial data been performed?
Have the parameters of the model been estimated / calibrated
based on real data?
Has a sensitivity analysis been performed?
Has validation been performed?
Is the presentation detailed enough to allow the replication of the results?
Have you used a simulation platform to implement your model?
If so, have you clarified which simulation platform you have used?
Can the simulation be run online?
Graphical presentation of the model structure:
[ ] UML diagrams (specify)
[ ] Other diagrams (specify)
[ ] Upon request
Now, please add some details concerning the following specific issues:
a) Is your exploration performed only on a subset of the parameters’ space? If yes, please specify.
b) Which kind of statistical analysis have you performed on the artificial data? (An illustrative stationarity check follows this list.)
[ ] descriptive statistics
[ ] multivariate analysis (metamodelling)
[ ] stationarity / ergodicity tests on artificial time series
[ ] other (please specify)……………………………….
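For the stationarity item above, one common possibility (our illustration, not a method mandated by the questionnaire) is an augmented Dickey-Fuller test; this sketch runs it on a synthetic series using statsmodels:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = rng.normal(size=1000).cumsum()  # a random walk: non-stationary

# adfuller returns (test statistic, p-value, ...); a large p-value means
# the unit-root (non-stationarity) hypothesis is not rejected.
stat, pvalue = adfuller(series)[:2]
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
```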
c) If multivariate analysis / statistical tests have been performed, please list the methods you
used (please indicate a reference for each method).
d) Please list all meaningful parameters that had to be initialized and indicate the method(s)
you used for estimation or calibration (e.g. beta: calibrated / estimated from statistical
data / empirical data collection).
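As one hedged illustration of what "calibrated from statistical data" might mean in practice (the target moment, the toy model and the parameter name beta are all made up for the example), the sketch below fits a single parameter by grid search, minimizing the distance between a simulated moment and an empirical target:

```python
import random

EMPIRICAL_MEAN = 0.35  # hypothetical target moment from real data

def simulate_mean(beta, n=10000, seed=1):
    # toy model: each agent succeeds with probability beta
    rng = random.Random(seed)
    return sum(rng.random() < beta for _ in range(n)) / n

# Grid search over the parameter space, keeping the value whose
# simulated moment is closest to the empirical target.
grid = [i / 100 for i in range(101)]
beta_hat = min(grid, key=lambda b: abs(simulate_mean(b) - EMPIRICAL_MEAN))
print("calibrated beta:", beta_hat)
```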
e) Please mark those features that you tested for sensitivity (an illustrative sketch of the first
two checks follows this list).
[ ] Random seed variation
[ ] Parameter variation
[ ] Noise type and noise level variation
[ ] Variation in the level of data aggregation
[ ] Variation in the decision processes and capabilities of the agents
[ ] Temporal model variation (discrete to continuous time or from fixed to random updating of cells)
[ ] Variation of sample size (esp. small samples)
[ ] other: ……………………………
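A minimal sketch of the first two checks, assuming the model is exposed as a function of a seed and one parameter (the function and its noise_level parameter are invented for the example):

```python
import random
import statistics

def run_model(seed, noise_level):
    # stand-in for a full simulation: returns one aggregate outcome
    rng = random.Random(seed)
    return sum(rng.gauss(1.0, noise_level) for _ in range(100))

# Random seed variation: the qualitative result should not hinge on
# one particular stream of pseudo-random numbers.
outcomes = [run_model(seed, noise_level=0.5) for seed in range(30)]
print("across seeds: mean", statistics.mean(outcomes),
      "stdev", statistics.stdev(outcomes))

# Parameter (noise level) variation: sweep the input and record how
# the outcome responds.
for noise in (0.1, 0.5, 1.0, 2.0):
    outcomes = [run_model(seed, noise) for seed in range(30)]
    print(f"noise={noise}: mean outcome {statistics.mean(outcomes):.2f}")
```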
f) Please indicate the method(s) you applied for testing the model’s sensitivity to input
variation (please give a reference for each method).
g) Please state the type of validity that you claim for your model.
h) Please indicate the method(s) you applied for testing the model’s validity (please give a
reference for each method).
Comments on this questionnaire
You have completed this questionnaire, whose aim is to increase the methodological rigour in
agent-based social and economic simulation. Do you have any comments or recommendations
for improving it?
Thanks a lot for participating!
REFERENCES
Allison, P. (1982) Discrete time methods for the
analysis of event histories, in S. Leinhardt (ed.)
«Sociological Methodology», San Francisco, Jossey-
Bass, pp. 61-98.
Anderson, P.W., Arrow, K.J., Pines, D. (eds.) (1988)
The Economy as an Evolving Complex System,
Redwood City, Addison-Wesley.
Andreoni, J., Miller, J. (1995) Auctions with adaptive
artificial agents in «Games and Economic
Behavior», n. 10, pp. 39–64.
Arifovic, J. (1995) Genetic algorithms and inflationary
economies in «Journal of Monetary Economics»,
vol. 36, n. 1, pp. 219–243.
Arifovic, J. (1996) The behavior of the exchange rate
in the genetic algorithm and experimental
economies in «Journal of Political Economy», vol.
104 n. 3, pp. 510-541.
Arthur, B. (1991) On designing economic agents that
behave like human agents: A behavioral approach
to bounded rationality in «American Economic
Review», n. 81, pp. 353–359.
Arthur, B. (1994) Inductive reasoning and bounded
rationality in «American Economic Review», n. 84,
pp. 406-411.
Askenazi, M., Burkhart, R., Langton, C., Minar, N.
(1996) The Swarm Simulation System: A Toolkit for
Building Multi-agent Simulations, Santa Fe Institute
Working Paper, n. 96-06-042.
Attanasio, O.P., Weber, G. (1993) Consumption
Growth, the Interest Rate and Aggregation in
«Review of Economic Studies», n. 60, pp. 631-649.
Attanasio, O.P., Weber, G. (1994) The UK
Consumption Boom of the Late 1980’s: Aggregate
Implications of Microeconomic Evidence in «The
Economic Journal», n. 104, pp. 1269-1302.
Axelrod, R.M. (1987) The Evolution of Strategies in
the iterated Prisoner’s Dilemma, in L.D. Davis
(ed.) Genetic Algorithms and Simulated Annealing,
London, Pitman, pp. 32-41.
Bailey, K.D. (1988) The Conceptualization of Validity
in «Social Science Research», n. 17, pp. 117-136.
Bauer, B., Odell, J., Parunak, H. (2000) Extending
UML for Agents in G. Wagner, Y. Lesperance, E.
Yu (eds.) Proceedings of the Agent-Oriented
Information Systems Workshop (AOIS), Austin, pp.
Bauer, B., Muller, J.P., Odell, J. (2001) Agent UML: a
formalism for specifying multiagent software
systems in «International Journal of Software
Engineering and Knowledge Engineering
(IJSEKE)», vol. 11, n. 3.
Campbell, C.D., Tullock, G. (1970) Computer
simulation of a small voting system in «Economic
Journal», vol. 80 n. 317, pp. 97–104.
Chattoe, E., Saam, N.J., Möhring, M. (2000)
Sensitivity analysis in the social sciences: problems
and prospects in G.N. Gilbert, U. Mueller, R.
Suleiman, K.G. Troitzsch (eds.) Social Science
Microsimulation: Tools for Modeling, Parameter
Optimization, and Sensitivity Analysis, Heidelberg,
Physica Verlag, pp. 243-273.
Dawkins, C., Srinivasan T.N., Whalley J. (2001)
Calibration in J.J. Heckman, E. Leamer (eds.)
Handbook of Econometrics, vol. 5, Elsevier, pp.
Deif, A.S. (1986) Sensitivity Analysis in Linear
Systems, Berlin, Springer.
DeTombe, D., van Dijkum, C., van Kuijk, E. (eds.)
Dunne, P., Hunter, A., Wan, H.A. (2002) Autonomous
Agent Models of Stock Markets in «Artificial
Intelligence Review», n. 17, pp. 87–128.
Epstein, J.M. (1999) Agent Based Models and
Generative Social Science in «Complexity», vol. 4,
n. 5, pp. 41-60.
Fiacco, A.V. (1983) Introduction to Sensitivity and
Stability Analysis in Non-linear Programming, New
York, Academic Press.
Fiacco, A.V. (ed.) (1984) Sensitivity, Stability and
Parametric Analysis, Amsterdam, North-Holland.
Freeman, R. (1998) War of the models: Which labour
market institutions for the 21st century? in «Labour
Economics», n. 5, pp. 1-24.
Gode, D. K., Sunder, S. (1993) Allocative efficiency of
markets with zero-intelligence traders: Markets as
a partial substitute for individual rationality, in
«Journal of Political Economy», n. 101, pp. 119-137.
Gourieroux, C., Monfort, A. (1997) Simulation-based
econometric methods, OUP/CORE Lecture Series,
Oxford University Press.
Hendry, D.F., Krolzig, H.M. (2001) Automatic
econometric model selection, London, Timberlake
Consultants Press.
Herreiner, D., Kirman, A., Weisbuch, G. (2000)
Market organization and trading relationships in
«Economic Journal», n. 110, pp. 411-436.
Huberman, B.A., Glance, N. (1993) Evolutionary
Games and Computer Simulations, in «Proceedings
of the National Academy of Sciences of the United
States of America», n. 90, August, pp. 7716-7718.
Huget, M. (2002) Agent UML class diagrams revisited
in B. Bauer, K. Fischer, J. Muller, B. Rumpe (eds.),
Proceedings of Agent Technology and Software
Engineering (AgeS), Erfurt, Germany.
Kalaitzidakis, P., Mamuneas, T.P., Stengos, T. (2003)
Rankings of academic journals and institutions in
economics in «Journal of the European Economic
Association» vol. 1, n. 6, pp. 1346–1366.
Kleijnen, J.P.C. (1992) Sensitivity Analysis of
Simulation Experiments: Regression Analysis and
Statistical Design in «Mathematics and Computers
in Simulation», n. 34, pp. 297-315.
Kleijnen, J.P.C. (1995a) Sensitivity Analysis and
Optimization of System Dynamics Models:
Regression Analysis and Statistical Design of
Experiments in «System Dynamics Review», n. 11,
pp. 275-288.
Kleijnen, J.P.C. (1995b) Sensitivity Analysis and
Related Analyses: A Survey of Some Statistical