Eberlen, J., Scholz, G., & Gagliolo, M. (2017). Simulate this! An Introduction to Agent-Based Models and their Power to Improve your Research Practice. International Review of Social Psychology, 30(1), 149–160. DOI: https://doi.org/10.5334/irsp.115
RESEARCH ARTICLE
Simulate this! An Introduction to Agent-Based Models
and their Power to Improve your Research Practice
Julia Eberlen*, Geeske Scholz† and Matteo Gagliolo‡
The method of agent-based modeling is rarely used in social psychology, but has the potential to
complement and improve traditional research practices. An agent-based model (ABM) consists of a number
of virtual individuals – the “agents” – interacting in an artificial, experimenter-controlled environment. In
this article, we discuss several characteristics of ABMs that could prove particularly useful with respect
to recent recommendations aimed at countering issues related to the current “replication crisis”. We
address the potential synergies between planning and implementing an ABM on the one hand, and the
endeavor of pre-registration on the other. We introduce ABMs as tools for both the generation and the
improvement of theory, testing of hypotheses, and for extending traditional experimental approaches by
facilitating the investigation of social processes from the intra-individual all the way up to the societal
level. We describe examples of ABMs in social psychology, including a detailed description of the CollAct
model of social learning. Finally, limitations and drawbacks of agent-based modeling are discussed. In
Annexes 1 and 2, we provide literature and tool recommendations for getting started with an ABM.
Keywords: Social Psychology; Agent-Based Models; Computational Social Sciences; Replication Crisis;
Methodology
Introduction
In recent years, more and more phenomena that were
believed to be established results in social psychology
have been found to disappear when the original
experiments were replicated (e.g. Klein et al., 2014;
Open Science Collaboration, 2015). Recommendations
to prevent replication failures are plentiful and include
building better theoretical foundations, pre-registering
studies before starting the data collection, as well as
encouragements to use different statistical methods.
We believe that agent-based modeling can be a helpful
tool for social psychologists if used as a complement
to traditional research methods. Notably, researchers
can profit from synergies between the development
of an agent-based model (ABM) and several of the
recommendations made in the light of what has become
known as the “replication crisis”. Therefore, we introduce
agent-based modeling and its uses for social psychology
in the current quest for improved research practices, and
provide tools and knowledge necessary to start creating
ABMs yourself.
Agent-based modeling is part of a larger, powerful family
of computational modeling techniques that are used to
better understand and explore social phenomena. These
methods are designed to explore not only the end state
of social and cognitive processes, but also the dynamics
of the process itself (Richardson, Dale & Marsh, 2014).
Agent-based modeling consists of creating an artificial
population of agents that can represent individuals,
organizations, or several groups within a society. Agents
can display considerable variability, both by belonging
to different groups with inherently different traits, and
by possessing traits or displaying behaviors to different
degrees (Epstein & Axtell, 1996). In an ABM, agents
interact with their environment, including other agents,
but also (simulated) resources and physical structures.
These interaction processes take time into account, and
thus can explicitly simulate dynamical processes. Agent-
based modeling allows for the development of testable
theories and systematic experimentation through
simulation (Conte & Paolucci, 2014). An exploration of
the artificial experimental setup, using different rules and
parameters, allows modelers to define a set of rules or
theoretical assumptions about the agent’s behavior which
are sufficient to reproduce phenomena of interest at the
level of the artificial population.
While others have argued for the use of ABMs in social psychology before (Smaldino, 2016; Smaldino, Calanchini & Pickett, 2015; Smith & Beasley, 2015; Hughes,
Clegg, Robinson, & Crowder, 2012; Smith & Conrey,
* Center for Social and Cultural Psychology, Université libre de
Bruxelles, BE
† Institute of Environmental Systems Research, University of
Osnabrück, DE
‡ Group for Research on Ethnic Relations, Migration and Equality,
Université libre de Bruxelles, BE
Corresponding author: Julia Eberlen (Julia.Eberlen@ulb.ac.be)
2007; Jackson, Rand, Lewis, Norton & Gray, 2017), the use
of ABMs in social psychology is still rare. However, we
believe that the current climate of renewal, including the
aforementioned new methodological recommendations
to improve replicability and a lower barrier to computer programming, offers ideal conditions for
the integration of agent-based modeling into the canvas
of research methods in social psychology. We suspect that the biggest remaining hurdle to using ABMs as a complementary tool in social psychology is a lack of knowledge about their many advantages and their synergies with the quest to improve current research practices. Therefore,
we first discuss what social psychologists can gain by
using ABMs. Specifically, we discuss how ABMs are
helpful for theory development, planning and executing
real-life experiments, and how they can provide support
in the endeavor of pre-registration. In addition to these
advantages, having created an ABM can also facilitate
scientific communication as it demands a clear and
detailed understanding of the hypothesis, and it can be
used as a didactical tool to illustrate the key aspects of your
research. These characteristics provide the researcher
with ample opportunity to practice open science. Finally,
compared with traditional methods, ABMs can facilitate
the simultaneous exploration of interactions between
intra-, interpersonal, and intergroup phenomena.
To illustrate the use of ABMs for social psychology, we
take an in-depth look at a model of social learning through
group interaction, the CollAct model (Scholz et al., 2014;
Scholz 2016), and describe several other examples of
ABMs relevant to social psychologists. Following the
presentation of CollAct, we discuss the limitations and
drawbacks of agent-based modeling.
To ease the way into getting started with agent-based
modeling, we provide a selection of literature, tools and
pointers in the annex. This annex is tailored specifically to
the needs and background of social psychologists, so you
can make use of skills you might already possess.
The Role of ABMs in Current Social Psychology
The way classic methods and practices of experimental design, data collection, and data analysis have been used over decades in social psychology is increasingly challenged and criticized (cf. Banks, Rogelberg, Woznyj, Landis & Rupp, 2016; Simmons, Nelson & Simonsohn, 2011).
Accounts of failures to replicate prominent studies have
made it into mainstream media, as was recently the
case with research on the “power pose” effect, where assuming an open, assertive posture was believed to lead to changes in hormone levels, a higher propensity for risk taking, and a stronger feeling of power (Carney, Cuddy
& Yap, 2010; media coverage: Friedman, 2016; Gelman,
Fung & Miller, 2016). Due to such replication failures,
methods and practices that were previously considered standard are now questioned and actively discouraged (e.g. unjustified and small sample sizes,
presenting exploratory analyses as confirmatory), and
methodological alternatives as well as improvements are
proposed (Asendorpf et al., 2013; Nosek & Lakens, 2014;
Pashler & Wagenmakers, 2012).
Currently, several projects are dedicated to rebuilding
the foundation of social psychology. Direct replications of
studies originally conducted before the onset of what some
call a “revolution” (Spellman, 2015) aim at re-establishing
a common ground for our discipline (Klein et al., 2014).
The wealth of propositions to improve the quality
of our research, sparked by this “crisis of confidence”
(Pashler & Wagenmakers, 2012), can be interpreted as an
encouraging sign. Indeed, suggestions for improvement
concern all steps of the scientific process, from the
theoretical foundation and formulation of the hypothesis
(Klein, 2014) to experimental design, pre-registration of
intended studies (Nosek & Lakens, 2014), data collection,
and the analysis of the obtained data (Funder et al.,
2014). An additional prominent recommendation is to
practice open science, making the entire research process
publicly accessible, providing detailed descriptions of
the experimental setup, and letting everybody access
the obtained data, along with data analysis scripts
(Nosek, Spies & Motyl, 2012). Proponents of these
recommendations hope that facilitating communication
about, and understanding of, experimental research will
lead to an increase of its quality (Nosek, Spies & Motyl,
2012). In the following sections, we argue that the
proposed measures to combat questionable research
practices (Banks et al., 2016) are partly overlapping with
the tasks required in the early stages of computational
modeling, turning ABMs into a useful complement to
current social psychology research methods.
What Is an Agent-Based Model?
An agent-based model is a way of conducting virtual
experiments consisting of computer simulations. At the
core of every ABM are the agents which can be defined
as “[…] a computer system that is situated in some
environment, and that is capable of autonomous action in
this environment in order to meet its design objectives.”
(Wooldridge, 1999, p. 29, adapted from Wooldridge &
Jennings, 1995). Agents are implemented as part of the
source code of a computer program. During a simulation
run, agents act autonomously according to rules that
have been defined when programming the model. They
can sense the environment and interact with it, as well
as with other agents, according to rules implemented by
the modeler. Agents’ actions have consequences, thereby
updating the current state of the whole system for the
next set of actions.
Besides the concept of individual agents, ABMs are
all about interactions. These interactions are based on
the behavior rules and can lead to a phenomenon called
“emergence”, a behavior or structure on a higher level (e.g.
the formation of a new norm) which cannot be directly
reduced to actions on the lower level (a few individuals
behaving in a specific way). Instead, emergence is the
aggregate result of the behaviors of a large number
of individuals at the lowest (micro/individual) level.
Emergent phenomena are not “built in” but originate
from the simulated interactions between smaller
entities, the agents. In this way, behaviors or patterns
can be “grown” (Epstein & Axtell, 1996) from defined
interactions at the micro level. Emergence is not specific
to ABMs but is observed in the real world as well (e.g.
in the formation of a norm or the creation of an ant
trail). By having the possibility and need to define
agents’ behaviors, ABMs provide a means to integrate
knowledge that is currently splintered across social
psychology and relate emergent patterns that exist
on an aggregate level to the individual level, at which
behaviors arise.
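To make these ingredients more concrete, the sketch below shows the bare skeleton that most ABMs share: a population of agents, a behavior rule each agent applies when it acts, and a loop in which agents repeatedly sense and update their (shared) environment. It is a minimal illustration in Python with invented names and a toy imitation rule, not a prescription for how any particular model must be implemented.

```python
import random

class Agent:
    """A minimal agent: it holds a state and applies a behavior rule."""

    def __init__(self, opinion):
        self.opinion = opinion  # a toy binary state, e.g. 0 or 1

    def step(self, neighbors):
        # Toy behavior rule: adopt the opinion held by the majority of
        # the agents this agent currently "senses".
        if neighbors:
            mean = sum(n.opinion for n in neighbors) / len(neighbors)
            self.opinion = 1 if mean > 0.5 else 0

def run_simulation(n_agents=100, n_steps=50, seed=1):
    random.seed(seed)
    agents = [Agent(random.randint(0, 1)) for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in agents:
            # Each agent interacts with a few randomly chosen others,
            # which here play the role of its (social) environment.
            agent.step(random.sample(agents, 5))
    # A macro-level observable that emerges from the micro-level rules:
    # the share of agents holding opinion 1 at the end of the run.
    return sum(a.opinion for a in agents) / n_agents

if __name__ == "__main__":
    print("Share of agents holding opinion 1:", run_simulation())
```

Everything of substance in a real model happens inside the agents' step routine: this is where theoretical assumptions about perception, decision making, and interaction have to be made explicit.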
Agent-Based Models and Pre-Registrations
A prominent measure amongst the recommendations
aimed at improving the scientific process is the relatively
new practice of pre-registration (e.g. van’t Veer & Giner-
Sorolla, 2016; Nosek & Lakens, 2014). Despite its young age – at least in psychology – there are already journals that either pledge to give pre-registered studies preferential treatment (Chambers, Forstmann & Pruszynski, 2017) or even
dedicate their space entirely to registered reports, such
as the journal Comprehensive Results in Social Psychology.
Pre-registration means that a study will have to go through
the review process and obtain approval before the data is
even collected, based on the quality of the hypothesis and
experimental design. Submitting a pre-registration entails
writing a detailed outline of the planned experiment,
following a blueprint such as the ones provided by
aspredicted.org (n.d.) and by van’t Veer and Giner-Sorolla
(2016). Such a comprehensive and concrete representation
of the theoretical foundation of the planned experiment
is in essence the same as the first stage in the creation of
an ABM. To create an ABM, every feature or behavior that
will be included in the model needs to be explicitly stated
and formalized. This is necessary not only because there is a protocol, or a best-practice way of going about it, but also because the model itself has to be implemented.
The process of specifying all components of a model
helps to limit hypotheses to those that are explicitly
integrated into the model: what is not there cannot be
taken into account, as it will have no influence on the
model output. Once the model itself is in an advanced stage, gaps in the theory become obvious: parts needed to make the model run, such as specific interaction rules, turn out to be missing. This procedure
is extremely helpful to avoid gaps that might otherwise
invite researchers to form a posteriori hypotheses, and
meaningfully explain (or explain away) unexpected
experimental results.
In short, while designing an ABM, a researcher
necessarily creates an elaborate pre-registration template,
and by implementing it, she puts the experimental design
to its first test. On the other hand, if a psychologist wants
to employ an ABM, having a well-written pre-registration
can be of immense help for setting it up.
While not all modelers agree that an official protocol is necessary to unify the way in which these details are reported (e.g. Smaldino, 2016), protocols such as ODD and ODD+D (Grimm et al., 2006, 2010; Müller et al., 2013) are used more and more frequently to communicate models and to structure and make explicit otherwise implicit knowledge.
In addition, the elaboration of a plan for an ABM and
creating the model itself can improve the theoretical
background of the proposed experiment. This can then
lead to an increase in the quality of the experimental
design as the researcher has a precise idea of what she is
actually investigating.
Theory Building through Agent-Based Models
The suggestion to build an experiment on a solid
theoretical framework (Klein, 2014; Świątkowski &
Dompnier, 2017) is the one that most readily supports
the use of ABMs. Theories serve as an anchoring tool for
the research process. A well-elaborated theory informs
the experimental hypothesis, as even with different
experimental setups, the crucial components to be tested
can be identified. This does not necessarily mean that the
theory will be confirmed. However, without an underlying
theory, a study can explore possible relations between
different variables, but the researcher won’t be able to
draw confirmatory conclusions as there is no theoretical
frame of reference against which the data obtained can be
tested. Testing an intriguing, counterintuitive hypothesis
can be successful, but it might only be successful because
of the specific dataset obtained and not because there is
actually an underlying general phenomenon (Klein, 2014).
In addition, without a well-elaborated theory, data can
be explored in many different ways until the researcher
obtains an interesting result. It is only in the second step
that she will explain the reasoning behind the results,
filling in the gaps of the initial theory.
Agent-based modeling is a method that lends itself
extremely well to the elaboration of theories: As stated
above, in order to create an ABM, it is vital to define the
specific components of the model before the beginning
of the actual modeling process. This forces the researcher
to think thoroughly about the purpose of the model in
the first place. Agents’ knowledge and behavior must be
set and formalized in a conceptual model, whose level of
detail allows for an implementation in a programming
language (cf. Salamon, 2011). Thereby, it becomes obvious if a necessary piece of information is missing. A typical example is rules for dynamic processes and interaction, which are missing from many theories, such as the Theory of Planned Behavior (Ajzen, 1991). Creating an
ABM based on the theoretical reasoning of the researcher can be useful to avoid the auxiliary assumptions which Świątkowski and Dompnier (2017) argue could be a culprit in the current situation in social psychology. Often,
additional (social) influences are not explicitly taken
into account in the statistical analyses. ABMs allow
experimenting without additional influences, or including
assumed social components explicitly, thereby testing
their importance in the initial theoretical assumptions.
Once the model is implemented, it can be used to test
different parameter configurations. This can be useful
to test the robustness of a theory or phenomenon
investigated: minor parameter variations or an increase in
the number of agents should not lead to a major change
in the results obtained with the model. Thereby, obtaining
replicable results from the experimental setup built by the
researcher should not depend solely on recreating original
experimental conditions down to the last detail. The
reader interested in learning more about theory building
through agent-based modeling will find some excellent
and detailed discussions in Smaldino (2016), Smaldino et
al. (2015), Hughes, Clegg, Robinson and Crowder (2012),
as well as Smith and Conrey (2007).
Agent-Based Modeling Can Help to Improve Scientific Communication
Another aspect frequently evoked by proponents of better
research practices is openness: making material necessary
for the experiment, as well as the data obtained, publicly
available (Nosek, Spies & Motyl, 2012). While not true
for all agent-based modelers, it is increasingly common
practice to publish the model, either directly in the form of
its code or online with a user interface. Personal websites,
blogs, but also collaborative software repositories
such as GitHub (github.com, n.d.), and the numerous
model libraries such as the CoMSES library found on
openABM (CoMSES Computational Model Library, n.d.)
make publishing models easy. Sharing ABMs this way
allows other researchers, and even lay people with sufficient skills, to try out the model for themselves
(see, as an example, Gray et al., 2014). Interactive user
interfaces especially benefit scientific communication:
Seeing the implemented theory in action can be a great
help in understanding it in more depth, and it does not
require the user to possess programming skills. Anybody
can then actively explore the implementation of the
experimental hypothesis by manipulating the variables,
exploring what behavior has an impact on specific agents,
on subgroups or on the whole population, and whether
new phenomena do emerge with different configurations.
Proceeding this way, not only can the data be shared, but
the whole experiment can be reproduced by others.
Linking Different Levels of Analysis
Agent-based modeling is an excellent tool to investigate
social phenomena at different levels, from the personal
to the societal. Experiments in social psychology have
to navigate between an experimental setup that mimics
real-world situations at the risk of complicating later data
analysis, and an over-controlled environment trying to
eliminate all potentially interfering variables. In ABMs,
we can create the experimental environment containing
the exact amount of detail needed. There are no particular
limitations to the number and nature of details that
can be included in an ABM, except that they have to be
computable – be it deterministically or stochastically.
However, a strong bias towards simplicity should be
adopted, as overly complicated models are more difficult
to analyze, and the number of model parameters can
be prohibitive due to the dimensionality of the space of
possible parameter settings, which can become too large
to be searched efficiently. We discuss this aspect of agent-
based modeling in more detail in the section “Limitations
and Drawbacks of ABMs”.
The freedom of choice when designing ABMs includes
different levels of aggregation. Doise (1982) provides one
possible description of such levels. He describes them as
the intra-individual (cognitive level), the inter-individual
(situational level, specific to interactions between people
or between individuals and a precise situation), the
level relative to an individual’s position in society, and
finally the level of ideologies, general ethical reference
frames, or beliefs a society develops. When conducting
an experiment in social psychology, the researcher can
strive to manipulate and observe variables on different
levels of analysis. However, it remains difficult to observe and explore in an experimental setting, for example, the circular influence of a change in individual behavior on the individual's neighbors, which then influences a larger group or “society”, which in turn ripples down again to the intra-individual level. The advent of social media data
provides some support for investigating the interaction
of the group level with individual phenomena in the
real world. Nonetheless, disentangling different levels of analysis in this context comes with restrictions as well,
such as the lack of knowledge about intra-individual
decision processes. This can again limit the possibility
of investigating links between emergent, societal-level
phenomena caused by individual or group-level behaviors.
Agent-based models, however, can provide us with
insights on all of these levels. We can, for example,
implement certain traits, preferences or behaviors within
agents, as rules of interaction with other agents and the
modeled environment. Subsequently, we might be able to
observe the emergence of phenomena that are situated
at the level of the society as a whole. A very simple, yet
effective example of such a model is Schelling’s model
of residential segregation (1978), where individual
preferences to live close to some people who share
specific characteristics lead to completely segregated
neighborhoods.
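For illustration, a stripped-down version of Schelling's mechanism can be written in a few dozen lines. The sketch below is our own simplification in Python (grid size, tolerance threshold, and the relocation rule are illustrative choices, not Schelling's original specification): agents of two types occupy a grid, and any agent whose share of same-type neighbors falls below a tolerance threshold moves to a random empty cell. Even with rather tolerant agents, the grid typically ends up visibly clustered, which is the emergent, society-level outcome described above.

```python
import random

def occupied_neighbors(grid, size, x, y):
    """Return the types of the occupied Moore neighbors of cell (x, y)."""
    near = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                nx, ny = (x + dx) % size, (y + dy) % size
                if grid[nx][ny] is not None:
                    near.append(grid[nx][ny])
    return near

def unhappy(grid, size, x, y, tolerance):
    """An agent is unhappy if too few of its neighbors share its type."""
    near = occupied_neighbors(grid, size, x, y)
    if not near:
        return False
    same = sum(1 for n in near if n == grid[x][y])
    return same / len(near) < tolerance

def schelling(size=20, fill=0.8, tolerance=0.3, steps=30, seed=1):
    random.seed(seed)
    # Each cell holds an agent of type 0 or 1, or None if it is empty.
    grid = [[random.choice([0, 1]) if random.random() < fill else None
             for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        empties = [(x, y) for x in range(size) for y in range(size)
                   if grid[x][y] is None]
        for x in range(size):
            for y in range(size):
                if grid[x][y] is not None and empties and unhappy(grid, size, x, y, tolerance):
                    # The unhappy agent relocates to a randomly chosen empty cell.
                    ex, ey = empties.pop(random.randrange(len(empties)))
                    grid[ex][ey], grid[x][y] = grid[x][y], None
                    empties.append((x, y))
    return grid

if __name__ == "__main__":
    final = schelling()
    # Printing `final` row by row reveals clusters of 0s and 1s despite the mild preferences.
```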
Inherently, models are simplifications of reality. Even so,
agent-based models provide the possibility to implement
as much detail as we require. On the one hand, this allows
us to render a naturally complex situation simpler, thereby
disentangling different variables and their influence. On
the other hand, however, we can also implement the
results of a highly controlled experimental situation in
a more realistic setting. An example of this could be to
use data obtained through an implicit association test
(IAT, Greenwald, McGhee & Schwartz, 1998) to model
individual biases of our agents. Then, by running the
model over several time steps, we can observe how
personal biases in several individuals might lead to larger
phenomena, spanning more than the intra- or even inter-
individual level.
Using Agent-Based Models to Widen the Scope of
Investigation
A typical problem in social psychology is the low statistical power of many experiments, which constrains their capacity to detect a true effect. The two
main reasons for this are small sample sizes and a small
base rate occurrence of the effect investigated in the
general population, provided it exists. Creating an ABM
of the experimental hypothesis will not allow a researcher
to continue doing underpowered laboratory or field
studies. However, it can complement a well-designed,
sufficiently powered study: Once a model is implemented
and validated through comparison to experimental or
empirical data, it is possible to increase the number of
agents to a sample size that is not obtainable by traditional
methods. It is then interesting to observe whether the
phenomena observed with a smaller number of agents
still hold for an entire population, when interactions of
a larger number of agents are taken into account. This
also serves as a test of whether change in the dependent measures used in the experimental setting is truly due to the experimental manipulation or hypothesized variables and not to another, unaccounted-for influence: the model contains only what the researcher implemented, so if a phenomenon is not observed in the model but only in the real-world setting, the underlying theoretical assumptions are at least incomplete. Another possibility would be to
introduce additional interaction structures, such as agents
situated in a network. This has been done, for example, by Luhmann and Rajaram (2015; see the following section).
Another advantage of ABMs is the ability to change interaction rules and variables in order to explore how this affects simulation results. This type of model exploration is one of the greatest strengths of agent-based modeling, and can
lead to the formulation of new research questions. For
example, the effects of a specific variable or interaction
rule can be tested in the model, and if interesting effects
appear, these can be tested in follow-up real-world
experiments. Additionally, starting out with an ABM
implementation of the hypothesis allows testing it under ideal conditions before investigating it experimentally. The
researcher will then have a better understanding, given
her hypothesis, of the magnitude of effect she might
expect. This increases her ability to estimate whether her
resources are well invested in this study.
Agent-Based Models in Social Psychology
Agent-based modeling has previously been used both by
social psychologists and by modelers making use of social
psychological paradigms. We use four recent examples to
illustrate how ABMs have been used to explore research
questions relevant for social psychologists. Then, we
provide a more detailed description of the CollAct model
(Scholz et al., 2014; Scholz, 2016), in order to illustrate the
main principles of agent-based modeling.
Festinger’s social comparison theory (SCT) has been
implemented by Van Rooy, Wood, and Tran (2016). The
model is based on a connectionist framework, where each
agent is capable of relatively complex learning principles,
as well as a dynamic network context, where agents create
and loosen ties based on similarity in their attitudes. This
implementation of the SCT brought new insights to both
social psychologists and modelers: For psychologists,
breaking down SCT into step-by-step instructions for a
computer program clears up “aspects of the theory [that]
are couched in ambiguous verbal descriptions” (Van Rooy,
Wood & Tran, 2016). Agent-based modelers, on the other
hand, have a tendency to simplify the cognitive aspect of
agents, thereby reducing the (social) psychological validity
of a model. Van Rooy, Wood, and Tran's model is also a
good example of the integration of ABMs and a real-life
experiment, where one reproduced similar results to the
other.
Gray and colleagues (2014) have explored group
formation in a homogeneous population, based on
reciprocity and transitivity. The agents in this model initially
don’t belong to different groups, nor do they possess
features that would justify an external classification in one
group or another. Rather, the model investigated whether
cooperating and defecting in the prisoner’s dilemma
would, over time, lead to group formation in a population
where agents can form and sever ties with each other. The
influence of trust, reciprocity, transitivity, and the number
of agents on the formation of groups was manipulated
by the researchers, and you can do the same on www.
mpmlab.org/groups/. This interactive version provides a
compelling additional tool to help readers understand the
underlying reasoning and implications of the variables
implemented in this model of group formation.
Luhmann and Rajaram’s model (2015) of memory
transmission in groups is an illustration of the use
of agent-based modeling as a complementary tool.
First, they simulated different aspects of empirically
investigated phenomena of memory transmission
in groups. By choosing to build their ABM based on
the same experimental paradigms as the real-world
experiments in the relevant literature, their model was validated against experimental data. After this corroboration of their ABM, they extended it to a number of agents beyond the sample sizes possible in controlled laboratory experiments. This allowed
Luhmann and Rajaram to discuss the robustness of the
investigated phenomena: if the theoretical assumptions
are true and the collected data generalizable, they should
lead to the same outcome, whether implemented in
the model or tested in a laboratory setup, and should
withstand variations in sample sizes. Furthermore, using
an ABM allowed them to explore new dimensions of
memory transmission: agent communities had the same
initial setup for individual behavior, but differed in
network structure. This configuration allowed Luhmann
and Rajaram to observe how memory transmission
works outside of closed groups, which would have been
unfeasible in a traditional experimental setting.
Finally, agent-based models have already played a role in
the current debate about replicability. Namely, Smaldino
and McElreath created an ABM to investigate how the
quality of publication evolves in an environment that
values original research over replication efforts (Smaldino
& McElreath, 2016), unfortunately with discouraging
results. This final example illustrates the value of creating
an ABM prior to implementing a real-life experiment or
intervention method: The results of the ABM suggest
that low-effort science is more successful than high-
quality – but also time and resource intensive – science,
and replication efforts in the current form might not be
sufficient to prevent future editions of the current crisis
of replication. This ABM also illustrates that an agent can
be a unit other than a person: Smaldino and McElreath treat each agent as representing a lab rather than an individual researcher.
The examples of ABMs given here are by no means
exhaustive: for more examples, see Jackson et al. (2017),
who provide a table of ABMs relevant to social psychology.
To further demonstrate the creation and usefulness of an
ABM, we introduce the CollAct model in more detail.
CollAct, a Model of Group Discussions
To understand the technique of agent-based modeling
in more detail, we now discuss CollAct (simulating
collaborative activities; Scholz, 2016; Scholz et al.,
2014), an ABM of group interaction, as a more elaborate
example of a model using findings of social psychology.
CollAct is an explorative model, designed to help analyze factors that influence learning and to explore the social dynamics occurring in group discussions. It is built
upon the idea that group interaction can foster social
learning processes. These are in turn expected to enable
or promote social change for sustainability (e.g. Muro &
Jeffrey, 2008). To this end, CollAct models both cognitive
knowledge (referring to knowledge about a topic at stake)
and relational knowledge (referring to the perception
of other participants and self-perception), as well as
learning. Agents in CollAct discuss an abstract issue (e.g.
a management plan) and try to reach a consensus. Here,
consensus is defined as a general agreement that might
include aspects of the discussed topic on which certain
participants have doubts or disagreements, but do not
communicate them. Cognitive and relational knowledge
are used to interpret incoming messages and decide upon
further actions (sending out a message, and if so, which
message). CollAct is implemented in Repast Simphony
(North et al., 2013). An executable version and an ODD description of CollAct can be downloaded here: https://www.openabm.org/model/4255/version/1/view. Please note that, in order to run this model, you need to install Repast Simphony first (see Annex 2).
In CollAct, agents discuss with each other in a virtual
room called discussion by exchanging messages. Messages
contain information about the speaker, the content,
which is an aspect of the issue at stake, and whether or
not this content should be included in the consensus.
The discussion takes place in a turn-taking manner, and all agents hear all messages. If more than one agent wants to speak, a random process decides which agent speaks first. Furthermore, a protocol saves the recent messages and the frequency of content-related messages, to ensure path-dependency in the discussion. Content that a sufficient number of messages advocated for is included
in the consensus. We focus our description on the class
participant, which implements the agents. Agents in
CollAct have mental models, referring to personal internal
representations of the surrounding world that determine
how one observes the environment (Johnson-Laird, 1983;
Jones, Ross, Lynam, Perez & Leitch, 2011; Kolkman, 2005;
Norman, 1983). Agents use the knowledge in their mental
models to evaluate the perception of the environment,
i.e., to interpret the incoming messages. Every agent
has a mental model consisting of two “sub-models”: the
substantive model (knowledge about the topic at hand)
and the relational model (knowledge about other actors
and self-perception). Individual characteristics are captured through differences in the knowledge contained in the mental models
(about the discussed topic, other participants, and self-
perception). The relational models of agents are modeled
as real numbers between 0 and 1. Substantive models
represent the importance that an agent attributes to a
set of aspects of the issue currently discussed, which can
be communicated in a message. Substantive models are
implemented as an array. An array is a data type you can
imagine as a box with different compartments, labeled
by increasing numbers. Scholz et al. (2014) linked every
compartment to a specific aspect (e.g., compartment
4 refers to aspect xy). A “1” implies an agent finds this
aspect important, a “0” that the agent does not find it
important or does not know about it. Figure 1 shows
a representation of such a substantive model, a box
with different compartments filled with 0’s and 1’s.
Learning is simulated through change in the substantive
and/or relational model of an agent. The implementation
of learning was based on the findings that confrontation
with new knowledge can lead to a change in concepts
(Anderson, 2000), and that people develop concepts quickly on little evidence and tend to stick to these concepts unless confronted with strong contradicting evidence (Dörner, 1999). Figure 1 shows how the substantive model of an agent may develop during the simulation run.

Figure 1: Example displaying how the substantive model is implemented in an array. The agent changes her substantive model three times, learning new aspects the first two times and forgetting an aspect the third time.
The emerging consensus in the simulation run is
modeled by a similar array. In this way, it is easy to compare
whether and to what extent individual agents’ mental
models and the negotiated consensus overlap in the end.
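To make this representation tangible, the small sketch below illustrates the idea in Python (the published model is implemented in Repast Simphony; the values and the overlap measure shown here are purely illustrative). Both a substantive model and the emerging consensus are binary arrays over the same set of aspects, learning and forgetting flip individual entries, and the overlap between the two arrays can be computed directly.

```python
# Illustrative sketch of the array representation used in CollAct
# (our simplification in Python; the actual model runs in Repast Simphony).

# Substantive model of one agent: a 1 means "this aspect is important to me",
# a 0 means "not important" or "unknown".
substantive = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]

# Learning: hearing a convincing message about aspect 3 flips that entry to 1.
substantive[3] = 1
# Forgetting (or revising): aspect 8 is dropped again.
substantive[8] = 0

# The emerging consensus is stored in an array of the same length.
consensus = [0, 1, 0, 1, 1, 0, 0, 0, 0, 0]

# Overlap: the share of consensus aspects that this agent also holds
# in her own substantive model (one possible illustrative measure).
shared = sum(1 for c, s in zip(consensus, substantive) if c == 1 and s == 1)
support = shared / sum(consensus) if sum(consensus) else 0.0
print(f"The agent supports {support:.0%} of the consensus aspects.")
```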
In group discussions, conformity is one major influence
(cf. Baron & Kerr, 2003). To mediate and integrate effects
from the relational model with the substantive model
and the ongoing discussion, conformity is modeled as a
cognitive bias. To this end, the Asch effect (Asch, 1951)
and the halo effect (Thorndike, 1920) are integrated as
thresholds in the behavior of the agents. The strength of
these biases is a parameter that can be set to test different
scenarios in the simulation.
The routine determining whether an agent decides to speak up, and what she would say, is implemented as a decision tree incorporating stochastic influences. Figure 2 displays
the core of this decision routine. To understand how the
decision tree is used, we describe one possible path along
its branches: In the beginning, the agent checks whether
she is interested in the content or the speaker/sender of
the message. To this end, the message is first compared to
the agent’s own substantive and relational model. If the
derived values suggest that the agent is neither interested
in the content nor in the speaker, the agent decides to
send out a message with a new content.
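As a rough illustration of such a routine, the sketch below mimics the spirit of this decision tree in Python. It is not the actual CollAct implementation (the full specification is given in the ODD document linked above); the data structures, thresholds, and probabilities are hypothetical placeholders.

```python
import random

def choose_action(agent, message, p_asch=0.3, p_halo=0.3):
    """Hypothetical, simplified decision routine inspired by CollAct's decision tree.

    `agent` holds a binary substantive model and a relational model (esteem values
    between 0 and 1); `message` names a speaker and a content index. All names and
    thresholds are illustrative, not those of the published model.
    """
    # Interest in the content: is the message's aspect part of my substantive model?
    likes_content = agent["substantive"][message["content"]] == 1
    # Interest in the speaker: how highly is she regarded (relational model)?
    esteem = agent["relational"].get(message["speaker"], 0.5)

    if likes_content:
        return ("support", message["content"])
    if random.random() < p_halo and esteem > 0.7:
        # Halo effect: endorse the content because of who proposed it.
        return ("support", message["content"])
    if random.random() < p_asch:
        # Asch effect: conform and stay silent rather than voice disagreement.
        return ("stay_silent", None)
    # Neither content nor speaker is of interest: propose a new content.
    return ("new_content", random.randrange(len(agent["substantive"])))

agent = {"substantive": [0, 1, 0, 0, 1], "relational": {"B": 0.9}}
message = {"speaker": "B", "content": 2}
print(choose_action(agent, message))
```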
CollAct was set up stepwise, testing each model part (e.g.
the discussion) for proper and reasonable dynamics and
outputs. This procedure helps to avoid implementation errors and to understand the dynamics produced by the
different model parts (e.g. whether and how path-
dependency in the discussion works), thus facilitating
model-building and understanding of the model results.
To estimate the influence of different parameter values on
the outcome, a sensitivity analysis was performed using
the parameter sweep function from Repast Simphony.
During a parameter sweep, the program performs several
runs of the model with different parameter values within
a range predefined by the modeler. For the final model,
results were discussed with experts and compared to
existing literature.
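CollAct relies on the parameter sweep facility built into Repast Simphony for this step. Conceptually, a sweep is nothing more than a set of nested loops over predefined parameter ranges with several stochastic repetitions per setting, as in the generic Python sketch below (run_model, its parameters, and the ranges are placeholders for whatever model and outcome measure one is working with).

```python
import itertools
import statistics

def run_model(group_size, conformity, seed):
    """Placeholder for a single simulation run returning an outcome measure."""
    # ... the actual model would go here ...
    return 0.0

# Predefined ranges for the swept parameters (illustrative values).
group_sizes = [4, 6, 8]
conformity_levels = [0.1, 0.3, 0.5]
repetitions = 30  # several runs per setting to average out stochastic influences

results = {}
for group_size, conformity in itertools.product(group_sizes, conformity_levels):
    outcomes = [run_model(group_size, conformity, seed=r) for r in range(repetitions)]
    results[(group_size, conformity)] = (statistics.mean(outcomes),
                                         statistics.stdev(outcomes))
```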
CollAct produces discussions with successive clusters
of messages on the same aspects of an issue at stake, the
development of a shared understanding, and the shift
of roles through learning in relational models. Figure 3
displays the output from one single run, in which different
agents (speakers) “talk” about different contents.
The factors that turn out to influence the consensus and the amount of learning are group size, the level of controversy within the discussion, available knowledge, knowledge distribution, and conformity. Regarding the influence of conformity on the consensus, results suggest that while high conformity and low controversy in the discussion both foster a broad consensus comprising many aspects, cognitive learning is needed to build a shared understanding and to increase the support of a consensus (overlap of the consensus with the mental
models of agents). This result is intuitive. Nevertheless,
in the scientific discussion on social learning in natural
resources management, the need for cognitive learning, and thus the need for higher resources (e.g. time), is often not as prominent; only recently did Beers and colleagues (2016) find that conflictual interaction might enhance learning. This demonstrates that using an ABM such as CollAct as a thinking tool can help to draw more consistent conclusions from assumptions, including your own.

Figure 2: Agents' decision routine for choosing a message. Possible outcomes are in green boxes. r refers to a random number, while the probabilities for the Asch and the halo effect to occur are parameters that can be varied. The values for content and person are derived when the agent evaluates whether she agrees with the last message (is the content included in my substantive model?) and who the speaker was (value in the relational model).

Another result from CollAct is that high mutual
esteem and the building of a shared understanding
reinforce each other. This is due to the implementation
on the micro level: first, the probability for agents to learn
from each other depends upon their mutual esteem; and
second, mutual esteem tends to be high in the presence of a similar opinion. These two micro-level assumptions result
in a reinforcing feedback between high mutual esteem
and the building of a shared understanding at group
level. Hence, CollAct can serve as a thinking-tool to link
theoretical assumptions at the micro level to emergent
outcomes at the group level, supporting the analysis of trade-offs in group interaction.
Moreover, not only the parameters, but also micro-
level assumptions and input values for agents’ mental
models can be varied. Through a variation of micro-level
assumptions (e.g. a different decision tree) different
hypotheses can be tested for their ability to reproduce
realistic model outputs. In this way, hypotheses can
be specified, and gaps in an explanation (where no
realistic behavior is observed) may become obvious.
Simulating different mental model combinations can help to identify characteristic group compositions that result
in interesting outcomes in simulation experiments.
Such characteristic group compositions and dynamics
can then be further investigated in empirical research,
and if confirmed, CollAct may serve to test intervention
measures (e.g. increasing the controversy of the
discussion).
Limitations and Drawbacks of Agent-Based Models
Despite the numerous benefits of agent-based modeling
as a research tool, there are several challenges associated
with creating ABMs. We address common difficulties such as the integration of too many features and the choice of parameters. Model results are often criticized either for being trivial or, on the other hand, for being too complex and, because of their surprising results, probably wrong (Waldherr & Wijermans, 2013).
As we mentioned already in the section “Linking
Different Levels of Analysis”, selecting the adequate
number of parameters, features, and behaviors to include
in the model can be challenging, both on the practical
and the theoretical level. In practical terms, integrating a large amount of detail makes programming the model more challenging, as each model feature needs to be defined and integrated with the other model components in a meaningful way.

Figure 3: Output from CollAct showing successive clusters of messages on the same aspects of an issue at stake (contents 0–30, referring to a specific compartment of the substantive model of agents). Different speakers are displayed by their identification number (e.g. in time step 11, agent number 1 sends a message about content 12, followed by a discussion about this content in which all six agents send messages). A value of –1 for the speaker (blue) means that no agent is speaking at that time step.

However, even if the
scientist overcomes this hurdle, a model with a large
number of parameters will be of limited theoretical
value. One reason for this is the so-called “curse of
dimensionality”: increasing the variables integrated in
the model decreases the number of observations per cell.
If we want to obtain meaningful statistical results, it is
useful to either keep the number of variables as low as
possible or increase the number of agents and runs. A
different and arguably even more important aspect of the
“curse of dimensionality” is that it might not be possible
to interpret the model in a meaningful way if there
are too many variables to take into account. As social psychologists, we are well versed in simultaneously referring to and ignoring this limit in our work (“further investigation of V, taking X, Y, and Z in addition to W into account, is needed to clarify this question”). Creating an ABM gives us the possibility to explore further and more complex interactions between variables, but
any modeler has to be careful to avoid including more
components than necessary. This is the principle of
Occam’s razor applied to agent-based modeling: limiting
model parameters to those strictly necessary for the
implementation of the hypothesis.
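A quick back-of-the-envelope calculation makes the problem concrete. The figures in the sketch below are purely illustrative, but they show how fast a full exploration of the parameter space grows as parameters are added.

```python
# Illustrative arithmetic only: parameter counts, levels, and repetitions are made up.
levels_per_parameter = 10
runs_per_cell = 30  # stochastic repetitions needed for stable statistics

for n_parameters in (2, 4, 6):
    cells = levels_per_parameter ** n_parameters
    print(f"{n_parameters} parameters -> {cells:,} cells, "
          f"{cells * runs_per_cell:,} simulation runs")
# 2 parameters -> 100 cells, 3,000 runs
# 4 parameters -> 10,000 cells, 300,000 runs
# 6 parameters -> 1,000,000 cells, 30,000,000 runs
```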
However, the ability to accommodate more than the
necessary number of variables can also be a strength of
ABMs: it allows us to explore which variables actually
add value and to identify those that are critical to the
implemented theory. By stepwise increasing or reducing
model complexity, ABMs can help us to define the limits
of a theory, hypothesis, or experimental finding more
clearly. Once these limits are identified, the researcher
can make an informed decision as to whether the
hypothesis is suited to be tested in a traditional, real-
world experiment.
Regardless of the number of components implemented, their selection (such as the number and type of agents, the rules, and the updating processes) can also be subject to criticism. This problem of model specification
is not unique to ABMs: in other types of models, e.g.
equation-based models, details that have to be specified
in ABMs are aggregated in functions and parameters,
and thus, uncertain design choices are “hidden” in the
model. In an ABM, they can be made explicit. This leads
to a better understanding of the processes that the
modeler set out to investigate in the first place, as well
as more transparency in comparison to equation-based
models or experimental designs where the details of the
theoretical foundations can be glossed over (Wilensky &
Rand, 2015, p. 36). At the same time, the freedom and necessity to define all model elements is the largest challenge when designing an ABM. To arrive at an executable model, it is possible to leave in free parameters that cannot be empirically measured. Such parameters can be varied to explore the model behavior, and they can also be calibrated to empirical data from the system which is modeled. This bears the danger of “overfitting”, or the
process of adapting values for parameters so precisely
that they describe one specific sample of data instead of
a more general phenomenon.
What differentiates the processes of selecting features
and calibrating parameters from the questionable research
practices of adapting hypotheses after the collection of
data, or leaving out collected variables and data points in
order to make a scientific contribution more interesting
for publication, lies in the nature of the ABM as a closed
system. Of course, the modeler can simply refrain from
reporting certain aspects of her model, but she cannot
leave them out of her implementation. However, we have
to stress that, ultimately, there is no substitute for good
scientific conduct and research ethics.
Another pitfall of ABMs is to take the implemented procedures, based on theoretical assumptions, and the resulting model outcomes as a mirror image of the same processes and observations in the real world. While the model can
serve as a “proof of concept”, it cannot be conclusive
evidence by itself. Rather, as stressed before, it is a
valuable addition to existing research methods and can
help alleviate difficulties researchers encounter when
using empirical methods alone, as described in the first
part of this article.
Quite often, the results obtained from an ABM seem
obvious. This is similar to hindsight bias in classic
experiments. There are two scenarios: First, by creating an
ABM, we have realized that the results are actually trivial
– in this case, the ABM has fulfilled its use as a thinking
tool. Second, the possibly unexpected result seems
suddenly more plausible than the initially predicted
outcome. Unlike in a real-world experiment, here we have the assurance, provided by the implemented ABM, that the surprising result is actually the outcome of the dynamics programmed into the model.
Finally, the main obstacle to overcome is often that of
learning to build an ABM, which can be quite challenging
and involve a steep learning curve. It implies acquiring
both new theoretical knowledge about ABMs and new
practical skills for programming the model. We provide
suggestions for accessible literature on agent-based
modeling in Annex 1. These will help you to acquire the knowledge necessary to translate a verbal theory or hypothesis into an ABM. Regarding the use of programming languages, many scientists might already be more familiar with these than they think: languages like R (R Development Core Team, 2008) and Python (Python Software Foundation, n.d.) are increasingly common instruments for statistical analyses and for programming experiments. To
facilitate the choice between the different tools available,
annex 2 contains a number of tools, packages and software
recommendations for the creation of ABMs.
Conclusion
Despite the previous appeals for the use of agent-based
modeling, it is not a commonly used method in social
psychology yet. We believe that the time for ABMs has
come, not least because of the need for improvement
of our current scientific methods. Pre-registrations,
improving the theoretical foundation of our research,
and other recommendations to combat the replication
crisis can create synergies with, or directly profit from,
the creation of ABMs. We assume the main reason for the
limited use of ABMs in social psychology is merely that
this technique is not (yet) widely known, and that those who have heard about ABMs are intimidated by the challenging process of getting acquainted with it.
The wider use of computer programming in psychology
may lower the hurdle to get involved with creating ABMs. The
number of researchers using some form of programming, be
it for data analysis, designing experiments, or any other aim
such as producing a website, has grown in the last decade.
Moreover, the usability of the available software libraries
has improved. Therefore, the creation of a computer-based
simulation should not seem as intimidating, as it does not
have such a steep learning curve as just a few years back.
Still, integrating a new methodology into an established
research routine can be time consuming. Here, again,
agent-based modeling has characteristics that can facilitate
the steps to a first model. Due to its use in several different
disciplines, uniting researchers with different theoretical as
well as methodological backgrounds, there is a large number
of different tools available. In the annex, we provide a non-
exhaustive list of those resources and instruments. We start
with different model libraries and journals dedicated at
least in part to ABMs, as sources of further examples. This
is followed by recommendations of practical guidelines in
the literature, online tutorials, and introductions. Then, you
will find a short selection of programs and programming
languages that can be used for agent-based modeling, with
a focus on accessibility as well as pre-existing knowledge
social psychologists might already possess. Finally, we give
recommendations on where to look if you are searching for
more experienced modelers to initiate collaborations, as
well as university departments that might offer classes on agent-based modeling.
Agent-based modeling as a complementary research method has a high potential to facilitate and improve the research practice of social psychologists. Particularly in the
light of the current replication crisis, and with increasing
computer literacy, we believe that this is the right point in
time for social psychologists to start using ABMs.
Additional Files
The additional files for this article can be found as follows:
• Annexe 1. How to Get Started with Agent-Based Mod-
eling: Introductory Literature, Model Libraries and
Tutorials. DOI: https://doi.org/10.5334/irsp.115.s1
• Annexe 2. Software and Languages for the Creation
and Exploration of Agent-Based Models. DOI:
https://doi.org/10.5334/irsp.115.s2
Competing Interests
The authors have no competing interests to declare.
References
Ajzen, I. (1991). The theory of planned behavior.
Organizational Behavior and Human Decision
Processes, 50(2), 179–211. DOI: https://doi.org/
10.1016/0749-5978(91)90020-T
Anderson, J. R. (2000). Cognitive Psychology and Its Implications (6th ed.). New York: Worth Publishers.
Asch, S. E. (1951). The Effects of Group Pressure Upon
the Modification and Distortion of Judgments. In:
Guetzkow, H. (Ed.), Groups, Leadership and Men:
Research in Human Relations, pp. 177–190. Carnegie
Press.
Asendorpf, J. B., Conner, M., De Fruyt, F.,
De Houwer, J., Denissen, J. J. A., Fiedler, K.,
Wicherts, J. M., et al. (2013). Recommendations
for Increasing Replicability in Psychology. European
Journal of Personality, 27(2), 108–119. DOI: https://
doi.org/10.1002/per.1919
AsPredicted: Home. (n.d.). Retrieved October 19, 2016,
from: https://aspredicted.org/index.php.
Banks, G. C., Rogelberg, S. G., Woznyj, H. M.,
Landis, R. S. & Rupp, D. E. (2016). Editorial:
Evidence on Questionable Research Practices: The
Good, the Bad, and the Ugly. Journal of Business
and Psychology, 31(3), 323–338. DOI: https://doi.
org/10.1007/s10869-016-9456-7
Baron, R. S. & Kerr, N. L. (2003). Group Process, Group
Decision, Group Action (2nd ed., p. 271). Open
University Press.
Beers, P. J., van Mierlo, B. & Hoes, A. C. (2016). Toward
an Integrative Perspective on Social Learning
in System Innovation Initiatives. Ecology and
Society, 21(1): DOI: https://doi.org/10.5751/
ES-08148-210133
Carney, D. R., Cuddy, A. J. C. & Yap, A. J. (2010).
Power Posing Brief Nonverbal Displays Affect
Neuroendocrine Levels and Risk Tolerance.
Psychological Science, 21(10), 1363–1368. DOI:
https://doi.org/10.1177/0956797610383437
Chambers, C. D., Forstmann, B. & Pruszynski, J. A.
(2017). Registered reports at the European Journal
of Neuroscience: consolidating and extending peer-
reviewed study pre-registration. European Journal
of Neuroscience, 45(5), 627–628. DOI: https://doi.
org/10.1111/ejn.13519
CoMSES Computational Model Library. (n.d.). Retrieved
from: https://www.openabm.org/models.
Conte, R. & Paolucci, M. (2014). On agent-based modeling
and computational social science. Frontiers in
Psychology, 1–9. DOI: https://doi.org/10.3389/
fpsyg.2014.00668
Doise, W. (1982). L’Explication en psychologie sociale.
Paris: Presses universitaires de France.
Dörner, D. (1999). Bauplan für eine Seele. Hamburg: Rowohlt.
Epstein, J. M. & Axtell, R. L. (1996). Growing Artificial Societies: Social Science from the Bottom Up. Washington, D.C., USA: The Brookings Institution.
Friedman, M. (2016, September 27). Amy Cuddy Responds
to Claims That “Power Poses” May Not Work.
Retrieved October 18, 2016, from: http://www.
marieclaire.com/career-advice/news/a22830/
power-posing-researcher-questions-study/.
Funder, D. C., Levine, J. M., Mackie, D. M.,
Morf, C. C., Sansone, C., Vazire, S. & West, S. G.
(2014). Improving the Dependability of Research in
Personality and Social Psychology Recommendations
for Research and Educational Practice. Personality
and Social Psychology Review, 18(1), 3–12. DOI:
https://doi.org/10.1177/1088868313507536
Gelman, A., Fung, K. & Miller, M. (2016, January
19). The Power of the “Power Pose.” Slate.
Retrieved from: http://www.slate.com/articles/
health_and_science/science/2016/01/amy_
cuddy_s_power_pose_research_is_the_latest_
example_of_scientific_overreach.html.
Gray, K., Rand, D. G., Ert, E., Lewis, K., Hershman, S.
& Norton, M. I. (2014). The Emergence of
“Us and Them” in 80 Lines of Code Modeling
Group Genesis in Homogeneous Populations.
Psychological Science. DOI: https://doi.org/10.1177/
0956797614521816
Greenwald, A. G., McGhee, D. E. & Schwartz, J. L. K.
(1998). Measuring individual differences in implicit
cognition: The implicit association test. Journal of
Personality and Social Psychology, 74(6), 1464–1480.
DOI: https://doi.org/10.1037/0022-3514.74.6.1464
Grimm, V., Berger, U., Bastiansen, F., Ostrom, E.,
Ginot, V., Giske, J., et al. (2006). A standard
protocol for describing individual-based and
agent-based models. Ecological Modelling,
198(1–2), 115–126. DOI: https://doi.org/10.1016/j.
ecolmodel.2006.04.023
Grimm, V., Berger, U., De Angelis, D. L., Polhill, J.
G., Giske, J. & Railsback, S. F. (2010). The ODD
protocol: A review and first update. Ecological
Modelling, 221(23), 2760–2768. DOI: https://doi.
org/10.1016/j.ecolmodel.2010.08.019
Hughes, H. P. N., Clegg, C. W., Robinson, M. A. &
Crowder, R. M. (2012). Agent-based modelling
and simulation: The potential contribution to
organizational psychology. Journal of Occupational
and Organizational Psychology, 85(3), 487–502. DOI:
https://doi.org/10.1111/j.2044-8325.2012.02053.x
Jackson, J.C., Rand, D., Lewis, K., Norton, M.I. &
Gray, K. (2017). Agent-based Modeling: A Guide
for Social Psychologists. Social Psychological and
Personality Science. DOI: https://doi.org/10.1177/
1948550617691100
Johnson-Laird, P. N. (1983). Mental Models. Cambridge:
Cambridge University Press.
Jones, N. A., Ross, H., Lynam, T., Perez, P. & Leitch, A.
(2011). Mental Models: An Interdisciplinary Syn-
thesis of Theory and Methods. Ecology and Society,
16(1), 46. DOI: https://doi.org/10.5751/ES-03802-
160146
Klein, R. A., Ratliff, K. A., Vianello, M., Adams R. B., Jr.,
Bahník, Š., Bernstein, M. J., Nosek, B. A., et al.
(2014). Investigating variation in replicability: A
“many labs” replication project. Social Psychology,
45(3), 142–152. DOI: https://doi.org/10.1027/
1864-9335/a000178
Klein, S. B. (2014). What can recent replication failures
tell us about the theoretical commitments of
psychology? Theory & Psychology, 24(3), 326–338.
DOI: https://doi.org/10.1177/0959354314529616
Kolkman, M. J. (2005). Controversies in water manage-
ment: Frames and mental models. Environmental
Impact Assessment Review, 27(7), 685–706. DOI:
https://doi.org/10.1016/j.eiar.2007.05.005
Luhmann, C. C. & Rajaram, S. (2015). Memory Trans-
mission in Small Groups and Large Networks: An
Agent-Based Model. Psychological Science. DOI:
https://doi.org/10.1177/0956797615605798
Müller, B., Bohn, F., Dreßler, G., Groeneveld, J.,
Klassert, C., Martin, R., et al. (2013). Describing
human decisions in agent-based models – ODD+D,
an extension of the ODD protocol. Environmental
Modelling & Software, 48(C), 37–48. DOI: https://
doi.org/10.1016/j.envsoft.2013.06.003
Muro, M. & Jeffrey, P. (2008). A critical review of
the theory and application of social learning
in participatory natural resource management
processes. Journal of Environmental Planning and
Management, 51(3), 325–344. DOI: https://doi.
org/10.1080/09640560801977190
Norman, D. A. (1983). Some observations on mental
models. In: Gentner, D. & Stevens, A. (Eds.), Mental
Models, 7–14. Hillsdale: Lawrence Erlbaum.
North, M. J., Collier, N. T., Ozik, J., Tatara, E. R., Macal,
C. M., Bragen, M. & Sydelko, P. (2013). Complex
adaptive systems modeling with Repast Simphony.
Complex Adaptive Systems Modeling, 1, 3. DOI:
https://doi.org/10.1186/2194-3206-1-3
Nosek, B. A. & Lakens, D. (2014). Registered reports:
A method to increase the credibility of published
results. Social Psychology, 45(3), 137–141. DOI:
https://doi.org/10.1027/1864-9335/a000192
Nosek, B. A., Spies, J. R. & Motyl, M. (2012). Scientific
Utopia: II. Restructuring Incentives and Practices to
Promote Truth Over Publishability. Perspectives on
Psychological Science, 7(6), 615–631. DOI: https://
doi.org/10.1177/1745691612459058
Open Science Collaboration. (2015). Estimating the
reproducibility of psychological science. Science,
349(6251), aac4716. DOI: https://doi.org/10.1126/
science.aac4716
Pashler, H. & Wagenmakers, E.-J. (2012). Editors’
Introduction to the Special Section on
Replicability in Psychological Science: A Crisis
of Confidence? Perspectives on Psychological
Science, 7(6), 528–530. DOI: https://doi.
org/10.1177/1745691612465253
Python [Computer Software]. (n.d.). Retrieved from:
http://python.org.
R Development Core Team. (2008). R: A language
and environment for statistical computing.
R Foundation for Statistical Computing, Vienna,
Austria. Retrieved from: http://www.R-project.org.
Richardson, M., Dale, R. & Marsh, K. (2014). Complex
Dynamical Systems in Social and Personality Psy-
chology. In: Reis, H. & Judd, C. (Eds.), Handbook of
Research Methods in Social and Personality Psychology.
Cambridge: Cambridge University Press. DOI:
https://doi.org/10.1017/CBO9780511996481.015
Salamon, T. (2011). Design of Agent-Based Models:
Developing Computer Simulations for a Better
Understanding of Social Processes. Repin, Czech
Republic: Bruckner Publishing.
Scholz, G. (2016). How participatory methods facilitate
social learning in natural resource management:
An exploration of group interaction using
interdisciplinary syntheses and agent-based
modeling (Doctoral dissertation). Retrieved from:
Universität Osnabrück Repositorium (Accession
Identifier: urn:nbn:de:gbv:700-2016010713775).
Scholz, G., Pahl-Wostl, C. & Dewulf, A. (2014). An agent-
based model of consensus building. In: Miguel,
Amblard, Barceló & Madella (Eds.), Advances in
Computational Social Science and Social Simulation.
Barcelona: Autònoma.
Simmons, J. P., Nelson, L. D. & Simonsohn, U.
(2011). False-Positive Psychology Undisclosed
Flexibility in Data Collection and Analysis Allows
Presenting Anything as Significant. Psychological
Science, 22(11), 1359–1366. DOI: https://doi.
org/10.1177/0956797611417632
Smaldino, P. E. (2016). Models Are Stupid, and We
Need More of Them. Draft of chapter to appear
in: Vallacher, R. R., Nowak, A. & Read, S. J. (Eds.),
Computational Models in Social Psychology.
Forthcoming in 2016 from Psychology Press.
Retrieved from: https://www.researchgate.net/
publication/289540152_Models_Are_Stupid_and_
We_Need_More_of_Them.
Smaldino, P. E., Calanchini, J. & Pickett, C. L. (2015).
Theory development with agent-based models.
Organizational Psychology Review, 5(4), 300–317.
DOI: https://doi.org/10.1177/2041386614546944
Smaldino, P. E. & McElreath, R. (2016). The natural
selection of bad science. Royal Society Open Science,
3(9), 160384. DOI: https://doi.org/10.1098/rsos.
160384
Smith, E. R. & Beasley, A. (2015). Agent-Based Modeling.
In: Gawronski, B. & Bodenhausen, G. V. (Eds.), Theory
and Explanation in Social Psychology, 390–407. New
York, NY: Guilford Press.
Smith, E. R. & Conrey, F. R. (2007). Agent-Based
Modeling: A New Approach for Theory Building
in Social Psychology. Personality and Social
Psychology Review, 11(1), 87–104. DOI: https://doi.
org/10.1177/1088868306294789
Spellman, B. A. (2015). A Short (Personal) Future History
of Revolution 2.0. Perspectives on Psychological
Science, 10(6), 886–899. DOI: https://doi.org/
10.1177/1745691615609918
Świątkowski, W. & Dompnier, B. (2017). Replicability
Crisis in Social Psychology: Looking at the Past to
Find New Pathways for the Future. International
Review of Social Psychology, 30(1). DOI: https://doi.
org/10.5334/irsp.66
Thorndike, E. L. (1920). A constant error in psychological
ratings. Journal of Applied Psychology, 4(1), 25–29.
DOI: https://doi.org/10.1037/h0071663
Van Rooy, D., Wood, I. & Tran, E. (2016). Modelling
the Emergence of Shared Attitudes from Group
Dynamics Using an Agent-Based Model of Social
Comparison Theory. Systems Research and
Behavioral Science, 33(1), 188–204. DOI: https://
doi.org/10.1002/sres.2321
van’t Veer, A. E. & Giner-Sorolla, R. (2016). Pre-
registration in social psychology—A discussion
and suggested template. Journal of Experimental
Social Psychology, 67, 2–12. DOI: https://doi.
org/10.1016/j.jesp.2016.03.004
Waldherr, A. & Wijermans, N. (2013). Communicating
Social Simulation Models to Sceptical Minds. Journal
of Artificial Societies and Social Simulation, 16(4), 13.
DOI: https://doi.org/10.18564/jasss.2247
Wilensky, U. & Rand, W. (2015). An Introduction to
Agent-Based Modeling: Modeling Natural, Social
and Engineered Complex Systems with NetLogo.
Cambridge, Massachusetts, USA: MIT Press.
Wooldridge, M. (1999). Intelligent Agents. In: Weiss,
G. (Ed.), Multiagent Systems: A Modern Approach
to Distributed Artificial Intelligence, 27–78.
Cambridge, MA: MIT Press.
Wooldridge, M. & Jennings, N. R. (1995). Intelligent
agents: theory and practice. The Knowledge
Engineering Review, 10(2), 115–152. DOI: https://
doi.org/10.1017/S0269888900008122
Published: 03 July 2017
Copyright: © 2017 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution
4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the
original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.
International Review of Social Psychology
is a peer-reviewed open access journal published
by Ubiquity Press.